THE GLOBAL KNOWLEDGE WORKER

Curated current state of knowledge from across the world – knowledge, tools and technologies, best practices, thought leadership, and affordable training and certifications for professional management practitioners, teachers, students, advisors, consultants, bankers and all those who value the need to be knowledgeable in the ever-changing landscape.

The AI controversy

Letter signed by Elon Musk demanding AI research pause sparks controversy

The statement has been revealed to have false signatures, and researchers have condemned its use of their work

A letter co-signed by Elon Musk and thousands of others demanding a pause in artificial intelligence research has created a firestorm, after the researchers cited in the letter condemned its use of their work, some signatories were revealed to be fake, and others backed out of their support.

On 22 March more than 1,800 signatories – including Musk, the cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak – called for a six-month pause on the development of systems “more powerful” than GPT-4. Engineers from Amazon, DeepMind, Google, Meta and Microsoft also lent their support.

Developed by OpenAI, a company co-founded by Musk and now backed by Microsoft, GPT-4 has developed the ability to hold human-like conversation, compose songs and summarise lengthy documents. Such AI systems with “human-competitive intelligence” pose profound risks to humanity, the letter claimed.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter said.

The Future of Life Institute, the think tank that coordinated the effort, cited 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind. But four experts cited in the letter have expressed concern that their research was used to make such claims.

https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt


What Exactly Are the Dangers Posed by A.I.?

A recent letter calling for a moratorium on A.I. development blends real threats with speculation. But concern is growing among experts.

In late March, more than 1,000 technology leaders, researchers and other pundits working in and around artificial intelligence signed an open letter warning that A.I. technologies present “profound risks to society and humanity.”

The group, which included Elon Musk, Tesla’s chief executive and the owner of Twitter, urged A.I. labs to halt development of their most powerful systems for six months so that they could better understand the dangers behind the technology.

“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

The letter, which now has over 27,000 signatures, was brief. Its language was broad. And some of the names behind the letter seemed to have a conflicting relationship with A.I. Mr. Musk, for example, is building his own A.I. start-up, and he is one of the primary donors to the organization that wrote the letter.

But the letter represented a growing concern among A.I. experts that the latest systems, most notably GPT-4, the technology introduced by the San Francisco start-up OpenAI, could cause harm to society. They believe future systems will be even more dangerous.

Some of the risks have already arrived. Others will not arrive for months or years. Still others are purely hypothetical.

“Our ability to understand what could go wrong with very powerful A.I. systems is very weak,” said Yoshua Bengio, a professor and A.I. researcher at the University of Montreal. “So we need to be very careful.”

https://www.nytimes.com/2023/05/01/technology/ai-problems-danger-chatgpt.html


The debate over whether AI will destroy us is dividing Silicon Valley

Prominent tech leaders are warning that artificial intelligence could take over. Other researchers and executives say that’s science fiction.

At a congressional hearing this week, OpenAI CEO Sam Altman delivered a stark reminder of the dangers of the technology his company has helped push out to the public.

He warned of potential disinformation campaigns and manipulation that could be caused by technologies like the company’s ChatGPT chatbot, and called for regulation.

AI could “cause significant harm to the world,” he said.

Altman’s testimony comes as a debate over whether artificial intelligence could overrun the world is moving from science fiction and into the mainstream, dividing Silicon Valley and the very people who are working to push the tech out to the public.

Formerly fringe beliefs that machines could suddenly surpass human-level intelligence and decide to destroy mankind are gaining traction. And some of the most well-respected scientists in the field are speeding up their own timelines for when they think computers could learn to outthink humans and become manipulative.

But many researchers and engineers say concerns about killer AIs that evoke Skynet in the Terminator movies aren’t rooted in good science. Instead, they argue, such fears distract from the very real problems the tech is already causing, including the issues Altman described in his testimony: it is creating copyright chaos, supercharging concerns around digital privacy and surveillance, could make it easier for hackers to break cyber defenses, and is allowing governments to deploy deadly weapons that can kill without human control.

https://www.washingtonpost.com/technology/2023/05/20/ai-existential-risk-debate/


One of the ‘godfathers’ of AI says that today’s systems don’t pose an existential risk, but warned that things could get ‘catastrophic’

There’s a chance that AI development could get “catastrophic,” Yoshua Bengio told The New York Times.

“Today’s systems are not anywhere close to posing an existential risk,” but they could in the future, he said.

The future of artificial intelligence remains murky, but there’s a chance things could get “catastrophic,” an expert in the field told The New York Times.

“Today’s systems are not anywhere close to posing an existential risk,” Yoshua Bengio, a professor at the Université de Montréal, told the publication. The so-called AI “godfather” was part of the three-person team that won the Turing Award in 2018 for breakthroughs in machine learning.

“But in one, two, five years? There is too much uncertainty,” Bengio continued. “That is the issue. We are not sure this won’t pass some point where things get catastrophic.”

Anthony Aguirre, a cosmologist at the University of California, Santa Cruz, told The Times that as AI became more autonomous it could “usurp decision making and thinking from current humans and human-run institutions.”

“At some point, it would become clear that the big machine that is running society and the economy is not really under human control, nor can it be turned off, any more than the S&P 500 could be shut down,” he continued.

https://www.businessinsider.in/tech/news/one-of-the-godfathers-of-ai-says-that-todays-systems-dont-pose-an-existential-risk-but-warned-that-things-could-get-catastrophic/articleshow/100939147.cms


The current legal cases against generative AI are just the beginning

AI that can generate art, text and more is in for a reckoning. As generative AI enters the mainstream, each new day brings a new lawsuit.

Microsoft, GitHub and OpenAI are currently being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating AI system trained on billions of lines of public code, to regurgitate licensed code snippets without providing credit.

Two companies behind popular AI art tools, Midjourney and Stability AI, are in the crosshairs of a legal case that alleges they infringed on the rights of millions of artists by training their tools on web-scraped images.

And just last week, stock image supplier Getty Images took Stability AI to court for reportedly using millions of images from its site without permission to train Stable Diffusion, an art-generating AI.

At issue, mainly, is generative AI’s tendency to replicate images, text and more — including copyrighted content — from the data that was used to train it. In a recent example, an AI tool used by CNET to write explanatory articles was found to have plagiarized articles written by humans — articles presumably swept up in its training dataset. Meanwhile, an academic study published in December found that image-generating AI models like DALL-E 2 and Stable Diffusion can and do replicate aspects of images from their training data.

The generative AI space remains healthy — it raised $1.3 billion in venture funding through November 2022, according to PitchBook, up 15% from the year prior. But the legal questions are beginning to affect business.

Some image-hosting platforms have banned AI-generated content for fear of legal blowback. And several legal experts have cautioned that generative AI tools could put companies at risk if they were to unwittingly incorporate copyrighted content generated by the tools into any of the products they sell.

“Unfortunately, I expect a flood of litigation for almost all generative AI products,” Heather Meeker, a legal expert on open source software licensing and a general partner at OSS Capital, told TechCrunch via email. “The copyright law needs to be clarified.”

https://techcrunch.com/2023/01/27/the-current-legal-cases-against-generative-ai-are-just-the-beginning/


‘AI Pause’ Open Letter Stokes Fear and Controversy

IEEE signatories say they worry about ultrasmart, amoral systems without guidance

The recent call for a six-month “AI pause”—in the form of an online letter demanding a temporary artificial intelligence moratorium—has elicited concern among IEEE members and the larger technology world. The Institute contacted some of the members who signed the open letter, which was published online on 29 March. The signatories expressed a range of fears and apprehensions, including concerns about the rampant growth of AI large language models (LLMs) as well as unchecked AI media hype.

The open letter, titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

“AI can be manipulated by a programmer to achieve objectives contrary to moral, ethical, and political standards of a healthy society,” says IEEE Fellow Duncan Steel, a professor of electrical engineering, computer science, and physics at the University of Michigan, in Ann Arbor. “I would like to see an unbiased group without personal or commercial agendas to create a set of standards that has to be followed by all users and providers of AI.”

https://spectrum.ieee.org/ai-pause-letter-stokes-fear


Here are 3 big concerns surrounding AI – and how to deal with them

‘We can benefit from AI innovation while we are figuring out how to regulate the technology’

Artificial intelligence (AI) needs to be democratized to help more people understand it and embrace its potential;

We need to develop regulations for AI that are agile and adapt to this rapidly progressing technology;

A focus on “Trustworthy AI” offers a promising model for innovation and the governance of AI.

Artificial intelligence (AI) has gained widespread attention in recent years. AI is viewed as a strategic technology to lead us into the future. Yet, when interacting with academics, industry leaders and policy-makers alike, I have observed some growing concerns around the uncertainty of this technology.

In my observation, these concerns can be categorized into three perspectives:

Many people lack a full understanding of AI and therefore are more likely to view it as a nebulous cloud instead of a powerful driving force that can create a lot of value for society;

Some companies or individuals worry that they will fall behind as AI becomes more prevalent;

As is often the case with new technology, AI is increasingly used while policy and regulation lag behind.

https://www.weforum.org/agenda/2020/02/where-is-artificial-intelligence-going/


Controversy erupts over non-consensual AI mental health experiment [Updated]

Koko let 4,000 people get therapeutic help from GPT-3 without telling them first.

On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment to provide AI-written mental health counselling for 4,000 people without informing them first, Vice reports. Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counselling.

Koko is a nonprofit mental health platform that connects teens and adults who need mental health help to volunteers through messaging apps like Telegram and Discord.

On Discord, users sign in to the Koko Cares server and send direct messages to a Koko bot that asks several multiple-choice questions (e.g., “What’s the darkest thought you have about this?”). It then shares a person’s concerns—written as a few sentences of text—anonymously with someone else on the server who can reply anonymously with a short message of their own.

During the AI experiment—which applied to about 30,000 messages, according to Morris—volunteers providing assistance to others had the option to use a response automatically generated by OpenAI’s GPT-3 large language model instead of writing one themselves (GPT-3 is the technology behind the recently popular ChatGPT chatbot).
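For readers curious what this kind of human-in-the-loop setup can look like in practice, here is a minimal, hypothetical sketch of a volunteer-facing helper that asks a GPT-3-era model for a draft reply, which a human can then edit, send, or discard. The model name, prompt wording, and function names are assumptions for illustration only, not Koko’s actual implementation.

```python
# Hypothetical sketch of an AI-assisted drafting step in a peer-support flow.
# The model only proposes text; a human volunteer reviews it before anything is sent.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_supportive_reply(concern: str) -> str:
    """Ask a GPT-3-era completion model for a short, empathetic draft
    that a volunteer can edit, send, or discard."""
    prompt = (
        "A peer-support volunteer is replying to the anonymous concern below.\n"
        f"Concern: {concern}\n"
        "Write a brief, empathetic reply (2-3 sentences):"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3-era model; chosen here for illustration
        prompt=prompt,
        max_tokens=120,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

if __name__ == "__main__":
    draft = draft_supportive_reply("I feel like nothing I do matters.")
    print("Suggested draft (volunteer may edit, send, or discard):")
    print(draft)
```

The design point reflected in Morris’s description is that the model never responds to users directly: it only generates a suggestion that a human volunteer remains responsible for reviewing and sending.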

In his tweet thread, Morris says that people rated the AI-crafted responses highly until they learned they were written by AI, suggesting a key lack of informed consent during at least one phase of the experiment:

Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001). Response times went down 50%, to well under a minute. And yet… we pulled this from our platform pretty quickly. Why? Once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty.

In the introduction to the server, the admins write, “Koko connects you with real people who truly get you. Not therapists, not counsellors, just people like you.”

https://arstechnica.com/information-technology/2023/01/contoversy-erupts-over-non-consensual-ai-mental-health-experiment/


AI-generated arguments changed minds on controversial hot-button issues, according to study

Suddenly, the world is abuzz with chatter about chatbots. Artificially intelligent agents, like ChatGPT, have shown themselves to be remarkably adept at conversing in a very human-like fashion. Implications stretch from the classroom to Capitol Hill. ChatGPT, for instance, recently passed written exams at top business and law schools, among other feats both awe-inspiring and alarming.

Researchers at Stanford University’s Polarization and Social Change Lab and the Institute for Human-Centered Artificial Intelligence (HAI) wanted to probe the boundaries of AI’s political persuasiveness by testing its ability to sway real humans on some of the hottest social issues of the day—an assault weapon ban, the carbon tax, and paid parental leave, among others.

“AI fared quite well. Indeed, AI-generated persuasive appeals were as effective as ones written by humans in persuading human audiences on several political issues,” said Hui “Max” Bai, a postdoctoral researcher in the Polarization and Social Change Lab and first author of a new pre-print paper about the experiment.

https://phys.org/news/2023-03-ai-generated-arguments-minds-controversial-hot-button.html


Shifting AI controversies:

How do we get from the AI controversies we have to the controversies we need?

Executive summary

Which features of AI, in particular, have triggered controversy in English-language expert debates during the last 10 years? This report discusses insights gathered during a recent research workshop about AI controversies hosted by the ESRC-funded project Shaping AI. The workshop was dedicated to evaluating this project’s provisional research results, and the main findings are as follows:

AI controversies during 2012-2022 focused not only on the application of AI in society, such as the use of facial recognition in schools and by the police, but highlighted structural problems with general purpose AI, such as lack of transparency, misinformation, machine bias, data appropriation without consent, worker exploitation and the high environmental costs associated with the large models that define AI today.

During the last 10 years, participation in AI research controversies has been diverse but relatively narrow, with experts from industry, science and activism making notable contributions, but this relative diversity of perspectives appears to be under-utilized in recent media and public policy debates on AI in the UK.

AI research controversies in the relevant period varied in terms of who participated, the geographic scope of the issues addressed as well as their resolvability, but all controversies under investigation are marked by concern with the concentration of power over critical infrastructure in the tech industry.

https://warwick.ac.uk/fac/cross_fac/cim/research/shaping-21st-century-ai/shifting-ai-report/
