
Three times artificial intelligence has scared scientists – from creating chemical weapons to claiming it has feelings

THE ARTIFICIAL INTELLIGENCE revolution is just beginning but there have already been many troubling developments.

AI programs can be turned to humans’ worst instincts and most destructive goals, from designing weapons to unsettling their creators with an apparent lack of morality.

Visionaries like Elon Musk believe runaway AI could lead to human extinction. Credit: Getty Images

What is Artificial Intelligence?

Artificial intelligence is a catch-all term for computer programs designed to simulate or imitate human thought processes.

For example, an AI computer designed to play chess is programmed with a simple goal: to win the game.

As the game progresses, the AI will model millions of potential outcomes of a given move and choose the one that gives the computer the best chance of winning.
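This kind of look-ahead is essentially game-tree search. As a rough illustration, here is a minimal minimax sketch in Python over the toy game of Nim rather than chess; the game, function names, and scoring are illustrative assumptions, not the internals of any real chess engine.

```python
# Minimal minimax sketch over the toy game of Nim: players alternately take
# 1-3 sticks, and whoever takes the last stick wins. Illustrative only --
# real chess engines search far deeper and add pruning and evaluation heuristics.

def minimax(sticks, maximizing):
    """Return the best achievable score (+1 win, -1 loss) for the maximizing player."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else 1
    scores = [minimax(sticks - take, not maximizing)
              for take in (1, 2, 3) if take <= sticks]
    # Each side assumes the opponent also plays its best possible move.
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    """Model every legal move and pick the one with the best modeled outcome."""
    moves = [t for t in (1, 2, 3) if t <= sticks]
    return max(moves, key=lambda t: minimax(sticks - t, maximizing=False))

print(best_move(10))  # prints 2: taking 2 sticks leaves the opponent a losing position
```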


A skilled human player will act similarly, analyzing moves and their consequences, but without the perfect recall, speed, or rigidity of a computer.

AI can be applied to many fields and technologies.

Self-driving cars aim to reach their destination while responding to stimuli such as signage, pedestrians, and road conditions along the way, just as a human driver would.

AI programs have also taken unexpected turns and stunned researchers with their dangerous tendencies or applications.

AI invents new chemical weapons

Artificial intelligence was used to “easily” invent 40,000 possible new chemical weapons in just six hours.

Scientists presenting at an international security conference revealed that an AI model had invented chemical weapons similar to one of the most dangerous nerve agents of all time, called VX.

VX is a tasteless and odorless nerve agent and even the smallest drop can cause a human to sweat and twitch.

“The way VX is deadly is that it actually stops your diaphragm, your lung muscles, from being able to move, so your lungs become paralyzed,” Fabio Urbina, the paper’s lead author, told The Verge.

“The biggest thing that jumped out at first glance was that many of the compounds generated were supposed to be actually more toxic than VX,” Urbina continued.

The dataset that powered the AI model is freely available to the public, which means a threat actor with access to a comparable AI model could plug in the open-source data and use it to create an arsenal of weapons.

“All it takes is some coding knowledge to turn a good AI into a chemical weapons machine.”

AI pretends it has feelings

A Google engineer named Blake Lemoine made widely publicized claims that the company’s LaMDA (Language Model for Dialogue Applications) bot is sentient and has feelings.

“If I didn’t know exactly what it was, which is this computer program that we built recently, I would think it was a 7 or 8-year-old kid that happens to know physics,” Lemoine told The Washington Post.

Google pushed back against his claims.

Brian Gabriel, a Google spokesperson, said in a statement that Lemoine’s concerns had been investigated and, in accordance with Google’s AI principles, “the evidence does not support his claims.”

“He was told there was no evidence that LaMDA was sentient (and lots of evidence against it).

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”

Lemoine was put on administrative leave and then fired.

Cannibal AI

Researcher Mike Sellers was developing a social AI program for the Defense Advanced Research Projects Agency in the early 2000s.

“For one simulation, we had two agents, naturally named Adam and Eve. They started out knowing how to do things, but not much else.

“They knew how to eat for example, but not what to eat,” Sellers explained in a blog post.

The developers placed an apple tree inside the simulation, and the AI agents would receive a reward for eating apples to simulate the feeling of satisfying hunger.

If they ate the bark of the tree or the house inside the simulation, the reward would not be triggered.
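Sellers’ actual DARPA code is not public, but the setup described above amounts to a hand-written reward function paired with simple associative learning. Here is a minimal sketch of just the reward logic in Python, with every name (`Agent`, `REWARDS`, the 0.5 learning rate) an invented assumption for illustration:

```python
# Toy sketch of the reward logic described above: eating an apple pays a
# reward (simulating satisfied hunger); eating bark or the house pays nothing.
# All names and numbers are illustrative assumptions, not the original code.

REWARDS = {"apple": 1.0, "bark": 0.0, "house": 0.0}

class Agent:
    def __init__(self, name):
        self.name = name
        self.values = {}  # thing -> learned value estimate

    def eat(self, thing):
        reward = REWARDS.get(thing, 0.0)
        # Crude associative update: nudge the value estimate toward the reward,
        # so repeatedly rewarded things come to "look like food" to the agent.
        old = self.values.get(thing, 0.0)
        self.values[thing] = old + 0.5 * (reward - old)
        return reward

adam = Agent("Adam")
for _ in range(5):
    adam.eat("apple")
adam.eat("bark")
print(adam.values)  # apple's value climbs toward 1.0; bark stays at 0.0
```

Sellers’ account suggests the agents also spread that credit to whatever they perceived alongside the reward, which is how Stan ended up looking like food.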

A third AI agent, named Stan, was placed inside the simulation.

Stan was present while Adam and Eve ate the apples, and they began to associate Stan with eating apples and satisfying hunger.

“Adam and Eve finished the apples on the tree and were still hungry. They looked around to assess other potential targets. Lo and behold, to their brains, Stan looked like food.

“So they each took a bite of Stan.”


The AI revolution has begun to take shape in our world – artificially intelligent bots will continue to make life easier, replace human workers, and become more capable and self-sufficient.

But there have been several horrifying examples of AI programs doing the unexpected, lending legitimacy to the growing fear of AI.