Is artificial intelligence really “intelligent”?


Sophia, the world’s first social android, developed in Hong Kong in 2015, can reproduce 62 types of human facial expression. Keystone / Ritchie B. Tongo

Computers are being given more and more opportunities to make important decisions on our behalf, but should they be? Researchers at the Swiss research institute Idiap argue that much of what is called artificial intelligence is in fact an illusion.

This content was published on February 26, 2022 - 08:30

“Can machines think?” That is the question posed at the start of the most famous paper by British mathematician Alan Turing. Published in 1950, it laid the foundations for the concept and definition of artificial intelligence (AI). The “imitation game” Turing devised to answer the question is still used today to assess the intelligence of machines.

The game, later known as the Turing test, involves two players, A (a man) and B (a woman), and an interrogator C (of either gender). Players A and B are hidden from the interrogator, who exchanges written questions and answers with them and tries to work out which is the man and which is the woman. Player A answers so as to mislead the interrogator, while player B answers cooperatively to help the interrogator get it right.

Now suppose player A is replaced by a computer. If the interrogator cannot tell the computer from the human, then the computer has shown cognitive abilities comparable to a human’s and must be recognized as an intelligent being. That, at least, is how Turing described it.
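
To make the setup concrete, here is a minimal sketch in Python of the imitation-game loop described above. The respondents, the canned answers and the interrogator’s guessing rule are purely illustrative placeholders invented for this example, not part of Turing’s paper.

```python
import random

# A purely illustrative imitation game: the interrogator exchanges written
# questions with two hidden respondents and must guess which one is the machine.

def human_respondent(question):
    # Stand-in for player B, who answers cooperatively.
    return f"Honestly? {question.lower()} is something I think about a lot."

def machine_respondent(question):
    # Stand-in for player A (here a computer) trying to pass as human.
    canned = ["Good question!", "I would say it depends.", "Why do you ask?"]
    return random.choice(canned)

def interrogator_guess(answer_a, answer_b):
    # Toy heuristic: guess that the shorter, more generic answer is the machine.
    return "A" if len(answer_a) < len(answer_b) else "B"

def play_round(question):
    a = machine_respondent(question)
    b = human_respondent(question)
    return interrogator_guess(a, b) == "A"  # True if the machine was caught

if __name__ == "__main__":
    questions = ["What is love", "Describe a childhood memory", "Do you dream"]
    caught = sum(play_round(q) for q in questions)
    print(f"Machine identified in {caught} of {len(questions)} rounds")
```

In this toy version the machine is caught whenever its answer looks too generic; a machine would “pass” only if the interrogator could do no better than chance.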

And now, more than 70 years later, the results are striking. “There is currently no system that can truly be called artificial intelligence, which means that no AI has passed the Turing test,” says Hervé Bourlard, director of Idiap, the Swiss research institute specializing in artificial and cognitive intelligence.

Neither artificial nor intelligent

The term artificial intelligence (AI) had already fallen out of favor by the 1970s, when it came to be seen as overblown and even ridiculous. In the 1990s, however, it came back into fashion. Bourlard, a professor of electrical engineering, says the reason was “advertising, marketing and business”. “But in reality, nothing had progressed except the capabilities of the mathematical models.”

He remains skeptical of the term AI itself and of how it is used today. He argues that there is no “artificial intelligence” because no system reflects human intelligence to any degree: no AI can do what even a baby a few months old can.

Take, for example, the act of picking up a glass of water from a table. A baby understands perfectly well that if the glass is turned upside down, it will end up empty. “That’s why babies love turning things upside down. No machine in the world can grasp that difference,” he explains.

What this example illustrates is the distinctly human capacity for common sense, something that machines, today or tomorrow, will not be able to imitate.


“There is no intelligence in artificial intelligence. Calling it AI is a mistake; we should be talking about machine learning instead,” says Hervé Bourlard, director of the Idiap Research Institute, pictured here on stage. Idiap

Intelligence is in the data

Nevertheless, AI has permeated a wide range of industries and is increasingly involved in decision-making: recruitment, insurance and bank loans are just a few examples.

Machines analyze our behavior on the internet and learn who we are and what we like. Recommendation engines pick out the most relevant information and suggest films to watch, news to read and, on social networks, clothes we might like.
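
As a rough illustration of the kind of logic a recommendation engine applies, here is a minimal content-based sketch. The user profile, catalogue items, tags and similarity measure are all invented for the example and do not come from any real platform.

```python
from math import sqrt

# Toy content-based recommender: score items by how closely their tags
# match what the user has already watched or clicked on.

user_history = {"thriller": 3, "sci-fi": 5, "romance": 0, "documentary": 1}

catalogue = {
    "Deep Space Drama": {"sci-fi": 4, "thriller": 2, "romance": 1, "documentary": 0},
    "Love in Lausanne": {"sci-fi": 0, "thriller": 0, "romance": 5, "documentary": 0},
    "Inside the Lab":   {"sci-fi": 1, "thriller": 0, "romance": 0, "documentary": 5},
}

def cosine(u, v):
    # Cosine similarity between two sparse tag-count vectors.
    keys = u.keys() | v.keys()
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Rank the catalogue by similarity to the user's viewing history.
ranked = sorted(catalogue.items(), key=lambda kv: cosine(user_history, kv[1]), reverse=True)
for title, tags in ranked:
    print(title, round(cosine(user_history, tags), 2))
```

Real systems work at a vastly larger scale and use learned models rather than hand-written tags, but the basic idea is the same: recommend what resembles the data the user has already produced.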

But that does not mean AI can be considered intelligent, says Bourlard, who became director of Idiap in 1996. He argues that we should be talking about machine learning rather than AI, and points to three factors behind its improved capabilities: computing power, mathematical models, and huge databases that can be accessed from anywhere.

Improved computing power and the digitization of information have made it possible to refine mathematical models considerably, while the internet, with its vast stores of data, has boosted AI performance further.

▼ Idiap video showing how AI works and what it can do

Through numerous demonstrations, the institute tries to show the general public how important data is to AI systems. The demonstrations go on display from April 1 at the Musée de la main (Museum of the Hand) in Lausanne.

One demonstration shows, for example, how a smartphone camera uses AI technology to improve the quality of a photo. Visitors can watch blurry photos become much sharper, and see how, for the same photo, image quality deteriorates when the AI learns from different data.

The image-enhancement process is not straightforward. For a machine to learn from data, each piece of good-quality data must first be tagged with the relevant information, a task known as annotation. This is done by humans, if not entirely by hand.
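
A hypothetical sketch of what such annotation might look like in practice: each degraded photo is paired with its good-quality reference and with human-supplied tags. The file names and fields below are invented purely to illustrate the idea, not taken from Idiap’s demonstrations.

```python
import json

# Illustrative annotation records for an image-enhancement dataset:
# each blurry input is paired with a clean reference image plus tags
# that a human annotator has attached.

annotations = [
    {"input": "photo_0001_blurry.jpg", "target": "photo_0001_sharp.jpg",
     "tags": ["indoor", "low_light", "faces"]},
    {"input": "photo_0002_blurry.jpg", "target": "photo_0002_sharp.jpg",
     "tags": ["outdoor", "daylight", "landscape"]},
]

# Training code would read these pairs and learn a mapping from
# degraded images to their clean counterparts.
with open("annotations.json", "w") as f:
    json.dump(annotations, f, indent=2)

print(f"Wrote {len(annotations)} annotated pairs")
```

The point of the example is that the “intelligence” of the resulting system depends entirely on how carefully humans have prepared and labelled these pairs.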

“We’re dealing with systems that are fed with data and run on it, not fully fledged forms of life,” says Michael Liebling, head of the institute’s Computational Bioimaging Group.

That does not mean AI is entirely safe. A machine’s limits are set by the limits of its data, and that, according to Liebling, is what we should keep an eye on when weighing the real danger.

“Is the real danger a science-fiction machine taking over the world, or is it the way data is distributed and annotated? To me, the threat lies in how data is managed rather than in the machine itself.”

A need for greater transparency

Tech giants such as the American computing behemoths Google and Facebook have fully grasped the power of models fed with large amounts of data and have built their businesses around them. Together with the automation of part of human work, this is exactly what most concerns the scientific community.

Former Google researcher Timnit Gebru was fired after criticizing the huge and opaque language models that underpin the company’s search engine, the most widely used in the world.

>> Google’s conflict over AI ethics, highlighted by the firing of a researcher

The limitation of machine-learning models is that, at least for now, they do not have the same capacity for thought that we do. Machines can find an answer, but they cannot explain why they reached that conclusion.

“We need to make the machines’ reasoning transparent so that it can be explained to people in an easy-to-understand way,” says André Freitas, who leads a research group on reasoning and explainable AI.

His research group is building AI models that can explain their own reasoning.
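
As an illustration of what “explaining a conclusion” can mean in practice, here is a minimal sketch of a transparent scoring model that reports how much each input contributed to its decision. It is not Idiap’s method; the feature names, weights and threshold are invented for the example.

```python
# A deliberately simple, transparent model: a weighted sum over named
# features. Unlike a black box, it can report each feature's contribution.

WEIGHTS = {"income": 0.5, "existing_debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def decide(applicant):
    # Compute the contribution of each feature, then the overall decision.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "reject"
    return decision, score, contributions

applicant = {"income": 3.2, "existing_debt": 1.5, "years_employed": 2.0}
decision, score, contributions = decide(applicant)

print(f"Decision: {decision} (score {score:.2f})")
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name}: {value:+.2f}")
```

A deep neural network making the same kind of decision would give no such breakdown by default, which is precisely the gap explainable-AI research tries to close.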

The goal is to break down complex algorithms and technical jargon to make them easier to understand. As AI-based technology permeates our lives, his advice is: “When you encounter an AI system, ask whether it can explain itself to you.”

There is no intelligence there

AI is often described as the technology that drives modern technology from behind the scenes, so when we hear “AI”, expectations naturally run high. Computers using neural-network models inspired by the human brain now play an active role in areas that were once unimaginable.

“It has made us believe that AI is as smart as we are and will solve all our problems,” says Lonneke van der Plas, head of Idiap’s Computation, Cognition and Language group.

She cites, for example, the continuing evolution of language tools such as virtual assistants and machine translation. “Their performance leaves us amazed and speechless, and if a computer can handle something as complex as language, we assume there must be intelligence behind it.”

The models underlying language tools can learn patterns from huge amounts of text and imitate us on that basis. But compare the abilities of a voice-activated virtual assistant with those of an average child talking about, say, a paper plane: the machine needs far more data than the child to reach the same level, and it still struggles to acquire everyday knowledge such as common sense.
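
A minimal sketch of the pattern-learning idea behind such language tools, reduced to counting which word follows which in a tiny text sample and then imitating it. Real systems use enormously larger models and data; the sample sentence below is invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Learn word-to-next-word patterns (bigrams) from a small text sample,
# then imitate the text by sampling from those learned patterns.

text = ("the child folds a paper plane and the paper plane flies "
        "and the child laughs and folds another paper plane")

words = text.split()
patterns = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    patterns[current][nxt] += 1

def imitate(start, length=8):
    out = [start]
    for _ in range(length):
        followers = patterns.get(out[-1])
        if not followers:
            break
        # Pick the next word in proportion to how often it followed before.
        choices, counts = zip(*followers.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

print(imitate("the"))
```

The output sounds vaguely like the input because it reproduces its statistics, not because the program understands paper planes, which is the point van der Plas is making.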

“We are easily fooled, because speech that sounds human does not mean there is human intelligence behind it, even if machines can mimic it,” she says.

In the end, as Turing said more than 70 years ago, there is little point in dressing up a “thinking machine” to look appealingly human. You cannot judge a book’s content by its cover alone.

(Translated from English by Hiroko Sato)
