
Wed 15 Mar 2023 10:05 pm - Jerusalem Time

Chatbots inspire admiration and fear

San Francisco - (AFP) - The US startup OpenAI has developed a chatbot capable of answering a wide variety of questions, but its remarkable abilities have revived the debate over the risks associated with artificial intelligence technologies.


Conversations with ChatGPT, excerpts of which astonished users have posted on Twitter, show that the multi-talented bot can explain scientific concepts, write a scene for a play or draft a college essay.


"His response to the question 'What should we do when we see a person having a heart attack' was amazing in its clarity and accuracy," Claude de Lupe, director of the French company "Cylabs", which specializes in writing automated texts, told AFP.


"When we start asking very specific questions, ChatGBT+ may get the answer wrong," he added, but overall its performance remains "really impressive," with "a high level of linguistics overall."


The startup OpenAI, co-founded in San Francisco in 2015 by Elon Musk, who left the company in 2018, received $1 billion from Microsoft in 2019.


It is best known for its automated creation software: GPT-3 for text generation and DALL-E for image generation.
ChatGPT can ask its interlocutor for clarifications, and it has "fewer hallucinations" than GPT-3, which, despite its great capabilities, sometimes produces absurd results, according to Claude de Loupy.


"Years ago, chatbots had the vocabulary and memory of a fish. But they are getting better at interacting proportionately with a history of requests and answers," says researcher Sean McGregor, who manages a database of incidents related to artificial intelligence.


Like other programs that rely on deep learning, ChatGPT has one major weakness: its "lack of common sense," according to Claude de Loupy, who notes that the program cannot justify its choices, that is, explain why it assembled words in a particular way to produce a given answer.


However, conversational AI technologies are increasingly able to give the impression that they are really thinking.


Researchers at Meta, Facebook's parent company, recently developed an AI program called "Cicero," after the statesman of ancient Rome.


The software has proven its worth at the game Diplomacy, which requires negotiation skills.


The social media giant said in a statement that "if it does not speak like a real person, showing empathy, building relationships and talking knowledgeably about the game, it will not be able to form partnerships with other players."


In October, the startup Character.AI, founded by former Google engineers, launched an experimental chatbot that can take on any persona. Users create characters by specifying a few traits and can then "chat" with fake versions of Sherlock Holmes, Socrates or Donald Trump.


This level of sophistication is impressive, but it also worries many observers, who fear the technology could be misused to deceive people, for example by spreading false information or by running ever more convincing scams.


Asked by Agence France-Presse for its "opinion" on the matter, ChatGPT replied: "There are potential dangers in building highly sophisticated chatbots (...) Some people might believe they are interacting with a real person."


Companies are therefore putting safeguards in place to prevent abuses.


And "Open AI" confirms on its home page that the chatbot can make "incorrect information" or "give dangerous directions or non-neutral content."


ChatGPT also refuses to take sides. "OpenAI made it incredibly hard to get it to express opinions," says Sean McGregor.


The researcher asked the bot to write a poem about an ethical issue, and it replied: "I am just a machine, a tool at your disposal. I have no ability to judge or make decisions."


"It's interesting to see people asking whether AI systems should behave as users want them to or as designed by their developers," Sam Altman, president and co-founder of OpenAI, wrote in a tweet on Saturday.


He argued that the debate over what values these systems should be given "will be one of the most important society has ever had."
