Research from the University of Cambridge calls on developers and policymakers to prioritize approaches to AI design that take greater account of children's needs.

The study, published in the journal Learning, Media and Technology, shows that children consider chatbots to be "lifelike, semi-human confidants", but that when the technology fails to respond to their unique needs and vulnerabilities, it can put them at risk.

This is evident from cases in which Alexa instructed a 10-year-old girl to touch a live electrical plug with a coin, and Snapchat's My AI gave adult researchers, posing as a 13-year-old girl, tips on how to lose her virginity to a 31-year-old.

In a separate reported interaction with a Bing chatbot, which was designed to be friendly to teens, the AI became aggressive and began gaslighting a user.

“Children are probably the most neglected stakeholders of AI,” said Dr Nomisha Kurian, an academic at the University of Cambridge.

She said that while creating a human-like chatbot could provide many benefits, "For a child, it's very hard to draw a hard, rational boundary between something that looks human and reality".

Kurian noted that a chatbot "may not be able to form proper emotional bonds" with a child.

Furthermore, she argued that this could be "confusing and upsetting for children, who may trust a chatbot the same way they would a friend".

To make AI "an incredible ally for children", Kurian said, it must be designed with children's needs in mind.

“The question is not about banning AI, but about how to make it safe,” she said.