Artificial intelligence has the potential to "change how society works and functions," but it still can't match young children at basic problem-solving, according to new research out of the University of California, Berkeley.
In the journal Perspectives on Psychological Science, researchers say large language models like ChatGPT, the first artificial intelligence to produce "truly human-like content," fail to perform as well on basic problem-solving tasks as kids do.
The researchers tested children ages 3 to 7 with various problems, including how to draw a circle without a standard tool such as a compass, given the choice of a ruler, a teapot, or a stove.
About 85% of the time, the kids were right, picking the teapot, whose round bottom could be traced.
But when the researchers presented the same problems in written form to other large language models, such as OpenAI's GPT-3.5 Turbo, Anthropic's Claude, and Google's FLAN-T5, the AIs faltered.
GPT-4, for instance, came out on top among the models, choosing the right object just 76% of the time.
"To illustrate, if large language models existed in a world where the only things that could fly were birds, and you asked one to devise a flying machine, they would never come up with an airplane," researcher Alison Gopnik tells Big Think.