
Don’t Bet with ChatGPT—Study Shows Language AIs Often Make Irrational Decisions
The past few years have seen an explosion of progress in large language model artificial intelligence systems that can do things like write poetry, conduct humanlike conversations and pass medical school exams. This progress has yielded models like ChatGPT that could have major social and economic ramifications ranging from job displacements and increased misinformation to massive productivity boosts.
Despite their impressive abilities, large language models don’t actually think. They make elementary mistakes and even fabricate information. Yet because they generate fluent language, people tend to respond to them as though they do think. This has led researchers to study the models’ “cognitive” abilities and biases, work that has grown in importance now that large language models are widely accessible.
Continued here:
nextgov.com