Modeling humans from AI

ChatGPT is being used to mimic the output of people — from cover letters to marketing copy to computer code. Now, some social scientists are exploring whether chatbots can mimic humans themselves, Axios managing editor Alison Snyder writes.

How it works: Large language models (LLMs) that power generative AI tools are trained on text from websites, books and other data. The models find statistical patterns in the relationships between words, which lets the AI systems respond to questions from users.

  • Social scientists use surveys, observations, behavioral tests and other tools in search of general patterns of human behavior.
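The "finding patterns in the relationships between words" described above can be illustrated with a toy sketch. This is not how a real LLM works internally; it is a minimal bigram model that simply counts which word tends to follow which, on a made-up corpus, to convey the basic idea of learning word relationships from text.

```python
from collections import Counter, defaultdict

# Toy corpus (made up for illustration).
corpus = "the model finds patterns the model predicts words".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen in training, if any.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "model" appears after "the" most often
```

Real LLMs replace these raw counts with billions of learned parameters, but the core task, predicting likely next words from observed text, is the same.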

Two recent papers look at how social scientists might use large language models to address questions about human decision-making, morality and a slew of other complex attributes at the heart of what it means to be human.

  • One possibility is using LLMs in place of human participants, researchers wrote last week in the journal Science.

Zoom in: In a separate article, researchers looked at just how humanlike ChatGPT’s judgments are.

  • When researchers gave ChatGPT 16 moral scenarios and then evaluated its responses on 464 other scenarios, they found the AI system's responses correlated 95% with human ones.

“If you can give it to GPT and get what humans give you, do you need to give it to humans anyway?” says Kurt Gray, a professor of psychology and neuroscience at UNC-Chapel Hill and co-author of the paper published last week in Trends in Cognitive Sciences.


