Decoding the Hype About AI

If you have been reading all the hype about the latest artificial intelligence chatbot, ChatGPT, you might be excused for thinking that the end of the world is nigh.

The clever AI chat program has captured the public imagination with its ability to generate poems and essays instantaneously, mimic different writing styles, and pass some law and business school exams.

Teachers are worried students will use it to cheat in class (New York City public schools have already banned it). Writers are worried it will take their jobs (BuzzFeed and CNET have already started using AI to create content). The Atlantic declared that it could “destabilize white-collar work.” Venture capitalist Paul Kedrosky called it a “pocket nuclear bomb” and chastised its makers for launching it on an unprepared society.

Even the CEO of the company that makes ChatGPT, Sam Altman, has been telling the media that the worst-case scenario for AI could mean “lights out for all of us.”

But others say the hype is overblown. Meta’s chief AI scientist, Yann LeCun, told reporters ChatGPT was “nothing revolutionary.” University of Washington computational linguistics professor Emily Bender warns that “the idea of an all-knowing computer program comes from science fiction and should stay there.”

So, how worried should we be? For an informed perspective, I turned to Princeton computer science professor Arvind Narayanan, who is currently co-writing a book on “AI snake oil.” In 2019, Narayanan gave a talk at MIT called “How to recognize AI snake oil” that laid out a taxonomy of AI from legitimate to dubious. To his surprise, his obscure academic talk went viral, and his slide deck was downloaded tens of thousands of times; his accompanying tweets were viewed more than two million times.

Narayanan then teamed up with one of his students, Sayash Kapoor, to expand the AI taxonomy into a book. Last year, the pair released a list of 18 common pitfalls committed by journalists covering AI. (Near the top of the list: illustrating AI articles with cute robot pictures. The reason: anthropomorphizing AI incorrectly implies that it has the potential to act as an agent in the real world.)

Narayanan is also a co-author of a textbook on fairness and machine learning and led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use personal information. He is a recipient of the White House’s Presidential Early Career Award for Scientists and Engineers.

Our conversation, edited for brevity and clarity, is below.

What Does AI Mean for Writers? I Have Thoughts.

AI writing is all the rage. If I had a nickel for every post I saw in my social feed this week about ChatGPT… I’d be wrapping all my holiday gifts in gold leaf tied with ribbons made from twenty-dollar bills.

I have a lot of thoughts about AI platforms and tools, and how they help/hurt writers, marketers, and other tender creative souls. I’m working on a new keynote about it now.*

*(Well, not now-now: Right now I’m writing to you. But you knew that.)

But it’s December. It’s my final letter of the year. (Say “year-end letter” fast enough and you’ll hear why I call it my “urine” letter to you.)

So I’m going to step back for a minute and leave you with a few of my early thoughts about AI & writing that have been knocking around my noggin.

AI is a tool.
