The debate over sentient machines

We’ve hit the science-fiction moment in the debate over generative AI, where people are warning about the dangers of ChatGPT’s human-like conversational skills.

  • Why it matters: ChatGPT, which ate the internet so it can spit out answers to human questions, isn’t sentient — it’s not self-aware. But even the early, imperfect, restrained version of the tech shows how easy human-like conversations and ideas are to replicate — and abuse.

The backstory: Jim VandeHei and I have spent the past week reading everything we can get our hands on about the tech, and talking to experts who understand it best.

 Our biggest takeaway: This is the most important tech breakthrough since at least the iPhone — and perhaps the internet itself.

  • The ability of machines to devour billions of words written on the internet — then predict what we want to know, say and even think — is uncanny, thrilling and scary.
  • You’ve read about tech columnists prompting creepy, human-like conversations with Sydney, the code name of Microsoft’s new chat version of Bing.

Zoom out: Right now we’re getting only a small glimpse of the technology’s full power. Google, for instance, has been hesitant to unveil and unleash its full generative AI because of its awesome and potentially dangerous capabilities.

  • Even Microsoft and OpenAI are giving only some people limited access to a not-yet-fully-formed version of ChatGPT.

What’s out there: An app called Replika bills itself as the “World’s best AI friend – Need a friend? Create one now.” A 24/7 friend for just $5.83/month! (The app is now trying to rein in erotic roleplay.)

  • A host of paid AI image generators — including Midjourney and DALL·E 2 (which, like ChatGPT, is from OpenAI) — are now available.
  • Many more services are on the way.

 How it works: AI isn’t sentient, but it sure seems like it. Here’s why:

  • The tools have devoured lots and lots of what sentient beings have written — and therefore can mimic human emotions, Axios’ chief tech correspondent Ina Fried explains.
  • Generative AI essentially scans previous writing on the internet to predict the most likely next words — infinitely.

The best article I’ve seen on the mechanics of ChatGPT is by Stephen Wolfram, who has studied neural nets for 43 years.

  • The gist is that it’s just adding one word at a time: “[What] ChatGPT is always fundamentally trying to do is to produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.’” (Go deeper.) A toy sketch of that word-by-word loop is below.
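To see the shape of that loop, here is a toy sketch in Python. It is purely illustrative (a stand-in, not ChatGPT’s actual code): the real system uses a giant neural net trained on billions of webpages, while this sketch substitutes a tiny table of word-pair counts. The one-word-at-a-time loop is the same idea.

```python
# Toy illustration of generating text one word at a time.
# A real model like ChatGPT uses a neural network trained on billions of
# webpages; here we substitute simple bigram counts over a tiny corpus.
# The loop is the point: look at the text so far, pick a likely next word,
# append it, and repeat.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(words, n=8):
    """Extend a list of words by sampling a 'reasonable continuation'."""
    words = list(words)
    for _ in range(n):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation; stop
        # Sample in proportion to how often each word followed the last one.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["the", "cat"]))
# e.g. "the cat sat on the mat . the dog"
```

Scale that word-pair table up to a neural net with billions of parameters trained on the whole web, and you get the “reasonable continuations” Wolfram describes.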

What we’re watching: The longer the Bing sessions went on, the more open the door became for creepy responses.

  • Beginning last Friday, Microsoft said, “the chat experience will be capped at 50 chat turns [a user question + Bing reply] per day and 5 chat turns per session.”

The bottom line: Computer science experts are much more concerned with how ChatGPT and its brethren will spread misinformation and perpetuate bias than with the AI being sentient or even superhuman.

  • Bing isn’t really happy or mad or in love. But it knows really well what humans sound like when we are.

Go deeper: “ChatGPT’s edge: We want to believe,” by Scott Rosenberg.

 
-AI can’t have a “woke mind virus” — it doesn’t have a mind


Earlier this month, Microsoft opened up invites to the new Bing search, which integrates OpenAI’s ChatGPT. One key difference between ChatGPT and Bing’s AI, internally code-named Sydney, is that Sydney has no knowledge cutoff. ChatGPT claims that it can’t tell you anything that happened after 2021, and though a few recent news stories seem to have slipped in via research testing, that is largely still true. Sydney, meanwhile, is completely plugged into the internet, which means it can see what people are writing about it and react in real time. It also means that Sydney very quickly went insane.

Continue here

 
Plus

-Big Tech’s future is up to a Supreme Court that doesn’t understand it

The firestorm over Big Tech and content moderation is coming to a head at the Supreme Court — but some experts fear it’s a job the court simply isn’t equipped to do well.

Why it matters: The court has historically not been great at grappling with new technology. As it dives into the political battle over social-media algorithms, there’s a real fear that the justices could end up creating more controversies than they solve.

Driving the news: The court is set to hear arguments this week in two cases involving Section 230, the federal law that says tech platforms aren’t liable for what their users choose to post.

Continue here

Source: axios.com
