
Experts don’t trust tech CEOs on AI
Dishonest, untrustworthy and disingenuous — that’s how a majority of experts surveyed from leading universities view AI companies’ CEOs and executives, Axios’ Margaret Talev and Ryan Heath report.
What’s happening: 56% of computer science professors at top U.S. research universities surveyed by Axios, Generation Lab and Syracuse University described the corporate leaders as “extremely disingenuous” or “somewhat disingenuous” in their calls for regulation of AI.
Why it matters: The latest Axios-Generation Lab-Syracuse University AI Experts Survey shows how deep the divide has grown between those who make and sell AI and those who study and advance it.
The big picture: Some critics of Big Tech have argued that leading AI companies like Google, Microsoft and Microsoft-funded OpenAI support regulation as a way to lock out upstart challengers who’d have a harder time meeting government requirements.
- Our survey suggests that this perspective is shared by many computer science professors at top U.S. research universities.
Context: U.S. policymakers rely on help from tech companies and their leaders to shape the rules for protecting individuals’ safety, freedoms and livelihoods in the AI era.
- Top tech executives have been meeting in closed-door sessions with U.S. senators in an unusual push for their own regulation.
The intrigue: Survey respondents weighed in on several other provocative ideas.
- 55% favor or lean toward the idea of the federal government creating a national AI stockpile of chips through the Defense Production Act to avert future shortages.
- 85% said they believe AI can be at least somewhat effective in predicting criminal behavior — but only 9% said they believe it can be highly effective.
- One in four say AI will become so advanced at medical diagnoses that it will generally outperform doctors.
By the numbers: Asked to prioritize just one dimension of AI regulation, respondents ranked “misinformation” as their top concern (34%), followed by “national security” (20%), while “job protection” (5%) and “elections” (4%) came last.
- 62% said misinformation is the biggest challenge in maintaining the credibility and authenticity of news in an environment that includes AI-generated articles.
- 95% assessed AI’s current deepfake capability as “advanced” when it comes to video and audio content, with 27% saying it’s “highly advanced, indistinguishable from real content” and 68% saying it’s “moderately advanced, with some imperfections.”
Yes, but: 72% of respondents were “extremely optimistic” or “somewhat optimistic” about “where we will land with AI in the end.”
What they’re saying: “You have the people that can look under the hood at what these companies are churning out into society at a historic scale, and that’s the conclusion they’ve come out with — that they’re worried about the intentions of the men running the machines,” said Cyrus Beschloss, CEO of Generation Lab.
How it works: The survey includes responses from 216 professors of computer science at 67 of the top 100 U.S. programs.
Behind the Curtain: AI architects’ greatest fear
Brace yourself: You will soon need to wonder if what you see — not just what you read — is real across every social media platform, Axios’ Jim VandeHei and Mike Allen write in their “Behind the Curtain” column.
Why it matters: OpenAI and other creators of artificial intelligence technologies are close to releasing tools that will make the easy — almost magical — creation of fake videos ubiquitous.
- One leading AI architect told us that in private tests, they can no longer distinguish fake from real — something they didn’t expect would be possible so soon.
- This technology will be available to everyone — including bad actors internationally — as soon as early 2024.
- Making matters worse, this will hit when the biggest social platforms have cut the number of staff policing fake content. Most have also weakened their misinformation policies.
The big picture: Just as the 2024 presidential race hits high gear, more people will have more tools to create more misinformation or fake content on more platforms — with less policing.
- A former top national security official told us that Russia’s Vladimir Putin sees these tools as an easy, low-cost, scalable way to help tear Americans apart.
- U.S. intelligence shows Russia actively tried in 2020 to help re-elect former President Trump. Top U.S. and European officials fear Putin will push for a 2024 win by Trump, who wants to curtail U.S. aid to Ukraine.
Yes, the White House and some congressional leaders want regulations to call out real versus fake videos. The top idea: mandating watermarking so it’ll be clear what videos are AI-generated.
- But researchers have tried that. The tech doesn’t work.
- In any case, deciding which content is “AI-generated” is rapidly becoming impossible, as the tech industry rolls AI into every product used to create and edit media.
“Of course, it’s a worry,” said Reid Hoffman, co-founder of LinkedIn and a forceful defender of AI.
- “It’s one of the places where AI and amplification intelligence could [produce] a negative outcome,” he added.
Sam Altman, co-founder and CEO of OpenAI, told us: “This is an important near-term risk for the industry to address. We need a combination of responsible model deployment and public awareness.”
Reality check: The best self-policing in the world won’t stop the faucet of fake. The sludge will flow. Fast. Furiously.
- It could get so bad that some AI architects told us they’re pushing to speed up the release of powerful new versions so the public can deal with the consequences — and adapt — long before the election.
A senior White House official told us officials’ biggest concern is the use of this technology and other AI capabilities to dupe voters, scam consumers on a massive scale and carry out cyberattacks.