
Google’s AI quandary
-New ideas for reading AI “minds”
-Microsoft signs nuclear fusion deal as part of sustainability push
-Docs warn about AI’s “existential threat to humanity”
-OpenAI, Anthropic aim to quell concerns over how AI works
Google’s tough challenge at its I/O developer conference today: Show that it’s at the forefront of the white-hot generative AI battle while also reassuring more than a billion users that it’s moving carefully enough to avoid AI’s many potential harms and doomsday scenarios.
Why it matters: The company faces a chorus of critics, inside and out — with some saying it’s moving too recklessly and others warning it’s falling behind.
Driving the news: Google is expected to announce PaLM 2, an updated version of its large language model and rival to OpenAI’s GPT-4, according to CNBC, which said the company also plans to show off new features for its Bard chatbot and generative AI enhancements to its search results.
- Google is also aiming to demonstrate that its cloud can compete for AI customers with Microsoft’s Azure, which plays host to OpenAI’s services. On this front, Google will announce a deal with Character.ai, a high-profile startup, executives from both companies tell Axios.
- Finally, Google is poised to announce new hardware and consumer services that take advantage of AI advances. It has shown glimpses of the Pixel Fold, its first foldable smartphone, and released an unlisted ad for it last night. Google’s also expected to debut other devices, including the Pixel 7a and perhaps a Pixel Tablet.
The big picture: Google is locked in a battle with Microsoft and OpenAI at the same time critics argue that it is allowing that competition to blind it to the real risks posed by its technology.
- Several key figures working on AI issues at Google have sounded alarms, including Timnit Gebru, who was ousted in 2020 along with several colleagues, and more recently Geoffrey Hinton, the machine-learning pioneer known as “the Godfather of AI,” who left the company at the end of last month.
State of play: Microsoft and OpenAI have been racing to release products to the public, both consumer tools like ChatGPT and DALL-E 2 and services that let businesses build on the underlying technology.
- Microsoft said Monday that it is opening a paid preview that lets 600 of its customers use Copilots, Microsoft’s term for the AI-powered assistants it’s adding to programs including Word, Excel, PowerPoint, Outlook and OneNote.
- Microsoft has also broadened its preview of the AI-powered Bing to include image and video results and eliminated the waitlist.
Between the lines: Delivering search results in the form of AI-generated conversation presents Google with a business dilemma.
- The vast majority of its revenue comes from advertisers who pay to include their links alongside search results.
- While the company certainly can — and likely will — place ads near conversational search results, it’s not clear whether they will work as well as the current system for users or advertisers — or whether they will prove as profitable.
What they’re saying: In an interview with Axios, Google Cloud CEO Thomas Kurian called Character.ai “by far one of the most sophisticated teams we are working with.”
- Kurian said Google looks at the startup as both a customer and an engineering partner as it builds out its cloud computing services for AI.
- Character.ai chief Noam Shazeer — a former Googler — told Axios that he appreciated access to Google’s TPU processors as an employee and is excited to continue taking advantage of their power. “It’s going to really let us scale out our projects and really accelerate our research too,” he said.
Details: Google’s I/O keynote runs from 10am PT until around noon at Shoreline Amphitheater in Mountain View, with hundreds of thousands of people expected to watch online.
-New ideas for reading AI “minds”
Two leading AI startups are offering novel approaches to making the inner workings of the latest generative technologies more visible and readily governed.
Why it matters: Generative AI can craft impressive combinations of words and images, but the opaque nature of the technology makes it difficult to understand, let alone evaluate, its choices.
Driving the news: OpenAI released a paper and blog post discussing how one AI system can be used to explain how individual “neurons” in another AI system appear to work.
- The company used the state-of-the-art GPT-4 to analyze the work of GPT-2, a far older system — an approach that may not be able to help us understand the most advanced AI models.
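The core of the approach can be sketched as a scoring loop: one model writes a natural-language explanation of a neuron, another model simulates what that neuron's activations would be if the explanation were true, and the explanation is scored by how well the simulated activations track the real ones. The sketch below is a toy illustration, not OpenAI's actual pipeline — the neuron, the text snippets, and every activation value are invented, and a plain Pearson correlation stands in for their scoring method.

```python
# Toy sketch of an "explain, simulate, score" loop for interpreting a
# neuron. All values here are hypothetical stand-ins: in the real
# pipeline, GPT-4 writes the explanation and simulates activations.
from statistics import mean

def correlation_score(actual, simulated):
    """Score an explanation by how closely activations simulated from
    it track the neuron's real activations (Pearson correlation)."""
    ma, ms = mean(actual), mean(simulated)
    cov = sum((a - ma) * (s - ms) for a, s in zip(actual, simulated))
    spread_a = sum((a - ma) ** 2 for a in actual) ** 0.5
    spread_s = sum((s - ms) ** 2 for s in simulated) ** 0.5
    return cov / (spread_a * spread_s)

# Real activations of a hypothetical neuron on five text snippets,
# and the activations a "simulator" predicted from the candidate
# explanation "fires on mentions of historical dates".
actual = [0.9, 0.1, 0.8, 0.0, 0.7]
simulated = [0.8, 0.2, 0.9, 0.1, 0.6]

# A score near 1.0 suggests the explanation captures the neuron's
# behavior; a score near 0 suggests it does not.
print(round(correlation_score(actual, simulated), 3))
```

A high score only means the explanation predicts behavior on these snippets; it doesn't prove the neuron "means" what the explanation says.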
Meanwhile, Anthropic on Tuesday proposed the idea of a “constitution” to govern the behavior of Claude, its chatbot.
- The idea is that rather than using bits of human feedback to evaluate output, the engine would use a series of documented principles.
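That feedback loop can be sketched as a critique-and-revise pass over a list of written principles. Everything below is a hypothetical stand-in — the principles, the `violates` and `revise` stubs, and the sample drafts are invented for illustration; in a real system each step would be a prompt to the model itself rather than a string check.

```python
# Minimal sketch of a "constitutional" feedback loop: instead of human
# raters judging output, the model critiques and revises its own draft
# against documented principles. The model calls are stubbed out here.

PRINCIPLES = [
    "Do not provide instructions for wrongdoing.",
    "Prefer responses that are honest and harmless.",
]

def violates(draft: str, principle: str) -> bool:
    # Stub critique step: a real system would ask the model whether
    # the draft conflicts with the principle.
    return "lockpick" in draft and "wrongdoing" in principle

def revise(draft: str, principle: str) -> str:
    # Stub revision step: a real system would ask the model to rewrite
    # the draft so that it satisfies the violated principle.
    return "I can't help with that, but a locksmith could assist you."

def constitutional_pass(draft: str) -> str:
    """Run one critique-and-revise pass over every principle."""
    for principle in PRINCIPLES:
        if violates(draft, principle):
            draft = revise(draft, principle)
    return draft

print(constitutional_pass("Here is how to lockpick a door..."))
print(constitutional_pass("The weather is nice today."))
```

The appeal of the approach is auditability: the principles are written down, so outsiders can inspect what the system is being steered toward, rather than inferring it from scattered human ratings.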
The big picture: The new announcements are just some of the ways that companies creating generative AI systems are responding to criticism and trying to both understand and constrain powerful systems whose inner decision process remains mysterious even to those who create them.
What they’re saying: OpenAI researcher Jeff Wu said in an interview that even though the company was using a newer technology to analyze an older one, the early results suggest the approach itself has promise.
- “There’s at least reason for hope,” he said. “Ideally we would be looking forward, not backward.”
-Microsoft signs nuclear fusion deal as part of sustainability push
Microsoft has signed a power purchase agreement with nuclear fusion energy startup Helion for at least 50 megawatts of electricity beginning in 2028, the companies announced Wednesday.
Why it matters: The agreement is being billed as the world’s first such deal for a fusion firm. It comes as money and interest pour into the much-heralded, yet-to-be-realized clean energy source.
Zoom in: Helion plans to locate its fusion plant in Washington state, home to both companies, and sell power directly into the grid via Constellation.
-Docs warn about AI’s “existential threat to humanity”
Artificial intelligence poses “an existential threat to humanity” akin to nuclear weapons in the 1980s and should be reined in until it can be properly regulated, an international group of doctors and public health experts warned Tuesday in BMJ Global Health.
What they’re saying: “With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing,” wrote the authors, among them experts from the International Physicians for the Prevention of Nuclear War and the International Institute for Global Health.
-OpenAI, Anthropic aim to quell concerns over how AI works
Two leading AI startups are offering novel approaches to making the inner workings of the latest generative technologies more visible and readily governed.
Why it matters: Generative AI can craft impressive combinations of words and images, but the opaque nature of the technology makes it difficult to understand, let alone evaluate, its choices.
Driving the news: OpenAI released a paper and blog post discussing how one AI system can be used to explain how individual “neurons” in another AI system appear to work.