
Your AI personal assistant is coming
Meet the people, protocols, and possibilities to build a better tech future.
 
We might not be far from a world where we can all have a full-time AI personal assistant.
- Want to order pizza and have it delivered to your door? It will do it for you.
- Want to finally get to your to-do list? It will auto-complete it, step by step.
- Want help planning a trip? Get your customized itinerary in seconds.
Autonomous artificial intelligence agents have begun to emerge in the last few months, and AI personal assistants are just one of many use cases. These new tools are hyped as being able to carry out multi-step tasks with limited human involvement, and some regard autonomous AI agents as precursors to artificial general intelligence, where AI systems do things they weren’t specifically trained to do.
This week we’re exploring autonomous AI agents to understand this new application of AI, how AI agents work, and what it means for a future where the large language models (LLMs) that power them are improving faster and faster.
Autonomous AI agents
An autonomous AI agent is the software layer that strings together multiple LLM calls so the “agent” can complete a series of successive tasks on its own. You give the agent a goal, it breaks that goal down into a series of tasks, and it begins completing them in order.
While using good old ChatGPT requires continuous prompting from a human, AI agents take the output from GPT and feed it back into their memory so they can build on what they’ve just done: correcting mistakes, making improvements, and taking the next step in a sequence of actions.
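To make that loop concrete, here is a minimal sketch of how such an agent might be wired together in Python. It is illustrative only: it assumes the pre-1.0 openai SDK and an API key, and the call_llm helper, the prompts, and the task format are assumptions of this sketch, not how AutoGPT, BabyAGI, or AgentGPT are actually implemented.

```python
# Minimal sketch of an autonomous-agent loop (assumes the pre-1.0 `openai` SDK).
# The prompts, helper names, and task format here are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def call_llm(prompt: str) -> str:
    """Send a single prompt to the model and return its text reply."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]


def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    # 1. Ask the model to break the goal into a short, ordered task list.
    plan = call_llm(f"Break this goal into a short numbered task list:\n{goal}")
    tasks = [line for line in plan.splitlines() if line.strip()]

    memory: list[str] = []  # results of completed tasks, fed back into later prompts
    for task in tasks[:max_steps]:
        # 2. Execute each task, giving the model the goal plus what it has done so far.
        context = "\n".join(memory) or "(nothing yet)"
        result = call_llm(
            f"Goal: {goal}\nCompleted so far:\n{context}\n\nNow do this task: {task}"
        )
        # 3. Store the result so the next step can build on it (the agent's "memory").
        memory.append(f"{task} -> {result}")
    return memory


if __name__ == "__main__":
    for entry in run_agent("Plan a weekend trip to Lisbon"):
        print(entry)
```

Real agent frameworks layer more on top of this loop, such as tool use, web browsing, and longer-term memory stores, but the basic pattern of plan, act, record, repeat is the same.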
AI agents are still quite limited, but they represent a new type of application built on top of ChatGPT. OpenAI released an API for ChatGPT in March, which has spurred a flurry of new applications, integrations, and use cases for AI. AutoGPT, BabyAGI, and AgentGPT, three different AI agent applications, all use OpenAI’s new API.
Endless possibilities
While autonomous AI agents aren’t exactly autonomous yet, they could be soon, given how quickly LLMs are improving. As LLMs get more powerful, there will likely be more use cases for AI agents and more ways AI is integrated into everyday life.
In the last month, many have begun to predict what today’s early AI agents could become.
- Personal assistants: Dustin Moskovitz, co-founder of Facebook and Asana, predicted on Twitter: “By 2030, I expect everyone will have a personal AI agent that, among other things, can function as a kind of data broker to help you with: health providers, service providers generally, filing taxes, government services generally.”
- Personal tutors: In the education space, AI agents could replace tutors and help students with personalized learning.
- Career coach: Justine Moore, a partner at venture fund a16z, anticipates: “I fully expect to see an AI job search stack in the next few months. There will be agents that find jobs that are a fit for you, submit personalized applications, prep you for interviews, give feedback afterwards, and negotiate salary on your behalf.”
- Online writing and posting: AI agents will soon be skilled at managing social media accounts, optimizing SEO, and creating podcasts.
- Search: Searching the web for us (one Twitter user predicted we won’t use a search bar anymore because we’ll have AI agents search for us)
- This role: Researching, drafting, editing, fact-checking, and sending this newsletter each week
One LinkedIn post predicts that the next billion-dollar startup will be so AI-powered that it will need only three humans to run it. Another tweet predicted that AI will be transformative for solopreneurs.
Beware of the hype cycle
If you spend long enough on social media, and thumb up enough of the same posts, the algorithm will begin to dispense tweet after tweet about the latest application that’s built on AI, about the game-changing potential of autonomous agents, or about some flashy solution that seems like it’s looking for a problem to solve (did you think you needed an AI teammate just for your workplace Slack?).
There are several reasons to beware of the hype cycle around AI:
- The techno-optimist trap: We could begin to believe that every problem is one AI can solve, leading us to develop solutions in search of problems, over-invest in AI, and inflate an AI bubble built on unrealistic optimism about AI solving problems it’s not well suited to solve.
- The techno-pessimist trap: We could grow tired of all the hype around AI and fail to recognize its true power and potential. One could argue that the hype cycle around web3 has created a blind spot in many who have written off blockchain technology too soon (see a recent edition on the power of blockchain after the bust).
Most autonomous AI agents are not even three months old. They’re clunky, largely unavailable to the masses, and often get stuck trying to complete a series of tasks. But the rate of improvement in LLMs is staggering; these early experiments in autonomous AI agents will lay the groundwork for more powerful and effective tools in the future.
Still early days
The limitations of autonomous AI agents are rooted in the limitations of GPT-4, which struggles to stay focused on the ultimate goal it has been given. As the AI agent works its way down the task list, it becomes prone to “hallucinations,” making things up along the way. For example, one developer said he set his AI agent the goal of researching waterproof shoes, but it became fixated on shoelaces and got stuck in its own loop (pun intended).
Today autonomous AI agents can:
- Code: Write, analyze, edit, and save code. An agent can develop code to build a website, though this can take over 45 minutes and dozens of steps to complete.
- Research: Conduct market research for a prospective company or other research endeavor.
- Complete to-do lists: Create a to-do list that completes itself.
Are we creating a monster?
Autonomous AI agents are just one example of what’s being built on top of AI. In the coming months and years, AI will be channeled to solve problems, generate content, and influence people, for good and for bad. With OpenAI’s API and AI chatbots from competitors like Google, new companies and applications are showing up every day. But with so many possibilities, we’ll need guardrails to direct AI towards positive ends.
Geoffrey Hinton, the AI pioneer who recently left Google so he could speak freely about the dangers of AI, told The New York Times, “It is hard to see how you can prevent the bad actors from using it for bad things.” Hinton didn’t sign the open letter calling for a pause on AI development, but his voice joins a chorus of others concerned that the wild west of AI development, in the absence of thoughtful guardrails, could spell problems in the future. The open letter collected almost 30,000 signatures from people who agreed that “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
As big tech firms pivot their strategies to outcompete one another on AI, we’re at the beginning of an AI “arms race” in which the pursuit of profit and market share supersedes the responsibility to develop LLMs safely. When profitability outweighs safety as the top priority, the tech industry is liable to repeat past mistakes, encoding its growth-at-all-costs ethos into AI algorithms.
While the latest autonomous AI agent might be intriguing from a personal productivity perspective, it symbolizes the moment we’re in: an AI renaissance that could produce a million new solutions and a million new problems.
Plus
While DAOs and blockchain technologies promise trustless, decentralized coordination and governance, they still struggle to solve for the complexity of human interaction. Axios profiled the human drama inside of Aragon, an organization that builds tools for DAOs.
An article in The Guardian outlined five predictions from workplace experts on the way AI will change the workplace: from farming and education to healthcare and the military.
Digital Equity Data Dashboard
Connect Humanity is excited to launch its Digital Equity Data Dashboard. In 2022, Connect Humanity, in partnership with the TechSoup Global Network, Civicus, Forus, WINGS, NTEN, and others, conducted the largest mapping to date of the digital capacities and gaps of civil society organizations and the people they serve. Now they’ve launched an interactive data dashboard to ensure others can benefit from the data shared by survey participants and to further understanding of the state of digital inequity. Check it out here.
Source: projectliberty.io