
AI takes on insurance shopping
For all the frenzied speculation about how AI can transform health care, some companies are leveraging the technology for a decidedly simpler but still critical task: making shopping for health insurance less terrible.
Why it matters: Many Americans typically stick with their health plan year after year even when better and cheaper options are available, Maya writes.
- That’s often because it’s too hard to predict how much care they’ll need or figure out if they can actually get a better deal.
- Companies are rolling out AI-powered tools aimed at making the shopping experience easier, and even brokers and agents selling health plans say they see the technology as a helpful aid, rather than an existential threat.
Context: The tools can be especially helpful for the tens of millions of people purchasing Medicare Advantage plans or shopping on the Affordable Care Act marketplaces.
- The average shopper on the ACA marketplaces during the current enrollment season has 100 plans to choose from, and Medicare Advantage options have surged in recent years.
How it works: The AI tools generally gather basic information about an individual insurance shopper and their expected health needs and then use that data to churn out predictions for the best health plan options.
- Alight, a company providing cloud-based HR services, said 95% of the employers it serves used AI technology — including a virtual assistant feature — to help employees pick health benefits during fall open enrollment.
- But machines aren’t taking over everything. “AI is great, and it’s fantastic, and can respond to so many different things. But it doesn’t always have a psychological understanding” of what customers are seeking, said Carey Gruenbaum, CEO of AI-driven The Big Plan.
Reality check: Tech tools that help people pick health insurance aren’t exactly new.
- But the current hype around AI could boost consumer interest in the tools, or at least give companies a new angle for promoting them.
Copyright is new AI battlefield
Looming fights over copyright in AI are likely to set the new technology’s course in 2024 faster than legislation or regulation.
Why it matters: After a year of lawsuits from creators protecting their works from getting gobbled up and repackaged by generative AI tools, the new year could see significant rulings that alter the progress of AI innovation, Axios’ Megan Morrone reports.
What’s happening: The copyright decisions — over both the use of copyrighted material in the development of AI systems, and the status of works that are created by or with the help of AI — are crucial to the technology’s future and could determine winners and losers in the market.
- The New York Times filed a lawsuit against OpenAI and Microsoft last week, claiming their AI systems depend on “widescale copying” that constitutes mass copyright infringement.
“Copyright owners have been lining up to take whacks at generative AI like a giant piñata woven out of their work,” James Grimmelmann, professor of digital and information law at Cornell, tells Axios. “2024 is likely to be the year we find out whether there is money inside.”
- “If copyright law says that some kinds of AI models are legal and others aren’t, it will steer innovation down a path determined not by what uses of AI are beneficial to society but one based on irrelevant technical details of the training process,” Grimmelmann adds.
Reality check: The copyright system has ways to adapt to an AI world.
- Jerry Levine, general counsel for ContractPodAI, a generative AI tool that helps lawyers analyze legal documents, said that if a chatbot response might violate a copyright, the tool could offer to summarize the text and link to the original, instead of reproducing the entire copyrighted work.
Threat level: This is another arena where the biggest players could get even bigger. The greatest risk to AI innovation could lie in a ruling that limits generative AI to players with the resources to fight lawsuits and license large amounts of data.
Chief justice urges “humility” on AI
Chief Justice John Roberts wrote in his year-end report on the federal judiciary that “any use of AI requires caution and humility.”
“[L]egal determinations often involve gray areas that still require application of human judgment,” Roberts said.
“Nuance matters,” Roberts added:
“Much can turn on a shaking hand, a quivering voice, a change of inflection, a bead of sweat, a moment’s hesitation, a fleeting break in eye contact. And most people still trust humans more than machines to perceive and draw the right inferences from these clues.”
The bottom line: “AI is based largely on existing information, which can inform but not make such decisions … I predict that human judges will be around for a while.”
“But with equal confidence I predict that judicial work—particularly at the trial level — will be significantly affected by AI. Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them.”