AI could supercharge misinformation
For all the promise that artificial intelligence holds for health care, one of the industry’s big fears is its potential to churn out more convincing misinformation.
Why it matters: AI experts are warning that the technology used to create sophisticated fake images, audio and video, known as deepfakes, is getting so good it could soon become almost impossible to distinguish fact from fiction, Tina writes.
- The COVID-19 pandemic laid bare the deadly stakes of health care misinformation, as false information on vaccines, treatments and masks flooded social media sites.
- Deepfakes could make it even more challenging to react to public health threats, secure patients’ sensitive data or combat increasing cyberattacks on hospitals, experts told Axios.
The big picture: This technology is improving and spreading faster than experts expected, at a time when health information is being politicized and social media’s already weak guardrails have been whittled down.
- “We do not want to play catch-up as we have, unfortunately, in the past with, for instance, ransomware attacks,” said John Riggi, national adviser for cybersecurity and risk for the American Hospital Association.
- The AHA in September urged health systems to be vigilant about the emerging risk of deepfakes.
False images and audio that appear to come from a trusted source will make it harder to spread accurate health messages and will erode the public’s confidence in legitimate sources.
- Imagine the impact of a deepfake Anthony Fauci video telling people not to get vaccinated, for instance.
- AI could enable disinformation to be automated and disseminated at scale. “That’s the super-threat here,” said Heather Lane, senior architect of the data science team for Athenahealth.
Source: axios.com