
AI’s uncertain cyber path
Cybersecurity experts are cautiously optimistic about the new wave of generative AI innovations like ChatGPT, while malicious actors are already leaping to experiment with them.
Cyber leaders see multiple ways generative AI can assist organizations’ defenses: reviewing code for efficiency and potential security vulnerabilities, exploring new tactics that malicious actors might employ, and automating recurring tasks like writing reports.
- “I’m really excited as to what I believe it to be in terms of ChatGPT as being kind of a new interface,” Resilience Insurance CISO Justin Shattuck told Axios. “A lot of what we’re constantly doing is sifting through noise. And I think using machine learning allows us to get through that noise quicker. And then also notice patterns that we humans aren’t typically going to notice.”
- “Text-based generative AI systems are great for inspiration,” Chris Anley, chief scientist at IT security company NCC Group, told Axios. “We can’t trust them on factual matters, and there are some types of questions they are currently very bad at answering, but they are very good at making us better writers — and even better thinkers.”
Reality check: The idea of using chatbots to review or write secure code has already been called into question by some experts and researchers.
- A Cornell University study released in November showed that AI assistants led coders to write more vulnerable code: “Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access,” researchers wrote in the study’s overview.
- “Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.”
- Anley conducted an experiment last week in which he asked ChatGPT to find vulnerabilities in various levels of flawed security code. He found a number of limitations: “Like a talking dog, it’s not remarkable because it’s good; it’s remarkable because it does it at all.” (A rough sketch of that kind of code-review prompt appears below.)
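For readers who want to picture what such an experiment looks like in practice, here is a minimal, hypothetical sketch: a deliberately flawed snippet is handed to an LLM API with a request to flag vulnerabilities. It assumes the official openai Python client (v1.x) and an API key in the environment; the model name and the sample snippet are illustrative assumptions, not details from Anley’s write-up.

```python
# Hypothetical sketch of an LLM-based code review, in the spirit of Anley's
# experiment. Assumes `pip install openai` (v1.x) and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Deliberately flawed sample: the query is built with string formatting,
# a textbook SQL-injection pattern a reviewer should catch.
SNIPPET = """
def get_user(db, username):
    query = "SELECT * FROM users WHERE name = '%s'" % username
    return db.execute(query).fetchone()
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a security code reviewer."},
        {"role": "user", "content": "List any security vulnerabilities in this code:\n" + SNIPPET},
    ],
)

print(response.choices[0].message.content)
```

Note that running this on real, proprietary code is exactly the practice McShane questions below: the snippet leaves your environment and is sent to a third-party service.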
Using generative AI to review code strikes some experts as particularly dangerous.
- “How the hell are software engineers pasting their code into something they don’t own?” Ian McShane, vice president of strategy at security firm Arctic Wolf and a former Gartner analyst, told Axios. “Would you phone up random Steve off the street and say, ‘Hey, come and have a look through my financial auditing. Can you tell me if anything’s wrong?'”
- McShane does see benefits in the approachable chatbot user interface for lowering the barrier to entry to security. But unknowns about the underlying data sets and a lack of transparency also give him pause.
- “What mustn’t get lost is that this is still machine learning, or machine learning to train from data that’s provided,” he said. “And you know, there’s no better phrase than ‘garbage in, garbage out.'”
Meanwhile, hackers and malicious actors, always on the prowl for ways to speed up their operations, have been quick to incorporate generative AI into attacks.
- Researchers at Check Point Research spotted malicious hackers last month using ChatGPT to write malware, create data encryption tools and write code for new dark web marketplaces.
- “Recent AI systems are excellent at generating plausible-sounding text and can generate variations on a theme quickly and easily, without telltale spelling or grammar errors,” Anley said. “This makes them ideal for generating variations of phishing emails.”
The bottom line: Shattuck maintains that organizations exploring AI usage should see through the larger hype and “understand the limitations, like truly understand where it’s at.”
- “It’s not one-size-fits-all,” he said. “Don’t try to apply it to something it’s not … Don’t push it to prod[uction] tomorrow.”
AI revolution: Tech finds its next platform
When Silicon Valley insiders say that ChatGPT and generative AI are “the next platform,” here’s what they mean:
- Users are rushing to try it out — and staying with it.
- Entrepreneurs are finding endless new applications for it.
- Companies haven’t yet figured out how to make money with it, but they’re confident that will come.
Continued here
Bing chatbot’s freakouts show AI’s wild side
As users test-drive Microsoft Bing’s new AI-powered chat mode, they’re finding example after example of the bot seeming to lose its mind — in different ways.
What’s happening: In the past few days, Bing has displayed a whole therapeutic casebook’s worth of human obsessions and delusions.
Continued here