
Highlights

  • As New Scientist reports, ETH Zurich computer science grad student David Zollikofer and Ohio State University AI malware researcher Ben Zimmerman created a computer file that can spread to a victim’s computer in the form of an email attachment.
  • “We ask ChatGPT to rewrite the file, keeping the semantic structure intact, but changing the way variables are named and changing the logic a bit,” Zollikofer told New Scientist.
  • As a result, the “synthetic cancer,” as the researchers call the virus, isn’t even detectable by antivirus scans, making it the perfect camouflaged intruder.
  • Once established on the victim’s system, the virus then opens up Outlook and starts writing contextually relevant email replies — while including itself as a seemingly harmless attachment.
  • It’s a terrifying example of how AI chatbots can be exploited to efficiently spread malware. Worse yet, experts warn that the tools themselves could aid bad actors in making such malware even harder to detect.
  • “Our submission includes a functional minimal prototype, highlighting the risks that LLMs pose for cybersecurity and underscoring the need for further research into intelligent malware,” the pair wrote in a yet-to-be-peer-reviewed paper.
  • The AI was alarmingly believable in its attempts to “socially engineer” email replies.
  • Other researchers have previously used ChatGPT to create AI “worms” that can similarly infiltrate a victim’s emails and access data.
  • “The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload),” a team of researchers wrote in a different paper earlier this year.
  • To experts, viruses like the one devised by Zollikofer and Zimmerman are only the tip of the iceberg.
  • “I think we should be concerned,” University of Surrey cyber security researcher Alan Woodward, who wasn’t involved in the research, told New Scientist. “There are various ways we already know that LLMs can be abused, but the scary part is the techniques can be improved by asking the technology itself to help.”