[Caice-csse] an interesting cybersecurity research avenue in AI code generation

N Narayanan naraynh at auburn.edu
Thu May 1 09:15:16 CDT 2025


AI Code Hallucinations Increase the Risk of 'Package Confusion' Attacks
A new study found that AI-generated code is more likely to reference nonexistent software packages, made-up names that attackers can register in order to trick software into installing malicious code.
https://www.wired.com/story/ai-code-hallucinations-increase-the-risk-of-package-confusion-attacks/
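
One practical defense the article's premise suggests is checking whether every dependency an AI assistant proposes actually exists in the public registry before installing it. Below is a minimal sketch of that idea in Python, assuming the public PyPI JSON API and the `requests` library; the file name, function names, and warning text are illustrative, not from the article or any particular tool.

import sys
import requests

def package_exists(name: str) -> bool:
    """Return True if `name` is a published project on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

def audit_requirements(path: str = "requirements.txt") -> int:
    """Warn about requirement names that are not on PyPI (possible hallucinations)."""
    missing = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Keep only the bare project name: drop environment markers,
            # extras, and version pins.
            name = line.split(";")[0].split("[")[0]
            for sep in ("==", ">=", "<=", "~=", "!=", ">", "<"):
                name = name.split(sep)[0]
            name = name.strip()
            if name and not package_exists(name):
                missing.append(name)
    for name in missing:
        print(f"WARNING: '{name}' not found on PyPI; possible hallucinated dependency")
    return len(missing)

if __name__ == "__main__":
    sys.exit(audit_requirements(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"))

Run against a requirements file (e.g. "python audit_requirements.py requirements.txt"); a nonzero exit status means at least one listed package does not exist on PyPI. A registry lookup alone does not prove a package is benign, since attackers can pre-register hallucinated names, but it is a cheap first filter for this class of attack.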

