
Topological approach detects adversarial attacks in multimodal AI systems

New vulnerabilities have emerged with the rapid advancement and adoption of multimodal foundation AI models, significantly expanding the attack surface for cybersecurity threats. Researchers at Los Alamos National Laboratory have put forward a novel framework that identifies adversarial threats to foundation models, artificial intelligence approaches that seamlessly integrate and process text and image data. This work enables system developers and security experts to better understand model vulnerabilities and reinforce resilience against increasingly sophisticated attacks.
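The article does not describe how the framework works internally, but the "topological approach" of the headline generally refers to topological data analysis (TDA) applied to a model's embedding space. The sketch below is a hypothetical illustration of that general idea, not the Los Alamos method: it computes a zero-dimensional persistence summary (single-linkage merge heights, which coincide with the death times of a Vietoris-Rips filtration) for a batch of joint text-image embeddings and compares it against a summary computed on clean data. The embedding function, the baseline batch, and the detection threshold are all assumptions introduced for illustration.

```python
# Hypothetical sketch of topological anomaly detection on embeddings.
# This is NOT the published Los Alamos framework; embed(), the baseline
# batch, and THRESHOLD are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage

def persistence_summary(embeddings: np.ndarray) -> np.ndarray:
    """Zero-dimensional persistence summary of a point cloud of embeddings.

    Single-linkage merge distances equal the death times of 0-dimensional
    features in a Vietoris-Rips filtration, so they serve as a cheap
    topological signature of the embedding geometry.
    """
    merges = linkage(embeddings, method="single")
    return np.sort(merges[:, 2])

def topological_distance(clean: np.ndarray, test: np.ndarray) -> float:
    """Euclidean distance between two (truncated) persistence summaries."""
    n = min(len(clean), len(test))
    return float(np.linalg.norm(clean[:n] - test[:n]))

# Assumed usage: `embed` maps a batch of text-image pairs to joint embeddings.
# baseline = persistence_summary(embed(clean_batch))
# score = topological_distance(baseline, persistence_summary(embed(suspect_batch)))
# if score > THRESHOLD:  # threshold calibrated on clean data -- an assumption
#     print("possible adversarial perturbation")
```

A real detector would use richer topological features (for example, higher-dimensional persistence diagrams) and a calibrated statistical test rather than a fixed threshold; the sketch only conveys the shape of the idea.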

