Fujitsu Laboratories has announced the development of a technology that makes AI models more robust against deception attacks. The technology protects against attempts to use forged attack data to trick AI models into deliberate misjudgements when AI is applied to sequential data consisting of multiple elements.
With the use of AI technologies spreading across various fields in recent years, attacks that deliberately interfere with an AI's ability to make correct judgments are a growing concern. While many conventional techniques exist for hardening models against such attacks on media data like images and sound, their application to sequential data such as communication logs and service usage histories remains insufficient, owing to the difficulty of preparing simulated attack data and the accompanying loss of accuracy.
To overcome these challenges, Fujitsu has developed a robustness enhancement technology for AI models that is applicable to sequential data. The technology automatically generates large volumes of simulated attack data and combines them with the original training data, improving resistance to potential deception attacks while maintaining judgment accuracy.
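The announcement does not detail how the simulated attack data is generated, but the general approach it describes, perturbing sequences and adding them back to the training set with their original labels, can be sketched as follows. All function names, perturbation choices, and parameters here are illustrative assumptions, not Fujitsu's actual implementation.

```python
import random

def simulate_attack_variants(sequence, n_variants=3, seed=0):
    """Generate perturbed copies of a sequence (e.g., an event log)
    by randomly swapping and dropping elements, mimicking forged data.
    The perturbation types are hypothetical examples."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        v = list(sequence)
        # Swap two random positions to simulate reordered events.
        if len(v) >= 2:
            i, j = rng.sample(range(len(v)), 2)
            v[i], v[j] = v[j], v[i]
        # Occasionally drop an element to simulate a suppressed entry.
        if len(v) > 1 and rng.random() < 0.5:
            v.pop(rng.randrange(len(v)))
        variants.append(v)
    return variants

def augment_training_set(labeled_sequences, n_variants=3):
    """Combine original sequences with attack-simulating variants,
    keeping each original label so a model trained on the result
    learns to make the same judgment despite the perturbations."""
    augmented = []
    for seq, label in labeled_sequences:
        augmented.append((seq, label))
        for v in simulate_attack_variants(seq, n_variants):
            augmented.append((v, label))
    return augmented

# Toy usage: one labelled service-usage sequence, two simulated variants.
train = [(["login", "download", "upload", "logout"], "benign")]
augmented = augment_training_set(train, n_variants=2)
print(len(augmented))  # original + 2 variants = 3
```

Because the variants keep the original label, retraining on the augmented set pushes the model to ignore the kinds of perturbations an attacker might introduce, which is the robustness-versus-accuracy trade-off the announcement says this technology manages.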
When this technology was applied to an AI model developed by Fujitsu to judge whether countermeasures against a cyberattack are necessary, it was confirmed that about 88% of misjudgements could be prevented. Find out more at https://bit.ly/31QacSk.
© Technews Publishing (Pty) Ltd. | All Rights Reserved.