You may have “lost your face” without even knowing it...
Not long ago, news footage of a man wearing a helmet while visiting a real estate exhibition went viral. Investigative reporters revealed that he had donned the helmet not because he was worried about revealing his wealth or having his privacy invaded, but because he knew that savvy developers were collecting facial data through artificial intelligence (AI) and using it to attribute sales commissions to the correct sales channels. This effectively prevents customers from shopping around through different sales channels: once you have “shown your face,” the system knows the quotations you’ve previously received from other distributors, eliminating any possibility of additional discounts or incentives. It’s no wonder that customers have had to resort to primitive disguises to prevent their “faces” from being “stolen” by developers and used against them.
This case prompted public concern regarding the increasingly widespread use of AI applications such as facial recognition, and also sparked debate on the real price that we may have to pay in terms of security as we enjoy the new experiences enabled by AI.
AI security issues
In fact, this case illustrates only one of several AI security issues. Today's AI faces challenges in at least three areas.
Firstly, AI technology carries inherent security risks. The machine learning that AI relies on turns the transparent logic we traditionally operate on into a “black box”: we see the decisions the model outputs, but cannot fully grasp the inner workings of the neural network that produced them. This is the biggest difference between AI and traditional “automatic control.” If, in this process, the training data is contaminated (a technique known as data poisoning), the accuracy of the model's judgments will suffer. Furthermore, flaws in the machine learning model itself conceal security threats at an even deeper level. In other words, compared with traditional information security issues, AI's inherent security issues are more difficult to detect and respond to in time.
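The effect of data poisoning can be shown with a toy model. The sketch below uses a simple nearest-centroid classifier and entirely hypothetical numbers: flipping a few training labels shifts the class centroids, and the same input suddenly receives the opposite verdict.

```python
# A minimal sketch of training-data poisoning. A toy nearest-centroid
# classifier separates "benign" (low feature values) from "malicious"
# (high values); flipping a few labels moves the centroids and changes
# the verdict on the same input. All numbers are hypothetical.
def centroid_classifier(data):
    """data: list of (feature, label) pairs; returns a predict function."""
    groups = {}
    for x, label in data:
        groups.setdefault(label, []).append(x)
    centroids = {lbl: sum(xs) / len(xs) for lbl, xs in groups.items()}
    return lambda x: min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (8, "malicious"), (9, "malicious"), (10, "malicious")]
# An attacker relabels two malicious samples as benign.
poisoned = clean[:3] + [(8, "benign"), (9, "benign"), (10, "malicious")]

print(centroid_classifier(clean)(7))     # malicious
print(centroid_classifier(poisoned)(7))  # benign
```

Real poisoning attacks target far more complex models, but the failure mode is the same: the model faithfully learns whatever its contaminated data teaches it.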
Secondly, there is potential for the malicious use of AI technology. Historical experience tells us that hackers are always up to date with, or even ahead of, the latest technology. The emergence of AI undoubtedly provides hackers with new offensive weapons. For instance, machine learning can be used to analyze massive amounts of data, infer the security principles of attack targets, simulate methods of attack, and identify weaknesses. We must recognize that the greatest dilemma in the field of security lies in the fact that defenders must predict and block all of the attack methods that an attacker might use, while the attacker only needs to identify one loophole in the defender's defenses to break through. Therefore, as global digital transformation rapidly advances and AI technology develops at an exponential pace, information security risks are higher than ever.
Thirdly, there is the issue of privacy leakage and data abuse in AI applications. The example cited at the beginning of this article falls into this category. Following the emergence of the “Artificial Intelligence Internet of Things (AIoT),” a hybrid product of AI and IoT, this risk will spread even faster, exert a wider scope of influence, and in turn become more difficult to prevent.
Responding to AI security challenges
From this perspective, AI security issues present substantial challenges. But no matter how difficult these challenges may be, they must be overcome.
A lack of tools may be the biggest drawback when tackling AI security issues. Kevin Mitnick, a world-renowned white hat hacker, once said that no tool or product truly conforms to AI core technology, adding that even his own experience in the security analysis and evaluation of AI products was insufficient.
However, just as AI can be used by hackers as a means of attack, AI itself can also be used as a security tool. Today's network security vendors are drawing on the powerful capabilities of AI technology to solve new threats that traditional defense solutions cannot solve, improve the detection accuracy of original detection solutions, perform more efficient automated data classification, and achieve faster threat response and processing. In other words, AI can not only be used to fight against existing attacks, but also to proactively perceive and predict security threats that may occur in the future.
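One of the simplest forms this takes is automated anomaly detection over security telemetry. The sketch below stands in for the machine learning models real vendors use with a basic statistical rule (deviation from the baseline mean); the traffic figures and threshold are hypothetical.

```python
# A minimal sketch of AI-style anomaly detection for security monitoring:
# flag samples that deviate from the baseline by more than `threshold`
# standard deviations. Real products use far richer models; the traffic
# numbers below are hypothetical.
from statistics import mean, stdev

def detect_anomalies(samples, threshold=2.0):
    """Return the samples lying more than threshold sigmas from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - mu) > threshold * sigma]

# Hypothetical requests-per-minute from a server log: mostly steady,
# with one burst that might indicate a scan or denial-of-service attempt.
traffic = [102, 98, 105, 99, 101, 97, 103, 100, 980, 104]
print(detect_anomalies(traffic))  # [980]
```

The same pattern scales up: instead of a single mean and standard deviation, production systems learn a multi-dimensional baseline of “normal” behavior and alert on deviations from it.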
Figure 1: Machine learning-enabled abnormal behavior detection (Image source: NXP)
According to the "White Paper on China Cyber Security Industry (2019)" issued by the China Academy of Information and Communications Technology, AI technology is being heavily utilized in the field of cyber security in aspects including (but not limited to) the following:
- In terms of abnormal traffic detection, AI provides a new solution for encrypted traffic analysis.
- In terms of malware defense, AI applications for specific scenarios have made positive progress.
- In terms of abnormal behavior analysis, AI can be an effective supplement to pattern recognition.
- In terms of sensitive data protection, AI helps improve data identification and protection capabilities.
- In terms of security operation management, AI-based security orchestration, automation and response (SOAR) is being increasingly used.
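The last item, SOAR, is easy to picture in miniature: incoming alerts are matched against playbook rules and mapped to automated response actions, so an analyst only sees what automation cannot handle. The alert fields, playbook entries, and action names below are hypothetical.

```python
# A minimal sketch of the SOAR idea (security orchestration, automation
# and response): each alert type maps to an ordered playbook of response
# actions. The alert types and actions here are hypothetical examples.
PLAYBOOK = {
    "malware_detected":  ["isolate_host", "collect_forensics", "notify_analyst"],
    "brute_force_login": ["lock_account", "block_source_ip"],
    "data_exfiltration": ["block_source_ip", "isolate_host", "escalate"],
}

def respond(alert):
    """Return the ordered response actions for an incoming alert."""
    # Unknown alert types fall back to a human analyst.
    return PLAYBOOK.get(alert["type"], ["notify_analyst"])

alert = {"type": "brute_force_login", "source_ip": "203.0.113.7"}
print(respond(alert))  # ['lock_account', 'block_source_ip']
```

Real SOAR platforms add case management, integrations with firewalls and endpoint tools, and human approval steps, but the core loop, alert in, playbook matched, actions executed, is the same.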
Figure 2: The role of machine learning in AI security (Image source: NXP)
Now that we’ve added AI to our arsenal of weapons for security defense, our next step is to explore a systematic security solution. For example, in order to counteract the possible security risks in each link of the AI model pipeline and give corresponding defense recommendations, Tencent has released an “AI Security Attack Matrix.”
The significance of this AI security attack-and-defense matrix lies in its coverage of the entire AI product life cycle: from the establishment of the environment before model development, through the training and deployment of the model, to the subsequent use and maintenance of the AI product. It lists almost all the foreseeable security issues that may be encountered in the process and offers corresponding coping strategies. Using this “matrix,” developers can troubleshoot potential security issues based on the specifics of their AI deployment and operations, and apply the recommended defense options to reduce known risks. Although the matrix is undoubtedly a work in progress, it’s a good starting point.
Figure 3: "AI Security Attack Matrix" released by Tencent (Source: QbitAI)
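Such a matrix is, at heart, structured data: life-cycle stages mapped to the threats to check at each stage. The sketch below illustrates that shape; the stage names and threat entries are illustrative examples, not Tencent's actual matrix contents.

```python
# A minimal sketch of an AI attack-and-defense matrix held as data:
# life-cycle stages mapped to known threat categories. The entries are
# illustrative examples, not the contents of Tencent's actual matrix.
AI_ATTACK_MATRIX = {
    "environment_setup":   ["dependency hijacking", "compromised toolchain"],
    "data_collection":     ["data poisoning", "label flipping"],
    "model_training":      ["backdoor implantation", "hyperparameter tampering"],
    "model_deployment":    ["model theft", "adversarial examples"],
    "use_and_maintenance": ["model inversion", "membership inference"],
}

def threats_for(stage):
    """Look up the threats a developer should check at a given stage."""
    return AI_ATTACK_MATRIX.get(stage, [])

print(threats_for("model_training"))
```

Keeping the matrix machine-readable like this lets teams turn each threat into a checklist item or an automated test in their deployment pipeline.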
Efforts beyond technology
As the impact of AI on human society is complex and profound, AI security principles must extend beyond technology. Specifically, in addition to technology, we must take action in at least two areas.
- The first is to build a security line of defense from a legal perspective by clarifying the "red line" of AI security through legislation, thus ensuring hackers and AI abusers are appropriately penalized and deterred.
- The second is to establish industry-recognized codes of conduct that, through corporate self-monitoring and other means, offer coverage for gray areas where legislation is unable to keep pace with technological development, thereby maximizing user safety while simultaneously building a healthier ecosystem for the rapid development of AI.
Before wrapping up, we would like to share some key facts and figures. According to IDC research data, global cybersecurity spending in 2019 increased by about 9.4% compared with 2018, reaching US$106.63 billion. At the same time, however, the total cost of cybercrime over the same period was predicted to exceed US$2 trillion, meaning cybercrime cost roughly 20 times as much as was spent on security.
This is the security environment in which we now find ourselves, and the emergence of AI will make things even more complicated. When it comes to security, we must allow no compromises, because once lines have been crossed, we stand to lose much more than just our “faces.” We can only hope that the dreadful vision of the “worst case scenarios” will drive further, much-needed advances in AI security.

