Artificial intelligence (AI) has been promoted as the best defence against cyber threats in 2023, but security experts warn that the protector can also be turned into an attacker. This prompted witnesses at a recent United States Senate hearing to call for AI regulation, a call that is likely to flow on to Australia and New Zealand. As businesses, consumers and government agencies look for ways to take advantage of AI tools and AI-driven threat prevention, experts warn that regulations addressing the challenges facing the technology are needed now, not later.
Cyber risk management is becoming more forward-looking and predictive, moving from a typically reactive activity focused on risks and loss events that have already occurred towards the adoption of advanced analytics and AI technologies. Predictive risk intelligence uses analytics and AI to provide advance notice of emerging risks, increase awareness of external threats and improve an organisation's understanding of its risk exposure and potential losses.
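To make the idea concrete, the minimal sketch below shows one way a predictive risk signal could be generated from activity data using anomaly detection. The feature names, threshold and use of scikit-learn's IsolationForest are illustrative assumptions only, not a description of any particular product or method cited in this article.

```python
# Illustrative only: a minimal predictive risk signal using anomaly detection.
# Feature names and values are hypothetical; IsolationForest is just one of
# many models an organisation might use for this purpose.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [logins_per_hour, failed_login_ratio, bytes_out_mb]
historical_sessions = np.array([
    [3, 0.05, 12.0],
    [4, 0.00, 9.5],
    [2, 0.10, 15.2],
    [5, 0.02, 11.1],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(historical_sessions)

# A new session with an unusually high failure ratio and outbound data volume.
new_session = np.array([[40, 0.90, 850.0]])
score = model.decision_function(new_session)[0]   # lower score = more anomalous
flagged = model.predict(new_session)[0] == -1      # -1 means "anomaly"

print(f"risk score: {score:.3f}, flagged for review: {flagged}")
```

In practice such scores would feed a risk dashboard and human review rather than trigger automatic action.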
Cyber threat actors are turning the same technology against defenders, using AI to learn how signature-based systems work and to develop ways around them. Attackers have been observed using AI tools to constantly mutate their malware signatures, enabling them to evade detection and spawn large volumes of malware to increase the power of their attacks. Using AI, malicious actors can launch new attacks built on an analysis of an organisation's vulnerabilities gathered by spyware before it is detected. Manipulating an AI system can also be simple with the right tools: AI systems are only as good as the data sets used to train them, and small, subtle changes to that data can slowly steer a model in the wrong direction.
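The brittleness of exact-match signatures is easy to demonstrate. The simplified sketch below (a toy illustration, not a real detection engine, with an entirely hypothetical "signature database") compares file hashes against known signatures: changing a single character of a payload produces a completely different hash, so a trivially mutated variant slips past the match.

```python
# Simplified illustration of why exact-match signatures fail against mutated payloads.
# The "signature database" and payloads here are purely hypothetical.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

known_signatures = {sha256(b"MALWARE-SAMPLE-v1")}   # hashes of previously seen samples

original = b"MALWARE-SAMPLE-v1"
mutated = b"MALWARE-SAMPLE-v2"                       # one character changed

for name, payload in [("original", original), ("mutated", mutated)]:
    detected = sha256(payload) in known_signatures
    print(f"{name}: hash={sha256(payload)[:12]}..., detected={detected}")

# The original is caught; the trivially mutated copy is not, which is why
# behaviour-based and anomaly-based detection are needed alongside signatures.
```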
Further, modifying input data can lead to system malfunction and expose new vulnerabilities. Cybercriminals can use AI to scope and identify vulnerable applications, devices and networks and to mount social engineering attacks. AI can also spot behaviour patterns and weaknesses at a personal level, making it easier for hackers to find opportunities to access sensitive data.
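As a toy illustration of how "small, subtle changes" to training data can steer a model, the sketch below flips a small slice of labels near the decision boundary of a simple classifier and shows its verdict on a borderline input changing. The data, the single "suspiciousness" feature and the labels are entirely synthetic assumptions; real poisoning attacks are more sophisticated, but the principle is the same.

```python
# Toy illustration of training-data poisoning: relabelling a few borderline
# samples shifts a simple classifier's decision. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One hypothetical feature (a "suspiciousness" score); label 1 = malicious.
X = rng.uniform(0, 10, size=(200, 1))
y = (X[:, 0] > 5).astype(int)

clean = LogisticRegression().fit(X, y)

# Poison: relabel malicious samples just above the boundary as benign.
y_poisoned = y.copy()
near_boundary = np.where((X[:, 0] > 5) & (X[:, 0] < 6))[0]
y_poisoned[near_boundary] = 0

poisoned = LogisticRegression().fit(X, y_poisoned)

borderline = np.array([[5.5]])
print("clean model says malicious:   ", bool(clean.predict(borderline)[0]))
print("poisoned model says malicious:", bool(poisoned.predict(borderline)[0]))
```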
Everything cybercriminals do on social media platforms, in emails and even over phone calls can be made more effective with the help of AI. For example, deepfake content posted on social media can propagate disinformation, encourage users to click phishing links and lead them down rabbit holes that compromise their individual security.
Spam and phishing campaigns now use AI to craft sophisticated emails that are all but indistinguishable from the real thing. As these AI-driven techniques have matured, services such as Microsoft Azure and T-Mobile have come under constant attack, with several successes for threat actors.
Cyber threats are increasing through ransomware attacks, commodity malware and heightened dark-web enablement. Interpol reports that the projected worldwide financial loss to cybercrime for 2021 was US$6 trillion, twice the 2015 figure, with damages set to cost the global economy US$10.5 trillion annually by 2025.
Globally, leading tech experts report that 60% of intrusions incorporate data extortion, with an average of 12 days of operational downtime due to ransomware. These concerns reached the European Union in April 2021, prompting its first published proposal for a regulation on artificial intelligence.
On 9 March 2023, the US Chamber of Commerce also called for regulation of AI technology to ensure it does not hurt growth or become a national security risk. New Zealand has heard the call and in February 2023 began work on regulating AI by incorporating AI rules into existing legislative frameworks.
However, most existing laws focus on privacy, data collection, data protection and data sharing, leaving much of the technology's development to be governed by standard business ethics, which has so far proved largely lacking. But how far should we regulate? And will regulation dampen invention? Arguably, regulation is needed to prevent mischief, but legislators' limited understanding of the technology is likely to lead to poor legislation. A risk-based approach may be the best way to balance these issues against rights, but the regulatory picture is still developing.
No matter what happens, AI is here to stay, and government officials must recognise that industry has led the development of this technology and has long attempted self-regulation. Working with companies to find reasonable protections for privacy and other concerns is paramount to maintaining trust and safety between society, government and industry.
Such collaborative efforts help ensure the best possible practices are established. Otherwise, society risks creating policies that allow unconscious bias within algorithms, loopholes in otherwise acceptable business cases that invite abuse and misuse by third-party actors, and other unforeseen negative consequences of AI technology. Such failures would erode societal trust in the technology, as well as in the institutions that are meant to serve and protect society.
Lloyd Gallagher is the managing partner at Gallagher & Co Consultants ■