AI is not a silver bullet for cybersecurity



Sunny Tan, Head of Security Business, East Asia, BT 

Artificial intelligence (AI) has been a sleeping giant, quietly infiltrating our daily lives. We might not have realised it, but AI has been around for a long, long while.

While OpenAI’s ChatGPT has recently highlighted the profound extent of AI’s permeation in our world, AI was first brought into the mainstream when Apple introduced Siri as a personal assistant in 2011. Three years later, Amazon’s announcement of Alexa further cemented the role of AI in our everyday lives.

The field continues to evolve with recent additions such as Google’s Gemini and Anthropic’s Claude, along with Microsoft’s integration of Copilot into everyday applications like Microsoft Word. These developments illustrate how AI has transitioned from a technological novelty to a crucial support system, simplifying tedious tasks and even becoming a national focus for some countries.

This proliferation of AI extends across industries, with a significant impact on cybersecurity. AI excels at processing massive amounts of data, recognising patterns, and improving both threat detection and response times at a scale that human analysts could not otherwise manage.

However, this same advantage empowers cybercriminals. Adversaries are exploiting AI technologies to enhance their malicious capabilities. This includes utilising GenAI to launch deepfake scams, such as the one that cost a multinational firm in Hong Kong US$34 million, and using large language models (LLMs) to craft more sophisticated cyberattacks in the form of malware, ransomware, and even misinformation campaigns. These AI tools also assist in identifying vulnerable targets, evading detection systems, and poisoning data sources to compromise the integrity of the LLMs themselves.

While AI offers tremendous promise in the cybersecurity landscape, we must acknowledge that it does not come without perils.

Why AI is important for cybersecurity  

Industry forecasts predict a significant rise in the global AI cybersecurity market. Valued at US$20.19 billion in 2023, it’s expected to grow at a compound annual growth rate (CAGR) of 24.2 percent. This trend reflects not only the increasing integration of AI with traditional security tools, but also a growing reliance on AI to combat cybercrime, which caused an estimated US$12 trillion in damages in 2023.  
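To put that growth rate in concrete terms, here is a minimal sketch of how a CAGR projection compounds, using the 2023 base value and 24.2 percent rate cited above; the five-year horizon is an arbitrary assumption for illustration only.

```python
# A minimal sketch of compound annual growth rate (CAGR) arithmetic.
# The 2023 base value and 24.2% rate come from the article; the
# five-year projection horizon is an illustrative assumption.

base_value_usd_bn = 20.19  # global AI cybersecurity market, 2023
cagr = 0.242               # 24.2 percent per year

for years_out in range(1, 6):
    projected = base_value_usd_bn * (1 + cagr) ** years_out
    print(f"{2023 + years_out}: ~US${projected:,.2f} billion")
```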

Why is AI so important in cybersecurity? AI, particularly machine learning and deep learning models, excels at pattern recognition in vast amounts of data. This allows AI to identify attack precursors that security analysts might miss. Beyond reducing the risk of human error, such as misconfigurations or data leaks, AI technologies facilitate early threat detection and anomaly identification. This enables proactive threat hunting – preventing breaches and allowing analysts to respond in less than 60 seconds.  
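As a concrete illustration of that pattern-recognition idea, the sketch below flags outliers in synthetic network telemetry with an isolation forest. The feature names, the synthetic data, and the choice of scikit-learn are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of ML-based anomaly detection on network telemetry.
# Features and data distributions are hypothetical, chosen only to show
# how an outlier detector surfaces attack-like traffic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: bytes sent, session duration (s),
# failed-login count. Normal traffic clusters; attacks sit in the tails.
normal = rng.normal(loc=[5_000, 30, 0.1], scale=[1_500, 10, 0.3], size=(1_000, 3))
attack = rng.normal(loc=[80_000, 2, 8.0], scale=[10_000, 1, 2.0], size=(10, 3))
traffic = np.vstack([normal, attack])

# An isolation forest flags points that are easy to isolate, i.e. outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
flags = model.predict(traffic)  # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(flags == -1)} of {len(traffic)} connections as anomalous")
```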

Ultimately, AI frees up analysts’ time for strategic tasks, from investigating high-priority threats to developing incident response plans and security policies. These are just a few examples – the potential applications of AI in cybersecurity are vast.  

As with most things, however, greater power comes with greater responsibility.  

The risks of using AI in cybersecurity  

The rapid development of AI tools introduces new challenges to cybersecurity. 

While artificial intelligence technology offers significant benefits, it can be vulnerable to data poisoning, leading to false positives, false negatives, and algorithmic bias. This can result in missed threats and compromised security, potentially opening the door to sophisticated attacks like deepfakes, cloud jacking, and network exploits.
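To make the data-poisoning risk tangible, here is a minimal sketch of a label-flipping attack on a toy detector: mislabelling a growing fraction of "malicious" training samples as "benign" steadily blinds the model. The two-cluster dataset, the logistic-regression model, and the flip fractions are all illustrative assumptions.

```python
# A minimal sketch of label-flipping data poisoning on a toy detector.
# Dataset, model, and flip fractions are assumptions for illustration,
# not a real attack or a real security product.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: benign traffic clusters near (0, 0), malicious near (3, 3).
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def detection_rate(flip_fraction: float) -> float:
    """Flip a fraction of 'malicious' training labels to 'benign', then
    measure how many true attacks the poisoned detector still catches."""
    y_poisoned = y_tr.copy()
    malicious = np.flatnonzero(y_poisoned == 1)
    flipped = rng.choice(malicious, size=int(flip_fraction * len(malicious)),
                         replace=False)
    y_poisoned[flipped] = 0
    model = LogisticRegression().fit(X_tr, y_poisoned)
    return model.predict(X_te[y_te == 1]).mean()

for frac in (0.0, 0.3, 0.6):
    print(f"{frac:.0%} of malicious labels flipped -> "
          f"detection rate {detection_rate(frac):.2f}")
```

Detection holds up under light poisoning but collapses once flipped labels outnumber clean ones in the attack region, which is exactly the failure mode a well-resourced adversary aims for.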

Industry reports indicate that security analysts are already encountering these issues and may lack the resources to manage them effectively. What’s worse, with AI tools, cybercrime has the potential to attack not just our digital infrastructure, but also our physical infrastructure, causing systems to burn out or even explode.

Therein lies our great responsibility.  

AI’s safety ultimately lies in our hands 

AI-based tools and solutions are only as good as the data they are trained on. Without proper governance, these tools could inadvertently expose confidential information. Hence, the onus is on security analysts – as well as organisations – to step up and ensure that AI frameworks and strategies are reliable, accurate, and efficient in addressing security vulnerabilities. (A good reference point is the National Institute of Standards and Technology’s AI Risk Management Framework.) AI is not a silver bullet for cybersecurity, no matter how complicated the security event.

To that end, it is crucial that we combine human intelligence and artificial intelligence to build a robust and effective defence. While AI excels in speed and scalability, for instance, it lacks the human ability to understand context. Humans can consider factors like attacker motivations, industry trends, and historical data to make informed decisions.

Additionally, cybersecurity decisions often have ethical implications. Humans are wired to consider these ethical nuances and make choices that align with organisational values, something AI may not be programmed to do.  

AI represents a powerful tool for the cybersecurity landscape. But it is far from the be-all and end-all.

It is therefore our duty to mitigate risks from AI technology, which requires involving humans in final decision-making and establishing responsible tech principles, guardrails, and governance.  

Only by combining the power of AI with human expertise can we truly secure our digital future.  

Sunny Tan is the Head of Security Business, East Asia, BT 
 


