AI can be a double-edged sword in cybersecurity. While businesses can leverage the latest AI-based tools to better detect threats and protect their systems and data resources, large datasets and interconnected systems create attractive targets for cybercriminals aiming to exploit vulnerabilities through AI algorithms, manipulate data inputs to deceive our IT systems, or launch sophisticated attacks at speed and scale.
This trend is projected to intensify through 2025, posing significant resilience challenges. Despite AI’s potential to enhance cybersecurity defences, businesses must also be vigilant about the cybersecurity implications.
Consequently, defending against AI-driven threats demands robust strategies that encompass AI-specific risks, continuous monitoring, adaptive defences, and a proactive approach to mitigating emerging threats.
To find out if AI has made the threat landscape for businesses safer or more dangerous to cyber attacks, and how we can better safeguard against AI-enabled cyberattacks, iTNews Asia speaks to industry practitioners to hear their views:
Matthew Swinbourne, CTO Cloud Architecture, NetApp Asia Pacific;
Andrew Lim, Managing Director, Kyndryl ASEAN;
Robert Pizzari, Vice President, Strategic Advisory, APAC, Splunk;
Sunny Nehra, ethical hacker and founder of CyberSecurity firm, Secure Your Hacks;
Jitendra Tripathi, VP & Head Cyber Security Operations, JIO Platforms Ltd;
Sakshi Grover Senior Research Manager, Cybersecurity, at IDC Asia/Pacific; and
Feng Gao, Senior Director and Analyst, Research and Advisory, Gartner
iTNews Asia: Do you agree that the economics of cyber-attacks make it easier and cheaper today to launch attacks than to build effective defences, and why ?
Swinbourne (NetApp): The cyber threat landscape has evolved to the point where threat actors do not need to have much expertise themselves, but can employ cybercriminals or leverage ransomware-as-a-service (RaaS) to carry out attacks. The barriers to executing a sophisticated attack have indeed been lowered over the past few years.
Tripathi (JIO): The open source revolution has made it so much easier for the hackers to get access to versatile open source tools and software to create sophisticated hacks at very little or no expense. At the same time, the adoption of new technology will be time consuming and costlier due to organisational processes involved, leaving them vulnerable during the intervening period.
While businesses need to deploy comprehensive solutions for all round defence to be effective, a cybercriminal needs just one exploitable attack vector to breach the defences.
Pizzari (Splunk): Yes, the economics of cyber-attacks tend to favour attackers, particularly in Asia where the digital economy is rapidly growing. Launching a cyber-attack often requires less investment in terms of time, resources, and cost compared to building robust defences. However, this should not be taken as an indication that defending against cyber threats is impossible.
Lim (Kyndryl): I agree, it’s surprising that many organisations are not prepared to defend against these attacks.
Rather than having well-thought-out plans in place, C-suite leaders often scramble to develop business continuity strategies during or after an attack, which may be too late. A successful attack can force even the most secure organisations to negotiate with ransomware threat actors.
-Andrew Lim, Managing Director, Kyndryl ASEAN
Beyond economics, organisations should adopt a mindset of cyber resilience—the ability to anticipate, protect against, withstand, and recover from the adverse conditions, stresses, attacks, and compromises of cyber-enabled businesses.
Nehra (Secure Your Hacks): I agree and this is because the cost of launching a cyber-attack has decreased significantly due to the availability of tools and techniques online, as well as increasing vulnerabilities that can be exploited. Building effective defenses requires significant investment in technology, training, and personnel can be expensive and time-consuming.
Grover (IDC): It is true –cybercriminals can acquire sophisticated malware and ransomware kits on the dark web at low costs, and with RaaS, even non-experts can deploy these attacks easily. This enables them to exploit vulnerabilities in critical infrastructure and supply chains with minimal financial outlay.
In contrast, investing heavily in technology, skilled personnel, and continuous updates to maintain effective defences, creates a significant economic imbalance.
Gao (Gartner): I partially agree with this statement. AI reduces the technical barrier for launching attacks, but the economics of attacks is related to the effectiveness of defense.
Currently, many organisations lack robust defenses, making low-cost attacks potentially successful.
iTNews Asia: Can you share the top most challenges in detecting and mitigating AI-generated attacks, and how can organisations overcome them?
Grover (IDC): Organisations must invest in advanced security solutions, integrate AI within Security Operations Centres (SOCs), and utilise advanced Security Information and Event Management (SIEM) systems.
Implementing User and Entity Behaviour Analytics (UEBA) can detect unusual patterns that indicate potential threats, while advanced threat hunting techniques proactively identify sophisticated attack vectors.
Strengthening identity management for both human users and machines is crucial. SOC resources are often burdened with handling attacks, risks, and false positives; thus, investing in technology that eases their workload and frees up bandwidth for focused investigations is essential.
Additionally, building AI models responsibly with a focus on security and ethics is vital.
Lim (Kyndryl): Challenges include data leakage or inadvertently exposing private enterprise data to the large language models (LLMs), issues relating to regulatory limits such as traceability, the accuracy of the outputs generated, and accountability in terms of human involvement or feedback to the process and lack of sufficient talent to sustainably drive enhanced security operations.
Organisations can strengthen their resilience encompassing all aspects of IT risk, including cybersecurity, business continuity, disaster recovery, compliance, and more. This broader approach enables leaders to ensure that the business can continue to operate and deliver critical functions without the threat of downtime, breaches, or fines.
This involves ensuring seamless integration of data from various sources to form the fundamental elements and structures to inform business decision-making. Whether you are deploying AI or mitigating AI-enabled cyberattacks, data excellence will no longer be an optional checkbox; but a strategic necessity.
Pizzari (Splunk): Cybercriminals are using AI to create malware that’s more complex and stealthy. Then, there’s the issue of volume and speed, evasion techniques that are designed to dodge detection. Human error is the number one offender and the toughest to detect and remediate.
One way organisations can fight back is through penetration testing and threat hunting. Leveraging the right tools is important to maintain end-to-end visibility across their networks to detect and respond to threats in real-time.
– Robert Pizzari, Vice President, Strategic Advisory, APAC, Splunk
Nehra (Secure Your Hacks): With AI-generated malware on the go and organisations lacking technology and personnel resources, investing in machine learning and AI tools and improving overall cybersecurity posture through regular updates and training becomes essential.
Gao (Gartner): Organisations should first conduct security assessments to identify gaps in preventing AI-enhanced cyber attacks, build up skills and evolve their cyber defence strategy by adopting new sets of security capabilities or tools.
Swinbourne (NetApp): Evasion Techniques – AI-generated malware can evade traditional detection methods by mimicking legitimate behaviours, Adversarial Attacks – AI generates adaptive malware that can learn and respond to defensive measures in real-time, and Scale and Speed of AI-Generated Attacks – AI-generated malware can launch attacks at a scale and speed that overwhelm traditional security measures are top most challenges.
Organisations need to deploy the latest AI and ML-powered cyber defence solutions that can perform real-time threat detection to counter the fast-evolving nature of AI-generated attacks.
For instance, recovering from a ransomware/cryptolocker type of attack could take months to recover the unencrypted data from backups. Whereas stopping an attack before it’s able to propagate would prevent the bulk of data from becoming encrypted, identify the source of the attack to recover the data that may have been encrypted.
Tripathi (JIO): Attackers are able to create targeted phishing emails with the right context using Generative AI to the extent that it becomes humanly impossible for the victims to differentiate between genuine and malicious phishing emails. There is a need to employ AI based systems and continuously train them to detect such attacks.
iTNews Asia: What changes are needed in cyber defense strategies to stay secure?
Nehra (Secure Your Hacks): Human behaviour is also a critical factor in cybersecurity, and organisations need to focus on educating and training their employees to identify and avoid potential threats.
Leverage AI and machine learning technologies and Zero-trust networks that assume that all users and devices are untrusted and verify before granting access to sensitive data and systems.
Sunny Nehra, ethical hacker and founder of CyberSecurity firm, Secure Your Hacks
Tripathi (JIO): The complete environment should be assessed and use-cases identified for adoption of AI systems and SOAR (security orchestration, automation and response) technologies to detect previously unknown threats and for automated response.
Gao (Gartner): Many organisations adopt AI by purchasing services like package software and API (application programming interface). Not all of them are aware of security issues and most of them are not doing enough to ensure AI is used safely.
Using AI securely does not simply mean purchasing a security tool, this requires a set of security capabilities/tools with cross-team collaboration and companies must carefully evaluate before adopting any new technology.
Pizzari (Splunk): To combat these sophisticated threats, companies need to invest heavily in both talent and tools. It’s worth noting that resilient leaders rebound from downtime faster. These leaders are also embracing the benefits of AI by embedding generative AI features in their tools at a rate 4x faster than others.
Companies also need to rethink their strategies and place a greater emphasis on collaboration and tool consolidation. Digitally resilient organisations are breaking down silos and increasing collaboration amongst engineering, security, and particularly with IT operations.
Lim (Kyndryl): When businesses look towards engaging with AI, they should keep three key strategies in mind to de-risk the business systemically. Look out for emerging AI standards for guidance, pay attention to the source and integrity of the data and begin the AI journey with a use case and scale up through the cyber defense strategy.
Grover (IDC):
Before enterprises explore any GenAI implementation, they must establish a responsible AI policy, build an AI strategy and roadmap, design an intelligence architecture, and map the skills required for success.
-Sakshi Grover Senior Research Manager, Cybersecurity, at IDC Asia/Pacific
They must secure all control points, assess how much control of data is provided to the AI model, look at their partnerships, and ensure third-party partnerships are secure.
Swinbourne (NetApp): Organisations can deploy technologies capable of detecting changes in the “normal pattern” of data access, in combination with zero-trust architecture. They would need a last line of defense that must include tamper-proof copies of active datasets with WORM (write once, read many) like capabilities for long term data retention.
Organisations must also be ready to provide recovery points that must also be high performance to ensure quick recovery.
It’s important that these defenses also leave audit trails that enable investigators to determine the root cause of an attack. This will allow them to use this information to further enhance both the machine-driven defenses and human education.
iTNews Asia: How can AI be integrated with traditional cybersecurity tools?
Pizzari (Splunk): AI can be seamlessly integrated with traditional cybersecurity tools, enhancing their effectiveness in two key ways – improving detection and response accuracy, and automating routine tasks.
For instance, IDC Frontier Inc. (IDCF), a company with extensive data centres in Japan, faced challenges handling a growing volume of logs due to expanding cloud services. Their outdated monitoring system hindered their response to issues.
IDCF automated alert management, improving response times by over 80 per cent. This integration enabled them to consolidate and analyse alerts from various systems, enhancing their initial response to failures. They could quickly identify affected systems, notify customers, and mobilise relevant teams.
Additionally, machine learning capabilities streamlined alert resolution, allowing predictive incident detection and resolution from a unified platform. This automation extended to tailored dashboards accessible to up to 100 employees, providing real-time insights.
Swinbourne (NetApp): Traditional tools still play an important role in a multi-layered cyber defence strategy. Organisations should consider solutions that are able to integrate the latest AI/ML-powered cyber threat detection with other cybersecurity tools under a unified, intelligent data infrastructure to offer the most comprehensive defense.
Where possible, organisations should have a control plane to intelligently coordinate, update, monitor including workload-centric, policy-driven ransomware defence at the storage layer to protect the most business-critical data.
– Matthew Swinbourne, CTO Cloud Architecture, NetApp Asia Pacific
Grover (IDC): Organisations face a significant challenge in integrating AI with their current security setups. By incorporating AI into SIEM (security information and event management), companies can automate the analysis of security logs, speeding up incident detection and response.
AI excels at identifying patterns and anomalies that traditional tools may overlook, enhancing threat intelligence. Integrating AI with identity management systems improves authentication and secures access for both humans and machines. Embedding AI in existing security monitoring enhances investigation and response capabilities, managing risks more effectively across diverse cloud environments. This comprehensive approach centralises alert management and data handling, strengthening overall security posture.
Nehra (Secure Your Hacks): Al can be used with traditional tools such as antivirus protection, data-loss prevention, fraud detection, identity and access management, intrusion detection, risk management, and other security areas to enhance their capabilities and improve their effectiveness.
For example, Al can be used to analyse vast real-time data to detect and respond to cyber threats swiftly, enhancing detection against attackers. It also automates and enhances traditional security tools, easing the workload on security teams.
Tripathi (JIO):
The traditional tools in any environment generate their own data. These precious datasets could be lying in silos. Using AI and Machine Learning systems on these data sets can provide great insight into the security posture of the organisation.
– Jitendra Tripathi, VP & Head Cyber Security Operations, JIO Platforms Ltd;
I recommend remedial measures to improve the overall security posture and help in proactive detection and quick response.
Lim (Kyndryl): We anticipate and advise companies to integrate AI into cybersecurity by step-by-step implementation to mitigate risks. A robust data strategy and governance framework are essential foundations for effective AI deployment across all company operations, including legacy systems like mainframes. Rushing adoption could overwhelm IT systems and complicate the AI transformation process significantly.
iTNews Asia: How can machine learning models be trained to better recognise and respond to AI-driven cyber threats without generating false positives?
Lim (Kyndryl): Having a well-defined data strategy is crucial to mitigate costly false positives.The rigidity of algorithms that lack comprehensive data access contributes significantly to these errors.
Generative AI tools can be promising through continuously learning and evolving from past interactions, provided it is trained on accurate data. This iterative process requires precision and time but offers substantial returns on investment. Continuous refinement of tools, filter upgrades, and learning from errors are essential for IT teams. Embracing new tools and adapting to evolving threats with a robust LLMOps framework is necessary for sustained security effectiveness.
Pizzari (Splunk): SOC teams need advanced AI-enabled tools to effectively detect and respond to AI-driven cyber threats. Using diverse and continuously updated datasets, machine learning models can be trained for more precise threat detection, minimising false positives and alarm fatigue.
We have designed AI to facilitate users in performing their tasks more efficiently and easily – AI now aids in creating customer filters, rules, and alert definitions based on natural language requirements. Faster, better coding of these essential components is critical for robust infrastructure protection.
Swinbourne (NetApp): Latest AI ransomware detection solutions are able to automatically detect and respond to file system anomalies in real-time. AI and ML solutions can trigger alerts, disable potentially malicious user accounts as well as take automatic recovery snapshots that are crucial for timely data recovery.
Using AI-driven detection algorithms means fewer false positives and false negatives, resulting in less alert fatigue while simultaneously providing the additional security of their data estate. This is because the models are trained on the actual utilisation patterns of that organisation rather than a legacy “watermark” type of detection that would only alert when activity breached an arbitrary threshold.
iTNews Asia: How can AI help identify weaknesses and vulnerabilities before hackers exploit them?
Lim (Kyndryl): AI-supported cybersecurity solutions pre-emptively identify potential threats as well as vulnerabilities within the system. These systems, if backed by machine learning models, can continuously investigate weaknesses within the system, and even suggest solutions for them.
Despite investments in high availability and data replication, businesses often face significant challenges in recovering from cyberattacks.
This is because attacks frequently compromise all data backups, whether stored on-premises, in the cloud, or on SaaS platforms, making them ineffective for restoration purposes. Therefore, proactive planning and careful consideration of data management practices are crucial for resilience against cyber threats.
iTNews Asia: What are the ethical concerns, especially regarding privacy and biases?
Gao (Gartner):
Ethics must be considered from the beginning of an AI project and throughout the whole life cycle.
– Feng Gao, Senior Director and Analyst,
Research and Advisory, Gartner
Some examples to ensure ethics include ensuring the quality of training data, using privacy-enhancing computing technologies to avoid privacy leakage during training; deploying guardrails to filter and monitor interactions between users and AI; and adopting content anomaly detection products that mitigate input and output risks to enforce acceptable use policy and prevent unwanted or illegitimate response.
Nehra (Secure Your Hacks): Al systems collect and store large amounts of personal data, which could be used for nefarious purposes if it falls into the wrong hands. Additionally, Al algorithms can perpetuate and even amplify existing biases, leading to unfair and discriminatory outcomes.
Tripathi (JIO): Privacy is a major concern and any biases can put the businesses at serious risk. The terms and conditions for data collection must be clearly articulated and declared in unambiguous terms. It should be ensured that the data used for training the AI systems is clean and the systems are trained in a controlled and secure environment.
Grover (IDC): Trusted AI systems must focus on responsible AI principles, including accuracy, confidence, reliability, and performance. Enterprises must address AI bias, ethical data collection, consent management, data source misattribution, misinformation, and intellectual property and legal issues.
Organisations must ensure compliance with data protection regulations, such as those in the Asia-Pacific region, which are increasingly focusing on responsible AI deployment. Cultural considerations also play a significant role, as different regions may have varying perceptions and acceptance levels of AI technologies. Ensuring transparency and explainability in AI systems is crucial for building trust and accountability, allowing stakeholders to understand how AI decisions are made.