If I Had Been A Hacker, I Would Have Loved AI
No, I am not a hacker, at least not in the negative sense. In this update, I explain why attackers are loving AI.
The cybersecurity landscape is experiencing a significant shift, with artificial intelligence (AI) playing a dual role as both a protector and a potential threat. As organizations worldwide strive to reinforce their defenses against cyberattacks, the evolution and hype of AI and large language models (LLMs) have introduced an additional dimension to the ongoing battle between cybersecurity experts and attackers.
On the one hand, AI offers unprecedented capabilities in detecting, preventing, and responding to cyber threats with an efficiency and speed previously unattainable by human operators alone. On the other hand, the same technology presents a formidable tool for attackers, providing them with sophisticated means to enhance their malicious activities.
Hackers and cybercriminals are increasingly leveraging AI and LLMs to automate and refine their strategies, making cyberattacks more complex, targeted, and difficult to detect. These adversaries use AI to carry out a wide range of malicious activities, from identifying system vulnerabilities to crafting deceptive phishing emails.
AI can assist attackers in analyzing large datasets to uncover patterns and weaknesses, generating malicious code, and even automating the spread of malware or ransomware across networks with minimal human intervention. The adaptability of AI allows cyberthreat actors to evolve their tactics rapidly, staying one step ahead of traditional security measures.
There is one certainty.
If you are an attacker, AI is cool. Yes, it is.
This shift represents an immediate challenge for organizations of every size and sector. Adoption and understanding of new technologies are no longer optional; they have become an absolute necessity to ensure security and resilience.
Reality, not a scam
This is the current situation in the world of cybersecurity, and it is an inevitable reality for organizations. It is important to understand that there is no evading the progress of technology, particularly artificial intelligence (AI), which is revolutionizing the techniques used in cyberattacks.
The ability to adapt and defend against such threats is crucial. Organizations that do not equip themselves to face this reality, remaining undecided on how to react, will already be at a disadvantage when the attack takes place; it is happening, period.
The acceleration of attack launch and adaptation, facilitated by AI, has turned time into a critical factor. Attackers can now methodically examine and optimize their strategies with an unprecedented level of automation, substantially reducing the time required to prepare and execute an attack. This poses a substantial challenge for conventional defense mechanisms, which must evolve to keep pace with these advancements.
It is important to emphasize that the issue goes beyond the mere technological confrontation between attackers and defenders. Social engineering, the psychological manipulation of individuals to induce them to perform unsafe actions, is a crucial element of this new era of cyberattacks. Artificial intelligence can amplify the effectiveness of such tactics, making it even more important for organizations to understand and mitigate the associated risks.
Attackers love Social Engineering, and they love it because it works.
The scammer advantage
A movie frame: the “fake” call
The movie "Manhattan Murder Mystery" directed by Woody Allen in 1993 features a group of friends who come up with a brilliant plan to capture a murderer. The method used reflects the technological limitations of the time: the protagonists collect various snippets of a woman's voice, an accomplice of the murderer, through a fake movie audition. They then manually edit these audio fragments, through a mechanical process of cutting and splicing film, to recreate specific sentences to be used in a trap call with the murderer. In 1993, audio editing was intricate and time-consuming.
Today, thanks to technological evolution and artificial intelligence, replicating such a scheme would be much simpler. It would no longer be necessary to physically collect many voice fragments and assemble them by hand. With modern digital editing techniques, anyone can achieve highly convincing results with far less effort, using consumer-grade software on a personal computer.
The role of artificial intelligence in this context is even more relevant. By using vocal deepfakes, a person's voice can be sampled to train AI models that can then generate highly accurate vocal content resembling the original voice. The tone, cadence, and other vocal characteristics of this content can be refined so much that the result becomes nearly indistinguishable from an actual recording or live speaking (at least technically).
If the aim is to breach an organization's security or exfiltrate sensitive data, the initial strategy often involves impersonating an individual within the organization: obtaining and using their access credentials to gain entry into the system, then freely exploring the infrastructure to achieve the desired goal.
If you think that a simple phishing email attack is not effective enough, a vocal deepfake could be a more sophisticated tool. Back in 1993, at the time of the making of the Woody Allen film, the attacker would have needed specific technical skills, manual dexterity, and access to certain devices to create something similar. Today, however, all the attacker needs is an internet connection. So easy.
Humans are the scammer's advantage: they are fallible, prone to falling for traps, and have an innate tendency to trust others. This often happens even with people they don't know, so imagine what could happen with a familiar, friendly voice asking for sensitive information. The perceived familiarity and trustworthiness of a known voice can break down a person's defenses, making them more susceptible to handing over data or complying with requests they would otherwise refuse.
The scammer disadvantage
While it is undeniable that scammers possess the advantage of exploiting human weaknesses and employing sophisticated social engineering tactics, this is only one side of the cybersecurity equation. Organizations are not defenseless spectators in this digital arena. A critical countermeasure lies in the proactive education and training of their workforce. By raising awareness about modern cyber threats, including the use of vocal deepfakes and AI-powered scams, organizations can significantly mitigate the risks associated with human error.
Empowering employees with the knowledge to recognize and respond to potential threats is akin to building a human firewall, one that adds a robust layer of defense against malicious actors. Arguably, the human firewall is the best firewall a defender can have.
The technological arms race is not a one-sided affair. Just as attackers harness advanced technologies to perpetrate their crimes, organizations have access to equally, if not more, sophisticated tools to defend themselves (at least, most of the time, they know what they need to defend). The deployment of AI and machine learning algorithms for anomaly detection, threat hunting, and incident response has transformed cybersecurity practices. These technologies enable real-time monitoring and analysis of vast data streams, allowing security teams to identify and neutralize threats more swiftly and accurately than ever before. Once an intruder breaches an organization's defenses, the same class of tools becomes instrumental in tracking their movements, understanding their tactics, and ultimately defeating their objectives.
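To make the defensive side a little more concrete, here is a minimal sketch of the kind of anomaly detection described above, fitting an unsupervised Isolation Forest over simulated login telemetry. The features (login hour, megabytes transferred, failed attempts) and the synthetic data are illustrative assumptions on my part, not a reference implementation; a real deployment would feed a model like this from SIEM or EDR event streams.

```python
# Minimal sketch: unsupervised anomaly detection over simulated login telemetry.
# The features and synthetic data below are illustrative assumptions, not a
# production pipeline; real deployments would consume SIEM/EDR event streams.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate "normal" behaviour: office-hours logins, modest transfers, rare failures.
normal = np.column_stack([
    rng.normal(loc=10, scale=2, size=500),   # login hour (~10 AM)
    rng.normal(loc=50, scale=15, size=500),  # MB transferred per session
    rng.poisson(lam=0.2, size=500),          # failed login attempts
])

# Simulate a handful of suspicious events: 3 AM logins, large transfers, brute force.
suspicious = np.column_stack([
    rng.normal(loc=3, scale=1, size=5),
    rng.normal(loc=900, scale=100, size=5),
    rng.poisson(lam=8, size=5),
])

events = np.vstack([normal, suspicious])

# Fit an Isolation Forest and flag outliers (-1 = anomaly, 1 = normal).
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(events)

print(f"Flagged {np.sum(labels == -1)} of {len(events)} events for analyst review.")
```

Unsupervised models like this are attractive for defenders because they do not require labeled attack data: they learn what "normal" looks like and surface the events that deviate from it for an analyst to review.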
The growing landscape of cyber threats calls for a dual approach that merges technological solutions with human vigilance. Attackers can use AI to enhance their abilities, but defenders can strategically employ similar technologies to level the playing field. Through informed education and empowerment, the human element, often seen as a vulnerability, can be transformed into a significant asset (or at least it should be).
The challenge a cybercriminal faces isn't just about technology. It should also lie in dealing with organizations that are well prepared and able to adapt quickly.
Some final thoughts
It’s a battlefield where attackers and defenders continuously adapt to outmaneuver each other.
The evolution and hype of AI and large language models have undeniably reshaped the battleground. While the potential for AI to augment the capabilities of cybercriminals presents a daunting challenge, it is crucial to recognize that this technology also offers powerful tools for defense.
The key to maintaining cybersecurity in the era of AI lies in embracing a holistic approach that combines technological innovation with human insight. By investing in advanced security technologies and fostering a culture of vigilance and continuous learning among employees, organizations can fortify their defenses.
The fundamental challenge for organizations is to transform their structure to prepare for the future. The tools to do this are available, but their effectiveness depends on the organization's ability to adapt, overcoming beliefs, fears, and inefficiencies. Unwillingness to change represents a significant risk.
For attackers, artificial intelligence represents a thrilling opportunity. It offers new attack surfaces to exploit and advanced resources to optimize attacks. Indeed, AI can amplify the chances of success for those looking to exploit gaps in security systems.
They are loving it. That’s why you should defend against it.