Introduction
According to BBC reports, several calls made to New Hampshire voters, claiming to be from President Joe Biden, urged them not to vote in the primary election. This is the kind of pervasive impact deepfakes have on collective social engineering. AI-driven deepfake attacks supercharge cybercrime: they not only spread misinformation on the internet but also threaten the security and integrity of organizations and governments, and the cultural fabric of society at a macro level.
“Deepfake, a portmanteau of ‘deep learning’ and ‘fake,’ refers to media that has been digitally altered to replace a person’s face or body with that of another.” (Built In)
While the growth of AI is infamously associated with the rise of deepfake attacks on the internet, deepfakes can also be seen as a direct evolution of the photoshopped and morphed images of the early 2000s. This article delves into the role of AI/ML in these cybercrimes and how AI/ML can be combined with cybersecurity to counter these attacks and minimize the risk of falling prey to them.
Instances of Deepfake attacks
The Observer reports that PwC has warned that “undetected deepfakes can lead to severe consequences, including financial losses, job terminations, identity theft” and more. Some instances of these deepfake attacks, as reported by BBC, are:
- “Senior British politicians have been subject to audio deepfakes as have politicians in other nations including Slovakia and Argentina.”
- “Sigurdur Arnason runs a music creation platform in Iceland. Before Christmas he was asked by the Icelandic National Broadcasting Service to create a video music skit using a deepfake of a beloved dead Icelandic comedian called Hemmi Gunn for a show that aired on New Year's Eve.”
- “A high school teacher in Baltimore, Maryland, was arrested for allegedly using AI to deepfake a bogus recording of his principal making racist comments.”
Role of AI/ML in Deepfake attacks
TechTarget states, “According to the U.S. Department of Homeland Security's ‘Increasing Threat of Deepfake Identities’ report, several AI tools are commonly used to generate deepfakes in a matter of seconds. Those tools include Deep Art Effects, Deepswap, Deep Video Portraits, FaceApp, etc.”
AI/ML play a potent role in enabling deepfake attacks at the foundational level. It is only after the advent of AI and the parallel development of machine learning technologies that cases of deepfake attacks have spiked in the last year or so. The specific aspects of these technologies used by attackers include:
- Generative Adversarial Networks (GANs): A GAN consists of two neural networks: a generator that creates synthetic media, and a discriminator that evaluates its authenticity. Working against each other, they produce increasingly convincing deepfakes, because the generator keeps improving until its output fools the discriminator.
- Deep Learning Techniques: A subset of ML, deep learning models analyze large volumes of data to learn and replicate complex patterns. In video, image, and audio manipulation especially, they enable AI to replicate a person’s facial expressions, movements, and voice.
- Autoencoders and Variational Autoencoders (VAEs): Another type of deep learning model that compresses and reconstructs data, autoencoders are used in deepfakes to map one individual’s expressions onto another, enabling seamless expression manipulation in videos.
- Data Augmentation and Synthetic Data: By artificially expanding and diversifying training datasets through techniques like rotation and scaling, AI models learn to generate more robust and varied deepfakes. Synthetic data generation also aids in creating realistic fake content even with limited real data.
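The adversarial loop behind a GAN can be sketched in toy form. This is purely illustrative, under stated assumptions: a single number stands in for "media," and random hill-climbing stands in for gradient-based training.

```python
import random

random.seed(0)  # make the toy run deterministic

# "Real media" in this toy is just numbers clustered near 10.
real_samples = [10 + random.uniform(-0.5, 0.5) for _ in range(100)]
real_mean = sum(real_samples) / len(real_samples)

def discriminator(x):
    """Scores how 'real' a sample looks: 1.0 at the real mean, lower farther away."""
    return max(0.0, 1.0 - abs(x - real_mean) / 10.0)

gen_param = 0.0  # the "generator" starts far from the real distribution
for _ in range(200):
    # Propose a tweak and keep it only if the discriminator is fooled more --
    # this feedback loop is what makes GAN forgeries improve over time.
    candidate = gen_param + random.uniform(-0.5, 0.5)
    if discriminator(candidate) > discriminator(gen_param):
        gen_param = candidate

print(abs(gen_param - real_mean) < 1.0)  # the generator now mimics real samples
```

In a real GAN both networks train simultaneously, so the discriminator also gets better at spotting fakes, which forces the generator to keep improving.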
All these technologies are automated and streamlined to create deepfakes at scale, allowing even non-experts to create and distribute them.
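The autoencoders mentioned above follow a compress-then-reconstruct pattern that can be shown with a deliberately crude, fixed "codec." This is a sketch of the idea only: a real autoencoder learns its encoder and decoder from data, whereas here pair-averaging and duplication stand in for them.

```python
def encode(signal):
    # Compress: average adjacent pairs (a stand-in for a learned encoder).
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]

def decode(code):
    # Reconstruct: duplicate each compressed value back to full length
    # (a stand-in for a learned decoder).
    return [v for v in code for _ in range(2)]

signal = [1.0, 1.2, 3.0, 2.8, 5.0, 5.2, 7.0, 6.8]
code = encode(signal)          # half the size: the compressed representation
reconstruction = decode(code)  # approximately recovers the original
error = sum(abs(a - b) for a, b in zip(signal, reconstruction)) / len(signal)
print(len(code), round(error, 2))  # 4 0.1 -- small reconstruction error
```

In a face-swap pipeline, a decoder trained on person B reconstructs faces from codes produced by an encoder shared with person A, which is how expressions transfer from one face to the other.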
Types of AI/ML-driven attacks
- AI-driven social engineering: CrowdStrike notes that these social-engineering attacks pose some of the greatest threats to cybersecurity. A typical attack will:
- Identify an ideal target, including both the overall corporate target and a person within the organization who can serve as a gateway to the IT environment.
- Develop a realistic and plausible scenario that would generate attention.
- Create multimedia assets, such as audio recordings or video footage, to engage the target.
- Adversarial AI/ML: The most common adversarial AI/ML techniques used
by attackers are:
- Poisoning attacks to inject fake/misleading information into the system to compromise the model's accuracy
- Model tampering targets the structure of a pre-trained AI/ML model, allowing unauthorized alterations that compromise its output accuracy.
- Malicious GPTs: This refers to maliciously altered versions of commercial generative AI models that produce harmful or deliberately misleading outputs.
- Financial Frauds: These attacks are the most rampant; attackers use voice phishing, or vishing, to impersonate executives, extract sensitive information from employees, or even initiate fund transfers.
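Of the adversarial techniques above, a poisoning attack is the easiest to illustrate. The sketch below is hypothetical and uses a trivially simple spam classifier (the midpoint between the two class means) to show how mislabeled training samples shift a model's decision boundary.

```python
def train_threshold(spam_scores, ham_scores):
    # Trivial "model": anything scoring above the midpoint of the two
    # class means is classified as spam.
    spam_mean = sum(spam_scores) / len(spam_scores)
    ham_mean = sum(ham_scores) / len(ham_scores)
    return (spam_mean + ham_mean) / 2

clean_spam = [0.90, 0.85, 0.95]
clean_ham = [0.10, 0.15, 0.20]
t_clean = train_threshold(clean_spam, clean_ham)

# Poisoning: the attacker injects spam-like samples mislabeled as ham,
# dragging the learned threshold upward so real spam slips through.
poisoned_ham = clean_ham + [0.90, 0.95, 0.90, 0.95]
t_poisoned = train_threshold(clean_spam, poisoned_ham)

print(t_poisoned > t_clean)  # True: the poisoned model misses borderline spam
```

Real poisoning attacks follow the same logic against far more complex models: corrupt the training data and the learned decision boundary moves in the attacker's favor.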
Combining AI/ML with cybersecurity to mitigate these risks
While advancements in AI/ML technology are the bedrock of deepfake cyber-attacks, the same technology is also the antidote that can be used to counter these attacks and minimize identity theft, privacy violations, and the spread of false information.
According to a study cited by Deloitte, 69% of enterprises believe that AI is necessary for cybersecurity, because the number of threats is growing beyond what cybersecurity analysts can handle.
Some approaches that could help are listed below:
- Deepfake Detection Algorithms: AI/ML models can be trained and continuously iterated to identify subtle inconsistencies in deepfake content, such as unnatural facial movements, irregular blinking, or mismatched audio-visual cues.
- Forensic Analysis Tools: AI/ML can be used to develop forensic tools that examine things like compression inconsistencies, lighting anomalies, or pixel-level discrepancies and determine the origin of deepfakes.
- Behavioral Biometrics: AI/ML can analyze behavioral patterns such as typing speed, mouse movements, or speech cadence to authenticate users and detect anomalies.
- Blockchain for Content Authentication: Combining AI/ML with blockchain technology can help establish the authenticity of the content and create immutable records of the same. AI can embed cryptographic signatures into media files, and any subsequent alterations would be easily detectable.
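As a toy illustration of the detection idea, a checker might flag clips whose blink frequency falls outside a plausible human range (early deepfakes notoriously blinked too rarely). The function and thresholds below are illustrative assumptions, not drawn from any real detector.

```python
def blink_rate_suspicious(blink_times, video_seconds, lo=0.1, hi=0.75):
    """Flag a clip whose blinks-per-second rate falls outside a plausible
    human range; lo and hi are illustrative thresholds."""
    rate = len(blink_times) / video_seconds
    return not (lo <= rate <= hi)

print(blink_rate_suspicious([2.0, 5.5, 9.1], 10))  # 0.3 blinks/s -> False (plausible)
print(blink_rate_suspicious([4.0], 60))            # ~0.017 blinks/s -> True (suspicious)
```

Production detectors learn such thresholds, and many more subtle cues, from large labeled datasets rather than hard-coding them.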
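In the same illustrative spirit, a pixel-level forensic check can look for regions whose local variation differs sharply from the rest of the image. The sketch below works on a 1-D "scanline" of pixel values; the window size and threshold ratio are assumed values, and real forensic tools operate on full 2-D images with learned statistics.

```python
def splice_suspect_regions(scanline, window=4, ratio=2.0):
    """Return start indices of windows whose average pixel-to-pixel change is
    far above the scanline's baseline -- a crude splice/tamper heuristic."""
    diffs = [abs(b - a) for a, b in zip(scanline, scanline[1:])]
    baseline = sum(diffs) / len(diffs)
    flagged = []
    for start in range(len(diffs) - window + 1):
        local = sum(diffs[start:start + window]) / window
        if local > ratio * baseline:
            flagged.append(start)
    return flagged

smooth = [10, 11, 10, 12, 11, 10, 11, 12, 11, 10]
spliced = smooth[:5] + [40, 8, 45, 6] + smooth[5:]  # a noisy pasted-in patch

print(splice_suspect_regions(smooth))   # [] -> nothing flagged
print(splice_suspect_regions(spliced))  # non-empty -> tampered region detected
```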
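Behavioral-biometric anomaly detection can be sketched as a simple statistical check against a user's historical profile. The keystroke timings and the 3-sigma threshold below are illustrative assumptions.

```python
import statistics

def is_anomalous(sample, history, z_threshold=3.0):
    """Flag a measurement that sits far outside the user's usual behavior."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(sample - mean) / stdev > z_threshold

# A user's typical inter-keystroke delays in milliseconds.
keystroke_ms = [105, 98, 110, 102, 95, 108, 100, 97]

print(is_anomalous(101, keystroke_ms))  # False: consistent with the profile
print(is_anomalous(300, keystroke_ms))  # True: likely a different typist (or a bot)
```

Real systems combine many such signals (mouse movement, speech cadence, navigation habits) and update the profile continuously rather than relying on a single metric.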
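The content-authentication idea reduces to recording a cryptographic fingerprint of the media at publication time; any later alteration changes the fingerprint. A minimal sketch using SHA-256 follows, with the immutable ledger itself out of scope and simulated here by a stored variable.

```python
import hashlib

def fingerprint(media_bytes):
    """SHA-256 digest of the media; in practice this value would be written
    to an immutable ledger when the content is published."""
    return hashlib.sha256(media_bytes).hexdigest()

original = b"frame-data-of-authentic-video"
recorded = fingerprint(original)  # the value "anchored" at publication time

tampered = b"frame-data-of-deepfaked-video"
print(fingerprint(original) == recorded)  # True: content verified as unchanged
print(fingerprint(tampered) == recorded)  # False: alteration detected
```

Because even a one-bit change to the media produces a completely different digest, verification requires only recomputing the hash and comparing it with the ledger entry.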
MetricDust's Expertise in Cybersecurity
With every day that passes, more companies are becoming aware of the benefits of leveraging AI to counter deepfake attacks. Deloitte underscores the point by predicting that the AI-in-cybersecurity market will grow from US$17.4 billion in 2022 to US$102.78 billion by 2032, a CAGR of 19.43%.
We at MetricDust are vigilant and cognizant of these developments and are well-equipped to flex our cybersecurity muscles by helping you develop and enforce comprehensive cybersecurity policies, including:
- data protection,
- access control, and
- incident management,
helping you protect your assets, comply with regulations, and maintain customer trust in an increasingly digital world.
By investing in measures like employee training, network security, regular vulnerability assessments and penetration testing to identify and mitigate security gaps, and compliance and risk management, we significantly help reduce the risk of cyber threats and position your company as a secure and resilient organization.