Cyber criminals have started weaponising deepfake technology, which uses Artificial Intelligence and Machine Learning to create realistic-looking videos and images. The technology manipulates images, audio, and video to make synthetic content appear authentic, often in real time.
The technology has been around for years, with the face-changing filters on Snapchat and TikTok among its most popular manifestations. These apps take real-time data and feed it through an algorithm to produce a synthetic image. Early deepfakes looked obviously doctored, and users could spot the manipulation easily. But the technology has matured, and it is now hard to tell what is real and what is fake. The talking Mona Lisa created by the Samsung AI Center in Moscow is a portent of things to come.
Deepfakes work by feeding images or voice clips into artificial neural networks as training data. The algorithm uses these inputs to identify and reconstruct voice and face patterns. Advances in AI have reduced the effort and cost of creating deepfakes: improved algorithms need less source material to produce more convincing results, enabling fraudsters to use these tools at scale.
The implications of deepfake for businesses
Cybercriminals can use deepfake technology to wreak havoc in many ways:
- Siphoning off money. Attackers have already used deepfake audio to impersonate the voice of a CEO or other senior executive. A convincingly cloned voice on the phone can be used to request a money transfer, grant access rights, or almost anything else. In 2019, fraudsters scammed a UK energy executive out of £200,000 with a faked phone call purportedly from the executive’s boss asking for money.
- More malware, ransomware and other standard attacks. The shift to virtual offices and work-from-home has increased the risk from deepfakes. Hackers may impersonate colleagues over video conferencing and other digital collaboration channels to launch social engineering attacks.
- Blackmailing. Blackmailers may manipulate an obscene video by swapping the face in the video with that of the target. They could target IT admins or other senior executives for access credentials, to steal money, trade secrets, or other sensitive data.
- Stock price and public opinion manipulation. Cybercriminals may create fake news using credible faces and voices to spread misinformation. Accurate deepfake portrayals of popular figures such as Barack Obama, Donald Trump, and Tom Cruise have already fooled many. Such impersonations could sway public opinion, move share prices, and harm business in many other ways.
How to protect against deepfakes
Deception-detecting software will eventually separate fact from fiction at scale, but such software is still a long way off. Until then, enterprises need a multi-faceted defence strategy that combines awareness and technology.
Verification
The mainstreaming of deepfakes has made a zero-trust policy, even for messages, unavoidable. There is no longer any “trusted source.”
- Verify the authenticity of any media before sharing or acting on it. Fact-check the video.
- Apply tools that detect any signs of manipulation. Ideally, seek out multiple sources for confirmation.
- For internal communications, use technology to authenticate videos. For example, use a cryptographic algorithm to insert hashes at set intervals during the video. Altering the video will change the hashes.
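The interval-hashing idea above can be sketched in a few lines. This is a minimal illustration, not a production watermarking scheme: it hashes fixed-size chunks of a video file with SHA-256 so that altering any part of the file changes at least one recorded hash. The chunk size and function names are illustrative assumptions.

```python
import hashlib

# Illustrative "interval": hash the file one fixed-size chunk at a time.
CHUNK_SIZE = 1024 * 1024  # 1 MiB per chunk (assumption, not a standard)

def chunk_hashes(path: str) -> list[str]:
    """Return a SHA-256 hash for each fixed-size chunk of the file."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def verify(path: str, recorded: list[str]) -> bool:
    """True only if every chunk hash matches the values recorded at creation."""
    return chunk_hashes(path) == recorded
```

In practice the recorded hashes would themselves be signed or stored out of band, otherwise an attacker who alters the video could simply recompute them.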
- Promote a security-minded company culture that makes zero-trust a cultural norm.
Training and Education
One reason deepfakes are dangerous is their novelty. Educate the workforce on the dangers of deepfakes and other manipulated media. Make them aware of:
- signs of manipulation, such as unnatural movements or inconsistencies in the media content.
- the ways hackers leverage the technology. Awareness keeps employees on guard when they encounter transactions that are out of the ordinary.
- spotting social engineering attempts such as phishing. Often cybercriminals use deepfake technology to make phishing attempts more convincing.
Enhanced security protocols
Conventional cyber security approaches cannot detect deepfakes. Tackling deepfakes requires entirely new security protocols.
- Add multiple checkpoints when processing audio and video. A deepfake has a high chance of fooling a single individual, but its chances of success fall as more people become involved.
- Double-check and verify before acting on video calls or messages that require doing something out of the norm.
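The checkpoint argument above is easy to quantify with a back-of-the-envelope model. Assuming (purely for illustration) that each reviewer independently spots a fake with probability p, the fake succeeds only if it fools every reviewer, i.e. with probability (1 − p)^k:

```python
def fool_probability(p_detect: float, reviewers: int) -> float:
    """Probability a deepfake passes all independent reviewers.

    Assumes each reviewer detects the fake independently with
    probability p_detect -- an illustrative simplification.
    """
    return (1 - p_detect) ** reviewers

# With an assumed 30% individual detection rate:
for k in (1, 2, 4):
    print(f"{k} reviewer(s): fooled with probability {fool_probability(0.3, k):.2f}")
```

Even with a modest individual detection rate, adding a second or third checkpoint cuts the attacker's odds substantially, which is why out-of-band verification by more than one person is worth the friction.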
Use Artificial Intelligence powered tools
Deepfakes thrive on machine learning and advanced analytics. Businesses can use the same capabilities to fight back.
The ways to use AI to defend against deepfakes include:
- Detection. Develop machine learning algorithms to detect tell-tale signs of manipulated media. Train the algorithms to identify inconsistencies, unnatural movements or unrealistic facial expressions.
- Authentication. Analyse metadata, such as time, location and device.
- Prevention. Train algorithms to recognise the patterns or techniques used in deepfakes. Flag content that exhibits these patterns for in-depth review.
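The authentication and prevention steps above can be combined into a simple triage rule: check a clip's claimed metadata against what the organisation expects, and flag anything inconsistent for human review. The field names, the trusted-device list, and the checks themselves are illustrative assumptions, not a real forensic standard:

```python
from datetime import datetime, timezone

# Hypothetical allow-list of capture devices the organisation trusts.
TRUSTED_DEVICES = {"boardroom-cam-01", "ceo-phone"}

def flag_for_review(meta: dict) -> list[str]:
    """Return reasons a clip should get in-depth human review.

    An empty list means no metadata red flags were found -- it does
    NOT prove the clip is genuine.
    """
    reasons = []
    if meta.get("device") not in TRUSTED_DEVICES:
        reasons.append("unknown capture device")
    captured = meta.get("captured_at")
    if captured is None or captured > datetime.now(timezone.utc):
        reasons.append("missing or future timestamp")
    return reasons
```

Real deployments would add many more signals (location, codec fingerprints, model-based artefact scores), but the structure stays the same: accumulate reasons, and route anything non-empty to a human.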
Legal approach
A viable solution to end the menace of deepfakes may lie outside the company. Enterprises could lobby for legal measures such as:
- Legislation to criminalise creation or distribution of deepfakes.
- A proactive approach by law enforcement to trace deepfake content creators.
Businesses would do well to identify the beneficiaries of deepfake attacks and pursue damages in court. The chances of identifying those responsible are slim in today’s world, where anonymity is easy. But a sustained, unrelenting approach may deter attackers from targeting the company again.
Be ready with a response strategy
The best preventive plans may still not keep deepfake attacks at bay. Have an action plan ready in the eventuality of a deepfake strike.
The best plan depends on the enterprise. But make sure the action plan:
- Details clear-cut individual responsibilities and a step-by-step list of to-dos.
- Integrates with the company’s standard incident management or crisis response process.
- Remains flexible. Deepfakes are still an evolving technology, and the nature of the threat can change at any time. Security teams must be able to adapt their approaches and stacks as the threat changes.
Deepfakes exemplify how technology can mislead despite the best plans and precautions. Enterprises that understand the threat and take proactive measures can overcome the danger and avoid costly disruptions and losses.