AI Home Security

How Threat Actors Manipulate Reality

By justmattg
March 7, 2023, in Cyber Security



From Disinformation to Deep Fakes

Deep fakes are expected to become a more prominent attack vector. Here’s how to identify them.

What are Deep Fakes?

A deep fake is an image, video or audio clip that has been maliciously fabricated to replace the real thing, as a form of information manipulation. Producing images, video and audio of high enough quality for deep fakes requires AI and ML. This sets deep fakes apart from other types of information manipulation, which rely on less extreme techniques, like misrepresenting information, quoting it out of context or editing it deceptively. Etay Maor, Senior Director of Security Strategy at Cato Networks, adds: "To add complications, the recent advancements in and accessibility of AI-generated text, such as GPT-3, have already been used in combination with deep fakes (as a proof of concept) to create interactive, human-looking conversation bots."

What Do Deep Fakes Look Like?

Deep fakes come in all shapes and sizes. Some are simple, others more advanced. The most common examples of deep fakes are:

Face Swap

Face swapping is the act of replacing one person's face with another's in a video or image. It requires dedicated software, but not necessarily advanced technology: today even mobile apps support face swapping. Face swapping in mobile apps is usually limited to simple use cases, like swapping the user's face onto an actor's face in a movie scene.


More advanced face swapping does exist, but it requires more model training and code, and, as a result, GPUs, which makes it expensive and resource-intensive. An example of a more advanced face-swapping deep fake can be seen in this video, in which Tom Cruise's face is swapped onto the presenter's:


This Tom Cruise face swap required two hours of training on a GPU, plus days of professional video-editing post-processing. That might sound like a lot, but it was still considered a simpler swap than most, because the presenter had a similar haircut to Cruise and could impersonate his voice, which meant less training and post-processing were required.

Puppet Master (Lip Sync)

The 'Puppet Master' deep fake is a technique in which a person's mouth movements are manipulated in a video, making it seem like the person is saying something they never actually said. Whereas face swapping trains a model on the new, swapped face, 'Puppet Master' trains a model on the face in the original footage, and specifically on its mouth movements.


The technology behind 'Puppet Master' works by synthesizing a mask from the original face and placing it on top of the impersonator, who lip-syncs the target.


Audio

The third prominent type of deep fake is audio-based. Audio deep fakes take a real person's voice and make it sound like they are saying something they never said. They are created by taking audio recordings, annotating the sounds in them, training an ML model on those annotations to associate sounds with text, and then generating new audio files.
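The text-to-sound association described above can be illustrated with a deliberately simplified sketch. Real audio deep fakes train a neural model on annotated speech; here, a hypothetical "voice bank" maps phoneme labels to short waveforms standing in for recorded units of the target speaker, and new audio is produced by concatenating them. All names and values below are illustrative assumptions, not any specific tool's API.

```python
import numpy as np

SAMPLE_RATE = 16_000  # samples per second

def tone(freq, dur=0.1):
    """Stand-in for a recorded phoneme unit: a short sine wave."""
    t = np.linspace(0, dur, int(SAMPLE_RATE * dur), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

# Toy voice bank: phoneme label -> waveform "learned" from the speaker.
voice_bank = {"HH": tone(220), "EH": tone(330), "L": tone(440), "OW": tone(550)}

def synthesize(phonemes):
    """Concatenate the speaker's units to 'say' something new."""
    return np.concatenate([voice_bank[p] for p in phonemes])

# Generate audio for a word the speaker never actually said.
audio = synthesize(["HH", "EH", "L", "OW"])  # 6400 samples, 0.4 seconds
```

The key point the sketch captures is that once a model (here, a lookup table) associates sounds with text, arbitrary new utterances can be assembled in the target's voice.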


Deep Fakes vs. Cheap Fakes

Not all modified images or audio are deep fakes. While deep fakes are media synthesized or modified using AI, cheap fakes are media synthesized or modified using low-tech methods, which are easy to spot: they often contain distortions and show clear signs of manipulation.


The Cyber Risk of Deep Fakes

Deep fakes have become more realistic and accessible, and they are also faster to create than ever before. This makes them a powerful tool for weaponization, and as a result they pose a risk to businesses and to countries. They can be used for cyber crime, social engineering and fraud, and by nation-state threat actors for foreign influence operations, among other purposes.

For example, a deep fake was used to mimic a CEO's voice and convince an executive to wire $243,000 to a scam account. Etay Maor of Cato Networks notes: "Business email compromise and phishing attacks are becoming harder and harder to detect based on simple analysis of the language used. There is a need for a holistic approach, such as the one offered by a single-vendor SASE solution, that can detect the attack at multiple choke points and not rely on isolated point products that are doomed to fail." In another case, a deep fake was presented as evidence in a child custody case.

Deep fakes can also be used to spread disinformation, i.e. the deliberate distribution of false information to influence public opinion or obscure the truth. For example, deep fakes could be used to impersonate world leaders and incite an attack, or to impersonate a CEO and manipulate a company's stock price. Deep fakes also enable plausible deniability: people can dismiss genuine media by claiming it is a deep fake, which erodes social trust.

Finally, deep fakes can be used for defamation, i.e. damaging someone's reputation, for example by creating revenge porn.

How to Detect Deep Fakes

There are two main types of methods for accurately detecting deep fakes:

  • Low-level detection methods
  • High-level detection methods

Low-level Detection Methods

Low-level detection methods rely on ML models trained to identify artifacts or pixel-level anomalies introduced by the deep fake generation process. These artifacts may be imperceptible to the human eye, but models trained on both real and deep fake images are able to detect them.
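To make the idea of generation artifacts concrete, here is a minimal sketch, under the assumption that synthesis pipelines can leave excess high-frequency energy in an image. It uses a crude spectral heuristic rather than a trained model (production low-level detectors train CNNs on real versus fake images); the cutoff value is an illustrative assumption.

```python
import numpy as np

def high_freq_energy_ratio(image, cutoff=0.25):
    """Fraction of spectral energy beyond a normalized radius from DC."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalized distance of each frequency bin from the spectrum center
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return spectrum[r > cutoff].sum() / spectrum.sum()

rng = np.random.default_rng(0)
# Smooth image: energy concentrated at low frequencies (natural-image-like).
smooth = rng.random((64, 64)).cumsum(axis=0).cumsum(axis=1)
# Noisy image: flat spectrum, stand-in for artifact-heavy synthetic output.
noisy = rng.random((64, 64))
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy))  # True
```

A real detector learns far subtler statistical fingerprints than this, but the principle is the same: measure a property of the signal that generation leaves behind and that humans cannot see.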


High-level Detection Methods

High-level detection methods use models that identify semantically meaningful features: unnatural blinking, head poses or mannerisms, and phoneme-viseme mismatches, where the shape of the mouth does not match the sound supposedly being spoken.
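The blinking cue can be sketched very simply. Humans blink roughly every few seconds, while some early deep fakes barely blinked at all. Assuming a per-frame eye-aspect-ratio (EAR) series from a facial landmark detector, a hypothetical check counts blinks and flags implausibly low rates; all thresholds below are illustrative, not authoritative.

```python
def count_blinks(ear_series, closed_thresh=0.2):
    """Count transitions from open to closed eyes in an EAR time series."""
    blinks, eyes_closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not eyes_closed:
            blinks += 1          # eye just closed: start of a blink
            eyes_closed = True
        elif ear >= closed_thresh:
            eyes_closed = False  # eye reopened
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_min=2):
    """Flag footage whose blink rate is implausibly low for a human."""
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) / minutes < min_blinks_per_min

open_eye, blink = [0.3] * 58, [0.1, 0.1]
normal = (open_eye + blink) * 30  # 60 s at 30 fps, ~30 blinks
static = [0.3] * 1800             # 60 s at 30 fps, no blinks at all
print(looks_suspicious(normal), looks_suspicious(static))  # False True
```

Phoneme-viseme checks work analogously: extract a semantically meaningful signal (mouth shape versus expected sound) and flag sequences that no real speaker would produce.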


Today, these detection methods are considered accurate. However, as deep fake technology improves and becomes more sophisticated, they are expected to become less effective and will need to be updated and improved. In addition to these techniques, each one of us can help detect deep fakes by verifying the media source of videos and images we receive.
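One concrete piece of source verification is checking file integrity: if a publisher posts a checksum alongside a video, recipients can confirm the bytes were not altered after publication. This is a sketch of that step only; it proves integrity against a known-good hash, not that the original content is authentic, and the function names are hypothetical.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a media file in chunks so large videos fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_hash(path, published_hex):
    """Compare a local file against a publisher's announced checksum."""
    return sha256_of(path) == published_hex.lower()
```

Provenance standards go further than checksums, attaching signed capture-and-edit histories to media, but even a simple hash comparison catches tampering in transit.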

To learn more about different types of cybersecurity attacks and how to prevent them, Cato Networks’ Cyber Security Masterclass series is available for your viewing.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.






© 2023 AI Home Security - All rights reserved.