AI Home Security

Bad Actors Will Use Large Language Models — but Defenders Can, Too

by justmattg
April 10, 2023
in Cyber Security

AI is dominating headlines, and ChatGPT specifically has become the topic du jour. Everyone is taken with the novelty and the distraction, but few are addressing the elephant in the room: how large language models (LLMs) can and will be weaponized.

The Internet has become incredibly large and complex, exposing our crown jewels. Whereas a decade ago a company had a single website, today it may have dozens, filled with unknown and untracked assets and subsidiaries, that an attacker can use to exfiltrate intellectual property or breach its networks and systems. Recent research we conducted provided an eye-opening glimpse into this reality:

  • Companies’ attack surfaces fluctuate 9% in size each month, making security gaps harder to detect. 
  • Organizations have, on average, 104 subsidiaries (i.e., entities owned by a parent company, which might be business units, brands, or standalone companies), and the core security team is unaware of 10 to 31 of them.
  • Invisible or hard-to-detect subsidiaries contain an average of 56% of the critical and high-priority vulnerabilities affecting customer assets.

In short, companies’ attack surfaces have never been larger and more vulnerable. And security leaders are in constant fear that another issue like Log4j is going to cripple their business.

And as if we didn’t have enough security threats to contend with, large language models like ChatGPT have entered the mainstream, shining a light on language AI as a potential weapon for cyberattacks. Should we be worried? The short answer is yes. But there is a bright side, which I’ll address later.

Large Language Models Can and Will Be Used Against You

There are several stages of cyberattacks where LLMs can give bad actors a major advantage of scale, scope, reach, and speed. Here are a few:

  • Automated reconnaissance. Map and discover any assets (devices, files, etc.) and subsidiaries, brands, and services associated with your organization. Find sensitive information such as exposed credentials in AWS directories.
  • Vulnerability discovery. Find weaknesses in the targeted network.
  • Exploitation. Initial exploitation uses a technique such as phishing to gain access to a network; targeted exploitation then develops and exploits vulnerabilities within the network, for example through watering-hole attacks.
  • Data theft. Copy or exfiltrate sensitive or valuable data from the network.
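
The reconnaissance stage above cuts both ways: defenders can run the same sweep over their own repositories and file shares before an attacker does. As a minimal sketch of the "exposed credentials" check, a couple of regular expressions catch the most common AWS credential formats. The patterns here are illustrative, not a complete secret-scanning ruleset; dedicated scanners use far larger ones.

```python
import re

# Illustrative patterns for two common AWS credential formats.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "aws_secret_access_key": re.compile(
        r"aws_secret_access_key\s*[=:]\s*['\"]?([A-Za-z0-9/+=]{40})['\"]?"
    ),
}

def find_exposed_credentials(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the given text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# AWS's documented example key, safe to use in demos.
sample = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE"
print(find_exposed_credentials(sample))
```

Sweeping every internal repository with checks like this, and rotating anything that turns up, closes off one of the cheapest reconnaissance wins available to an attacker.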

Also, consumer applications based on LLMs, most notably ChatGPT, can be used both intentionally and unintentionally by employees to leak company IP, simply by using the free public version. Companies like JP Morgan caught on to this early and were swift to ban corporate use of ChatGPT.

Spear-phishing campaigns provide another use case. High-quality phishing is based on deep understanding of the target; that is precisely what large language models can do quite well, because they process large volumes of data very quickly and customize messages effectively. Emails created by a large language model can impersonate a boss, co-worker, friend, or reputable organization with increasing precision and believability. Since 82% of data breaches involve a human element, including phishing and the use of stolen credentials, this will be an area to watch as hackers use LLMs to ramp up such attacks.

Security Teams Can Turn the Tables on Attackers

There is good news: Security teams can also use machine learning and LLMs to do reconnaissance on their own companies and remediate vulnerabilities before attackers get to them. They can quickly and cost-effectively scan and map their own attack surfaces in depth to find exposed sensitive assets, personally identifiable information (PII), files, and more. By contrast, performing the same feat with manual methods could take months and cost hundreds of thousands of dollars.
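
The gap between the assets a security team tracks and the assets that actually exist is where those 10 to 31 unknown subsidiaries hide. The core of the comparison is a simple set difference, sketched below with hypothetical inventories; in practice the discovered set comes from sources such as DNS enumeration and certificate transparency logs.

```python
# Hypothetical inventories: what the security team tracks vs. what
# external discovery turns up. Domain names are invented for the example.
tracked_assets = {
    "www.example.com",
    "api.example.com",
    "vpn.example.com",
}
discovered_assets = {
    "www.example.com",
    "api.example.com",
    "vpn.example.com",
    "staging.example-brand.net",    # untracked subsidiary domain
    "legacy-payments.example.com",  # forgotten internal service
}

# Assets discovered externally but absent from inventory are the
# blind spots; they become the first candidates for triage.
unknown = sorted(discovered_assets - tracked_assets)
print(unknown)
```

Re-running this comparison continuously, rather than once a year, is what keeps pace with attack surfaces that fluctuate 9% in size each month.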

Knowing the business context of any given asset is the only way security teams can effectively prioritize risk, and machine learning can help. For example, machine learning could recognize that a database holds PII and plays a role in revenue transactions.

Machine learning can also determine the business purpose of an asset, distinguishing between a payment mechanism, a critical database, and a random device — and classify its risk profile. This context allows exponentially better risk prioritization and a higher level of threat intelligence. Without proper prioritization, security teams confront endless lists of vulnerabilities with labels like Urgent and Critical that are often, in fact, not correct. 
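
A crude illustration of context-driven prioritization: weight each vulnerability's raw severity by the business attributes of the asset it sits on, then rank. The attributes and multipliers below are invented for the example; a production system would learn them from labeled data rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    holds_pii: bool
    revenue_critical: bool
    cvss: float  # raw vulnerability severity, 0-10

def business_risk(asset: Asset) -> float:
    """Weight raw severity by business context.

    The multipliers are illustrative assumptions. The point is that a
    medium-severity flaw on a PII-bearing, revenue-critical database can
    outrank a critical flaw on a throwaway device.
    """
    score = asset.cvss
    if asset.holds_pii:
        score *= 1.5
    if asset.revenue_critical:
        score *= 1.5
    return score

assets = [
    Asset("random-iot-device", holds_pii=False, revenue_critical=False, cvss=9.8),
    Asset("customer-db", holds_pii=True, revenue_critical=True, cvss=6.5),
]
ranked = sorted(assets, key=business_risk, reverse=True)
print([a.name for a in ranked])
```

Here the customer database (6.5 × 1.5 × 1.5 ≈ 14.6) outranks the 9.8-severity device, which is exactly the reordering that raw "Critical" labels fail to deliver.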

Preparing for a New Era of Attacks

There is every reason to expect attackers will make the most of large language models to automate reconnaissance and map your attack surfaces. It is time for security teams to embark on yet another learning curve: Find the best, most effective uses of large language models for defensive purposes. Right now, someone somewhere is looking for your organization’s vulnerabilities, and it’s just a matter of time before they use this newly popular type of tool to find them.



© 2023 AI Home Security - All rights reserved.
