    Bad Actors Will Use Large Language Models — but Defenders Can, Too

    By justmattg | April 10, 2023 | 4 Min Read

    AI is dominating headlines, and ChatGPT in particular has become the topic du jour. Everyone is captivated by the novelty, but few are addressing the elephant in the room: how large language models (LLMs) can and will be weaponized.

    The Internet has become incredibly large and complex, exposing organizations' crown jewels. Whereas a decade ago a company had a single website, today it may have dozens, filled with unknown and untracked assets and subsidiaries that an attacker can use to exfiltrate intellectual property or breach its networks and systems. Recent research we conducted offers an eye-opening glimpse into this reality:

    • Companies’ attack surfaces fluctuate 9% in size each month, making security gaps harder to detect. 
    • Organizations have, on average, 104 subsidiaries (i.e., entities owned by a parent company, which might be business units, brands, or standalone companies), and the core security team is unaware of 10 to 31 of them.
    • Invisible or hard-to-detect subsidiaries contain an average of 56% of the critical and high-priority vulnerabilities affecting customer assets.

    In short, companies’ attack surfaces have never been larger and more vulnerable. And security leaders are in constant fear that another issue like Log4j is going to cripple their business.

    And as if we didn’t have enough security threats to contend with, large language models like ChatGPT have entered the mainstream, shining a light on language AI as a potential weapon for cyberattacks. Should we be worried? The short answer is yes. But there is a bright side, which I’ll address later.

    Large Language Models Can and Will Be Used Against You

    There are several stages of cyberattacks where LLMs can give bad actors a major advantage of scale, scope, reach, and speed. Here are a few:

    • Automated reconnaissance. Map and discover any assets (devices, files, etc.) and subsidiaries, brands, and services associated with your organization. Find sensitive information such as exposed credentials in AWS directories.
    • Vulnerability discovery. Find weaknesses in the targeted network.
    • Exploitation. Initial exploitation uses a technique such as phishing to gain a foothold in a network; targeted exploitation then follows up with techniques such as watering-hole attacks to develop and exploit vulnerabilities inside it.
    • Data theft. Copy or exfiltrate sensitive or valuable data from the network.
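
    To make the reconnaissance stage above concrete, here is a minimal defensive-side sketch of the same idea: given hostnames discovered during an attack-surface sweep, flag the ones whose names hint at sensitive or forgotten assets. The hint list and hostnames are illustrative assumptions, not a real ruleset.

```python
# Hypothetical sketch: triage hostnames found by automated reconnaissance.
# Labels like "dev", "backup", or "s3" often mark assets that leak
# credentials or internal data; flagging them first narrows the search space.
SENSITIVE_HINTS = ("dev", "staging", "backup", "admin", "s3", "vpn", "git")

def triage_hostnames(hostnames):
    """Return (hostname, matched_hints) pairs, most hints first."""
    flagged = []
    for host in hostnames:
        hits = sorted({h for h in SENSITIVE_HINTS if h in host.lower()})
        if hits:
            flagged.append((host, hits))
    # More hints matched -> likely more interesting to attacker and defender alike.
    flagged.sort(key=lambda item: len(item[1]), reverse=True)
    return flagged

inventory = [
    "www.example.com",
    "dev-backup.example.com",
    "s3-assets.example.com",
    "blog.example.com",
]
print(triage_hostnames(inventory))
```

    The same pass an attacker would automate with an LLM can be run by the defender first, against a complete asset inventory.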

    Also, employees can leak company IP through consumer applications based on LLMs, most notably the free public version of ChatGPT, whether intentionally or unintentionally. Companies like JPMorgan caught on to this early and were swift to ban corporate use of ChatGPT.
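
    Organizations that stop short of a full ban sometimes mitigate this leak path with an outbound prompt filter. The sketch below is hypothetical: it scans text for a few illustrative secret patterns (an AWS-style access key ID, a PEM private-key header, an assumed internal domain) before the text is allowed to reach a public LLM.

```python
import re

# Hypothetical outbound prompt filter: before an employee's text is sent to a
# public LLM, scan it for strings that look like company secrets. These
# patterns are illustrative, not a complete DLP ruleset.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_host": re.compile(r"\b[\w.-]+\.internal\.example\.com\b"),
}

def check_prompt(prompt):
    """Return the names of secret patterns found; an empty list means allow."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(prompt)]

print(check_prompt("Summarize this log: login from db1.internal.example.com"))
```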

    Spear-phishing campaigns provide another use case. High-quality phishing is based on deep understanding of the target; that is precisely what large language models can do quite well, because they process large volumes of data very quickly and customize messages effectively. Emails created by a large language model can impersonate a boss, co-worker, friend, or reputable organization with increasing precision and believability. Since 82% of data breaches involve a human element, including phishing and the use of stolen credentials, this will be an area to watch as hackers use LLMs to ramp up such attacks.

    Security Teams Can Turn the Tables on Attackers

    There is good news: Security teams can also use machine learning and LLMs to do reconnaissance on their own companies and remediate vulnerabilities before attackers get to them. They can use these tools to quickly and cost-effectively scan and map their own attack surfaces in depth to find exposed sensitive assets, personally identifiable information (PII), files, and more. By contrast, performing the same feat with manual methods could take months and/or cost hundreds of thousands of dollars.
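
    As a rough illustration of such a scan, the sketch below sweeps text pulled from an organization's own exposed assets for strings that look like PII. The patterns and asset names are invented for illustration; a production scanner would use far richer detection than a few regexes.

```python
import re

# Hypothetical PII sweep over content retrieved from your own exposed assets,
# so the riskiest exposures are remediated before automated recon finds them.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone_like": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_for_pii(asset_name, text):
    """Return (asset_name, pattern_name, match) triples for each PII hit."""
    hits = []
    for name, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((asset_name, name, match))
    return hits

sample = "Contact jane.doe@example.com or 555-867-5309 for access."
print(scan_for_pii("s3://public-bucket/readme.txt", sample))
```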

    Knowing the business context of any given asset is the only way security teams can effectively prioritize risk, and machine learning can help. For example, machine learning could recognize that a database holds PII and plays a role in revenue transactions.

    Machine learning can also determine the business purpose of an asset, distinguishing between a payment mechanism, a critical database, and a random device — and classify its risk profile. This context allows exponentially better risk prioritization and a higher level of threat intelligence. Without proper prioritization, security teams confront endless lists of vulnerabilities with labels like Urgent and Critical that are often, in fact, not correct. 
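
    This kind of context-aware prioritization can be sketched as a simple weighting: the same raw severity score is multiplied by a weight reflecting the asset's business purpose, so a payment database outranks a forgotten test device with a higher paper severity. The roles and weights below are assumptions made for illustration.

```python
# Hypothetical business-context weights; in practice these would be learned
# or assigned from an ML-classified asset inventory.
BUSINESS_WEIGHT = {
    "payment_mechanism": 3.0,
    "pii_database": 2.5,
    "internal_tool": 1.2,
    "random_device": 0.5,
}

def prioritize(findings):
    """findings: (asset, business_role, severity 0-10). Highest contextual risk first."""
    def contextual_risk(finding):
        _, role, severity = finding
        return severity * BUSINESS_WEIGHT.get(role, 1.0)
    return sorted(findings, key=contextual_risk, reverse=True)

findings = [
    ("test-printer-07", "random_device", 9.8),   # "Critical" on paper...
    ("checkout-db", "payment_mechanism", 6.5),   # ...but this one matters more.
    ("hr-records", "pii_database", 5.0),
]
print(prioritize(findings))
```

    The weighting pushes the nominally "Critical" printer below the payment database, which is exactly the reordering the paragraph above argues for.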

    Preparing for a New Era of Attacks

    There is every reason to expect attackers will make the most of large language models to automate reconnaissance and map your attack surfaces. It is time for security teams to embark on yet another learning curve: Find the best, most effective uses of large language models for defensive purposes. Right now, someone somewhere is looking for your organization’s vulnerabilities, and it’s just a matter of time before they use this newly popular type of tool to find them.
