Facebook Twitter Instagram
    • Privacy Policy
    • Contact Us
    Facebook Twitter Instagram Pinterest Vimeo
    AI Home SecurityAI Home Security
    • Home
    • Home Security
    • Cyber Security
    • Biometric Technology
    Contact
    AI Home SecurityAI Home Security
    Cyber Security

    Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

    justmattgBy justmattgOctober 29, 2023No Comments2 Mins Read

    [ad_1]

    Oct 27, 2023NewsroomArtificial Intelligence / Vulnerability

    Artificial Intelligence Threats

    Google has announced that it’s expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security.

    “Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations),” Google’s Laurie Richardson and Royal Hansen said.

    Some of the categories that are in scope include prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft.

    Cybersecurity

    It’s worth noting that Google earlier this July instituted an AI Red Team to help address threats to AI systems as part of its Secure AI Framework (SAIF).

    Also announced as part of its commitment to secure AI are efforts to strengthen the AI supply chain via existing open-source security initiatives such as Supply Chain Levels for Software Artifacts (SLSA) and Sigstore.

    Artificial Intelligence Threats

    “Digital signatures, such as those from Sigstore, which allow users to verify that the software wasn’t tampered with or replaced,” Google said.

    “Metadata such as SLSA provenance that tell us what’s in software and how it was built, allowing consumers to ensure license compatibility, identify known vulnerabilities, and detect more advanced threats.”

    Cybersecurity

    The development comes as OpenAI unveiled a new internal Preparedness team to “track, evaluate, forecast, and protect” against catastrophic risks to generative AI spanning cybersecurity, chemical, biological, radiological, and nuclear (CBRN) threats.

    The two companies, alongside Anthropic and Microsoft, have also announced the creation of a $10 million AI Safety Fund, focused on promoting research in the field of AI safety.

    Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.



    [ad_2]

    Source link

    Previous ArticleOcto Tempest Group Threatens Physical Violence as Social Engineering Tactic
    Next Article Securing Cloud Identities to Protect Assets and Minimize Risk
    justmattg
    • Website

    Related Posts

    Cyber Security

    Name That Toon: Last Line of Defense

    April 16, 2024
    Cyber Security

    OpenJS Foundation Targeted in Potential JavaScript Project Takeover Attempt

    April 16, 2024
    Cyber Security

    Middle East Cyber Ops Intensify, With Israel the Main Target

    April 16, 2024
    Add A Comment

    Leave A Reply Cancel Reply

    Facebook Twitter Instagram Pinterest
    • Privacy Policy
    • Contact Us
    AI Home Security © 2025 All rights reserved | Designed By ESmartsSolution

    Type above and press Enter to search. Press Esc to cancel.

    ↑