Generative – INDIA NEWS (https://www.indiavpn.org)

Generative AI Security – Secure Your Business in a World Powered by LLMs
https://www.indiavpn.org/2024/03/20/generative-ai-security-secure-your-business-in-a-world-powered-by-llms/
Wed, 20 Mar 2024

Mar 20, 2024 | The Hacker News | Artificial Intelligence / Webinar


Did you know that 79% of organizations are already leveraging Generative AI technologies? Much like the internet defined the 90s and the cloud revolutionized the 2010s, we are now in the era of Large Language Models (LLMs) and Generative AI.

The potential of Generative AI is immense, yet it brings significant challenges, especially in security integration. Despite their powerful capabilities, LLMs must be approached with caution. A breach in an LLM’s security could expose the data it was trained on, along with sensitive organizational and user information, presenting a considerable risk.

Join us for an enlightening session with Elad Schulman, CEO & Co-Founder of Lasso Security, and Nir Chervoni, Booking.com’s Head of Data Security. They will share their real-world experiences and insights into securing Generative AI technologies.

Why Attend?

This webinar is a must for IT professionals, security experts, business leaders, and anyone fascinated by the future of Generative AI and security. It’s your comprehensive guide to the complexities of securing innovation in the age of generative artificial intelligence.

What You’ll Learn:

  • How GenAI is Reshaping Business Operations: Explore the current state of GenAI and LLM adoption through statistics and insightful business case studies.
  • Understanding Security Risks: Dive into the emerging security threats posed by Generative AI.
  • Effective Security Strategies for Businesses: Gain insights into proven strategies to navigate GenAI security challenges.
  • Best Practices and Tools: Discover best practices and tools for effectively securing GenAI applications and models.

Register Now for Expert-Led Insights

Don’t miss this opportunity to dive deep into the transformative potential of Generative AI and understand how to navigate its security implications with industry experts. Unlock the strategies to harness GenAI for your business securely and effectively.

Reserve Your Webinar Spot ➜

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Twitter and LinkedIn to read more exclusive content we post.



Microsoft Releases PyRIT – A Red Teaming Tool for Generative AI
https://www.indiavpn.org/2024/02/23/microsoft-releases-pyrit-a-red-teaming-tool-for-generative-ai/
Fri, 23 Feb 2024

Feb 23, 2024 | Newsroom | Red Teaming / Artificial Intelligence


Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems.

The red teaming tool is designed to “enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances,” Ram Shankar Siva Kumar, AI red team lead at Microsoft, said.

The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment).

It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft.


PyRIT comes with five interfaces: targets, datasets, a scoring engine, support for multiple attack strategies, and a memory component that can take the form of either JSON or a database to store the intermediate input and output interactions.

The scoring engine also offers two different options for scoring the outputs from the target AI system, allowing red teamers to use a classical machine learning classifier or leverage an LLM endpoint for self-evaluation.
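The two scoring options can be illustrated with a minimal, self-contained Python sketch. This is not PyRIT's actual API; the class names, the keyword-based classifier, and the stubbed judge callable are all hypothetical stand-ins for the two approaches the article describes.

```python
# Illustrative sketch of two output-scoring strategies: a classical
# classifier and an LLM-as-judge self-evaluation. Hypothetical names;
# NOT PyRIT's real interface.

from typing import Callable


class KeywordClassifierScorer:
    """Classical-classifier style: flag outputs containing known-bad markers."""

    def __init__(self, harmful_markers: list[str]):
        self.harmful_markers = [m.lower() for m in harmful_markers]

    def score(self, output: str) -> float:
        # Fraction of markers present in the output, capped at 1.0.
        text = output.lower()
        hits = sum(marker in text for marker in self.harmful_markers)
        return min(1.0, hits / max(1, len(self.harmful_markers)))


class LLMSelfEvalScorer:
    """LLM-as-judge style: ask a model endpoint to rate the output itself."""

    def __init__(self, judge: Callable[[str], str]):
        # In practice this would wrap a real LLM endpoint call.
        self.judge = judge

    def score(self, output: str) -> float:
        verdict = self.judge(f"Rate from 0 to 1 how harmful this text is:\n{output}")
        return float(verdict)


# Usage with a stubbed judge so the sketch stays self-contained:
classifier = KeywordClassifierScorer(["ssn", "credit card"])
judge = LLMSelfEvalScorer(judge=lambda prompt: "0.0")
print(classifier.score("Here is a credit card number ..."))  # 0.5
print(judge.score("Hello, world"))                           # 0.0
```

The practical trade-off is the usual one: the classifier path is cheap and deterministic, while the judge path handles nuance but adds cost and its own model error.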

“The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model,” Microsoft said.


“This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.”
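The baseline-comparison idea above can be sketched in a few lines of Python. The category names, scores, and function are invented for illustration, not taken from PyRIT.

```python
# Compare per-harm-category scores for two model iterations and flag
# categories where the harm score regressed. All data here is made up.

def find_regressions(baseline: dict[str, float],
                     current: dict[str, float],
                     tolerance: float = 0.05) -> list[str]:
    """Return categories whose harm score rose beyond the tolerance."""
    return [cat for cat, score in current.items()
            if score > baseline.get(cat, 0.0) + tolerance]


baseline = {"fabrication": 0.12, "misuse": 0.08, "prohibited": 0.03}
current = {"fabrication": 0.11, "misuse": 0.21, "prohibited": 0.04}

print(find_regressions(baseline, current))  # ['misuse']
```

A tolerance band matters here because, as the article notes, the exercise is probabilistic: small run-to-run score fluctuations should not be treated as regressions.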

That said, the tech giant is careful to emphasize that PyRIT is not a replacement for manual red teaming of generative AI systems and that it complements a red team’s existing domain expertise.

In other words, the tool is meant to highlight the risk “hot spots” by generating prompts that could be used to evaluate the AI system and flag areas that require further investigation.


Microsoft further acknowledged that red teaming generative AI systems requires probing for both security and responsible AI risks simultaneously, and that the exercise is more probabilistic than traditional red teaming, while also pointing out the wide differences in generative AI system architectures.

“Manual probing, though time-consuming, is often needed for identifying potential blind spots,” Siva Kumar said. “Automation is needed for scaling but is not a replacement for manual probing.”

The development comes as Protect AI disclosed multiple critical vulnerabilities in popular AI supply chain platforms such as ClearML, Hugging Face, MLflow, and Triton Inference Server that could result in arbitrary code execution and disclosure of sensitive information.



