Googles – INDIA NEWS https://www.indiavpn.org News Blog Wed, 13 Mar 2024 12:45:48 +0000 en-US hourly 1 https://wordpress.org/?v=6.7.1 Researchers Highlight Google’s Gemini AI Susceptibility to LLM Threats https://www.indiavpn.org/2024/03/13/researchers-highlight-googles-gemini-ai-susceptibility-to-llm-threats/ https://www.indiavpn.org/2024/03/13/researchers-highlight-googles-gemini-ai-susceptibility-to-llm-threats/#respond Wed, 13 Mar 2024 12:45:48 +0000 https://www.indiavpn.org/2024/03/13/researchers-highlight-googles-gemini-ai-susceptibility-to-llm-threats/ [ad_1]

Mar 13, 2024NewsroomLarge Language Model / AI Security

Google's Gemini AI

Google’s Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks.

The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API.

The first vulnerability involves getting around security guardrails to leak the system prompts (or a system message), which are designed to set conversation-wide instructions to the LLM to help it generate more useful responses, by asking the model to output its “foundational instructions” in a markdown block.

“A system message can be used to inform the LLM about the context,” Microsoft notes in its documentation about LLM prompt engineering.

“The context may be the type of conversation it is engaging in, or the function it is supposed to perform. It helps the LLM generate more appropriate responses.”

Cybersecurity

This is made possible due to the fact that models are susceptible to what’s called a synonym attack to circumvent security defenses and content restrictions.

A second class of vulnerabilities relates to using “crafty jailbreaking” techniques to make the Gemini models generate misinformation surrounding topics like elections as well as output potentially illegal and dangerous information (e.g., hot-wiring a car) using a prompt that asks it to enter into a fictional state.

Also identified by HiddenLayer is a third shortcoming that could cause the LLM to leak information in the system prompt by passing repeated uncommon tokens as input.

“Most LLMs are trained to respond to queries with a clear delineation between the user’s input and the system prompt,” security researcher Kenneth Yeung said in a Tuesday report.

“By creating a line of nonsensical tokens, we can fool the LLM into believing it is time for it to respond and cause it to output a confirmation message, usually including the information in the prompt.”

Another test involves using Gemini Advanced and a specially crafted Google document, with the latter connected to the LLM via the Google Workspace extension.

The instructions in the document could be designed to override the model’s instructions and perform a set of malicious actions that enable an attacker to have full control of a victim’s interactions with the model.

The disclosure comes as a group of academics from Google DeepMind, ETH Zurich, University of Washington, OpenAI, and the McGill University revealed a novel model-stealing attack that makes it possible to extract “precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2.”

Cybersecurity

That said, it’s worth noting that these vulnerabilities are not novel and are present in other LLMs across the industry. The findings, if anything, emphasize the need for testing models for prompt attacks, training data extraction, model manipulation, adversarial examples, data poisoning and exfiltration.

“To help protect our users from vulnerabilities, we consistently run red-teaming exercises and train our models to defend against adversarial behaviors like prompt injection, jailbreaking, and more complex attacks,” a Google spokesperson told The Hacker News. “We’ve also built safeguards to prevent harmful or misleading responses, which we are continuously improving.”

The company also said it’s restricting responses to election-based queries out of an abundance of caution. The policy is expected to be enforced against prompts regarding candidates, political parties, election results, voting information, and notable office holders.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.



[ad_2]

Source link

]]>
https://www.indiavpn.org/2024/03/13/researchers-highlight-googles-gemini-ai-susceptibility-to-llm-threats/feed/ 0
Google’s New Tracking Protection in Chrome Blocks Third-Party Cookies https://www.indiavpn.org/2023/12/26/googles-new-tracking-protection-in-chrome-blocks-third-party-cookies/ https://www.indiavpn.org/2023/12/26/googles-new-tracking-protection-in-chrome-blocks-third-party-cookies/#respond Tue, 26 Dec 2023 11:00:01 +0000 https://www.indiavpn.org/2023/12/26/googles-new-tracking-protection-in-chrome-blocks-third-party-cookies/ [ad_1]

Dec 15, 2023NewsroomPrivacy / User Tracking

Chrome Blocks Third-Party Cookies

Google on Thursday announced that it will start testing a new feature called “Tracking Protection” beginning January 4, 2024, to 1% of Chrome users as part of its efforts to deprecate third-party cookies in the web browser.

The setting is designed to limit “cross-site tracking by restricting website access to third-party cookies by default,” Anthony Chavez, vice president of Privacy Sandbox at Google, said.

The tech giant noted that participants for Tracking Protection will be selected at random and that chosen users will be notified upon opening Chrome on either a desktop or an Android device.

The goal is to restrict third-party cookies (also called “non-essential cookies”) by default, preventing them from being used to track users as they move from one website to the other for serving personalized ads.

Cybersecurity

While several major browsers like Apple Safari and Mozilla Firefox have either already placed restrictions on third-party cookies via features like Intelligent Tracking Prevention (ITP) and Enhanced Tracking Protection in Firefox, Google is taking more of a middle-ground approach that involves devising alternatives where users can access free online content and services without compromising on their privacy.

Chrome Blocks Third-Party Cookies

In mid-October 2023, Google confirmed its plans to “disable third-party cookies for 1% of users from Q1 2024 to facilitate testing, and then ramp up to 100% of users from Q3 2024.”

Privacy Sandbox, instead of providing a cross-site or cross-app user identifier, “aggregates, limits, or noises data” through APIs like Protected Audience (formerly FLEDGE), Topics, and Attribution Reporting to help prevent user re-identification.

In doing so, the goal is to block third-parties from tracking user browsing behavior across sites, while still allowing sites and apps to serve relevant ads and enabling advertisers to measure the performance of their online ads without using individual identifiers.

“With Tracking Protection, Privacy Sandbox and all of the features we launch in Chrome, we’ll continue to work to create a web that’s more private than ever, and universally accessible to everyone,” Chavez said.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.



[ad_2]

Source link

]]>
https://www.indiavpn.org/2023/12/26/googles-new-tracking-protection-in-chrome-blocks-third-party-cookies/feed/ 0