Models – INDIA NEWS https://www.indiavpn.org News Blog Mon, 04 Mar 2024 10:23:58 +0000 en-US hourly 1 https://wordpress.org/?v=6.7.1 Over 100 Malicious AI/ML Models Found on Hugging Face Platform https://www.indiavpn.org/2024/03/04/over-100-malicious-ai-ml-models-found-on-hugging-face-platform/ https://www.indiavpn.org/2024/03/04/over-100-malicious-ai-ml-models-found-on-hugging-face-platform/#respond Mon, 04 Mar 2024 10:23:58 +0000 https://www.indiavpn.org/2024/03/04/over-100-malicious-ai-ml-models-found-on-hugging-face-platform/ [ad_1]

Mar 04, 2024NewsroomAI Security / Vulnerability

Hugging Face Platform

As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform.

These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said.

“The model’s payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims’ machines through what is commonly referred to as a ‘backdoor,'” senior security researcher David Cohen said.

“This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state.”

Cybersecurity

Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93, an IP address that belongs to the Korea Research Environment Open Network (KREONET). Other repositories bearing the same payload have been observed connecting to other IP addresses.

In one case, the authors of the model urged users not to download it, raising the possibility that the publication may be the work of researchers or AI practitioners.

“However, a fundamental principle in security research is refraining from publishing real working exploits or malicious code,” JFrog said. “This principle was breached when the malicious code attempted to connect back to a genuine IP address.”

Hugging Face Platform

The findings once again underscore the threat lurking within open-source repositories, which could be poisoned for nefarious activities.

From Supply Chain Risks to Zero-click Worms

They also come as researchers have devised efficient ways to generate prompts that can be used to elicit harmful responses from large-language models (LLMs) using a technique called beam search-based adversarial attack (BEAST).

In a related development, security researchers have developed what’s known as a generative AI worm called Morris II that’s capable of stealing data and spreading malware through multiple systems.

Morris II, a twist on one of the oldest computer worms, leverages adversarial self-replicating prompts encoded into inputs such as images and text that, when processed by GenAI models, can trigger them to “replicate the input as output (replication) and engage in malicious activities (payload),” security researchers Stav Cohen, Ron Bitton, and Ben Nassi said.

Even more troublingly, the models can be weaponized to deliver malicious inputs to new applications by exploiting the connectivity within the generative AI ecosystem.

Malicious AI/ML Models

The attack technique, dubbed ComPromptMized, shares similarities with traditional approaches like buffer overflows and SQL injections owing to the fact that it embeds the code inside a query and data into regions known to hold executable code.

ComPromptMized impacts applications whose execution flow is reliant on the output of a generative AI service as well as those that use retrieval augmented generation (RAG), which combines text generation models with an information retrieval component to enrich query responses.

Cybersecurity

The study is not the first, nor will it be the last, to explore the idea of prompt injection as a way to attack LLMs and trick them into performing unintended actions.

Previously, academics have demonstrated attacks that use images and audio recordings to inject invisible “adversarial perturbations” into multi-modal LLMs that cause the model to output attacker-chosen text or instructions.

“The attacker may lure the victim to a webpage with an interesting image or send an email with an audio clip,” Nassi, along with Eugene Bagdasaryan, Tsung-Yin Hsieh, and Vitaly Shmatikov, said in a paper published late last year.

“When the victim directly inputs the image or the clip into an isolated LLM and asks questions about it, the model will be steered by attacker-injected prompts.”

Early last year, a group of researchers at Germany’s CISPA Helmholtz Center for Information Security at Saarland University and Sequire Technology also uncovered how an attacker could exploit LLM models by strategically injecting hidden prompts into data (i.e., indirect prompt injection) that the model would likely retrieve when responding to user input.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.



[ad_2]

Source link

]]>
https://www.indiavpn.org/2024/03/04/over-100-malicious-ai-ml-models-found-on-hugging-face-platform/feed/ 0
New Hugging Face Vulnerability Exposes AI Models to Supply Chain Attacks https://www.indiavpn.org/2024/02/27/new-hugging-face-vulnerability-exposes-ai-models-to-supply-chain-attacks/ https://www.indiavpn.org/2024/02/27/new-hugging-face-vulnerability-exposes-ai-models-to-supply-chain-attacks/#respond Tue, 27 Feb 2024 18:18:24 +0000 https://www.indiavpn.org/2024/02/27/new-hugging-face-vulnerability-exposes-ai-models-to-supply-chain-attacks/ [ad_1]

Feb 27, 2024NewsroomSupply Chain Attack / Data Security

Hugging Face Vulnerability

Cybersecurity researchers have found that it’s possible to compromise the Hugging Face Safetensors conversion service to ultimately hijack the models submitted by users and result in supply chain attacks.

“It’s possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform, as well as hijack any models that are submitted through the conversion service,” HiddenLayer said in a report published last week.

This, in turn, can be accomplished using a hijacked model that’s meant to be converted by the service, thereby allowing malicious actors to request changes to any repository on the platform by masquerading as the conversion bot.

Hugging Face is a popular collaboration platform that helps users host pre-trained machine learning models and datasets, as well as build, deploy, and train them.

Safetensors is a format devised by the company to store tensors keeping security in mind, as opposed to pickles, which has been likely weaponized by threat actors to execute arbitrary code and deploy Cobalt Strike, Mythic, and Metasploit stagers.

Cybersecurity

It also comes with a conversion service that enables users to convert any PyTorch model (i.e., pickle) to its Safetensor equivalent via a pull request.

HiddenLayer’s analysis of this module found that it’s hypothetically possible for an attacker to hijack the hosted conversion service using a malicious PyTorch binary and compromise the system hosting it.

What’s more, the token associated with SFConvertbot – an official bot designed to generate the pull request – could be exfiltrated to send a malicious pull request to any repository on the site, leading to a scenario where a threat actor could tamper with the model and implant neural backdoors.

“An attacker could run any arbitrary code any time someone attempted to convert their model,” researchers Eoin Wickens and Kasimir Schulz noted. “Without any indication to the user themselves, their models could be hijacked upon conversion.”

Should a user attempt to convert their own private repository, the attack could pave the way for the theft of their Hugging Face token, access otherwise internal models and datasets, and even poison them.

Complicating matters further, an adversary could take advantage of the fact that any user can submit a conversion request for a public repository to hijack or alter a widely used model, potentially resulting in a considerable supply chain risk.

“Despite the best intentions to secure machine learning models in the Hugging Face ecosystem, the conversion service has proven to be vulnerable and has had the potential to cause a widespread supply chain attack via the Hugging Face official service,” the researchers said.

Cybersecurity

“An attacker could gain a foothold into the container running the service and compromise any model converted by the service.”

The development comes a little over a month after Trail of Bits disclosed LeftoverLocals (CVE-2023-4969, CVSS score: 6.5), a vulnerability that allows recovery of data from Apple, Qualcomm, AMD, and Imagination general-purpose graphics processing units (GPGPUs).

The memory leak flaw, which stems from a failure to adequately isolate process memory, enables a local attacker to read memory from other processes, including another user’s interactive session with a large language model (LLM).

“This data leaking can have severe security consequences, especially given the rise of ML systems, where local memory is used to store model inputs, outputs, and weights,” security researchers Tyler Sorensen and Heidy Khlaaf said.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.



[ad_2]

Source link

]]>
https://www.indiavpn.org/2024/02/27/new-hugging-face-vulnerability-exposes-ai-models-to-supply-chain-attacks/feed/ 0