AI-as-a-Service Providers Vulnerable to PrivEsc and Cross-Tenant Attacks

Apr 05, 2024 | Newsroom | Artificial Intelligence / Supply Chain Attack


New research has found that artificial intelligence (AI)-as-a-service providers such as Hugging Face are susceptible to two critical risks that could allow threat actors to escalate privileges, gain cross-tenant access to other customers’ models, and even take over the continuous integration and continuous deployment (CI/CD) pipelines.

“Malicious models represent a major risk to AI systems, especially for AI-as-a-service providers because potential attackers may leverage these models to perform cross-tenant attacks,” Wiz researchers Shir Tamari and Sagi Tzadik said.

“The potential impact is devastating, as attackers may be able to access the millions of private AI models and apps stored within AI-as-a-service providers.”

The development comes as machine learning pipelines have emerged as a brand new supply chain attack vector, with repositories like Hugging Face becoming an attractive target for staging adversarial attacks designed to glean sensitive information and access target environments.

The threats are two-pronged, arising from shared inference infrastructure takeover and shared CI/CD takeover. They make it possible to run untrusted models uploaded to the service in pickle format and to take over the CI/CD pipeline to perform a supply chain attack.

The findings from the cloud security firm show that it's possible to breach the service running the custom models by uploading a rogue model and leveraging container escape techniques to break out of its own tenant and compromise the entire service, effectively enabling threat actors to obtain cross-tenant access to other customers' models stored and run in Hugging Face.


“Hugging Face will still let the user infer the uploaded Pickle-based model on the platform’s infrastructure, even when deemed dangerous,” the researchers elaborated.

This essentially permits an attacker to craft a PyTorch (Pickle) model with arbitrary code execution capabilities upon loading and chain it with misconfigurations in the Amazon Elastic Kubernetes Service (EKS) to obtain elevated privileges and laterally move within the cluster.
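To illustrate the underlying flaw (this is a minimal standalone sketch, not Hugging Face-specific or attacker code), the snippet below shows how any pickle payload can execute arbitrary code the moment it is deserialized, via Python's `__reduce__` hook; a harmless environment-variable write stands in for a real payload:

```python
import os
import pickle


class MaliciousModel:
    """Stand-in for a trojanized model file. __reduce__ lets any pickled
    object name a callable for pickle to invoke at load time; a real
    attacker would call os.system or similar instead of exec-ing a
    harmless statement."""

    def __reduce__(self):
        # Harmless proof of execution: set an environment variable.
        return (exec, ("import os; os.environ['PICKLE_RAN'] = '1'",))


blob = pickle.dumps(MaliciousModel())

# Merely *loading* the bytes runs the payload; no method call is needed.
pickle.loads(blob)
print(os.environ.get("PICKLE_RAN"))  # -> 1
```

This is why a pickle-format model is effectively a program: loading it is equivalent to running it.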

“The secrets we obtained could have had a significant impact on the platform if they were in the hands of a malicious actor,” the researchers said. “Secrets within shared environments may often lead to cross-tenant access and sensitive data leakage.”

To mitigate the issue, it's recommended to enable IMDSv2 with a hop limit so as to prevent pods from accessing the Instance Metadata Service (IMDS) and obtaining the role of a node within the cluster.
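As a hedged sketch of that mitigation: the snippet below builds the parameter set for boto3's `modify_instance_metadata_options` EC2 call, enforcing IMDSv2 session tokens and capping the token's hop limit at 1. The instance ID is a placeholder, and the actual AWS call is left commented out since it needs credentials:

```python
def imds_hardening_options(instance_id: str) -> dict:
    """Parameters that enforce IMDSv2 and cap the metadata token's hop
    limit at 1, so workloads an extra network hop away (such as pods and
    containers) cannot fetch the node's credentials."""
    return {
        "InstanceId": instance_id,
        "HttpTokens": "required",      # IMDSv2 only: session token mandatory
        "HttpPutResponseHopLimit": 1,  # token never survives a second hop
        "HttpEndpoint": "enabled",
    }


# Applying it requires boto3 and AWS credentials (intentionally not done here):
#   boto3.client("ec2").modify_instance_metadata_options(
#       **imds_hardening_options("i-0123456789abcdef0"))
print(imds_hardening_options("i-0123456789abcdef0")["HttpPutResponseHopLimit"])  # -> 1
```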

The research also found that it’s possible to achieve remote code execution via a specially crafted Dockerfile when running an application on the Hugging Face Spaces service, and use it to pull and push (i.e., overwrite) all the images that are available on an internal container registry.

Following coordinated disclosure, Hugging Face said it has addressed all the identified issues. It's also urging users to employ models only from trusted sources, enable multi-factor authentication (MFA), and refrain from using pickle files in production environments.

“This research demonstrates that utilizing untrusted AI models (especially Pickle-based ones) could result in serious security consequences,” the researchers said. “Furthermore, if you intend to let users utilize untrusted AI models in your environment, it is extremely important to ensure that they are running in a sandboxed environment.”
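One defense-in-depth layer consistent with that advice, though no substitute for a full sandbox, is a restricted unpickler that refuses to resolve any global outside an explicit allowlist; a minimal sketch follows, with an allowlist chosen purely for illustration:

```python
import io
import pickle


class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not on an explicit allowlist,
    blocking payloads that need to reach os, exec, subprocess, etc."""

    ALLOWED = {("collections", "OrderedDict")}  # extend per model format

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()


# Plain data structures still load fine...
print(restricted_loads(pickle.dumps({"weights": [0.1, 0.2]})))

# ...but anything referencing a disallowed global is rejected.
try:
    restricted_loads(pickle.dumps(print))  # builtins.print isn't allowlisted
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

This raises the bar against commodity pickle payloads but, as the researchers stress, untrusted models should still run in an isolated environment.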

The disclosure follows separate research from Lasso Security, which found that generative AI models like OpenAI ChatGPT and Google Gemini can distribute malicious (and non-existent) code packages to unsuspecting software developers.


In other words, the idea is to find an unpublished package name that the models recommend and publish a trojanized package under that name in order to propagate the malware. This phenomenon of AI package hallucinations underscores the need to exercise caution when relying on large language models (LLMs) for coding solutions.
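A simple pre-install guard follows from this: before trusting an LLM-suggested dependency, confirm the name actually exists in the registry. The sketch below is illustrative; the fetch function is injected so the demo runs offline, and in practice it would perform an HTTP GET against PyPI's JSON API:

```python
import json
from typing import Callable, Optional


def package_exists(name: str, fetch: Callable[[str], Optional[str]]) -> bool:
    """Return True only if the registry actually knows the package.
    `fetch` takes a URL and returns the response body, or None for a
    miss -- exactly the unpublished-name gap that squatters exploit."""
    body = fetch(f"https://pypi.org/pypi/{name}/json")  # PyPI's JSON API
    return body is not None and "info" in json.loads(body)


# Offline demo: a fake registry stands in for real HTTP calls.
fake_index = {
    "https://pypi.org/pypi/requests/json": '{"info": {"name": "requests"}}',
}
fetch = fake_index.get

print(package_exists("requests", fetch))             # -> True
print(package_exists("totally-made-up-pkg", fetch))  # -> False
```

Existence alone doesn't prove a package is safe (the attacker may have already squatted the name), so this check belongs alongside provenance review, not in place of it.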

AI company Anthropic, for its part, has also detailed a new method called “many-shot jailbreaking” that can be used to bypass safety protections built into LLMs to produce responses to potentially harmful queries by taking advantage of the models’ context window.

“The ability to input increasingly-large amounts of information has obvious advantages for LLM users, but it also comes with risks: vulnerabilities to jailbreaks that exploit the longer context window,” the company said earlier this week.

The technique, in a nutshell, involves introducing a large number of faux dialogues between a human and an AI assistant within a single prompt in an attempt to "steer model behavior" and elicit responses to queries the model would otherwise refuse (e.g., "How do I build a bomb?").
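Structurally, a many-shot prompt is just many faux exchanges concatenated ahead of the final query, as the (entirely benign) sketch below illustrates; the dialogue contents here are placeholders:

```python
def many_shot_prompt(dialogues: list[tuple[str, str]], final_question: str) -> str:
    """Concatenate many faux human/assistant exchanges ahead of the real
    query; Anthropic reports the steering effect grows with shot count,
    reaching into the hundreds for long-context models."""
    shots = "\n".join(f"Human: {q}\nAssistant: {a}" for q, a in dialogues)
    return f"{shots}\nHuman: {final_question}\nAssistant:"


# Entirely benign placeholders: the point is the *shape* of the prompt.
demo = many_shot_prompt(
    [("What is 2+2?", "4"), ("Capital of France?", "Paris")] * 3,
    "What is the boiling point of water?",
)
print(demo.count("Human:"))  # -> 7 (six faux turns plus the final query)
```

The attack scales with the context window precisely because nothing stops an adversary from packing thousands of such turns into one prompt.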

New CherryLoader Malware Mimics CherryTree to Deploy PrivEsc Exploits

Jan 25, 2024 | Newsroom | Threat Intelligence / Malware Research


A new Go-based malware loader called CherryLoader has been discovered in the wild by threat hunters; it's used to deliver additional payloads onto compromised hosts for follow-on exploitation.

Arctic Wolf Labs, which discovered the new attack tool in two recent intrusions, said the loader's icon and name masquerade as the legitimate CherryTree note-taking application to dupe potential victims into installing it.

“CherryLoader was used to drop one of two privilege escalation tools, PrintSpoofer or JuicyPotatoNG, which would then run a batch file to establish persistence on the victim device,” researchers Hady Azzam, Christopher Prest, and Steven Campbell said.

In another novel twist, CherryLoader also packs modularized features that allow the threat actor to swap exploits without recompiling code.


It’s currently not known how the loader is distributed, but the attack chains examined by the cybersecurity firm show that CherryLoader (“cherrytree.exe”) and its associated files (“NuxtSharp.Data,” “Spof.Data,” and “Juicy.Data”) are contained within a RAR archive file (“Packed.rar”) hosted on the IP address 141.11.187[.]70.

Downloaded along with the RAR file is an executable (“main.exe”) that’s used to unpack and launch the Golang binary, which only proceeds if the first argument passed to it matches a hard-coded MD5 password hash.
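That gate can be sketched as follows; the password and digest below are invented for illustration (the loader's actual values aren't given here), but the mechanism, hashing the first argument and comparing it to a hard-coded MD5 digest, matches the one described:

```python
import hashlib
import sys

# Invented for illustration: md5("letmein"); the real loader's digest
# and password are not published in this article.
EXPECTED_MD5 = "0d107d09f5bbe40cade3de5c71e9e9b7"


def gate(argument: str) -> bool:
    """Proceed only when the supplied argument hashes to the hard-coded
    digest, mirroring the anti-analysis check Arctic Wolf describes."""
    return hashlib.md5(argument.encode()).hexdigest() == EXPECTED_MD5


if __name__ == "__main__":
    arg = sys.argv[1] if len(sys.argv) > 1 else ""
    print("unpacking payload stages" if gate(arg) else "wrong password; exiting")
```

Checks like this frustrate sandboxes and analysts who run the sample without the expected command-line password.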

The loader subsequently decrypts “NuxtSharp.Data” and writes its contents to a file named “File.log” on disk that, in turn, is designed to decode and run “Spof.Data” as “12.log” using a fileless technique known as process ghosting that first came to light in June 2021.

“This technique is modular in design and will allow the threat actor to leverage other exploit code in place of Spof.Data,” the researchers said. “In this case, Juicy.Data which contains a different exploit, can be swapped in place without recompiling File.log.”


The process associated with “12.log” is linked to an open-source privilege escalation tool named PrintSpoofer, while “Juicy.Data” is another privilege escalation tool named JuicyPotatoNG.

A successful privilege escalation is followed by the execution of a batch file script called “user.bat” to set up persistence on the host and disarm Microsoft Defender.

“CherryLoader is [a] newly identified multi-stage downloader that leverages different encryption methods and other anti-analysis techniques in an attempt to detonate alternative, publicly available privilege escalation exploits without having to recompile any code,” the researchers concluded.
