Third-Party ChatGPT Plugins Could Lead to Account Takeovers

Mar 15, 2024 | Newsroom | Data Privacy / Artificial Intelligence


Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data.

According to new research published by Salt Labs, security flaws found directly in ChatGPT and within its plugin ecosystem could allow attackers to install malicious plugins without users’ consent and hijack accounts on third-party websites like GitHub.

ChatGPT plugins, as the name implies, are tools designed to run on top of the large language model (LLM) with the aim of accessing up-to-date information, running computations, or accessing third-party services.

OpenAI has since also introduced GPTs, which are bespoke versions of ChatGPT tailored for specific use cases, while reducing third-party service dependencies. As of March 19, 2024, ChatGPT users will no longer be able to install new plugins or create new conversations with existing plugins.

One of the flaws unearthed by Salt Labs involves exploiting the OAuth workflow to trick a user into installing an arbitrary plugin by taking advantage of the fact that ChatGPT doesn’t validate that the user indeed started the plugin installation.

This effectively could allow threat actors to intercept and exfiltrate all data shared by the victim, which may contain proprietary information.
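
To illustrate the class of bug, consider how a standard OAuth approval flow binds an installation to the user who actually started it. The sketch below is illustrative only, with a hypothetical session store and invented helper names; it is not OpenAI's implementation, but it shows the check whose absence Salt Labs describes.

```python
import hmac
import secrets

# In-memory store mapping session IDs to pending OAuth state tokens.
# A real deployment would use server-side session storage.
_pending_states: dict[str, str] = {}

def begin_plugin_install(session_id: str) -> str:
    """Start an install: mint a one-time state token tied to this session."""
    state = secrets.token_urlsafe(32)
    _pending_states[session_id] = state
    return state  # embedded in the authorization URL as ?state=...

def finish_plugin_install(session_id: str, returned_state: str) -> bool:
    """Complete the install only if this user actually started it.

    Omitting this check is the flaw described above: an attacker can
    send a victim a callback link carrying the attacker's approval
    code, and the plugin installs without the victim's consent.
    """
    expected = _pending_states.pop(session_id, None)
    if expected is None:
        return False  # no install was initiated from this session
    return hmac.compare_digest(expected, returned_state)
```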


The cybersecurity firm also unearthed issues with PluginLab that could be weaponized by threat actors to conduct zero-click account takeover attacks, allowing them to gain control of an organization’s account on third-party websites like GitHub and access their source code repositories.

“‘auth.pluginlab[.]ai/oauth/authorized’ does not authenticate the request, which means that the attacker can insert another memberId (aka the victim) and get a code that represents the victim,” security researcher Aviad Carmel explained. “With that code, he can use ChatGPT and access the GitHub of the victim.”

The memberId of the victim can be obtained by querying the endpoint “auth.pluginlab[.]ai/members/requestMagicEmailCode.” There is no evidence that any user data has been compromised using the flaw.
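
As a rough illustration of the missing check, an authorization endpoint should derive the member identity from the authenticated session rather than trusting an identifier supplied in the request. The snippet below is a hypothetical sketch, not PluginLab's code; all names are invented for illustration.

```python
from dataclasses import dataclass
import secrets

@dataclass
class Session:
    member_id: str  # identity established by the auth layer, not by the client

def issue_oauth_code(session: Session, requested_member_id: str) -> str:
    """Issue an authorization code only for the authenticated member.

    The reported flaw was the absence of this comparison: the endpoint
    accepted any memberId in the request, so an attacker could obtain
    a code representing an arbitrary victim and, through the plugin,
    reach that victim's GitHub repositories.
    """
    if requested_member_id != session.member_id:
        raise PermissionError("memberId does not match the authenticated session")
    # Placeholder for real code issuance (signing, expiry, audience checks).
    return f"{session.member_id}:{secrets.token_urlsafe(16)}"
```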

Also discovered in several plugins, including Kesem AI, is an OAuth redirection manipulation bug that could permit an attacker to steal the account credentials associated with the plugin itself by sending a specially crafted link to the victim.
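
The standard defense against this class of bug is strict, exact-match validation of the redirect_uri against the values the client registered. A minimal sketch follows, with a hypothetical registered URI; it is not the affected plugins' code.

```python
from urllib.parse import urlparse

# Exact redirect URIs registered by the plugin client at setup time
# (the URI below is a hypothetical example).
REGISTERED_REDIRECTS = {"https://plugin.example.com/oauth/callback"}

def validate_redirect_uri(redirect_uri: str) -> str:
    """Send authorization codes only to pre-registered destinations.

    Accepting an arbitrary or partially matched redirect_uri lets an
    attacker craft a link that completes the OAuth flow but delivers
    the victim's credentials to a server the attacker controls.
    """
    if urlparse(redirect_uri).scheme != "https":
        raise ValueError("redirect_uri must use HTTPS")
    if redirect_uri not in REGISTERED_REDIRECTS:
        raise ValueError("redirect_uri is not registered for this client")
    return redirect_uri
```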

The development comes weeks after Imperva detailed two cross-site scripting (XSS) vulnerabilities in ChatGPT that could be chained to seize control of any account.

In December 2023, security researcher Johann Rehberger demonstrated how malicious actors could create custom GPTs that can phish for user credentials and transmit the stolen data to an external server.

New Remote Keylogging Attack on AI Assistants

The findings also follow new research published this week about an LLM side-channel attack that uses token length as a covert means to extract encrypted responses from AI assistants over the web.

“LLMs generate and send responses as a series of tokens (akin to words), with each token transmitted from the server to the user as it is generated,” a group of academics from Ben-Gurion University’s Offensive AI Research Lab said.

“While this process is encrypted, the sequential token transmission exposes a new side-channel: the token-length side-channel. Despite encryption, the size of the packets can reveal the length of the tokens, potentially allowing attackers on the network to infer sensitive and confidential information shared in private AI assistant conversations.”


This is accomplished by means of a token inference attack that’s designed to decipher responses in encrypted traffic by training a language model capable of translating token-length sequences into their natural language sentential counterparts (i.e., plaintext).

In other words, the core idea is to intercept the real-time chat responses with an LLM provider, use the network packet headers to infer the length of each token, extract and parse text segments, and leverage the custom LLM to infer the response.
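
A simplified sketch of the length-recovery step is shown below, under the assumption that the service streams one token per packet with a fixed per-record framing overhead; the published attack handles messier real-world traffic, and the overhead value must be estimated per service.

```python
def token_lengths(packet_sizes: list[int], overhead: int) -> list[int]:
    """Map observed ciphertext sizes to per-token lengths.

    Assumes one streamed token per packet and a fixed framing overhead
    per record; both assumptions are simplifications, and real captures
    need TCP reassembly and filtering first.
    """
    return [size - overhead for size in packet_sizes]

# With an estimated 82 bytes of overhead, four captured packets of
# 87, 84, 89, and 85 bytes yield token lengths 5, 2, 7, and 3. That
# length sequence is what gets fed to the attacker's trained model
# to guess the plaintext response.
print(token_lengths([87, 84, 89, 85], overhead=82))  # [5, 2, 7, 3]
```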


Two key prerequisites to pulling off the attack are an AI chat client running in streaming mode and an adversary who is capable of capturing network traffic between the client and the AI chatbot.

To counteract the effectiveness of the side-channel attack, it’s recommended that companies that develop AI assistants apply random padding to obscure the actual length of tokens, transmit tokens in larger groups rather than individually, and send complete responses at once, instead of in a token-by-token fashion.
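
The padding countermeasure can be sketched as follows; this is a minimal illustration under the assumption that each token is padded before encryption, not the researchers' reference code.

```python
import secrets

def pad_token(token: bytes, block: int = 32) -> bytes:
    """Pad a token to a block boundary before encryption so that the
    ciphertext size no longer tracks the token's true length."""
    padded_len = -(-len(token) // block) * block  # round up to a block multiple
    header = len(token).to_bytes(2, "big")        # true length rides inside the ciphertext
    return header + token + secrets.token_bytes(padded_len - len(token))

def unpad_token(padded: bytes) -> bytes:
    """Recover the original token after decryption."""
    real_len = int.from_bytes(padded[:2], "big")
    return padded[2 : 2 + real_len]
```

Grouping several tokens per packet, or returning the complete response in a single message, removes the per-token signal entirely, at the cost of streaming responsiveness.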

“Balancing security with usability and performance presents a complex challenge that requires careful consideration,” the researchers concluded.

Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

Mar 05, 2024 | Newsroom | Malware / Artificial Intelligence


More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show.

These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware.

“The number of infected devices decreased slightly in mid- and late summer but grew significantly between August and September,” the Singapore-headquartered cybersecurity company said in its Hi-Tech Crime Trends 2023/2024 report published last week.

Cybersecurity

Between June and October 2023, more than 130,000 unique hosts with access to OpenAI ChatGPT were infiltrated, a 36% increase over what was observed during the first five months of 2023. The breakdown by the top three stealer families is below:

  • LummaC2 – 70,484 hosts
  • Raccoon – 22,468 hosts
  • RedLine – 15,970 hosts

“The sharp increase in the number of ChatGPT credentials for sale is due to the overall rise in the number of hosts infected with information stealers, data from which is then put up for sale on markets or in UCLs [underground clouds of logs],” Group-IB said.

The development comes as Microsoft and OpenAI revealed that nation-state actors from Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations.


Stating that LLMs can be used by adversaries to brainstorm new tradecraft, craft convincing scam and phishing attacks, and improve operational productivity, Group-IB said the technology could also speed up reconnaissance, help execute hacking toolkits, and place scam robocalls.

“In the past, [threat actors] were mainly interested in corporate computers and in systems with access that enabled movement across the network,” it noted. “Now, they also focus on devices with access to public AI systems.


“This gives them access to logs with the communication history between employees and systems, which they can use to search for confidential information (for espionage purposes), details about internal infrastructure, authentication data (for conducting even more damaging attacks), and information about application source code.”

Abuse of valid account credentials by threat actors has emerged as a top access technique, primarily fueled by the easy availability of such information via stealer malware.

“The combination of a rise in infostealers and the abuse of valid account credentials to gain initial access has exacerbated defenders’ identity and access management challenges,” IBM X-Force said.

“Enterprise credential data can be stolen from compromised devices through credential reuse, browser credential stores or accessing enterprise accounts directly from personal devices.”

Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Jan 30, 2024 | Newsroom | Generative AI / Data Privacy


Italy’s data protection authority (DPA) has notified ChatGPT-maker OpenAI of allegedly violating privacy laws in the region.

“The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation],” the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday.

It also said it will “take account of the work in progress within the ad-hoc task force set up by the European Data Protection Board (EDPB) in its final determination on the case.”

The development comes nearly 10 months after the watchdog imposed a temporary ban on ChatGPT in the country, weeks after which OpenAI announced a number of privacy controls, including an opt-out form to remove one’s personal data from being processed by the large language model (LLM). Access to the tool was subsequently reinstated in late April 2023.


The Italian DPA said the latest findings, which have not been publicly disclosed, are the result of a multi-month investigation that was initiated at the same time. OpenAI has been given 30 days to respond to the allegations.

The BBC reported that the transgressions are related to the collection of personal data and to age protections. OpenAI, in its help page, says that “ChatGPT is not meant for children under 13, and we require that children ages 13 to 18 obtain parental consent before using ChatGPT.”

But there are also concerns that sensitive information could be exposed and that younger users may encounter inappropriate content generated by the chatbot.

Indeed, Ars Technica reported this week that ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users who are said to be employees of a pharmacy prescription drug portal.

Then in September 2023, Google’s Bard chatbot was found to have a bug in the sharing feature that allowed private chats to be indexed by Google search, inadvertently exposing sensitive information that may have been shared in the conversations.

Generative artificial intelligence tools like ChatGPT, Bard, and Anthropic Claude rely on being fed large amounts of data from multiple sources on the internet.

In a statement shared with TechCrunch, OpenAI said its “practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy.”

Apple Warns Against Proposed U.K. Law

The development comes as Apple said it’s “deeply concerned” that proposed amendments to the U.K. Investigatory Powers Act (IPA) could give the government unprecedented power to “secretly veto” privacy and security updates to its products and services.

“It’s an unprecedented overreach by the government and, if enacted, the U.K. could attempt to secretly veto new user protections globally, preventing us from ever offering them to customers,” the tech giant told the BBC.

The U.K. Home Office said adopting secure communications technologies, including end-to-end encryption, cannot come at the cost of public safety or of protecting the nation from child sexual abusers and terrorists.


The changes are aimed at improving the intelligence services’ ability to “respond with greater agility and speed to existing and emerging threats to national security.”

Specifically, they require technology companies that field government data requests to notify the U.K. government of any technical changes that could affect their “existing lawful access capabilities.”

“A key driver for this amendment is to give operational partners time to understand the change and adapt their investigative techniques where necessary, which may in some circumstances be all that is required to maintain lawful access,” the government notes in a fact sheet, adding “it does not provide powers for the Secretary of State to approve or refuse technical changes.”

Apple, in July 2023, said it would rather stop offering iMessage and FaceTime services in the U.K. than compromise on users’ privacy and security.
