WordPress Admins Urged to Remove miniOrange Plugins Due to Critical Flaw

Mar 18, 2024 | Newsroom | Website Security / Vulnerability


WordPress users of miniOrange’s Malware Scanner and Web Application Firewall plugins are being urged to delete them from their websites following the discovery of a critical security flaw.

The flaw, tracked as CVE-2024-2172, is rated 9.8 out of a maximum of 10 on the CVSS scoring system and impacts both the Malware Scanner and Web Application Firewall plugins.

Both plugins were permanently closed by their maintainers as of March 7, 2024. Malware Scanner has over 10,000 active installations, while Web Application Firewall has more than 300.

“This vulnerability makes it possible for an unauthenticated attacker to grant themselves administrative privileges by updating the user password,” Wordfence reported last week.


The issue is the result of a missing capability check in the function mo_wpns_init() that enables an unauthenticated attacker to arbitrarily update any user’s password and escalate their privileges to that of an administrator, potentially leading to a complete compromise of the site.
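The flawed code is in the plugins' PHP and is not reproduced here. Purely to illustrate the pattern of a missing authorization check on a password-update handler, the hypothetical Python sketch below shows a vulnerable endpoint alongside a variant that verifies the requester first; the framework, routes, and data are placeholders, not the plugins' actual implementation.

```python
# Hypothetical sketch (not miniOrange's code): a password-update handler that
# mirrors the reported class of bug -- the caller's identity and permissions
# are never checked before the password is changed.
from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = "dev-only-secret"  # placeholder for the sketch
users = {"admin": {"password": "old-secret", "role": "administrator"}}

@app.route("/update-password", methods=["POST"])
def update_password_vulnerable():
    # VULNERABLE: any unauthenticated visitor can reset any account's password
    # and then log in as that account, including an administrator.
    username = request.form["username"]
    users[username]["password"] = request.form["new_password"]
    return "password updated"

@app.route("/update-password-fixed", methods=["POST"])
def update_password_fixed():
    # FIX: require a logged-in user and only let them change their own password,
    # analogous to a capability/nonce check on a WordPress handler.
    current = session.get("username")
    if current is None or current != request.form["username"]:
        abort(403)
    users[current]["password"] = request.form["new_password"]
    return "password updated"
```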

“Once an attacker has gained administrative user access to a WordPress site they can then manipulate anything on the targeted site as a normal administrator would,” Wordfence said.

“This includes the ability to upload plugin and theme files, which can be malicious zip files containing backdoors, and modify posts and pages which can be leveraged to redirect site users to other malicious sites or inject spam content.”

The development comes as the WordPress security company warned of a similar high-severity privilege escalation flaw in the RegistrationMagic plugin (CVE-2024-1991, CVSS score: 8.8) affecting all versions up to and including 5.3.0.0.

The issue, addressed on March 11, 2024, with the release of version 5.3.1.0, permits an authenticated attacker to grant themselves administrative privileges by updating the user role. The plugin has more than 10,000 active installations.

“This vulnerability allows authenticated threat actors with subscriber-level permissions or higher to elevate their privileges to that of a site administrator, which could ultimately lead to complete site compromise,” Wordfence researcher István Márton said.

Third-Party ChatGPT Plugins Could Lead to Account Takeovers

Mar 15, 2024 | Newsroom | Data Privacy / Artificial Intelligence


Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data.

According to new research published by Salt Labs, security flaws found directly in ChatGPT and within the ecosystem could allow attackers to install malicious plugins without users’ consent and hijack accounts on third-party websites like GitHub.

ChatGPT plugins, as the name implies, are tools designed to run on top of the large language model (LLM) with the aim of accessing up-to-date information, running computations, or accessing third-party services.

OpenAI has since also introduced GPTs, which are bespoke versions of ChatGPT tailored for specific use cases, while reducing third-party service dependencies. As of March 19, 2024, ChatGPT users will no longer be able to install new plugins or create new conversations with existing plugins.

One of the flaws unearthed by Salt Labs involves exploiting the OAuth workflow to trick a user into installing an arbitrary plugin by taking advantage of the fact that ChatGPT doesn’t validate that the user indeed started the plugin installation.

This effectively could allow threat actors to intercept and exfiltrate all data shared by the victim, which may contain proprietary information.
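As shown in the sketch below, the standard defense against this class of issue is to bind the installation flow to the user who started it, typically with a per-session OAuth state value generated when the flow begins and verified on the callback. This is offered as general background on OAuth hardening, not as Salt Labs' proof of concept or OpenAI's actual fix; the endpoints and names are hypothetical.

```python
# Illustrative OAuth "state" check that ensures the callback belongs to a flow
# this user actually started. Endpoints and URLs are made up for illustration.
import secrets
from flask import Flask, request, session, abort, redirect

app = Flask(__name__)
app.secret_key = "dev-only-secret"  # placeholder

@app.route("/install-plugin/start")
def start_install():
    # Generate an unguessable value tied to this session and carry it through
    # the authorization request.
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return redirect(f"https://auth.example/authorize?state={state}")

@app.route("/install-plugin/callback")
def finish_install():
    # Reject the callback unless it returns the state this session created,
    # i.e. unless this user initiated the installation themselves.
    if request.args.get("state") != session.pop("oauth_state", None):
        abort(403)
    return "plugin installed for the requesting user only"
```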


The cybersecurity firm also unearthed issues with PluginLab that could be weaponized by threat actors to conduct zero-click account takeover attacks, allowing them to gain control of an organization’s account on third-party websites like GitHub and access their source code repositories.

“‘auth.pluginlab[.]ai/oauth/authorized’ does not authenticate the request, which means that the attacker can insert another memberId (aka the victim) and get a code that represents the victim,” security researcher Aviad Carmel explained. “With that code, he can use ChatGPT and access the GitHub of the victim.”

The memberId of the victim can be obtained by querying the endpoint “auth.pluginlab[.]ai/members/requestMagicEmailCode.” There is no evidence that any user data has been compromised using the flaw.

Also discovered in several plugins, including Kesem AI, is an OAuth redirection manipulation bug that could permit an attacker to steal the account credentials associated with the plugin itself by sending a specially crafted link to the victim.
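A common defense against this kind of OAuth redirection manipulation, noted here as general guidance rather than as any vendor's documented fix, is to validate the requested redirect target against an exact-match allowlist before issuing codes or tokens. A minimal sketch with made-up domains:

```python
# Hypothetical exact-match redirect_uri validation; the allowlisted entries
# are invented for illustration.
from urllib.parse import urlsplit

ALLOWED_REDIRECTS = {
    ("https", "plugin.example.com", "/oauth/callback"),
}

def is_allowed_redirect(redirect_uri: str) -> bool:
    parts = urlsplit(redirect_uri)
    # Exact scheme/host/path comparison; no prefix or substring matching,
    # which is what crafted links typically abuse to leak codes elsewhere.
    return (parts.scheme, parts.hostname, parts.path) in ALLOWED_REDIRECTS

assert is_allowed_redirect("https://plugin.example.com/oauth/callback")
assert not is_allowed_redirect("https://plugin.example.com.attacker.net/oauth/callback")
```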

The development comes weeks after Imperva detailed two cross-site scripting (XSS) vulnerabilities in ChatGPT that could be chained to seize control of any account.

In December 2023, security researcher Johann Rehberger demonstrated how malicious actors could create custom GPTs that can phish for user credentials and transmit the stolen data to an external server.

New Remote Keylogging Attack on AI Assistants

The findings also follow new research published this week about an LLM side-channel attack that uses token length as a covert channel to recover encrypted responses from AI assistants over the web.

“LLMs generate and send responses as a series of tokens (akin to words), with each token transmitted from the server to the user as it is generated,” a group of academics from Ben-Gurion University and the Offensive AI Research Lab said.

“While this process is encrypted, the sequential token transmission exposes a new side-channel: the token-length side-channel. Despite encryption, the size of the packets can reveal the length of the tokens, potentially allowing attackers on the network to infer sensitive and confidential information shared in private AI assistant conversations.”


This is accomplished by means of a token inference attack that’s designed to decipher responses in encrypted traffic by training a language model to translate token-length sequences into their natural-language counterparts (i.e., plaintext).

In other words, the core idea is to intercept the real-time chat responses with an LLM provider, use the network packet headers to infer the length of each token, extract and parse text segments, and leverage the custom LLM to infer the response.
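To make the mechanics concrete, the simplified sketch below shows how an on-path observer could turn a sequence of observed streaming payload sizes into approximate token lengths, the input to the researchers' inference step. The numbers are invented, the sketch assumes the service streams the cumulative response so far, and real traffic also carries protocol and TLS overhead.

```python
# Simplified illustration of the token-length side channel: when a service
# streams the cumulative response, consecutive payload sizes differ by roughly
# the length of the newly generated token. Sizes below are invented.
observed_payload_sizes = [5, 9, 12, 20, 23]  # bytes per streamed message

def token_lengths(sizes: list[int]) -> list[int]:
    lengths = [sizes[0]]
    for prev, curr in zip(sizes, sizes[1:]):
        lengths.append(curr - prev)
    return lengths

print(token_lengths(observed_payload_sizes))  # -> [5, 4, 3, 8, 3]
```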


Two key prerequisites to pulling off the attack are an AI chat client running in streaming mode and an adversary who is capable of capturing network traffic between the client and the AI chatbot.

To counteract the effectiveness of the side-channel attack, it’s recommended that companies that develop AI assistants apply random padding to obscure the actual length of tokens, transmit tokens in larger groups rather than individually, and send complete responses at once, instead of in a token-by-token fashion.
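As a rough sketch of the first of those mitigations, and not a description of any vendor's implementation, the snippet below pads every streamed chunk up to a fixed bucket size so its on-the-wire length no longer tracks the token length; the bucket size and padding scheme are chosen arbitrarily for illustration.

```python
# Illustrative padding mitigation: round each chunk up to a multiple of
# BUCKET bytes before transmission so packet sizes leak far less about
# individual token lengths. BUCKET is an arbitrary choice for the sketch.
BUCKET = 32

def pad_chunk(token_bytes: bytes) -> bytes:
    padded_len = -(-len(token_bytes) // BUCKET) * BUCKET  # ceil to bucket
    # A real scheme would also encode the true length so the receiver can
    # strip the padding after decryption.
    return token_bytes + b"\x00" * (padded_len - len(token_bytes))

for token in [b"Hello", b",", b" world", b"!"]:
    print(len(pad_chunk(token)))  # every chunk is 32 bytes on the wire
```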

“Balancing security with usability and performance presents a complex challenge that requires careful consideration,” the researchers concluded.
