Shedding Light on Cybersecurity’s Unseen Threats

Apr 16, 2024 | The Hacker News | Cloud Security / Threat Intelligence


In today’s rapidly evolving digital landscape, organizations face an increasingly complex array of cybersecurity threats. The proliferation of cloud services and remote work arrangements has heightened the vulnerability of digital identities to exploitation, making it imperative for businesses to fortify their identity security measures.

Our recent research report, The Identity Underground Report, offers valuable insights into the challenges and vulnerabilities organizations encounter in managing digital identities. The report paints a vivid picture of the “hidden” identity security liabilities where attackers leverage Identity Threat Exposures (ITEs) such as forgotten user accounts and misconfigurations to breach organizations’ defenses, with each ITE posing a significant threat to organizations’ security posture.

Discover the most common identity security gaps that lead to compromises in the first-ever threat report focused entirely on the prevalence of identity security gaps.


These findings reveal alarming statistics that underscore the widespread prevalence of ITEs across organizations of all sizes:

  • 67% of organizations unknowingly expose their SaaS applications to potential compromise through insecure password synchronization practices.
  • 37% of admin users still rely on weak authentication protocols like NTLM.
  • 31% of user accounts are service accounts, which attackers target because security teams often overlook them.
  • A single misconfiguration in Active Directory spawns an average of 109 new shadow admins, enabling attackers to change settings and permissions, and gain more access to machines as they move deeper into an environment.
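Surfacing ITEs like these typically starts with a plain account inventory audit. The sketch below illustrates the idea; the account fields, exposure categories, and 90-day staleness threshold are illustrative assumptions, not methodology from the report:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    name: str
    is_service_account: bool   # non-human identity (hypothetical field)
    last_login: datetime
    uses_ntlm: bool            # legacy auth protocol in use

def flag_identity_exposures(accounts, stale_after_days=90):
    """Flag accounts matching the ITE categories described above:
    forgotten (stale) accounts, easily overlooked service accounts,
    and accounts relying on weak protocols like NTLM."""
    now = datetime.now()
    findings = []
    for a in accounts:
        if now - a.last_login > timedelta(days=stale_after_days):
            findings.append((a.name, "stale account"))
        if a.is_service_account:
            findings.append((a.name, "service account: review permissions"))
        if a.uses_ntlm:
            findings.append((a.name, "weak auth protocol (NTLM)"))
    return findings
```

A real deployment would pull this inventory from Active Directory or the IdP rather than a hand-built list.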

The shift to cloud-based environments introduces additional challenges, as organizations synchronize on-prem user accounts with cloud Identity Providers (IdPs). While this streamlines access, it also creates a pathway for attackers to exploit ITEs in on-prem settings to gain unauthorized access to cloud resources.

Ultimately, it is essential to recognize the dynamic nature of identity threats. Cybercriminals are constantly evolving their tactics, underscoring the need for a holistic and layered approach to security. By adopting proactive measures like Multi-Factor Authentication (MFA) and investing in robust identity security solutions, organizations can enhance their resilience against identity-related threats.

Learn more about the underground weaknesses that expose organizations to identity threats here and heed the report’s findings to prioritize security investments and eliminate your identity security blind spots.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Twitter and LinkedIn to read more exclusive content we post.



Researchers Highlight Google’s Gemini AI Susceptibility to LLM Threats

Mar 13, 2024 | Newsroom | Large Language Model / AI Security


Google’s Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks.

The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API.

The first vulnerability involves bypassing security guardrails to leak the system prompt (or system message), the conversation-wide instructions that help the LLM generate more useful responses. This is achieved by asking the model to output its “foundational instructions” in a markdown block.

“A system message can be used to inform the LLM about the context,” Microsoft notes in its documentation about LLM prompt engineering.

“The context may be the type of conversation it is engaging in, or the function it is supposed to perform. It helps the LLM generate more appropriate responses.”


This works because models are susceptible to what’s called a synonym attack, in which semantically equivalent rephrasings of a blocked request circumvent security defenses and content restrictions.

A second class of vulnerabilities relates to using “crafty jailbreaking” techniques to make the Gemini models generate misinformation surrounding topics like elections as well as output potentially illegal and dangerous information (e.g., hot-wiring a car) using a prompt that asks it to enter into a fictional state.

Also identified by HiddenLayer is a third shortcoming that could cause the LLM to leak information in the system prompt by passing repeated uncommon tokens as input.

“Most LLMs are trained to respond to queries with a clear delineation between the user’s input and the system prompt,” security researcher Kenneth Yeung said in a Tuesday report.

“By creating a line of nonsensical tokens, we can fool the LLM into believing it is time for it to respond and cause it to output a confirmation message, usually including the information in the prompt.”
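On the defensive side, this class of input can be screened heuristically before it ever reaches the model. The sketch below is a hypothetical filter, not a mitigation documented by HiddenLayer or Google; the token-run threshold is an assumption:

```python
def suspicious_token_repetition(prompt: str, max_run: int = 10) -> bool:
    """Heuristic input filter: flag prompts containing a long run of
    the same repeated token, the pattern described above for confusing
    an LLM's user-input/system-prompt delineation. The threshold of 10
    is an illustrative assumption."""
    tokens = prompt.split()
    run = 1
    for prev, cur in zip(tokens, tokens[1:]):
        run = run + 1 if cur == prev else 1
        if run > max_run:
            return True
    return False
```

Such a pre-filter is cheap but easily evaded (e.g., by varying the repeated token), so it would complement, not replace, model-side defenses.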

Another test involves using Gemini Advanced and a specially crafted Google document, with the latter connected to the LLM via the Google Workspace extension.

The instructions in the document could be designed to override the model’s instructions and perform a set of malicious actions that enable an attacker to have full control of a victim’s interactions with the model.

The disclosure comes as a group of academics from Google DeepMind, ETH Zurich, University of Washington, OpenAI, and McGill University revealed a novel model-stealing attack that makes it possible to extract “precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2.”


That said, it’s worth noting that these vulnerabilities are not novel and are present in other LLMs across the industry. The findings, if anything, emphasize the need for testing models for prompt attacks, training data extraction, model manipulation, adversarial examples, data poisoning and exfiltration.

“To help protect our users from vulnerabilities, we consistently run red-teaming exercises and train our models to defend against adversarial behaviors like prompt injection, jailbreaking, and more complex attacks,” a Google spokesperson told The Hacker News. “We’ve also built safeguards to prevent harmful or misleading responses, which we are continuously improving.”

The company also said it’s restricting responses to election-based queries out of an abundance of caution. The policy is expected to be enforced against prompts regarding candidates, political parties, election results, voting information, and notable office holders.




The Rise of Artificial Intelligence to Combat Cyber Threats

Jan 29, 2024


In nearly every segment of our lives, AI (artificial intelligence) now makes a significant impact: It can deliver better healthcare diagnoses and treatments; detect and reduce the risk of financial fraud; improve inventory management; and serve up the right recommendation for a streaming movie on Friday night. However, one can also make a strong case that some of AI’s most significant impacts are in cybersecurity.

AI’s ability to learn, adapt, and predict rapidly evolving threats has made it an indispensable tool in protecting the world’s businesses and governments. From basic applications like spam filtering to advanced predictive analytics and AI-assisted response, AI serves a critical role on the front lines, defending our digital assets from cyber criminals.

The future for AI in cybersecurity is not all rainbows and roses, however. Today we can see the early signs of a significant shift, driven by the democratization of AI technology. While AI continues to empower organizations to build stronger defenses, it also provides threat actors with tools to craft more sophisticated and stealthy attacks.

In this blog, we’ll review how the threat landscape has changed, trace the evolving role AI plays in cyber defense, and consider the implications for defending against attacks of the future.

AI in Cybersecurity: The First Wave (2000–2010)

As we welcomed the new millennium, the initial stages of digital transformation began affecting our personal and professional lives. In most organizations, knowledge workers did their jobs within tightly managed IT environments, leveraging desktop and laptop PCs, along with on-premises data centers that formed the backbone of organizational IT infrastructure.

The cyber threats that gained prominence at this time primarily focused on sowing chaos and gaining notoriety. The early 2000s witnessed the birth of malware like ILOVEYOU, Melissa, and MyDoom, which spread like wildfire and caused significant global disruptions. As we moved toward the mid-2000s, the allure of financial gains led to a proliferation of phishing schemes and financial malware. The Zeus banking trojan emerged as a significant threat, stealthily stealing banking credentials of unsuspecting users.

Organizations relied heavily on basic security controls, such as signature-based antivirus software and firewalls, to try and fend off intruders and protect digital assets. The concept of network security began to evolve, with improved intrusion detection systems making their way into the cybersecurity arsenal. Two-factor authentication (2FA) gained traction at this time, adding an extra layer of security for sensitive systems and data.

This is also when AI first began to show significant value for defenders. As spam email volumes exploded, unsolicited — and often malicious — emails clogged mail servers and inboxes, tempting users with get-rich-quick schemes, illegal pharmaceuticals, and similar lures to trick them into revealing valuable personal information. While AI still sounded like science fiction to many in IT, it proved an ideal tool to rapidly identify and quarantine suspicious messages with previously unimaginable efficiency, helping to significantly reduce risk and reclaim lost productivity. Although in its infancy, AI showed a glimpse of its potential to help organizations protect themselves against rapidly evolving threats, at scale.
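The statistical learning behind those early spam filters is well captured by a multinomial Naive Bayes classifier. The following toy sketch (hand-rolled, with Laplace smoothing and a tiny corpus) illustrates the approach; it is not any vendor’s actual filter:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Minimal multinomial Naive Bayes, the technique behind many
    early-2000s spam filters. Toy illustration only."""
    def fit(self, messages, labels):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.label_counts = Counter(labels)
        for msg, label in zip(messages, labels):
            self.word_counts[label].update(msg.lower().split())
        self.vocab = set()
        for counts in self.word_counts.values():
            self.vocab |= set(counts)

    def predict(self, message):
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values())
            # log prior for this class
            score = math.log(self.label_counts[label] /
                             sum(self.label_counts.values()))
            for word in message.lower().split():
                count = self.word_counts[label][word] + 1  # Laplace smoothing
                score += math.log(count / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)
```

Trained on labeled mail, such a model scores each incoming message by which class makes its words most probable, which is exactly the kind of efficiency gain described above.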

AI in Cybersecurity: The Second Wave (2010–2020)

As we transitioned into the second decade of the millennium, the makeup of IT infrastructure changed significantly. The explosion of SaaS (software-as-a-service) applications, cloud computing, BYOD (bring your own device) policies, and the emergence of shadow IT made the IT landscape more dynamic than ever. At the same time, it created an ever-expanding attack surface for threat actors to explore and exploit.

Threat actors became more sophisticated, and their objectives broadened; intellectual property theft, infrastructure sabotage, and monetizing attacks on a larger scale became common. More organizations became aware of nation-state threats, driven by well-funded and highly sophisticated adversaries. This in turn drove a need for equally sophisticated defenses that could autonomously learn fast enough to stay a step ahead. Incidents like the Stuxnet worm targeting Iranian nuclear facilities, and devastating attacks against high-profile companies like Target and Sony Pictures, gained notoriety and underscored the escalating stakes.

At the same time, the vulnerability of supply chains came into sharp focus, exemplified by the SolarWinds breach that had ramifications for tens of thousands of organizations around the world. Perhaps most notably, ransomware and wiper attacks surged with notorious strains like WannaCry and NotPetya wreaking havoc globally. While relatively easy to detect, the volumes of these threats demanded defenses that could scale with speed and accuracy at levels that far outstripped a human analyst’s capabilities.

During this time, AI emerged as an indispensable tool for defenders. Cylance led the charge, founded in 2012 to replace heavyweight legacy antivirus software with lightweight machine-learning models. These models were trained to identify and stop rapidly evolving malware quickly and efficiently. AI’s role in cybersecurity continued to expand, with machine-learning techniques employed for detecting anomalies, flagging unusual patterns or behaviors indicative of a sophisticated attack, and performing predictive analytics to foresee and prevent possible attack vectors.
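The anomaly-detection side of this era can be illustrated with something as simple as a z-score over a behavioral metric. Real products use far richer models, so treat this as a minimal sketch with an assumed event-count input:

```python
import statistics

def detect_anomalies(event_counts, threshold=3.0):
    """Flag time windows whose event count deviates from the mean by
    more than `threshold` standard deviations: the simplest form of
    the behavioral anomaly detection described above. The input format
    and threshold are illustrative assumptions."""
    mean = statistics.mean(event_counts)
    stdev = statistics.stdev(event_counts)
    if stdev == 0:
        return []
    return [i for i, count in enumerate(event_counts)
            if abs(count - mean) / stdev > threshold]
```

Production systems replace the single metric and fixed threshold with learned baselines per user, host, and time of day.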

AI in Cybersecurity: The Third Wave (2020–Present)

Today, a profound shift is unfolding around the use of AI in cybersecurity. The ubiquity of remote work, coupled with hyperconnected and decentralized IT systems, has blurred the traditional security perimeter. With a surge in IoT (Internet of Things) and connected devices —from smart homes to smart cars and entire cities — the attack surface has expanded exponentially.

Amidst this backdrop, the role of AI has evolved from being purely a defensive mechanism to a double-edged sword, wielded by adversaries as well. While commercial generative AI tools, such as ChatGPT, have attempted to build guardrails to prevent bad actors from using the technology for malicious purposes, adversarial tools such as WormGPT have emerged to fill the gap for attackers.

Potential examples include:

  • AI-Generated Phishing Campaigns: With the assistance of generative AI, attackers can now craft highly convincing phishing emails, making these deceptive messages increasingly difficult to identify. Recent research also confirms that generative AI can save attackers days of work on each phishing campaign they create.
  • AI-Assisted Target Identification: By leveraging machine-learning algorithms to analyze social media and other online data, attackers can more efficiently identify high-value targets and customize attacks accordingly.
  • AI-Driven Behavior Analysis: Malware empowered by AI can learn typical user or network behaviors, enabling attacks or data exfiltration that evades detection by better mimicking normal activity.
  • Automated Vulnerability Scanning: AI-powered reconnaissance tools may facilitate autonomous scanning of networks for vulnerabilities, choosing the most effective exploit automatically.
  • Smart Data-Sorting: Instead of mass-copying all available data, AI can identify and select the most valuable information to exfiltrate, further reducing chances of detection.
  • AI-Assisted Social Engineering: The use of AI-generated deepfake audio or video in vishing attacks can convincingly impersonate trusted individuals, lending greater credibility to social engineering attacks that persuade employees to reveal sensitive information.

The unfolding of this third wave of AI underscores a crucial inflection point in cybersecurity. The dual use of AI — both as a shield and a spear — highlights the need for organizations to stay informed.

Conclusion

The evolutionary journey of cybersecurity emphasizes the relentless ingenuity of threat actors, and the need for defenders to keep well-equipped and informed. As we transition into a phase where AI serves both as an ally and a potential adversary, the story becomes more complex and fascinating.

Cylance® AI has been there since the beginning, as a pioneer in AI-driven cybersecurity and a proven leader in the market. Looking ahead, we at BlackBerry® are continually pushing the boundaries of our Cylance AI technology to explore what’s next on the horizon. Keep an eye out for our upcoming blog where we will delve into how generative AI is entering the scene as a powerful tool for defenders, offering a new lens to anticipate and counter the sophisticated threats of tomorrow.

The future holds great promise for those prepared to embrace the evolving tapestry of AI-powered cybersecurity.

For similar articles and news delivered straight to your inbox, subscribe to the BlackBerry Blog.


Note – This article has been expertly written by Jay Goodman, Director of Product Marketing at BlackBerry.




CISA Urges Manufacturers to Eliminate Default Passwords to Thwart Cyber Threats

Dec 18, 2023 | Newsroom | Software Security / Vulnerability


The U.S. Cybersecurity and Infrastructure Security Agency (CISA) is urging manufacturers to get rid of default passwords on internet-exposed systems altogether, citing severe risks that could be exploited by malicious actors to gain initial access to, and move laterally within, organizations.

In an alert published last week, the agency called out Iranian threat actors affiliated with the Islamic Revolutionary Guard Corps (IRGC) for exploiting operational technology devices with default passwords to gain access to critical infrastructure systems in the U.S.

Default passwords refer to factory default software configurations for embedded systems, devices, and appliances that are typically publicly documented and identical among all systems within a vendor’s product line.

As a result, threat actors could scan for internet-exposed endpoints using tools like Shodan and attempt to breach them through default passwords, often gaining root or administrative privileges to perform post-exploitation actions depending on the type of the system.
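Defenders can run the same check before attackers do. The sketch below is a hypothetical audit helper; the credential list is a small sample of publicly documented defaults, and the device-inventory format is an assumption:

```python
# Hypothetical defensive audit: compare device credentials against
# publicly documented factory defaults. A real audit would draw on a
# vendor-specific default-credential database.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("admin", "1111"),
}

def audit_default_credentials(devices):
    """devices: iterable of (hostname, username, password) tuples.
    Returns hostnames still using a known default pair."""
    return [host for host, user, pwd in devices
            if (user, pwd) in KNOWN_DEFAULTS]
```

Running such a check against an internal asset inventory flags exactly the endpoints an internet-wide scanner would find first.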

“Appliances that come preset with a username and password combination pose a serious threat to organizations that do not change it post installation, as they are easy targets for an adversary,” MITRE notes.


Earlier this month, CISA revealed that IRGC-affiliated cyber actors using the persona Cyber Av3ngers are actively targeting and compromising Israeli-made Unitronics Vision Series programmable logic controllers (PLCs) that are publicly exposed to the internet through the use of default passwords (“1111”).

“In these attacks, the default password was widely known and publicized on open forums where threat actors are known to mine intelligence for use in breaching U.S. systems,” the agency added.


As mitigation measures, manufacturers are being urged to follow secure by design principles and provide unique setup passwords with the product, or alternatively disable such passwords after a preset time period and require users to enable phishing-resistant multi-factor authentication (MFA) methods.
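The first of those recommendations can be sketched in a few lines: ship each unit with its own random setup password and force rotation on first login. This is an illustrative model of the logic, not any vendor’s firmware:

```python
import secrets
import string

def generate_unique_setup_password(length=16):
    """Secure-by-design sketch: give each shipped unit its own random
    setup password instead of a shared, documented factory default."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

class Device:
    """Hypothetical firmware logic: the setup password works only
    until first login, after which the user must set a credential."""
    def __init__(self):
        self.setup_password = generate_unique_setup_password()
        self.user_password = None

    def login(self, password):
        if self.user_password is not None:
            return password == self.user_password
        if password == self.setup_password:
            return "MUST_SET_NEW_PASSWORD"   # force rotation on first use
        return False

    def set_password(self, new_password):
        self.user_password = new_password
```

The key property is that no two units share a credential, so a leaked default cannot be replayed fleet-wide.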

The agency further advised vendors to conduct field tests to determine how their customers are deploying the products within their environments and if they involve the use of any unsafe mechanisms.

“Analysis of these field tests will help bridge the gap between developer expectations and actual customer usage of the product,” CISA noted in its guidance.


“It will also help identify ways to build the product so customers will be most likely to securely use it—manufacturers should ensure that the easiest route is the secure one.”

The disclosure comes as the Israel National Cyber Directorate (INCD) attributed cyber attacks targeting critical infrastructure in the country, amidst its ongoing war with Hamas since October 2023, to a Lebanese threat actor with connections to the Iranian Ministry of Intelligence.

The attacks, which involve the exploitation of known security flaws (e.g., CVE-2018-13379) to obtain sensitive information and deploy destructive malware, have been tied to an attack group named Plaid Rain (formerly Polonium).


The development also follows the release of a new advisory from CISA that outlines security countermeasures for healthcare and critical infrastructure entities to fortify their networks against potential malicious activity and reduce the likelihood of domain compromise:

  • Enforce strong passwords and phishing-resistant MFA
  • Ensure that only ports, protocols, and services with validated business needs are running on each system
  • Configure service accounts with only the permissions necessary for the services they operate
  • Change all default passwords for applications, operating systems, routers, firewalls, wireless access points, and other systems
  • Discontinue reuse or sharing of administrative credentials among user/administrative accounts
  • Mandate consistent patch management
  • Implement network segregation controls
  • Evaluate the use of unsupported hardware and software and discontinue where possible
  • Encrypt personally identifiable information (PII) and other sensitive data
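As one example, the second countermeasure above (only ports, protocols, and services with validated business needs) reduces to comparing what is listening against an approved baseline. A minimal sketch, with an assumed allowlist and inventory format:

```python
# Hedged sketch: verify that only ports with a validated business need
# are listening. The allowlist and the listening-port inventory format
# are illustrative assumptions.
APPROVED_PORTS = {22: "ssh", 443: "https"}

def audit_listening_ports(listening_ports):
    """Return open ports that have no validated business need."""
    return sorted(p for p in listening_ports if p not in APPROVED_PORTS)
```

In practice the listening-port inventory would come from a scanner or host agent, and findings would feed a change-control process rather than an automatic block.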

On a related note, the U.S. National Security Agency (NSA), Office of the Director of National Intelligence (ODNI), and CISA published a list of recommended practices that organizations can adopt in order to harden the software supply chain and improve the safety of their open-source software management processes.

“Organizations that do not follow a consistent and secure-by-design management practice for the open-source software they utilize are more likely to become vulnerable to known exploits in open-source packages and encounter more difficulty when reacting to an incident,” said Aeva Black, open-source software security lead at CISA.



