Python's PyPI Reveals Its Secrets

Apr 11, 2024 · The Hacker News · Software Security / Programming


GitGuardian is famous for its annual State of Secrets Sprawl report. In its 2023 report, the company found over 10 million passwords, API keys, and other credentials exposed in public GitHub commits. Its 2024 report did not just highlight 12.8 million new secrets exposed on GitHub; it also surfaced a notable number in PyPI, the popular Python package repository.

PyPI, short for the Python Package Index, hosts over 20 terabytes of files that are freely available for use in Python projects. If you've ever typed pip install [name of package], it likely pulled that package from PyPI. A lot of people use it, too. Whether it's GitHub, PyPI, or others, the report states, "open-source packages make up an estimated 90% of the code run in production today." It's easy to see why when these packages help developers avoid reinventing millions of wheels every day.

In the 2024 report, GitGuardian reported finding over 11,000 exposed unique secrets, with 1,000 of them being added to PyPI in 2023. That’s not much compared to the 12.8 million new secrets added to GitHub in 2023, but GitHub is orders of magnitude larger.

A more distressing fact is that nearly 100 of the secrets introduced in 2017 were still valid six to seven years later. GitGuardian did not have the ability to check all the secrets for validity. Still, over 300 unique and valid secrets were discovered. While this is mildly alarming to the casual observer and not necessarily a threat to random Python developers (as opposed to the 116 malicious packages reported by ESET at the end of 2023), it's a threat of unknown magnitude to the owners of those packages.

While GitGuardian has hundreds of secrets detectors that it has developed and refined over the years, some of the most common secrets it detected in its overall 2023 study were OpenAI API keys, Google API keys, and Google Cloud keys. It's not difficult for a competent programmer to write a regular expression to find a single common secret format. And even if it came up with many false positives, automating checks to determine which were valid could help turn the matches into a small treasure trove of exploitable secrets.
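To illustrate, a minimal detector can be sketched in a few lines of Python. The patterns below are rough approximations only (Google API keys, for example, commonly begin with "AIza"); real tools use far larger, regularly updated pattern sets:

```python
import re

# Illustrative patterns only -- real key formats vary and change over time.
SECRET_PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z\-_]{35}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]([^'\"]{16,})['\"]"),
}

def find_secrets(text):
    """Return (pattern_name, match) pairs for every candidate secret in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# An invented example line, as it might appear in leaked source code.
sample = 'config = {"maps_key": "AIzaSyA1234567890abcdefghijklmnopqrstuv"}'
```

A validity check against the provider's API would then separate live keys from false positives, which is exactly what makes such leaks exploitable at scale.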

It is now accepted logic that if a key has been published in a public repository such as GitHub or PyPI, it must be considered compromised. In tests, honeytokens (a kind of “defanged” API key with no access to any resources) have been tested for validity by bots within a minute of being published to GitHub. In fact, honeytokens act as a “canary” for a growing number of developers. Depending on where you’ve placed a specific honeytoken, you can see that someone has been snooping there and get some information about them based on telemetry data collected when the honeytoken is used.

The bigger concern when you accidentally publish a secret is not just that a malicious actor might run up your cloud bill. It’s where they can go from there. If an over-permissioned AWS IAM token were leaked, what might that malicious actor find in the S3 buckets or databases it grants access to? Could that malicious actor gain access to other source code and corrupt something that will be delivered to many others?

Whether you’re committing secrets to GitHub, PyPI, NPM, or any public collection of source code, the best first step when you discover a secret has leaked is to revoke it. Remember that tiny window between publication and exploitation for a honeytoken. Once a secret has been published, it’s likely been copied. Even if you haven’t detected an unauthorized use, you must assume an unauthorized and malicious someone now has it.

Even if your source code is in a private repository, stories abound of malicious actors getting access to private repositories via social engineering, phishing, and of course, leaked secrets. If there’s a lesson to all of this, it’s that plain text secrets in source code eventually get found. Whether they get accidentally published in public or get found by someone with access they shouldn’t have, they get found.

In summary, wherever you’re storing or publishing your source code, be it a private repository or a public registry, you should follow a few simple rules:

  1. Don’t store secrets in plain text in source code.
  2. Keep those who get hold of a secret from going on an expedition by keeping the privileges those secrets grant strictly scoped.
  3. If you discover you leaked a secret, revoke it. You may need to take a little time to ensure your production systems have the new, unleaked secret for business continuity, but revoke it as soon as you possibly can.
  4. Implement automations like those offered by GitGuardian to ensure you’re not relying on imperfect humans to perfectly observe best practices around secrets management.
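Rule 1 can be as simple as loading secrets from the environment at runtime rather than embedding them in source. Below is a minimal sketch; the variable name APP_DB_PASSWORD is an invented example, and a dedicated secrets manager is a stronger choice in production:

```python
import os

def get_db_password():
    """Read the secret from the environment instead of hardcoding it.

    APP_DB_PASSWORD is an arbitrary example name; a secrets manager
    (Vault, AWS Secrets Manager, etc.) is stronger in production.
    """
    password = os.environ.get("APP_DB_PASSWORD")
    if password is None:
        # Failing fast beats silently running with a missing credential.
        raise RuntimeError("APP_DB_PASSWORD is not set; refusing to start")
    return password
```

Because the value never appears in source, nothing sensitive lands in the repository even if the code is published.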

If you follow those, you may not have to learn the lessons 11,000 secrets owners have probably learned the hard way by publishing them to PyPI.

This article is a contributed piece from one of our valued partners.



Microsoft Confirms Russian Hackers Stole Source Code, Some Customer Secrets

Mar 09, 2024 · Newsroom · Cyber Attack / Threat Intelligence


Microsoft on Friday revealed that the Kremlin-backed threat actor known as Midnight Blizzard (aka APT29 or Cozy Bear) managed to gain access to some of its source code repositories and internal systems following a hack that came to light in January 2024.

“In recent weeks, we have seen evidence that Midnight Blizzard is using information initially exfiltrated from our corporate email systems to gain, or attempt to gain, unauthorized access,” the tech giant said.

“This has included access to some of the company’s source code repositories and internal systems. To date we have found no evidence that Microsoft-hosted customer-facing systems have been compromised.”


Redmond, which is continuing to investigate the extent of the breach, said the Russian state-sponsored threat actor is attempting to leverage the different types of secrets it found, including those that were shared between customers and Microsoft in email.

It, however, did not disclose what these secrets were or the scale of the compromise, although it said it has directly reached out to impacted customers. It’s not clear what source code was accessed.

Stating that it has increased its security investments, Microsoft further noted that the adversary ramped up its password spray attacks by as much as 10-fold in February, compared to the "already large volume" observed in January.

“Midnight Blizzard’s ongoing attack is characterized by a sustained, significant commitment of the threat actor’s resources, coordination, and focus,” it said.

“It may be using the information it has obtained to accumulate a picture of areas to attack and enhance its ability to do so. This reflects what has become more broadly an unprecedented global threat landscape, especially in terms of sophisticated nation-state attacks.”

The Microsoft breach is said to have taken place in November 2023, with Midnight Blizzard employing a password spray attack to successfully infiltrate a legacy, non-production test tenant account that did not have multi-factor authentication (MFA) enabled.


The tech giant, in late January, revealed that APT29 had targeted other organizations by taking advantage of a diverse set of initial access methods ranging from stolen credentials to supply chain attacks.

Midnight Blizzard is considered part of Russia’s Foreign Intelligence Service (SVR). Active since at least 2008, the threat actor is one of the most prolific and sophisticated hacking groups, compromising high-profile targets such as SolarWinds.

Secrets Sensei: Conquering Secrets Management Challenges

Mar 08, 2024 · The Hacker News · Secrets Management / Access Control


In the realm of cybersecurity, the stakes are sky-high, and at its core lies secrets management — the foundational pillar upon which your security infrastructure rests. We’re all familiar with the routine: safeguarding those API keys, connection strings, and certificates is non-negotiable. However, let’s dispense with the pleasantries; this isn’t a simple ‘set it and forget it’ scenario. It’s about guarding your secrets in an age where threats morph as swiftly as technology itself.

Let's shed some light on common practices that could spell disaster, as well as the tools and strategies to confidently navigate and overcome these challenges. In simple words, this is a first-step guide to mastering secrets management across diverse terrains.

Top 5 common secrets management mistakes

Alright, let’s dive into some common secrets management mistakes that can trip up even the savviest of teams:

  1. Hard coding secrets in code repositories: A classic mistake, hard coding secrets like API keys or passwords directly in code repositories is like leaving your house keys under the mat. It is convenient, but it is highly risky. Agile development environments are prone to this devastating mistake, as developers under time constraints might opt for convenience over security.
  2. Inadequate key rotation and revocation processes: Static credentials face a growing risk of compromise as time progresses. Take, for example, a company employing unchanged encryption keys for prolonged periods without rotation; this can serve as a vulnerable gateway for attackers, particularly if these keys have been previously exposed in security incidents.
  3. On the flip side, rotating keys too frequently also causes operational issues. If a key is rotated every time it is accessed, it becomes difficult for multiple applications to access the key at the same time. Only the first application would get access, and the next ones would fail. This is counterproductive. You need to find the right interval for secrets rotation.
  4. Storing secrets in public places or insecure locations: Storing sensitive information like database passwords in configuration files that are publicly accessible, perhaps in a Docker image or a public code repository, invites trouble.
  5. Over-provisioning privileges for secrets: Granting excessive privileges for secrets is similar to giving every employee a master key to the entire office. Employees with more access than needed could unintentionally or maliciously expose sensitive information, leading to data breaches or other security incidents.
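As a rough illustration of mistakes 2 and 3 above, finding a sane rotation interval can start with a simple age check. This is only a sketch, and the 90-day threshold is an arbitrary example, not a recommendation for any particular system:

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy: flag secrets older than 90 days for rotation.
MAX_AGE = timedelta(days=90)

def needs_rotation(created_at, now=None, max_age=MAX_AGE):
    """Return True if a secret created at `created_at` has exceeded max_age."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > max_age

# Invented example creation date for a secret.
created = datetime(2023, 1, 1, tzinfo=timezone.utc)
```

Scheduled rotation on a fixed interval avoids both static, never-changing credentials and the contention problems of rotating on every access.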

3 Lesser-known pitfalls in secrets storage and management

Unfortunately, there are more…

  1. Improper secrets lifecycle management: Often overlooked, the lifecycle management of secrets is one of the major pitfalls to avoid. It involves creating and using secrets and regularly updating and eventually retiring them. Poor lifecycle management can leave outdated or unused secrets lingering in the system, becoming easy targets for attackers. For example, if not properly retired, a long-forgotten API key from a decommissioned project can provide an unintentional backdoor into the company’s system.
  2. Ignoring audit trails for secrets access: Yet another nuanced yet consequential pitfall is the failure to recognize the significance of audit trails concerning secret access. Without a robust auditing mechanism in place, monitoring who accessed which secret and when becomes a daunting task. This oversight can impede the detection of unauthorized access to secrets. For example, the absence of audit trails might fail to alert us to unusual access patterns to sensitive secrets or to someone bulk downloading all secrets from the vault.
  3. Failure to encrypt Kubernetes secrets: Let's understand why the lack of encryption is a matter of concern by seeing how secrets are created in the Kubernetes ecosystem. These secrets are often only base64 encoded by default, which is merely an encoding that can be trivially reversed, not encryption. It is a thin veil of security, far from robust. This vulnerability opens the door to potential breaches if these secrets are accessed.

Encrypting secrets at rest enhances security, and Kubernetes allows for this through configurations like the EncryptionConfiguration object, which specifies key materials for encryption operations on a per-node basis.
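The base64 point is easy to demonstrate: decoding a Kubernetes secret value requires no key at all. A quick sketch (the encoded value is an invented example):

```python
import base64

# A value as it might appear in a Kubernetes Secret manifest (invented example).
encoded = "c3VwZXItc2VjcmV0LXBhc3N3b3Jk"

# base64 is an encoding, not encryption: anyone can reverse it without a key.
decoded = base64.b64decode(encoded).decode("utf-8")
```

Anyone who can read the manifest, or the etcd store backing it, can recover the plaintext this way, which is why encryption at rest matters.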

Remediations for Secrets Management Mistakes

A proactive and strategic approach is no longer optional in addressing secrets management mistakes. Here are some of the key strategies to effectively remedy the pitfalls discussed above and be a guardian of your secrets:

  • Secrets Inventory: It is imperative that you know the exact number of secrets within your systems, and where they exist. Most CISOs are unaware of this vital information and are therefore unprepared for a secrets attack.
  • Secrets classification and enrichment: Not all secrets are created equal. While some safeguard highly confidential data, others protect more routine operational information. Security approaches must acknowledge this distinction when addressing attacks on secrets. Achieving this necessitates the creation of comprehensive metadata for each secret, detailing the resources it safeguards, its priority level, authorized access, and other pertinent details.
  • Implement robust encryption: Strengthen your encryption practices—Encrypt sensitive data using strong cryptographic methods, especially secrets at rest and in transit.
  • Refine access control: Apply the principle of least privilege rigorously. Ensure that access to secrets is tightly controlled and regularly audited. In Kubernetes, managing data access effectively is achieved through RBAC, which assigns access based on user roles.
  • Continuous monitoring and auditing: Establish a robust monitoring system to track access and usage of secrets. Implement audit trails to record who accessed what data and when, aiding in quick detection and response to any irregularities.
  • Leverage automated secrets tools: Utilize automated tools for managing secrets, which can encompass automated rotation of secrets and integration with identity management systems to streamline access control.
  • Review policies frequently: Stay informed about new threats and adjust your strategies to maintain a strong defense against evolving cybersecurity challenges.
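The continuous monitoring and auditing point above can be prototyped even without a dedicated vault. The sketch below uses an in-memory store and log as stand-ins for real infrastructure, recording who read which secret and when:

```python
from datetime import datetime, timezone

# In-memory stand-ins for a real secret store and a real audit log.
_SECRETS = {"db_password": "example-only"}
AUDIT_LOG = []

def get_secret(name, accessed_by):
    """Return a secret and record who accessed it, which secret, and when."""
    AUDIT_LOG.append({
        "secret": name,
        "by": accessed_by,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return _SECRETS[name]
```

With every read funneled through one function, unusual access patterns (such as one identity bulk-reading every secret) become visible in the log.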

Putting a stop to false positives

Minimizing false positives in secrets management is crucial for sustaining operational efficiency and enabling security teams to concentrate on authentic threats. Here are several practical measures to assist you in achieving this goal:

  • Advanced detection algorithms: Utilizing machine learning and secrets context analysis can differentiate genuine secrets from false alarms, increasing the accuracy of detection systems.
  • Advanced scanning tools: Implementing solutions that amalgamate diverse detection techniques, including regular expressions, entropy analysis, and keyword matching, can significantly mitigate false positives.
  • Regular updates and feedback loops: Keeping scanning tools updated with the latest patterns and incorporating feedback from false positives helps refine the detection process.
  • Monitoring secrets usage: Tools like Entro, which monitor secret usage across the supply chain and production, can identify suspicious behavior. This helps in understanding the risk context around each secret, further eliminating false positives. Such monitoring is crucial in discerning actual threats from benign activities, ensuring security teams focus on real issues.
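The entropy analysis mentioned above is typically Shannon entropy over the characters of a candidate string: random keys score high, ordinary prose scores low. A sketch, with the 4.0 bits-per-character threshold chosen purely for illustration:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character of s."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_secret(s, threshold=4.0):
    """Flag long, high-entropy strings; the threshold is illustrative only."""
    return len(s) >= 16 and shannon_entropy(s) >= threshold
```

Combining entropy with pattern and keyword matching, as the bullet suggests, is what keeps the false-positive rate manageable.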

What a proper secrets management approach looks like

A comprehensive approach to secrets management transcends mere protective measures, embedding itself into an organization’s IT infrastructure. It begins with a foundational understanding of what constitutes a ‘secret’ and extends to how these are generated, stored, and accessed.

The proper approach involves integrating secrets management into the development lifecycle, ensuring that secrets are not an afterthought but a fundamental part of the system architecture. This includes employing dynamic environments where secrets are not hard-coded but injected at runtime and where access is rigorously controlled and monitored.

As mentioned earlier, it is essential to take inventory of every single secret within your organization and enrich each of them with context about what resources they protect and who has access to them.

Vaults can be misconfigured to give users or identities more access than they need or to allow them to perform risky activities like exporting secrets from the vault. You need to monitor all secrets for these risks for an air-tight defense.

Following secrets management best practices is about creating a culture of security mindfulness, where every stakeholder is aware of the value and vulnerability of secrets. By adopting a holistic and integrated approach, organizations can ensure that their secrets management is robust, resilient, and adaptable to the evolving cybersecurity landscape.

Parting thoughts

In navigating the intricate realm of secrets management, tackling challenges from encrypting Kubernetes secrets to refining access controls is no easy task. Luckily, Entro steps in as a full-context platform adept at addressing these complexities, managing secret sprawl, and executing intricate secret rotation processes while providing invaluable insights for informed decision-making.

Concerned about false positives inundating your team? Entro’s advanced monitoring capabilities focus on genuine threats, cutting through the clutter of false alarms. Seamlessly incorporating proactive strategies, Entro offers a unified interface for comprehensive secret discovery, prioritization, and risk mitigation.

Ready to revolutionize your secrets management approach and bid farewell to worries? Book a demo to explore the transformative impact of Entro on your organization’s practices.

Ex-Google Engineer Arrested for Stealing AI Technology Secrets for China
Mar 07, 2024


The U.S. Department of Justice (DoJ) announced the indictment of a 38-year-old Chinese national and California resident for allegedly stealing proprietary information from Google while covertly working for two China-based tech companies.

Linwei Ding (aka Leon Ding), a former Google engineer who was arrested on March 6, 2024, “transferred sensitive Google trade secrets and other confidential information from Google’s network to his personal account while secretly affiliating himself with PRC-based companies in the AI industry,” the DoJ said.

The defendant is said to have pilfered from Google over 500 confidential files containing artificial intelligence (AI) trade secrets with the goal of passing them on to two unnamed Chinese companies looking to gain an edge in the ongoing AI race.

“While Linwei Ding was employed as a software engineer at Google, he was secretly working to enrich himself and two companies based in the People’s Republic of China,” said U.S. Attorney Ismail Ramsey.


“By stealing Google’s trade secrets about its artificial intelligence supercomputing systems, Ding gave himself and the companies that he affiliated with in the PRC an unfair competitive advantage.”

Ding, who joined Google as a software engineer in 2019, has been accused of siphoning proprietary information related to the company’s supercomputing data center infrastructure used for running AI models, the Cluster Management System (CMS) software for managing the data centers, and the AI models and applications they supported.

The theft, which funneled data to a personal Google Cloud account, happened from May 21, 2022, until May 2, 2023, the indictment alleged, adding that Ding secretly affiliated himself with two tech companies based in China.

This included one firm in which he was offered the position of chief technology officer sometime around June 2022 and another company founded by Ding himself by no later than May 30, 2023, acting as its chief executive officer.

“Ding’s company touted the development of a software platform designed to accelerate machine learning workloads, including training large AI models,” the DoJ said.

“A document related to Ding’s startup company stated, ‘we have experience with Google’s ten-thousand-card computational power platform; we just need to replicate and upgrade it – and then further develop a computational power platform suited to China’s national conditions.'”

But in an interesting twist, Ding took steps to conceal the theft of trade secrets by purportedly copying the data from Google source files into the Apple Notes application on his company-provided MacBook and then converting the notes to PDF files before uploading them to his personal Google account.

Furthermore, Ding allegedly allowed another Google employee in December 2023 to use his Google-issued access badge to scan into the entrance of a Google building, giving the impression that he was working from his U.S. Google office when, in fact, he was in China. He resigned from Google on December 26, 2023.

Ding has been charged with four counts of theft of trade secrets. If convicted, he faces a maximum penalty of 10 years in prison and up to a $250,000 fine for each count.


The development comes days after the DoJ arrested and indicted David Franklin Slater, a civilian employee of the U.S. Air Force assigned to U.S. Strategic Command (USSTRATCOM), for transmitting classified information on a foreign online dating platform between February and April 2022.

The information included National Defense Information (NDI) pertaining to military targets and Russian military capabilities relating to Russia’s invasion of Ukraine. It’s said to have been sent to a co-conspirator, who claimed to be a female living in Ukraine, via the dating website’s messaging feature.

“Slater willfully, improperly, and unlawfully transmitted NDI classified as ‘SECRET,’ which he had reason to believe could be used to the injury of the United States or to the advantage of a foreign nation, on a foreign online dating platform to a person not authorized to receive such information,” the DoJ said.

Slater, 63, faces up to 10 years in prison, three years of supervised release, and a maximum monetary penalty of $250,000 for each count of conspiracy to transmit and the transmission of NDI. No details are known about the motives or the real identity of the individual posing as a Ukrainian woman.

Three Tips to Protect Your Secrets from AI Accidents
Feb 26, 2024


Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the "OWASP Top 10 For Large Language Models," reaching a 1.0 document in August and a 1.1 document in October. These documents not only demonstrate the rapidly evolving nature of Large Language Models but also the evolving ways in which they can be attacked and defended. In this article, we'll look at the four items in that top 10 that are most likely to contribute to the accidental disclosure of secrets such as passwords, API keys, and more.

We're already aware that LLMs can reveal secrets because it's happened. In early 2023, GitGuardian reported it found over 10 million secrets in public GitHub commits. GitHub's Copilot AI coding tool was trained on public commits, and in September of 2023, researchers at the University of Hong Kong published a paper on how they created an algorithm that generated 900 prompts designed to get Copilot to reveal secrets from its training data. When these prompts were used, Copilot revealed over 2,700 valid secrets.

The technique used by the researchers is called "prompt injection." It is #1 in the OWASP Top 10 for LLMs, which describes it as follows:

“This manipulates a large language model (LLM) through crafty inputs, causing unintended actions by the LLM. Direct injections overwrite system prompts, while indirect ones manipulate inputs from external sources.”

You may be more familiar with prompt injection from the bug revealed last year, in which ChatGPT could be made to start spitting out training data when asked to repeat certain words forever.

Tip 1: Rotate your secrets

Even if you don’t think you’ve accidentally published secrets to GitHub, many of the secrets found there were committed early and then clobbered in a newer commit, so they’re not readily apparent without reviewing your entire commit history, not just the current state of your public repositories.

A tool from GitGuardian, called Has My Secret Leaked, lets you hash a current secret and then submit the first few characters of the hash to determine whether there are any matches in the database of secrets GitGuardian has found in its scans of GitHub. A positive match isn’t a guarantee your secret leaked, but it’s a strong signal that it might have, so you can investigate further.
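The idea behind such a check can be sketched in a few lines. This is a simplified illustration of a hash-prefix (k-anonymity-style) lookup, not GitGuardian's actual protocol; the prefix length and example key below are assumptions:

```python
import hashlib

def hash_prefix(secret: str, prefix_len: int = 5) -> str:
    """Hash a secret locally and return only a short prefix of the digest.

    The plaintext secret and the full hash never leave your machine;
    only this short prefix would be submitted to the matching service.
    """
    digest = hashlib.sha256(secret.encode("utf-8")).hexdigest()
    return digest[:prefix_len]

# The service would return every leaked-hash candidate sharing this prefix;
# you then compare full hashes locally to confirm or rule out a leak.
prefix = hash_prefix("AKIAIOSFODNN7EXAMPLE")  # AWS's documented example key
print(prefix)
```

Because many different hashes share any given short prefix, the service learns almost nothing about which secret you are checking.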

The caveats on key and password rotation are that you should know where the secrets are being used, know what might break when they change, and have a plan to mitigate that breakage while the new secrets propagate to the systems that need them. Once rotated, you must ensure the older secrets have been disabled.

Attackers can’t use a secret that no longer works. If any of your secrets that might be in an LLM have been rotated, they become nothing but useless high-entropy strings.

Tip 2: Clean your data

Item #6 in the OWASP Top 10 for LLMs is “Sensitive Information Disclosure”:

LLMs may inadvertently reveal confidential data in their responses, leading to unauthorized data access, privacy violations, and security breaches. It’s crucial to implement data sanitization and strict user policies to mitigate this.

While deliberately engineered prompts can cause LLMs to reveal sensitive data, they can do so accidentally as well. The best way to ensure the LLM isn’t revealing sensitive data is to ensure the LLM never knows it.

This tip is most relevant when you’re training an LLM for use by people who might not always have your best interests at heart, or by people who simply should not have access to certain information. Whether it’s your secrets or your secret sauce, only those who need access should have it… and your LLM is likely not one of those people.

Using open-source tools or paid services to scan your training data for secrets BEFORE feeding the data to your LLM will help you remove the secrets. What your LLM doesn’t know, it can’t tell.
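A toy version of such a scan might combine known key patterns with an entropy check. Real scanners (gitleaks, ggshield, truffleHog, and others) use far larger rule sets; the single pattern and the 4.5-bit threshold below are illustrative assumptions:

```python
import math
import re

# One well-known pattern: AWS access key IDs start with "AKIA".
AWS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; high values suggest random tokens."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def flag_lines(text: str, entropy_threshold: float = 4.5):
    """Return (line_number, reason) pairs for lines that look like secrets."""
    findings = []
    for i, line in enumerate(text.splitlines(), 1):
        if AWS_KEY_RE.search(line):
            findings.append((i, "aws-access-key-id"))
        for token in re.findall(r"[A-Za-z0-9+/=_\-]{20,}", line):
            if shannon_entropy(token) > entropy_threshold:
                findings.append((i, "high-entropy-string"))
    return findings

sample = "user=alice\naws_key=AKIAIOSFODNN7EXAMPLE\n"
print(flag_lines(sample))  # [(2, 'aws-access-key-id')]
```

Running a pass like this over a training corpus, then redacting or dropping flagged lines, keeps the secrets out of the model entirely.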

Tip 3: Patch Regularly & Limit Privileges

Recently we saw a piece on using .env files and environment variables as a way to keep secrets available to your code, but out of your code. But what if your LLM could be asked to reveal environment variables… or do something worse?

This blends Item #2 (“Insecure Output Handling”) and Item #8 (“Excessive Agency”).

  • Insecure Output Handling: This vulnerability occurs when an LLM output is accepted without scrutiny, exposing backend systems. Misuse may lead to severe consequences like XSS, CSRF, SSRF, privilege escalation, or remote code execution.
  • Excessive Agency: LLM-based systems may undertake actions leading to unintended consequences. The issue arises from excessive functionality, permissions, or autonomy granted to the LLM-based systems.

It’s hard to extricate these two from each other because they can make each other worse. If an LLM can be tricked into doing something and its operating context has unnecessary privileges, the potential for arbitrary code execution to do major harm multiplies.

Every developer has seen the “Exploits of a Mom” cartoon, in which a boy named `Robert'); DROP TABLE Students;--` wipes out a school’s student database. Though an LLM seems smart, it’s really no smarter than a SQL database. And like your “comedian” brother getting your toddler nephew to repeat bad words to Grandma, bad inputs can create bad outputs. Both should be sanitized and considered untrustworthy.
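The SQL half of that analogy has a standard fix, parameterized queries, which embodies the same treat-input-as-data discipline the article urges for LLM inputs and outputs. A minimal sketch using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Students (name TEXT)")

student = "Robert'); DROP TABLE Students;--"  # little Bobby Tables

# Unsafe: naive string interpolation would let this input rewrite the query.
# Safe: a bound parameter is always treated as data, never as SQL.
conn.execute("INSERT INTO Students (name) VALUES (?)", (student,))

rows = conn.execute("SELECT name FROM Students").fetchall()
print(rows)  # the hostile string is stored as an ordinary row; the table survives
```

The same principle applies to an LLM pipeline: output from the model should be bound into downstream systems as inert data, never spliced into commands or queries.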

Furthermore, you need to set up guardrails around what the LLM or app can do, following the principle of least privilege. Essentially, the apps that use or enable the LLM, and the LLM infrastructure itself, should not have access to any data or functionality they do not absolutely need, so they can’t accidentally put it at the service of an attacker.
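One concrete way to apply least privilege is to treat every action the LLM proposes as untrusted and route it through an explicit allowlist. The action names and handlers here are hypothetical:

```python
# Only vetted, narrowly scoped actions are exposed to the model.
ALLOWED_ACTIONS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "search_docs": lambda query: [],
}

def execute_llm_action(action: str, *args):
    """Run an LLM-proposed action only if it is explicitly allowlisted."""
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        # Anything not on the list -- "read_env_vars", "run_shell", etc. --
        # is refused, no matter how the model was prompted.
        raise PermissionError(f"LLM requested disallowed action: {action!r}")
    return handler(*args)

print(execute_llm_action("get_weather", "Pune"))
```

With this structure, a prompt-injected model can ask for anything, but the application only ever executes the handful of capabilities it was deliberately granted.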

AI can still be considered to be in its infancy, and as with any baby, it should not be given freedom to roam in any room you haven’t baby-proofed. LLMs can misunderstand, hallucinate, and be deliberately led astray. When that happens, good locks, good walls, and good filters should help prevent them from accessing or revealing secrets.

In Summary

Large language models are an amazing tool. They’re set to revolutionize a number of professions, processes, and industries. But they are far from a mature technology, and many are adopting them recklessly out of the fear of being left behind.

As you would with any baby that’s developed enough mobility to get itself into trouble, you have to keep an eye on it and lock any cabinets you don’t want it getting into. Proceed with large language models, but proceed with caution.

This article is a contributed piece from one of our valued partners.



NS-STEALER Uses Discord Bots to Exfiltrate Your Secrets from Popular Browsers
http://www.indiavpn.org/2024/01/22/ns-stealer-uses-discord-bots-to-exfiltrate-your-secrets-from-popular-browsers/
Mon, 22 Jan 2024 12:14:24 +0000

Jan 22, 2024NewsroomBrowser Security / Cyber Threat

Cybersecurity researchers have discovered a new Java-based “sophisticated” information stealer that uses a Discord bot to exfiltrate sensitive data from compromised hosts.

The malware, named NS-STEALER, is propagated via ZIP archives masquerading as cracked software, Trellix security researcher Gurumoorthi Ramanathan said in an analysis published last week.

The ZIP file contains a rogue Windows shortcut file (“Loader GAYve”) that acts as a conduit to deploy a malicious JAR file, which first creates a folder called “NS-<11-digit_random_number>” to store the harvested data.


To this folder, the malware subsequently saves screenshots; cookies, credentials, and autofill data stolen from over two dozen web browsers; system information; a list of installed programs; Discord tokens; and Steam and Telegram session data. The captured information is then exfiltrated to a Discord bot channel.

“Considering the highly sophisticated function of gathering sensitive information and using X509Certificate for supporting authentication, this malware can quickly steal information from the victim systems with [Java Runtime Environment],” Ramanathan said.

“The Discord bot channel as an EventListener for receiving exfiltrated data is also cost-effective.”

The development comes as the threat actors behind the Chaes (aka Chae$) malware have released an update (version 4.1) to the information stealer with improvements to its Chronod module, which is responsible for pilfering login credentials entered in web browsers and intercepting crypto transactions.


Infection chains distributing the malware, per Morphisec, leverage legal-themed email lures written in Portuguese to deceive recipients into clicking on bogus links to deploy a malicious installer to activate Chae$ 4.1.

But in an interesting twist, the developers also left behind messages for security researcher Arnold Osipov – who has extensively analyzed Chaes in the past – expressing gratitude for helping them improve their “software” directly within the source code.




Exposed Secrets are Everywhere. Here’s How to Tackle Them
http://www.indiavpn.org/2024/01/05/exposed-secrets-are-everywhere-heres-how-to-tackle-them/
Fri, 05 Jan 2024 12:09:20 +0000


Picture this: you stumble upon a concealed secret within your company’s source code. Instantly, a wave of panic hits as you grasp the possible consequences. This one hidden secret has the power to pave the way for unauthorized entry, data breaches, and a damaged reputation. Discovering the secret is just the beginning; swift and resolute action becomes imperative. However, lacking the necessary context, you’re left pondering the best steps to take. What’s the right path forward in this situation?

Secrets management is an essential aspect of any organization’s security strategy. In a world where breaches are increasingly common, managing sensitive information such as API keys, credentials, and tokens can make all the difference. Secret scanners play a role in identifying exposed secrets within source code, but they have one significant limitation: they don’t provide context. And without context, it’s impossible to devise an appropriate response plan.

Context and Response: Key factors in addressing exposed secrets

When it comes to addressing exposed secrets, context is everything: you are the guardian of your secrets, and without context you can’t judge the severity of the exposure, the potential impact, or the best course of action.

Here are some key factors to consider when contextualizing exposed secrets:

1 — Classify secrets based on sensitivity and importance

Not all secrets are created equal. Some are more critical to your organization’s security than others. Classifying your secrets based on their sensitivity and importance will help you prioritize which ones need immediate attention and remediation.
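In practice, classification can start as a simple policy over each secret's scope and privileges. The fields and severity rules below are illustrative assumptions, not an industry standard:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class Secret:
    name: str
    grants_write: bool   # can this credential modify data or infrastructure?
    scope: str           # e.g. "prod", "staging", "dev" -- illustrative labels

def classify(secret: Secret) -> Severity:
    """Toy triage policy: production, write-capable secrets come first."""
    if secret.scope == "prod":
        return Severity.CRITICAL if secret.grants_write else Severity.HIGH
    return Severity.MEDIUM if secret.grants_write else Severity.LOW

print(classify(Secret("payments-db-password", True, "prod")))  # Severity.CRITICAL
```

Even a crude ranking like this turns an undifferentiated pile of findings into a remediation queue.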

2 — Understand the scope of exposure and potential impact

Once you’ve classified the exposed secret, it’s crucial to assess the scope of the exposure. Has the secret been leaked to a public repository or the dark web, or is it still in your internal systems? Understanding the extent of the exposure will help you determine the potential impact and risk to your organization and help you create your response plan.

3 — Identify the root cause of the exposure

Getting to the exposure’s root cause is essential to the remediation process and to preventing future incidents. By identifying how the secret was exposed, you can take steps to address the underlying issue, preventing similar incidents from occurring in the future. This could involve updating security policies, improving code review processes, or implementing additional access controls.

4 — Secrets enrichment

Secrets, while seemingly meaningless strings of characters, carry significant metadata. This includes ownership details, creation and rotation timestamps, assigned privileges for cloud service access, associated risks, and much more. Entro uses this wealth of information to construct a dynamic threat model, or secret lineage map, that illustrates the connections between applications or compute workloads, the secrets they employ, and the cloud services they access, thus providing a comprehensive view of each secret’s security and compliance status.

Remediation and Prevention: Securing your organization’s Secrets

Addressing exposed secrets requires a process of remediation and prevention. Here’s how you can secure your organization’s secrets effectively:

1 — Mitigate the impact of exposed secrets:

Take swift action to mitigate the potential harm stemming from the revealed secret. This could entail changing or invalidating the compromised secret, reaching out to impacted parties, and vigilantly observing for any unusual or suspicious behavior due to the disclosure. In certain cases, it might be necessary to engage law enforcement or seek assistance from external security experts.

2 — Implement policies and processes to prevent future exposures:

Learn from the exposure and take steps to prevent similar incidents. This might include crafting or revising your company’s security protocols, adopting secure development methodologies, and educating staff on effectively managing confidential data. It’s also crucial to regularly audit your secrets management processes to ensure compliance and effectiveness.

3 — Regular monitoring and auditing of secrets:

Monitoring your organization’s secrets is vital in identifying potential exposures and mitigating risks. Implementing automated tools and processes to monitor and audit secrets will help you keep track of sensitive information, detect anomalies, and trigger alerts for any unauthorized access or changes.
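One piece of such auditing is easy to automate: flag any secret that has outlived its rotation window. The inventory shape and the 90-day policy below are assumptions for illustration:

```python
import datetime

# Toy audit rule: flag secrets not rotated within the policy window.
ROTATION_POLICY_DAYS = 90

def stale_secrets(inventory, today):
    """inventory maps secret name -> last rotation date; returns overdue names."""
    return [
        name for name, rotated in inventory.items()
        if (today - rotated).days > ROTATION_POLICY_DAYS
    ]

inv = {
    "ci-deploy-token": datetime.date(2023, 1, 1),
    "staging-api-key": datetime.date(2023, 12, 1),
}
print(stale_secrets(inv, datetime.date(2023, 12, 15)))  # ['ci-deploy-token']
```

Wired into a scheduled job, a check like this turns rotation policy from a document into an alert.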

Leveraging technology for effective secrets management

As your organization grows, managing secrets manually becomes increasingly complex and error-prone. Leveraging technology can significantly enhance your secrets management strategy.

1 — Embrace automation:

Automation can help streamline the process of managing exposed secrets, providing you with faster detection, classification, and response capabilities. Look for tools that integrate with your existing security workflows, reducing the need for manual intervention. Through its auto-discovery process, Entro can identify the owner of each secret or token, automate resolution procedures, and detect misconfigurations in vaults and secrets stores, ensuring a faster response to security incidents.

2 — Platforms that provide essential context:

Some advanced secrets management platforms go beyond simple scanning, offering valuable context that can help you respond more effectively to exposed secrets. Entro is one such platform: it builds comprehensive secret lineage maps that provide that context, enabling a more effective response to exposed secrets.

3 — Integration with existing tools:

Ensure your chosen technology can easily integrate with your existing security tools and workflows. Seamless integration will help you maintain a consistent security posture across your organization and maximize your current investments in security solutions.

Conclusion

Effectively handling exposed secrets is crucial for protecting your company’s confidential data and maintaining trust among stakeholders. Recognizing the significance of context in dealing with revealed secrets empowers you to make informed choices regarding fixing and preventing issues. Integrating technology and a strong approach to managing secrets into your workflow enhances your organization’s security posture, minimizing the chances of unauthorized entry and data breaches.

Appreciating this pivotal aspect of cybersecurity makes it clear that it’s not merely about awareness but also about action. This is where solutions like Entro come into play. Specifically designed to tackle the challenges we’ve explored, Entro offers a comprehensive approach to secrets management that transcends basic scanning. It provides the crucial context needed for effective remediation and prevention, and it uses this context to create a dynamic threat model map, positioning your organization a step ahead in the face of security threats.

Protecting your organization’s sensitive data is too critical to be left to chance. As such, it’s time to harness the power of proactive and strategic management of exposed secrets. Check out our use cases to explore how Entro can empower you to strengthen your organization’s security posture.

Book a demo to learn more about Entro and how it can benefit your organization.



