Privacy – INDIA NEWS (https://www.indiavpn.org)

FTC Fines Mental Health Startup Cerebral $7 Million for Major Privacy Violations
https://www.indiavpn.org/2024/04/16/ftc-fines-mental-health-startup-cerebral-7-million-for-major-privacy-violations/

Apr 16, 2024 | Newsroom | Privacy Breach / Regulatory Compliance


The U.S. Federal Trade Commission (FTC) has barred the mental telehealth company Cerebral from using or disclosing personal data for advertising purposes.

It has also been fined more than $7 million over charges that it revealed users’ sensitive personal health information and other data to third parties for advertising purposes and failed to honor its easy cancellation policies.

“Cerebral and its former CEO, Kyle Robertson, repeatedly broke their privacy promises to consumers and misled them about the company’s cancellation policies,” the FTC said in a press statement.

While claiming to offer “safe, secure, and discreet” services in order to get consumers to sign up and provide their data, the company, the FTC alleged, did not clearly disclose that the information would be shared with third parties for advertising.

The agency also accused the company of burying its data sharing practices in dense privacy policies, with the company engaging in deceptive practices by claiming that it would not share users’ data without their consent.


The company is said to have provided the sensitive information of nearly 3.2 million consumers to third parties such as LinkedIn, Snapchat, and TikTok by integrating tracking tools within its websites and apps that are designed to provide advertising and data analytics functions.

The information included names; medical and prescription histories; home and email addresses; phone numbers; birthdates; demographic information; IP addresses; pharmacy and health insurance information; and other health information.

The FTC complaint further accused Cerebral of failing to enforce adequate security guardrails by allowing former employees to access users’ medical records from May to December 2021, using insecure access methods that exposed patient information, and not restricting access to consumer data to only those employees who needed it.

“Cerebral sent out promotional postcards, which were not in envelopes, to over 6,000 patients that included their names and language that appeared to reveal their diagnosis and treatment to anyone who saw the postcards,” the FTC said.

Pursuant to the proposed order, which is pending approval from a federal court, the company has been barred from using or disclosing consumers’ personal and health information to third parties for marketing, and has been ordered to implement a comprehensive privacy and data security program.

Cerebral has also been asked to post a notice on its website alerting users to the FTC order, adopt a data retention schedule, and delete most consumer data not used for treatment, payment, or health care operations, unless consumers have consented to its retention. It’s also required to provide a mechanism for users to have their data deleted.

The development comes days after the FTC prohibited alcohol addiction treatment firm Monument from disclosing health information to third-party platforms such as Google and Meta for advertising without users’ permission, after the company shared such data between 2020 and 2022 despite claiming it would be “100% confidential.”

The New York-based company has been ordered to notify users about the disclosure of their health information to third parties and ensure that all the shared data has been deleted.


“Monument failed to ensure it was complying with its promises and in fact disclosed users’ health information to third-party advertising platforms, including highly sensitive data that revealed that its customers were receiving help to recover from their addiction to alcohol,” the FTC said.

Over the past year, the FTC has announced similar enforcement actions against healthcare service providers like BetterHelp, GoodRx, and Premom for sharing users’ data with third-party analytics and social media firms without their consent.

It also warned [PDF] Amazon against using patient data for marketing purposes after it finalized a $3.9 billion acquisition of membership-based primary care practice One Medical.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.



Google to Delete Billions of Browsing Records in ‘Incognito Mode’ Privacy Lawsuit Settlement
https://www.indiavpn.org/2024/04/02/google-to-delete-billions-of-browsing-records-in-incognito-mode-privacy-lawsuit-settlement/

Apr 02, 2024 | Newsroom | Browser Security / Data Security


Google has agreed to purge billions of data records reflecting users’ browsing activities to settle a class action lawsuit that claimed the search giant tracked them without their knowledge or consent in its Chrome browser.

The class action, filed in 2020, alleged that the company misled users who believed their internet browsing activity remained private when using the “incognito” or “private” mode on web browsers like Chrome.

In late December 2023, it emerged that the company had consented to settle the lawsuit. The deal is currently pending approval by U.S. District Judge Yvonne Gonzalez Rogers.

“The settlement provides broad relief regardless of any challenges presented by Google’s limited record keeping,” a court filing on April 1, 2024, said.

“Much of the private browsing data in these logs will be deleted in their entirety, including billions of event level data records that reflect class members’ private browsing activities.”


As part of the data remediation process, Google is also required to delete information that makes private browsing data identifiable by redacting data points like IP addresses, generalizing User-Agent strings, and removing detailed URLs within a specific website (i.e., retaining only the domain-level portion of the URL).
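
These redaction steps can be sketched in a few lines of Python. This is an illustrative approximation of the transformations described; the field names and exact rules are assumptions, not Google's actual pipeline:

```python
from urllib.parse import urlparse

def redact_record(record: dict) -> dict:
    """Strip identifying detail from a single browsing-log record (illustrative)."""
    out = dict(record)
    # Coarsen the IP address by zeroing the final octet.
    octets = out["ip"].split(".")
    out["ip"] = ".".join(octets[:3] + ["0"])
    # Generalize the User-Agent down to the browser family only.
    out["user_agent"] = out["user_agent"].split("/")[0]
    # Keep only the domain-level portion of the visited URL.
    out["url"] = urlparse(out["url"]).netloc
    return out

record = {
    "ip": "203.0.113.47",
    "user_agent": "Chrome/122.0.6261.94",
    "url": "https://example.com/account/settings?session=abc123",
}
print(redact_record(record))
```

After redaction, the record retains enough shape for aggregate analytics (browser family, domain visited) while the fields that single out an individual visit are generalized away.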

In addition, it has been asked to delete the so-called X-Client-Data header field, which Google described as a Chrome-Variations header that captures the “state of the installation of Chrome itself, including active variations, as well as server-side experiments that may affect the installation.”

This header is generated from a randomized seed value, making it potentially unique enough to identify specific Chrome users.

Other settlement terms require Google to block third-party cookies within Chrome’s Incognito Mode for five years, a setting the company has already implemented for all users. The tech company has separately announced plans to eliminate tracking cookies by default by the end of the year.

Google has since updated the wording of Incognito Mode in January 2024 to clarify that the setting will not change “how data is collected by websites you visit and the services they use, including Google.”


The lawsuit extracted admissions from Google employees that characterized the browser’s Incognito browsing mode as a “confusing mess,” “effectively a lie,” and a “problem of professional ethics and basic honesty.”

It further laid bare internal exchanges in which executives argued Incognito Mode shouldn’t be called “private” because it risked “exacerbating known misconceptions.”

The development comes as Google said it has started automatically blocking bulk senders in Gmail that don’t meet its Email sender guidelines in an attempt to cut down on spam and phishing attacks.

The new requirements make it mandatory for email senders who push out more than 5,000 messages per day to Gmail accounts to provide a one-click unsubscribe option and respond to unsubscription requests within two days.
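
The one-click unsubscribe requirement corresponds to the standardized List-Unsubscribe headers defined in RFC 8058. A minimal sketch using Python's standard email library, with placeholder addresses and URLs:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "news@example.com"
msg["To"] = "subscriber@example.com"
msg["Subject"] = "Weekly digest"
# Both headers are needed for one-click unsubscribe: the first advertises the
# unsubscribe endpoints, the second signals that a bare POST request (with no
# confirmation page) is sufficient to opt the recipient out.
msg["List-Unsubscribe"] = "<https://example.com/unsub?u=123>, <mailto:unsub@example.com>"
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("This week's stories...")
print(msg["List-Unsubscribe"])
```

A receiving provider that supports RFC 8058 can then render an unsubscribe control next to the message and honor it with a single POST to the advertised URL.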

Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations
https://www.indiavpn.org/2024/01/30/italian-data-protection-watchdog-accuses-chatgpt-of-privacy-violations/

Jan 30, 2024 | Newsroom | Generative AI / Data Privacy


Italy’s data protection authority (DPA) has notified ChatGPT-maker OpenAI of alleged violations of privacy laws in the region.

“The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation],” the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday.

It also said it will “take account of the work in progress within the ad-hoc task force set up by the European Data Protection Board (EDPB) in its final determination on the case.”

The development comes nearly 10 months after the watchdog imposed a temporary ban on ChatGPT in the country, weeks after which OpenAI announced a number of privacy controls, including an opt-out form to remove one’s personal data from being processed by the large language model (LLM). Access to the tool was subsequently reinstated in late April 2023.


The Italian DPA said the latest findings, which have not been publicly disclosed, are the result of a multi-month investigation that was initiated at the same time. OpenAI has been given 30 days to respond to the allegations.

The BBC reported that the transgressions are related to the collection of personal data and age protections. OpenAI, on its help page, says that “ChatGPT is not meant for children under 13, and we require that children ages 13 to 18 obtain parental consent before using ChatGPT.”

But there are also concerns that sensitive information could be exposed, and that younger users may encounter inappropriate content generated by the chatbot.

Indeed, Ars Technica reported this week that ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users who are said to be employees of a pharmacy prescription drug portal.

Earlier, in September 2023, Google’s Bard chatbot was found to have a bug in its sharing feature that allowed private chats to be indexed by Google search, inadvertently exposing sensitive information that may have been shared in the conversations.

Generative artificial intelligence tools like ChatGPT, Bard, and Anthropic Claude rely on being fed large amounts of data from multiple sources on the internet.

In a statement shared with TechCrunch, OpenAI said its “practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy.”

Apple Warns Against Proposed U.K. Law

The development comes as Apple said it’s “deeply concerned” that proposed amendments to the U.K. Investigatory Powers Act (IPA) could give the government unprecedented power to “secretly veto” privacy and security updates to its products and services.

“It’s an unprecedented overreach by the government and, if enacted, the U.K. could attempt to secretly veto new user protections globally preventing us from ever offering them to customers,” the tech giant told the BBC.

The U.K. Home Office said the adoption of secure communications technologies, including end-to-end encryption, cannot come at the cost of public safety or of protecting the nation from child sexual abusers and terrorists.


The changes are aimed at improving the intelligence services’ ability to “respond with greater agility and speed to existing and emerging threats to national security.”

Specifically, they require technology companies that field government data requests to notify the U.K. government of any technical changes that could affect their “existing lawful access capabilities.”

“A key driver for this amendment is to give operational partners time to understand the change and adapt their investigative techniques where necessary, which may in some circumstances be all that is required to maintain lawful access,” the government notes in a fact sheet, adding “it does not provide powers for the Secretary of State to approve or refuse technical changes.”

Apple, in July 2023, said it would rather stop offering iMessage and FaceTime services in the U.K. than compromise on users’ privacy and security.

The Cookie Privacy Monster in Big Global Retail
https://www.indiavpn.org/2024/01/16/the-cookie-privacy-monster-in-big-global-retail/

Jan 16, 2024 | The Hacker News | Data Security / Privacy Compliance



As a child, did you ever get caught with your hand in the cookie jar and earn yourself a telling-off? Well, even if you can still remember being outed as a cookie monster, the punishments for today’s thieving beasts are worse. Millions of dollars worse.

Cookies are an essential part of modern web analytics. A cookie is a small piece of text data that records website visitor preferences along with their behaviors, and its job is to help personalize their browsing experience. Just as you needed parental consent to access the cookie jar all those years ago, your business now needs to obtain user consent before it injects cookies into a user’s browser and then stores or shares information about their browsing habits.

As custodian of the website cookie jar, your business can’t raid it like you did when you were six. You must get permission in both situations, but these days the punishment can be hefty fines from data privacy regulators and expensive lawsuits from users.
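
The consent-before-cookies rule can be sketched as a simple gate on the Set-Cookie header. The cookie name and the boolean consent flag below are illustrative assumptions, not any specific consent-management platform's API:

```python
from http import cookies

def build_set_cookie(consent_given: bool):
    """Return a Set-Cookie header string only when the visitor has opted in."""
    if not consent_given:
        return None  # no consent recorded, so no tracking cookie is issued
    jar = cookies.SimpleCookie()
    jar["visitor_id"] = "abc123"
    jar["visitor_id"]["max-age"] = 60 * 60 * 24 * 30  # persist for 30 days
    jar["visitor_id"]["samesite"] = "Lax"
    return jar.output(header="Set-Cookie:")

print(build_set_cookie(False))  # None: the jar stays closed without consent
print(build_set_cookie(True))
```

The compliance failures described in the case study amount to skipping that gate: the cookie was set regardless of whether the visitor had ever been shown a consent box.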

A new case study from Reflectiz, a leading website security company, highlights how its advanced exposure management solution saved a major retail industry client from ending up on the naughty step due to a misconfiguration in its cookie management policy. This wasn’t anything malicious like a web skimming or keylogging attack, but with modern web environments being so complex and companies like this one having hundreds of websites to maintain, mistakes can happen, and non-compliance fines can be just an oversight away.

For the full story, you can download the case study here.

A Little About Tracking Cookies

Tracking cookies have been around since the early days of the internet. In 1994, Lou Montulli, a programmer employed by the precursor to Netscape, was working on an e-commerce application for MCI, one of its clients, which had requested a virtual shopping cart. He invented cookies as a way of verifying whether users had visited the site before and remembering their preferences.

Stories began to appear in the news around cookies’ potential to invade privacy, but despite public concern, it wasn’t until 2011 that the European Union enacted legislation to ensure that websites obtain users’ explicit consent before using cookies.

Unauthorized Tracking Without Cookie Consent

In this new case study, a global retail client sought to continuously monitor diverse user journeys on its websites, uncovering that 37 domains were injecting cookies without obtaining proper user consent. The retail company’s conventional security tools remained blind to this issue due to constraints imposed by its organizational VPN, which limited visibility. Furthermore, the rogue and misconfigured cookies were injected into iFrame components, creating challenges for standard security controls like WAFs to monitor effectively. Download the full case study here.

The Client’s Problem: Blinded by VPN

Although the retailer’s platform already had other security solutions in place, it was blind to the problem, which was this: on 37 of its websites, cookie tracking was taking place without obtaining explicit consent from visitors. This was happening via iFrames (which are used to embed content from one website inside another) that were obscured by a VPN. This masked their activities and made the cookie consent issue invisible to the other security solutions.

Although this was a damaging oversight, at least the data was not being sent to malicious actors. Instead, Reflectiz discovered that it was going to a legitimate third-party advertising service.

The High Cost of Non-Compliance

For a company with customers in the European Union, GDPR applies, and a violation of its cookie consent rules is classed as a Tier 2 category offense. Under this regulation, businesses that fail to obtain valid cookie consent could be fined up to 4% of their global annual turnover or €20 million ($21.94 million), whichever amount is larger. This is why having the ability to track the behaviors of every asset connected to a website is so important, and why Reflectiz was such a lifesaver in this instance.
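
The cap works out as the greater of the two amounts. A quick arithmetic sketch with made-up turnover figures:

```python
def gdpr_tier2_cap(annual_turnover_eur: float) -> float:
    """Maximum GDPR Tier 2 fine: the greater of 4% of global annual
    turnover or a fixed 20 million euro floor."""
    return max(0.04 * annual_turnover_eur, 20_000_000)

# A retailer with 2 billion euros in turnover: 4% (80M) exceeds the 20M floor.
print(gdpr_tier2_cap(2_000_000_000))  # 80000000.0
# A smaller firm with 100 million euros in turnover: the 20M floor applies.
print(gdpr_tier2_cap(100_000_000))    # 20000000
```

For a large retailer, the percentage branch dominates, which is why cookie consent oversights at this scale carry eight-figure exposure.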

The Solution

Reflectiz saw what the other solutions couldn’t. It identified the 37 domains where cookies were being used without consent, discovered where the data was being sent (in this case, a legitimate advertiser), and empowered the retailer to fix the problem before it could escalate.

The Reflectiz platform gives companies in the retail, finance, medical, and other sectors the insights they need to maintain compliance with data protection standards and avoid similar incidents that can result in fines, lawsuits, and reputational damage. It’s remotely executed so there’s virtually no performance impact, and the intuitive interface means that employee onboarding is swift.

Key Takeaways

  • Consent Oversight: The platform failed to detect and inform users about certain cookies injected without proper consent, lacking a consent box on the website.
  • VPN Secrecy Unveiled: Reflectiz’s monitoring exposed 37 domains injecting cookies without user approval, traced back to a location initially hidden by an Organizational VPN.
  • Third-Party Data Compromise: Compromised data reached an external domain through unauthorized cookie injections triggered by a specific user journey.
  • Unnoticed iFrame Tracking: Unmonitored iFrame activity contributed to privacy violations by tracking user data without consent.
  • Misconfigured Cookie Threat: A misconfigured cookie facilitated the privacy breach, posing a significant threat to user privacy.
  • Communication Breakdown Lesson: Improved inter-departmental communication, especially between security and marketing, is crucial to prevent issues related to third-party code implementation.
  • Continuous Monitoring Crucial: The case highlights the critical need for continuous monitoring and vigilance in the ever-evolving landscape of online privacy to uphold user trust and comply with data protection regulations.

For more background and an in-depth analysis, you can download the full case study here.

NIST Warns of Security and Privacy Risks from Rapid AI System Deployment
https://www.indiavpn.org/2024/01/08/nist-warns-of-security-and-privacy-risks-from-rapid-ai-system-deployment/

Jan 08, 2024 | Newsroom | Artificial Intelligence / Cyber Security


The U.S. National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges that arise as a result of increased deployment of artificial intelligence (AI) systems in recent years.

“These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to adversely affect the performance of the AI system, and even malicious manipulations, modifications or mere interaction with models to exfiltrate sensitive information about people represented in the data, about the model itself, or proprietary enterprise data,” NIST said.

As AI systems become integrated into online services at a rapid pace, driven in part by the emergence of generative AI systems like OpenAI ChatGPT and Google Bard, the models powering these technologies face a number of threats at various stages of machine learning operations.


These include corrupted training data, security flaws in the software components, data model poisoning, supply chain weaknesses, and privacy breaches arising as a result of prompt injection attacks.

“For the most part, software developers need more people to use their product so it can get better with exposure,” NIST computer scientist Apostol Vassilev said. “But there is no guarantee the exposure will be good. A chatbot can spew out bad or toxic information when prompted with carefully designed language.”

The attacks, which can have significant impacts on availability, integrity, and privacy, are broadly classified as follows –

  • Evasion attacks, which aim to generate adversarial output after a model is deployed
  • Poisoning attacks, which target the training phase of the algorithm by introducing corrupted data
  • Privacy attacks, which aim to glean sensitive information about the system or the data it was trained on by posing questions that circumvent existing guardrails
  • Abuse attacks, which aim to compromise legitimate sources of information, such as a web page with incorrect pieces of information, to repurpose the system’s intended use

Such attacks, NIST said, can be carried out by threat actors with full knowledge of the AI system (white-box), minimal knowledge (black-box), or a partial understanding of some of its aspects (gray-box).
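
As a toy illustration of the poisoning category above, a handful of mislabeled training points is enough to move a nearest-centroid classifier's decision. The data is entirely synthetic; this is a sketch of the concept, not a real attack:

```python
def centroid(points):
    """Mean of a list of one-dimensional feature values."""
    return sum(points) / len(points)

def classify(x, pos, neg):
    """Assign x to whichever class centroid it is closer to."""
    return "pos" if abs(x - centroid(pos)) < abs(x - centroid(neg)) else "neg"

clean_pos = [8.0, 9.0, 10.0]   # centroid 9.0
clean_neg = [0.0, 1.0, 2.0]    # centroid 1.0
print(classify(5.4, clean_pos, clean_neg))  # pos: 5.4 is nearer 9.0 than 1.0

# Poisoning: the attacker injects mislabeled points into the "pos" training
# set, dragging its centroid to -5.5 and flipping the decision for 5.4.
poisoned_pos = clean_pos + [-20.0, -20.0, -20.0]
print(classify(5.4, poisoned_pos, clean_neg))  # neg
```

Real poisoning attacks target far larger models, but the mechanism is the same: corrupting the training phase shifts the decision boundary for inputs the attacker cares about.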


The agency further noted the lack of robust mitigation measures to counter these risks, urging the broader tech community to “come up with better defenses.”

The development arrives more than a month after the U.K., the U.S., and international partners from 16 other countries released guidelines for the development of secure artificial intelligence (AI) systems.

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” Vassilev said. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”

Google Settles $5 Billion Privacy Lawsuit Over Tracking Users in ‘Incognito Mode’
https://www.indiavpn.org/2024/01/02/google-settles-5-billion-privacy-lawsuit-over-tracking-users-in-incognito-mode/

Jan 02, 2024 | Newsroom | Data Privacy / Online Tracking


Google has agreed to settle a lawsuit, filed in June 2020, that alleged the company misled users who believed their internet use remained private when using the “incognito” or “private” mode on web browsers, even as it continued to track their surfing activity.

The class-action lawsuit sought at least $5 billion in damages. The settlement terms were not disclosed.

The plaintiffs had alleged that Google violated federal wiretap laws and tracked users’ activity using Google Analytics to collect information when in private mode.

They said this allowed the company to collect an “unaccountable trove of information” about users who assumed they had taken adequate steps to protect their privacy online.

Google subsequently attempted to get the lawsuit dismissed, pointing to the message it displays when users turn on Chrome’s incognito mode, which informs them that their activity might still be visible to the websites they visit, their employer or school, or their internet service provider.


It’s worth noting that enabling incognito or private mode in a web browser only prevents browsing activity from being saved locally to the browser.

That said, websites using advertising technologies and analytics APIs can still track users within that incognito session and can further correlate that activity by, for example, matching users’ IP addresses.
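
A minimal sketch of why this is so: the server logs every request with the client's IP address and has no visibility into (or interest in) whether the browser is in incognito mode. The IPs and paths here are invented examples:

```python
from collections import defaultdict

# Server-side request log keyed by client IP address.
server_log = defaultdict(list)

def handle_request(ip, path, incognito):
    # The incognito flag never reaches the server; the request arrives with
    # the same IP address either way, so both visits are linked regardless.
    server_log[ip].append(path)

handle_request("198.51.100.7", "/search?q=flights", incognito=False)
handle_request("198.51.100.7", "/search?q=hotels", incognito=True)

# Both visits are attributed to the same visitor despite the private session.
print(server_log["198.51.100.7"])  # ['/search?q=flights', '/search?q=hotels']
```

This is why incognito mode protects against other users of the same device, not against the sites, analytics providers, or network operators on the other end of the connection.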

“Google’s motion hinges on the idea that plaintiffs consented to Google collecting their data while they were browsing in private mode,” U.S. District Judge Yvonne Gonzalez Rogers ruled.

“Because Google never explicitly told users that it does so, the Court cannot find as a matter of law that users explicitly consented to the at-issue data collection.”
