FTC Fines Mental Health Startup Cerebral $7 Million for Major Privacy Violations

Apr 16, 2024 | Newsroom | Privacy Breach / Regulatory Compliance


The U.S. Federal Trade Commission (FTC) has barred the mental telehealth company Cerebral from using or disclosing personal data for advertising purposes.

It has also been fined more than $7 million over charges that it revealed users’ sensitive personal health information and other data to third parties for advertising purposes and failed to honor its easy cancellation policies.

“Cerebral and its former CEO, Kyle Robertson, repeatedly broke their privacy promises to consumers and misled them about the company’s cancellation policies,” the FTC said in a press statement.

While claiming to offer “safe, secure, and discreet” services to get consumers to sign up and provide their data, the company, the FTC alleged, did not clearly disclose that the information would be shared with third parties for advertising.

The agency also accused the company of burying its data sharing practices in dense privacy policies, with the company engaging in deceptive practices by claiming that it would not share users’ data without their consent.


The company is said to have provided the sensitive information of nearly 3.2 million consumers to third parties such as LinkedIn, Snapchat, and TikTok by integrating tracking tools within its websites and apps that are designed to provide advertising and data analytics functions.

The information included names; medical and prescription histories; home and email addresses; phone numbers; birthdates; demographic information; IP addresses; pharmacy and health insurance information; and other health information.
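Tracking integrations like the ones described above are typically third-party script and pixel tags embedded in a site's pages. The following is a minimal sketch of how a privacy audit might flag such embeds by scanning a page for known tracker hosts; the page markup and the small domain list (based on the platforms named in the complaint) are illustrative assumptions, and real audits use far larger blocklists such as EasyPrivacy.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical blocklist covering the ad/analytics platforms named in the
# FTC complaint (LinkedIn, Snapchat, TikTok). Illustrative only.
TRACKER_DOMAINS = {"px.ads.linkedin.com", "sc-static.net", "analytics.tiktok.com"}

class TrackerScanner(HTMLParser):
    """Collects script/img/iframe sources that point at known tracker hosts."""
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src", "")
            host = urlparse(src).netloc
            if host in TRACKER_DOMAINS:
                self.findings.append((tag, host))

# Made-up page markup mimicking a tracking pixel and an analytics script.
page = """
<html><body>
  <script src="https://sc-static.net/scevent.min.js"></script>
  <img src="https://px.ads.linkedin.com/collect?pid=123" width="1" height="1">
  <script src="/static/app.js"></script>
</body></html>
"""

scanner = TrackerScanner()
scanner.feed(page)
print(scanner.findings)
```

The first-party script (`/static/app.js`) is ignored because its URL has no tracker host, which is the basic distinction such audits draw.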

The FTC complaint further accused Cerebral of failing to enforce adequate security guardrails by allowing former employees to access users’ medical records from May to December 2021, using insecure access methods that exposed patient information, and not restricting access to consumer data to only those employees who needed it.

“Cerebral sent out promotional postcards, which were not in envelopes, to over 6,000 patients that included their names and language that appeared to reveal their diagnosis and treatment to anyone who saw the postcards,” the FTC said.

Pursuant to the proposed order, which is pending approval from a federal court, the company has been barred from using or disclosing consumers’ personal and health information to third parties for marketing, and has been ordered to implement a comprehensive privacy and data security program.

Cerebral has also been asked to post a notice on its website alerting users of the FTC order, as well as adopt a data retention schedule and delete most consumer data not used for treatment, payment, or health care operations, unless consumers have consented to its retention. It’s also required to provide a mechanism for users to get their data deleted.
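A retention schedule of the kind the order mandates boils down to a periodic purge rule: delete records older than a cutoff unless they serve a permitted purpose or the user consented to retention. This is a minimal sketch under assumed record fields and an assumed two-year window, neither of which comes from the order itself.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365 * 2)  # hypothetical two-year window

def records_to_delete(records, now):
    """Return IDs of records a retention schedule would purge: older than
    the window, not needed for treatment, payment, or health care
    operations, and lacking user consent to be kept."""
    permitted = {"treatment", "payment", "operations"}
    return [
        r["id"] for r in records
        if now - r["created"] > RETENTION
        and r["purpose"] not in permitted
        and not r["consented"]
    ]

now = datetime(2024, 4, 16, tzinfo=timezone.utc)
records = [
    {"id": 1, "created": now - timedelta(days=900), "purpose": "marketing", "consented": False},
    {"id": 2, "created": now - timedelta(days=900), "purpose": "treatment", "consented": False},
    {"id": 3, "created": now - timedelta(days=30),  "purpose": "marketing", "consented": False},
    {"id": 4, "created": now - timedelta(days=900), "purpose": "marketing", "consented": True},
]
print(records_to_delete(records, now))  # prints [1]
```

Only record 1 is purged: record 2 serves treatment, record 3 is inside the window, and record 4 has consent.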

The development comes days after the FTC prohibited alcohol addiction treatment firm Monument from disclosing health information to third-party platforms such as Google and Meta for advertising without users’ permission, after the company did so between 2020 and 2022 despite claiming such data would be “100% confidential.”

The New York-based company has been ordered to notify users about the disclosure of their health information to third parties and ensure that all the shared data has been deleted.


“Monument failed to ensure it was complying with its promises and in fact disclosed users’ health information to third-party advertising platforms, including highly sensitive data that revealed that its customers were receiving help to recover from their addiction to alcohol,” the FTC said.

Over the past year, the FTC has announced similar enforcement actions against healthcare service providers like BetterHelp, GoodRx, and Premom for sharing users’ data with third-party analytics and social media firms without their consent.

It also warned [PDF] Amazon against using patient data for marketing purposes after it finalized a $3.9 billion acquisition of membership-based primary care practice One Medical.

Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Jan 30, 2024 | Newsroom | Generative AI / Data Privacy


Italy’s data protection authority (DPA) has notified ChatGPT-maker OpenAI that it allegedly violated privacy laws in the country.

“The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation],” the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday.

It also said it will “take account of the work in progress within the ad-hoc task force set up by the European Data Protection Board (EDPB) in its final determination on the case.”

The development comes nearly 10 months after the watchdog imposed a temporary ban on ChatGPT in the country, weeks after which OpenAI announced a number of privacy controls, including an opt-out form to remove one’s personal data from being processed by the large language model (LLM). Access to the tool was subsequently reinstated in late April 2023.


The Italian DPA said the latest findings, which have not been publicly disclosed, are the result of a multi-month investigation that was initiated at the same time. OpenAI has been given 30 days to respond to the allegations.

The BBC reported that the transgressions relate to the collection of personal data and to age protections. OpenAI, in its help page, says that “ChatGPT is not meant for children under 13, and we require that children ages 13 to 18 obtain parental consent before using ChatGPT.”
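The quoted policy amounts to a simple age gate: under-13s are blocked, and those aged 13 to 18 need parental consent. A minimal sketch of that rule, with the function name and return labels being illustrative inventions rather than anything from OpenAI:

```python
def access_decision(age, parental_consent=False):
    """Hypothetical age gate following the quoted help-page policy:
    under 13 blocked; ages 13 to 18 allowed only with parental consent."""
    if age < 13:
        return "blocked"
    if age <= 18:
        return "allowed" if parental_consent else "consent_required"
    return "allowed"

print(access_decision(10))        # prints blocked
print(access_decision(15))        # prints consent_required
print(access_decision(15, True))  # prints allowed
```

The harder regulatory question, of course, is verifying the declared age at all, which is part of what the Garante is probing.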

But there are also concerns that sensitive information could be exposed and that younger users may encounter inappropriate content generated by the chatbot.

Indeed, Ars Technica reported this week that ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users who are said to be employees of a pharmacy prescription drug portal.

Then in September 2023, Google’s Bard chatbot was found to have a bug in the sharing feature that allowed private chats to be indexed by Google search, inadvertently exposing sensitive information that may have been shared in the conversations.

Generative artificial intelligence tools like ChatGPT, Bard, and Anthropic Claude rely on being fed large amounts of data from multiple sources on the internet.

In a statement shared with TechCrunch, OpenAI said its “practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy.”

Apple Warns Against Proposed U.K. Law

The development comes as Apple said it’s “deeply concerned” that proposed amendments to the U.K. Investigatory Powers Act (IPA) could give the government unprecedented power to “secretly veto” privacy and security updates to its products and services.

“It’s an unprecedented overreach by the government and, if enacted, the U.K. could attempt to secretly veto new user protections globally preventing us from ever offering them to customers,” the tech giant told BBC.

The U.K. Home Office said adopting secure communications technologies, including end-to-end encryption, cannot come at the cost of public safety or of protecting the nation from child sexual abusers and terrorists.


The changes are aimed at improving the intelligence services’ ability to “respond with greater agility and speed to existing and emerging threats to national security.”

Specifically, they require technology companies that field government data requests to notify the U.K. government of any technical changes that could affect their “existing lawful access capabilities.”

“A key driver for this amendment is to give operational partners time to understand the change and adapt their investigative techniques where necessary, which may in some circumstances be all that is required to maintain lawful access,” the government notes in a fact sheet, adding “it does not provide powers for the Secretary of State to approve or refuse technical changes.”

Apple, in July 2023, said it would rather stop offering iMessage and FaceTime services in the U.K. than compromise on users’ privacy and security.
