Class Action Claims Meta Smart Glasses Collected Biometric Data From Minors

Meta is being sued in federal court for collecting intimate biometric data and video footage from children without adequate user consent, then sharing that footage with overseas data annotators who reviewed users undressing, using the toilet, and engaging in sexual activity.

On March 4, 2026, plaintiffs Gina Bartone of New Jersey and Mateo Canu of California filed the class action lawsuit Bartone v. Meta Platforms Inc. in the U.S. District Court for the Northern District of California, alleging that Meta’s Ray-Ban AI smart glasses—the company’s consumer-facing wearable cameras—collected sensitive biometric and video data not only from users but also from their minor children, then violated privacy obligations by sharing unredacted footage with data annotators employed by a subcontractor in Kenya. The lawsuit targets both Meta Platforms Inc. and Luxottica of America Inc., the eyewear manufacturer behind Ray-Ban, and seeks monetary damages and injunctive relief to halt the alleged deception.

How Did Meta Collect Biometric Data from Children Without Parental Consent?

The lawsuit centers on a fundamental design flaw in how Meta implemented the Ray-Ban AI smart glasses and handled the data those glasses captured. The glasses contain built-in cameras that automatically record first-person video footage as users wear them throughout the day. Critically, the footage captured includes not only what the wearer sees, but also intimate moments: people undressing, using the toilet, showering, and engaging in sexual activity. More troubling still for children’s privacy, the glasses also captured footage of users’ minor children in these intimate settings without the explicit consent of parents or guardians.

Meta’s own marketing claimed the device was “designed for privacy, controlled by you,” yet the company systematically shared this footage—including footage of minors—with overseas contractors for annotation and analysis.

The mechanics of this data collection involved what Meta calls “data annotators”: human workers employed by Sama, a Kenya-based contractor working on Meta’s behalf, who were tasked with reviewing and labeling video footage to train Meta’s AI models. These annotators accessed raw, unredacted video footage without sufficient safeguards or user notification. When users—especially parents using the smart glasses in homes where their children were present—activated video features, they had no way to know that footage containing their children would be viewed by workers on the other side of the world. The exposure revealed that annotators in Nairobi reviewed footage depicting intimate family moments, a clear violation of what a reasonable person would understand by Meta’s privacy-first marketing.

When Did This Privacy Breach Come to Light and How Was It Discovered?

The exposure began in early March 2026 when Swedish investigative journalists at Svenska Dagbladet and Göteborgs-Posten published an investigation revealing that Meta had been sending sensitive video footage to data annotators in Kenya without proper user disclosure. Between March 3-5, 2026, these publications detailed how users’ intimate moments—including nudity, sexual activity, and vulnerable household scenes involving minors—were being reviewed by overseas contractors. The investigative work came as a shock to users who believed their footage stayed within Meta’s systems or was processed with privacy protections in place.

By March 5, 2026, regulatory authorities had begun responding. The United Kingdom’s Information Commissioner’s Office (ICO) publicly confirmed it was writing to Meta regarding data protection compliance, signaling that European privacy regulators viewed the revelation as a serious breach of data protection law. However, the fact that investigative journalists had to expose this practice—rather than Meta disclosing it proactively—demonstrates how opaque the company’s data handling practices had been. The timing is significant: Meta had been selling these glasses to consumers for months or longer while keeping this annotation process hidden from the public record.

Timeline of the Meta smart glasses privacy exposure and legal action (months, relative scale): glasses rollout (month 0); exposure investigation (month 1); ICO statement (month 2); class action filed (month 3); today (month 4). Source: Court filings (Bartone v. Meta, N.D. Cal., Case No. 3:26-cv-01897); Svenska Dagbladet and Göteborgs-Posten reporting; UK ICO statement (March 5, 2026).

What Legal Claims Is the Class Action Bringing Against Meta?

The complaint in Bartone v. Meta Platforms Inc. asserts 10 separate causes of action, a comprehensive legal strategy designed to attack Meta’s conduct from multiple angles. The claims include violations of California’s Unfair Competition Law (UCL) and False Advertising Law (FAL), violations of California’s Consumers Legal Remedies Act (CLRA), and violations of the New Jersey Consumer Fraud Act. Beyond these consumer protection statutes, the plaintiffs also assert traditional fraud claims: fraud by misrepresentation, fraud by concealment, fraud by omission, and negligent misrepresentation.

Finally, the complaint includes breach of contract and breach of implied warranty claims, arguing that consumers entered into agreements with Meta based on privacy promises the company did not honor. The breadth of these claims is intentional. By casting the net widely across state consumer protection statutes and common-law theories, the plaintiffs maximize the number of grounds on which a court might find liability. Each theory has different evidentiary requirements and potential remedies, so the legal redundancy increases the chance that at least some claims survive early motions to dismiss. The complaint also seeks both monetary damages—compensating class members for the invasion of privacy and unauthorized use of their data—and injunctive relief, meaning a court order forcing Meta to stop making privacy claims that the plaintiffs contend are false.

What False Privacy Claims Did Meta Make That Contradict This Conduct?

Meta’s marketing materials and promotional language directly contradicted what the company was actually doing with user data. The company prominently advertised the Ray-Ban AI smart glasses as “designed for privacy, controlled by you,” a core messaging pillar in the product’s positioning. Additional claims included “built for your privacy” and “you’re in control of your data and content.” These statements created an affirmative impression that users’ footage would remain under their control and protected by privacy safeguards. The false advertising claim hinges on this gap: Meta made specific, affirmative privacy promises to induce consumers to purchase the glasses, then operated the glasses in a manner that directly violated those promises.

The problem is not merely that Meta failed to mention data annotation; it’s that Meta affirmatively promised privacy and control while secretly running a large-scale data-sharing operation with overseas contractors. The lawsuit treats this as intentional deception rather than mere omission. Had Meta stated upfront—for example, in product documentation or terms of service—that “your video footage will be reviewed by data annotation contractors located outside the United States without redaction, including footage of your minor children,” most consumers would have been able to make an informed choice. Instead, Meta obscured this practice while marketing privacy as the product’s core benefit. This alleged deception forms the foundation of the false advertising claims.

Who Is Included in This Class Action and How Large Is the Affected Population?

The class definition in this lawsuit is broad but precisely drawn. It covers not only individuals who purchased and used Meta’s Ray-Ban AI smart glasses, but also individuals whose minor children appear in footage captured by those glasses. This is a crucial feature: a parent who never owned the glasses but whose child was filmed in intimate moments by another wearer would still be part of the class. The lawsuit targets approximately 7 million Ray-Ban AI glasses users globally, meaning the potential class could encompass tens of millions of people once the children captured in footage are counted. One limitation: the named plaintiffs are residents of California and New Jersey, so the geographical scope will depend on whether a court certifies a nationwide or even international class.

A critical caveat: not everyone who was near a Ray-Ban AI glasses wearer qualifies for the class. The class is limited to individuals whose footage was actually captured and, more importantly, shared with data annotators outside the United States. If footage was captured but never sent for annotation, that individual may not have been harmed in the way the lawsuit alleges. Likewise, the minors’ protection angle applies only where a minor was actually recorded by someone else’s glasses; merely wearing the glasses in a public setting where strangers were present would not, by itself, bring bystanders’ children within the class definition. The scope of the class remains subject to judicial determination, which will significantly affect how many people qualify for recovery.

What Regulatory Agencies Are Investigating This Conduct?

Beyond the private class action lawsuit, government regulators have begun examining Meta’s practices. The UK Information Commissioner’s Office (ICO) issued a public statement on March 5, 2026, indicating it was writing to Meta about data protection compliance in light of the revelations. This is significant because the ICO enforces the UK’s Data Protection Act 2018 and the UK General Data Protection Regulation (UK GDPR), which set strict requirements for the processing of biometric data and data involving minors. Under the UK GDPR, processing biometric data of minors without explicit parental consent is generally prohibited, making the ICO’s scrutiny the beginning of what could become formal regulatory action.

Other regulatory bodies are likely to follow the ICO’s lead, including state privacy regulators such as the California Privacy Protection Agency, as well as the Federal Trade Commission. The FTC has a history of taking action against companies that make false privacy claims, so Meta’s alleged deceptive marketing could trigger a separate regulatory investigation. The parallel regulatory pathway is important for consumers because even if the class action lawsuit succeeds, regulatory fines and orders can compel broader changes to Meta’s data handling practices across all users, not just class members.

What Does This Case Signal About the Future of Smart Glasses Privacy?

The Bartone lawsuit and its underlying facts represent a critical test case for how privacy law will apply to the next generation of wearable technology. Smart glasses, AR headsets, and other first-person recording devices are becoming mainstream consumer products, with Meta, Apple, and other companies racing to market. This case signals to manufacturers that the privacy protections required by law and expected by consumers cannot be bypassed through opaque contractual language or overseas data practices. If Meta loses—or if the case is settled on terms favorable to plaintiffs—it could establish a precedent that wearable camera manufacturers must obtain explicit, informed consent before sharing intimate video footage with human reviewers, particularly footage involving minors.

The broader implication is that “privacy by design” marketing claims now carry legal weight and will be scrutinized in discovery and at trial. Companies cannot simply assert privacy-first positioning while operating entirely different data-sharing practices behind the scenes. The revelation that 7 million users globally were unknowingly participating in a large-scale data annotation program raises questions about whether current consent mechanisms—usually buried in terms of service—are adequate for such significant privacy intrusions. Future smart glasses manufacturers may need to implement more transparent notification systems, opt-in rather than opt-out consent for data annotation, and stricter safeguards for footage involving children.
