Yes. Clearview AI faced a federal class action lawsuit over its unauthorized scraping of LinkedIn photos, and a landmark settlement was approved on March 20, 2025. U.S. District Judge Sharon Johnson Coleman of the Northern District of Illinois granted class members a 23% equity stake in Clearview AI as compensation, a creative alternative to traditional cash settlements that reflects the company's startup status and the difficulty of quantifying damages from facial recognition surveillance. The settlement, valued at approximately $51.75 million, targets a database of over 60 billion facial images that Clearview built by scraping publicly available photos from LinkedIn, Google, YouTube, Facebook, Venmo, Twitter, news websites, and other sources without user consent. This article explains how Clearview AI scraped LinkedIn in violation of platform terms, what laws were broken, how the equity-based settlement works, and what it means for biometric privacy enforcement going forward.
Table of Contents
- How Did Clearview AI Scrape LinkedIn Photos and Violate Platform Terms?
- The Scale of Data Collection and Privacy Impact
- The Legal Violations Under Illinois BIPA and Other Privacy Laws
- The Innovative Equity-Based Settlement Structure
- Settlement Triggers and When Class Members Will Be Compensated
- Global Enforcement Actions Against Clearview AI
- Future Outlook for Facial Recognition and Biometric Privacy Litigation
How Did Clearview AI Scrape LinkedIn Photos and Violate Platform Terms?
Clearview AI built its massive facial recognition database by harvesting photos from publicly accessible web sources, including LinkedIn profiles where millions of professionals posted headshots. The company’s scraping violated LinkedIn’s terms of service, which explicitly prohibit automated collection of profile data and images. LinkedIn users posted these photos voluntarily on the platform, but they did not consent to Clearview downloading and storing them for facial recognition purposes—a critical distinction that regulators have emphasized. Clearview argued that publicly visible data was fair game, but courts disagreed: the mere fact that information is publicly accessible on the internet does not automatically grant third parties the right to collect and repurpose it, especially when the original platform’s terms forbid such use.
The scraping happened at scale through automated bots that Clearview deployed across the web. When LinkedIn discovered the unauthorized collection in early 2020, it sent Clearview a cease-and-desist demand, as did Google, YouTube, Twitter, and Facebook; Venmo's data was likewise harvested without consent or cooperation. What made Clearview's approach particularly problematic was not just the volume of data, but the explicit purpose: to build a facial recognition engine that could identify people from photos without their knowledge. This went beyond passive data collection to active surveillance infrastructure.

The Scale of Data Collection and Privacy Impact
Clearview AI's database contained over 60 billion facial images by the time of the settlement, a staggering scope that illustrates how quickly surveillance infrastructure can accumulate data in the digital age. To put this in perspective, with a global population of approximately 8 billion people, that works out to roughly seven images for every person on Earth, or multiple facial profiles per person. The scraping wasn't limited to LinkedIn; it included images from Facebook profiles, Google Images, news websites, Twitter, Venmo, and hundreds of other sources, creating a comprehensive facial recognition system trained on the faces of journalists, activists, political figures, and ordinary people.
The privacy impact was severe and ongoing. Once Clearview possessed these facial images, law enforcement agencies and private companies could query the database to identify anyone in a photo, potentially without that person’s knowledge. A journalist photographed at a protest, a domestic violence survivor at a shelter, or a dissident activist in a repressive country could all be identified and located through Clearview’s database. Unlike fingerprinting, which historically required consent or arrest, facial recognition can operate silently and at scale, creating a chilling effect on freedom of assembly and expression.
The Legal Violations Under Illinois BIPA and Other Privacy Laws
The plaintiffs alleged violations of the Illinois Biometric Information Privacy Act (BIPA), a major state privacy law that requires companies to obtain explicit written consent before collecting biometric data like facial geometry. BIPA treats biometric information as a special category of personally identifiable information that deserves extra protection because it cannot be changed if compromised—unlike a password, you cannot get new fingerprints or facial features. Clearview violated this law by collecting billions of facial images without consent or notice, and without establishing a proper retention schedule or deletion procedures. The law applies even when data is publicly available, because BIPA protects the right to control how your biometric data is collected and used, not just whether it’s visible.
Illinois BIPA is unusual in allowing private individuals to sue without proving actual harm, which is why it has become the favorite tool for facial recognition litigation. Other states have since enacted similar laws, and the Federal Trade Commission has increased enforcement against biometric privacy abuses. Clearview's case showed that even a company providing tools to law enforcement and private companies cannot escape liability under state privacy laws. The settlement sends a signal to other facial recognition companies that scraping biometric data at scale carries legal risk, though BIPA's private right of action covers only Illinois residents, which is why plaintiffs have pursued multistate class actions to extend similar protections elsewhere.

The Innovative Equity-Based Settlement Structure
Instead of paying cash directly to class members, the settlement granted them a 23% equity stake in Clearview AI itself. This unconventional approach was driven by the practical reality that Clearview AI, despite its surveillance capabilities, was a private company with limited liquid assets and an uncertain financial future. Under traditional settlement logic, class members would have received small cash payments (perhaps $10 to $50 per person after attorney fees) and the company would have reduced its operations or shut down entirely. Instead, this settlement aligned the interests of plaintiffs with the company's future success, creating an incentive for the company to become profitable and valuable. The equity stake converts to cash through several possible triggering events.
First, if Clearview AI has an initial public offering (IPO), the equity converts to shares or cash at that time. Second, if the company is acquired or merged, the equity converts based on the transaction value. Third, if neither an IPO nor acquisition occurs by December 31, 2027, Clearview has agreed to make payments equal to 17% of its annual revenue to the class until the equity stake is fully satisfied. Finally, class members themselves can elect to sell their equity stake, potentially to Clearview or another buyer, providing an earlier exit option. This structure gives the settlement flexibility and acknowledges that startup valuations are speculative.
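The trigger structure above can be sketched as a simple payout model. This is an illustrative sketch only: the 23% stake, the 17% revenue-share option, and the December 31, 2027 deadline come from the settlement as described, but the function, its names, and every dollar figure passed to it are hypothetical.

```python
from datetime import date

# Figures taken from the settlement as described in the article;
# everything else here is an illustrative assumption.
EQUITY_STAKE = 0.23           # class members' stake in Clearview AI
REVENUE_SHARE = 0.17          # annual revenue share if no liquidity event
DEADLINE = date(2027, 12, 31) # date by which an IPO or acquisition must occur

def class_payout(event: str, amount: float, when: date) -> float:
    """Gross class recovery for a hypothetical triggering event.

    event  -- "ipo" or "acquisition": the equity converts at company value;
              "revenue": the annual revenue share owed after the deadline.
    amount -- company valuation (ipo/acquisition) or annual revenue (revenue).
    when   -- date the event occurs.
    """
    if event in ("ipo", "acquisition"):
        return EQUITY_STAKE * amount
    if event == "revenue":
        if when <= DEADLINE:
            return 0.0  # revenue sharing only begins once the deadline passes
        return REVENUE_SHARE * amount
    raise ValueError(f"unknown trigger: {event}")

# A hypothetical $225 million acquisition would value the class's stake at
# about $51.75 million, matching the settlement's approximate valuation.
print(class_payout("acquisition", 225_000_000, date(2026, 6, 1)))
```

Note how a $225 million exit reproduces the roughly $51.75 million settlement valuation: the headline number is simply 23% of the company's assessed value.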
Settlement Triggers and When Class Members Will Be Compensated
The timeline for actual compensation depends on which settlement trigger event occurs first. If Clearview pursues an IPO or is acquired before December 31, 2027, compensation could arrive within months or a couple of years. However, as of early 2026, Clearview AI remains private with no publicly announced IPO plans, making the revenue-sharing option the most likely near-term path to compensation. Starting January 1, 2028, if neither an IPO nor acquisition has occurred, Clearview must calculate 17% of its annual revenue and distribute it to class members according to their share of the settlement.
Class members should expect this to be a multi-year process unless market conditions suddenly shift. A limitation of this structure is that if Clearview AI’s business contracts or becomes unprofitable, the 17% of revenue could mean very modest payments. If the company generates $10 million in revenue annually, 17% would be $1.7 million split among potentially millions of class members. Conversely, if Clearview becomes a high-revenue company—particularly if it expands beyond law enforcement to mainstream commercial use—the equity stake could become extremely valuable. The equity-based settlement essentially converted injured plaintiffs into equity holders with a long-term interest in the company’s financial performance, which is unusual and carries both upside potential and downside risk.
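The per-member arithmetic can be made concrete with a back-of-the-envelope calculation. Only the 17% figure comes from the settlement; the revenue and class-size numbers below are hypothetical round values used purely for illustration.

```python
def per_member_payment(annual_revenue: float, class_size: int) -> float:
    """Split 17% of annual revenue evenly across class members.

    Hypothetical illustration: actual distributions would depend on each
    member's share of the settlement, not an even split.
    """
    return 0.17 * annual_revenue / class_size

# $10 million in annual revenue split among 2 million class members:
print(f"${per_member_payment(10_000_000, 2_000_000):.2f}")  # prints "$0.85"
```

At those assumed numbers the revenue-sharing trigger yields well under a dollar per person per year, which is why the upside scenarios (IPO or acquisition at a high valuation) matter so much to the class.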

Global Enforcement Actions Against Clearview AI
Beyond the U.S. class action, Clearview AI faced regulatory enforcement worldwide. The United Kingdom's Information Commissioner's Office (ICO) fined Clearview £7.5 million in May 2022 and ordered the company to delete data on all UK residents. The ICO's position was that even companies operating outside the United Kingdom cannot lawfully scrape UK residents' data in violation of UK data protection law, though Clearview has contested the ICO's jurisdiction in the UK tribunals.
The ICO was not alone: Canada's Privacy Commissioner concluded in 2021 that Clearview's scraping violated Canadian privacy law, Australia's privacy regulator ordered the company to delete Australians' data, and data protection authorities in France, Italy, and Greece each imposed €20 million fines. Yet the regulatory pressure has not deterred all business development for Clearview. In February 2026, the company signed a one-year contract valued at $225,000 with U.S. Customs and Border Protection, demonstrating continued government demand for facial recognition tools despite privacy concerns. This contract illustrates the tension between law enforcement adoption of surveillance technology and public and judicial skepticism about its privacy implications.
Future Outlook for Facial Recognition and Biometric Privacy Litigation
The Clearview AI settlement will not be the last major facial recognition case; Facebook paid $650 million in 2021 to settle its own BIPA class action over photo tagging, and Google, Meta, Amazon, and other large tech companies face similar lawsuits over facial recognition and photo scraping. The equity-based settlement structure, approved by a federal judge, could become a template for other startup-scale companies facing biometric privacy claims, offering an alternative to bankruptcy or shutdown.
At the same time, the case highlights the limits of litigation in regulating surveillance technology. A private settlement and equity stake do not prevent Clearview AI from continuing its operations, training its facial recognition model on its existing 60 billion images, or licensing its tools to law enforcement and other buyers. Real restrictions on facial recognition would require legislative action at federal and state levels, though the Clearview case demonstrates that state biometric privacy laws like BIPA can create meaningful financial consequences.