The Facebook $650 million biometric data settlement is one of the largest privacy settlements in U.S. history, requiring the social media company to compensate the approximately 1.6 million Illinois residents who filed claims over its collection and use of their facial recognition data without explicit consent.
If you used Facebook and had your photo uploaded or tagged between 2011 and the settlement agreement in 2020, you were likely part of this class action lawsuit. Each eligible class member who filed a claim received at least $345—and many received substantially more—because Facebook’s facial recognition feature, called “Tag Suggestions,” collected biometric face templates from millions of users without the written permission required by Illinois law.
Table of Contents
- What Is the Facebook Facial Recognition Settlement and How Did It Happen?
- How Did Facebook Violate the Illinois Biometric Information Privacy Act?
- How Much Money Did Each Class Member Receive from the Settlement?
- Who Was Eligible to Claim, and What Was the Deadline?
- What Changes Did Facebook Have to Make as Part of the Settlement?
- How Does This Settlement Compare to Other Major Privacy Cases?
- What Does This Settlement Mean for the Future of Facial Recognition Technology?
What Is the Facebook Facial Recognition Settlement and How Did It Happen?
In January 2020, Facebook agreed to settle a major class action lawsuit for $550 million to resolve claims that it violated Illinois’s Biometric Information Privacy Act (BIPA). Six months later, in July 2020, a federal judge questioned whether the settlement amount was sufficient to fairly compensate victims, leading Facebook to increase the settlement to $650 million—one of the largest privacy settlements ever. The lawsuit stemmed from Facebook’s use of its “Tag Suggestions” feature, which automatically collected and analyzed facial recognition templates (essentially digital maps of individual faces) starting in 2011. The class action was brought on behalf of Illinois residents whose biometric data was collected without their written consent, in direct violation of BIPA, a 2008 Illinois law that specifically requires companies to obtain written permission before collecting biometric identifiers; roughly 1.6 million of those residents ultimately filed claims.
Final approval of the settlement was granted on February 26, 2021, after years of litigation led by major law firms including Edelson PC, Labaton Sucharow LLP, and Robbins Geller Rudman & Dowd LLP. This settlement is historically significant because it was the first major class action victory against a technology company over facial recognition practices under BIPA, and the $650 million fund produced some of the largest per-person payouts in consumer privacy litigation. However, the deadline to file a claim was November 23, 2020—more than four years ago. This means no new claims are being accepted, and if you didn’t file by the deadline, you cannot recover money from this settlement.

How Did Facebook Violate the Illinois Biometric Information Privacy Act?
Facebook’s violation centered on its “Tag Suggestions” feature, which used artificial intelligence to automatically create facial recognition templates when users uploaded photos. The feature worked by scanning photo uploads and comparing them against existing face templates in Facebook’s database, then suggesting which users should be tagged in those photos. The problem was that Facebook implemented this feature without first obtaining the written disclosure and consent required by BIPA—the Illinois law that, at the time, was one of the most protective biometric privacy laws in the United States. The lawsuit alleged that Facebook collected and stored facial recognition data for up to six million Illinois residents (roughly 1.6 million of whom ultimately filed settlement claims), and that the company failed to disclose how long it would keep these biometric templates or what exactly it was using them for.
BIPA requires companies to provide clear written notice before collecting biometric identifiers (a category that explicitly includes face geometry and recognition data), and to obtain affirmative written consent before collection or storage. Facebook’s position was that showing users the Tag Suggestions feature in their photo interface and allowing them to use it constituted consent, but the court disagreed. The violation was particularly serious because biometric data is permanent—unlike a password that can be changed, you cannot change your face, making the misuse of facial templates a uniquely invasive privacy harm. This is why BIPA imposes statutory damages of $1,000 per negligent violation and up to $5,000 per intentional or reckless violation, per person, making biometric privacy violations potentially very expensive for companies.
How Much Money Did Each Class Member Receive from the Settlement?
The Facebook settlement distributed $650 million among the roughly 1.6 million Illinois residents who filed valid claims, an average payout of around $400 per claimant after fees and costs. The settlement agreement guaranteed that each class member who filed a valid claim would receive at least $345, with the exact amount depending on the total number of valid claims received and on how much of the settlement fund went to administrative costs and attorneys’ fees. Only about 22% of the estimated class of nearly seven million eligible members actually filed claims, which meant the per-person payouts were higher than if everyone had claimed. This is not uncommon in class action settlements; many eligible claimants don’t file because they never hear about the settlement or find the process too complicated. For those who did file, payments ranged from the guaranteed minimum of $345 upward, depending on the distribution formula.
Some class members reported receiving anywhere from $345 to over $600. This made the Facebook biometric settlement one of the most generous privacy settlements per person in U.S. history. For comparison, many data breach settlements provide $10 to $50 per person; the Facebook payouts were many times larger on a per-person basis. However, it’s crucial to understand that these checks have largely been distributed already: the settlement claim deadline was November 23, 2020, and most payments were completed by 2021 and 2022. If you did not file a claim by the deadline, you are not eligible for any compensation, even if you were part of the class.

Who Was Eligible to Claim, and What Was the Deadline?
To be eligible for compensation from the Facebook facial recognition settlement, you needed to be a natural person who: (1) resided in Illinois at any point between November 18, 2008 and August 30, 2020; (2) had an active Facebook account at some point during that same period; and (3) had your face scanned or your photos analyzed by Facebook’s Tag Suggestions feature or facial recognition technology before August 30, 2020. You did not need to have actually used the Tag Suggestions feature yourself: if Facebook ever created and stored a face template from a photo of you, even one someone else uploaded or tagged you in, you qualified. The settlement included people whose photos were uploaded by other Facebook users, which made the class quite broad. The critical deadline for filing a claim was November 23, 2020, nearly five years ago.
This deadline has long passed, and no new claims are being accepted. If you were eligible but did not file by that date, you cannot recover compensation from the settlement. The only exception would be if you filed a timely claim that was subsequently rejected, in which case you may have had rights to appeal or resubmit. But anyone discovering now that they were eligible, or anyone who simply forgot to file, is unfortunately barred from participating. This is a crucial point: this settlement is no longer available for new claims, though it serves as an important precedent and warning for how facial recognition technology can expose companies to massive liability.
What Changes Did Facebook Have to Make as Part of the Settlement?
Beyond paying the $650 million, Facebook was required to make specific changes to how it handles facial recognition data. First, Facebook had to change its default settings so that the “Face Recognition” feature would be turned off for all users who had not explicitly chosen to turn it on. Previously, the feature was enabled by default for many users, who had to actively opt out if they didn’t want their faces recognized; the settlement flipped this to an opt-in model requiring active consent. Second, Facebook was required to delete existing facial recognition templates for any user who did not provide affirmative express consent to the storage and use of their biometric data, a massive data erasure operation that purged millions of face templates from its databases. Third, Facebook had to provide clear written notice before collecting any new biometric data in the future and to obtain written consent before doing so, bringing its practices into compliance with BIPA.
This settlement essentially rewrote how Facebook could use facial recognition in Illinois, and it set a precedent for other states and jurisdictions considering similar biometric privacy laws. However, these requirements applied primarily to Illinois users and to Facebook’s U.S. operations; the company’s global facial recognition practices outside the scope of BIPA remained largely unchanged by this particular settlement. Additionally, while the settlement required Facebook to delete face templates, it did not prevent the company from continuing to use other biometric or behavioral data, or from implementing facial recognition in other ways going forward if it obtained proper consent.

How Does This Settlement Compare to Other Major Privacy Cases?
The Facebook facial recognition settlement stands out as one of the largest privacy class action settlements in U.S. history, and it is particularly notable for being one of the first major biometric privacy victories. The per-claimant payouts, reported at roughly $345 to over $600, are substantially larger than most data breach settlements, which typically pay between $10 and $50 per person. For example, the Equifax data breach settlement, which affected hundreds of millions of Americans and exposed Social Security numbers and financial information, paid less than $200 per person on average (and most people received far less).
The Facebook settlement acknowledged that the violation—collecting and storing permanent, unchangeable biometric data without consent—represented a uniquely invasive form of privacy harm. Another key difference is that the Facebook settlement required affirmative injunctive relief, meaning Facebook had to change its business practices going forward, delete collected data, and implement new consent procedures. Not all privacy settlements include these operational changes—some only require monetary compensation. This makes the Facebook case more significant than its dollar amount alone might suggest, because it actually forced a major technology company to restructure how it handles a sensitive category of data. The settlement also occurred before biometric privacy laws became widespread, making it a trailblazer that influenced how other states approached facial recognition regulation in subsequent years.
What Does This Settlement Mean for the Future of Facial Recognition Technology?
The Facebook settlement demonstrated that companies could face extraordinary legal exposure for deploying facial recognition technology without proper consent, even if the technology itself was not inherently harmful and the company’s intentions were not malicious. Tag Suggestions was designed to make it easier for users to tag friends in photos—a helpful feature—but Facebook’s failure to obtain written consent made it illegal under BIPA and generated a $650 million liability. This lesson has reverberated across the technology industry and has influenced how companies approach biometric data collection, particularly in states with strong privacy laws. Texas and Washington have their own biometric privacy statutes, and several other states have proposed BIPA-style laws, creating a growing patchwork of regulations that technology companies must navigate.
The settlement also put facial recognition technology under sustained scrutiny from privacy advocates and regulators. While the technology itself is not going away—facial recognition is useful for security, identity verification, and other applications—the liability risk has pushed companies toward more transparent practices and stronger consent mechanisms. Some companies have voluntarily restricted their facial recognition deployments or declined to sell facial recognition tools to law enforcement, citing concerns about privacy and bias. The Facebook settlement is part of a larger movement toward “privacy by design,” where companies build privacy protections into their systems from the start rather than treating privacy as an afterthought. However, it’s also worth noting that in many parts of the world and for many applications, facial recognition has expanded without the same legal constraints that existed in Illinois.
