Allegheny County’s Allegheny Family Screening Tool (AFST), an AI-based risk assessment algorithm used in child welfare decisions since 2016, faces serious allegations of racial bias and disability discrimination. While no class action settlement has been finalized as of March 2026, civil rights advocates, researchers, and the Justice Department have raised urgent concerns about how the algorithm disproportionately flags Black children and families for mandatory investigations and foster care placement. Families affected by these algorithmic decisions are filing complaints with the Department of Justice, and organizations like the ACLU are documenting systematic harms.
Table of Contents
- What is the Allegheny Family Screening Tool and How Does It Work?
- Documented Racial Bias in Algorithmic Risk Screening
- Disability Discrimination and ADA Concerns
- How Families Are Harmed When Algorithms Decide Welfare Outcomes
- The Legal and Regulatory Response to Algorithmic Bias
- What Organizations Are Saying About the Allegheny Tool
- What Comes Next for Families and the Future of Algorithmic Child Welfare
What is the Allegheny Family Screening Tool and How Does It Work?
The Allegheny Family Screening Tool is an artificial intelligence system designed to screen incoming child welfare reports and predict which children are at highest risk of being placed in foster care within two years. Deployed by Allegheny County’s Department of Human Services since 2016, the AFST processes information from multiple databases to assign risk scores to children and families. The system pulls data from child welfare history, birth records, Medicaid enrollment, substance abuse treatment records, mental health services, jail records, and probation data. This integration of multi-agency data builds comprehensive profiles of families, but it also creates opportunities for historical bias to be embedded and amplified through the algorithm’s predictions.
Unlike human caseworkers who apply judgment and context, the AFST generates a numerical risk score intended to help prioritize investigation resources. However, the tool’s predictive power depends entirely on the quality and composition of its training data. If the data reflects historical discrimination—such as over-policing in certain neighborhoods, disparate mental health diagnoses, or unequal substance abuse treatment access—the algorithm will learn and perpetuate those patterns. The county promoted the AFST as a way to reduce investigator bias and improve efficiency, but critics argue that automating biased historical patterns actually makes discrimination harder to see and challenge.
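To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of how a linear risk score over linked agency records behaves. The feature names and weights below are invented for illustration and are not the county’s actual model; the point is only that a family with more recorded system contact scores higher even when present-day circumstances are identical.

```python
# Hypothetical sketch, not the actual AFST: a linear risk score over features
# drawn from linked agency databases. Feature names and weights are invented.

FEATURE_WEIGHTS = {
    "prior_cps_referrals": 0.9,        # child welfare history
    "parent_jail_record": 0.8,         # jail and probation data
    "substance_abuse_treatment": 0.7,  # treatment records
    "mental_health_treatment": 0.6,    # behavioral health services
    "medicaid_enrollment": 0.3,        # socioeconomic proxy
}

def risk_score(family_record: dict) -> float:
    """Sum weighted indicators pulled from the family's linked records."""
    return sum(weight * family_record.get(feature, 0)
               for feature, weight in FEATURE_WEIGHTS.items())

# Two families with the same present-day circumstances diverge in score purely
# because one has more contact recorded in historical systems.
family_a = {"prior_cps_referrals": 1, "parent_jail_record": 1, "medicaid_enrollment": 1}
family_b = {"medicaid_enrollment": 1}

print(risk_score(family_a))  # 2.0 -> likely screened in
print(risk_score(family_b))  # 0.3 -> likely screened out
```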

Documented Racial Bias in Algorithmic Risk Screening
Research from Carnegie Mellon University found that the Allegheny Family Screening Tool flagged Black children for mandatory neglect investigations at significantly higher rates than white children with similar circumstances. The analysis revealed that even after controlling for actual risk factors, the algorithm assigned disproportionately high risk scores to Black families. This isn’t a matter of slightly higher rates—the disparities are substantial enough that they cannot be explained by differences in actual harm or danger. A separate analysis by the Human Rights Data Analysis Group (HRDAG) confirmed these findings, demonstrating systematic bias in how the tool processes information about Black families.
The mechanism of harm is particularly concerning because the AFST’s recommendations carry significant weight in caseworker decision-making. A high algorithmic risk score can lead to more aggressive investigations, increased home visits, and a higher likelihood of child removal. For Black families already subject to intensive policing and surveillance, an algorithm that flags them at higher rates compounds existing inequities in the child welfare system. One example of this compounding effect: because the algorithm learned to weight incarceration history heavily, a parent with a prior jail record may receive a markedly higher score than an otherwise similar parent without one, and incarceration rates are themselves shaped by racial disparities in law enforcement, so this weighting falls disproportionately on Black families.
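A small simulation can illustrate this point. The weights, rates, and threshold below are assumptions invented for the example, not figures from the AFST; the simulation only shows that when a heavily weighted input such as recorded incarceration history appears at different rates across groups, screen-in rates diverge even though the underlying risk is drawn from the same distribution.

```python
# Hypothetical simulation with invented numbers: identical underlying risk in
# both groups, but incarceration history is recorded at different rates and is
# heavily weighted, so screen-in rates diverge.
import random

random.seed(0)
JAIL_WEIGHT = 0.8                                        # assumed heavy weight
THRESHOLD = 0.9                                          # assumed screen-in cutoff
RECORDED_JAIL_RATE = {"group_a": 0.30, "group_b": 0.10}  # illustrative disparity

def simulate_flag_rate(group: str, n: int = 10_000) -> float:
    flagged = 0
    for _ in range(n):
        underlying_risk = random.uniform(0, 1)           # same for both groups
        has_jail_record = random.random() < RECORDED_JAIL_RATE[group]
        score = underlying_risk + JAIL_WEIGHT * has_jail_record
        flagged += score >= THRESHOLD
    return flagged / n

for group in RECORDED_JAIL_RATE:
    print(group, round(simulate_flag_rate(group), 3))
# group_a is flagged roughly twice as often as group_b despite identical risk.
```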
Disability Discrimination and ADA Concerns
Beyond racial bias, civil rights advocates have identified potential violations of the Americans with Disabilities Act embedded in the AFST’s design. The algorithm incorporates disability-related data, including mental health diagnoses and substance abuse treatment history, as risk factors predicting foster care placement. Disability rights organizations argue that using disability status as a proxy for parental inadequacy conflates disability with danger, perpetuating the harmful stereotype that disabled people are unfit parents. This is particularly concerning because many disabilities, including mental health conditions, are diagnosed and treated at different rates across racial and socioeconomic groups, meaning the algorithm amplifies both racial and disability-based discrimination simultaneously.
The Justice Department’s Civil Rights Division has been examining the AFST specifically for these concerns. Families with disabilities—parents with depression, anxiety, autism, mobility disabilities, or past substance abuse treatment—may be flagged at higher rates not because they pose actual danger to their children, but because the algorithm has learned to weight disability-related markers as risk signals. However, if a parent’s disability is being managed effectively with treatment or accommodations, the presence of that disability alone should not trigger child welfare investigations. The distinction between a condition that requires monitoring and a condition that indicates actual harm is precisely where algorithmic decision-making often fails.

How Families Are Harmed When Algorithms Decide Welfare Outcomes
The impact of algorithmic bias on families is immediate and severe. When the AFST flags a family as high-risk, caseworkers conduct more frequent home visits, ask more invasive questions, and are more likely to recommend removing children from the home. Families don’t receive explanations for why they’ve been flagged or what specific factors drove the algorithm’s decision, making it nearly impossible to contest the assessment. A parent with a resolved mental health issue from years past may not even know that this information factored into an investigation decision. Meanwhile, the stress of intensive scrutiny and the threat of family separation itself can harm children and destabilize families.
Consider a concrete scenario: a single Black mother with one prior depression diagnosis, who has since completed treatment and remained stable for five years, becomes the subject of a report over her child’s minor injury from a playground accident. The AFST scores her as high-risk based on her mental health history and economic indicators, even though her current circumstances are stable. Caseworkers conduct intensive investigations, interview teachers and neighbors, and document everything in her file. Whether or not the case is substantiated, the investigation itself has caused family trauma, and the mother’s file now contains a record that will be referenced in future assessments. Over time, algorithmic decisions compound: a family flagged once is more likely to be flagged again, regardless of actual changes in circumstances.
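That feedback loop can be sketched in a few lines. Again, the weights and threshold are assumptions for illustration only: once a flag creates an investigation record, that record feeds into the next screening, so later referrals keep crossing the threshold even after circumstances improve.

```python
# Hypothetical feedback-loop sketch with invented weights: an investigation
# record created by one flag becomes an input to every later screening.

INVESTIGATION_WEIGHT = 0.5   # assumed weight on prior investigations
THRESHOLD = 1.0              # assumed screen-in cutoff

def screen(current_risk: float, prior_investigations: int) -> bool:
    score = current_risk + INVESTIGATION_WEIGHT * prior_investigations
    return score >= THRESHOLD

prior_investigations = 0
current_risk = 1.1           # a single elevated moment triggers the first flag
for referral in range(1, 5):
    flagged = screen(current_risk, prior_investigations)
    print(f"referral {referral}: priors={prior_investigations}, flagged={flagged}")
    if flagged:
        prior_investigations += 1
    current_risk = 0.6       # circumstances stabilize, but the record persists
```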
The Legal and Regulatory Response to Algorithmic Bias
The Justice Department’s Civil Rights Division began examining the Allegheny Family Screening Tool in response to complaints from families and civil rights organizations. The investigation focuses on whether the tool discriminates on the basis of race and disability in violation of federal civil rights laws, including the Americans with Disabilities Act. The ACLU has also documented concerns and is pressing Allegheny County to conduct an independent audit of the algorithm’s bias and to implement safeguards against discriminatory outcomes. However, as of March 2026, no formal class action lawsuit has been filed and no settlement has been reached; there is only ongoing investigation and advocacy pressure. One important limitation to understand: even if a lawsuit were filed, proving discrimination in algorithmic systems is legally complex.
Plaintiffs must show either intentional discrimination or that the algorithm has a disparate impact on protected groups. Disparate impact claims against algorithmic systems are relatively new in civil rights law, and courts are still developing standards for evaluating harms from AI systems. Additionally, companies and agencies often argue that if a system makes decisions based on “neutral” factors like prior records or diagnoses (rather than explicitly mentioning race), the system cannot be discriminatory. This legal framework often fails to account for the fact that historical records themselves reflect discriminatory practices. Without clear legal standards and successful precedents, families harmed by algorithmic bias have limited recourse.

What Organizations Are Saying About the Allegheny Tool
The ACLU has issued detailed documentation of how policy decisions hidden in the algorithm are threatening families throughout Pennsylvania. Disability rights advocates have raised ADA violation concerns. Researchers at major universities have confirmed the racial bias findings through rigorous statistical analysis. What’s notable across all of these critiques is that they focus not on whether the AFST’s designers had racist intentions, but on whether the system produces racist outcomes—and the evidence suggests it does.
The institutional response has been slow; Allegheny County has acknowledged some concerns but has not fundamentally changed how the tool operates or how its recommendations are weighted in caseworker decisions. An important caveat: even organizations calling for change generally acknowledge that some systematic screening tool is probably necessary in child welfare systems where caseworkers are overwhelmed with reports. The debate is not whether to use algorithms, but how to design, audit, and implement them responsibly. This means using less discriminatory data sources, regularly testing for disparate impact, allowing families to access and challenge algorithmic decisions, and ensuring that human judgment remains central rather than letting algorithmic scores drive decisions.
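One way to picture what “regularly testing for disparate impact” could look like is a simple rate comparison across groups. The counts below are invented, and the 0.8 (“four-fifths”) benchmark is borrowed from employment-selection guidelines as a rough rule of thumb, not a standard Allegheny County is known to apply.

```python
# Hypothetical audit sketch with invented counts: compare screen-in rates across
# groups and flag large gaps. The 0.8 benchmark is a rule of thumb borrowed from
# employment-selection guidance, used here only for illustration.

def screen_in_rate(flagged: int, total: int) -> float:
    return flagged / total

def impact_ratio(rate_comparison: float, rate_reference: float) -> float:
    """Reference group's rate divided by the comparison group's rate; values
    well below 1.0 mean the comparison group is screened in far more often."""
    return rate_reference / rate_comparison

rate_black = screen_in_rate(flagged=320, total=1000)   # invented counts
rate_white = screen_in_rate(flagged=180, total=1000)   # invented counts

ratio = impact_ratio(rate_black, rate_white)
print(f"Black screen-in rate: {rate_black:.2f}")
print(f"White screen-in rate: {rate_white:.2f}")
print(f"Impact ratio:         {ratio:.2f}  (worth review if below ~0.80)")
```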
What Comes Next for Families and the Future of Algorithmic Child Welfare
Going forward, the most likely outcomes are either sustained pressure leading to modification of the algorithm or, in the worst case, Allegheny County continuing its current practices while investigation and potential litigation move slowly through the federal system. Some states and counties have begun implementing algorithmic impact assessments and bias audits before deploying decision-making tools, and advocates are pushing Allegheny County toward similar requirements.
The lesson from this case is that technology presented as “objective” or “neutral” requires ongoing scrutiny and that families deserve transparency and meaningful recourse when algorithmic decisions affect their lives. If you believe you’ve been unfairly assessed by the AFST, if you’ve experienced family separation or intensive investigation following an algorithmic flag, or if you’re concerned about algorithmic bias in child welfare, organizations like the ACLU and local legal aid societies are beginning to document these cases. While a formal class action settlement pathway remains uncertain, individual families are joining complaint processes with the Justice Department and supporting regulatory pressure for algorithm reform.
