While credentialed professionals panic about AI eating their jobs, justice-impacted people are trying to navigate an AI-accelerated labor market that was already hostile before the robots showed up. Algorithmic hiring tools amplify the same structural exclusions that defined the old economy, concentrated job losses are hitting the exact entry-level roles returning citizens depend on, and workforce development funding has not reached the population that needs it most. The AI economy is not creating a new problem for justice-impacted workers. It is turbocharging an existing one.
The Workers Who Were Already in Crisis
The headlines have been relentless. Amazon cut 15,000 jobs. Salesforce eliminated 4,000 customer support positions because AI was already handling half the work. Workday, Accenture, and Lufthansa joined the list of companies citing artificial intelligence when announcing restructuring. In 2025 alone, AI was cited as a contributing factor in nearly 55,000 U.S. layoffs, part of 1.17 million total separations, the highest layoff volume since the pandemic year of 2020.
Worker anxiety has responded accordingly. According to Mercer’s Global Talent Trends 2026 report, which surveyed 12,000 people worldwide, employee concerns about AI-driven job loss climbed from 28% in 2024 to 40% in 2026. That is a twelve-point jump in two years, driven by real layoffs at real companies and a steady drumbeat of coverage about what the technology is capable of. The conversation has reached everyone from software engineers to paralegals to college students reconsidering their majors.
What the conversation has largely skipped is the population for whom labor market instability is not a new development, not a disruption of a stable trajectory, but the baseline condition they started from before any of this began.
Formerly incarcerated people in the United States carry a baseline unemployment rate of over 27 percent, according to the Prison Policy Initiative. That figure, the only nationally calculated rate for this population, is nearly five times the general public unemployment rate. For Black men returning from incarceration, the rate reaches 35.2 percent. For Black women, it climbs to 43.6 percent. In the best of labor markets, at the tightest point of the post-pandemic hiring boom, the structural barriers facing justice-impacted job seekers did not dissolve. They compressed slightly, then re-expanded.
The collateral consequence architecture is extensive. The Council of State Governments' Justice Center found that 72 percent of all post-release restrictions affect employment opportunities. Occupational licensing bans, background check requirements, restrictions on public benefits access, and employer screening practices combine into an interconnected system of exclusion that operates independently of whether the labor market is strong or weak. The job market was already enforcing a second sentence before AI entered the frame.
When AI Targets the Only Jobs Left
The irony of the current moment is structural. The jobs that AI is most aggressively automating are the jobs that returning citizens are most reliably able to access.
Retail cashiering faces a 65 percent automation risk, with AI-powered checkout systems expected to reach 25 percent adoption between 2026 and 2028. Manual data entry carries an automation risk of 95 percent, with AI systems now capable of processing thousands of documents per hour. Customer service roles, clerical and administrative positions, and call center work represent approximately 6.1 million jobs, according to Brookings Institution analysis, and sit among the highest-risk categories for displacement. They are also precisely the jobs that do not require occupational licenses or college degrees, and where a criminal background check is not typically a threshold barrier.
This is not a coincidence. It is a direct consequence of how the justice system’s collateral consequences have shaped what employment pathways are actually available to returning citizens. The less gatekept the job, the more accessible it is to someone with a record. The less gatekept the job, the more exposed it is to automation. The Venn diagram is nearly a circle.
The occupational licensing bans, background check barriers, and degree requirements that make higher-wage jobs inaccessible to justice-impacted workers have also, by design, concentrated returning citizens into the job categories most exposed to AI displacement. The same system that restricted access to protected employment is now leaving people exposed in unprotected employment.
The World Economic Forum's Future of Jobs Report 2025, which drew on surveys from over 1,000 employers representing more than 14 million workers, projects that 92 million roles will be displaced by 2030 while 170 million new roles emerge, a net gain of 78 million. The net number is positive. The distribution is not. As one analysis of the data put it, the jobs being destroyed and the jobs being created are not the same jobs, do not require the same skills, and are not located in the same geographies. A postal clerk in Ohio whose position is automated by intelligent mail-sorting systems does not automatically transition into an AI prompt engineer in San Francisco. The gap between those two realities is where the genuine human cost lives. For justice-impacted workers, that gap was always wider to begin with.
The Algorithm Is Not Neutral
Even when a returning citizen manages to compete for one of the remaining accessible jobs, the hiring process itself has been restructured in ways that multiply existing disadvantages.
Seventy percent of employers planned to use AI in their hiring processes by the end of 2025. These tools promise efficiency: faster resume screening, automated candidate ranking, video interview analysis. What they deliver, in practice, is discrimination at scale. If the training data reflects historical hiring patterns that excluded certain groups, the algorithm learns to replicate those patterns. If the model penalizes employment gaps, it systematically disadvantages people who were incarcerated. If it flags non-linear career histories, it deprioritizes people whose working lives were interrupted by the justice system. If it uses zip code or educational institution as a signal, it re-encodes every structural disadvantage that created the criminal justice pipeline in the first place.
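To see the mechanism concretely, consider a deliberately simplified sketch. Nothing here reflects any real vendor's model; the feature names, weights, and candidates are all invented. The point is only that a scorer fit to historical hiring outcomes that rewarded unbroken employment and "low-risk" zip codes will reproduce those preferences mechanically:

```python
# Toy resume scorer. The weights are assumed to have been fit to
# historical hiring outcomes; every feature name and value here is
# hypothetical and illustrative, not any vendor's actual model.
import math

WEIGHTS = {
    "years_experience":     0.40,
    "employment_gap_years": -0.90,  # penalizes time out of the labor market
    "high_risk_zip":        -1.20,  # geography as a proxy for risk
    "bias":                  0.50,
}

def screen_score(candidate: dict) -> float:
    """Return a 0-1 'advance to interview' score for a candidate."""
    z = WEIGHTS["bias"]
    z += WEIGHTS["years_experience"] * candidate["years_experience"]
    z += WEIGHTS["employment_gap_years"] * candidate["employment_gap_years"]
    z += WEIGHTS["high_risk_zip"] * candidate["high_risk_zip"]
    return 1 / (1 + math.exp(-z))  # logistic squash to a probability-like score

# Two candidates identical except for a seven-year gap and a zip code,
# i.e., the typical footprint incarceration leaves on a work history.
a = {"years_experience": 6, "employment_gap_years": 0, "high_risk_zip": 0}
b = {"years_experience": 6, "employment_gap_years": 7, "high_risk_zip": 1}

print(f"candidate A: {screen_score(a):.2f}")  # ~0.95
print(f"candidate B: {screen_score(b):.2f}")  # ~0.01
```

Candidate B matches candidate A on everything the job actually requires; the gap and the zip code alone move the score from roughly 0.95 to roughly 0.01. A learned model does not need a "conviction" field to screen out returning citizens. The proxies are sufficient.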
The legal landscape is beginning to recognize this. In May 2025, a federal court in California certified a nationwide collective action in Mobley v. Workday, Inc., establishing that AI hiring vendors can be held directly liable for discriminatory outcomes under federal employment discrimination law, even without a contractual relationship with job applicants. The court was direct: algorithmic gatekeeping that produces discriminatory results is not exempt from civil rights law simply because a machine executed the decision.
The Mobley v. Workday court held that Workday’s role in the hiring process was no less significant because it operated through artificial intelligence rather than a human reviewer. Drawing a distinction between algorithmic and human decision-makers, the court warned, would potentially gut anti-discrimination laws in the modern era. That reasoning applies with equal force to any protected class, including those facing employment discrimination based on conviction history.
AI-powered video interview tools have also drawn scrutiny. In March 2025, the ACLU filed a discrimination complaint against HireVue and Intuit on behalf of an Indigenous, deaf job applicant who was rejected by an AI interview system and given feedback about her need to practice active listening. The complaint alleged that the tool was inaccessible to deaf applicants and likely to underperform when evaluating people who speak dialects like Native American English, which carry different speech patterns and cadences than the training data anticipated. The EEOC’s own AI and Algorithmic Fairness Initiative, which aimed to develop guidance on these exact issues, was subsequently closed by the Trump administration following an executive order directing agencies to deprioritize disparate-impact enforcement.
For justice-impacted workers, the protection gap is compounding. The tools are discriminating. The regulatory apparatus designed to catch algorithmic discrimination has been deliberately weakened. And none of the pending litigation names conviction history as a protected class, because it is not one under federal law.
Put together: AI hiring tools that penalize employment gaps, flag non-linear work histories, or use geography as a proxy for risk can systematically screen out returning citizens, and no federal enforcement mechanism is specifically designed to catch it.
The Workforce Pipeline That Does Not Include Them
The policy response to AI-driven displacement has focused on workforce development: upskilling programs, AI literacy training, community college partnerships, employer-led retraining initiatives. The United States spends approximately 0.1 percent of GDP on active labor market policy, ranking second to last among OECD countries, but a significant volume of corporate and philanthropic investment has moved into this space in the past two years. Google’s AI Works for America initiative, launched in July 2025, targets workers and small businesses. Federal executive orders have mandated AI education task forces and K-12 curriculum development. State workforce agencies are developing AI training pipelines.
What these programs share, almost universally, is a design assumption: the target participant has internet access, a stable address, freedom of movement, and the ability to engage with institutional programs on a voluntary basis. That is not the reentry profile.
People transitioning out of incarceration face housing instability, supervision requirements that constrain their schedules, digital literacy deficits accumulated over years without internet access, and the cognitive and emotional load of reentry itself. Less than 4 percent of formerly incarcerated people hold a college degree. Twenty-five percent do not have a GED, which remains a minimum threshold on a significant share of job applications. The skills gap that AI workforce training is designed to close is several rungs above where many returning citizens are starting.
Google’s earlier Career Readiness for Reentry program, which embedded digital skills training inside correctional facilities through nonprofit partnerships, demonstrated that placement-inside-the-system approaches can work. Seventy-five percent of program participants reported being employed or enrolled in school by the end of the program. But the scale of that initiative, and the number of programs like it currently operating, is nowhere near commensurate with the 650,000 people who transition out of incarceration annually in the United States. Workforce development, as TechEquity Collaborative’s November 2025 analysis concluded, is being designed for workers who are already in the workforce. The people furthest from the labor market require a different intervention architecture.
Workforce training commitments alone are an insufficient response to AI displacement. They need to be paired with broader economic policies that address power differentials, structural access barriers, and the social safety net gaps that make displacement catastrophic for some populations and merely inconvenient for others. For justice-impacted workers, training without structural change is a ladder built above where people are standing.
What Closing the Gap Actually Requires
The policy interventions that would meaningfully improve outcomes for justice-impacted people in the AI economy are not complicated to describe. They are complicated to fund and politically difficult to prioritize.
Existing ban-the-box legislation delays criminal history inquiries until after an initial interview. AI screening tools operate before interviews, making traditional ban-the-box protections structurally inapplicable. Legislation must be updated to explicitly prohibit algorithmic screening that penalizes employment gaps, flags conviction-correlated proxies, or uses data patterns that function as criminal history surrogates. The Colorado AI Act, which takes effect in June 2026 and classifies employment AI as high-risk, provides a regulatory template. Most states have not followed.
New York City’s existing requirement for annual independent bias audits of automated employment decision tools is the right structural approach. Audits should be extended to include disparate impact analysis on people with conviction histories. Mobley v. Workday established that vendors, not just employers, bear liability for discriminatory algorithmic outcomes. That liability structure should be codified at the state level to create enforcement teeth independent of federal action.
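What extending an audit to conviction history would involve is not mysterious. A minimal sketch, assuming the auditor has per-applicant screening outcomes and a conviction-history flag (both column names invented here for illustration), computes selection rates by group and the impact ratio that the EEOC's four-fifths rule of thumb treats as a red flag:

```python
# Minimal sketch of the impact-ratio calculation a bias audit would run,
# extended to conviction history. The data and column names are hypothetical.
from collections import defaultdict

def impact_ratios(outcomes: list[dict], group_key: str) -> dict:
    """Selection rate per group, divided by the highest group's rate.
    Under the EEOC four-fifths rule of thumb, a ratio below 0.8 is
    treated as evidence of adverse impact."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for row in outcomes:
        total[row[group_key]] += 1
        passed[row[group_key]] += row["advanced"]  # 1 if passed screening
    rates = {g: passed[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit data: screening outcomes with a conviction flag.
audit = (
    [{"has_record": False, "advanced": 1}] * 300
    + [{"has_record": False, "advanced": 0}] * 700
    + [{"has_record": True, "advanced": 1}] * 30
    + [{"has_record": True, "advanced": 0}] * 270
)

for group, ratio in impact_ratios(audit, "has_record").items():
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"has_record={group}: impact ratio {ratio:.2f} ({flag})")
```

In this invented sample, applicants with records advance at one-third the rate of applicants without, an impact ratio of 0.33, well below the 0.8 threshold. The calculation is the same one New York City's audits already run for race and sex; adding a conviction-history column is an extension of method, not a new methodology.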
The labor market that returning citizens are entering is an AI-mediated one. Preparing people for reentry without preparing them for that reality is preparing them for a job market that no longer exists. Correctional education programs should include AI literacy, digital skills training, and applied technology coursework as standard components, not pilot initiatives dependent on philanthropic interest. Google’s Grow with Google Career Readiness for Reentry program demonstrated proof of concept. The question is whether government will provide the infrastructure philanthropists briefly funded.
The dismantling of the EEOC’s AI and Algorithmic Fairness Initiative and the executive order deprioritizing disparate-impact enforcement have created a federal enforcement vacuum. State attorneys general and state civil rights agencies can fill that vacuum. Algorithmic discrimination should be codified as a distinct enforcement category, with standing for civil rights organizations to bring pattern-and-practice claims against vendors whose tools systematically exclude protected and justice-impacted populations. The legal architecture exists. It requires political will to deploy it.
The Worry That Arrives on Schedule
There is something worth naming in the current moment of AI anxiety. The workers who are most visible in the panic, who are generating the op-eds and the podcasts and the congressional testimony, are workers whose relationship to the labor market has historically been characterized by security and upward mobility. When software engineers lose work to AI, it becomes a policy emergency. When customer service representatives lose work to AI, it becomes a quarterly earnings call talking point. When returning citizens cannot access either of those jobs because the algorithm decided their seven-year employment gap made them a poor fit, it does not make the news cycle at all.
Justice-impacted people were not included in the design of the economy that existed before AI. There is no reason to assume they will be included in the design of the economy being built with it, unless someone makes that inclusion a requirement rather than a hope.
The data on what works is not ambiguous. Employment is among the strongest predictors of successful reentry and recidivism reduction. Employers who hire returning citizens report higher retention rates than the general workforce. A Northwestern University study found that workers with criminal records are less likely to voluntarily quit than their peers. The business case exists. The moral case exists. The policy infrastructure to act on either of them is what is currently missing.
While credentialed workers are discovering, for the first time, what it feels like to have a labor market that is not designed around their needs, justice-impacted workers have been living in that condition for the entirety of their working lives. The AI economy did not create that experience. It just made it easier to ignore by directing everyone else’s attention elsewhere.