Legal Practices Bought the AI. They Did Not Buy the Infrastructure It Needs to Work.
Law firms, courts, and public agencies spent two years acquiring automation tools. The tools function. The records underneath them do not. What follows is an analysis of how institutional data infrastructure actually fails, why software cannot fix it, and what a real diagnostic process requires. The short version: you cannot run a machine learning model on a filing system organized by institutional memory and goodwill.
- Most AI implementation failures in legal and public sector settings are not software failures. They are data failures that the software purchase made newly visible.
- The most common single point of failure in institutional records is not a broken system. It is a person: one staff member whose departure would make a critical category of records uninterpretable.
- Inconsistent record-keeping is not just an operational inconvenience. In regulated environments, it is a compliance liability that surfaces under audit, FOIA, and discovery.
- The fix is not a better interface. It is a foundational audit of what records the institution actually has, where they live, and who is the only person who can currently find them.
- Institutions that cannot map their own data are not ready for AI. In many cases, they are not ready for a well-organized public records request.
The Software Is Installed. It Is Waiting.
At some point in 2024 or 2025, a procurement decision was made. A vendor gave a demo. Someone came back from a conference persuaded that the organization needed an AI-powered document management system, a predictive analytics platform, or a compliance automation tool. The budget was approved. The contract was signed. The onboarding call happened.
The software is installed. It is waiting.
It is waiting because the documents it was supposed to analyze are scanned PDFs from 2011 that were never run through OCR. It is waiting because the case data it was supposed to pull from lives in three systems that do not communicate with each other, entered under naming conventions that made sense to one paralegal in 2016 and nobody else since. It is waiting because the matter-type field contains values including “civil,” “Civil,” “civ,” “civil matter,” “CM,” “N/A,” and, in eleven records, the word “misc” followed by initials that correspond to a staff member who left in 2022.
The status field, in at least one review Clutch Justice has been made aware of, contained the entry: “ask Denise.”
Denise retired in March.
The institutional AI problem of 2026 is not a technology problem. It is a records problem that technology purchasing decisions have made impossible to ignore. Organizations that skipped the foundational step of auditing their data infrastructure are now discovering, at considerable expense, that they skipped it.
This is not sector-specific. It appears in legal aid organizations trying to deploy intake automation on case records that predate their current database by a decade. It appears in county agencies attempting to integrate predictive tools on compliance data that was never standardized across field offices. It appears in mid-size law firms that purchased contract review AI and are now confronting the reality that a meaningful portion of their historical agreements are scanned sideways, filed under the names of partners who no longer work at the firm, in a shared drive that IT has flagged as over its storage limit since 2023.
The technology acquisition happened years ahead of the institutional readiness to use it. The gap between those two things is not a software problem. It is an accountability problem, and it is the same kind of problem this platform documents in Michigan courts and public agencies every week.
What Institutional Data Failure Actually Looks Like
The phrase “messy data” understates what is happening in most of these environments. Messy implies a temporary state of disorder. What most legal and public sector institutions have is structural fragility: records built incrementally over decades, by different staff under different leadership, with no governing logic that survived any of the transitions between them. Several failure patterns appear consistently enough to name them.
The Person Who Is the System
Every institution has at least one. This is the staff member who has maintained a critical operational record for so long that the process exists entirely in their head. The record is accurate. It is updated on a schedule only they know. It uses categories and shorthand only they can interpret. When asked to describe the process, they describe it as straightforward. When they leave, the organization discovers that “straightforward” was doing an enormous amount of work in that sentence.
In multiple Michigan public agency contexts reviewed through Clutch Justice investigative work, critical compliance records were maintained in personal Google Drive folders belonging to individual staff members, not in any institutional system accessible to others. When those staff members departed, the records either left with them or became inaccessible pending IT escalation. The institutions involved learned this when they needed the records for an external request they could not defer.
The Spreadsheet That Runs Everything
This spreadsheet was built by someone who has not worked at the organization in years. It was passed from person to person with no documentation of its logic, no version control, and no backup protocol. It contains formulas that reference tabs that no longer exist. It has columns that have not been updated in three years but that everyone is afraid to delete because no one is certain they are not load-bearing. It is the single authoritative record for something that should never be tracked in a spreadsheet.
It is maintained by one person. That person has been making noises about retirement since 2024.
The organization bought an AI tool to automate the reporting that currently runs through this spreadsheet. The AI tool cannot read it. This is because the spreadsheet was never designed to be machine-readable. It was designed to be interpretable by the person who built it, communicated to successors through a thirty-minute verbal walkthrough, and held together through years of manual corrections that never made it into any documentation anyone else can find.
The spreadsheet is not the problem. The spreadsheet is the solution that accumulated in the absence of a system. It exists because at some point the institution decided that having a capable person manage a critical record was sufficient governance. That was a reasonable decision at the time. It is also why the institution now has a single point of failure with a pension timeline and no succession plan, in a regulated environment, maintaining records it may be legally required to produce on short notice.
The Taxonomy Nobody Designed
Case types, matter categories, violation codes, service classifications: every legal and public sector organization tracks some version of these. In functional records systems, these categories are standardized, documented, and applied consistently. In most institutions reviewed through forensic systems work, they are the accumulated product of a decade of staff preferences, three software migrations, and four reorganizations, applied inconsistently across thousands of records, with no crosswalk document explaining what any legacy category maps to now.
Running analytics on this dataset produces outputs that look precise. They are measuring the inconsistency, not the underlying reality. The AI model is not hallucinating. It is accurately reporting on what it was handed.
The Compliance Record That Is Not a Record
The institution is required, by statute, contract, or grant obligation, to maintain certain records. Those records technically exist. They exist as emails in someone’s inbox. They exist as notes in a shared folder with no naming convention. They exist as entries in a system that was deprecated two platforms ago and can no longer export its own data. They exist as the institutional memory of people who are no longer employed there.
In public records disputes involving Michigan courts and agencies covered by Clutch Justice, the most common documentation failure was not falsification. It was the absence of any standardized protocol for what constituted a record, who was responsible for maintaining it, and how it was supposed to be retained. The failure was procedural at the foundation, long before any specific record became contested.
When a FOIA request arrives, or a compliance audit, or discovery in active litigation, the institution discovers that “we kept records” and “we can produce records” are two different statements with very different operational implications. The distance between them is where liability lives.
Clutch Justice consulting maps data infrastructure, traces process ownership, and surfaces the gap between what institutions document and what they actually do. If the records are the problem, this is where you start.
See Consulting Tracks →
The Diagnostic: What to Actually Do About It
Organizations that recognize themselves in the patterns above have a concrete remediation path. It does not begin with another software purchase. It begins with a structured audit of what the institution actually has, where it lives, who is responsible for it, and what the failure mode is when any one of those things changes. The following is the minimum viable version of that diagnostic.
Step One: Data Inventory and Ownership Mapping
Identify every critical record category the institution maintains, formal and informal. For each one, document the actual source of the data, not the system it is supposed to live in but where it actually lives. Document who maintains it, the update frequency, the format, and the access controls. This exercise almost always produces records that exist outside any institutional system: in a personal inbox, a shared drive with no governance, a deprecated application IT was never told to decommission, or a format that has not been natively readable since a migration five years ago.
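One way to keep this inventory honest is to make it machine-readable from the start. The sketch below is illustrative only: the field names and the `RecordCategory` structure are assumptions for the example, not a standard schema, but the core check is the one the step describes, flagging every record whose actual home differs from its nominal one.

```python
from dataclasses import dataclass


@dataclass
class RecordCategory:
    """One row in the data inventory. Field names are illustrative, not a standard."""
    name: str               # e.g. "matter intake log"
    nominal_system: str     # where policy says the record lives
    actual_location: str    # where it actually lives
    owner: str              # the person who maintains it
    backup_owner: str       # empty if nobody else can maintain it
    update_frequency: str   # e.g. "daily", "ad hoc", "unknown"
    machine_readable: bool  # can software parse it without a human interpreter?


def outside_institutional_systems(inventory: list) -> list:
    """Flag categories whose real home differs from their nominal system --
    the personal-inbox and shadow-spreadsheet cases the inventory exists to find."""
    return [c for c in inventory if c.actual_location != c.nominal_system]
```

Running the flag function over a real inventory is where the surprises surface: every row it returns is a record the institution believes lives in a governed system and does not.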
Step Two: Dependency Analysis
For each record category, model the failure mode. If the person responsible for this record left the organization on Friday, what would be inaccessible, lost, or uninterpretable by Monday morning? Then apply the harder question: what regulatory, contractual, or legal obligation depends on the institution’s ability to produce that record on demand? The intersection of those two answers is the actual risk map. Most organizations have never produced it.
Any critical record category with a single identified owner and no documented backup process is an unacceptable operational risk in a regulated environment. That is not a staffing observation. It is a data governance finding, and it should be treated as one.
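The intersection the step describes can be computed directly from the inventory. A minimal sketch, with invented field names, that crosses single-owner records against the obligations that require producing them:

```python
def risk_map(categories: dict, obligations: dict) -> dict:
    """Intersect single points of failure with production obligations.

    categories: name -> {"owner": str, "backup_documented": bool}
    obligations: name -> list of legal/contractual duties requiring that record
    Returns the categories that are both fragile and legally load-bearing.
    """
    findings = {}
    for name, meta in categories.items():
        # A single identified owner with no documented backup process
        single_point = bool(meta["owner"]) and not meta["backup_documented"]
        duties = obligations.get(name, [])
        if single_point and duties:
            findings[name] = {"owner": meta["owner"], "at_stake": duties}
    return findings
```

The output is the risk map most organizations have never produced: every entry is a record the institution is obligated to produce on demand and currently cannot, if one person is unavailable.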
Step Three: Taxonomy and Consistency Audit
Pull a representative sample from every major data category and assess consistency. Are case types applied using the same controlled vocabulary across the entire dataset? Are dates formatted uniformly? Do status fields use a defined set of values or do they use whatever the entering staff member felt like typing that day? Are records from past system migrations complete, or did the migration silently drop fields, truncate entries, or introduce encoding errors that have been propagating through the data ever since? The answers determine whether the data is analytically usable. There is no middle category.
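Once the sample is extracted, the consistency check itself is mechanical. A sketch of the value-profiling step, using the matter-type variants described earlier as example data; the controlled vocabulary here is an assumption the audit has to produce, not a given:

```python
from collections import Counter


def profile_field(values, controlled_vocabulary):
    """Count raw spellings in a field and flag everything outside the vocabulary."""
    counts = Counter(v.strip() for v in values)
    off_vocabulary = {v: n for v, n in counts.items() if v not in controlled_vocabulary}
    return counts, off_vocabulary


# Example: the matter-type field described above.
raw = ["civil", "Civil", "civ", "civil matter", "CM", "N/A", "misc DJ"]
vocab = {"civil", "criminal", "family", "probate"}
counts, strays = profile_field(raw, vocab)
# Six of the seven distinct values fail the vocabulary check; only "civil" passes.
```

Repeating this over every major field in the sample produces the crosswalk worksheet: each stray value either maps to a canonical category, or it becomes a documented finding that the record is not analytically usable as entered.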
Step Four: Process Documentation Gap Assessment
Every critical process in a well-governed institution has written documentation sufficient for a qualified person to perform it without institutional memory. Identify which of the institution’s critical processes have this documentation. For every process that does not, note who is currently carrying it, how long they have been with the organization, and what their retirement timeline looks like. Then schedule the uncomfortable conversation about knowledge transfer before the institution is forced to have it as an emergency response to a vacancy.
Step Five: AI and Automation Readiness Scoring
Only after completing the prior four steps is an institution in a position to honestly assess what it can automate. Map each proposed use case against the data infrastructure it would require. Apply the findings from the dependency analysis and the taxonomy audit to each one. The result is a readiness score that tells the organization which use cases are viable now, which require remediation first, and which should not be attempted until the foundational records infrastructure is rebuilt. This is the step most organizations skipped. This is why most institutional AI implementations are underperforming.
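Under the same invented schema, a readiness score is just the fraction of a use case's required record categories that passed the earlier audits. The thresholds below are placeholders for illustration, not a methodology:

```python
def readiness(use_case_requirements, audit_results):
    """Score a proposed automation use case against audited data categories.

    use_case_requirements: record category names the use case depends on
    audit_results: category name -> bool (passed the inventory, dependency,
                   and consistency audits)
    Returns (score, verdict). Thresholds are illustrative placeholders.
    """
    passed = sum(audit_results.get(c, False) for c in use_case_requirements)
    score = passed / len(use_case_requirements)
    if score == 1.0:
        verdict = "viable now"
    elif score >= 0.5:
        verdict = "remediate first"
    else:
        verdict = "rebuild foundations before attempting"
    return score, verdict
```

The value of even a crude score like this is that it forces the procurement conversation to reference specific audited records rather than vendor demo data.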
The honest version of this readiness assessment usually produces a result that no one in the room wanted to hear: the organization is two to four years away from being able to use the tool it purchased eighteen months ago. The options at that point are to do the foundational work, to continue paying for underutilized software while the underlying problem compounds, or to find a consultant willing to tell leadership what it wants to hear. Only one of those options resolves the problem.
Why Clutch Justice Covers This
Institutional data governance is a Clutch Justice issue because it is an accountability issue. Courts that cannot produce consistent records invite challenges to their own proceedings. Public agencies with undocumented process ownership create conditions where procedural violations are undetectable until they surface in litigation. Legal organizations with broken intake records cannot demonstrate compliance with the grant obligations that fund their operations.
The investigative work this platform does on Michigan courts and agencies is document work. It is tracing record chains, identifying gaps between what official records say happened and what the supporting documentation shows, and mapping the distance between procedure and practice. The forensic consulting work applies the same methodology to organizations that want to find those gaps before someone else does.
The difference between a procedural failure that becomes a public records dispute and one that gets caught and corrected internally is almost always the quality of the institution’s own record-keeping. Organizations with coherent, traceable, consistently maintained records can identify problems early. Organizations without that infrastructure find out about their problems from a journalist, an auditor, or opposing counsel. The timing tends to be less convenient than the alternative.
Institutions that cannot account for their own data cannot be held accountable for their own processes. That is not a statement about intent. It is a statement about structural opacity. Opacity does not protect institutions long-term. It determines who discovers the failure and under what circumstances.
Why do institutional AI implementations fail?
Almost never because of the software. Almost always because the records the software is supposed to process are inconsistently structured, siloed across incompatible systems, or maintained by one person who has been quietly managing a critical process in a personal spreadsheet for eleven years. Automating a broken process does not improve it. It produces broken outputs faster, with a cleaner interface.
What is a single point of failure in institutional records?
A staff member who carries critical institutional data in their head, their personal files, or a spreadsheet only they fully understand. When that person leaves, retires, or is out for two weeks, the organization discovers in real time that what it thought was a records system was actually a person. This is the most common data vulnerability in Michigan legal and public sector institutions, and it is rarely identified until the person is already gone.
Where does the audit start?
Identify every critical record category the institution maintains, formal and informal, and document where each one actually lives. Not the system it is supposed to live in. Where it actually lives. Who maintains it, how often it is updated, and what happens operationally if that person does not come in on Monday. Most institutions are surprised by what this inventory produces. The surprise is the point.
Why is inconsistent record-keeping a legal risk?
In regulated environments, inconsistent record-keeping and untraceable data lineage are compliance liabilities. When a FOIA request, an audit, or a discovery order arrives and the institution cannot produce a coherent record trail, the gap between what was documented and what actually happened becomes the exposure. That gap has a name in litigation. It is called spoliation, or negligence, depending on how charitable opposing counsel chooses to be that day.
- Methodology: Institutional forensics framework developed through Clutch Justice investigative reporting on Michigan court and public agency records practices, 2023 to 2026.
- Michigan FOIA: Michigan Freedom of Information Act, MCL 15.231 et seq., governing public agency record production and retention obligations.
- Court Rules: Michigan Court Rules governing document production and record maintenance in civil proceedings, MCR 2.310 and MCR 2.506.
- Federal Framework: Title IV-D federal record-keeping requirements for state agencies administering child support programs, 45 CFR Part 303.
- Regulatory: State and federal audit standards for public agency record retention, including OIG audit protocols applicable to federally funded programs and GASB guidance on governmental records.
APA 7: Williams, R. (2026, May 15). Legal practices bought the AI. They did not buy the infrastructure it needs to work. Clutch Justice. https://clutchjustice.com/2026/05/15/legal-practices-bought-the-ai-not-the-infrastructure/
MLA 9: Williams, Rita. “Legal Practices Bought the AI. They Did Not Buy the Infrastructure It Needs to Work.” Clutch Justice, 15 May 2026, clutchjustice.com/2026/05/15/legal-practices-bought-the-ai-not-the-infrastructure/.
Chicago: Williams, Rita. “Legal Practices Bought the AI. They Did Not Buy the Infrastructure It Needs to Work.” Clutch Justice, May 15, 2026. https://clutchjustice.com/2026/05/15/legal-practices-bought-the-ai-not-the-infrastructure/.
You have documents. I find where they break.
Clutch Justice provides independent institutional forensics consulting for law firms, public agencies, advocacy organizations, and legal service providers. If your data infrastructure has never been formally audited, it has been assumed to be functional. Those are different things.
- Government Accountability & Institutional Forensics
- Procedural Risk & Abuse Pattern Recognition
- Legal AI & Court Systems Domain Advisory
“I map how institutions hide from accountability. That map is what I sell.”