The Michigan Indigent Defense Commission approved over $295 million in statewide funding for public defense systems covering 133 trial court jurisdictions. As those systems prepare FY2027 compliance plans, the central risk is not the size of the allocation. It’s what happens when compliance reporting depends on self-generated data that no one is independently validating, and when other jurisdictions start benchmarking against numbers that may not reflect operational reality.
The Michigan Indigent Defense Commission’s 2025 Annual Report documents something that looks, on its surface, like a success story: over $295 million approved for public defense systems across the state, covering 133 trial court jurisdictions, with technical assistance flowing to systems as they prepare for the next round of compliance review. The infrastructure is funded. The standards exist. The plans are being written.
What the report does not document, because reports of this kind are not designed to, is whether the numbers inside those systems are real.
What the $295 Million Actually Represents
The MIDC was created by legislation in 2013 following an advisory commission finding that Michigan's public defense system had constitutional deficiencies. The MIDC Act established minimum standards and created a funding structure under which the state covers the costs of bringing local systems into compliance. Standards 1 through 4 cover training, client interviews, investigation access, and initial representation. Standards 5, 6, and 7, approved between 2020 and 2023, added independence from the judiciary, attorney workload caps, and attorney qualification requirements. Standard 6, which governs caseloads, was not formally approved by the Department of Licensing and Regulatory Affairs until October 2023, making it the newest compliance obligation most systems are still working to meet.
That timeline matters. Most jurisdictions are still in the early stages of building systems capable of tracking caseload data in a way that satisfies Standard 6. The RAND Corporation produced the empirical foundation for those standards in 2019, recommending maximum caseloads based on offense category and attorney time. MIDC’s current compliance framework expects systems to monitor against those numbers. The question of how they monitor, and whether that monitoring is reliable, is where the problem starts.
The $295 million is a constitutional investment. But constitutional compliance is not established by the size of a funding allocation. It is established by whether the systems that allocation supports are actually functioning as the standards require. Funding and function are not the same thing, and right now the only people confirming the latter are the same people receiving the former.
The Self-Reporting Problem
MIDC compliance plans are submitted by the systems being evaluated. Caseload data is tracked by the same management offices that are accountable for staying within limits. Billing records are maintained internally. Compensation structures are reported by the offices whose budgets they govern. There is technical assistance from MIDC regional staff, and there is a caseload monitoring pilot project, the LMOS system, designed to help track attorney assignment data across systems. Both are real and functioning. Neither constitutes an independent external audit.
MIDC’s own meeting materials make the data infrastructure challenge visible. October 2025 materials document that some jurisdictions have submitted requests for additional caseload data collection funding that MIDC staff flagged as potentially duplicative. December 2025 materials note directly that systems are struggling with caseload requirements and that rural counties face attorney availability gaps that affect whether compliance is functionally achievable, not just paperwork-achievable. These are not minor editorial notes. They are the agency’s own documentation of known system stress points that the annual report’s headline number does not surface.
The December 2025 materials are specific: many counties have few or no attorneys qualified to accept life offense assignments, and prosecutors in rural areas report serious recruitment and retention problems of their own. These findings appear in staff documentation, not in the Annual Report's executive summary.
The Benchmarking Effect
The Annual Report, once published, functions as something its drafters did not design it to be: a statewide comparison table. When a county compliance officer sits down to write an FY2027 plan, one of the first things that person will ask is what other counties are paying attorneys, what caseload numbers are being accepted by MIDC staff, and what level of reporting has been passing compliance review. The answers to those questions come primarily from the Annual Report and from MIDC meeting materials.
That is not a problem in itself. Transparency is how systems set norms. The problem is what happens when the underlying figures are unreliable. If a jurisdiction reports a caseload that is understated because its tracking system counts assignments rather than active files, and another jurisdiction uses that figure as its reference point for what a “compliant” caseload looks like, the second jurisdiction has calibrated its compliance claim to a number that was wrong before it got there.
This is how compliance frameworks fail quietly. Not through defiance or fraud. Through imprecision that nobody catches because the system checking the numbers is the same system that generated them. One inaccurate benchmark becomes the floor everyone else builds on, and the distortion propagates across the entire statewide dataset.
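The dynamic can be shown in miniature. Suppose each jurisdiction nudges its reported figure toward the statewide average every planning cycle. The toy simulation below uses invented caseload numbers and an invented 30% calibration weight; it is a sketch of the propagation mechanism, not a model of any actual jurisdiction's behavior.

```python
# Toy model: five jurisdictions whose true average caseload is 150 open files.
# One system undercounts by a third because it tallies new assignments,
# not active files.
reported = [100, 150, 150, 150, 150]

for cycle in range(1, 4):
    benchmark = sum(reported) / len(reported)                 # this cycle's "statewide norm"
    reported = [r + 0.3 * (benchmark - r) for r in reported]  # calibrate 30% toward it
    print(f"cycle {cycle}: benchmark={benchmark:.0f}, "
          f"reports={[round(r) for r in reported]}")

# Every report converges on 140, not 150. The one bad number permanently
# lowered the floor the accurate systems calibrated against.
```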
The implications are not abstract. Under Strickland v. Washington, 466 U.S. 668 (1984), the constitutional right to counsel includes the right to effective assistance. An attorney carrying an unconstitutional caseload is not providing effective assistance regardless of what the compliance plan says. If the plan’s caseload claim is inaccurate, the defendant pays the price, and the exposure materializes at the appellate level, often years after the conviction.
Where the Failure Points Are
The structural gaps are predictable. Caseload reporting depends on what counts as a case and when, questions that different systems answer differently. An assignment-based count looks different from an active-file count, which looks different from a docket-hours count. MIDC’s LMOS pilot project is attempting to standardize this, but participation is not universal and the pilot is still developing. Until there is a uniform statewide tracking standard that all systems are required to use, the compliance numbers from different jurisdictions are not directly comparable even when they look comparable on paper.
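A minimal sketch makes the comparability gap concrete. The records, field names, and quarter window below are invented for illustration and do not reflect the LMOS data model; the point is that two defensible counting rules applied to the same four files produce two different caseloads.

```python
from datetime import date

# Hypothetical assignment records for one attorney. Field names are
# invented for illustration; they do not reflect the LMOS schema.
records = [
    {"case_id": "25-001", "assigned": date(2025, 1, 6),  "closed": date(2025, 2, 3)},
    {"case_id": "25-002", "assigned": date(2025, 1, 14), "closed": None},
    {"case_id": "25-003", "assigned": date(2025, 2, 20), "closed": None},
    {"case_id": "24-117", "assigned": date(2024, 11, 2), "closed": None},  # carryover file
]

q_start, q_end = date(2025, 1, 1), date(2025, 3, 31)

# Rule 1: assignment-based count -- new assignments received this quarter.
assignments = sum(q_start <= r["assigned"] <= q_end for r in records)

# Rule 2: active-file count -- anything open at any point during the quarter,
# including files assigned before it began.
active = sum(
    r["assigned"] <= q_end and (r["closed"] is None or r["closed"] >= q_start)
    for r in records
)

print(assignments, active)  # 3 vs 4: same attorney, same quarter, different "caseload"
```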
Compensation structures carry a parallel problem. Flat-rate contracts, hybrid hourly arrangements, and different treatments of overhead and support staff costs mean that what one jurisdiction reports as attorney compensation may not be measuring the same thing as what a neighboring jurisdiction reports. The MIDC standards require that compensation be sufficient to attract qualified attorneys. Whether it actually is depends on market conditions that vary significantly across Michigan’s 83 counties, and the Annual Report figure does not disaggregate by geography or caseload type.
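The same problem in sketch form: two jurisdictions can each report $60,000 in attorney compensation while the effective hourly rates diverge sharply once contract structure and overhead are factored in. Every figure below is hypothetical.

```python
def effective_hourly_rate(gross_pay, hours_worked, attorney_overhead=0.0):
    """Net pay per hour after subtracting costs the attorney bears directly.

    Inputs are hypothetical; real contracts allocate overhead and
    support-staff costs in ways that vary by jurisdiction.
    """
    return (gross_pay - attorney_overhead) / hours_worked

# Two jurisdictions each report $60,000 in "attorney compensation."
flat_rate = effective_hourly_rate(60_000, hours_worked=1_400,
                                  attorney_overhead=18_000)  # contractor covers own office
hourly = effective_hourly_rate(60_000, hours_worked=900)     # office covers overhead

print(f"flat-rate contract: ${flat_rate:.2f}/hr")  # $30.00/hr
print(f"hourly contract:    ${hourly:.2f}/hr")     # $66.67/hr
```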
Data fragmentation is the third failure point. Court management systems, billing platforms, and administrative records are frequently maintained separately and are not integrated. An attorney’s active caseload may be visible in one system but not another. When the compliance narrative is written, the person writing it is drawing from systems that were not designed to talk to each other, and may be assembling a picture that is internally coherent but disconnected from operational reality.
A real compliance audit does not ask whether the plan was submitted on time. It asks whether docket activity matches reported caseloads, whether billing records align with reported attorney effort, whether compensation structures reflect actual hours worked rather than contracted flat rates, and whether outcomes, including plea rates, dismissal rates, and appellate reversal rates, are consistent with what the workload claims make plausible. Most systems currently cannot answer those questions cleanly.
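Mechanically, that kind of cross-referencing is not hard once the sources are joined. Here is a hedged sketch assuming three hypothetical per-attorney record sets and invented thresholds; a real audit would use offense-weighted case standards rather than the crude eight-hours-per-case plausibility floor used here.

```python
# Hypothetical per-attorney figures drawn from three separate systems.
reported_caseload = {"A101": 95, "A102": 140}   # compliance plan claims
docket_cases      = {"A101": 158, "A102": 143}  # distinct cases with docket activity
billed_hours      = {"A101": 310, "A102": 1210} # hours billed over the same period

GAP_TOLERANCE = 0.15  # flag >15% docket/report divergence (invented threshold)
HOURS_PER_CASE = 8    # crude plausibility floor; a real audit would use case weights

for attorney, claimed in reported_caseload.items():
    observed = docket_cases[attorney]
    flags = []
    if abs(observed - claimed) / observed > GAP_TOLERANCE:
        flags.append(f"docket shows {observed} cases vs {claimed} reported")
    if billed_hours[attorney] < claimed * HOURS_PER_CASE * 0.5:
        flags.append(f"only {billed_hours[attorney]} hrs billed against {claimed} cases")
    if flags:
        print(attorney, "->", "; ".join(flags))

# A101 -> docket shows 158 cases vs 95 reported; only 310 hrs billed against 95 cases
```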
Every jurisdiction preparing an FY2027 plan is asking what other systems are doing. The more defensible question is whether your own data holds up. Jurisdictions that can produce clean, cross-referenced documentation of actual caseloads, actual compensation, and actual outcomes are the ones that survive scrutiny when it arrives. Benchmarking against unverified external figures does not protect you if your own numbers are wrong.
MIDC provides technical assistance and reviews submitted plans. The Michigan Auditor General has the MIDC on its in-progress audit list. But between those two functions, there is no standing mechanism for independent, cross-referenced validation of the data inside the plans. That gap is not a criticism of MIDC’s structure. It is a description of a structural problem the current compliance framework was not designed to solve.
The Counterargument: What the MIDC Framework Is Actually Doing
The data problems described here are real, but the framework’s achievements should not be understated. Before the MIDC Act, Michigan had no statewide standard for public defense at all. Counties set their own terms, paid attorneys whatever the local market bore, and had no mechanism for verifying that constitutional minimums were being met. The $295 million represents a structural shift from that baseline. The LMOS pilot project is actively attempting to solve the caseload tracking problem. MIDC regional staff provide technical assistance that catches plan deficiencies before they become compliance failures. These are not small things.
The argument here is not that the system is failing. It's that a well-funded system with honest deficiencies in its data infrastructure is more dangerous than a poorly funded system with the same deficiencies, because the well-funded one is harder to see clearly. When the number looks big and the plans look complete, the pressure to ask whether the underlying data is reliable drops. That's the moment when structural problems get locked in.
When the Auditor General review noted above reports, it will be the first independent external evaluation of whether what the compliance plans say matches what the systems are actually doing. The results will matter for every jurisdiction that has been using other jurisdictions' plans as a reference point.
The money is there. The standards exist. The plans are being written. What nobody has verified yet is whether the systems behind the plans are real. That’s the question $295 million deserves to have answered.