American courts are structurally incapable of detecting the patterns they are legally obligated to stop. Stalking through litigation, abuse of process, and real-time deprivation of constitutional rights all depend on the same condition: fragmented dockets that do not talk to each other and no automated system assigned to notice when they should. The technology to fix this exists. Deploying it is a political decision that has not been made.
The court system was designed for a world in which the problem in front of the judge was the only problem. One litigant, one dispute, one courtroom, one record. That world has not existed for a long time, and the architecture built to serve it is now actively enabling the harms it was supposed to prevent.
Courts do not run on rules that are easy to apply. They run on rules that are complex by design, interpretable by training, and dependent on human memory for enforcement. A procedural violation that would be instantly flagged by any competent database query — a filing in the wrong venue, a party who has appeared in three simultaneous proceedings targeting the same respondent, a sentence that departs from the guidelines by a factor that triggers mandatory review — goes undetected in most American courtrooms because no one is watching the whole picture. The judge sees the case. Nobody sees the pattern.
The Docket Problem Is Not an Accident
Every major state court system in the country maintains some version of a case management database. Most of them have been built, rebuilt, and patched over decades with different vendors, different schemas, and different access controls. Michigan’s own MiFile rollout illustrated the problem in real time: a system intended to modernize court records instead produced a patchwork that neither attorneys nor the public could reliably navigate, introduced errors into filing records, and offered no cross-court pattern analysis out of the box.
The result is that the data exists in the system — every filing, every party name, every case number — but the system is not designed to ask questions of itself. It does not ask whether the same petitioner has filed three PPO applications in three different counties targeting the same respondent in the same month. It does not ask whether an attorney of record in a civil case has also filed a separate action in a different court involving the same underlying facts without disclosing the parallel proceeding. It does not ask whether a sentencing departure exceeds the guidelines range by a magnitude that requires a written finding under state law. A human has to ask. And humans, including attorneys, frequently don’t know to ask, don’t have the access to check, or don’t have the standing to raise the issue before the damage is done.
A court system that cannot detect patterns is not a neutral system. It is a system that advantages whoever is willing to exploit its blind spots. That advantage does not distribute randomly. It goes to parties with resources, with legal representation, and with enough time to run a coordinated campaign before any single judge realizes they are seeing one piece of a larger operation.
What Computers Already Know How to Do
None of the pattern analysis described above is technically novel. Cross-referencing party names against a filing database is a query that any competent database administrator can write. Flagging cases where the same petitioner appears against the same respondent in multiple simultaneous proceedings is a join operation. Checking whether a proposed protective order involves a respondent who is currently represented by counsel in a related proceeding is a lookup. Verifying that a filing’s claimed venue matches the statutory requirements for that case type is a rules-based check that a system can execute at the moment of filing, before a judge ever sees the document.
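To make the point concrete, the join operation described above can be sketched in a few lines. This is a minimal illustration, not a real court system: the unified `filings` table, its columns, and the sample party names are all hypothetical, and the threshold of three simultaneous proceedings is an arbitrary example.

```python
import sqlite3

# Hypothetical unified filings table; real court schemas vary by vendor.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE filings (
        case_no TEXT, petitioner TEXT, respondent TEXT,
        county TEXT, case_type TEXT, status TEXT, filed DATE
    )
""")
conn.executemany(
    "INSERT INTO filings VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        ("24-001", "A. Smith", "B. Jones", "Wayne",   "PPO", "open", "2024-03-01"),
        ("24-002", "A. Smith", "B. Jones", "Oakland", "PPO", "open", "2024-03-08"),
        ("24-003", "A. Smith", "B. Jones", "Macomb",  "PPO", "open", "2024-03-15"),
        ("24-004", "C. Lee",   "D. Park",  "Wayne",   "PPO", "open", "2024-03-02"),
    ],
)

# Flag petitioner/respondent pairs with multiple simultaneous open proceedings.
rows = conn.execute("""
    SELECT petitioner, respondent,
           COUNT(*) AS open_cases,
           COUNT(DISTINCT county) AS counties
    FROM filings
    WHERE status = 'open'
    GROUP BY petitioner, respondent
    HAVING open_cases >= 3
""").fetchall()

for petitioner, respondent, open_cases, counties in rows:
    print(f"FLAG: {petitioner} v. {respondent}: "
          f"{open_cases} open cases across {counties} counties")
```

The entire "detection system," at its core, is one GROUP BY over data the courts already hold. Everything hard about deploying it is institutional, not computational.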
The private sector figured this out a long time ago. Financial institutions run real-time fraud detection across millions of transactions because the cost of not detecting the pattern exceeds the cost of building the detection system. E-commerce platforms flag accounts that exhibit behavioral signatures consistent with abuse because the pattern is visible in the data. Social platforms, whatever their other failures, have invested heavily in coordinated inauthentic behavior detection because pattern abuse at scale is both recognizable and damaging.
Courts have not made that investment. The argument is usually about resources. Occasionally it is about judicial independence — the concern that automated flagging would constrain discretion in ways that are inappropriate for an institution whose legitimacy depends on case-by-case judgment. Both arguments are worth engaging seriously, and both ultimately fail to justify the current situation.
The Discretion Argument Gets It Backwards
The judicial independence argument against automated oversight rests on a misunderstanding of what automation would do. A flag is not a ruling. A cross-docket pattern alert does not decide the case. It puts the relevant information in front of the judge who is about to make a decision without it. The judge still decides. The difference is that the decision is now made with the full picture rather than with the fragment the filing party chose to present.
Judicial discretion is not enhanced by ignorance of relevant facts. A sentencing judge who does not know that the defendant before them has been the subject of multiple prior proceedings in which the same departure argument was raised and rejected is not exercising independent judgment. They are exercising uninformed judgment. That is not a feature of the system. It is a failure of the system, and it is a failure that disproportionately harms defendants who lack the resources to research and present that history themselves.
The practical consequence of the information gap is that abuse of process has a lower effective cost than it should. A party willing to file frivolous proceedings across multiple venues imposes real costs on the respondent — legal fees, time, emotional toll, and in some cases the chilling effect on speech or conduct that the proceedings were designed to produce — while facing minimal systemic resistance. The court that processes each filing individually has no mechanism to penalize the pattern because it cannot see the pattern.
Stalking Is a Pattern Crime. Courts Treat It Like a Discrete Event.
Stalking is defined by repetition. A single contact, a single filing, a single appearance outside someone’s home is not stalking. The pattern is the crime. State and federal law recognize this. Stalking statutes require a course of conduct, not an incident. But the court systems that are supposed to adjudicate stalking behavior are structurally organized around discrete events. Each petition for a personal protection order is evaluated on its own facts. The prior petitions — granted, denied, expired, violated — may or may not surface, depending on whether the petitioner’s attorney thought to check, whether the records are accessible across jurisdictions, and whether the judge had time to review a complete history in a docket that may involve dozens of active cases that day.
The problem compounds when the stalking itself is conducted through the court system. A litigant who uses repeated, meritless filings to maintain contact with, drain the resources of, or frighten a target is engaging in stalking by another name. Courts call it abuse of process when they can see it. They frequently cannot see it because each filing looks, in isolation, like a legitimate legal action that the court has an obligation to process.
An automated cross-docket system would see it immediately. The signature is not subtle: same petitioner, same respondent, multiple proceedings, escalating or parallel timing, contested venue in more than one filing, pattern of withdrawal immediately before hearing. These are not ambiguous signals. They are the record of a campaign, and they are already in the data.
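The signature listed above can be written down directly as a checklist over filing records. The `Filing` record, the sample data, and every threshold below (three proceedings, a 60-day window, two contested venues) are illustrative assumptions, not statutory standards.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Filing:
    petitioner: str
    respondent: str
    county: str
    filed: date
    withdrawn_before_hearing: bool = False
    venue_contested: bool = False

def campaign_signals(filings: list[Filing]) -> list[str]:
    """Return the campaign markers present in a set of filings sharing
    the same petitioner/respondent pair. Thresholds are illustrative."""
    signals = []
    if len(filings) >= 3:
        signals.append("multiple proceedings")
    dates = sorted(f.filed for f in filings)
    if len(dates) >= 2 and (dates[-1] - dates[0]).days <= 60:
        signals.append("parallel timing")
    if sum(f.venue_contested for f in filings) >= 2:
        signals.append("contested venue in more than one filing")
    if any(f.withdrawn_before_hearing for f in filings):
        signals.append("withdrawal before hearing")
    return signals

filings = [
    Filing("A. Smith", "B. Jones", "Wayne",   date(2024, 3, 1)),
    Filing("A. Smith", "B. Jones", "Oakland", date(2024, 3, 8),
           venue_contested=True),
    Filing("A. Smith", "B. Jones", "Macomb",  date(2024, 3, 15),
           withdrawn_before_hearing=True, venue_contested=True),
]
print(campaign_signals(filings))
```

Each signal is a boolean test over fields every filing already carries. The function does not decide anything; it surfaces the markers for a human to evaluate.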
The Self-Represented Litigant Is the Canary
The court system’s reliance on human memory and professional expertise to navigate its own rules falls hardest on people without attorneys. A self-represented litigant facing a coordinated multi-forum campaign has to identify the venue problem, research the statutory authority, draft the motion, present it coherently at a hearing, and do all of this while managing the practical consequences of the proceedings they are already in. The opposing party’s attorney, by contrast, knows exactly what was filed, when, and why, because they filed it.
This is not a neutral starting position. The complexity of procedural rules is not democratically distributed. It operates as a tax on people who cannot afford representation, and it creates structural openings for exploitation that a better-designed system would close automatically, without requiring the victim to identify and articulate the problem in real time under pressure.
The argument that courts cannot be expected to compensate for the representation gap is legitimate as far as it goes. Courts are not law schools. But courts can be responsible for their own systems. A venue check that runs automatically at filing costs the court nothing and gives the self-represented respondent the same information the filing attorney already has. A cross-docket pattern flag that surfaces at the clerk’s office before a case is docketed costs almost nothing and stops a substantial proportion of coordinated harassment before it starts.
What Reform Actually Requires
Every filing in a state court system should be queryable against every other filing. Party names, case numbers, attorneys of record, and case types should all be cross-referenceable in real time. This is a database architecture problem, not an AI problem. It requires political commitment to fund and maintain a unified system rather than allowing the existing patchwork to persist.
Before a case is accepted for docketing, the system should verify that the claimed basis for venue matches the statutory requirements for that case type. Venue manipulation is one of the most common mechanisms of abuse of process. It is also one of the most mechanically checkable. A rules-based verification that runs in seconds eliminates an entire category of frivolous filing before it consumes judicial resources.
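A venue verification of this kind is a table lookup. The rules table below is a made-up simplification: actual statutory venue provisions have exceptions and sub-conditions this sketch does not model, which is exactly why the rules table would need to be maintained by the court, not hard-coded.

```python
# Hypothetical venue rules: which bases of venue each case type allows.
# Real statutory venue provisions are more detailed; this is a sketch.
VENUE_RULES = {
    "PPO":      {"respondent_residence", "petitioner_residence"},
    "contract": {"defendant_residence", "place_of_performance"},
    "tort":     {"defendant_residence", "place_of_injury"},
}

def venue_check(case_type: str, claimed_basis: str) -> tuple[bool, str]:
    """Run at the moment of filing, before docketing."""
    allowed = VENUE_RULES.get(case_type)
    if allowed is None:
        return False, f"unknown case type: {case_type}"
    if claimed_basis not in allowed:
        return False, (f"claimed venue basis '{claimed_basis}' is not a "
                       f"statutory basis for {case_type}; "
                       f"allowed: {sorted(allowed)}")
    return True, "venue basis accepted"

ok, msg = venue_check("tort", "petitioner_residence")
print(ok, msg)
```

The check rejects nothing on the merits; it only refuses to docket a filing whose claimed venue basis does not exist in the rules for its case type, and it tells the filer why.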
Personal protection order proceedings are among the highest-stakes, fastest-moving, and most information-poor proceedings in the court system. A judge issuing an emergency PPO typically has fifteen minutes and one party’s affidavit. An automated system that surfaces the respondent’s full PPO history, including prior petitions, outcomes, and violations, before the ex parte hearing is not a radical intervention. It is basic due process infrastructure.
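The history lookup described here is, again, a single query against a unified table. The `ppo_cases` schema and records below are hypothetical; the point is that surfacing a respondent's full history is a SELECT, not a research project.

```python
import sqlite3

# Hypothetical PPO history table; real records span multiple county systems.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ppo_cases (
        case_no TEXT, petitioner TEXT, respondent TEXT,
        county TEXT, outcome TEXT, violations INTEGER, filed DATE
    )
""")
conn.executemany(
    "INSERT INTO ppo_cases VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        ("22-101", "A. Smith", "B. Jones", "Wayne",   "denied",  0, "2022-05-01"),
        ("23-440", "A. Smith", "B. Jones", "Oakland", "granted", 2, "2023-02-10"),
        ("24-002", "C. Lee",   "B. Jones", "Wayne",   "expired", 0, "2024-01-20"),
    ],
)

def respondent_history(respondent: str) -> list[tuple]:
    """Everything a judge could see before the ex parte hearing:
    prior petitions, outcomes, and violations involving this respondent."""
    return conn.execute("""
        SELECT case_no, petitioner, county, outcome, violations
        FROM ppo_cases WHERE respondent = ?
        ORDER BY filed
    """, (respondent,)).fetchall()

for row in respondent_history("B. Jones"):
    print(row)
```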
When a sentence departs from guidelines by a magnitude that triggers mandatory written findings under state law, the system should flag the departure and require confirmed written findings before the sentence is entered. This is not a constraint on judicial discretion. It is enforcement of the legislative requirements for exercising that discretion. Many departure findings that are legally required are currently not produced. An automated prompt closes the gap between what the law requires and what the system actually delivers.
The Counterargument: Automation Creates Its Own Errors
The most serious objection to automated court oversight is not about resources or judicial independence. It is about error. Automated systems produce false positives. A cross-docket flag on two people with the same name is not a pattern of abuse. A venue check that fails to account for a legitimate exception in the statute creates a barrier to access. A sentencing departure flag on a case where the parties have stipulated to the departure wastes judicial time.
These are real concerns, and they do not argue against automation. They argue for well-designed automation with transparent logic, human review at the flagging stage, and robust exception handling. A flag is not a barrier. It is a prompt for a human to look at something that the system identified as potentially worth looking at. The human retains full authority to review the flag, find the explanation, and proceed. The alternative to imperfect automated flagging is the current system, in which the absence of any flagging means that well-documented patterns of abuse proceed to completion before anyone with authority to stop them has the relevant information.
Imperfect detection beats no detection. The question is not whether automation makes errors. The question is whether a system that makes occasional false-positive errors is better or worse than a system that makes systematic false-negative errors at scale. The current system makes the systematic error every day. It misses the patterns that are already in the data. Those missed patterns have victims.
The Decision Not to Build Is a Decision
Court modernization failures are often narrated as resource problems or technical problems. They are political problems. The funding exists in many jurisdictions to build substantially better systems than currently operate. The technology exists. What does not exist is a constituency powerful enough to demand it and a political environment in which court efficiency is a priority rather than an afterthought.
The parties who benefit from the current system’s opacity are not passive. Institutional actors whose patterns of conduct would be visible in a unified, queryable system have an interest in that system not existing. That interest does not always express itself as explicit resistance to modernization. It expresses itself as inertia, as underfunding, as scope reduction, as pilot programs that do not scale, as replacement systems that replicate the fragmentation of the systems they replace.
The result is a court system that remains organized around the assumption that the problem in front of the judge is the only problem. That assumption was outdated when it was made. It is dangerous now. The people paying for it are not the people making the decision to maintain it.