JT/DL: Does Justice Tech work? 🤷
Plus, new jobs
The JT/DL is a twice-monthly newsletter about justice technology news, events, and opportunities. My opinions do not reflect those of my employers or professional partners.
So, does any of this stuff work?
During the time I've spent in the justice technology space, I've heard a lot about flowers. To support open experimentation and an influx of new ideas, funders and sector leaders wanted to "let 1,000 flowers bloom" and just see what happens. This philosophy gave rise to endless pilots, products, and promises.
The goal was to develop technology that helped people in the justice gap: the chasm between the legal needs of billions of people worldwide and the systems meant to serve them and resolve their problems. Over the past two decades, governments and nonprofits poured millions of dollars into digital tools like online dispute resolution, AI-powered legal assistants, and data systems to make justice more accessible. But does any of it actually work?
That's the central question of my new paper, "A Research Agenda for Justice Technology," published by the American Bar Foundation. Thanks to Becky Sandefur and Matthew Burnett for making the publication possible, and to the other amazing authors for their contributions. I can't recommend the whole publication enough. To get you started, here are five takeaways from my contribution:
1. We donât know what works in justice tech.
This is the paper's bluntest finding. Despite significant investment in justice technology globally, rigorous evaluation of whether these tools achieve their intended outcomes for the public is scarce. Most reports on justice technology are descriptive, cataloging what exists without asking whether it works (a body of work I've contributed to). There are landscape surveys, descriptive research, and pilot reports, but few answer the fundamental question: did this technology improve outcomes for the people it was intended to serve? We can say unequivocally that text message reminders work as promised. But after that, clear answers drop off fast.
2. We canât agree on how to measure success.
One reason we don't know what works is that we haven't agreed on what "working" means. The field lacks a shared definition of success for justice technology. Instead, evaluators tend to fall back on simple metrics, like case processing times, website traffic, and user counts, that tell you about throughput but nothing about whether a person's problem was resolved. Worse, those numbers can be inaccurate, inflated by data duplication or poor product design. And even accurate numbers often capture only a single step in a user's journey. Without agreed-upon, people-centered outcome metrics, the sector can't distinguish between a tool that merely processes cases faster and one that produces better results.
3. Weâre not talking about the enabling environment.
Technology doesn't operate in a vacuum, but the conversation about justice tech rarely extends beyond the tools themselves. We need to pay far more attention to the enabling environment: the people, money, and architectural conditions that determine whether technology can actually succeed.
This includes state capacity: do justice agencies have staff with the technical and managerial skills to implement and maintain modern technology? The evidence suggests many don't, but where's the gap analysis or the tested trainings that ensure better implementation and lead to better public outcomes? As for funding: do we know the relationship between what a justice agency spends on tech and the outcomes of the public it serves? We don't. So how are government leaders, advocates, and technologists supposed to credibly say, "We need more money"? Then there's all the unsexy technical systems stuff, like interoperability, accessibility, and security, that barely receives a wisp of coverage yet determines implementation outcomes. For example, we can't deploy basic software, like SMS messaging tools (THE ONE THING WE KNOW WORKS!), in courts because they don't have functioning APIs. Getting the technology right matters little if the agency deploying it can't train staff on new technologies, can't make the argument for its technology budget, or is blocked by outdated architecture.
4. Vendor capture is a hidden chokepoint.
Justice agencies increasingly depend on private vendors for core technology, including case management systems, e-filing platforms, and digital payment tools, and the dynamics of that market are troubling. Investigative reporting has documented inflated costs, under-performing products, and vendors that hold their public customers hostage. But the problem runs deeper than bad vendors.
Without justice agencies adopting modern interoperability standards, vendors create lock-in effects: agencies are stuck with systems that don't talk to each other and can't easily integrate new tools. This stifles innovation and produces wasteful spending on subpar products. Meanwhile, justice agencies lack basic benchmarks to evaluate what they're buying; they don't know what a case management system should cost or how to validate an AI vendor's accuracy claims. The result is a market where incumbents face little competitive pressure and new entrants, potentially offering better, less expensive, more people-centered tools, can't break in. Research into potential anti-competitive harms and strategies to open these markets could be one of the highest-leverage interventions the field pursues.
5. Where do we go from here?
I propose an applied research agenda that brings broader expertise to the issues above. On the technical side, this means independent audits of justice tools (building on work like Duke's recent AI chatbot audit), post-mortem analyses of failed projects so the sector learns from its mistakes instead of repeating them, and deeper investigation into systemic technical problems like cybersecurity and interoperability. On the enabling environment, the priorities include gap analyses of agency staffing and skills, figuring out how public budgets shape technology outcomes, and leveraging contracting processes to ensure that security, interoperability, and accessibility standards are met. Crucially, I want this agenda to expand the coalition of experts beyond legal academics to include economists, antitrust experts, and cybersecurity professionals whose expertise the field badly needs.
This agenda moves us past the "1,000 flowers blooming" phase of justice technology. The flowers bloomed some time ago, but no one can say with authority which one is poisonous or which one is medicinal. One layer deeper, no one credibly knows how environmental factors hindered an early seedling's potential. None of this is glamorous in the way pitching yet another AI chatbot is, but until we build the evidence base for what works, why it works, and what conditions it needs to succeed, justice technology will remain a field of well-intentioned experiments with dubious results.
News
Enterprise Justice: Tyler Technologies and the privatizing court. (Yale Law Journal)
2025's Arrest by Phone won a Pulitzer. (Bloomberg) (h/t Keith Porcaro)
The U.S. Supreme Court wrangles with police use of cell location data to find suspects. (New York Times)
Colorado's AI Bias law is paused as Musk's xAI seeks an injunction. (Bloomberg Law)
The U.S. government has a policy of targeting non-citizen researchers, advocates, fact-checkers, and trust and safety workers for visa denials, revocations, detention, and deportation based on their work. (Knight First Amendment Institute)
His DNA was taken after his arrest at an ICE protest; now, he's suing. (New York Times)
Cops are using license plate readers to stalk their exes. (Tech Dirt)
A Mexican surveillance giant you've never heard of is now watching the U.S. border. (Rest of World)
City and state officials want speed-limiting devices installed in the cars of chronic speeders. (New York Times)
North Carolina man pleads guilty to doxing Supreme Court justice. (The Hill) (h/t Bill Raftery)
Access to justice in the age of AI: Evidence from U.S. Federal Courts. (Shah and Levy)
A deep dive on AI chatbots. (Last Week Tonight)
The UKâs Sentencing Act leaves the door open for Big Tech to build digital prisons. (Tech Policy Press)
RightsCon Canceled After Zambia Requires "Full Alignment" With "National Values". (Tech Policy Press)
Code for America and Anthropic are launching an AI tools for government program. (CfA)
Criminologists are baffled by Arc Raiders players' behavior: They're being nice. (AV Club)
Events
The NYC Leadership Summit on AI in Criminal Justice is June 16. (JJC)
Wikimania will be in Paris July 23-25. (WM)
The A2J Network Conference will be in Cincinnati October 21-22. (A2JN)
Jobs & Opportunities
Arnold Ventures is looking for a criminal justice innovation fellow. (AV)
[New] Blue Meridian needs a portfolio lead for its Studio. (BM)
The Brennan Center for Justice has multiple openings. (BCJ)
The Center for Democracy and Technology has academic externships. (CDT)
The Chan Zuckerberg Initiative needs a counsel for AI and tech. (CZI)
[New] The University of Chicago Crime Lab has multiple openings. (UCCL)
Code for America has multiple openings. (CfA) (h/t Russ Finkelstein)
The Free Law Project is looking for court partners on its Litigant Portal project. (FLP)
The Institute for Law and AI has multiple openings. (ILAI)
The Kapor Foundation needs research fellows. (KF)
Maryland Legal Services Corporation needs a director of strategic technology. (MLSC) (h/t Dave Pantzer)
OpenMinded needs a senior policy manager. (OM)
The Pew Charitable Trusts needs a data and policy officer for their courts work. (Pew)
Recidiviz has multiple openings. (R)
Renaissance Philanthropy is hiring for multiple roles. (RP)
[New] TechTonic Justice needs a chief of staff. (TJ)