In April 2026 we sunset Casetrack, the production case-management product AppRocket had been building and operating for 18 months. I want to tell that story here, plainly, because it shapes everything we do next.
What Casetrack was
Casetrack was a case-management product built on top of a legal-vertical AI substrate. Real attorneys at small and mid-market firms used it for intake routing, conflict-check resolution, doc-review assist, and matter-lifecycle tracking. It worked. The firms that paid for it got value. We learned more about shipping AI to attorneys in 18 months than I had learned in five years of consulting before that.
Why we sunset it
Three things became clear over those 18 months.
First, the real value we delivered to attorneys was not the AI surfaces — it was the eval discipline behind them. The same firms paying for Casetrack would have paid more for the eval framework applied to whatever case-management system they already ran. We were selling a product when the market wanted the methodology.
Second, the integration tax of replacing a firm's case-management system was enormous. Even with Casetrack working better than the incumbents on the dimensions we measured, the migration cost (data import, attorney retraining, billing reconciliation, conflicts-database normalization) routinely doubled the perceived total cost of ownership.
Third, the services revenue we generated alongside the SaaS subscription was higher-margin and faster-compounding than the SaaS itself. We were optimizing the lower-leverage business.
So we made the focus call: sunset Casetrack and transition each customer either to a migration back to their prior system with our eval framework deployed on top, or to a custom services engagement applying the Casetrack-developed surfaces to a system they already owned. No customer was forced to migrate without a working alternative; sunset comms ran on a 12-month tail.
What this unlocks
AppRocket is now a vertical AI implementation studio for mid-market law firms. Same eval discipline, same agent templates (intake triage, doc-review assist, conflict-check, billing-recon), now deployed inside whatever case-management system the firm already runs. Faster. Cheaper. More attorneys served per quarter than Casetrack ever could have reached.
The detailed retrospective — what broke first in production, what patterns transferred across firms, what we would do differently in 2026 — is published as primary-source research at /research/lessons-from-operating-production-legal-ai. It includes the buyer-side checklist we now recommend mid-market firms use to evaluate any AI vendor.
What I am telling firms in 2026
If you are a managing partner or director of legal innovation evaluating AI right now, the playbook the Casetrack experience validates is straightforward.
- Brand and discoverability before AI. Nothing else compounds without inbound. The ABS & Co. case study walks through this sequencing in detail.
- Substrate before surfaces. Spend the unglamorous weeks on matter taxonomy, intake routing, and conflict-data hygiene. The AI deployments that follow will succeed because of it.
- Eval discipline from day one. Pick AI surfaces where the eval is testable and the cost of error is bounded. Skip the vendor-pitched cases that demo well and fail in production.
- Sequence by phases, not big-bang. Phase 1 funds phase 2; phase 2 makes phase 3 possible. The firms whose AI deployments stall are almost always the ones that tried to do all three at once.
The starting point for every firm we now talk to is the AI Readiness Audit — 2 weeks, $15,000, founder-led, vendor-neutral output. That is the same engagement that scoped and sequenced Casetrack's customers, and the same engagement that scoped ABS & Co.'s implementation.
On sunsetting publicly
I will close by saying this: I think the legal-tech industry talks about "we shipped this" plenty and about "we sunset this" almost never. Sunsetting publicly is a credibility upgrade, not a credibility cost. If you are a vendor whose product is not working, the longer you keep it on the market, the more attorneys' time you waste. Killing what is not working — and writing down what you learned — is the move I wish more vendors would make.
Casetrack taught us how to ship legal AI at production quality. That work is now in the hands of a services firm that can apply it to dozens of mid-market firms simultaneously. Same lessons, more attorneys, better leverage. That is the call we made, and we stand behind it.
If you want to talk about what your firm should do next, book a call. And if you want the long version of the operational lessons, the research retrospective is up.