The verification stack
Every state-law claim on the site is verified against at least three independent sources before publication and re-verified daily:
- The state's own official legislative website: code section text, effective dates, amendment history.
- Cornell Legal Information Institute: secondary-authority cross-reference for the same code section.
- CourtListener PACER archive: federal and state appellate decisions interpreting the statute, when available.
- AI dual-consensus pass: Claude Opus 4.7 and Gemini 2.5 Pro independently read the verified text and produce structured fact extractions. Disagreements are flagged for manual review (a sketch of this comparison follows below).
Discrepancies are resolved by going back to the state's own legislative website as the authoritative source. We never publish content based solely on secondary sources.
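To make the dual-consensus pass concrete, here is a minimal sketch of the comparison step. The function name, the flat dict-based extraction schema, and the exact-match comparison are illustrative assumptions, not the production code.

```python
def consensus_check(claude_extraction: dict, gemini_extraction: dict) -> dict:
    """Compare two independent structured fact extractions field by field.

    Agreed fields are accepted; any disagreement is returned for manual
    review against the state legislative site, never auto-resolved.
    """
    agreed, disputed = {}, {}
    for field in claude_extraction.keys() | gemini_extraction.keys():
        a, b = claude_extraction.get(field), gemini_extraction.get(field)
        if a == b and a is not None:
            agreed[field] = a
        else:
            disputed[field] = {"claude": a, "gemini": b}
    return {"agreed": agreed, "disputed": disputed}
```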
The freshness pipeline (HCU-safe)
Google's 2024 Helpful Content Update specifically penalizes pages that bump dates without semantic content changes. We solved this with a two-field approach:
- lastVerifiedDate bumps daily on every successful re-fetch of the source. This is honest maintenance metadata: we did check the source today, even if it was unchanged.
- dateModified bumps only when the verified content actually changed. A hash-compare against the previously fetched version determines whether a real change occurred (sketched below).
Both Google and AI engines (Perplexity, ChatGPT) reward the dual-field signal: maintenance evidence without false-freshness bumps. The result is content that ages truthfully.
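Here is a minimal sketch of the two-field update logic. The record layout, the contentHash field, and the use of SHA-256 are assumptions; the description above only says a hash-compare is used.

```python
import hashlib
from datetime import date

def refresh_record(record: dict, fetched_text: str) -> dict:
    """Update freshness metadata after a successful source re-fetch.

    lastVerifiedDate always advances (the source was checked today);
    dateModified advances only when the content hash actually changed.
    """
    new_hash = hashlib.sha256(fetched_text.encode("utf-8")).hexdigest()
    record["lastVerifiedDate"] = date.today().isoformat()
    if new_hash != record.get("contentHash"):  # contentHash is a hypothetical field
        record["contentHash"] = new_hash
        record["dateModified"] = record["lastVerifiedDate"]
    return record
```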
How AI is constrained
Every AI output on this site is constrained to cite the primary sources actually retrieved by the verification pipeline. The model cannot generate citations to cases or statutes that are not in the verified database. When the model is asked to interpret a statute, it sees the full statutory text and the relevant appellate authority; it cannot fabricate the underlying law.
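A minimal sketch of what that constraint looks like as a post-generation check; the identifiers are hypothetical, and the real pipeline may enforce this at retrieval time instead.

```python
def unverified_citations(cited_ids: list[str], verified_ids: set[str]) -> list[str]:
    """Return any citation the model produced that is not in the verified
    database. A non-empty result blocks publication and triggers review."""
    return [c for c in cited_ids if c not in verified_ids]

# Example: a hallucinated case ID is caught before publication.
# unverified_citations(["cal-civ-1714", "fake-v-case-2021"], {"cal-civ-1714"})
# -> ["fake-v-case-2021"]
```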
The output includes a "Show my reasoning" trace that lists exactly which sources the model considered, what factors it weighted, and why it reached the conclusion it did. The trace is the actual analytical scratch pad, not a marketing summary written afterwards.
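One plausible shape for a trace entry, assuming a simple record per weighted factor; the field names are illustrative, not the site's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    factor: str                                        # e.g. "state verdict-tendency factor"
    effect: float                                      # signed adjustment applied
    sources: list[str] = field(default_factory=list)   # verified source IDs consulted
    rationale: str = ""                                # why the factor moved the result

# The rendered trace is the ordered list of TraceStep records produced
# while the answer was computed, shown verbatim rather than summarized.
```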
Models used:
- Claude Opus 4.7: primary model for prose generation, analytical reasoning, and dual-consensus verification.
- Gemini 2.5 Pro (grounded): secondary model used in dual-consensus passes and for primary-source fetching when grounded retrieval is required.
- Deterministic algorithms: all monetary calculations (case-value baseline, lawyer-fee splits, present-value discounting) are done deterministically; AI does not compute these.
The case-value methodology
The Case Value AI estimator produces a settlement range using a five-step pipeline (illustrative sketches of the deterministic steps follow this list):
- Deterministic baseline. Medical bills are multiplied by an injury-type multiplier (1.5x for soft tissue, 2.5x for fractures, 3.5x for surgery, 4.5x for permanent impairment) and then by a state-specific verdict-tendency factor (e.g., 1.2x for California, 0.85x for Mississippi).
- Comparable-case adjustment. The baseline is adjusted up or down based on the median settlement of similar PACER cases in the same jurisdiction. Comparables from the same federal district and injury category within the last 36 months are weighted most heavily.
- Fault-percentage reduction. The estimated range is reduced based on the user-provided fault allocation and the state's comparative-fault rule (pure, modified 50%, modified 51%, or pure contributory). For pure-contributory states, even a small fault allocation eliminates recovery entirely.
- Future-care addition. If permanent impairment is indicated, future-care costs are added using BLS earnings projections and standard life-expectancy tables. The future-care amount is discounted to present value using a 3-5% discount rate.
- Confidence dot. The output includes a green/yellow/red confidence indicator based on how tightly the comparable cases cluster around the estimated value and how complete the user's input data is.
The reasoning trace shows the user exactly which factors moved the estimate up or down, by how much, and which cases or statutory provisions were cited for each adjustment.
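Step 1 reduces to a pair of table lookups and a multiplication. A minimal sketch using only the multipliers quoted above; the full state-factor table is not published here.

```python
INJURY_MULTIPLIER = {
    "soft_tissue": 1.5,
    "fracture": 2.5,
    "surgery": 3.5,
    "permanent_impairment": 4.5,
}
# Only the two state factors quoted above; the rest are omitted.
STATE_FACTOR = {"CA": 1.2, "MS": 0.85}

def deterministic_baseline(medical_bills: float, injury: str, state: str) -> float:
    return medical_bills * INJURY_MULTIPLIER[injury] * STATE_FACTOR[state]

# e.g. $40,000 in bills, fracture, California:
# 40_000 * 2.5 * 1.2 = $120,000 baseline
```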
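Step 2 pulls the baseline toward the comparable-case median. The blend weight below is an illustrative assumption; the description above does not specify how strongly the median pulls.

```python
from statistics import median

def adjust_to_comparables(baseline_value: float,
                          comparable_settlements: list[float],
                          pull: float = 0.5) -> float:
    """Blend the deterministic baseline toward the median comparable
    settlement. pull=0.5 is a hypothetical 50/50 blend."""
    if not comparable_settlements:
        return baseline_value
    return (1 - pull) * baseline_value + pull * median(comparable_settlements)
```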
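Step 3 is a direct encoding of the four comparative-fault regimes named above; the bar thresholds follow the standard definitions of the 50%- and 51%-bar rules.

```python
def apply_fault(amount: float, plaintiff_fault_pct: float, rule: str) -> float:
    """Reduce recovery per the state's comparative-fault rule."""
    if rule == "pure_contributory":
        return 0.0 if plaintiff_fault_pct > 0 else amount  # any fault bars recovery
    if rule == "modified_50" and plaintiff_fault_pct >= 50:
        return 0.0  # barred at 50% fault or more
    if rule == "modified_51" and plaintiff_fault_pct >= 51:
        return 0.0  # barred at 51% fault or more
    return amount * (1 - plaintiff_fault_pct / 100)  # pure comparative, or below the bar
```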
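Step 4's discounting is standard present-value arithmetic. A sketch assuming level annual care costs; the real pipeline presumably varies costs by year using the BLS and life-expectancy inputs.

```python
def present_value(annual_care_cost: float, years: int, rate: float = 0.03) -> float:
    """Discount a level stream of future annual care costs to present value.

    The methodology states a 3-5% discount rate; 3% is used here.
    """
    return sum(annual_care_cost / (1 + rate) ** t for t in range(1, years + 1))
```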
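Step 5 maps two signals to a color. The coefficient-of-variation measure of clustering and every threshold below are illustrative assumptions, not the production values.

```python
from statistics import mean, stdev

def confidence_dot(comparables: list[float], input_completeness: float) -> str:
    """Green/yellow/red based on comparable clustering and input completeness."""
    if len(comparables) < 3 or input_completeness < 0.5:
        return "red"
    spread = stdev(comparables) / mean(comparables)  # coefficient of variation
    if spread < 0.25 and input_completeness >= 0.8:
        return "green"
    return "yellow" if spread < 0.5 else "red"
```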
Sources of comparable-case data
Comparable-case data comes from three sources, in order of preference:
- CourtListener PACER archive (Free Law Project): federal district court dockets with publicly filed settlement amounts.
- State court verdict reports: available where state courts publish verdict summaries (most do not, but some bar associations do).
- Voluntarily submitted settlements: anonymized, opt-in submissions from plaintiffs and their counsel through the Settlement Tracker tool.
We do not use defense-side analytics products (which require subscriptions and exclude plaintiffs from the dataset), and we do not use "average settlement" figures from law-firm marketing websites (which are notoriously inflated for marketing purposes).
Corrections and disputes
If you believe any factual claim on this site is incorrect, please use the Corrections page. We publish a public corrections log so readers can see what we got wrong, when we got it wrong, and how we fixed it. Corrections of consequential legal content are published within 24 hours of verification.
Limitations
The methodology described here is the best we can do as an informational publisher. It is not a substitute for an attorney's judgment on a specific case. A licensed attorney with direct access to the facts, the documents, and the parties will always understand a given case better than any general-purpose informational tool. Use this site as a starting point, not as an endpoint.