Case Study
TrancheLab Extracts 14 Debt Tranches from Hertz's 297-Page Disclosure Statement in Under 10 Minutes

Highlights
297-page disclosure statement with six amended plan versions filed over two months
Hard pre-filter reduced relevant content to ~40 pages before any LLM processing
14 distinct debt tranches extracted including DIP facilities, first lien, second lien, unsecured notes, and lease obligations
Fuzzy deduplication caught the same tranche renamed across plan amendments
Confidence scores flagged two values where the disclosure statement cited ranges rather than fixed amounts
Full extraction completed in under 10 minutes vs. an estimated 4 to 6 hours of manual analyst work
Challenges
The Hertz disclosure statement is a stress test for any extraction tool. Over the course of the case, the debtors filed six iterations of the plan, each amending tranche definitions, recovery estimates, and creditor class treatments. The final solicitation version ran 297 pages, with relevant capital structure information scattered across the introduction, the classification of claims section, the recovery analysis, and multiple exhibits.
An analyst manually pulling this data would need to cross-reference tranche names across amendments, reconcile figures that shifted between versions, and flag cases where the document gave ranges rather than precise amounts. This typically takes 4 to 6 hours for a senior restructuring analyst.
Solution
TrancheLab's hard pre-filter scanned all 297 pages and identified ~40 pages containing capital structure data, classification tables, and recovery estimates. The remaining 257 pages of background, risk factors, and legal boilerplate were excluded before any LLM call, keeping costs low and latency under control.
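The pre-filter is deterministic, so it can run before any model is invoked. A minimal sketch of the idea is below; the keyword list and the hit threshold are illustrative assumptions, not TrancheLab's actual rules.

```python
import re

# Illustrative capital-structure signal terms; the real rule set is not public.
SIGNAL_TERMS = [
    r"\bfirst lien\b", r"\bsecond lien\b", r"\bunsecured notes?\b",
    r"\bDIP\b", r"\bterm loan\b", r"\bmaturity\b", r"\brecovery\b",
    r"\bprincipal amount\b",
]
PATTERN = re.compile("|".join(SIGNAL_TERMS), re.IGNORECASE)

def relevant_pages(pages, min_hits=3):
    """Keep only page numbers whose text contains enough signal terms."""
    keep = []
    for num, text in enumerate(pages, start=1):
        if len(PATTERN.findall(text)) >= min_hits:
            keep.append(num)
    return keep

pages = [
    "Risk factors relating to the travel industry and the pandemic...",
    "Class 5 first lien term loan claims: principal amount outstanding, "
    "maturity as defined in the Plan, estimated recovery of 100%.",
]
print(relevant_pages(pages))  # → [2]
```

Because the filter is plain string matching, it costs effectively nothing per page, which is what makes dropping 257 of 297 pages before the first LLM call practical.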
The extraction pipeline identified 14 distinct debt tranches: the DIP credit facility, first lien term loans, first lien notes, second lien notes, unsecured notes, fleet-level ABS facilities, and several lease obligation classes. For each tranche, TrancheLab extracted face amounts, outstanding balances, interest rates, maturity dates, and seniority rankings.
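For illustration, the per-tranche output might be shaped like the record below. The field names and the placeholder values are hypothetical, not TrancheLab's actual schema and not figures from the Hertz filing.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tranche:
    # Field names are illustrative; the real extraction schema is not public.
    name: str
    face_amount: Optional[float]          # USD
    outstanding_balance: Optional[float]  # USD, None if not stated
    interest_rate: Optional[str]          # e.g. a spread or fixed coupon
    maturity: Optional[str]               # date or plan-defined term
    seniority: int                        # 1 = most senior
    confidence: float = 1.0               # reduced when the source hedges

# Placeholder values only, not data from the disclosure statement.
dip = Tranche(
    name="DIP Credit Facility",
    face_amount=None,
    outstanding_balance=None,
    interest_rate=None,
    maturity=None,
    seniority=1,
)
```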
The deduplication engine was critical here. Across six plan amendments, the same tranche appeared under slightly different names. TrancheLab's Levenshtein fuzzy matching grouped these correctly rather than creating duplicate entries.
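The grouping step can be sketched with a plain dynamic-programming Levenshtein distance and a greedy pass over the names. The distance threshold and the example names are illustrative, not TrancheLab's actual parameters.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def dedupe(names, max_dist=5):
    """Greedily attach each name to the first group within max_dist edits."""
    groups = []
    for name in names:
        for group in groups:
            if levenshtein(name.lower(), group[0].lower()) <= max_dist:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

names = [
    "First Lien Term Loan",
    "First-Lien Term Loans",   # same tranche, renamed in a later amendment
    "Second Lien Notes",
]
print(dedupe(names))
# → [['First Lien Term Loan', 'First-Lien Term Loans'], ['Second Lien Notes']]
```

A fixed edit-distance threshold is a judgment call: too low and renamed tranches split into duplicates, too high and genuinely distinct classes merge, which is why the threshold would need tuning against real plan amendments.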
Two values were flagged with reduced confidence scores: a recovery estimate given as a range, and an outstanding balance that referenced a figure as of the Petition Date without stating an exact amount nearby. Both flags would have been easy to miss in a manual review.
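One way such flags can arise is a simple textual check on the passage a value was extracted from. The regex and the 0.6 score below are illustrative assumptions, not TrancheLab's actual scoring logic.

```python
import re

# Patterns that suggest an ambiguous figure: dollar ranges and hedging phrases.
RANGE_RE = re.compile(
    r"(?:\$[\d.,]+\s*(?:million|billion)?\s*(?:to|-)\s*\$[\d.,]+"
    r"|(?:approximately|in the range of))",
    re.IGNORECASE,
)

def score_value(source_text: str) -> float:
    """Return a lower confidence when the source text hedges the figure."""
    return 0.6 if RANGE_RE.search(source_text) else 1.0

print(score_value("Estimated recovery of $400 to $500 million"))  # → 0.6
print(score_value("Principal amount of $1,100,000,000"))          # → 1.0
```

The point of the score is triage: a human reviews only the handful of flagged values instead of re-checking all 14 tranches.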
Key Benefits
297 pages to 40 in seconds
The deterministic pre-filter eliminated 87% of the filing before any LLM processing, keeping extraction fast and cost-effective.
Six amendments, one clean table
Fuzzy deduplication resolved tranche naming inconsistencies across plan versions, producing a single consolidated capital structure view.
Confidence scores caught what humans skip
Two ambiguous values were flagged automatically. A wrong number presented as certain is more dangerous than a gap.