Module 1

Document Intelligence

Document Intelligence · RAG-backed

Insights from your data room

Every uploaded document is chunked, embedded, and queried against Kaptrix diligence prompts. Insights are auto-synced into the intake model and this client’s knowledge base — remove any that shouldn’t inform downstream reasoning.
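The chunk → embed → query loop described above can be sketched minimally. Everything here is illustrative rather than the Kaptrix implementation: the fixed-size chunker, the hash-based toy embedding, and the function names are all assumptions standing in for a real splitter, a real embedding model, and a vector store.

```python
# Minimal sketch of a chunk -> embed -> rank loop (illustrative only).
import hashlib
import math

def chunk(text: str, size: int = 200) -> list[str]:
    # Fixed-size character chunks; production splitters usually cut on
    # semantic boundaries with overlap.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str, dims: int = 8) -> list[float]:
    # Deterministic toy embedding derived from a hash digest,
    # normalized to unit length -- a stand-in for a real model.
    digest = hashlib.sha256(text.encode()).digest()
    vec = [b / 255 for b in digest[:dims]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Dot product suffices because embed() returns unit vectors.
    return sum(x * y for x, y in zip(a, b))

def top_chunks(document: str, prompt: str, k: int = 2) -> list[str]:
    # Rank a document's chunks against a diligence prompt.
    q = embed(prompt)
    ranked = sorted(chunk(document), key=lambda c: cosine(embed(c), q),
                    reverse=True)
    return ranked[:k]
```

In a real deployment the ranked chunks would be passed to an LLM with the diligence prompt to produce the insight cards below.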

5 documents indexed · 6 insights surfaced
Commercial
high confidence

Deck claims production multi-model failover, but no supporting architecture diagram describes a provider abstraction layer. Elevate as a priority validation item.

Our proprietary multi-model routing layer automatically fails over between Claude, GPT-4, and Llama-based endpoints with sub-second switchover.
LexiFlow_Investor_Deck.pdf · p.12
Auto-synced to intake: Red flag priors
Synced ✓
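As a validation target for the flagged claim, a provider abstraction with ordered failover would look roughly like the sketch below. The provider order comes from the quoted claim; the class name, error handling, and shape of the API are assumptions, not the company's code.

```python
# Sketch of a provider-abstraction layer with ordered failover.
# If the deck's claim holds, diligence should find something with
# this shape in the codebase; its absence is what the insight flags.
from typing import Callable

class ProviderError(Exception):
    pass

class FailoverRouter:
    def __init__(self, providers: list[tuple[str, Callable[[str], str]]]):
        # Ordered (name, call) pairs, e.g. Claude -> GPT-4 -> Llama.
        self.providers = providers

    def complete(self, prompt: str) -> tuple[str, str]:
        # Try each provider in order; return the first success
        # along with which provider served it.
        errors = []
        for name, call in self.providers:
            try:
                return name, call(prompt)
            except ProviderError as exc:
                errors.append(f"{name}: {exc}")
        raise ProviderError("all providers failed: " + "; ".join(errors))
```

Note that "sub-second switchover" additionally requires per-provider timeouts, which this sketch omits; that is exactly the kind of detail the missing architecture diagram should document.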
Technical
high confidence

Shared embedding infrastructure with logical-only tenant isolation is a material risk for privileged legal content. Likely to drive sensitivity-dimension score downward.

Customer workspaces share a single Pinecone index with per-tenant namespaces; queries are filtered by tenant_id at retrieval time.
Architecture_Overview.docx · §4.1
Auto-synced to intake: Primary AI architecture
Synced ✓
Regulatory
high confidence

Only Type I today — enterprise legal buyers will require Type II before deep pipeline conversion. Factor into value-creation roadmap.

Company obtained Type I SOC 2 in Q4 2025; Type II observation window closes Q3 2026.
SOC2_Summary.pdf · §3
Auto-synced to intake: Regulatory exposure
Synced ✓
Operational
high confidence

Three of six vendors carry AI supply-chain exposure (Anthropic, OpenAI, Pinecone). Concentration risk on Anthropic flagged for scenario modeling.

Anthropic (primary LLM), OpenAI (embeddings + fallback), Pinecone (vector), AWS (compute), Auth0 (identity), Datadog (observability).
Vendor_Dependencies.xlsx · Sheet1
Auto-synced to intake: Known vendor or model dependencies
Synced ✓
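A first pass at the flagged scenario modeling can be a simple dependency check over the vendor list in the excerpt. Vendor roles are taken from that excerpt; the AI-exposure and fallback labels are assumptions for illustration.

```python
# Toy scenario check over the quoted vendor list: which AI supply-chain
# dependencies have an in-list fallback? Roles come from the excerpt;
# the "ai" and "fallback" labels are assumed.
vendors = {
    "Anthropic": {"ai": True,  "fallback": "OpenAI"},  # primary LLM
    "OpenAI":    {"ai": True,  "fallback": None},      # embeddings + fallback
    "Pinecone":  {"ai": True,  "fallback": None},      # vector store
    "AWS":       {"ai": False, "fallback": None},      # compute
    "Auth0":     {"ai": False, "fallback": None},      # identity
    "Datadog":   {"ai": False, "fallback": None},      # observability
}

ai_vendors = [v for v, meta in vendors.items() if meta["ai"]]
single_points = [v for v in ai_vendors if vendors[v]["fallback"] is None]
```

Under these assumptions the check reproduces the three-of-six exposure figure and shows why an Anthropic outage is survivable only as long as the OpenAI fallback path is actually exercised in production.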
Financial
medium confidence

Margins reasonable for AI-native SaaS but sensitive to foundation-model pricing. Cost-per-inference visibility needed to underwrite margin durability.

ARR of $14.2M at 128% NDR, burn multiple 1.4x, gross margin 71%.
LexiFlow_Investor_Deck.pdf · p.21
Auto-synced to intake: Diligence priorities
Synced ✓
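The margin-durability concern reduces to simple arithmetic once cost-per-inference is visible. ARR and gross margin below come from the deck excerpt; the inference share of COGS and the size of the price shock are assumptions chosen only to illustrate the sensitivity.

```python
# Gross-margin sensitivity to foundation-model pricing (illustrative).
# ARR and gross margin are from the deck; inference_share and
# price_increase are assumed scenario inputs.
arr = 14.2e6
gross_margin = 0.71
cogs = arr * (1 - gross_margin)

inference_share = 0.5   # assumed: half of COGS is model inference
price_increase = 0.3    # assumed: 30% foundation-model price increase

new_cogs = (cogs * (1 - inference_share)
            + cogs * inference_share * (1 + price_increase))
new_margin = 1 - new_cogs / arr   # ~66.7%, a ~4-point compression
```

The point of the exercise is that the answer swings on `inference_share`, which is exactly the cost-per-inference visibility the insight calls for.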
Commercial
high confidence

Enterprise logo set appears genuine. Customer concentration is low enough to support reference calls with a diversified sample.

18 paying enterprise logos including three AmLaw 100 firms and two in-house legal teams at Fortune 500 companies.
LexiFlow_Investor_Deck.pdf · p.19