
- Wrote every line of VCI from first principles.
- Papers: The Artificial Brain · StreamRAG v2 (IJTRP, Feb 2026).
- Ex-Incredible (AI platform, 100+ integrations) · Ex-GoQuant Miami.
- LNMIIT Jaipur · Class of 2027.
The investor briefing is designed at 16:9. Open on a laptop or desktop to read it properly.
No persistent storage of context across sessions or interactions.
Every conversation starts from zero. No customer history, no case context, no compounding institutional knowledge.
External retrieval is not internal recall. Bolt-on memory cannot reason across what it just retrieved.
No internal confidence signal at the point of inference.
Hallucinates with full confidence. Cannot flag its own uncertainty. Cannot say “I don’t know.”
Token probabilities are not calibrated truth signals. Regulated domains cannot underwrite proxy uncertainty.
No mechanism to update knowledge without full retraining.
One new fact requires a full fine-tune cycle. Each update costs millions and risks catastrophic forgetting.
Retrieval and fine-tuning are both external mitigations layered on a frozen substrate. Neither produces per-fact, in-place updates.
These are architectural constraints, not accuracy ceilings. External mitigation through retrieval, fine-tuning, or extended context cannot substitute for the missing primitives. Industries that require deterministic memory, calibrated uncertainty, and continuous learning remain structurally underserved by the current generation of models.
Transformer architecture released as an open paper with no defensive IP. Every LLM since is a refinement of the same substrate.
ChatGPT ships in November 2022. Months earlier, LeCun publishes the JEPA position paper. Capability and architectural critique reach the field in the same year.
AMI Labs (LeCun) raises ~₹8,600 Cr seed at ~₹38,000 Cr post-money to build JEPA world models. Bezos Expeditions, Nvidia, Samsung back the architectural thesis.
Cortical column network shipped. Seven pillars implemented end to end. The architecture-layer race is on. JEPA one approach, cortical columns the other.
The next moat is at the architecture layer, not the parameter layer. AMI’s raise priced the category. Verace is the differentiated alternative: instead of world-model latent prediction, a modular cortical column network with persistent memory, calibrated uncertainty, and local learning. Two architectures, one window.
One-shot encoding into writable structures. Recall across sessions. Memory is data, not weights. Add, retrieve, or revise without retraining.
Inference-time uncertainty signal at every column. Deliberation triggered when confidence falls below threshold. The system reports what it does not know.
Each column corrects from the input it just saw. No global gradient graph. Update cost is constant in model size. Learning is per-fact, not per-checkpoint.
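The three pillars above can be illustrated with a toy sketch. Nothing here is the actual VCI implementation — the class shape, the confidence threshold, and the update rule are illustrative assumptions only; the point is that memory is data, confidence is an explicit signal at inference time, and a correction touches one column's state rather than a global gradient graph:

```python
CONFIDENCE_THRESHOLD = 0.8  # illustrative value, not from VCI


class Column:
    """Toy cortical column: writable memory, an explicit confidence
    signal, and a per-column local update. Purely illustrative."""

    def __init__(self):
        self.memory = {}  # memory as data, not weights

    def write(self, key, fact):
        # One-shot encoding: a single write, no retraining cycle.
        self.memory[key] = fact

    def recall(self, key):
        # Recall returns the fact plus a confidence signal.
        if key in self.memory:
            return self.memory[key], 1.0
        return None, 0.0

    def answer(self, key):
        fact, confidence = self.recall(key)
        if confidence < CONFIDENCE_THRESHOLD:
            return "I don't know"  # flags its own uncertainty
        return fact

    def correct(self, key, fact):
        # Local learning: the update touches only this column's
        # memory, so its cost is constant in model size.
        self.memory[key] = fact


col = Column()
col.write("capital_of_france", "Paris")
print(col.answer("capital_of_france"))  # Paris
print(col.answer("capital_of_mars"))    # I don't know
```

In a real system the confidence signal would be learned and calibrated rather than binary, and a sub-threshold answer would trigger deliberation instead of a refusal; the sketch only shows where those hooks live.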
Refines next-token prediction at scale. No persistent memory, no confidence signal, no per-fact learning.
Predicts in embedding space rather than token space. Same training-then-freeze paradigm. AMI Labs raised ₹8,600 Cr in 2026 to build it.
Specialized columns with persistent memory, calibrated uncertainty, and per-column local learning. Biological primitives.
The post-transformer era has two viable primitives. JEPA pursues latent prediction; VCI pursues biological cortical mechanisms. Both are architecturally defensible and uncorrelated. The market priced one of them at ₹8,600 Cr. Verace is the second.
India first by design. Deterministic memory for Indian legal precedent and case law, calibrated uncertainty for Indian clinical reasoning. ~₹1,00,000 Cr combined Indian TAM by 2030. Global expansion follows India validation.
Platform anchored by vertical ownership. Same substrate underwrites every product, so each new vertical ships at marginal incremental cost. Pricing models span per-token API, per-seat SaaS, and annual enterprise licensing — the revenue stack on the Business Model slide derives from this portfolio.
Substrate built in ~120 days from company founding. ARR is not the right traction metric for a deep-tech architecture company at pre-seed; IP filings, architectural completeness, and training progress are. The ARR conversation begins at API launch in 2027 H1.
Sequenced, not scattered. API first in 2027 H1 to validate the substrate; SaaS verticals layer on through 2028; enterprise + custom anchor 2028 H2 onward. Five streams across the full ACV spectrum — same VCI substrate underwrites every one of them, so incremental margin cost per new stream is near zero.
The moat is at the substrate layer, not the weights layer. A competitor cannot reach VCI by training a larger transformer or by scaling JEPA — they would have to rebuild a different cortical substrate from scratch, and those architectural primitives are what our IP filings cover.
Each motion validates the next. API proves the substrate. Legal design partners prove the workflow. Enterprise contracts prove the ACV. The same VCI substrate underwrites all three; only the sales motion changes per channel.



Three people. One substrate. Founder wrote every line of VCI from first principles. Hiring plan + ESOP pool discussed in person.
₹100 Cr post-money valuation · clean cap table · founder team retains 95%