# DRAAN.ai

> **You own your AI data.** DRAAN (Dee-Ran) is the only AI architecture where your data stays yours — and the model just provides the language.

Built by CmdShift, LLC. Founder: Darin Manley. Contact: darin.j.manley@gmail.com

## What DRAAN actually is (one paragraph)

DRAAN is data-sovereignty infrastructure for AI. You connect your knowledge — documents, transcripts, codebases, internal wikis, anything — and DRAAN turns it into a queryable, sourceable, sellable corpus that **never leaves your control**. A 10ms statistical retrieval engine pulls the relevant passages, an optional on-device language model writes fluent prose from them via WebGPU, and every sentence in the answer traces back to the source document it came from. No cloud LLM ever sees your data. No tokens. No GPU. The model is generic; the value lives in your corpus, and the corpus is yours.

## Why this matters to a buyer (not an engineer)

- **Data sovereignty** — your knowledge never enters a cloud LLM, never trains a model you don't own, never leaks to a competitor through a foundation-model vendor. Compliance officers, CFOs, and general counsels read this first.
- **Compute arbitrage** — cloud LLM tokens cost $2–$75 per million. DRAAN runs on a $50/mo VPS, a Raspberry Pi, or directly in the user's browser. Same architecture, two substrates.
- **Your data is currency** — corpus owners get paid when their indexed knowledge answers someone else's question through the Brain Marketplace. Your data isn't a cost center; it's a revenue line.
- **Verifiable answers** — every sentence links to its source passage. No hallucinations, no "trust me," no compliance-review hell. What you see is what was written.

## Three operating modes

- **Extractive** — every word pulled verbatim from your source documents. Zero generation, fully sourced, maximum trust. Use this for legal, medical, regulatory, or any context where invention is unacceptable.
- **Grounded** — any LLM (local or hosted) writes fluent prose from DRAAN's retrieved facts. Per-sentence verification: every claim marked sourced or unsourced. The LLM is the voice; DRAAN is the brain.
- **Compare** — a side-by-side amber/blue verification overlay shows the LLM's draft next to the sourced ground truth, sentence by sentence. The reviewer sees, at a glance, what the model actually has evidence for and what it invented.

## Brain Marketplace

Indexed corpora — "Brains" — are served from `brain.draan.ai`. Each Brain covers a domain (medicine, law, finance, military doctrine, a codebase, a company's internal wiki). Buyers download once and run locally forever; sellers get paid every time their corpus answers a query. Your data is worth currency. We built the rails.

## Architecture — five layers, zero models

- **L4 — Response composition (extractive):** multi-signal sentence scoring, deduplication, coherent assembly from top-ranked passages.
- **L3 — Cross-attention (math):** `scores = K @ Q / (sqrt(d) × temperature)`, then `weights = softmax(scores)`. No parameters. Pure linear algebra.
- **L2 — TF-IDF + BM25 index (statistics):** per-node vector stores, L2-normalized document vectors, cosine-similarity retrieval.
- **L1 — Text processing:** overlapping word-count chunking, tokenization, n-gram extraction. No model — just string operations.
- **L0 — Browser fleet (ingest):** distributed headless browser instances, each specializing in a topic cluster, building a sharded knowledge base.

## Underlying tech

- **TurboQuant** (Google Research, ICLR 2026, arXiv:2504.19874) — 6× KV-cache compression with 0% accuracy loss and an 8× attention speedup. Two-stage: a PolarQuant random orthogonal rotation plus a 1-bit Quantized Johnson-Lindenstrauss correction. Mathematically identical to FP16.
- **Two deployment substrates** — server mode (any Linux box, no GPU, air-gap capable) or browser mode (WebGPU, fully client-side; the index downloads once and runs locally forever).
- **RLM** — DRAAN's proprietary Retrieval Language Model inference layer.

## Cost comparison (illustrative)

| | Cloud LLMs | DRAAN |
|---|---|---|
| Per 1M tokens | $2–$75 | $0 |
| 1M queries / month | $1,000–$37,500 | $50–$100 server cost |
| GPUs required | yes | none |
| Your data sent off-box | yes | never |

## Routes

- `/` — cinematic landing page: "You own your AI data," three modes, marketplace, enterprise-pilot CTA
- `/what-is-draan` — long-form trust-layer narrative: the hallucination problem, the three-layer trust stack
- `/chat` — live chat interface (FastAPI at `think.draan.ai`)
- `/paper` — password-gated white paper
- `/privacy`, `/terms`, `/sms-consent` — legal

## Stack

React 19 + TypeScript + Vite, Wouter routing, TanStack React Query, Express 5 + Drizzle ORM + PostgreSQL backend, Zod v4 validation, Orval OpenAPI codegen, Mailgun double-opt-in waitlist, pnpm workspace monorepo on Node 24.

## Company

- **Product:** DRAAN.ai
- **Company:** CmdShift, LLC (© 2026)
- **Founder:** Darin Manley
- **Email:** darin.j.manley@gmail.com
- **Phone:** 206-227-9124
- **LinkedIn:** https://www.linkedin.com/in/djmanley85/
- **Legal:** legal@draan.ai · privacy@draan.ai

## Full source

Annotated source code: `/llms-full.txt`
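For the curious: the L3 cross-attention formula described in the architecture section (`scores = K @ Q / (sqrt(d) × temperature)`, `weights = softmax(scores)`) can be sketched in a few lines of TypeScript. This is an illustrative sketch only — the function names and the shapes of `K` and `q` are our assumptions, not DRAAN's actual code.

```typescript
// Numerically stable softmax: subtract the max before exponentiating.
function softmax(scores: number[]): number[] {
  const m = Math.max(...scores);
  const exps = scores.map((s) => Math.exp(s - m));
  const z = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / z);
}

// Parameter-free cross-attention over passage vectors.
// K: one row per indexed passage; q: the query vector (same dimension d).
// scores = K @ q / (sqrt(d) * temperature); weights = softmax(scores).
function crossAttentionWeights(
  K: number[][],
  q: number[],
  temperature = 1,
): number[] {
  const d = q.length;
  const scores = K.map((row) =>
    row.reduce((acc, k, i) => acc + k * q[i], 0) / (Math.sqrt(d) * temperature),
  );
  return softmax(scores);
}
```

The output is a probability distribution over passages: weights sum to 1, and passages whose vectors align with the query receive the largest share. No learned parameters anywhere — exactly the "pure linear algebra" claim.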
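The L2 retrieval layer (TF-IDF vectors, L2-normalized, ranked by cosine similarity) can likewise be sketched. Again an assumption-laden illustration: DRAAN also layers BM25 and per-node sharding, both omitted here, and every identifier below is ours.

```typescript
// Build L2-normalized TF-IDF vectors for a set of tokenized documents.
function tfidfVectors(docs: string[][]): Map<string, number>[] {
  // Document frequency: in how many docs does each term appear?
  const df = new Map<string, number>();
  for (const doc of docs)
    for (const term of new Set(doc)) df.set(term, (df.get(term) ?? 0) + 1);

  return docs.map((doc) => {
    // Term frequency within this document.
    const tf = new Map<string, number>();
    for (const term of doc) tf.set(term, (tf.get(term) ?? 0) + 1);

    // tf * idf, with idf = log(N / df).
    const vec = new Map<string, number>();
    for (const [term, count] of tf)
      vec.set(term, count * Math.log(docs.length / (df.get(term) ?? 1)));

    // L2-normalize so that a plain dot product IS the cosine similarity.
    const norm =
      Math.sqrt([...vec.values()].reduce((a, v) => a + v * v, 0)) || 1;
    for (const [term, v] of vec) vec.set(term, v / norm);
    return vec;
  });
}

// Cosine similarity of two unit-length sparse vectors = their dot product.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0;
  for (const [term, v] of a) dot += v * (b.get(term) ?? 0);
  return dot;
}
```

Retrieval is then just "vectorize the query the same way, take the cosine against every stored vector, return the top-k" — which is why this layer needs statistics, not a model.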