Outlier is a Mac-native beta app for people who are tired of paying $200/month and still hitting usage limits. The goal is simple: cloud-style coding workflows, running locally by default, with no API token caps from Outlier. The app is early. The mission is not.
Cloud coding tools are incredible, but usage caps kill momentum. Outlier is the bet that the same style of coding platform can live on your Mac: agent plans, file edits, tests, memory, vision, research, and long runs without a meter.
Every dollar goes toward the gap between where Outlier is today and where it needs to be: stronger coding models, better agent loops, faster local inference, cleaner UX, and public build logs showing exactly how we compress, speed up, test, and ship.
Community-funded. Build-in-public. No cloud token tax.
Apple Silicon only (M1/M2/M3/M4). Intel Macs not supported.
The best AI coding tools are powerful, but they are rented, rate-limited, and cloud-hosted. That is the pain Outlier is built around. Outlier is building the local version: your Mac does the work, your code stays private, and usage does not stop because a server-side meter says so.
Long coding sessions hit limits. The tool gets good right when you need it most, then cuts off. Local inference changes the economics: once the model is downloaded, every token runs on your hardware.
Private repos, customer data, personal documents, and business plans all move through someone else’s infrastructure. Outlier is local-first: the default path is your Mac, your files, your disk.
Stop paying, lose access. Outlier’s free tier remains useful forever, Pro is priced at $20/month, and Founders get lifetime Pro while funding the product into existence.
The pricing has one job: keep local AI accessible while raising enough money to build a stronger offline coding-agent platform. Free gets Nano + Lite. Pro unlocks the full product. Founders pay once and help make it real.
For everyone who wants local private AI without a subscription.
Unlocks everything in Outlier now, and funds the next coding-agent sprint.
Lifetime Pro for the people who help fund the push toward high-quality offline agentic AI.
Outlier is not pretending it already matches the best cloud coding agents. The honest story is better: v1.8 is the base, the community funds the climb, and every major step gets shown in public — the compression work, the benchmarks, the failed runs, the speedups, the fixes.
Signed, notarized Mac app. Streaming chat, local sessions, seven model tiers, model downloads, memory, project files, web research, vision support, and a Pro gate that unlocks the full stack.
Plan → diff → approve → patch → test → repair. The goal is not just a chat model that writes code; it is a local coding platform that can work across a repo without burning cloud usage.
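The loop above can be sketched as a simple state machine. This is an illustrative outline only, not Outlier's shipped implementation; every callable here is a hypothetical stand-in for one stage.

```python
# Hypothetical sketch of a plan → diff → approve → patch → test → repair loop.
# None of these helpers are Outlier's real API; they stand in for the stages.

def agent_loop(task, propose_plan, make_diff, approve, apply_patch, run_tests,
               max_repairs=3):
    plan = propose_plan(task)          # plan: model drafts the change
    diff = make_diff(plan)             # diff: concrete file edits
    if not approve(diff):              # approve: user gate before anything touches disk
        return "rejected"
    apply_patch(diff)                  # patch: write the approved edits
    for _ in range(max_repairs):       # test → repair until green or budget spent
        ok, failures = run_tests()
        if ok:
            return "passed"
        diff = make_diff(propose_plan(failures))
        if not approve(diff):
            return "rejected"
        apply_patch(diff)
    return "out of repair budget"
```

The important design point is the approval gate: nothing is written to disk until the user has seen the diff, and every repair attempt goes back through the same gate.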
Every founder seat helps pay for the runs, evals, and infrastructure to close the quality gap. Stronger local coding, better reasoning, better long-context behavior, and honest benchmark reporting.
The build logs will show how we shrink, route, cache, page, and speed up large open models so they become practical on consumer Macs instead of locked behind H100s.
This is a community bet: enough people want unlimited local AI badly enough to fund the missing pieces together. If that is you, Founders is the cleanest way to help.
The product is organized around simple expectations: Free is useful immediately. Pro unlocks the heavier models and advanced workflows. The numbers below are the grounded v1.8 shipping story.
| Tier | Plan | Best for | Disk / RAM | Speed / note |
|---|---|---|---|---|
| Outlier Nano (Qwen3.5-4B · MLX 4-bit) | Free | Fast iteration, lightweight chat, small Macs | 2.37 GB · 6 GB min RAM | 71.7 tok/s |
| Outlier Lite (Qwen3.5-9B · MLX 4-bit) | Free | Daily local AI, writing, search, Q&A | 5.04 GB · 12 GB min RAM | 53.4 tok/s |
| Outlier Quick (Gemma-4-26B MoE) | Pro | Thinking-mode reasoning, not a code substitute | 15.61 GB · 16 GB min RAM | 14.6 tok/s |
| Outlier Core (Qwen3.6-27B · text-only) | Pro | Best default quality, reasoning, coding | 15.13 GB · 24 GB min RAM | HumanEval 0.866 · 20.7 tok/s |
| Outlier Code (Core weights + code config) | Pro | Coding workflow, lower-temp code-tuned setup | 15.13 GB · 24 GB min RAM | Same verified base as Core |
| Outlier Plus (Qwen3.5-397B-A17B · V9 paged / V11 streaming) | Pro | Frontier-adjacent local experiments on high-end Macs | 209 GB disk · 96 GB min RAM, 128 GB recommended | V9 K=20: 1.59 tok/s @ 13.75 GB · V11 K=4 N=4 LRU=8: 3.28 tok/s @ 7.34 GB |
| Outlier Vision (Qwen3.6-35B-A3B · vision retained) | Pro | Images, screenshots, OCR, multimodal reasoning | 19.0 GB · 24 GB min RAM (16 GB Air on V11 streaming) | V9 K=256: 16.31 tok/s @ 34.04 GB · V11 K=4 N=2 LRU=30 XPF=1: 15.96 tok/s, ~3.16 GB peak (multi-turn) |
Core / Code / Plus / Vision are Pro-gated in the current v1.8 framing. Code uses the same weights as Core with code-specialized configuration. Quick is useful for reasoning, not positioned as a coding tier.
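The LRU figures in the Plus and Vision rows refer to paged execution: only a bounded number of weight shards stays resident in memory, and the least-recently-used shard is evicted when a new one is needed. What follows is a minimal sketch of that general pattern only; Outlier's actual engine, its shard layout, and the meaning of its K/N/XPF parameters are not specified here, and `load_shard` is a hypothetical loader.

```python
from collections import OrderedDict

# Minimal sketch of LRU-style weight paging (not Outlier's real engine):
# keep at most `capacity` shards resident; evict the least-recently-used one.
class ShardCache:
    def __init__(self, capacity, load_shard):
        self.capacity = capacity
        self.load_shard = load_shard   # hypothetical callable: shard_id -> weights
        self.resident = OrderedDict()  # shard_id -> weights, in LRU order

    def get(self, shard_id):
        if shard_id in self.resident:
            self.resident.move_to_end(shard_id)    # mark as recently used
        else:
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)  # evict the LRU shard
            self.resident[shard_id] = self.load_shard(shard_id)
        return self.resident[shard_id]
```

The trade-off is exactly what the table shows: a smaller resident set means a lower memory peak at the cost of more reloads from disk, which is why the streaming configurations run slower but fit on much smaller machines.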
v1.8 is the foundation: local chat, session history, model picker, model downloads, project context, memory, web research, agent tools with approval, test panels, vision, and a Mac-native app shell. Some edges are rough. That is why the build is funded in public.
Streaming token output, persistent local history, rename/delete/pin, Markdown export, demo session on first run, cost display at $0.00, and local storage across restarts.
File read/write, shell execution with approval, permission modes, plan review card, repair loop, audit log, path scoping, project map, symbols, dependencies, snapshots, and tests.
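The shell-execution-with-approval and audit-log ideas above can be sketched as follows. This is illustrative only, not Outlier's API; `ask_user` is a hypothetical callback that shows the command to the user and returns a yes/no decision.

```python
import subprocess

audit_log = []  # every request is recorded, whether approved or denied

def run_with_approval(command, ask_user):
    """Sketch of an approval-gated shell tool (illustrative, not Outlier's API)."""
    approved = ask_user(command)
    audit_log.append({"command": command, "approved": approved})
    if not approved:
        return {"status": "denied"}
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=30
    )
    return {"status": "ran", "code": result.returncode, "stdout": result.stdout}
```

The key property is that denial is cheap and logged: the agent can propose anything, but nothing executes without an explicit yes, and the audit trail records both outcomes.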
Deep research mode, DuckDuckGo with Wikipedia fallback, source filters, export, follow-up, summary cards, trust badges, and inline citations with source excerpts.
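The primary-then-fallback search pattern mentioned above looks roughly like this. A hedged sketch only: the two search callables are hypothetical stand-ins, not real DuckDuckGo or Wikipedia client calls.

```python
# Sketch of primary-then-fallback research search (illustrative; the
# search functions are hypothetical stand-ins, not real client calls).
def research(query, search_primary, search_fallback):
    try:
        results = search_primary(query)
    except Exception:
        results = []
    if not results:                  # empty or failed -> fall back
        results = search_fallback(query)
    return [{"title": r["title"], "url": r["url"]} for r in results]
```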
Persistent memory in SQLite, short/medium/long-term tiers, provenance tracking, review cards, conflict detection, decay, frequency, and MEMORY.md export.
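A tiered SQLite memory store of the shape described above can be sketched like this. The schema and scoring rule are illustrative assumptions, not Outlier's actual tables; the decay/frequency formula is a deliberately crude example of the idea.

```python
import sqlite3
import time

# Sketch of a tiered memory store in SQLite (schema is illustrative, not Outlier's).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memory (
        id INTEGER PRIMARY KEY,
        tier TEXT CHECK (tier IN ('short', 'medium', 'long')),
        content TEXT NOT NULL,
        source TEXT,               -- provenance: which session produced it
        uses INTEGER DEFAULT 0,    -- frequency counter
        created REAL               -- timestamp, used for decay
    )
""")

def remember(tier, content, source):
    conn.execute(
        "INSERT INTO memory (tier, content, source, created) VALUES (?, ?, ?, ?)",
        (tier, content, source, time.time()),
    )

def recall(tier):
    # Crude illustrative scoring: boost by use count, decay by age in days.
    rows = conn.execute(
        "SELECT content, uses, created FROM memory WHERE tier = ?", (tier,)
    ).fetchall()
    now = time.time()
    return sorted(rows, key=lambda r: r[1] - (now - r[2]) / 86400, reverse=True)
```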
Image upload and direct image queries through Outlier Vision. Best for OCR, screenshots, diagrams, and multimodal Q&A — not positioned as the coding model.
macOS arm64 DMG, Apple notarization accepted, Gatekeeper accepted, GitHub Releases distribution, and a Tauri updater pointed at the latest manifest.
The cloud tools proved the workflow: coding agents, long-context research, file-aware assistants, and always-on help. The problem is the meter. Outlier is building the local version: Mac-native, private by default, and not capped by an Outlier API token allowance.
| | Cloud AI tools | Cloud coding agents | Outlier Free | Outlier Pro |
|---|---|---|---|---|
| Monthly cost | $20+ | Often much higher | $0 | $20 |
| Usage model | Server-side limits | Usage windows / caps | No Outlier token meter | No Outlier token meter |
| Where inference runs | Provider cloud | Provider cloud | Your Mac | Your Mac |
| Privacy default | Remote request | Remote repo/context | Local by default | Local by default |
| Offline use | No | No | Yes, once models download | Yes, once models download |
| Current maturity | Very mature | Very mature | Useful beta | Ambitious beta |
| Goal | Hosted assistant | Hosted coding workflow | Local baseline | Cloud-style coding workflows, local |
Important framing: Outlier is not claiming parity with the best cloud coding agents today. The beta is the foundation; Founders and Pro revenue fund the climb toward that experience locally.
The environmental case is simple: cloud inference needs datacenters, networking, cooling, and always-growing GPU clusters. Local inference uses the Apple Silicon chip you already own. Outlier is a bet that more AI work should happen at the edge.
When a model runs locally, the query does not need a round trip through a remote GPU cluster.
Outlier local models do not incur an Outlier cloud inference bill and do not count against a cloud token meter.
Apple Silicon unified memory is efficient for local inference compared with shipping every prompt to a server.
This is not a claim that every local query is automatically cleaner in every situation. Hardware, model size, electricity source, and usage pattern all matter. The point is directionally important: if a huge share of everyday AI inference can move from datacenters to efficient devices people already own, the load on cloud infrastructure can drop.
That is why compression, routing, quantization, paging, and model fit are not just engineering details. They are part of the product philosophy. A useful local model is not just cheaper for the user — it can also reduce unnecessary cloud dependence for everyday tasks.
The best environmental feature is not a green badge. It is a model that is good enough, small enough, and fast enough that people actually choose to run it locally.
Practical framing: local-first when possible, cloud only when needed. Outlier’s default path is the Mac.
The old website had a strong provenance section. It belongs here. Outlier should be ambitious, but the numbers still need to be honest: source file, command, sample size, standard error, date measured.
Benchmarks should trace back to eval artifacts or sprint logs. If a number cannot point to a file, it should not become marketing copy.
The harness, version, dtype, device, shot count, and sample size matter. Smoke tests stay smoke tests.
Small benchmark lifts are not magic. Outlier’s public story should separate strong absolute scores from statistically significant improvements.
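As a concrete instance of that discipline: a pass rate is a binomial proportion, so its standard error falls straight out of the sample size. Assuming the HumanEval 0.866 figure comes from the standard 164-problem set (an assumption for illustration, not a verified detail of Outlier's eval harness):

```python
import math

def binomial_se(p, n):
    """Standard error of a pass rate treated as a binomial proportion."""
    return math.sqrt(p * (1 - p) / n)

# At p = 0.866 over 164 problems the SE is about ±0.027, so a lift of a
# point or two on this benchmark is well inside the statistical noise.
se = binomial_se(0.866, 164)
```

This is exactly why the provenance rules above matter: a score without its sample size cannot tell you whether an improvement is real.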
The goal is cloud-style coding workflows running locally. The current beta is not yet equal to the best cloud coding agents, and the site should say that clearly.
The community funds model runs, evals, compression, speed work, agent loops, and UX hardening. The process is part of the product.
Outlier went from idea to shipped beta in about a month. More runway means more focused cycles on model quality, agents, and polish.
Current site-safe headline numbers: v1.8.58 shipped as a notarized Mac DMG; Free includes Nano + Lite; Pro unlocks the shipped heavier tiers; local generation has no Outlier API token meter.
Founders and chip-ins are not abstract support. They fund specific work: better coding models, better local speed, more reliable agents, better tests, and a public process that shows the wins and failures.
Notarized DMG, seven model tiers, local chat, sessions, downloads, memory, project context, research, vision, and Pro unlock.
● Live beta · The product goal is a local coding-agent loop that can work across real projects without burning paid cloud usage.
● Funded by Pro + Founders · Better coding performance means disciplined evals, better prompts/configs, distillation experiments, and no inflated benchmark claims.
● Community-funded · Make bigger open models practical on consumer Macs through paging, caching, quantization, routing, and product-level hardware honesty.
● Always improving · The pitch is not “trust us.” The pitch is: help fund the work, use the beta, report what breaks, and watch the process of making local AI better in public.
This is the call to action: buy Founders, subscribe to Pro, or chip in whatever you can. The money goes into making Outlier better — coding-agent quality, local model quality, speed, compression, and the public process behind all of it.
Outlier went from idea to a shipped, notarized Mac app in about a month. That included seven model tiers, local inference, agents, memory, vision, payments, packaging, and the first public build. Imagine what this can become with more runway and two more months of focused building. We will take anything you can give us. Thank you.
Lifetime Pro. Early builds. Founder recognition. A direct vote for local AI with no API token caps. Founders revenue funds model runs, evals, agent work, and polish.
Become a Founder →
Want to fund the work without buying Pro? Chip in any amount · Sponsor a benchmark run or feature
The community is not a side quest. It is how this gets built: users testing real workflows, reporting what breaks, funding the next sprint, and holding the benchmarks honest. The more people using it, the faster it gets better.
The cloud tools proved what the workflow should feel like. Now we build the version that runs locally, belongs to the user, and has no API token cap from Outlier mid-sprint.