⚡ Use case
Voice typing for vibe coding and AI workflows
Prompts to Claude, ChatGPT, Cursor, Copilot, v0, Lovable. Dump all your thoughts by voice: AuroraWhisp types into the active window, and the AI sorts them into code. No subscription, fully local, ~150 ms latency.
Why vibe coding fits voice especially well
Coding by hand forces you to formulate cleanly upfront — syntax, types, names. That is slow. Vibe coding flips it: you describe intent, the AI handles syntax. And here voice crushes the keyboard: thinking out loud is natural; typing a long prompt is not. Most people speak 120-150 words per minute and type 40-60. On a 200-word prompt that is two minutes saved. Ten prompts a day — twenty minutes. And on top of that, by voice you do not self-censor — you say everything at once, and the AI gets richer context.
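The back-of-the-envelope math above can be sketched in a few lines. A minimal sketch, assuming the upper ends of the ranges quoted (150 wpm spoken, 60 wpm typed); these are illustrative rates, not measurements:

```typescript
// Rough time saved by dictating a prompt instead of typing it.
// 150 wpm spoken and 60 wpm typed are the upper ends of the ranges above.
function minutesSaved(words: number, speakWpm: number, typeWpm: number): number {
  return words / typeWpm - words / speakWpm;
}

const perPrompt = minutesSaved(200, 150, 60); // ~2 minutes on a 200-word prompt
const perDay = 10 * perPrompt;                // ~20 minutes over ten prompts a day
```

At the lower ends of the ranges (120 wpm spoken, 40 wpm typed) the gap is even wider, so two minutes per prompt is the conservative estimate.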
Claude (Sonnet / Opus) and Claude Code
Claude.ai in a browser — the standard case: Ctrl+Space → long prompt with architecture, requirements, constraints → Enter. Claude Code in the terminal — the same, speak straight into the TUI. Claude Code has its own slash commands (`/help`, `/init`), but the **prompt itself** is always faster spoken. Works especially well on complex prompts with code examples: "here is the current component, refactor it to take a callback instead of state without breaking the tests".
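The refactor named in that example prompt ("take a callback instead of state") looks roughly like this in framework-agnostic TypeScript; the `Toggle` names and shapes here are hypothetical, invented purely to illustrate the pattern:

```typescript
// Before: the widget owns its state, so callers cannot observe changes.
class ToggleBefore {
  private on = false;
  flip(): boolean {
    this.on = !this.on;
    return this.on;
  }
}

// After: state lives with the caller; the widget reports every change
// through a callback. Observable behaviour stays the same, which is
// what keeps the existing tests green.
function makeToggle(onChange: (on: boolean) => void) {
  let on = false;
  return {
    flip(): void {
      on = !on;
      onChange(on);
    },
  };
}

// Usage: the caller decides what to do with each new value.
const seen: boolean[] = [];
const toggle = makeToggle((on) => seen.push(on));
toggle.flip(); // seen: [true]
toggle.flip(); // seen: [true, false]
```

Describing that change out loud takes one sentence; the AI fills in the mechanics.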
ChatGPT and Codex
chat.openai.com and the Codex CLI: same scenario. ChatGPT Plus already has Voice Mode, but it stumbles on technical terms and only types into its own chat (not arbitrary windows). AuroraWhisp writes anywhere: ChatGPT in the browser, a custom GPT, Playground, a terminal running openai-cli. Bonus: when ChatGPT replies, you dictate the follow-up immediately without reaching for the keyboard.
Cursor and Cursor Agent
Cursor is the main vibe-coding IDE. Composer (Ctrl+I) and Cursor Agent (Ctrl+Shift+I) both take long prompts. Voice is critical here: a typical Composer prompt is 100-200 words describing a change. Typing is slow and psychologically heavy. Voice is light. Also handy in Cursor Chat (Ctrl+L) — regular questions about the code, refactor requests, explanations.
GitHub Copilot Chat in VS Code
Copilot Chat (Ctrl+Alt+I) — the built-in VS Code chat. Takes long prompts, understands the context of open files. By voice you naturally describe what you want done in the current module. For inline edits (Ctrl+I in the active file) it works the same — speak the change you want.
Lovable, v0.dev, Bolt — UI generation
Lovable.dev, v0.dev (from Vercel), Bolt.new — UI generators driven by prompts. Voice shines here: you describe the design in words ("sidebar on the left with groups, main canvas on the right with a card grid, search bar on top with tag filters") and the AI draws it. Typing that is painful; speaking it takes 15 seconds. After generation you iterate by voice: "reduce padding, add a dark mode toggle, animate the cards on appear".
How to phrase a voice prompt
You do not have to formulate as if for documentation. Useful tricks: 1) Start with context: "okay so there is a React component Profile, it takes user from props, renders an avatar and a bio...". 2) Dump constraints: "do not use class components, TypeScript strict, tests on vitest". 3) Describe the result, not the code: "make a popover smoothly appear on click with editing enabled". 4) End with a closer: "alright, let's go" or "do it". The AI handles natural speech without trouble.
Free 10,000 words a day — how many prompts is that
A typical vibe-coder prompt is 50-150 words, so 10,000 words a day covers roughly 65-200 long prompts. For most developers that is enough with headroom. Pro is only worth it if your prompts are genuinely long (200-500 words with code examples) or you dictate non-stop. Remember: 10,000 means recognised words; the dictation history is kept locally, see the stats in the app.
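The budget arithmetic, as a quick sketch (the 50-150-word prompt length is the estimate from the text, not a measured figure):

```typescript
// How many prompts fit in a daily word budget, for a given prompt length.
function promptsPerDay(budget: number, wordsPerPrompt: number): number {
  return Math.floor(budget / wordsPerPrompt);
}

const longPrompts = promptsPerDay(10_000, 150); // 66 long prompts a day
const shortPrompts = promptsPerDay(10_000, 50); // 200 short ones
```

Even dictating only long, detailed prompts, the free tier covers more than a prompt every ten minutes of an eight-hour day.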
For vibe coders who describe architecture by voice and iterate fast through an AI.