⚡ Use case

Voice typing for vibe coding and AI workflows

Prompts to Claude, ChatGPT, Cursor, Copilot, v0, Lovable. Dump all your thoughts by voice — AuroraWhisp types into the active window, the AI sorts them into code. No subscription, local, ~150 ms.

Sound familiar?

  • Good prompts are long. Typing a long prompt is tiring, so you write a short one and get the wrong thing back
  • The idea is whole in your head but your hands cannot keep up — half the details get lost
  • Paid voice extensions for Cursor / VS Code want $15/mo and ship your prompts to the cloud
  • Win+H stumbles on technical vocabulary ("pull request", "refactor", "middleware") — you keep correcting by hand

What changes with AuroraWhisp

Voice → prompt → AI sorts it out

You do not need to formulate cleanly. Dump every thought as it comes: "okay so when the user clicks the button a modal pops up, and the animation should be smooth, and add email validation". AuroraWhisp types verbatim, the AI assembles the actual code. Idea → ready prompt: 10 seconds instead of 2 minutes.

Any window is fair game

Ctrl+Space → speak → the text appears in the active window. Claude.ai in a browser, ChatGPT chat, the prompt bar in Cursor, Copilot Chat in VS Code, Lovable, v0.dev, Bolt: all the same. AuroraWhisp does not care what it is typing into; it just emulates the keyboard.
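
How literal "emulates the keyboard" is: here is a minimal sketch in Python with the pynput library (an illustration only; AuroraWhisp's actual implementation is not public, and pynput stands in for whatever OS-level injection API it uses). Synthetic key events land in whichever window has focus, and that is the entire trick:

```python
# Sketch of keystroke emulation into the focused window.
# Illustrative only: pynput stands in here for whatever OS-level
# keyboard-injection API AuroraWhisp actually uses.
from pynput.keyboard import Controller

keyboard = Controller()

def type_into_active_window(text: str) -> None:
    # The OS delivers synthetic key events to the focused window,
    # so this works identically in a browser, IDE, or terminal.
    keyboard.type(text)

type_into_active_window("refactor Profile to take a callback instead of local state")
```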

Prompts do not leave for the cloud

Recognition runs locally on your machine. Your architecture, your API keys, your internal names: none of it leaves the computer. Only the finished text is sent, and only by you, to whichever AI you choose.
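
What "locally" can look like under the hood: a hedged sketch using the open-source openai-whisper package (an assumption for illustration; which engine AuroraWhisp actually ships is not stated here). The model weights sit on disk, and transcription involves no network call:

```python
# Fully local speech-to-text with the open-source `openai-whisper`
# package (pip install openai-whisper; needs ffmpeg on the PATH).
# Assumption for illustration: AuroraWhisp's actual engine may differ.
import whisper

model = whisper.load_model("base")       # weights cached on disk
result = model.transcribe("prompt.wav")  # no network round trip
print(result["text"])                    # the text that gets typed
```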

$19.90 once instead of $15/mo

A voice extension for Cursor: $15/mo. Wispr Flow for everything: $15/mo. AuroraWhisp Pro: $19.90 once, lifetime. At $15/mo saved it pays for itself in under two months ($19.90 / $15 ≈ 1.3). The free tier gives 10,000 words per day, enough for ~50 long prompts daily.

Why vibe coding fits voice especially well

Coding by hand forces you to formulate cleanly upfront: syntax, types, names. That is slow. Vibe coding flips it: you describe intent, the AI handles syntax. And here voice crushes the keyboard: thinking out loud is natural; typing a long prompt is not. Most people speak 120-150 words per minute and type 40-60. Typing a 200-word prompt at 50 wpm takes four minutes; speaking it takes about a minute and a half, so call it two minutes saved per prompt. Ten prompts a day is twenty minutes. On top of that, by voice you do not self-censor: you say everything at once, and the AI gets richer context.

Claude (Sonnet / Opus) and Claude Code

Claude.ai in a browser — the standard case: Ctrl+Space → long prompt with architecture, requirements, constraints → Enter. Claude Code in the terminal — the same, speak straight into the TUI. Claude Code has its own slash commands (`/help`, `/init`), but the **prompt itself** is always faster spoken. Works especially well on complex prompts with code examples: "here is the current component, refactor it to take a callback instead of state without breaking the tests".

ChatGPT and Codex

chat.openai.com and the Codex CLI: same scenario. ChatGPT Plus already has Voice Mode, but it stumbles on technical terms and only types into its own chat (not arbitrary windows). AuroraWhisp writes anywhere: ChatGPT in the browser, a custom GPT, Playground, a terminal running openai-cli. Bonus: when ChatGPT replies, you dictate the follow-up immediately, without reaching for the keyboard.

Cursor and Cursor Agent

Cursor is the main vibe-coding IDE. Composer (Ctrl+I) and Cursor Agent (Ctrl+Shift+I) both take long prompts. Voice is critical here: a typical Composer prompt is 100-200 words describing a change. Typing is slow and psychologically heavy. Voice is light. Also handy in Cursor Chat (Ctrl+L) — regular questions about the code, refactor requests, explanations.

GitHub Copilot Chat in VS Code

Copilot Chat (Ctrl+Alt+I) — the built-in VS Code chat. Takes long prompts, understands the context of open files. By voice you naturally describe what you want done in the current module. For inline edits (Ctrl+I in the active file) it works the same — speak the change you want.

Lovable, v0.dev, Bolt — UI generation

Lovable.dev, v0.dev (from Vercel), and Bolt.new are prompt-driven UI generators. Voice shines here: you describe the design in words ("sidebar on the left with groups, main canvas on the right with a card grid, search bar on top with tag filters") and the AI draws it. Typing that is painful; speaking it takes 15 seconds. After generation you iterate by voice: "reduce padding, add a dark mode toggle, animate the cards as they appear".

How to phrase a voice prompt

You do not have to formulate as if for documentation. Useful tricks:

  • Start with context: "okay so there is a React component Profile, it takes user from props, renders an avatar and a bio..."
  • Dump constraints: "do not use class components, TypeScript strict, tests on vitest"
  • Describe the result, not the code: "make a popover smoothly appear on click with editing enabled"
  • End with a closer: "alright, let's go" or "do it"

The AI handles natural speech without trouble.

Free 10,000 words a day — how many prompts is that

A typical vibe-coder prompt is 50-150 words, so 10,000 words a day covers roughly 65-200 prompts, or about 50 of the genuinely long 200-word kind. For most developers that is enough with headroom. Pro is only worth it if your prompts are genuinely long (200-500 words with code examples) or you dictate non-stop. Remember: the 10,000 counts recognised words; the dictation history is kept locally (see the stats in the app).
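
The budget arithmetic above, written out as a quick check (the prompt sizes are this section's estimates, not measurements):

```python
# How far 10,000 recognised words stretch at typical prompt sizes.
FREE_WORDS_PER_DAY = 10_000

for words_per_prompt in (50, 150, 200):
    prompts = FREE_WORDS_PER_DAY // words_per_prompt
    print(f"{words_per_prompt}-word prompts: ~{prompts}/day")
# 50-word prompts: ~200/day
# 150-word prompts: ~66/day
# 200-word prompts: ~50/day
```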

For vibe coders who describe architecture by voice and iterate fast with an AI.

Your voice is faster than your keyboard. Try it.

Free version available