Record once, then let Smart Screen understand your clicks, narration, and workflow. Add polished zooms, captions, trims, and callouts without giving up control of the final edit.
Most screen recorders capture everything.
Great demos should focus only on what matters.
Smart Screen sits in the useful middle: it reads clicks, narration, silence, and motion to suggest a cleaner story, while still letting you choose exactly what gets applied.
Three bundled native helpers run alongside your recording to capture the context normal recorders miss: real click positions, keystroke timing, and speech transcription. That means the editor can understand intent, not just pixels.
Record with a mic and get an editable caption track automatically. Review the transcript, fix wording, seek by segment, and burn captions into the final export without leaving the editor.
The local Smart Screen pass already gives you useful zoom, trim, and focus suggestions the moment a recording opens. AI is optional refinement, not a requirement for the product to feel smart.
Use OpenAI for stronger multimodal refinement, or point to a local Ollama model when privacy matters more. The app keeps secrets in the main process and lets you apply AI suggestions selectively instead of blindly accepting them.
The core workflow works locally by default. AI can sharpen the result, but the product is already useful before you add it.
Screen + mic + native click/keystroke telemetry captured simultaneously. Recordings save as .webm with cursor JSON and audio sidecars alongside.
Smart Screen heuristics run the moment the editor opens. Auto-transcription fires if a .transcription.wav sidecar exists. No waiting, no manual trigger.
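The sidecar convention can be sketched roughly like this. The exact file naming is an assumption inferred from the copy above (a `.webm` recording next to a cursor JSON and a `.transcription.wav`), not the app's documented layout:

```typescript
import * as fs from "fs";

// Hypothetical sidecar layout: "demo.webm" sits next to
// "demo.cursor.json" and "demo.transcription.wav".
function sidecarPaths(recordingPath: string) {
  const base = recordingPath.replace(/\.webm$/, "");
  return {
    cursor: `${base}.cursor.json`,
    transcriptionWav: `${base}.transcription.wav`,
  };
}

// Auto-transcription fires only when the audio sidecar actually
// exists, so mic-less recordings skip the pass with no manual toggle.
function shouldTranscribe(recordingPath: string): boolean {
  return fs.existsSync(sidecarPaths(recordingPath).transcriptionWav);
}
```

Deriving everything from the recording's own path keeps the bundle portable: move the `.webm` and its sidecars together and the editor finds them again.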
Review zooms, trims, and captions in the timeline. Apply AI suggestions selectively. Toggle between raw and polished preview to see the exact transformation.
MP4 or GIF with captions burned in, click emphasis, keystroke overlays, and applied effects. Share the file — not a screen recording app link.
Start with the local editing flow, then choose cloud AI or local Ollama only if your workflow benefits from it.
Full Smart Screen analysis without any network call. Works on a plane. Works in a secure environment. Works on day one with no setup.
Run local heuristics first, then optionally pass to OpenAI for transcript-grounded zoom planning, step titles, and callout suggestions.
Route model calls to a local Ollama instance. Auto-discovers installed models, distinguishes base from instruction-tuned ones, and works on air-gapped machines.
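Discovery can lean on Ollama's standard `/api/tags` endpoint, which lists installed models by name. The base-vs-instruct split below is an illustrative name-based heuristic, not necessarily the app's actual logic:

```typescript
// Ollama's GET /api/tags returns { models: [{ name: "llama3:latest" }, ...] }.
interface OllamaModel {
  name: string;
}

// Heuristic sketch: treat models whose tag mentions "instruct" or
// "chat" as instruction-tuned; everything else as a base model.
function isInstructionTuned(model: OllamaModel): boolean {
  return /(instruct|chat)/i.test(model.name);
}

// Query a local Ollama instance for its installed models.
async function discoverModels(
  host = "http://localhost:11434"
): Promise<OllamaModel[]> {
  const res = await fetch(`${host}/api/tags`);
  const body = (await res.json()) as { models: OllamaModel[] };
  return body.models;
}
```

Because everything goes to `localhost:11434` by default, no request ever leaves the machine.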
Real screenshots from the app — no mockups, no stock photos.
From raw recording to polished export — the complete flow in under three minutes.
The main process handles recording, native telemetry, transcription, and AI provider calls. The renderer handles the UI. 50+ IPC channels connect them — keeping secrets and privileged APIs where they belong.
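That split is the standard Electron pattern: privileged work lives behind `ipcMain.handle` in the main process, and the renderer reaches it only through a preload bridge. A minimal sketch of how a channel allowlist keeps secrets on the main side (channel names here are illustrative, not the app's real ones):

```typescript
// Illustrative allowlist: the renderer may only invoke channels listed
// here, so an API key read in the main process never has to cross into
// renderer-accessible code directly.
const ALLOWED_CHANNELS = new Set([
  "recording:start",
  "recording:stop",
  "ai:refine-suggestions",
]);

// Reject any channel the bridge does not explicitly expose.
function assertAllowed(channel: string): string {
  if (!ALLOWED_CHANNELS.has(channel)) {
    throw new Error(`Blocked IPC channel: ${channel}`);
  }
  return channel;
}

// In the preload script, the bridge would wrap invokes with the check
// (Electron APIs shown as comments for shape only):
//
//   contextBridge.exposeInMainWorld("api", {
//     invoke: (ch: string, ...args: unknown[]) =>
//       ipcRenderer.invoke(assertAllowed(ch), ...args),
//   });
```

The renderer gets a narrow, typed surface; everything else, including provider secrets, stays in the main process.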