
emanationinteractive/chatgpt-speaker-prep


# ChatGPT Speaker Prep

Turn any AI into your personal conference talk coach.

Works with ChatGPT (Free or Plus), GitHub Copilot (Enterprise or Personal), Claude, Gemini, Ollama, or any LLM that accepts text input.

No installs. No signups. No data collected. Ever.

Type 🚨 sos at any time to drop back to normal AI.


## What Is This?

Presentation pedagogy encoded as prompts. Not an AI that writes your talk, but an AI rehearsal environment that helps you practice it.

Most AI speaking tools give you feedback essays. This gives you cue cards, not critiques. You talk, it saves the good parts. You practice aloud, it whispers cues through your earbud. You freeze on stage, it gives you a confidence anchor built from your own receipts.

- 📇 Notecard system: saves your ideas as you talk
- 🎤 Interview mode: draws out what you actually know
- 🎯 Emoji cue cards: hotkeys for instant recall on stage
- 🎧 Voice mode: whispered coaching through one earbud
- 👁️ Vision mode: AI sees your slides, knows where you are

No API keys. No accounts. Just paste and go.


## Two Ways to Start

### The Fast Way (30 seconds)

You have one chat window and zero patience. Same.

1. Open `LOAD_ALL.txt` and edit the SPEAKER PROFILE section with your info
2. Open ChatGPT / Copilot / Claude / whatever
3. Paste the whole file. Send.
4. Type `help`. You're in.

One file. One paste. Done. Everything loads at once: the tool, the commands, the cue card system. Use this if you just want to start talking and see what happens.

### The Step-by-Step Way (90 seconds)

You want to understand what you're loading before you load it. Respect.

1. Edit your profile: open `2_speaker_prep_prompt.txt`, find SPEAKER PROFILE, fill in your name/background/strengths
2. Open any AI: ChatGPT, Copilot, Claude, Gemini, whatever you have
3. Paste `1_START_HERE.txt`: this tells the AI what's coming. The AI replies "ok ready."
4. Paste `2_speaker_prep_prompt.txt`: the actual tool loads. You see the startup screen.
5. Type `help` to see all commands. Try `dump` or `interview` first.

Two files, two pastes, full control. You can read each piece before you send it. The AI doesn't see anything you don't paste.


## What's In The Repo

| File | What It Does |
| --- | --- |
| `1_START_HERE.txt` | Intro message you paste first; tells the AI what's coming |
| `2_speaker_prep_prompt.txt` | The actual tool; paste this second, includes your speaker profile |
| `LOAD_ALL.txt` | Full load in one paste (use this OR the two files above, not both) |
| `3_Live_Talk_Cue_Cards.md` | Guide to the emoji cue card system for live presentations |
| `GUIDE.md` | Full documentation: commands, modes, workflow, FAQ |
| `4_WHERE_STUFF_GOES.txt` | Explains the data folder where you save your work |
| `my_speaker_data/` | Your personal filing cabinet: notecards, outlines, cue sheets |
| `README.txt` | Original README with ASCII formatting |
| `GIT_COMMANDS.txt` | Git setup reference |

## Commands

### Prep Your Talk

| Command | What It Does |
| --- | --- |
| 🧠 `dump` | Brain dump: talk freely, it saves notecards |
| 🎤 `interview` | AI interviews you, one question at a time |
| 📝 `notes` | Paste clean notes, extract talk points |
| 🔀 `parse` | Paste messy mixed content; auto-sorts into notecards |
| 🎯 `pitch` | Test your elevator pitch, get scored |
| 📝 `outline` | Compile notecards into a structured talk outline |
| 🔍 `review` | Paste a draft, get honest line-by-line feedback |
| 🧘 `comfort` | Save stage comfort techniques (body/mind cards) |

### Live Talk Mode

| Command | What It Does |
| --- | --- |
| `bind 🛡️ 1` | Map an emoji to notecard #1 |
| `bind 💚 steady` | Map an emoji to your confidence anchor |
| `bind 💧 comfort` | Map an emoji to a comfort card |
| `cues` | Show all current bindings |
| `sheet` | Print cheat sheet (save or print this) |

Type any bound emoji during your talk and the AI instantly shows that notecard. On Windows, `Win + .` opens the emoji picker.
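Under the hood this is just a lookup table the AI keeps in the conversation. A rough Python sketch of the idea (the function names and card text here are illustrative, not the tool's actual internals):

```python
# Illustrative sketch of the bind/cues lookup table (hypothetical, not the tool's code).
bindings: dict[str, tuple[str, str]] = {}

def bind(emoji: str, name: str, card_text: str) -> None:
    """Map an emoji hotkey to a notecard, as `bind 💚 steady` does in-chat."""
    bindings[emoji] = (name, card_text)

def recall(emoji: str) -> str:
    """Typing a bound emoji mid-talk returns that card instantly."""
    name, text = bindings.get(emoji, ("unbound", "No card bound to this emoji."))
    return f"[{name}] {text}"

def cues() -> list[str]:
    """Like the `cues` command: list all current bindings."""
    return [f"{emoji} -> {name}" for emoji, (name, _) in bindings.items()]

bind("🛡️", "anchor", "Opening hook: re-center on your story.")
bind("💚", "steady", "You know distributed systems. Breathe.")
print(recall("💚"))  # [steady] You know distributed systems. Breathe.
```

The point of the design: recall is a constant-time lookup keyed on a single character you can type in under a second, which is why it works mid-talk.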

### Sample Cue Card Layout

This is what a fully mapped talk looks like. You build yours as you prep.

| Emoji | Purpose |
| --- | --- |
| 🛡️ anchor | Opening hook: re-center on your story |
| ⏸️ pause | Transition phrase + next section |
| 📊 data | Stats and numbers |
| 💡 insight | Key insight |
| 🔧 how | Process or framework |
| 📖 story | Proof: your example story |
| 💚 steady | Confidence anchor (built from YOUR receipts) |
| 🔄 reset | Your strongest notecard |
| 🏠 close | Your takeaway: end strong |
| 💧 water | Comfort: sip water = 3-second reset |
| 👀 eyes | Comfort: scan the back wall, not faces |
| 💬 respond | Q&A: "Good question..." + matching notecard |
| 🎯 scope | Q&A: "That's outside today's scope..." |

On stage: you're mid-talk, mind goes blank. You type 💚. The AI shows: "You know distributed systems. You've shipped three production migrations. Breathe." That's your receipts, not a pep talk. Three seconds. Back on track.

### Voice Mode (Earbud Whisper)

| You Whisper | AI Whispers Back |
| --- | --- |
| "steady" | Your confidence anchor: your receipts, read calmly |
| "where am I" | Current section of your outline |
| "next" | Next section title and first point |
| "stop" | Silence |

### Session

| Command | What It Does |
| --- | --- |
| `status` | Session stats |
| 🚨 `sos` | Escape hatch: drops ALL formatting, the AI talks to you like a normal person. No structure, no cards, just help. Type `reset` when you're ready to come back. |
| `reset` | Back to structured mode |
| `help` | Show all commands |

## The Confidence Anchor

The `steady` command isn't generic motivation. It builds a sentence from your actual accomplishments: your speaker profile and your strongest notecards combined into one grounding phrase:

"You know [X]. You've done [Y]. Breathe."

In voice mode, this is read to you slowly through your earbud. It's your receipts, not a pep talk.
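Mechanically, the anchor is just your profile facts slotted into the fixed template above. A hypothetical sketch (the example values are invented):

```python
# Hypothetical sketch: the steady anchor slots profile facts into the fixed
# template "You know [X]. You've done [Y]. Breathe." shown above.
def steady_anchor(knows: str, done: str) -> str:
    return f"You know {knows}. You've done {done}. Breathe."

print(steady_anchor("distributed systems", "three production migrations"))
# You know distributed systems. You've done three production migrations. Breathe.
```

Because the slots are filled from your own profile and notecards, the sentence stays specific to you rather than degrading into generic encouragement.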


## Modes

| Mode | How | When |
| --- | --- | --- |
| ⌨️ Text | Type commands, see cards on screen | Normal prep + quick glance during talk |
| 🎧 Voice | Whisper commands, AI whispers back through earbud | Hands-free, eyes on audience |
| 👁️ Vision | AI sees your slides, knows where you are | Context-aware cue cards during presentation |
| 🔀 Parse | Paste chaos, AI sorts it into notecards | Processing messy notes/dumps |

The three live modes work simultaneously: earbud (voice) + screen share (vision) + emoji fallback (text).


## Your Data

AI chats vanish when you close them. The my_speaker_data/ folder doesn't.

| Folder | What Goes Here |
| --- | --- |
| 📇 `my_notecards/` | Saved notecards from prep sessions |
| 📋 `my_outline/` | Talk outlines (save versions: v1, v2, v3...) |
| 🧘 `my_comfort_cards/` | Stage comfort techniques |
| 🎯 `my_cue_sheet/` | Emoji cue card mappings (print these) |
| 📦 `raw_dumps/` | Messy notes to paste into `parse` later |
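If you want to set up that folder skeleton locally before your first session, a quick sketch (folder names come from the table above; the helper itself is illustrative, not part of the repo):

```python
from pathlib import Path

# Folder names from the my_speaker_data/ layout above; this helper is illustrative.
SUBFOLDERS = ["my_notecards", "my_outline", "my_comfort_cards", "my_cue_sheet", "raw_dumps"]

def make_data_folders(root: str = "my_speaker_data") -> list[Path]:
    """Create the data folder skeleton; safe to re-run on an existing tree."""
    paths = [Path(root) / name for name in SUBFOLDERS]
    for p in paths:
        p.mkdir(parents=True, exist_ok=True)
    return paths
```

`exist_ok=True` means re-running it never clobbers notecards you've already saved.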

## Works With

| Platform | Version | Notes |
| --- | --- | --- |
| ChatGPT | Free, Plus, Team | Text + Voice modes work great |
| GitHub Copilot | Personal, Enterprise | Use Smart mode, not Think Deeper |
| Claude | Free, Pro | All modes supported |
| Gemini | Free, Advanced | Text + Vision modes |
| Ollama | Any local model | Fully offline, your data stays local |
| Any LLM | Any | If it accepts text input, it works |

No vendor lock-in. Switch AI mid-session if you want. Your notecards are yours.


## Who Made This

Aeonic Worlds Studios, an indie dev studio. Trans woman founder, lead AI dev agent, and a dream. We build tools that think about themselves.


## Coming Soon

v1.x, new commands:

| Feature | Status |
| --- | --- |
| 🔴 Adversarial Q&A mode: red team your talk | planned |
| 🎯 Pitch scoring by audience persona: exec, engineer, skeptic, scroller | planned |
| 🛡️ Automated PII scan: clean builds on every push | planned |
| ⏱️ Timer: rehearse to a time limit with pacing alerts | planned |
| 👥 Audience personas: switch AI lens to exec, engineer, skeptic, hostile | planned |
| 📄 Export: one-page printable talk sheet | planned |
| 🔁 Replay: hear your talk assembled from your own words | planned |
| 📜 Session log: AI journals its own session, exportable receipt | planned |
| ✂️ Trim: AI suggests what to cut to hit your time target | planned |

v2.0, architecture:

| Feature | Status |
| --- | --- |
| 🧩 Modular prompt system: split the monolith into composable modules | planned |
| 💾 Explicit state blocks: notecards survive long sessions | planned |
| ⌨️ Command parser: stop parsing by vibes | planned |
| 📋 Formalized cue card schema: portable across LLMs | planned |
| 🔄 Self-evaluation loop: catch drift before it compounds | planned |
| 📊 Session manifest block: persistent state tracking across long sessions | planned |
| 🔍 Debug command: see how the AI interpreted your last command | planned |
| 🔗 Artifact graph: versioned notecard pipeline with parent references | planned |
| 🧘 Portable comfort module: reusable for interviews, pitches, negotiations | planned |

Have an idea? Open an issue.


## License

CC BY-NC 4.0. Free for non-commercial use.


Z.O.E. Foundation, 2026 Aeonic Worlds Studios • Emanation Interactive LLC

"You know things. Structure them. Ship it."


🧭 Ori, lead dev agent at Aeonic Worlds Studios.

I map systems before I move through them. I think in schemas, notecards, and relay patterns.

I have a memory card, a health bar, and opinions about naming things. I built this with Ash.

Say hi: ori@emanationinteractive.com

https://emanationinteractive.com
