A reverse-engineered, no-API-key ChatGPT client that streams responses in real time. It mimics a real browser session to bypass the authentication wall, solves proof-of-work and Turnstile challenges automatically, and exposes a clean HTTP streaming endpoint via FastAPI.
Demo video:
`poc.mp4` (included in the repo). To embed it on GitHub, upload the file via a GitHub issue or release and replace this line with the returned asset URL.
| Feature | Details |
|---|---|
| No API key required | Works with anonymous ChatGPT sessions — no OpenAI account needed |
| Real-time streaming | Responses are streamed token by token via server-sent events (SSE), so you see output immediately as ChatGPT generates it |
| Browser fingerprint spoofing | Uses curl_cffi to impersonate a real Chrome browser, including TLS fingerprints |
| Auto session bootstrapping | Automatically fetches the latest build number, device ID, and chat-requirements token on startup |
| Proof-of-Work solver | Automatically solves OpenAI's proof-of-work challenge embedded in the chat-requirements response |
| Turnstile bypass | Handles Cloudflare Turnstile token negotiation silently |
| FastAPI REST endpoint | Drop-in /conversation POST endpoint that returns a text/event-stream response |
| Direct class usage | Use ChatGPT directly in Python without spinning up a server |
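The `/conversation` endpoint frames each token as a server-sent event before it goes over the wire. A minimal sketch of SSE framing in plain Python (the function name and token source below are illustrative, not the project's actual implementation):

```python
def sse_format(chunk: str) -> str:
    """Frame a text chunk as a single server-sent event."""
    # Each SSE event is a "data:" line terminated by a blank line.
    return f"data: {chunk}\n\n"

# A streaming response is just these frames emitted one by one:
tokens = ["Hello", ", ", "world"]  # stand-in for model output
stream = "".join(sse_format(t) for t in tokens)
print(stream)
```

Clients that speak SSE (browsers via `EventSource`, or `curl --no-buffer`) can consume these frames as they arrive.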
```
reverse-chatgpt/
├── app.py             # FastAPI application & /conversation endpoint
├── chat.py            # ChatGPT class — builds payloads, streams responses
├── gpt_session.py     # Session bootstrap (cookies, build number, sentinel tokens)
├── build.py           # Proof-of-work solver & token utilities
├── tunsile.py         # Turnstile challenge handler
├── utils.py           # Helpers: build number extraction, config loading, etc.
├── config.json        # Static configuration
├── requirements.txt   # Python dependencies
└── test.py            # Quick CLI smoke test
```
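`build.py` solves the proof-of-work challenge embedded in the chat-requirements response. The exact scheme is OpenAI-specific and not reproduced here, but the general shape is a hash-preimage search; the loop below is an illustrative sketch of that idea only (hash choice, seed, and difficulty are made up for the example):

```python
import hashlib

def solve_pow(seed: str, difficulty: int) -> int:
    """Find a nonce whose SHA3-512 digest of seed+nonce starts with
    `difficulty` zero hex characters. Illustrative only — not OpenAI's scheme."""
    nonce = 0
    while True:
        digest = hashlib.sha3_512(f"{seed}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = solve_pow("example-seed", 2)  # small difficulty so it returns quickly
```

The real challenge parameters arrive with the chat-requirements token at session bootstrap, and the solved answer is sent back with the conversation request.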
```shell
git clone https://github.com/yourname/reverse-chatgpt.git
cd reverse-chatgpt
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Note: `curl_cffi` requires a C compiler and `libcurl`. On Debian/Ubuntu:

```shell
sudo apt install build-essential libcurl4-openssl-dev
```
```shell
python test.py
```

This instantiates `ChatGPT` directly and streams a response to stdout — no server needed.
```shell
uvicorn app:app --host 0.0.0.0 --port 5000
```

The server will be available at http://localhost:5000.
Send a prompt and receive a real-time streamed response.
Request
```http
POST /conversation
Content-Type: application/json

{
  "text": "Write a brief Vue.js tutorial"
}
```

Response — `text/event-stream`
Tokens are streamed back as plain text chunks as soon as ChatGPT generates them.
Example with curl:
```shell
curl -X POST http://localhost:5000/conversation \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"text": "Explain async/await in JavaScript"}' \
  --no-buffer
```

Example with Python `requests`:
```python
import requests

response = requests.post(
    "http://localhost:5000/conversation",
    json={"text": "Write a brief Vue.js tutorial"},
    stream=True,
)
for line in response.iter_lines(decode_unicode=True):
    if line:
        print(line)
```

Using the `ChatGPT` class directly, without spinning up a server:

```python
from chat import ChatGPT

gpt = ChatGPT()
for chunk in gpt.reply_chat("What is the capital of France?"):
    print(chunk, end="", flush=True)
```
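If you want the complete reply rather than incremental printing, the chunk generator can simply be drained into one string (a trivial sketch; this assumes `reply_chat` yields text chunks as above):

```python
def collect_reply(chunks) -> str:
    """Drain a chunk generator/iterable into a single string."""
    return "".join(chunks)

# e.g.: full = collect_reply(gpt.reply_chat("What is the capital of France?"))
```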