A free, unofficial reverse-engineered API for ChatGPT.com built with Node.js. This project provides a simple HTTP server that allows you to interact with ChatGPT without requiring an API key.
THIS PROJECT IS FOR TESTING AND EDUCATIONAL PURPOSES ONLY!
- ❌ DO NOT USE IN PRODUCTION
- ❌ DO NOT ABUSE THIS API
- ⚠️ This repository is NOT ACTIVELY MAINTAINED
- ⚠️ May FAIL when ChatGPT changes their API payload or token algorithm
- ⚠️ We are NOT RESPONSIBLE for any misuse or consequences
- ✅ Use at your own risk and comply with OpenAI's Terms of Service
This is a simple implementation. If you need more features, feel free to extend it or use OpenAI's official API for production applications.
- 🚀 No API key required
- 🔄 Reverse-engineered ChatGPT.com API
- 🌐 Simple HTTP REST interface
- 🔒 Built-in security headers and CORS support
- 🎭 IP spoofing and header simulation
- 🔐 Automatic CSRF and Sentinel token handling
- 🌍 Built-in proxy support
- Node.js (v14 or higher)
- npm or yarn
Note: This project uses only Node.js built-in modules (http, https, crypto), so no additional dependencies need to be installed.
- Clone the repository:
```bash
git clone https://github.com/FurqanAhmadKhan/Chatgpt-Reverse-API.git
cd Chatgpt-Reverse-API
```
No dependencies to install! This project uses only Node.js built-in modules:

- `http` - HTTP server
- `https` - HTTPS requests
- `crypto` - Cryptographic functions
Start the server:

```bash
node index.js
```

The API comes pre-configured with the following settings:
- Server Port: 3000 (configurable via the `PORT` environment variable)
- Proxy:
  - Host: sg5.datafrenzy.org
  - Port: 20571
To change the server port, set the PORT environment variable:

```bash
PORT=8080 node index.js
```

Note: The default proxy is FREE and may be slow. Free proxies are generally slower than paid proxies.
To change the proxy, edit the CONFIG object in index.js:
```javascript
const CONFIG = {
  proxy: {
    host: 'your-proxy-host.com', // Change this
    port: 8080                   // Change this
  },
  server: {
    port: process.env.PORT || 3000
  }
};
```

Proxy Performance Tips:
- Free proxies = Slower response times
- Paid proxies = Faster, more reliable
- Consider using premium proxy services for better performance
- Test different proxies to find the best speed for your location
You can deploy this API on Vercel to access it from any device.
Install Vercel CLI globally (one-time setup):
```bash
npm install -g vercel
```

Note: The `-g` flag installs Vercel CLI globally on your system, not as a project dependency.
- Login to Vercel (first time only):

```bash
vercel login
```

- Deploy the project:

```bash
vercel
```

Follow the prompts:
- Set up and deploy? Y
- Which scope? Select your account
- Link to existing project? N
- What's your project's name? (press Enter for default)
- In which directory is your code located? .
- Want to override settings? N
- Deploy to production:
```bash
vercel --prod
```

Your API will be live at: https://your-project-name.vercel.app
Replace localhost:3000 with your Vercel URL:
```bash
curl -X POST https://your-project-name.vercel.app/post \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello from Vercel!"}'
```

To set a custom port or other environment variables:

```bash
vercel env add PORT
```

Or configure them in the Vercel dashboard under Project Settings → Environment Variables.
```bash
node index.js
```

The server will start on http://localhost:3000 by default.
POST /post
Send a message to ChatGPT and receive a response.
Request body:

```json
{
  "message": "Your question or prompt here"
}
```

Success response:

```json
{
  "response": "ChatGPT's response here"
}
```

Error response:

```json
{
  "error": "Error message description"
}
```

cURL:

```bash
curl -X POST http://localhost:3000/post \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```

JavaScript (fetch):

```javascript
fetch('http://localhost:3000/post', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    message: 'What is the capital of France?'
  })
})
  .then(response => response.json())
  .then(data => console.log(data.response))
  .catch(error => console.error('Error:', error));
```

Python (requests):

```python
import requests

url = 'http://localhost:3000/post'
payload = {'message': 'What is the capital of France?'}
response = requests.post(url, json=payload)
print(response.json()['response'])
```

Node.js (axios):

```javascript
const axios = require('axios');

axios.post('http://localhost:3000/post', {
  message: 'What is the capital of France?'
})
  .then(response => {
    console.log(response.data.response);
  })
  .catch(error => {
    console.error('Error:', error.message);
  });
```

This API reverse-engineers the ChatGPT.com web interface by:
- Generating Device IDs: Creates unique device identifiers for each request
- Fetching CSRF Tokens: Obtains Cross-Site Request Forgery tokens
- Solving Sentinel Challenges: Completes proof-of-work challenges required by ChatGPT
- Simulating Browser Headers: Mimics legitimate browser requests
- IP Spoofing: Rotates IP addresses to avoid rate limiting
- Streaming Responses: Handles Server-Sent Events (SSE) for real-time responses
- `200` - Success
- `400` - Bad Request (missing message field)
- `404` - Not Found (invalid endpoint)
- `500` - Internal Server Error (ChatGPT API error)
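A client can branch on these codes explicitly. The sketch below assumes Node 18+ for the global `fetch`; the endpoint URL is a parameter so the same helper works locally or against a Vercel deployment:

```javascript
// Map the API's status codes to a return value or a descriptive error.
async function ask(message, url = 'http://localhost:3000/post') {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }),
  });
  const body = await res.json(); // server replies with JSON on all codes
  switch (res.status) {
    case 200: return body.response;
    case 400: throw new Error(`Bad request: ${body.error}`);
    case 404: throw new Error('Invalid endpoint');
    case 500: throw new Error(`ChatGPT API error: ${body.error}`);
    default:  throw new Error(`Unexpected status ${res.status}`);
  }
}

// Usage: ask('Hello').then(console.log).catch(console.error);
```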
- ⚠️ This repository is NOT maintained and may break without notice
- ⚠️ This is an unofficial API; it will fail when ChatGPT.com changes their API payload or token algorithm
- Rate limiting may apply based on usage patterns
- No conversation history is maintained between requests
- Responses are anonymous (no user account required)
- NOT suitable for production environments
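Because no history is kept server-side, a client that needs multi-turn context can prepend earlier turns to each new prompt itself. A minimal sketch (the transcript format is arbitrary, `fetch` assumes Node 18+, and long histories will eventually exceed prompt limits):

```javascript
// Emulate conversation history on the client by replaying prior turns
// inside each prompt sent to the stateless /post endpoint.
class Conversation {
  constructor(endpoint = 'http://localhost:3000/post') {
    this.endpoint = endpoint;
    this.turns = []; // { user, assistant } pairs
  }

  // Flatten stored turns plus the new message into one prompt string.
  buildPrompt(message) {
    const history = this.turns
      .map(t => `User: ${t.user}\nAssistant: ${t.assistant}`)
      .join('\n');
    return history ? `${history}\nUser: ${message}` : message;
  }

  async send(message) {
    const res = await fetch(this.endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message: this.buildPrompt(message) }),
    });
    const { response } = await res.json();
    this.turns.push({ user: message, assistant: response });
    return response;
  }
}
```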
- ⚠️ FOR TESTING PURPOSES ONLY - Not for production use
- This API bypasses ChatGPT's official authentication
- Use responsibly and in accordance with OpenAI's terms of service
- DO NOT ABUSE - Respect rate limits and usage policies
- We are NOT RESPONSIBLE for any misuse or violations
- Consider using OpenAI's official API for any commercial or production applications
- Ensure port 3000 is not already in use
- Check that Node.js is properly installed: `node --version`
- ChatGPT.com may have changed their API structure
- Check your internet connection
- Verify the proxy configuration is correct
- The default proxy is FREE and may be slow
- Free proxies have slower response times than paid proxies
- Change the proxy in the `CONFIG` object in `index.js` to a faster one
- Consider using a paid proxy service for better performance
- Ensure your request includes the `message` field
- Check that the Content-Type header is set to `application/json`
- Try changing the proxy if requests are timing out
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- This project is STRICTLY FOR TESTING AND EDUCATIONAL PURPOSES ONLY
- NOT INTENDED FOR PRODUCTION USE - Use OpenAI's official API instead
- This repository is NOT ACTIVELY MAINTAINED and may break at any time
- WILL FAIL when ChatGPT updates their API payload structure or token generation algorithm
- We are NOT RESPONSIBLE for any consequences, damages, or violations resulting from the use of this code
- DO NOT ABUSE this API - respect rate limits and usage policies
- Not affiliated with, endorsed by, or officially connected with OpenAI
- Use at your own risk and ensure full compliance with OpenAI's Terms of Service
- By using this code, you accept all responsibility for your actions
This is a simple, basic implementation. If you need more features such as:
- Conversation history
- Multiple model support
- Streaming responses to client
- Authentication
- Rate limiting
- Logging and monitoring
Feel free to fork and extend the codebase, or consider using OpenAI's official API for robust production applications.
Built with ❤️ by FurqanAhmadKhan
If you find this useful, please follow and ⭐ star the repository!
- OpenAI for creating ChatGPT
- The open-source community for reverse-engineering efforts
- 🧪 Testing purposes only - Do not use in production
- 🚫 Do not abuse - Respect the service and other users
- 💔 Not maintained - May break when ChatGPT updates their systems
- 🔧 Extensible - Fork and add features as needed
- ☁️ Deploy on Vercel - Use from any device
- ⭐ Star if useful - Follow @FurqanAhmadKhan for more projects
We are NOT responsible for any consequences of using this code. Use at your own risk!
