An intelligent test case generation system powered by open-source LLMs served through Ollama, with a FAISS vector database for semantic search and context-aware test case creation.
- AI-Powered Test Generation: Leverages Ollama-served models (Llama 2, Mistral) for intelligent test case creation
- Vector-Based Context Search: Uses FAISS for semantic similarity search in knowledge base
- Knowledge Base Integration: Processes documentation and examples to generate contextually relevant test cases
- RESTful API: Flask-based backend with CORS support
- Token Usage Tracking: Built-in token counter for monitoring LLM usage
- Extensible Architecture: Modular design for easy integration and customization
```
AITestCaseGenerator/
├── backend/                  # Flask backend application
│   ├── app.py                # Main Flask application
│   ├── vector_store.py       # FAISS vector store implementation
│   ├── token_counter.py      # Token usage tracking
│   ├── knowledge_base/       # Documentation and examples
│   │   ├── overall_functionality.txt
│   │   ├── test_case_examples/
│   │   └── best_practices/
│   ├── vector_store/         # FAISS index files (generated)
│   └── requirements.txt      # Python dependencies
├── AITestAgent/              # Frontend application
└── README.md
```
- Python 3.8+
- Ollama installed and running
- Git (for version control)
- Ollama models pulled: `llama2:latest`, `mistral:latest`
- Clone the repository

  ```bash
  git clone <repository-url>
  cd AITestCaseGenerator
  ```

- Set up a Python virtual environment

  ```bash
  cd backend
  python3 -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Install and start Ollama

  ```bash
  # Install Ollama (macOS)
  brew install ollama

  # Start the Ollama service
  ollama serve

  # Pull the required models
  ollama pull llama2
  ollama pull mistral
  ```

- Initialize the vector store (a sketch of what this initializer might do follows below)

  ```bash
  python -c "from vector_store import initialize_vector_store; initialize_vector_store()"
  ```
To run the backend:

```bash
cd backend
source venv/bin/activate
python app.py
```

The server will start on http://localhost:5000.
`GET /health`

Returns system status including Ollama connectivity and token usage statistics.
`POST /generate-test-cases`

Content-Type: application/json

```json
{
  "requirements": "User login functionality with email validation",
  "test_type": "functional",
  "complexity": "medium"
}
```
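As a rough illustration of how the pieces fit together, a route like the one below could implement this endpoint by retrieving FAISS context and calling Ollama's `/api/generate` endpoint. This is a hypothetical sketch; the actual `app.py` may be structured differently.

```python
# Hypothetical sketch of the /generate-test-cases route; the real app.py may differ.
import requests
from flask import Flask, jsonify, request
from flask_cors import CORS

from vector_store import get_vector_store

app = Flask(__name__)
CORS(app)


@app.route("/generate-test-cases", methods=["POST"])
def generate_test_cases():
    body = request.get_json()
    # Retrieve knowledge-base chunks semantically close to the requirements.
    context = get_vector_store().similarity_search(body["requirements"], k=5)
    prompt = (
        "Using these examples and guidelines:\n"
        + "\n---\n".join(doc.page_content for doc in context)
        + f"\n\nWrite {body.get('complexity', 'medium')}-complexity "
        + f"{body.get('test_type', 'functional')} test cases for: "
        + body["requirements"]
    )
    # Call the local Ollama generate endpoint (non-streaming).
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama2", "prompt": prompt, "stream": False},
        timeout=300,
    )
    return jsonify({"test_cases": resp.json()["response"]})
```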
The FAISS vector store automatically processes documents from the `knowledge_base/` directory:

- Functionality specifications: `overall_functionality.txt`
- Test case examples: `test_case_examples/`
- Best practices: `best_practices/`
Create a `.env` file in the backend directory:

```env
OLLAMA_BASE_URL=http://localhost:11434
VECTOR_STORE_PATH=./vector_store
KNOWLEDGE_BASE_PATH=./knowledge_base
CHUNK_SIZE=1000
CHUNK_OVERLAP=200
```
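If the backend reads these settings with python-dotenv, the pattern would look roughly like this. This is an assumption for illustration; the actual configuration loading in `app.py` is not shown in this README.

```python
# Assumption: settings are read with python-dotenv; app.py may load them differently.
import os

from dotenv import load_dotenv

load_dotenv()  # reads backend/.env into the process environment

OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
VECTOR_STORE_PATH = os.getenv("VECTOR_STORE_PATH", "./vector_store")
CHUNK_SIZE = int(os.getenv("CHUNK_SIZE", "1000"))
CHUNK_OVERLAP = int(os.getenv("CHUNK_OVERLAP", "200"))
```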
Ensure Ollama is running and accessible:

```bash
# Check Ollama status
curl http://localhost:11434/api/tags

# Test model availability
ollama list
```
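The same check can be performed from Python against Ollama's REST API, for example to confirm the required models are pulled. This snippet is illustrative and not part of the repository:

```python
# Illustrative: confirm Ollama is reachable and the required models are pulled.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
available = {m["name"] for m in resp.json()["models"]}
missing = {"llama2:latest", "mistral:latest"} - available
print("Missing models:", ", ".join(missing) or "none")
```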
The system tracks token usage for cost monitoring:

```bash
# View token usage statistics
curl http://localhost:5000/health
```

Vector store statistics are available from Python:

```python
from vector_store import get_vector_store

vs = get_vector_store()
stats = vs.get_stats()
print(stats)
```
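The repository's `token_counter.py` is not reproduced here, but a minimal counter might look like the following sketch, using a rough characters-per-token heuristic. This is purely illustrative; the real module may count tokens differently, for example from Ollama's `prompt_eval_count`/`eval_count` response fields.

```python
# Hypothetical sketch of a counter like token_counter.py; the real module may differ.
class TokenCounter:
    """Accumulates approximate prompt/response token counts across requests."""

    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.response_tokens = 0

    @staticmethod
    def estimate(text: str) -> int:
        # Rough heuristic: about 4 characters per token for English text.
        return max(1, len(text) // 4)

    def record(self, prompt: str, response: str) -> None:
        self.prompt_tokens += self.estimate(prompt)
        self.response_tokens += self.estimate(response)

    def stats(self) -> dict:
        return {
            "prompt_tokens": self.prompt_tokens,
            "response_tokens": self.response_tokens,
            "total_tokens": self.prompt_tokens + self.response_tokens,
        }
```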
To update the knowledge base:

- Add documents to the appropriate directories in `knowledge_base/`
- Reinitialize the vector store:

  ```python
  from vector_store import initialize_vector_store

  initialize_vector_store(force_recreate=True)
  ```
You can query the vector store directly:

```python
from vector_store import get_vector_store

vs = get_vector_store()
results = vs.similarity_search("user authentication test cases", k=5)
for doc in results:
    print(f"Metadata: {doc.metadata}")
    print(f"Content: {doc.page_content[:200]}...")
```
Contributions are welcome:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
Common issues and fixes:

- Ollama Connection Error

  ```bash
  # Check if Ollama is running
  ps aux | grep ollama

  # Restart Ollama
  ollama serve
  ```

- Vector Store Initialization Failed

  ```bash
  # Check that the knowledge base files exist
  ls -la backend/knowledge_base/

  # Recreate the vector store
  python -c "from vector_store import initialize_vector_store; initialize_vector_store(force_recreate=True)"
  ```

- FAISS Import Error

  ```bash
  # Reinstall FAISS
  pip uninstall faiss-cpu
  pip install faiss-cpu==1.11.0
  ```
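After reinstalling, a quick sanity check confirms the FAISS Python bindings import and index correctly. This is generic FAISS usage, independent of this project:

```python
# Verify that FAISS imports and can build a tiny index.
import numpy as np
import faiss

print(faiss.__version__)
index = faiss.IndexFlatL2(4)                  # 4-dimensional L2 index
index.add(np.zeros((1, 4), dtype="float32"))  # add one vector
print("ntotal:", index.ntotal)                # expect: 1
```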
For support and questions:
- Create an issue in the GitHub repository
- Check the troubleshooting section above
- Review Ollama documentation for LLM-related issues
- v1.0.0 - Initial release with basic test case generation
- v1.1.0 - Added FAISS vector store integration
- v1.2.0 - Enhanced context-aware generation
Built with ❤️ using Ollama, FAISS, and Flask