This project provides a scaffold for testing Mistral AI services. All development and examples have been created using Mistral AI Vibe CLI 2.2.1 and the Devstral 2 model.
Powered by:
- Mistral AI Vibe CLI 2.2.1
- Devstral 2 Model
- Python 3.12+ environment with venv
- Mistral AI Vibe CLI 2.2.1 integration with determinism control
- Devstral 2 model support
- Git version control
- Environment variable management with python-dotenv
- Code formatting with Black
- Linting with Ruff
- Type checking with mypy
- Testing with pytest
- Determinism controller for precise AI response control
- Streaming responses for real-time output
- Performance metrics tracking (tokens, duration, etc.)
- Comprehensive error handling and validation
- Multiple response modes (regular, streaming, with metrics)
- Standardized output with colorama for better UX
- Batch processing with JSONL format support
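The JSONL batch format mentioned above can be sketched as follows. This is a minimal illustration only: the `custom_id`/`body` field names are assumptions, not necessarily the schema this project's `example_batch_processing.py` actually uses.

```python
import json

# Hypothetical batch input: one JSON request object per line (JSONL).
requests = [
    {"custom_id": str(i), "body": {"messages": [{"role": "user", "content": q}]}}
    for i, q in enumerate(["What is JSONL?", "Summarize batch processing."])
]

with open("batch_input.jsonl", "w", encoding="utf-8") as f:
    for req in requests:
        f.write(json.dumps(req) + "\n")

# Each line is an independent JSON document, so a batch file can be
# streamed line by line without loading everything into memory.
with open("batch_input.jsonl", encoding="utf-8") as f:
    parsed = [json.loads(line) for line in f]

print(len(parsed))  # 2
```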
- Clone the repository:

  ```bash
  git clone https://github.com/ibitato/MistralAITests.git
  cd MistralAITests
  ```

- Create and activate a virtual environment:

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  pip install -r requirements-dev.txt
  ```

- Set up environment variables:

  ```bash
  cp .env.example .env
  # Edit .env with your Mistral AI API key
  ```
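Under the hood, python-dotenv populates `os.environ` from the `.env` file. A toy sketch of that behaviour (the real `load_dotenv()` additionally handles quoting, `export` prefixes, and interpolation):

```python
import os

def load_env(text: str) -> None:
    """Toy version of python-dotenv's load_dotenv(): parse KEY=VALUE lines."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Like load_dotenv's default, do not override variables already set.
        os.environ.setdefault(key.strip(), value.strip())

load_env("# .env\nMISTRAL_API_KEY=your_api_key_here\n")
print("MISTRAL_API_KEY" in os.environ)  # True
```

In the project itself you would simply call `from dotenv import load_dotenv; load_dotenv()` instead.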
Run tests:

```bash
pytest
```

Format code:

```bash
black .
```

Lint code:

```bash
ruff check .
```

Type check:

```bash
mypy .
```

Run the example with determinism control:

```bash
python src/example_determinism.py
```

Get real-time responses chunk by chunk:
```python
from src.mistral_client import MistralAIClient

client = MistralAIClient(api_key="your_api_key")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me a story about AI."},
]

print("Streaming response:")
for chunk in client.chat_completion_stream(messages):
    print(chunk, end="", flush=True)
```

Get detailed metrics about your API calls:
```python
result = client.chat_completion_with_metrics(messages)
print(f"Response: {result['content']}")
print(f"Duration: {result['duration']:.3f} seconds")
print(f"Tokens used: {result['tokens']['total']}")
print(f"Response time: {result['metrics']['response_time_ms']:.1f} ms")
```

Handle validation and API errors:

```python
try:
    response = client.chat_completion([])  # Empty messages list
except ValueError as e:
    print(f"Validation error: {e}")

try:
    response = client.chat_completion(messages)
except RuntimeError as e:
    print(f"API error: {e}")
```

Project structure:

```
MistralAITests/
├── .venv/                        # Virtual environment
├── .git/                         # Git repository
├── .gitignore                    # Git ignore rules
├── .env                          # Environment variables
├── .env.example                  # Example environment variables
├── src/                          # Source code
│   ├── __init__.py
│   ├── mistral_client.py         # Mistral AI client
│   ├── determinism_controller.py # Determinism control
│   └── utils.py                  # Utilities
├── tests/                        # Tests
│   ├── __init__.py
│   └── test_mistral.py           # Test cases
├── README.md                     # Documentation
├── requirements.txt              # Production dependencies
├── requirements-dev.txt          # Development dependencies
└── pyproject.toml                # Project configuration
```
This project demonstrates comprehensive coverage of Mistral AI SDK capabilities:
| Category | Feature | Status | Example | Coverage |
|---|---|---|---|---|
| Core Chat | Basic chat completion | ✅ | example_determinism.py | Complete |
| Core Chat | Streaming responses | ✅ | chat_completion_stream() | Complete |
| Core Chat | Performance metrics | ✅ | chat_completion_with_metrics() | Complete |
| Core Chat | Determinism control | ✅ | determinism_controller.py | Complete |
| Core Chat | Temperature control | ✅ | All examples | Complete |
| Tool Calling | Function calling | ✅ | example_tool_calling.py | Complete |
| Tool Calling | Tool execution | ✅ | execute_tool_calls() | Complete |
| Tool Calling | Parallel tools | Partial | Planned | |
| Vision | Image analysis | ✅ | example_vision.py | Complete |
| Vision | Multimodal chat | ✅ | vision_with_text() | Complete |
| Batch Processing | Batch jobs | ✅ | example_batch_processing.py | Complete |
| Batch Processing | Status monitoring | ✅ | check_batch_status() | Complete |
| Document Intelligence | OCR processing | ✅ | example_advanced_ocr.py | Complete |
| Document Intelligence | PDF metadata | ✅ | example_complex_pdf.py | Complete |
| Document Intelligence | Table extraction | ✅ | example_complex_pdf.py | Complete |
| Document Intelligence | Structure analysis | ✅ | example_complex_pdf.py | Complete |
| Document Intelligence | PDF to JSON | ✅ | example_complex_pdf.py | Complete |
| Reasoning | Step-by-step reasoning | ✅ | example_determinism.py | Complete |
| Reasoning | Thinking process | ✅ | All determinism levels | Complete |
| Embeddings | Text embeddings | ✅ | embeddings() | Basic |
| File Management | Document upload | ✅ | document_manager.py | Complete |
| File Management | File listing | ✅ | list_documents() | Complete |
| File Management | File retrieval | ✅ | get_document_info() | Complete |
| File Management | File deletion | ✅ | delete_document() | Complete |
Planned features:

| Category | Feature | Status | Priority |
|---|---|---|---|
| Agents API | Web search | Planned | High |
| Agents API | Code execution | Planned | High |
| Agents API | Multi-agent workflows | Planned | Medium |
| RAG | Document retrieval | Planned | High |
| RAG | Vector database | Planned | High |
| RAG | Hybrid search | Planned | Medium |
| Advanced | JSON mode | Planned | Medium |
| Advanced | Function calling v2 | Planned | Medium |
| Advanced | Parallel function calls | Planned | High |
| Advanced | Structured outputs | Planned | Medium |
- Core Features: 18/18 implemented (100%)
- Advanced Features: 8/16 implemented (50%)
- Document Intelligence: 6/6 implemented (100%)
- Reasoning: 2/2 implemented (100%)
- Overall: 26/32 features (81% coverage)
- Developer: David R. Lopez B.
- Email: ibitato@gmail.com
- Tools: Mistral AI Vibe CLI 2.2.1 with Devstral 2 Medium LLM
This project includes a determinism controller that allows fine-grained control over AI response creativity vs. precision:
- Level 1 (Exact): Deterministic responses, minimal variation
- Level 2 (Focused): Highly controlled generation, minimal creativity
- Level 3 (Balanced): Balanced generation (default)
- Level 4 (Creative): More freedom and variation
- Level 5 (Free): Highly creative, maximum variation
The determinism controller automatically handles Mistral AI requirements, such as setting top_p=1.0 when using greedy sampling (temperature=0.0).
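The level scheme above can be sketched as a simple mapping. The temperature values here are illustrative assumptions; the actual parameters live in `determinism_controller.py`.

```python
# Illustrative temperatures for the five determinism levels.
LEVELS = {
    1: {"temperature": 0.0},  # Exact: greedy sampling
    2: {"temperature": 0.2},  # Focused
    3: {"temperature": 0.7},  # Balanced (default)
    4: {"temperature": 1.0},  # Creative
    5: {"temperature": 1.5},  # Free
}

def sampling_params(level: int) -> dict:
    """Return sampling parameters for a determinism level (1-5)."""
    if level not in LEVELS:
        raise ValueError(f"level must be 1-5, got {level}")
    params = dict(LEVELS[level])
    # Mistral AI requirement noted above: greedy sampling needs top_p=1.0.
    if params["temperature"] == 0.0:
        params["top_p"] = 1.0
    return params

print(sampling_params(1))  # {'temperature': 0.0, 'top_p': 1.0}
```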
This project and all examples were developed using:

- Mistral AI Vibe CLI 2.2.1: the CLI tool that enabled rapid development and testing of Mistral AI services.
- Devstral 2 Model: the language model that powers all AI responses and demonstrations in this project.
Mistral AI provides state-of-the-art language models and tools for building intelligent applications. This project demonstrates best practices for integrating with Mistral AI services using their official Python SDK and CLI tools.
- CLI Tool: Mistral AI Vibe CLI 2.2.1
- Model: Devstral 2
- SDK: mistralai Python package
- Framework: Python 3.12+
David R. Lopez B.
- Email: ibitato@gmail.com
- GitHub: ibitato
- Role: Lead Developer & Architect
MIT