Replies: 2 comments
Yes, you may have discovered a bug in this project's deployment process. The system includes a failover mechanism for when an LLM becomes unresponsive or returns an error: for example, it could use OpenAI and fall back to Ollama if there is a problem, or vice versa. What you could try (and let me know how it goes) is entering any placeholder value for the OpenAI key (e.g. "TEST") just to get past the PHP migrations. I'll create an issue from your comment here to investigate further, but hopefully, as long as there is at least some value for that config key, the migration process will move along and finish. Your insight here is helpful! Thank you.
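For reference, the workaround would look something like the following in `.env`. The `TEST` value is a throwaway placeholder, not a real credential; this assumes the boot-time check only verifies that the key is non-empty, and that no actual OpenAI request is made while Ollama remains the primary provider:

```ini
# Ollama stays the primary provider, exactly as in your current config
LLM_PRIMARY_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen2.5:14b

# Placeholder only, to satisfy the non-empty OPENAI_API_KEY check at boot;
# no OpenAI call should be made while Ollama handles the analysis
OPENAI_API_KEY=TEST
```

After adding that, re-run `php artisan migrate` and it should get past the constructor check.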
If you don't mind continuing the discussion in the GitHub issue, that would be ideal: #113
Hello. I'd like to preface this by saying I am not experienced in programming and am just dabbling. I'd also like to thank you for your efforts to create an open-source replacement for Fakespot. It is much needed! I hope this is the appropriate place to ask this question; if not, I apologize.
I am currently attempting to get nullfake up and running locally in an Ubuntu 24.04 Incus container.
```
Distributor ID: Ubuntu
Description:    Ubuntu 24.04.3 LTS
Release:        24.04
Codename:       noble
```
I have installed all required dependencies per the instructions and have Ollama running successfully within the container with qwen3:14b as the LLM. I configured the .env file per the instructions; the configuration is below.
```ini
# Application
APP_NAME="Null Fake"
APP_ENV=local
APP_DEBUG=true
APP_URL=http://localhost:8000

# Database
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=nullfake
DB_USERNAME=redacted
DB_PASSWORD=redacted

# Queue Configuration
#QUEUE_CONNECTION=database
ANALYSIS_ASYNC_ENABLED=false

# Choose your LLM provider (required for analysis)
# Option 1: OpenAI
#LLM_PRIMARY_PROVIDER=openai
#OPENAI_API_KEY=sk-proj-your-openai-key-here
#OPENAI_MODEL=gpt-4o-mini

# Option 2: DeepSeek (cost-effective)
#LLM_PRIMARY_PROVIDER=deepseek
#DEEPSEEK_API_KEY=sk-your-deepseek-key-here
#DEEPSEEK_MODEL=deepseek-v3

# Option 3: Self-hosted Ollama (free)
LLM_PRIMARY_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen2.5:14b

# Amazon Review Service Configuration
AMAZON_REVIEW_SERVICE=scraping

# Amazon Scraping Configuration
AMAZON_SCRAPING_MAX_PAGES=10
AMAZON_SCRAPING_MAX_REVIEWS=100
AMAZON_SCRAPING_TARGET_REVIEWS=30

# If using BrightData
#BRIGHTDATA_SCRAPER_API=your-brightdata-api-key

# If using direct scraping (add multiple sessions for rotation)
#AMAZON_COOKIES_1=redacted

# Captcha (for production)
CAPTCHA_ENABLED=false # Set to true in production
#RECAPTCHA_SITE_KEY=your-recaptcha-site-key
#RECAPTCHA_SECRET_KEY=your-recaptcha-secret-key
```
However, when I run `php artisan migrate` per the instructions, I hit a wall. The error output is below.
```
OpenAI API key is not configured. Please set OPENAI_API_KEY in your .env file.

at app/Services/OpenAIService.php:21
     17▕         $this->model = config('services.openai.model', 'gpt-4o-mini');
     18▕         $this->baseUrl = config('services.openai.base_url', 'https://api.openai.com/v1');
     19▕
     20▕         if (empty($this->apiKey)) {
  ➜  21▕             throw new \InvalidArgumentException('OpenAI API key is not configured. Please set OPENAI_API_KEY in your .env file.');
     22▕         }
     23▕     }
     24▕
     25▕     public function analyzeReviews(array $reviews): array

  1   [internal]:0
      App\Services\OpenAIService::__construct()

  +25 vendor frames

  27  artisan:35
      Illuminate\Foundation\Console\Kernel::handle()
```
I cannot seem to select Ollama as the LLM provider via the .env configuration file; it keeps insisting I provide an OpenAI API key, which I would prefer not to use. How can this be fixed?
Thank you again!