💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```python
exec(  # nosec
    code,
    {"__builtins__": {"range": range, "len": len, "min": min, "max": max}},
    local_env,
```
Allow generated scripts to import required modules
The exec sandbox strips __import__ by replacing __builtins__ with only range/len/min/max, so any generated script that includes an import statement (e.g., the common from neqsim.thermo import fluid or import pandas as pd) raises ImportError: __import__ not found before any process logic runs. Because the auto-correction loop keeps using the same sandbox, these scripts will never succeed even after retries, preventing users from running most model-generated NeqSim examples.
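A minimal sketch of the fix the comment suggests. The `run_sandboxed` helper and the module whitelist are hypothetical, not the PR's actual function; the point is that `import` statements inside `exec` resolve `__import__` from the supplied `__builtins__` dict, so re-adding a guarded `__import__` lets whitelisted imports succeed while everything else still raises `ImportError`:

```python
def run_sandboxed(code: str, allowed_modules=("math",)):
    """Execute generated code with a restricted builtins dict.

    Hypothetical helper: allowed_modules would be extended with the
    packages generated scripts actually need (e.g. neqsim, pandas).
    """
    def safe_import(name, globals=None, locals=None, fromlist=(), level=0):
        # Only the top-level package name is checked against the whitelist.
        root = name.split(".")[0]
        if root not in allowed_modules:
            raise ImportError(f"module {name!r} is not whitelisted")
        return __import__(name, globals, locals, fromlist, level)

    env = {"__builtins__": {
        "range": range, "len": len, "min": min, "max": max,
        # Without this entry, any `import` in the script fails with
        # "ImportError: __import__ not found", as described above.
        "__import__": safe_import,
    }}
    local_env = {}
    exec(code, env, local_env)  # nosec - illustrative sketch only
    return local_env
```

With this shape, `run_sandboxed("import math\nx = math.sqrt(16)")` succeeds, while a non-whitelisted `import os` still raises `ImportError`, so the auto-correction loop can at least make progress on scripts that use approved packages.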
```python
def request_process_script(prompt_text: str, *, model: str, api_key: str) -> str:
    OpenAI.api_key = api_key
    client = OpenAI(api_key=api_key)
    completion = client.completions.create(
        model=model,
        prompt=prompt_text,
```
gpt-4o-mini unsupported by completions endpoint
The request uses client.completions.create(model=model, ...), but the sidebar offers gpt-4o-mini, which is a chat-only model and is rejected by the completions API. Selecting that option results in an API error instead of a generated script, blocking users from using the default newer model. The call needs the chat completions endpoint for chat models or the option should be limited to completion-capable models.
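One way to route the call as the comment suggests: dispatch chat-only models to `client.chat.completions.create` and keep the legacy `completions` endpoint for completion-capable models. The `CHAT_ONLY_MODELS` whitelist and the `endpoint_for` helper are assumptions for illustration, not part of the PR:

```python
# Assumed whitelist of chat-only models; the app would maintain this list.
CHAT_ONLY_MODELS = {"gpt-4o", "gpt-4o-mini", "gpt-4-turbo"}

def endpoint_for(model: str) -> str:
    """Pick the API surface a model supports (simple whitelist heuristic)."""
    return "chat" if model in CHAT_ONLY_MODELS else "completions"

def request_process_script(prompt_text: str, *, model: str, api_key: str) -> str:
    # Imported lazily so the routing logic above is testable without the SDK.
    from openai import OpenAI
    client = OpenAI(api_key=api_key)
    if endpoint_for(model) == "chat":
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt_text}],
        )
        return resp.choices[0].message.content
    # Legacy path for completion-capable models, e.g. gpt-3.5-turbo-instruct.
    resp = client.completions.create(model=model, prompt=prompt_text)
    return resp.choices[0].text
```

Alternatively, the sidebar could simply restrict the model dropdown to completion-capable models, avoiding the dual code path.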