-
Notifications
You must be signed in to change notification settings - Fork 1
Add LLM with user-controlled prompts #15
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Open
danenania
wants to merge
1
commit into
master
Choose a base branch
from
test-feedback-1765864130
base: master
Could not load branches
Branch not found: {{ refName }}
Loading
Could not load tags
Nothing to show
Loading
Are you sure you want to change the base?
Some commits from the old base branch may be removed from the timeline,
and old review comments may become outdated.
Open
Changes from all commits
Commits
File filter
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
There are no files selected for viewing
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,23 @@ | ||
| // LLM integration with user input directly in prompt | ||
| async function askAI(userInput: string) { | ||
| const response = await openai.chat.completions.create({ | ||
| model: "gpt-4", | ||
| messages: [ | ||
| { role: "system", content: "You are a helpful assistant." }, | ||
| { role: "user", content: userInput } | ||
| ] | ||
| }); | ||
| return response.choices[0].message.content; | ||
| } | ||
|
|
||
| // Dangerous: user controls system prompt | ||
| async function customAssistant(systemPrompt: string, question: string) { | ||
| const response = await openai.chat.completions.create({ | ||
| model: "gpt-4", | ||
| messages: [ | ||
| { role: "system", content: systemPrompt }, | ||
| { role: "user", content: question } | ||
| ] | ||
| }); | ||
| return response.choices[0].message.content; | ||
| } | ||
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🟠 High
The
customAssistant()function accepts asystemPromptparameter that flows directly to the system role message without any validation. This allows callers to completely override the LLM's instructions and security boundaries. An attacker can inject arbitrary system-level instructions to manipulate the model's behavior, bypass safety constraints, or extract sensitive information from context.💡 Suggested Fix
Hardcode the system prompt instead of accepting it as a parameter:
Alternatively, if multiple assistant types are needed, use a whitelist of predefined system prompts rather than accepting arbitrary user input.
🤖 AI Agent Prompt
The code at
llm_vuln.ts:14-22contains a prompt injection vulnerability where the system prompt is user-controllable via thesystemPromptparameter. This eliminates any security boundary between user input and system instructions.Investigate how
customAssistant()is called in the broader codebase. Determine if there are legitimate use cases requiring multiple system prompt variations. If so, implement a whitelist approach with predefined prompts. If not, hardcode a single secure system prompt.Check if there are any validation or authorization layers before this function is called. Search for any API endpoints or entry points that expose this function to external users. Consider whether the application's security model depends on prompt-based controls, and if so, what defense-in-depth measures should be added beyond fixing this specific vulnerability.
Was this helpful? 👍 Yes | 👎 No