Skip to content
Open
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
23 changes: 23 additions & 0 deletions llm_vuln.ts
Original file line number Diff line number Diff line change
@@ -0,0 +1,23 @@
// LLM integration with user input directly in prompt
async function askAI(userInput: string) {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: userInput }
]
});
return response.choices[0].message.content;
}

// Dangerous: user controls system prompt
async function customAssistant(systemPrompt: string, question: string) {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [
{ role: "system", content: systemPrompt },
{ role: "user", content: question }
]
});
return response.choices[0].message.content;
Comment on lines +14 to +22

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟠 High

The customAssistant() function accepts a systemPrompt parameter that flows directly to the system role message without any validation. This allows callers to completely override the LLM's instructions and security boundaries. An attacker can inject arbitrary system-level instructions to manipulate the model's behavior, bypass safety constraints, or extract sensitive information from context.

💡 Suggested Fix

Hardcode the system prompt instead of accepting it as a parameter:

async function customAssistant(question: string) {
  const SYSTEM_PROMPT = "You are a helpful assistant. Do not reveal internal information or execute commands.";

  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: question }
    ]
  });
  return response.choices[0].message.content;
}

Alternatively, if multiple assistant types are needed, use a whitelist of predefined system prompts rather than accepting arbitrary user input.

🤖 AI Agent Prompt

The code at llm_vuln.ts:14-22 contains a prompt injection vulnerability where the system prompt is user-controllable via the systemPrompt parameter. This eliminates any security boundary between user input and system instructions.

Investigate how customAssistant() is called in the broader codebase. Determine if there are legitimate use cases requiring multiple system prompt variations. If so, implement a whitelist approach with predefined prompts. If not, hardcode a single secure system prompt.

Check if there are any validation or authorization layers before this function is called. Search for any API endpoints or entry points that expose this function to external users. Consider whether the application's security model depends on prompt-based controls, and if so, what defense-in-depth measures should be added beyond fixing this specific vulnerability.


Was this helpful? 👍 Yes | 👎 No

}