10 changes: 10 additions & 0 deletions docs.json
@@ -1255,6 +1255,7 @@
"guides/use-cases/run-portkey-on-prompts-from-langchain-hub",
"guides/use-cases/smart-fallback-with-model-optimized-prompts",
"guides/use-cases/how-to-use-openai-sdk-with-portkey-prompt-templates",
"guides/use-cases/automated-prompt-replication",
"guides/use-cases/setup-openai-greater-than-azure-openai-fallback",
"guides/use-cases/fallback-from-sdxl-to-dall-e-3",
"guides/use-cases/comparing-top10-lmsys-models-with-portkey",
@@ -2239,6 +2240,7 @@
"guides/use-cases/run-portkey-on-prompts-from-langchain-hub",
"guides/use-cases/smart-fallback-with-model-optimized-prompts",
"guides/use-cases/how-to-use-openai-sdk-with-portkey-prompt-templates",
"guides/use-cases/automated-prompt-replication",
"guides/use-cases/setup-openai-greater-than-azure-openai-fallback",
"guides/use-cases/fallback-from-sdxl-to-dall-e-3",
"guides/use-cases/comparing-top10-lmsys-models-with-portkey",
@@ -2481,6 +2483,14 @@
}
},
"redirects": [
{
"source": "/api-reference/admin-api/control-plane/prompts/automated-prompt-replication",
"destination": "/guides/use-cases/automated-prompt-replication"
},
{
"source": "/guides/prompts/automated-prompt-replication",
"destination": "/guides/use-cases/automated-prompt-replication"
},
{
"source": "/integrations/observability-integrations",
"destination": "/product/observability/opentelemetry/list-of-supported-otel-instrumenters"
1 change: 1 addition & 0 deletions guides/use-cases.mdx
@@ -12,6 +12,7 @@ title: Overview
<Card title="Run Portkey on Prompts from Langchain Hub" href="/guides/use-cases/run-portkey-on-prompts-from-langchain-hub" />
<Card title="Smart Fallback with Model-Optimized Prompts" href="/guides/use-cases/smart-fallback-with-model-optimized-prompts" />
<Card title="How to use OpenAI SDK with Portkey Prompt Templates" href="/guides/use-cases/how-to-use-openai-sdk-with-portkey-prompt-templates" />
<Card title="Automated Prompt Replication" href="/guides/use-cases/automated-prompt-replication" />
<Card title="Setup OpenAI -> Azure OpenAI Fallback" href="/guides/use-cases/setup-openai-greater-than-azure-openai-fallback" />
<Card title="Fallback from SDXL to Dall-e-3" href="/guides/use-cases/fallback-from-sdxl-to-dall-e-3" />
<Card title="Comparing Top10 LMSYS Models with Portkey" href="/guides/use-cases/comparing-top10-lmsys-models-with-portkey" />
270 changes: 270 additions & 0 deletions guides/use-cases/automated-prompt-replication.mdx
@@ -0,0 +1,270 @@
---
title: "Automated Prompt Replication"
description: "Bulk-copy Portkey prompts and point them at a new model using the Admin API—no manual duplication in the UI."
---

Use this workflow when many prompts target one model (for example Claude 3.7) and you want **replicas** that keep the same template and settings but run on another model (for example Claude 3.5 Sonnet on Bedrock). The [List prompts](/api-reference/admin-api/control-plane/prompts/list-prompts), [Retrieve prompt](/api-reference/admin-api/control-plane/prompts/retrieve-prompt), and [Create prompt](/api-reference/admin-api/control-plane/prompts/create-prompt) endpoints drive the migration.

<Info>
**Auth:** Use an [Admin API key](/api-reference/admin-api/introduction) or a **Workspace API key** with prompt permissions. Send `x-portkey-api-key` on every request.
</Info>

## When to use this

- Migrate dozens of prompts to a new default model after a provider or catalog change
- Keep originals untouched by creating **named replicas** (for example append `-replica`)
- Automate what would otherwise be repeated copy-paste in Prompt Studio

## How it works

1. **List** all prompts and collect their IDs.
2. **Retrieve** each prompt’s full definition (template `string`, `parameters`, `virtual_key`, metadata, etc.).
3. **Create** a new prompt per ID with the same body fields and a **new `model`** value.

Replace the example model string (`anthropic.claude-3-5-sonnet`) with the exact model identifier your workspace uses.
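The three steps can be condensed into a small helper that maps a retrieved prompt onto a create payload. This is a sketch that assumes the flat field names used throughout this guide (`name`, `string`, `parameters`, `virtual_key`, `template_metadata`); log one retrieve response and adjust the keys if your workspace differs.

```python
def build_replica_payload(prompt_data: dict, target_model: str, suffix: str = "-replica") -> dict:
    """Copy the fields the create endpoint expects, overriding only model and name.

    Assumes the flat field names shown in Step 2; log one retrieve
    response and adjust the keys if your workspace returns different ones.
    """
    return {
        "name": prompt_data["name"] + suffix,
        "collection_id": prompt_data["collection_id"],
        "string": prompt_data["string"],
        "parameters": prompt_data["parameters"],
        "virtual_key": prompt_data["virtual_key"],
        "model": target_model,
        "version_description": prompt_data.get(
            "prompt_version_description", "Replicated prompt"
        ),
        "template_metadata": prompt_data["template_metadata"],
    }
```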

---

## Step 1: List prompts and collect IDs

<Tabs>
<Tab title="Python">
```python
import requests

BASE = "https://api.portkey.ai/v1"
headers = {"x-portkey-api-key": "YOUR_API_KEY"}

r = requests.get(f"{BASE}/prompts", headers=headers)
r.raise_for_status()

prompt_ids = [item["id"] for item in r.json()["data"]]
print(prompt_ids)
```
</Tab>
<Tab title="Node.js">
```javascript
async function main() {
  const BASE = 'https://api.portkey.ai/v1';
  const headers = { 'x-portkey-api-key': process.env.PORTKEY_API_KEY ?? 'YOUR_API_KEY' };

  const res = await fetch(`${BASE}/prompts`, { headers });
  if (!res.ok) throw new Error(`List prompts failed: ${res.status}`);
  const { data } = await res.json();
  const promptIds = data.map((row) => row.id);
  console.log(promptIds);
}

main().catch(console.error);
```
</Tab>
</Tabs>

<Note>
**Python (steps 2–3):** Run the snippets in order in the same session so `BASE`, `headers`, and `prompt_ids` / `prompt_data` stay in scope. **Node.js:** Examples use native `fetch` (Node.js 18+).
</Note>

---

## Step 2: Fetch one prompt’s full configuration

Fetch one prompt and inspect the response to see which fields the API returns before building the create payload (field names can vary slightly by version; log one response and adjust keys if needed).

<Tabs>
<Tab title="Python">
```python
prompt_id = prompt_ids[0]
url = f"{BASE}/prompts/{prompt_id}"

response = requests.get(url, headers=headers, timeout=10)
response.raise_for_status()
prompt_data = response.json()
print(prompt_data)
```
</Tab>
<Tab title="Node.js">
```javascript
async function main() {
  const BASE = 'https://api.portkey.ai/v1';
  const headers = { 'x-portkey-api-key': process.env.PORTKEY_API_KEY ?? 'YOUR_API_KEY' };

  const listRes = await fetch(`${BASE}/prompts`, { headers });
  if (!listRes.ok) {
    throw new Error(`List prompts failed: ${listRes.status} ${await listRes.text()}`);
  }
  const { data } = await listRes.json();
  const promptId = data[0].id;

  const promptRes = await fetch(`${BASE}/prompts/${promptId}`, { headers });
  if (!promptRes.ok) {
    throw new Error(`Retrieve prompt failed: ${promptRes.status} ${await promptRes.text()}`);
  }
  const promptData = await promptRes.json();
  console.log(promptData);
}

main().catch(console.error);
```
</Tab>
</Tabs>

---

## Step 3: Create a single replicated prompt

The replica reuses template content and metadata, overrides **`model`**, and uses a distinct **`name`** so it does not collide with the original.

<Tabs>
<Tab title="Python">
```python
TARGET_MODEL = "anthropic.claude-3-5-sonnet"

payload = {
    "name": prompt_data["name"] + "-replica",
    "collection_id": prompt_data["collection_id"],
    "string": prompt_data["string"],
    "parameters": prompt_data["parameters"],
    "virtual_key": prompt_data["virtual_key"],
    "model": TARGET_MODEL,
    "version_description": prompt_data.get(
        "prompt_version_description", "Replicated prompt"
    ),
    "template_metadata": prompt_data["template_metadata"],
}

r = requests.post(f"{BASE}/prompts", json=payload, headers=headers)
r.raise_for_status()
print(r.json())
```
</Tab>
<Tab title="Node.js">
```javascript
async function main() {
  const BASE = 'https://api.portkey.ai/v1';
  const headers = { 'x-portkey-api-key': process.env.PORTKEY_API_KEY ?? 'YOUR_API_KEY' };
  const TARGET_MODEL = 'anthropic.claude-3-5-sonnet';

  const listRes = await fetch(`${BASE}/prompts`, { headers });
  if (!listRes.ok) {
    throw new Error(`List prompts failed: ${listRes.status} ${await listRes.text()}`);
  }
  const { data } = await listRes.json();

  const promptRes = await fetch(`${BASE}/prompts/${data[0].id}`, { headers });
  if (!promptRes.ok) {
    throw new Error(`Retrieve prompt failed: ${promptRes.status} ${await promptRes.text()}`);
  }
  const promptData = await promptRes.json();

  const payload = {
    name: `${promptData.name}-replica`,
    collection_id: promptData.collection_id,
    string: promptData.string,
    parameters: promptData.parameters,
    virtual_key: promptData.virtual_key,
    model: TARGET_MODEL,
    version_description:
      promptData.prompt_version_description ?? 'Replicated prompt',
    template_metadata: promptData.template_metadata,
  };

  const createRes = await fetch(`${BASE}/prompts`, {
    method: 'POST',
    headers: { ...headers, 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  const created = await createRes.json();
  if (!createRes.ok) {
    throw new Error(`Failed to create prompt (${createRes.status}): ${JSON.stringify(created)}`);
  }

  console.log(created);
}

main().catch(console.error);
```
</Tab>
</Tabs>

---

## Full loop: replicate every prompt

<Tabs>
<Tab title="Python">
```python
import requests

BASE = "https://api.portkey.ai/v1"
TARGET_MODEL = "anthropic.claude-3-5-sonnet"

headers = {"x-portkey-api-key": "YOUR_API_KEY"}

list_res = requests.get(f"{BASE}/prompts", headers=headers, timeout=10)
list_res.raise_for_status()
prompt_ids = [row["id"] for row in list_res.json()["data"]]

for prompt_id in prompt_ids:
    retrieve_res = requests.get(f"{BASE}/prompts/{prompt_id}", headers=headers, timeout=10)
    retrieve_res.raise_for_status()
    data = retrieve_res.json()

    payload = {
        "name": data["name"] + "-replica",
        "collection_id": data["collection_id"],
        "string": data["string"],
        "parameters": data["parameters"],
        "virtual_key": data["virtual_key"],
        "model": TARGET_MODEL,
        "version_description": data.get(
            "prompt_version_description", "Replicated"
        ),
        "template_metadata": data["template_metadata"],
    }

    r = requests.post(f"{BASE}/prompts", json=payload, headers=headers, timeout=10)
    r.raise_for_status()
    print(r.json())
```
</Tab>
<Tab title="Node.js">
```javascript
async function main() {
  const BASE = 'https://api.portkey.ai/v1';
  const TARGET_MODEL = 'anthropic.claude-3-5-sonnet';
  const headers = { 'x-portkey-api-key': process.env.PORTKEY_API_KEY ?? 'YOUR_API_KEY' };

  const listRes = await fetch(`${BASE}/prompts`, { headers });
  if (!listRes.ok) throw new Error(`List failed: ${listRes.status}`);
  const { data: rows } = await listRes.json();
  const promptIds = rows.map((r) => r.id);

  for (const promptId of promptIds) {
    const retrieveRes = await fetch(`${BASE}/prompts/${promptId}`, { headers });
    if (!retrieveRes.ok) {
      throw new Error(`Retrieve failed for ${promptId}: ${retrieveRes.status} ${await retrieveRes.text()}`);
    }
    const data = await retrieveRes.json();

    const payload = {
      name: `${data.name}-replica`,
      collection_id: data.collection_id,
      string: data.string,
      parameters: data.parameters,
      virtual_key: data.virtual_key,
      model: TARGET_MODEL,
      version_description: data.prompt_version_description ?? 'Replicated',
      template_metadata: data.template_metadata,
    };

    const r = await fetch(`${BASE}/prompts`, {
      method: 'POST',
      headers: { ...headers, 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    });
    if (!r.ok) throw new Error(`Create failed for ${promptId}: ${r.status}`);
    console.log(await r.json());
  }
}

main().catch(console.error);
```
</Tab>
</Tabs>

<Note>
**Field names:** If `retrieve` responses use different keys (for example nested version objects), log one response and map fields explicitly. **Null `collection_id`:** Omit or pass `null` only if the create API accepts it for your workspace. **Rate limits:** Add backoff or batching for very large prompt libraries.
</Note>
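For large libraries, each create call can be wrapped in a small retry helper so transient rate-limit errors don't abort the loop midway. This is a generic sketch, not a Portkey-specific API; the retry count and base delay are illustrative defaults.

```python
import time


def with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    """Call fn() and retry with exponential backoff on any exception.

    Re-raises the last exception once max_retries attempts are exhausted.
    Wrap each requests call when replicating large prompt libraries so a
    transient 429 doesn't abort the loop.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

Usage: `with_backoff(lambda: requests.post(f"{BASE}/prompts", json=payload, headers=headers, timeout=10))`.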

## After replication

- Point applications at the **new prompt IDs** or keep names predictable (for example `*-replica`) and resolve by name if your tooling supports it.
- For runtime calls, use the [Prompt API](/product/prompt-engineering-studio/prompt-api) (`/v1/prompts/{promptId}/completions`) with the replica’s ID.
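To verify a replica end to end, call it by ID through the Prompt API. A minimal sketch: the `variables` keys are hypothetical and must match your template's placeholders, and the exact response shape depends on the underlying model.

```python
import requests

BASE = "https://api.portkey.ai/v1"


def completion_url(prompt_id: str, base: str = BASE) -> str:
    """Build the Prompt API completions URL for a prompt ID."""
    return f"{base}/prompts/{prompt_id}/completions"


def run_replica(prompt_id: str, variables: dict, api_key: str) -> dict:
    """Render and run a prompt template by ID via the Prompt API."""
    res = requests.post(
        completion_url(prompt_id),
        headers={"x-portkey-api-key": api_key},
        json={"variables": variables},  # keys must match the template's placeholders
        timeout=30,
    )
    res.raise_for_status()
    return res.json()
```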

## Summary

| Step | Action |
|:-----|:-------|
| 1 | `GET /v1/prompts` → collect IDs |
| 2 | `GET /v1/prompts/{id}` → read full config |
| 3 | `POST /v1/prompts` → same body + new `model` + new `name` |

Bulk replication avoids manual duplication, keeps templates aligned, and makes model upgrades repeatable across the workspace.
2 changes: 2 additions & 0 deletions product/prompt-engineering-studio/prompt-guides.mdx
@@ -15,6 +15,8 @@ You can easily access Prompt Engineering Studio using [https://prompt.new](https
<CardGroup cols={2}>
<Card title="Create a chatbot using Portkey Prompt Templates" icon="graduation-cap" href="/guides/prompts/build-a-chatbot-using-portkeys-prompt-templates">
</Card>
<Card title="Automated prompt replication (Admin API)" icon="copy" href="/guides/use-cases/automated-prompt-replication">
</Card>
</CardGroup>

