feat: add GitHub Copilot as a model provider (#34)

greatbody wants to merge 3 commits into paoloanzn:main
Conversation
Integrate the Copilot API (api.githubcopilot.com) as an OpenAI-compatible provider. Supports model listing, chat completions, and dynamic model refresh via the `/models` endpoint.

Activation: `CLAUDE_CODE_USE_COPILOT=1 COPILOT_TOKEN=<token>`
Or interactively: `/provider copilot`
…esh utility. Add OpenAI-related env vars (API key, base URL, default model) to the managed and safe env var sets. Add a `refreshModelStringsForCurrentProvider` helper and an OpenAI-compatible fetch adapter test.
📝 Walkthrough

This PR introduces OpenAI and GitHub Copilot as new API providers alongside the existing first-party, Bedrock, Vertex, and Foundry options. It adds a

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant ProviderCmd as /provider Command
    participant SettingsAPI as Settings API
    participant APIClient as API Client
    participant OpenAICompat as OpenAI Adapter
    participant OpenAIAPI as OpenAI-compatible API
    User->>ProviderCmd: /provider openai
    ProviderCmd->>APIClient: getAPIProvider()
    APIClient-->>ProviderCmd: returns current provider
    ProviderCmd->>SettingsAPI: updateSettingsForSource<br/>(userSettings, env)
    SettingsAPI-->>ProviderCmd: persists provider env vars
    ProviderCmd->>APIClient: mutate process.env<br/>with CLAUDE_CODE_USE_OPENAI
    ProviderCmd->>APIClient: refreshModelStringsForCurrentProvider()
    APIClient-->>ProviderCmd: refreshes model options
    ProviderCmd-->>User: "Provider switched to OpenAI"
    Note over User,APIClient: Subsequent API calls
    User->>APIClient: request with model
    APIClient->>OpenAICompat: createOpenAICompatibleFetch()
    OpenAICompat->>OpenAICompat: translate Anthropic→OpenAI
    OpenAICompat->>OpenAIAPI: POST /chat/completions
    OpenAIAPI-->>OpenAICompat: OpenAI response
    OpenAICompat->>OpenAICompat: translate OpenAI→Anthropic
    OpenAICompat-->>APIClient: Anthropic-format response
    APIClient-->>User: message
```
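As a companion to the diagram's "translate Anthropic→OpenAI" step, here is a minimal, illustrative sketch of the request-body mapping. The `AnthropicBody` type and `toOpenAIChatBody` helper are simplified stand-ins, not the PR's actual adapter code.

```typescript
// Illustrative sketch of the Anthropic→OpenAI request translation the
// adapter performs; field handling is simplified for demonstration.
interface AnthropicBody {
  model: string;
  system?: string;
  max_tokens: number;
  messages: { role: "user" | "assistant"; content: string }[];
}

function toOpenAIChatBody(body: AnthropicBody) {
  // OpenAI chat completions carries the system prompt as a leading message.
  const messages = body.system
    ? [{ role: "system" as const, content: body.system }, ...body.messages]
    : [...body.messages];
  return { model: body.model, max_tokens: body.max_tokens, messages };
}

const out = toOpenAIChatBody({
  model: "gpt-4o",
  system: "You are terse.",
  max_tokens: 256,
  messages: [{ role: "user", content: "hi" }],
});
// out.messages now starts with the system message, followed by the user turn
```

The reverse direction (OpenAI response back to Anthropic format) follows the same idea applied to the response body.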
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Pull request overview
This PR adds GitHub Copilot as an additional model/API provider by routing requests through an OpenAI-compatible adapter. It also introduces a /provider command for switching providers and updates environment-variable handling and default-model selection accordingly.
Changes:
- Add `copilot` to provider selection and status reporting, plus a `/provider` command to switch providers via settings/env flags.
- Introduce an OpenAI-compatible fetch adapter (chat completions + `/models` refresh) and wire it into API bootstrap/client creation.
- Update model config/selection logic to account for OpenAI-compatible and Copilot providers; rename the package to `free-code`.
Reviewed changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated 5 comments.
Summary per file:
| File | Description |
|---|---|
| src/utils/status.tsx | Displays provider/base URL details in /status, including OpenAI-compatible and Copilot branches. |
| src/utils/model/providers.ts | Adds copilot to APIProvider and provider selection env flag handling. |
| src/utils/model/modelStrings.ts | Adds refreshModelStringsForCurrentProvider helper for provider switching. |
| src/utils/model/modelOptions.ts | Adds OpenAI/Copilot model option sets and continues to append cached model options. |
| src/utils/model/model.ts | Adjusts default model selection for OpenAI-compatible/Copilot providers. |
| src/utils/model/configs.ts | Extends all model configs to include copilot provider entries. |
| src/utils/managedEnvConstants.ts | Registers OpenAI-related env vars in managed/safe sets. |
| src/services/api/openaiCompatible.ts | New adapter translating Anthropic /v1/messages to OpenAI /chat/completions, plus model refresh for OpenAI and Copilot. |
| src/services/api/openaiCompatible.test.ts | Adds Bun tests for the OpenAI-compatible fetch adapter behavior. |
| src/services/api/client.ts | Adds OpenAI-compatible and Copilot client branches using the adapter. |
| src/services/api/bootstrap.ts | Refreshes OpenAI/Copilot models via /models at startup for those providers. |
| src/commands/provider/provider.ts | Implements /provider command to switch provider flags in settings/env. |
| src/commands/provider/index.ts | Registers the /provider command. |
| src/commands.ts | Adds provider command to the command list. |
| package.json | Renames package to free-code. |
```ts
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
'CLAUDE_CODE_USE_OPENAI',
// Endpoint config (base URLs, project/resource identifiers)
```
PROVIDER_MANAGED_ENV_VARS includes flags for Bedrock/Vertex/Foundry/OpenAI but not the new Copilot flag. If CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST is set, settings-sourced CLAUDE_CODE_USE_COPILOT would not be filtered, allowing users to override a host-managed provider selection. Add CLAUDE_CODE_USE_COPILOT (and consider adding Copilot routing/auth vars like COPILOT_BASE_URL/COPILOT_TOKEN in the appropriate section) to this set.
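To illustrate the filtering this set drives, here is a simplified sketch. The set contents and the `filterSettingsEnv` helper are stand-ins mirroring the review comment, not the project's actual code.

```typescript
// Simplified stand-in for PROVIDER_MANAGED_ENV_VARS, with the Copilot
// entries the comment asks for already included.
const PROVIDER_MANAGED_ENV_VARS = new Set([
  "CLAUDE_CODE_USE_BEDROCK",
  "CLAUDE_CODE_USE_VERTEX",
  "CLAUDE_CODE_USE_FOUNDRY",
  "CLAUDE_CODE_USE_OPENAI",
  "CLAUDE_CODE_USE_COPILOT",
  "COPILOT_BASE_URL",
  "COPILOT_TOKEN",
]);

// When the host manages the provider, settings-sourced vars in the set
// must be dropped so users cannot override the host's routing.
function filterSettingsEnv(env: Record<string, string>): Record<string, string> {
  return Object.fromEntries(
    Object.entries(env).filter(([key]) => !PROVIDER_MANAGED_ENV_VARS.has(key)),
  );
}

const filtered = filterSettingsEnv({
  CLAUDE_CODE_USE_COPILOT: "1",
  EDITOR: "vim",
});
// CLAUDE_CODE_USE_COPILOT is dropped; EDITOR survives
```

Without `CLAUDE_CODE_USE_COPILOT` in the set, that key would pass straight through the filter, which is exactly the escape hatch the comment describes.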
```ts
const providerResponse = await globalThis.fetch(
  `${baseUrl}/chat/completions`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${options.apiKey}`,
    },
```
The OpenAI-compatible adapter drops all headers/signals from the original Anthropic SDK request when it forwards to the provider (it hardcodes only Content-Type + Authorization). This prevents provider-specific required headers (e.g. Copilot's Openai-Intent) and abort/cancellation (init.signal) from being propagated. Merge/forward relevant headers (or accept extra headers via options) and pass through signal to the provider fetch.
```diff
-const providerResponse = await globalThis.fetch(
-  `${baseUrl}/chat/completions`,
-  {
-    method: "POST",
-    headers: {
-      "Content-Type": "application/json",
-      Authorization: `Bearer ${options.apiKey}`,
-    },
+const forwardedHeaders = new Headers(
+  input instanceof Request ? input.headers : undefined,
+);
+if (init?.headers) {
+  const initHeaders = new Headers(init.headers);
+  initHeaders.forEach((value, key) => {
+    forwardedHeaders.set(key, value);
+  });
+}
+forwardedHeaders.set("Content-Type", "application/json");
+forwardedHeaders.set("Authorization", `Bearer ${options.apiKey}`);
+const forwardedSignal =
+  init?.signal ?? (input instanceof Request ? input.signal : undefined);
+const providerResponse = await globalThis.fetch(
+  `${baseUrl}/chat/completions`,
+  {
+    method: "POST",
+    headers: forwardedHeaders,
+    signal: forwardedSignal,
```
```ts
Object.assign(process.env, providerEnv(parsed));
await refreshModelStringsForCurrentProvider();
```
After switching providers, only model strings are refreshed. The dynamic model list (additionalModelOptionsCache) is fetched in bootstrap for OpenAI/Copilot, but /provider switching won’t refresh it, so the model picker may still show stale models from a previous provider (and default-model selection may use the wrong cached entry). Consider triggering refreshOpenAICompatibleModelOptions / refreshCopilotModelOptions when switching to those providers (or clearing the cache).
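One way to address this, sketched with stub refresh functions standing in for the real helpers named above (`refreshModelStringsForCurrentProvider`, `refreshOpenAICompatibleModelOptions`, `refreshCopilotModelOptions`); illustrative only:

```typescript
type Provider =
  | "firstParty" | "bedrock" | "vertex" | "foundry" | "openai" | "copilot";

// Stubs that record calls; the real functions hit the provider's /models API.
const calls: string[] = [];
async function refreshModelStrings(): Promise<void> { calls.push("strings"); }
async function refreshOpenAIModels(): Promise<void> { calls.push("openai"); }
async function refreshCopilotModels(): Promise<void> { calls.push("copilot"); }

// After switching, refresh model strings AND the dynamic model options cache
// so the picker does not keep showing the previous provider's models.
async function refreshAfterSwitch(provider: Provider): Promise<void> {
  await refreshModelStrings();
  if (provider === "openai") await refreshOpenAIModels();
  if (provider === "copilot") await refreshCopilotModels();
}
```

Alternatively, simply clearing the cached options on switch forces a refetch on next use.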
```diff
 const providerLabel = {
-  bedrock: 'AWS Bedrock',
-  vertex: 'Google Vertex AI',
-  foundry: 'Microsoft Foundry'
+  bedrock: "AWS Bedrock",
+  vertex: "Google Vertex AI",
+  foundry: "Microsoft Foundry",
+  openai: "OpenAI-compatible",
```
providerLabel mapping does not include the new copilot provider. When apiProvider === "copilot", providerLabel becomes undefined, so /status will show a blank API provider value. Add a copilot entry (e.g., "GitHub Copilot") or provide a safe fallback before pushing the property.
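A sketch of the suggested fix with both a `copilot` entry and a safe fallback; the label strings are assumptions, not taken from the PR:

```typescript
const providerLabel: Record<string, string> = {
  bedrock: "AWS Bedrock",
  vertex: "Google Vertex AI",
  foundry: "Microsoft Foundry",
  openai: "OpenAI-compatible",
  copilot: "GitHub Copilot", // the missing entry
};

// Fall back to the raw provider id rather than rendering undefined.
function labelFor(apiProvider: string): string {
  return providerLabel[apiProvider] ?? apiProvider;
}
```

With the fallback in place, even a future provider added without a label shows its id in `/status` instead of a blank value.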
```ts
type: 'local',
name: 'provider',
description: `Switch API provider (currently ${getAPIProvider()})`,
argumentHint: '[first-party|bedrock|vertex|foundry|openai] | show',
```
The provider command advertises /provider but argumentHint omits the newly supported copilot option, which will mislead users. Update the hint to include copilot (and any other supported aliases as appropriate).
```diff
-argumentHint: '[first-party|bedrock|vertex|foundry|openai] | show',
+argumentHint: '[first-party|bedrock|vertex|foundry|openai|copilot] | show',
```
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/utils/managedEnvConstants.ts (1)
14-62: ⚠️ Potential issue | 🟠 Major: Copilot env vars are missing from `PROVIDER_MANAGED_ENV_VARS`.

`CLAUDE_CODE_USE_OPENAI` and the OpenAI endpoint/auth/default-model vars were added, but the equivalent Copilot controls introduced by this PR are not:

- `CLAUDE_CODE_USE_COPILOT`: the provider-selection flag this PR documents
- `COPILOT_TOKEN`: required auth per `hasProviderPrereqs("copilot")`
- `COPILOT_BASE_URL`: referenced in `src/utils/status.tsx`

Without these, a host setting `CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST` cannot prevent a user's `~/.claude/settings.json` from overriding its routing for the Copilot path, which is precisely the invariant this list exists to enforce.

🔒 Proposed addition

```diff
   'CLAUDE_CODE_USE_FOUNDRY',
   'CLAUDE_CODE_USE_OPENAI',
+  'CLAUDE_CODE_USE_COPILOT',
   // Endpoint config (base URLs, project/resource identifiers)
   ...
   'CLAUDE_CODE_OPENAI_BASE_URL',
   'OPENAI_BASE_URL',
+  'COPILOT_BASE_URL',
   ...
   'CLAUDE_CODE_OPENAI_API_KEY',
   'OPENAI_API_KEY',
+  'COPILOT_TOKEN',
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/utils/managedEnvConstants.ts` around lines 14-62, the PROVIDER_MANAGED_ENV_VARS set is missing Copilot-related keys, so host-managed routing can be overridden by user settings; add 'CLAUDE_CODE_USE_COPILOT', 'COPILOT_TOKEN', and 'COPILOT_BASE_URL' to the PROVIDER_MANAGED_ENV_VARS Set (the same collection that already contains 'CLAUDE_CODE_USE_OPENAI', 'OPENAI_API_KEY', and 'OPENAI_BASE_URL') so hasProviderPrereqs("copilot") and the Copilot routing referenced in status.tsx are properly protected by CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST.

src/utils/model/model.ts (1)
9-17: ⚠️ Potential issue | 🔴 Critical: Duplicate import, `isCodexSubscriber` listed twice.

`isCodexSubscriber` appears on both line 12 and line 16 in the same named-import list. This is a TS2300 duplicate-identifier error and will fail type-checking / build.

🐛 Proposed fix

```diff
 import {
   getSubscriptionType,
   isClaudeAISubscriber,
   isCodexSubscriber,
   isMaxSubscriber,
   isProSubscriber,
   isTeamPremiumSubscriber,
-  isCodexSubscriber,
 } from "../auth.js";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/model/model.ts` around lines 9 - 17, The import list in src/utils/model/model.ts includes isCodexSubscriber twice causing a duplicate-identifier TS2300 error; remove the redundant import entry so each symbol (e.g., getSubscriptionType, isClaudeAISubscriber, isCodexSubscriber, isMaxSubscriber, isProSubscriber, isTeamPremiumSubscriber) appears only once in the named-import from "../auth.js" (you can simply delete the second isCodexSubscriber in the import statement).
🧹 Nitpick comments (6)
src/utils/model/model.ts (1)
26: Use a relative import path for consistency.

Every other import in this file uses a relative path (e.g., `"../../bootstrap/state.js"`). This line uses an absolute-style `"src/services/api/openaiCompatible.js"`. While this may resolve via `baseUrl`/`paths`, it breaks the convention used throughout the module and can behave differently across build/test tool configurations.

♻️ Proposed fix

```diff
-import { getOpenAICompatibleDefaultModel } from "src/services/api/openaiCompatible.js";
+import { getOpenAICompatibleDefaultModel } from "../../services/api/openaiCompatible.js";
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/utils/model/model.ts` at line 26, replace the absolute-style import of getOpenAICompatibleDefaultModel with a relative import consistent with the other imports in model.ts, so the module resolves consistently across build/test environments.

src/commands/provider/index.ts (1)
4-11: `description` is captured at module load and goes stale after switching.

`` description: `Switch API provider (currently ${getAPIProvider()})` `` is evaluated once when this module is first imported. After the user runs `/provider copilot`, the command listing still advertises the prior provider until the process restarts.

If a dynamic description is desired, consider computing it lazily (e.g., via a function/getter where the `Command` type supports it) or omitting the "currently …" suffix to avoid advertising stale state.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/commands/provider/index.ts` around lines 4-11, the description string on providerCommand is computed once at module load and becomes stale; change providerCommand.description to be computed at call-time (e.g., make description a getter or function that calls getAPIProvider() when accessed, or, if the Command type doesn't support lazy descriptions, remove the "currently …" suffix and keep a static message) so the displayed provider reflects the current state; update the providerCommand object accordingly.

src/utils/model/modelStrings.ts (1)
168-183: Consider serializing with `sequential` to avoid racing `updateBedrockModelStrings`.

`updateBedrockModelStrings` is wrapped with `sequential` to serialize concurrent callers. `refreshModelStringsForCurrentProvider` calls `getBedrockModelStrings()` directly, so if a `/provider` switch happens while a background bedrock update is in flight (e.g., from `initModelStrings()`), the later write can overwrite the earlier one non-deterministically.

Low-likelihood in practice, but easy to harden by reusing the same sequential guard or invoking `updateBedrockModelStrings` when the target is bedrock.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/utils/model/modelStrings.ts` around lines 168-183, refreshModelStringsForCurrentProvider calls getBedrockModelStrings() directly, which can race with the sequential-wrapped updateBedrockModelStrings; change refreshModelStringsForCurrentProvider so that when provider === 'bedrock' it invokes the same serialized updater (updateBedrockModelStrings) or uses the existing sequential guard rather than calling getBedrockModelStrings() directly, then setModelStringsState with the returned/updated value and preserve the existing error handling path.

src/commands/provider/provider.ts (1)
122-129: Duplicated provider-from-env logic.

`currentProviderFromEnv()` re-implements the exact same env-flag priority as `getAPIProvider()` (already imported at line 3). Since `Object.assign(process.env, providerEnv(parsed))` at line 165 runs after this read, you can simply call `getAPIProvider()` here to avoid drift if new providers are added later.

♻️ Proposed refactor

```diff
-function currentProviderFromEnv(): ProviderName {
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK)) return "bedrock";
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX)) return "vertex";
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY)) return "foundry";
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_COPILOT)) return "copilot";
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) return "openai";
-  return "firstParty";
-}
+// getAPIProvider already returns the same union and uses the same precedence.
```

Then replace the `currentProviderFromEnv()` call at line 147 with `getAPIProvider()` (and drop the now-unused `isEnvTruthy` import).

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/commands/provider/provider.ts` around lines 122-129, currentProviderFromEnv() duplicates getAPIProvider()'s env-flag priority and can drift; remove the duplicate function and replace its usages with getAPIProvider(), and remove the now-unused isEnvTruthy import. Ensure you still import getAPIProvider() if not already, and keep the providerEnv(parsed) assignment as-is so env precedence remains consistent.

src/services/api/openaiCompatible.ts (1)
639-641: `isOpenAICompatibleModel` is too loose; the "o1"/"o3" substring match produces false positives.

`model.includes("o1")` / `model.includes("o3")` will match any model ID containing those two characters anywhere (e.g. `"model-promo1"`, `"claude-co3"`, etc.). Anchor the check to OpenAI's actual naming (exact prefix/regex), or enumerate known model families.

♻️ Proposed tightening

```diff
-export function isOpenAICompatibleModel(model: string): boolean {
-  return model.includes("gpt-") || model.includes("o1") || model.includes("o3");
-}
+export function isOpenAICompatibleModel(model: string): boolean {
+  return /^(gpt-|o1(-|$)|o3(-|$))/.test(model);
+}
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/services/api/openaiCompatible.ts` around lines 639-641, the current isOpenAICompatibleModel function is too permissive because model.includes("o1") / includes("o3") matches any string containing those characters; update isOpenAICompatibleModel to use anchored checks instead: keep "gpt-" as a prefix check and replace the loose includes for "o1"/"o3" with exact prefix checks (e.g., a regex like /^o[13](?:-|$)/) or an explicit whitelist of known OpenAI model IDs.

src/services/api/client.ts (1)
329-371: Copilot/OpenAI branches duplicate most of their shape.

The two branches only differ in credential source (`COPILOT_TOKEN` vs the OpenAI-compatible key), base URL, and the two extra Copilot headers. Consider extracting a small helper (e.g. `buildOpenAICompatibleAnthropicClient(apiKey, baseUrl, extraHeaders)`) to keep them in sync when fields like `timeout`/`retries`/`headers` change.

Also note: the Copilot-specific headers set on lines 367-368 are passed into `toAnthropicClientForOpenAICompatible`'s `defaultHeaders`, but the OpenAI-compatible fetch adapter currently discards `init.headers` when it forwards to `/chat/completions` — see the corresponding comment on `src/services/api/openaiCompatible.ts` (lines 475-485). This means `Openai-Intent`/`x-initiator` never reach `api.githubcopilot.com` today.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/services/api/client.ts` around lines 329-371, the "openai" and "copilot" branches duplicate construction logic; refactor by adding a helper like buildOpenAICompatibleAnthropicClient(apiKey, baseUrl, extraHeaders) that centralizes creating createOpenAICompatibleFetch(...) and calling toAnthropicClientForOpenAICompatible({fetch, maxRetries, timeout: ARGS.timeout, defaultHeaders}). In the copilot branch, call that helper with process.env.COPILOT_TOKEN, COPILOT_BASE_URL (defaulting to "https://api.githubcopilot.com"), and the extra headers ("Openai-Intent" and "x-initiator"); in the openai branch, call it with getOpenAICompatibleApiKey() and getOpenAICompatibleBaseUrl(). Also fix the adapter (createOpenAICompatibleFetch in src/services/api/openaiCompatible.ts) to forward init.headers through to the upstream /chat/completions request so Copilot headers passed via defaultHeaders actually reach api.githubcopilot.com.
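The suggested helper might be shaped roughly like this. The option names follow the review suggestion, while the returned config shape and all header/token values are placeholders, not the project's actual client API:

```typescript
interface OpenAICompatClientOptions {
  apiKey: string;
  baseUrl: string;
  extraHeaders?: Record<string, string>;
}

// Centralizes the fields the two branches share so they stay in sync.
function buildOpenAICompatClientConfig({
  apiKey,
  baseUrl,
  extraHeaders = {},
}: OpenAICompatClientOptions) {
  return {
    baseUrl,
    defaultHeaders: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
      ...extraHeaders,
    },
  };
}

// Copilot branch: same builder, plus identity headers (values are placeholders).
const copilotConfig = buildOpenAICompatClientConfig({
  apiKey: "token-placeholder",
  baseUrl: "https://api.githubcopilot.com",
  extraHeaders: { "Openai-Intent": "placeholder-intent", "x-initiator": "user" },
});
```

The OpenAI branch would call the same builder with its own key and base URL and no extra headers.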
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/commands/provider/index.ts`:
- Line 8: The argumentHint string for the provider command is missing the new
"copilot" option; update the argumentHint constant (argumentHint) in the
provider command module to include "copilot" so it reads like
'[first-party|bedrock|vertex|foundry|openai|copilot] | show' (or equivalent
ordering), ensuring the slash-command UI and help text advertise the new
provider.
In `@src/services/api/bootstrap.ts`:
- Around line 119-127: The openai/copilot refresh calls
(refreshOpenAICompatibleModelOptions and refreshCopilotModelOptions) are outside
the existing try/catch in fetchBootstrapData so their exceptions can escape;
move these calls inside the same try block that wraps the firstParty path (or
wrap each call with its own try/catch that calls logError) so failures degrade
gracefully like the firstParty path and do not propagate out of
fetchBootstrapData.
In `@src/services/api/openaiCompatible.test.ts`:
- Around line 122-160: The test "clamps max_tokens for providers with smaller
limits" sets process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS but deletes it only at
the end of the test, which can leak on failures; ensure the env var is always
cleaned up by either wrapping the test body in a try/finally that deletes
process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS in the finally block, or add an
afterEach hook that deletes process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS so cleanup
runs unconditionally (refer to the test name and the environment variable
CLAUDE_CODE_OPENAI_MAX_TOKENS to locate where to apply the change).
In `@src/services/api/openaiCompatible.ts`:
- Around line 475-485: The outgoing fetch call that creates providerResponse
builds a fresh headers object and drops caller-supplied init?.headers (losing
Copilot-specific and custom headers); fix by merging the incoming headers into
the request headers before overriding adapter-owned fields: take init?.headers
(or any headers passed into the function), copy them into the headers for the
fetch, then set/override "Content-Type" and Authorization: `Bearer
${options.apiKey}` (and any adapter-required headers like Openai-Intent) so
adapter-owned values win; also optionally strip Anthropic-only headers (e.g.,
"anthropic-version", "x-api-key", "x-client-request-id", CCH headers) from the
merged headers before sending.
- Around line 588-599: The fetch used in createOpenAICompatibleFetch (and
specifically the /chat/completions request) is missing required GitHub Copilot
identification headers; update the request headers to include Editor-Version and
Copilot-Integration-Id (and also include Editor-Plugin-Version where
appropriate) alongside existing Authorization, Content-Type, User-Agent and
Openai-Intent so the Copilot API accepts the call; also replace the manual
.replace(/\/+$/, "") normalization in refreshCopilotModelOptions with the
existing trimTrailingSlash() utility to normalize baseUrl consistently.
In `@src/utils/managedEnvConstants.ts`:
- Around line 154-159: Add the missing Copilot flags to the SAFE_ENV_VARS array
by including 'CLAUDE_CODE_USE_COPILOT' and 'COPILOT_BASE_URL' so
provider-selection parity is preserved with other CLAUDE_CODE_USE_* entries;
locate the SAFE_ENV_VARS definition in managedEnvConstants.ts to insert these
exact symbols. Also update the header comment block (around the existing note
that marks ANTHROPIC_* as dangerous) to either document the explicit rationale
why OPENAI_* and CLAUDE_CODE_OPENAI_* are treated as safe despite the same
threat model or move those OpenAI-related symbols out of SAFE_ENV_VARS to match
the documented security stance — ensure the comment references the exact
environment names (OPENAI_BASE_URL, OPENAI_API_KEY, CLAUDE_CODE_OPENAI_BASE_URL,
CLAUDE_CODE_OPENAI_API_KEY) so the reasoning and list remain consistent.
In `@src/utils/status.tsx`:
- Around line 309-320: The providerLabel lookup for apiProvider in the status
rendering is missing the "copilot" key, causing undefined values; update the map
used to compute providerLabel (the object literal assigned to providerLabel) to
include copilot: "Copilot" (or the desired display string) so that when
apiProvider === "copilot" the properties.push({ label: "API provider", value:
providerLabel }) receives a proper string; ensure this change is made in the
same conditional block that references apiProvider and providerLabel in
src/utils/status.tsx so the status pane no longer shows an undefined value.
---
Outside diff comments:
In `@src/utils/managedEnvConstants.ts`:
- Around line 14-62: The PROVIDER_MANAGED_ENV_VARS set is missing
Copilot-related keys so host-managed routing can be overridden by user settings;
add 'CLAUDE_CODE_USE_COPILOT', 'COPILOT_TOKEN', and 'COPILOT_BASE_URL' to the
PROVIDER_MANAGED_ENV_VARS Set (the same collection that already contains
'CLAUDE_CODE_USE_OPENAI', 'OPENAI_API_KEY', and 'OPENAI_BASE_URL') so
hasProviderPrereqs("copilot") and the Copilot routing referenced in status.tsx
are properly protected by CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST.
In `@src/utils/model/model.ts`:
- Around line 9-17: The import list in src/utils/model/model.ts includes
isCodexSubscriber twice causing a duplicate-identifier TS2300 error; remove the
redundant import entry so each symbol (e.g., getSubscriptionType,
isClaudeAISubscriber, isCodexSubscriber, isMaxSubscriber, isProSubscriber,
isTeamPremiumSubscriber) appears only once in the named-import from "../auth.js"
(you can simply delete the second isCodexSubscriber in the import statement).
---
Nitpick comments:
In `@src/commands/provider/index.ts`:
- Around line 4-11: The description string on providerCommand is computed once
at module load and becomes stale; change providerCommand.description to be
computed at call-time (e.g., make description a getter or function that calls
getAPIProvider() when accessed, or if the Command type doesn't support lazy
descriptions remove the "currently …" suffix and keep a static message) so the
displayed provider reflects the current state; update the providerCommand object
(referencing providerCommand and getAPIProvider) accordingly.
In `@src/commands/provider/provider.ts`:
- Around line 122-129: currentProviderFromEnv() duplicates getAPIProvider()'s
env-flag priority and can drift; remove the duplicate function and replace its
usages (e.g., the call currently at the location that chooses the provider) with
getAPIProvider(), and remove the now-unused isEnvTruthy import. Ensure you still
import getAPIProvider() if not already and keep providerEnv(parsed) assignment
as-is so env precedence remains consistent.
In `@src/services/api/client.ts`:
- Around line 329-371: The two branches for getAPIProvider() ("openai" and
"copilot") duplicate construction logic; refactor by adding a helper like
buildOpenAICompatibleAnthropicClient(apiKey, baseUrl, extraHeaders) that
centralizes creating createOpenAICompatibleFetch(...) and calling
toAnthropicClientForOpenAICompatible({fetch, maxRetries, timeout: ARGS.timeout,
defaultHeaders}). In the copilot branch call that helper with
process.env.COPILOT_TOKEN, COPILOT_BASE_URL (defaulting to
"https://api.githubcopilot.com") and the extra headers ("Openai-Intent" and
"x-initiator"), and in the openai branch call it with
getOpenAICompatibleApiKey() and getOpenAICompatibleBaseUrl(). Also fix the
openai-compatible fetch adapter (createOpenAICompatibleFetch /
src/services/api/openaiCompatible.ts) to forward init.headers through to the
upstream /chat/completions request so Copilot headers passed via defaultHeaders
actually reach api.githubcopilot.com.
In `@src/services/api/openaiCompatible.ts`:
- Around line 639-641: The current isOpenAICompatibleModel function is too
permissive because model.includes("o1") / includes("o3") matches any string
containing those characters; update isOpenAICompatibleModel to use anchored
checks instead: keep model.includes("gpt-") as a prefix check (e.g., startsWith
"gpt-") and replace the loose includes for "o1"/"o3" with either exact prefix
checks (e.g., startsWith "o1" or "o3" or with a regex like /^o[13](?:-|$)/) or
an explicit whitelist of known OpenAI model IDs; locate and update the
isOpenAICompatibleModel function to implement the anchored checks or whitelist.
In `@src/utils/model/model.ts`:
- Line 26: Replace the absolute-style import of getOpenAICompatibleDefaultModel
with a relative import consistent with the other imports in model.ts; locate the
import statement that references getOpenAICompatibleDefaultModel and change it
to a relative-path import (matching the project's relative import pattern used
elsewhere in this file) so the module resolves consistently across build/test
environments.
In `@src/utils/model/modelStrings.ts`:
- Around line 168-183: The function refreshModelStringsForCurrentProvider calls
getBedrockModelStrings() directly which can race with the sequential-wrapped
updateBedrockModelStrings; change refreshModelStringsForCurrentProvider so that
when provider === 'bedrock' it invokes the same serialized updater
(updateBedrockModelStrings) or uses the existing sequential guard rather than
calling getBedrockModelStrings() directly, then setModelStringsState with the
returned/updated value and preserve the existing error handling path.
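The `sequential` guard referenced in the modelStrings comments could look like this generic serializer; this is a sketch, not the project's implementation:

```typescript
// Chains each call after the previous one settles, so overlapping
// invocations can no longer interleave their writes.
function sequential<A extends unknown[], R>(
  fn: (...args: A) => Promise<R>,
): (...args: A) => Promise<R> {
  let chain: Promise<unknown> = Promise.resolve();
  return (...args: A) => {
    const next = chain.then(() => fn(...args));
    chain = next.catch(() => {}); // keep the chain alive after failures
    return next;
  };
}

const order: number[] = [];
const guarded = sequential(async (n: number) => {
  // Later calls finish "faster" on their own, which would reorder writes
  // without the guard.
  await new Promise((resolve) => setTimeout(resolve, 10 - n));
  order.push(n);
});
```

Calling `guarded(1)` and `guarded(2)` concurrently still records 1 before 2, which is the property the review wants `refreshModelStringsForCurrentProvider` to share with `updateBedrockModelStrings`.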
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8624aa90-f6ec-4bd7-ae54-43962c089ea9
📒 Files selected for processing (15)
- package.json
- src/commands.ts
- src/commands/provider/index.ts
- src/commands/provider/provider.ts
- src/services/api/bootstrap.ts
- src/services/api/client.ts
- src/services/api/openaiCompatible.test.ts
- src/services/api/openaiCompatible.ts
- src/utils/managedEnvConstants.ts
- src/utils/model/configs.ts
- src/utils/model/model.ts
- src/utils/model/modelOptions.ts
- src/utils/model/modelStrings.ts
- src/utils/model/providers.ts
- src/utils/status.tsx
```ts
type: 'local',
name: 'provider',
description: `Switch API provider (currently ${getAPIProvider()})`,
argumentHint: '[first-party|bedrock|vertex|foundry|openai] | show',
```
argumentHint is missing copilot.
The PR adds copilot as a new provider, but the hint advertised to users (and shown in the slash-command UI) still lists only first-party|bedrock|vertex|foundry|openai. Users won't discover /provider copilot from the hint.
🐛 Proposed fix

```diff
-argumentHint: '[first-party|bedrock|vertex|foundry|openai] | show',
+argumentHint: '[first-party|bedrock|vertex|foundry|openai|copilot] | show',
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| argumentHint: '[first-party|bedrock|vertex|foundry|openai] | show', | |
| argumentHint: '[first-party|bedrock|vertex|foundry|openai|copilot] | show', |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/commands/provider/index.ts` at line 8, The argumentHint string for the
provider command is missing the new "copilot" option; update the argumentHint
constant (argumentHint) in the provider command module to include "copilot" so
it reads like '[first-party|bedrock|vertex|foundry|openai|copilot] | show' (or
equivalent ordering), ensuring the slash-command UI and help text advertise the
new provider.
```ts
if (getAPIProvider() === "openai") {
  await refreshOpenAICompatibleModelOptions();
  return;
}

if (getAPIProvider() === "copilot") {
  await refreshCopilotModelOptions();
  return;
}
```
Bring openai/copilot refreshes inside the error-handling boundary.
The firstParty path below is wrapped in try { … } catch (error) { logError(error) }, so bootstrap failures degrade gracefully. The new openai/copilot short-circuits are placed before that boundary — a throw from refreshOpenAICompatibleModelOptions() or refreshCopilotModelOptions() (e.g., a transient /models fetch failure) will propagate out of fetchBootstrapData() and is handled differently than the firstParty path.
🛡️ Proposed fix

```diff
 export async function fetchBootstrapData(): Promise<void> {
-  if (getAPIProvider() === "openai") {
-    await refreshOpenAICompatibleModelOptions();
-    return;
-  }
-
-  if (getAPIProvider() === "copilot") {
-    await refreshCopilotModelOptions();
-    return;
-  }
-
   try {
+    if (getAPIProvider() === "openai") {
+      await refreshOpenAICompatibleModelOptions();
+      return;
+    }
+
+    if (getAPIProvider() === "copilot") {
+      await refreshCopilotModelOptions();
+      return;
+    }
+
     const response = await fetchBootstrapAPI();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/api/bootstrap.ts` around lines 119 - 127, The openai/copilot
refresh calls (refreshOpenAICompatibleModelOptions and
refreshCopilotModelOptions) are outside the existing try/catch in
fetchBootstrapData so their exceptions can escape; move these calls inside the
same try block that wraps the firstParty path (or wrap each call with its own
try/catch that calls logError) so failures degrade gracefully like the
firstParty path and do not propagate out of fetchBootstrapData.
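The per-call guard the prompt mentions as an alternative can be sketched as follows; `refreshSafely`, the inline `logError`, and the failing refresh are illustrative names, not code from this PR:

```typescript
// Sketch: wrap each provider refresh so a thrown error (e.g. a transient
// /models fetch failure) is logged and swallowed, mirroring how the
// firstParty bootstrap path degrades gracefully.
const logged: unknown[] = [];

function logError(error: unknown): void {
  logged.push(error);
}

async function refreshSafely(refresh: () => Promise<void>): Promise<void> {
  try {
    await refresh();
  } catch (error) {
    // Degrade gracefully instead of letting the error escape the caller.
    logError(error);
  }
}

// A refresh that rejects is logged, not propagated.
await refreshSafely(async () => {
  throw new Error("transient /models failure");
});
```

This keeps the short-circuit structure of the original code while giving the new branches the same failure behavior as the first-party path.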
```ts
test('clamps max_tokens for providers with smaller limits', async () => {
  let capturedBody: Record<string, unknown> | null = null
  process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS = '8192'

  globalThis.fetch = (async (_url: RequestInfo | URL, init?: RequestInit) => {
    capturedBody = JSON.parse(String(init?.body || '{}')) as Record<
      string,
      unknown
    >
    return new Response(
      JSON.stringify({
        id: 'resp_3',
        model: 'deepseek-chat',
        choices: [{ message: { content: 'ok' } }],
        usage: { prompt_tokens: 1, completion_tokens: 1 },
      }),
      { status: 200, headers: { 'Content-Type': 'application/json' } },
    )
  }) as typeof globalThis.fetch

  const fetchAdapter = createOpenAICompatibleFetch({
    apiKey: 'test-key',
    baseUrl: 'https://provider.example/v1',
  })

  await fetchAdapter('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    body: JSON.stringify({
      model: 'deepseek-chat',
      stream: false,
      max_tokens: 64000,
      messages: [{ role: 'user', content: 'hello' }],
    }),
  })

  expect(capturedBody?.max_tokens).toBe(8192)

  delete process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS
})
```
Env var cleanup isn't guaranteed on test failure.
If any assertion above line 159 fails, delete process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS never executes and the 8192 cap leaks into subsequent tests/suites that rely on the default. Move the cleanup into an afterEach (or wrap it in try/finally) so it runs unconditionally.
🧹 Proposed fix

```diff
+afterEach(() => {
+  delete process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS
+})
+
 test('clamps max_tokens for providers with smaller limits', async () => {
   let capturedBody: Record<string, unknown> | null = null
   process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS = '8192'
   ...
   expect(capturedBody?.max_tokens).toBe(8192)
-
-  delete process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS
 })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/api/openaiCompatible.test.ts` around lines 122 - 160, The test
"clamps max_tokens for providers with smaller limits" sets
process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS but deletes it only at the end of the
test, which can leak on failures; ensure the env var is always cleaned up by
either wrapping the test body in a try/finally that deletes
process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS in the finally block, or add an
afterEach hook that deletes process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS so cleanup
runs unconditionally (refer to the test name and the environment variable
CLAUDE_CODE_OPENAI_MAX_TOKENS to locate where to apply the change).
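The try/finally alternative mentioned in the prompt can be factored into a small helper; `withEnvVar` is a hypothetical utility, not part of the test file:

```typescript
// Sketch: set an env var for the duration of a callback and restore it even
// if the callback throws, so a failing assertion cannot leak the value into
// later tests. Restoring (rather than always deleting) also preserves any
// pre-existing value.
function withEnvVar<T>(name: string, value: string, body: () => T): T {
  const previous = process.env[name];
  process.env[name] = value;
  try {
    return body();
  } finally {
    if (previous === undefined) {
      delete process.env[name];
    } else {
      process.env[name] = previous;
    }
  }
}

const KEY = "CLAUDE_CODE_OPENAI_MAX_TOKENS";
let seenInsideBody: string | undefined;
try {
  withEnvVar(KEY, "8192", () => {
    seenInsideBody = process.env[KEY];
    throw new Error("simulated assertion failure");
  });
} catch {
  // The failure still propagates; cleanup has already run by this point.
}
```

Compared to a bare `afterEach`, this scopes the override to a single test and survives suites where several tests set the same variable to different values.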
```ts
const providerResponse = await globalThis.fetch(
  `${baseUrl}/chat/completions`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${options.apiKey}`,
    },
    body: JSON.stringify(openAIBody),
  },
);
```
Outgoing request drops init.headers — Copilot-specific headers never reach the provider.
When the adapter forwards to /chat/completions, it builds a fresh headers object containing only Content-Type and Authorization. The caller-supplied init?.headers (which the Anthropic SDK populates from the client's defaultHeaders) is discarded entirely.
Concrete downstream impact from src/services/api/client.ts lines 365-369: the Copilot branch carefully sets Openai-Intent: "conversation-edits" and x-initiator: "user" via defaultHeaders, but those headers are silently stripped here and never reach api.githubcopilot.com. Any other custom headers (proxy auth, User-Agent, etc.) are also lost.
🔧 Proposed fix — forward incoming headers, then override the fields the adapter owns

```diff
+  const forwardedHeaders = new Headers(init?.headers);
+  forwardedHeaders.set("Content-Type", "application/json");
+  forwardedHeaders.set("Authorization", `Bearer ${options.apiKey}`);
+
   const providerResponse = await globalThis.fetch(
     `${baseUrl}/chat/completions`,
     {
       method: "POST",
-      headers: {
-        "Content-Type": "application/json",
-        Authorization: `Bearer ${options.apiKey}`,
-      },
+      headers: forwardedHeaders,
       body: JSON.stringify(openAIBody),
     },
   );
```

You may also want to strip Anthropic-only headers (anthropic-version, x-api-key, x-client-request-id, CCH headers) before forwarding, since those are meaningless to the OpenAI/Copilot backend and some proxies reject unknown headers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/api/openaiCompatible.ts` around lines 475 - 485, The outgoing
fetch call that creates providerResponse builds a fresh headers object and drops
caller-supplied init?.headers (losing Copilot-specific and custom headers); fix
by merging the incoming headers into the request headers before overriding
adapter-owned fields: take init?.headers (or any headers passed into the
function), copy them into the headers for the fetch, then set/override
"Content-Type" and Authorization: `Bearer ${options.apiKey}` (and any
adapter-required headers like Openai-Intent) so adapter-owned values win; also
optionally strip Anthropic-only headers (e.g., "anthropic-version", "x-api-key",
"x-client-request-id", CCH headers) from the merged headers before sending.
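Combining the proposed merge with the optional header stripping could look like this; `buildForwardedHeaders` and the `ANTHROPIC_ONLY` list are illustrative, not from the adapter:

```typescript
// Sketch: forward caller-supplied headers, drop Anthropic-only ones, and let
// adapter-owned fields win. Headers.get/set/delete are case-insensitive, so
// the casing of incoming header names does not matter.
const ANTHROPIC_ONLY = ["anthropic-version", "x-api-key", "x-client-request-id"];

function buildForwardedHeaders(
  incoming: HeadersInit | undefined,
  apiKey: string,
): Headers {
  const headers = new Headers(incoming);
  // Drop headers that only make sense for the Anthropic backend.
  for (const name of ANTHROPIC_ONLY) headers.delete(name);
  // Adapter-owned fields always override caller-supplied values.
  headers.set("Content-Type", "application/json");
  headers.set("Authorization", `Bearer ${apiKey}`);
  return headers;
}

const h = buildForwardedHeaders(
  {
    "Openai-Intent": "conversation-edits",
    "x-api-key": "leaked",
    Authorization: "Bearer stale",
  },
  "test-key",
);
```

With this shape, the Copilot-specific `Openai-Intent` and `x-initiator` values set via `defaultHeaders` survive, while stale Anthropic credentials do not.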
```ts
const baseUrl = (
  process.env.COPILOT_BASE_URL?.trim() || "https://api.githubcopilot.com"
).replace(/\/+$/, "");
try {
  const response = await globalThis.fetch(`${baseUrl}/models`, {
    method: "GET",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "User-Agent": "free-code/1.0",
      "Openai-Intent": "conversation-edits",
    },
  });
```
🧩 Analysis chain
🌐 Web query:
GitHub Copilot API required headers chat completions Editor-Version Copilot-Integration-Id
💡 Result:

The GitHub Copilot Chat Completions API, accessible at endpoints like https://api.githubcopilot.com/chat/completions, requires specific headers for authentication and IDE context, particularly when used from custom extensions or proxies mimicking IDE clients. Required headers include:

- Authorization: Bearer <token> (obtained via GitHub token exchange, e.g., https://api.github.com/copilot_internal/v2/token)
- Content-Type: application/json
- Editor-Version: e.g., "vscode/1.107.0" (identifies the IDE/editor version; missing causes HTTP 400 "bad request: missing Editor-Version header for IDE auth")
- Copilot-Integration-Id: e.g., "vscode-chat" (identifies the integration/client; frequently required alongside Editor-Version)
- Editor-Plugin-Version: e.g., "copilot-chat/0.35.0" (plugin version, often paired with the above)
- User-Agent: e.g., "GitHubCopilotChat/0.35.0" (common in examples)

These IDE-specific headers (Editor-Version, Copilot-Integration-Id, Editor-Plugin-Version) are enforced for IDE authentication and are not documented in public GitHub Docs (which cover management APIs, not this internal chat endpoint). Evidence comes from real-world usage in extensions, proxies (LiteLLM, OpenClaw), and token exchange code, where omitting them triggers explicit 400 errors. Values should match the target IDE (e.g., VSCode versions). No official public spec found; reverse-engineered from errors and working examples as of 2026.
Citations:
- 1: https://stackoverflow.com/questions/79130421/how-to-get-response-of-copilot-llm-in-github-chat-completions-api
- 2: openclaw/openclaw#58056 — "GitHub Copilot provider: missing Editor-Version header on inference requests causes HTTP 400"
- 3: openclaw/openclaw#66068 — "Compaction summarization missing Editor-Version header for GitHub Copilot provider"
- 4: BerriAI/litellm#13256 — "[Bug]: Missing Editor-Version Header for IDE Auth in GitHub Copilot provider in Proxy Mode"
- 5: farion1231/cc-switch#2142 — "Missing Copilot-Integration-Id and Editor-Version headers causing auth failures"
- 6: https://github.com/sst/opencode-github-copilot/blob/main/auth.ts
🏁 Scripts executed against paoloanzn/free-code (outputs elided):

```shell
# Check the file size and read the relevant section
wc -l src/services/api/openaiCompatible.ts

# Read the section around lines 588-599
sed -n '580,620p' src/services/api/openaiCompatible.ts

# Search for trimTrailingSlash usage and definition in this file
rg 'trimTrailingSlash' src/services/api/openaiCompatible.ts -B 2 -A 2

# Find the createOpenAICompatibleFetch function to see what headers it sends
rg 'createOpenAICompatibleFetch|function createOpenAICompatibleFetch' src/services/api/openaiCompatible.ts -A 30

# Get the full createOpenAICompatibleFetch function
sed -n '/export function createOpenAICompatibleFetch/,/^}/p' src/services/api/openaiCompatible.ts | head -80

# Search for where headers are built/merged in that function
rg 'createOpenAICompatibleFetch' src/services/api/openaiCompatible.ts -A 60 | head -100
```
Add required GitHub Copilot headers to /chat/completions requests.
The /chat/completions call currently sends only Content-Type and Authorization headers, but GitHub Copilot's API requires additional identification headers: Editor-Version and Copilot-Integration-Id are mandatory (requests omitting Editor-Version return HTTP 400). Editor-Plugin-Version is also commonly required. Add these headers to the fetch call in createOpenAICompatibleFetch to match the pattern used in refreshCopilotModelOptions (which already sends User-Agent and Openai-Intent).
Also, use trimTrailingSlash() instead of .replace(/\/+$/, "") for baseUrl normalization in refreshCopilotModelOptions (line ~590) — the utility is already available in this file and is used consistently elsewhere.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/api/openaiCompatible.ts` around lines 588 - 599, The fetch used
in createOpenAICompatibleFetch (and specifically the /chat/completions request)
is missing required GitHub Copilot identification headers; update the request
headers to include Editor-Version and Copilot-Integration-Id (and also include
Editor-Plugin-Version where appropriate) alongside existing Authorization,
Content-Type, User-Agent and Openai-Intent so the Copilot API accepts the call;
also replace the manual .replace(/\/+$/, "") normalization in
refreshCopilotModelOptions with the existing trimTrailingSlash() utility to
normalize baseUrl consistently.
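Based on the reverse-engineered requirements above, a Copilot header set might be assembled like this; the version strings are placeholders taken from the cited examples, not values this PR ships:

```typescript
// Sketch of a Copilot request-header set. The header names are the ones the
// cited reports say are enforced; the specific version values are illustrative
// and should track a real IDE/plugin release.
function copilotHeaders(bearerToken: string): Record<string, string> {
  return {
    Authorization: `Bearer ${bearerToken}`,
    "Content-Type": "application/json",
    // Omitting Editor-Version reportedly yields HTTP 400
    // "missing Editor-Version header for IDE auth".
    "Editor-Version": "vscode/1.107.0",
    "Copilot-Integration-Id": "vscode-chat",
    "Editor-Plugin-Version": "copilot-chat/0.35.0",
    "User-Agent": "GitHubCopilotChat/0.35.0",
  };
}

const headers = copilotHeaders("token");
```

Keeping these in one helper also lets `/models` and `/chat/completions` share the same identification headers instead of drifting apart.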
```ts
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_OPENAI_BASE_URL',
'CLAUDE_CODE_OPENAI_API_KEY',
'CLAUDE_CODE_OPENAI_DEFAULT_MODEL',
'OPENAI_BASE_URL',
'OPENAI_API_KEY',
```
🧩 Analysis chain

🏁 Scripts executed against paoloanzn/free-code (outputs elided):

```shell
cat -n src/utils/managedEnvConstants.ts

rg -i "copilot" --type ts --type js | head -50
```
Add Copilot provider vars to SAFE_ENV_VARS and reconcile security documentation for OpenAI equivalents.

Two critical issues with this section:

1. Asymmetry in provider selection: `CLAUDE_CODE_USE_BEDROCK`, `CLAUDE_CODE_USE_VERTEX`, and `CLAUDE_CODE_USE_FOUNDRY` are all in `SAFE_ENV_VARS` alongside `CLAUDE_CODE_USE_OPENAI`, but `CLAUDE_CODE_USE_COPILOT` and `COPILOT_BASE_URL` are missing. This means managed settings rollouts of Copilot will trigger the security dialog while OpenAI rollouts won't, creating a problematic UX inconsistency. Both `CLAUDE_CODE_USE_COPILOT` and `COPILOT_BASE_URL` need to be added.

2. Undocumented security decision for OpenAI vars: The file header (lines 100-113) explicitly marks `ANTHROPIC_BASE_URL` and `ANTHROPIC_API_KEY` as dangerous because they can redirect traffic to an attacker server or swap API keys. Yet `OPENAI_BASE_URL`, `CLAUDE_CODE_OPENAI_BASE_URL`, `OPENAI_API_KEY`, and `CLAUDE_CODE_OPENAI_API_KEY` are in `SAFE_ENV_VARS` despite having the identical threat model. The comment block must be updated to document why OpenAI providers are intentionally treated differently from Anthropic providers in this regard (or the decision should be reconsidered).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/managedEnvConstants.ts` around lines 154 - 159, Add the missing
Copilot flags to the SAFE_ENV_VARS array by including 'CLAUDE_CODE_USE_COPILOT'
and 'COPILOT_BASE_URL' so provider-selection parity is preserved with other
CLAUDE_CODE_USE_* entries; locate the SAFE_ENV_VARS definition in
managedEnvConstants.ts to insert these exact symbols. Also update the header
comment block (around the existing note that marks ANTHROPIC_* as dangerous) to
either document the explicit rationale why OPENAI_* and CLAUDE_CODE_OPENAI_* are
treated as safe despite the same threat model or move those OpenAI-related
symbols out of SAFE_ENV_VARS to match the documented security stance — ensure
the comment references the exact environment names (OPENAI_BASE_URL,
OPENAI_API_KEY, CLAUDE_CODE_OPENAI_BASE_URL, CLAUDE_CODE_OPENAI_API_KEY) so the
reasoning and list remain consistent.
```diff
 if (apiProvider !== "firstParty") {
   const providerLabel = {
-    bedrock: 'AWS Bedrock',
-    vertex: 'Google Vertex AI',
-    foundry: 'Microsoft Foundry'
+    bedrock: "AWS Bedrock",
+    vertex: "Google Vertex AI",
+    foundry: "Microsoft Foundry",
+    openai: "OpenAI-compatible",
   }[apiProvider];
   properties.push({
-    label: 'API provider',
-    value: providerLabel
+    label: "API provider",
+    value: providerLabel,
   });
 }
```
providerLabel map is missing copilot — status will show "API provider: undefined".
apiProvider can be "copilot" (the else-branch at line 399 handles it), but the label lookup at lines 310-315 only covers bedrock/vertex/foundry/openai. When a user runs /provider copilot or sets CLAUDE_CODE_USE_COPILOT=1, this block falls through and pushes { label: "API provider", value: undefined }, producing a blank/undefined row in the status pane.
🛠️ Proposed fix

```diff
 const providerLabel = {
   bedrock: "AWS Bedrock",
   vertex: "Google Vertex AI",
   foundry: "Microsoft Foundry",
   openai: "OpenAI-compatible",
+  copilot: "GitHub Copilot",
 }[apiProvider];
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/status.tsx` around lines 309 - 320, The providerLabel lookup for
apiProvider in the status rendering is missing the "copilot" key, causing
undefined values; update the map used to compute providerLabel (the object
literal assigned to providerLabel) to include copilot: "Copilot" (or the desired
display string) so that when apiProvider === "copilot" the properties.push({
label: "API provider", value: providerLabel }) receives a proper string; ensure
this change is made in the same conditional block that references apiProvider
and providerLabel in src/utils/status.tsx so the status pane no longer shows an
undefined value.
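One way to prevent this class of bug is to type the label map exhaustively; the `APIProvider` union below is reconstructed from the provider names in this PR (the real type lives in `src/utils/model/providers.ts`):

```typescript
// Sketch: Record<Exclude<APIProvider, "firstParty">, string> turns an omitted
// provider key into a compile-time error instead of a silent `undefined` row
// in the status pane.
type APIProvider =
  | "firstParty"
  | "bedrock"
  | "vertex"
  | "foundry"
  | "openai"
  | "copilot";

const providerLabel: Record<Exclude<APIProvider, "firstParty">, string> = {
  bedrock: "AWS Bedrock",
  vertex: "Google Vertex AI",
  foundry: "Microsoft Foundry",
  openai: "OpenAI-compatible",
  copilot: "GitHub Copilot",
};

const label = providerLabel["copilot"];
```

With this annotation, adding a seventh provider to the union without updating the map fails type-checking, so the status pane cannot regress the same way again.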
Summary
- Integrate GitHub Copilot (api.githubcopilot.com) as an OpenAI-compatible backend. Supports chat completions, dynamic model listing via the `/models` endpoint, and provider switching via `CLAUDE_CODE_USE_COPILOT=1` or `/provider copilot`.
- Register OpenAI-related env vars (`OPENAI_API_KEY`, `OPENAI_BASE_URL`, etc.) in managed and safe env var sets, add the `refreshModelStringsForCurrentProvider` utility, and include a test for the OpenAI-compatible fetch adapter.
- Rename the package from `claude-code-source-snapshot` to `free-code`.

Usage

```
CLAUDE_CODE_USE_COPILOT=1 COPILOT_TOKEN=<token>
```

Or interactively:

```
/provider copilot
```

Files Changed

- `src/utils/model/providers.ts` — Added `copilot` to `APIProvider` type
- `src/utils/model/configs.ts` — Added `copilot` field to all model configs
- `src/services/api/client.ts` — Added Copilot client branch with Copilot-specific headers
- `src/services/api/bootstrap.ts` — Added Copilot model refresh on startup
- `src/services/api/openaiCompatible.ts` — New: OpenAI-compatible fetch adapter + `refreshCopilotModelOptions`
- `src/commands/provider/` — New: provider switching command with Copilot support
- `src/utils/model/model.ts` — Added Copilot to default model selection
- `src/utils/model/modelOptions.ts` — Added Copilot to model options
- `src/utils/status.tsx` — Show Copilot base URL in `/status`
- `src/utils/managedEnvConstants.ts` — Register OpenAI env vars
- `src/utils/model/modelStrings.ts` — Add provider model refresh utility
- `package.json` — Rename to `free-code`

Summary by CodeRabbit
New Features
- `/provider` command to switch between API providers: first-party Anthropic, Bedrock, Vertex AI, Foundry, OpenAI-compatible, and GitHub Copilot

Package Changes