
feat: add GitHub Copilot as a model provider #34

Open
greatbody wants to merge 3 commits into paoloanzn:main from greatbody:custom

Conversation


@greatbody greatbody commented Apr 17, 2026

Summary

  • Add GitHub Copilot as a new model provider, integrating the Copilot API (api.githubcopilot.com) as an OpenAI-compatible backend. Supports chat completions, dynamic model listing via /models endpoint, and provider switching via CLAUDE_CODE_USE_COPILOT=1 or /provider copilot.
  • Register OpenAI env vars (OPENAI_API_KEY, OPENAI_BASE_URL, etc.) in managed and safe env var sets, add refreshModelStringsForCurrentProvider utility, and include a test for the OpenAI-compatible fetch adapter.
  • Rename package from claude-code-source-snapshot to free-code.

Usage

CLAUDE_CODE_USE_COPILOT=1 COPILOT_TOKEN=<github_oauth_token> ./cli

Or interactively: /provider copilot
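The activation flags above imply a precedence order when several CLAUDE_CODE_USE_* flags are set at once. A minimal sketch of that resolution, mirroring the precedence the reviewers later quote from the provider code (the exact semantics of isEnvTruthy are an assumption):

```typescript
type APIProvider = "firstParty" | "bedrock" | "vertex" | "foundry" | "openai" | "copilot";

// Assumed semantics: "1", "true", "yes" (case-insensitive) count as enabled.
function isEnvTruthy(value: string | undefined): boolean {
  return ["1", "true", "yes"].includes((value ?? "").toLowerCase());
}

// Earlier checks win, so Bedrock/Vertex/Foundry shadow Copilot, which shadows OpenAI.
function resolveProvider(env: Record<string, string | undefined>): APIProvider {
  if (isEnvTruthy(env.CLAUDE_CODE_USE_BEDROCK)) return "bedrock";
  if (isEnvTruthy(env.CLAUDE_CODE_USE_VERTEX)) return "vertex";
  if (isEnvTruthy(env.CLAUDE_CODE_USE_FOUNDRY)) return "foundry";
  if (isEnvTruthy(env.CLAUDE_CODE_USE_COPILOT)) return "copilot";
  if (isEnvTruthy(env.CLAUDE_CODE_USE_OPENAI)) return "openai";
  return "firstParty";
}
```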

Files Changed

  • src/utils/model/providers.ts — Added copilot to APIProvider type
  • src/utils/model/configs.ts — Added copilot field to all model configs
  • src/services/api/client.ts — Added Copilot client branch with Copilot-specific headers
  • src/services/api/bootstrap.ts — Added Copilot model refresh on startup
  • src/services/api/openaiCompatible.ts — New: OpenAI-compatible fetch adapter + refreshCopilotModelOptions
  • src/commands/provider/ — New: provider switching command with Copilot support
  • src/utils/model/model.ts — Added Copilot to default model selection
  • src/utils/model/modelOptions.ts — Added Copilot to model options
  • src/utils/status.tsx — Show Copilot base URL in /status
  • src/utils/managedEnvConstants.ts — Register OpenAI env vars
  • src/utils/model/modelStrings.ts — Add provider model refresh utility
  • package.json — Rename to free-code

Summary by CodeRabbit

  • New Features

    • Added /provider command to switch between API providers: first-party Anthropic, Bedrock, Vertex AI, Foundry, OpenAI-compatible, and GitHub Copilot
    • Extended model catalog with Copilot-compatible model identifiers
    • Added support for OpenAI-compatible API endpoints
  • Package Changes

    • Updated package name to "free-code"

Integrate Copilot API (api.githubcopilot.com) as an OpenAI-compatible
provider. Supports model listing, chat completions, and dynamic model
refresh via /models endpoint.

Activation: CLAUDE_CODE_USE_COPILOT=1 COPILOT_TOKEN=<token>
Or interactively: /provider copilot
…esh utility

Add OpenAI-related env vars (API key, base URL, default model) to
managed and safe env var sets. Add refreshModelStringsForCurrentProvider
helper and OpenAI-compatible fetch adapter test.
Copilot AI review requested due to automatic review settings April 17, 2026 17:55

coderabbitai bot commented Apr 17, 2026

📝 Walkthrough

This PR introduces OpenAI and GitHub Copilot as new API providers alongside existing first-party, Bedrock, Vertex, and Foundry options. It adds a /provider command for switching providers, implements OpenAI-compatible request/response translation logic, updates the API client to route through provider-specific handlers, and extends model configuration with Copilot-specific identifiers. The package is renamed to "free-code".

Changes

  • Package Configuration — package.json: Renamed package from "claude-code-source-snapshot" to "free-code".
  • Provider Command — src/commands/provider/index.ts, src/commands/provider/provider.ts: Added new /provider local command with lazy-loaded implementation. Supports switching between providers (first-party, bedrock, vertex, foundry, openai, copilot) with validation, environment variable updates, and model refresh. Includes status display and prerequisite credential warnings.
  • Command Registry — src/commands.ts: Registered the new provider command in the built-in COMMANDS() registry.
  • OpenAI-compatible Adapter — src/services/api/openaiCompatible.ts, src/services/api/openaiCompatible.test.ts: Implemented an OpenAI-compatible fetch adapter with request/response translation between OpenAI and Anthropic API formats. Supports streaming, model options refresh for OpenAI and Copilot providers, and error handling. Test coverage validates request sanitization, response conversion, and max_tokens clamping.
  • API Client Routing — src/services/api/client.ts, src/services/api/bootstrap.ts: Updated getAnthropicClient() to route openai/copilot providers through the OpenAI-compatible fetch with credential validation. Modified fetchBootstrapData() to bypass API calls for openai/copilot and refresh model options instead. Reformatted strings to double-quoted style.
  • Model Configuration & Selection — src/utils/model/configs.ts, src/utils/model/model.ts, src/utils/model/modelOptions.ts, src/utils/model/modelStrings.ts, src/utils/model/providers.ts: Extended the APIProvider type with a "copilot" option. Added Copilot-specific model identifiers to all model configs. Updated default model selection and model options to use OpenAI-compatible defaults when the provider is openai/copilot. Added refreshModelStringsForCurrentProvider() for provider-specific model string initialization.
  • Environment & Status — src/utils/managedEnvConstants.ts, src/utils/status.tsx: Added OpenAI-related environment variables to provider-managed and safe var sets. Extended the API provider properties display to show the OpenAI and Copilot base URLs.

Sequence Diagram

sequenceDiagram
    actor User
    participant ProviderCmd as /provider Command
    participant SettingsAPI as Settings API
    participant APIClient as API Client
    participant OpenAICompat as OpenAI Adapter
    participant OpenAIAPI as OpenAI-compatible API
    
    User->>ProviderCmd: /provider openai
    ProviderCmd->>APIClient: getAPIProvider()
    APIClient-->>ProviderCmd: returns current provider
    ProviderCmd->>SettingsAPI: updateSettingsForSource<br/>(userSettings, env)
    SettingsAPI-->>ProviderCmd: persists provider env vars
    ProviderCmd->>APIClient: mutate process.env<br/>with CLAUDE_CODE_USE_OPENAI
    ProviderCmd->>APIClient: refreshModelStringsForCurrentProvider()
    APIClient-->>ProviderCmd: refreshes model options
    ProviderCmd-->>User: "Provider switched to OpenAI"
    
    Note over User,APIClient: Subsequent API calls
    User->>APIClient: request with model
    APIClient->>OpenAICompat: createOpenAICompatibleFetch()
    OpenAICompat->>OpenAICompat: translate Anthropic→OpenAI
    OpenAICompat->>OpenAIAPI: POST /chat/completions
    OpenAIAPI-->>OpenAICompat: OpenAI response
    OpenAICompat->>OpenAICompat: translate OpenAI→Anthropic
    OpenAICompat-->>APIClient: Anthropic-format response
    APIClient-->>User: message

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 A new provider command hops into view,
OpenAI translations make requests anew,
From Anthropic to OpenAI we dance,
Switching providers at CLI's command stance,
Multiple paths, one unified glance! 🌟

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 27.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: the title 'feat: add GitHub Copilot as a model provider' accurately summarizes the main objective of the changeset, which is to integrate GitHub Copilot as a new supported model provider alongside existing providers.



Copilot AI left a comment


Pull request overview

This PR adds GitHub Copilot as an additional model/API provider by routing requests through an OpenAI-compatible adapter, plus introduces a /provider command for switching providers and updates environment variable handling and default model selection accordingly.

Changes:

  • Add copilot to provider selection and status reporting, plus a /provider command to switch providers via settings/env flags.
  • Introduce an OpenAI-compatible fetch adapter (chat completions + /models refresh) and wire it into API bootstrap/client creation.
  • Update model config/selection logic to account for OpenAI-compatible and Copilot providers; rename the package to free-code.

Reviewed changes

Copilot reviewed 15 out of 15 changed files in this pull request and generated 5 comments.

Summary per file:

  • src/utils/status.tsx — Displays provider/base URL details in /status, including OpenAI-compatible and Copilot branches.
  • src/utils/model/providers.ts — Adds copilot to APIProvider and provider-selection env flag handling.
  • src/utils/model/modelStrings.ts — Adds refreshModelStringsForCurrentProvider helper for provider switching.
  • src/utils/model/modelOptions.ts — Adds OpenAI/Copilot model option sets and continues to append cached model options.
  • src/utils/model/model.ts — Adjusts default model selection for OpenAI-compatible/Copilot providers.
  • src/utils/model/configs.ts — Extends all model configs to include copilot provider entries.
  • src/utils/managedEnvConstants.ts — Registers OpenAI-related env vars in managed/safe sets.
  • src/services/api/openaiCompatible.ts — New adapter translating Anthropic /v1/messages to OpenAI /chat/completions, plus model refresh for OpenAI and Copilot.
  • src/services/api/openaiCompatible.test.ts — Adds Bun tests for the OpenAI-compatible fetch adapter behavior.
  • src/services/api/client.ts — Adds OpenAI-compatible and Copilot client branches using the adapter.
  • src/services/api/bootstrap.ts — Refreshes OpenAI/Copilot models via /models at startup for those providers.
  • src/commands/provider/provider.ts — Implements /provider command to switch provider flags in settings/env.
  • src/commands/provider/index.ts — Registers the /provider command.
  • src/commands.ts — Adds provider command to the command list.
  • package.json — Renames package to free-code.


Comment on lines 18 to 22
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
'CLAUDE_CODE_USE_OPENAI',
// Endpoint config (base URLs, project/resource identifiers)

Copilot AI Apr 17, 2026


PROVIDER_MANAGED_ENV_VARS includes flags for Bedrock/Vertex/Foundry/OpenAI but not the new Copilot flag. If CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST is set, settings-sourced CLAUDE_CODE_USE_COPILOT would not be filtered, allowing users to override a host-managed provider selection. Add CLAUDE_CODE_USE_COPILOT (and consider adding Copilot routing/auth vars like COPILOT_BASE_URL/COPILOT_TOKEN in the appropriate section) to this set.
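The invariant this comment describes can be sketched as follows. The set contents come from the snippet above (plus the flag the reviewer says is missing); filterSettingsEnv and its signature are hypothetical illustrations, not the project's actual code:

```typescript
const PROVIDER_MANAGED_ENV_VARS = new Set([
  "CLAUDE_CODE_USE_BEDROCK",
  "CLAUDE_CODE_USE_VERTEX",
  "CLAUDE_CODE_USE_FOUNDRY",
  "CLAUDE_CODE_USE_OPENAI",
  "CLAUDE_CODE_USE_COPILOT", // the entry the review says is missing
]);

// When the host manages the provider, settings-sourced vars in the managed
// set must be dropped so users cannot override the host's routing.
function filterSettingsEnv(
  settingsEnv: Record<string, string>,
  managedByHost: boolean,
): Record<string, string> {
  if (!managedByHost) return settingsEnv;
  return Object.fromEntries(
    Object.entries(settingsEnv).filter(([key]) => !PROVIDER_MANAGED_ENV_VARS.has(key)),
  );
}
```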

Comment on lines +475 to +482
const providerResponse = await globalThis.fetch(
`${baseUrl}/chat/completions`,
{
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${options.apiKey}`,
},

Copilot AI Apr 17, 2026


The OpenAI-compatible adapter drops all headers/signals from the original Anthropic SDK request when it forwards to the provider (it hardcodes only Content-Type + Authorization). This prevents provider-specific required headers (e.g. Copilot's Openai-Intent) and abort/cancellation (init.signal) from being propagated. Merge/forward relevant headers (or accept extra headers via options) and pass through signal to the provider fetch.

Suggested change
-  const providerResponse = await globalThis.fetch(
-    `${baseUrl}/chat/completions`,
-    {
-      method: "POST",
-      headers: {
-        "Content-Type": "application/json",
-        Authorization: `Bearer ${options.apiKey}`,
-      },
+  const forwardedHeaders = new Headers(
+    input instanceof Request ? input.headers : undefined,
+  );
+  if (init?.headers) {
+    const initHeaders = new Headers(init.headers);
+    initHeaders.forEach((value, key) => {
+      forwardedHeaders.set(key, value);
+    });
+  }
+  forwardedHeaders.set("Content-Type", "application/json");
+  forwardedHeaders.set("Authorization", `Bearer ${options.apiKey}`);
+  const forwardedSignal =
+    init?.signal ?? (input instanceof Request ? input.signal : undefined);
+  const providerResponse = await globalThis.fetch(
+    `${baseUrl}/chat/completions`,
+    {
+      method: "POST",
+      headers: forwardedHeaders,
+      signal: forwardedSignal,
Comment on lines +165 to +167
Object.assign(process.env, providerEnv(parsed));
await refreshModelStringsForCurrentProvider();


Copilot AI Apr 17, 2026


After switching providers, only model strings are refreshed. The dynamic model list (additionalModelOptionsCache) is fetched in bootstrap for OpenAI/Copilot, but /provider switching won’t refresh it, so the model picker may still show stale models from a previous provider (and default-model selection may use the wrong cached entry). Consider triggering refreshOpenAICompatibleModelOptions / refreshCopilotModelOptions when switching to those providers (or clearing the cache).

Comment thread src/utils/status.tsx
Comment on lines 310 to +314
   const providerLabel = {
-    bedrock: 'AWS Bedrock',
-    vertex: 'Google Vertex AI',
-    foundry: 'Microsoft Foundry'
+    bedrock: "AWS Bedrock",
+    vertex: "Google Vertex AI",
+    foundry: "Microsoft Foundry",
+    openai: "OpenAI-compatible",

Copilot AI Apr 17, 2026


providerLabel mapping does not include the new copilot provider. When apiProvider === "copilot", providerLabel becomes undefined, so /status will show a blank API provider value. Add a copilot entry (e.g., "GitHub Copilot") or provide a safe fallback before pushing the property.
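A minimal sketch of the suggested fix. The "GitHub Copilot" label and the raw-id fallback are the reviewer's suggestion here, not the PR's actual code:

```typescript
const providerLabel: Record<string, string> = {
  bedrock: "AWS Bedrock",
  vertex: "Google Vertex AI",
  foundry: "Microsoft Foundry",
  openai: "OpenAI-compatible",
  copilot: "GitHub Copilot", // the missing entry
};

// Fall back to the raw provider id rather than rendering undefined in /status.
function apiProviderDisplay(apiProvider: string): string {
  return providerLabel[apiProvider] ?? apiProvider;
}
```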

type: 'local',
name: 'provider',
description: `Switch API provider (currently ${getAPIProvider()})`,
argumentHint: '[first-party|bedrock|vertex|foundry|openai] | show',

Copilot AI Apr 17, 2026


The provider command advertises /provider but argumentHint omits the newly supported copilot option, which will mislead users. Update the hint to include copilot (and any other supported aliases as appropriate).

Suggested change
-  argumentHint: '[first-party|bedrock|vertex|foundry|openai] | show',
+  argumentHint: '[first-party|bedrock|vertex|foundry|openai|copilot] | show',


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/utils/managedEnvConstants.ts (1)

14-62: ⚠️ Potential issue | 🟠 Major

Copilot env vars are missing from PROVIDER_MANAGED_ENV_VARS.

CLAUDE_CODE_USE_OPENAI and the OpenAI endpoint/auth/default-model vars were added, but the equivalent Copilot controls introduced by this PR are not:

  • CLAUDE_CODE_USE_COPILOT — the provider-selection flag this PR documents
  • COPILOT_TOKEN — required auth per hasProviderPrereqs("copilot")
  • COPILOT_BASE_URL — referenced in src/utils/status.tsx

Without these, a host setting CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST cannot prevent a user's ~/.claude/settings.json from overriding its routing for the Copilot path, which is precisely the invariant this list exists to enforce.

🔒 Proposed addition
   'CLAUDE_CODE_USE_FOUNDRY',
   'CLAUDE_CODE_USE_OPENAI',
+  'CLAUDE_CODE_USE_COPILOT',
   // Endpoint config (base URLs, project/resource identifiers)
   ...
   'CLAUDE_CODE_OPENAI_BASE_URL',
   'OPENAI_BASE_URL',
+  'COPILOT_BASE_URL',
   ...
   'CLAUDE_CODE_OPENAI_API_KEY',
   'OPENAI_API_KEY',
+  'COPILOT_TOKEN',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/managedEnvConstants.ts` around lines 14 - 62, The
PROVIDER_MANAGED_ENV_VARS set is missing Copilot-related keys so host-managed
routing can be overridden by user settings; add 'CLAUDE_CODE_USE_COPILOT',
'COPILOT_TOKEN', and 'COPILOT_BASE_URL' to the PROVIDER_MANAGED_ENV_VARS Set
(the same collection that already contains 'CLAUDE_CODE_USE_OPENAI',
'OPENAI_API_KEY', and 'OPENAI_BASE_URL') so hasProviderPrereqs("copilot") and
the Copilot routing referenced in status.tsx are properly protected by
CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST.
src/utils/model/model.ts (1)

9-17: ⚠️ Potential issue | 🔴 Critical

Duplicate import: isCodexSubscriber listed twice.

isCodexSubscriber appears on both line 12 and line 16 in the same named-import list. This is a TS2300 duplicate-identifier error and will fail type-checking / build.

🐛 Proposed fix
 import {
   getSubscriptionType,
   isClaudeAISubscriber,
   isCodexSubscriber,
   isMaxSubscriber,
   isProSubscriber,
   isTeamPremiumSubscriber,
-  isCodexSubscriber,
 } from "../auth.js";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/model/model.ts` around lines 9 - 17, The import list in
src/utils/model/model.ts includes isCodexSubscriber twice causing a
duplicate-identifier TS2300 error; remove the redundant import entry so each
symbol (e.g., getSubscriptionType, isClaudeAISubscriber, isCodexSubscriber,
isMaxSubscriber, isProSubscriber, isTeamPremiumSubscriber) appears only once in
the named-import from "../auth.js" (you can simply delete the second
isCodexSubscriber in the import statement).
🧹 Nitpick comments (6)
src/utils/model/model.ts (1)

26-26: Use a relative import path for consistency.

Every other import in this file uses a relative path (e.g., "../../bootstrap/state.js"). This line uses an absolute-style "src/services/api/openaiCompatible.js". While this may resolve via baseUrl/paths, it breaks the convention used throughout the module and can behave differently across build/test tool configurations.

♻️ Proposed fix
-import { getOpenAICompatibleDefaultModel } from "src/services/api/openaiCompatible.js";
+import { getOpenAICompatibleDefaultModel } from "../../services/api/openaiCompatible.js";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/model/model.ts` at line 26, Replace the absolute-style import of
getOpenAICompatibleDefaultModel with a relative import consistent with the other
imports in model.ts; locate the import statement that references
getOpenAICompatibleDefaultModel and change it to a relative-path import
(matching the project's relative import pattern used elsewhere in this file) so
the module resolves consistently across build/test environments.
src/commands/provider/index.ts (1)

4-11: description is captured at module load and goes stale after switching.

description: `Switch API provider (currently ${getAPIProvider()})` is evaluated once when this module is first imported. After the user runs /provider copilot, the command listing still advertises the prior provider until the process restarts.

If a dynamic description is desired, consider computing it lazily (e.g., via a function/getter where the Command type supports it) or omitting the "currently …" suffix to avoid advertising stale state.
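One way to compute the description at call time, assuming the Command type tolerates an accessor property (getAPIProvider here is a simplified stand-in for the real function):

```typescript
// Simplified stand-in: the real getAPIProvider checks several env flags.
function getAPIProvider(): string {
  return process.env.CLAUDE_CODE_USE_COPILOT === "1" ? "copilot" : "firstParty";
}

const providerCommand = {
  type: "local",
  name: "provider",
  // A getter is re-evaluated on every access, so the listed description
  // tracks provider switches instead of freezing at module load.
  get description(): string {
    return `Switch API provider (currently ${getAPIProvider()})`;
  },
};
```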

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/provider/index.ts` around lines 4 - 11, The description string
on providerCommand is computed once at module load and becomes stale; change
providerCommand.description to be computed at call-time (e.g., make description
a getter or function that calls getAPIProvider() when accessed, or if the
Command type doesn't support lazy descriptions remove the "currently …" suffix
and keep a static message) so the displayed provider reflects the current state;
update the providerCommand object (referencing providerCommand and
getAPIProvider) accordingly.
src/utils/model/modelStrings.ts (1)

168-183: Consider serializing with sequential to avoid racing updateBedrockModelStrings.

updateBedrockModelStrings is wrapped with sequential to serialize concurrent callers. refreshModelStringsForCurrentProvider calls getBedrockModelStrings() directly, so if a /provider switch happens while a background bedrock update is in flight (e.g., from initModelStrings()), the later write can overwrite the earlier one non-deterministically.

Low-likelihood in practice, but easy to harden by reusing the same sequential guard or invoking updateBedrockModelStrings when the target is bedrock.
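For context, a minimal sketch of what such a sequential guard does; the project's actual sequential helper may differ in shape and error handling:

```typescript
// Chains concurrent calls so each run starts only after the previous settles.
function sequential(fn: () => Promise<void>): () => Promise<void> {
  let tail: Promise<void> = Promise.resolve();
  return () => {
    const next = tail.then(fn);
    // Swallow errors on the internal chain so one failure doesn't block later calls.
    tail = next.catch(() => {});
    return next;
  };
}
```

Routing every writer through the same guard is what prevents the non-deterministic last-write-wins overwrite described above.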

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/model/modelStrings.ts` around lines 168 - 183, The function
refreshModelStringsForCurrentProvider calls getBedrockModelStrings() directly
which can race with the sequential-wrapped updateBedrockModelStrings; change
refreshModelStringsForCurrentProvider so that when provider === 'bedrock' it
invokes the same serialized updater (updateBedrockModelStrings) or uses the
existing sequential guard rather than calling getBedrockModelStrings() directly,
then setModelStringsState with the returned/updated value and preserve the
existing error handling path.
src/commands/provider/provider.ts (1)

122-129: Duplicated provider-from-env logic.

currentProviderFromEnv() re-implements the exact same env-flag priority as getAPIProvider() (already imported at line 3). Since Object.assign(process.env, providerEnv(parsed)) at line 165 runs after this read, you can simply call getAPIProvider() here to avoid drift if new providers are added later.

♻️ Proposed refactor
-function currentProviderFromEnv(): ProviderName {
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK)) return "bedrock";
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX)) return "vertex";
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY)) return "foundry";
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_COPILOT)) return "copilot";
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) return "openai";
-  return "firstParty";
-}
+// getAPIProvider already returns the same union and uses the same precedence.

Then replace currentProviderFromEnv() call at line 147 with getAPIProvider() (and drop the now-unused isEnvTruthy import).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/provider/provider.ts` around lines 122 - 129,
currentProviderFromEnv() duplicates getAPIProvider()'s env-flag priority and can
drift; remove the duplicate function and replace its usages (e.g., the call
currently at the location that chooses the provider) with getAPIProvider(), and
remove the now-unused isEnvTruthy import. Ensure you still import
getAPIProvider() if not already and keep providerEnv(parsed) assignment as-is so
env precedence remains consistent.
src/services/api/openaiCompatible.ts (1)

639-641: isOpenAICompatibleModel is too loose — "o1"/"o3" substring match produces false positives.

model.includes("o1") / model.includes("o3") will match any model ID containing those two characters anywhere (e.g. "model-promo1", "claude-co3", etc.). Anchor the check to OpenAI's actual naming (exact prefix/regex), or enumerate known model families.

♻️ Proposed tightening
-export function isOpenAICompatibleModel(model: string): boolean {
-  return model.includes("gpt-") || model.includes("o1") || model.includes("o3");
-}
+export function isOpenAICompatibleModel(model: string): boolean {
+  return /^(gpt-|o1(-|$)|o3(-|$))/.test(model);
+}
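The false positive can be demonstrated directly, comparing the loose substring check against the proposed anchored regex:

```typescript
// Loose check from the current code: substring match anywhere in the id.
const loose = (model: string): boolean =>
  model.includes("gpt-") || model.includes("o1") || model.includes("o3");

// Anchored check from the proposed tightening: match only at the start,
// and require "o1"/"o3" to be a whole family segment.
const anchored = (model: string): boolean =>
  /^(gpt-|o1(-|$)|o3(-|$))/.test(model);

// "model-promo1" contains "o1" as a substring, so the loose check
// misclassifies it; the anchored check does not.
```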
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/openaiCompatible.ts` around lines 639 - 641, The current
isOpenAICompatibleModel function is too permissive because model.includes("o1")
/ includes("o3") matches any string containing those characters; update
isOpenAICompatibleModel to use anchored checks instead: keep
model.includes("gpt-") as a prefix check (e.g., startsWith "gpt-") and replace
the loose includes for "o1"/"o3" with either exact prefix checks (e.g.,
startsWith "o1" or "o3" or with a regex like /^o[13](?:-|$)/) or an explicit
whitelist of known OpenAI model IDs; locate and update the
isOpenAICompatibleModel function to implement the anchored checks or whitelist.
src/services/api/client.ts (1)

329-371: Copilot/OpenAI branches duplicate most of their shape.

The two branches only differ in credential source (COPILOT_TOKEN vs OpenAI-compatible key), base URL, and the two extra Copilot headers. Consider extracting a small helper (e.g. buildOpenAICompatibleAnthropicClient(apiKey, baseUrl, extraHeaders)) to keep them in sync when fields like timeout/retries/headers change.

Also note: the Copilot-specific headers set on lines 367-368 are passed into toAnthropicClientForOpenAICompatible's defaultHeaders, but the OpenAI-compatible fetch adapter currently discards init.headers when it forwards to /chat/completions — see the corresponding comment on src/services/api/openaiCompatible.ts (lines 475-485). This means Openai-Intent/x-initiator never reach api.githubcopilot.com today.
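The suggested helper might look roughly like this. The config shape and the header values are placeholders for illustration, not the PR's implementation; only the header names (Openai-Intent, x-initiator) and the default base URL come from the review above:

```typescript
interface CompatClientConfig {
  apiKey: string;
  baseUrl: string;
  defaultHeaders: Record<string, string>;
}

// One construction path keeps timeout/retry/header changes in sync
// across the openai and copilot branches.
function buildOpenAICompatibleClient(
  apiKey: string,
  baseUrl: string,
  extraHeaders: Record<string, string> = {},
): CompatClientConfig {
  return { apiKey, baseUrl, defaultHeaders: { ...extraHeaders } };
}

// Copilot branch: same shape plus the two Copilot-specific headers
// (header values here are placeholders).
const copilotClient = buildOpenAICompatibleClient(
  process.env.COPILOT_TOKEN ?? "",
  process.env.COPILOT_BASE_URL ?? "https://api.githubcopilot.com",
  { "Openai-Intent": "conversation-panel", "x-initiator": "user" },
);
```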

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/client.ts` around lines 329 - 371, The two branches for
getAPIProvider() ("openai" and "copilot") duplicate construction logic; refactor
by adding a helper like buildOpenAICompatibleAnthropicClient(apiKey, baseUrl,
extraHeaders) that centralizes creating createOpenAICompatibleFetch(...) and
calling toAnthropicClientForOpenAICompatible({fetch, maxRetries, timeout:
ARGS.timeout, defaultHeaders}). In the copilot branch call that helper with
process.env.COPILOT_TOKEN, COPILOT_BASE_URL (defaulting to
"https://api.githubcopilot.com") and the extra headers ("Openai-Intent" and
"x-initiator"), and in the openai branch call it with
getOpenAICompatibleApiKey() and getOpenAICompatibleBaseUrl(). Also fix the
openai-compatible fetch adapter (createOpenAICompatibleFetch /
src/services/api/openaiCompatible.ts) to forward init.headers through to the
upstream /chat/completions request so Copilot headers passed via defaultHeaders
actually reach api.githubcopilot.com.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/commands/provider/index.ts`:
- Line 8: The argumentHint string for the provider command is missing the new
"copilot" option; update the argumentHint constant (argumentHint) in the
provider command module to include "copilot" so it reads like
'[first-party|bedrock|vertex|foundry|openai|copilot] | show' (or equivalent
ordering), ensuring the slash-command UI and help text advertise the new
provider.

In `@src/services/api/bootstrap.ts`:
- Around line 119-127: The openai/copilot refresh calls
(refreshOpenAICompatibleModelOptions and refreshCopilotModelOptions) are outside
the existing try/catch in fetchBootstrapData so their exceptions can escape;
move these calls inside the same try block that wraps the firstParty path (or
wrap each call with its own try/catch that calls logError) so failures degrade
gracefully like the firstParty path and do not propagate out of
fetchBootstrapData.

In `@src/services/api/openaiCompatible.test.ts`:
- Around line 122-160: The test "clamps max_tokens for providers with smaller
limits" sets process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS but deletes it only at
the end of the test, which can leak on failures; ensure the env var is always
cleaned up by either wrapping the test body in a try/finally that deletes
process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS in the finally block, or add an
afterEach hook that deletes process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS so cleanup
runs unconditionally (refer to the test name and the environment variable
CLAUDE_CODE_OPENAI_MAX_TOKENS to locate where to apply the change).

In `@src/services/api/openaiCompatible.ts`:
- Around line 475-485: The outgoing fetch call that creates providerResponse
builds a fresh headers object and drops caller-supplied init?.headers (losing
Copilot-specific and custom headers); fix by merging the incoming headers into
the request headers before overriding adapter-owned fields: take init?.headers
(or any headers passed into the function), copy them into the headers for the
fetch, then set/override "Content-Type" and Authorization: `Bearer
${options.apiKey}` (and any adapter-required headers like Openai-Intent) so
adapter-owned values win; also optionally strip Anthropic-only headers (e.g.,
"anthropic-version", "x-api-key", "x-client-request-id", CCH headers) from the
merged headers before sending.
- Around line 588-599: The fetch used in createOpenAICompatibleFetch (and
specifically the /chat/completions request) is missing required GitHub Copilot
identification headers; update the request headers to include Editor-Version and
Copilot-Integration-Id (and also include Editor-Plugin-Version where
appropriate) alongside existing Authorization, Content-Type, User-Agent and
Openai-Intent so the Copilot API accepts the call; also replace the manual
.replace(/\/+$/, "") normalization in refreshCopilotModelOptions with the
existing trimTrailingSlash() utility to normalize baseUrl consistently.

In `@src/utils/managedEnvConstants.ts`:
- Around line 154-159: Add the missing Copilot flags to the SAFE_ENV_VARS array
by including 'CLAUDE_CODE_USE_COPILOT' and 'COPILOT_BASE_URL' so
provider-selection parity is preserved with other CLAUDE_CODE_USE_* entries;
locate the SAFE_ENV_VARS definition in managedEnvConstants.ts to insert these
exact symbols. Also update the header comment block (around the existing note
that marks ANTHROPIC_* as dangerous) to either document the explicit rationale
why OPENAI_* and CLAUDE_CODE_OPENAI_* are treated as safe despite the same
threat model or move those OpenAI-related symbols out of SAFE_ENV_VARS to match
the documented security stance — ensure the comment references the exact
environment names (OPENAI_BASE_URL, OPENAI_API_KEY, CLAUDE_CODE_OPENAI_BASE_URL,
CLAUDE_CODE_OPENAI_API_KEY) so the reasoning and list remain consistent.

In `@src/utils/status.tsx`:
- Around line 309-320: The providerLabel lookup for apiProvider in the status
rendering is missing the "copilot" key, causing undefined values; update the map
used to compute providerLabel (the object literal assigned to providerLabel) to
include copilot: "Copilot" (or the desired display string) so that when
apiProvider === "copilot" the properties.push({ label: "API provider", value:
providerLabel }) receives a proper string; ensure this change is made in the
same conditional block that references apiProvider and providerLabel in
src/utils/status.tsx so the status pane no longer shows an undefined value.

---

Outside diff comments:
In `@src/utils/managedEnvConstants.ts`:
- Around line 14-62: The PROVIDER_MANAGED_ENV_VARS set is missing
Copilot-related keys so host-managed routing can be overridden by user settings;
add 'CLAUDE_CODE_USE_COPILOT', 'COPILOT_TOKEN', and 'COPILOT_BASE_URL' to the
PROVIDER_MANAGED_ENV_VARS Set (the same collection that already contains
'CLAUDE_CODE_USE_OPENAI', 'OPENAI_API_KEY', and 'OPENAI_BASE_URL') so
hasProviderPrereqs("copilot") and the Copilot routing referenced in status.tsx
are properly protected by CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST.
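A minimal sketch of the parity fix this comment asks for, assuming the Set shape implied by the prompt (the real PROVIDER_MANAGED_ENV_VARS in the PR contains many more entries):

```typescript
// Illustrative subset of PROVIDER_MANAGED_ENV_VARS: the OpenAI routing
// keys the comment says it already contains, plus the Copilot
// equivalents the review says are missing.
const PROVIDER_MANAGED_ENV_VARS = new Set<string>([
  "CLAUDE_CODE_USE_OPENAI",
  "OPENAI_API_KEY",
  "OPENAI_BASE_URL",
  // Added for Copilot parity:
  "CLAUDE_CODE_USE_COPILOT",
  "COPILOT_TOKEN",
  "COPILOT_BASE_URL",
]);

console.log(PROVIDER_MANAGED_ENV_VARS.has("COPILOT_TOKEN")); // true
```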

In `@src/utils/model/model.ts`:
- Around line 9-17: The import list in src/utils/model/model.ts includes
isCodexSubscriber twice causing a duplicate-identifier TS2300 error; remove the
redundant import entry so each symbol (e.g., getSubscriptionType,
isClaudeAISubscriber, isCodexSubscriber, isMaxSubscriber, isProSubscriber,
isTeamPremiumSubscriber) appears only once in the named-import from "../auth.js"
(you can simply delete the second isCodexSubscriber in the import statement).

---

Nitpick comments:
In `@src/commands/provider/index.ts`:
- Around line 4-11: The description string on providerCommand is computed once
at module load and becomes stale; change providerCommand.description to be
computed at call-time (e.g., make description a getter or function that calls
getAPIProvider() when accessed, or if the Command type doesn't support lazy
descriptions remove the "currently …" suffix and keep a static message) so the
displayed provider reflects the current state; update the providerCommand object
(referencing providerCommand and getAPIProvider) accordingly.
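One way to make the description call-time, sketched with a property getter; getAPIProvider is stubbed here and the other command fields are omitted, so treat this as a shape rather than the PR's code:

```typescript
// Stub of getAPIProvider so the sketch is self-contained.
let current = "first-party";
const getAPIProvider = (): string => current;

// A getter re-evaluates on every access, so the text never goes stale.
const providerCommand = {
  name: "provider",
  get description(): string {
    return `Switch API provider (currently ${getAPIProvider()})`;
  },
};

console.log(providerCommand.description); // "...(currently first-party)"
current = "copilot"; // simulate /provider copilot
console.log(providerCommand.description); // "...(currently copilot)"
```

If the Command type only accepts a plain string, the fallback the comment suggests (dropping the "currently …" suffix) sidesteps the staleness entirely.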

In `@src/commands/provider/provider.ts`:
- Around line 122-129: currentProviderFromEnv() duplicates getAPIProvider()'s
env-flag priority and can drift; remove the duplicate function and replace its
usages (e.g., the call currently at the location that chooses the provider) with
getAPIProvider(), and remove the now-unused isEnvTruthy import. Ensure you still
import getAPIProvider() if not already and keep providerEnv(parsed) assignment
as-is so env precedence remains consistent.

In `@src/services/api/client.ts`:
- Around line 329-371: The two branches for getAPIProvider() ("openai" and
"copilot") duplicate construction logic; refactor by adding a helper like
buildOpenAICompatibleAnthropicClient(apiKey, baseUrl, extraHeaders) that
centralizes creating createOpenAICompatibleFetch(...) and calling
toAnthropicClientForOpenAICompatible({fetch, maxRetries, timeout: ARGS.timeout,
defaultHeaders}). In the copilot branch call that helper with
process.env.COPILOT_TOKEN, COPILOT_BASE_URL (defaulting to
"https://api.githubcopilot.com") and the extra headers ("Openai-Intent" and
"x-initiator"), and in the openai branch call it with
getOpenAICompatibleApiKey() and getOpenAICompatibleBaseUrl(). Also fix the
openai-compatible fetch adapter (createOpenAICompatibleFetch /
src/services/api/openaiCompatible.ts) to forward init.headers through to the
upstream /chat/completions request so Copilot headers passed via defaultHeaders
actually reach api.githubcopilot.com.

In `@src/services/api/openaiCompatible.ts`:
- Around line 639-641: The current isOpenAICompatibleModel function is too
permissive because model.includes("o1") / includes("o3") matches any string
containing those characters; update isOpenAICompatibleModel to use anchored
checks instead: keep model.includes("gpt-") as a prefix check (e.g., startsWith
"gpt-") and replace the loose includes for "o1"/"o3" with either exact prefix
checks (e.g., startsWith "o1" or "o3" or with a regex like /^o[13](?:-|$)/) or
an explicit whitelist of known OpenAI model IDs; locate and update the
isOpenAICompatibleModel function to implement the anchored checks or whitelist.
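The anchored variant could look like the following; the function name matches the one cited above, but the body is a sketch of the suggested checks, not the PR's code:

```typescript
// Anchored model check: "gpt-" as a prefix, and o1/o3 only when they
// start the ID and are followed by a separator or end-of-string, so
// IDs like "piano3" or "deepseek-o1x" no longer match.
function isOpenAICompatibleModel(model: string): boolean {
  if (model.startsWith("gpt-")) return true;
  return /^o[13](?:[-.]|$)/.test(model);
}

console.log(isOpenAICompatibleModel("gpt-4o"));  // true
console.log(isOpenAICompatibleModel("o1-mini")); // true
console.log(isOpenAICompatibleModel("piano3"));  // false
```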

In `@src/utils/model/model.ts`:
- Line 26: Replace the absolute-style import of getOpenAICompatibleDefaultModel
with a relative import consistent with the other imports in model.ts; locate the
import statement that references getOpenAICompatibleDefaultModel and change it
to a relative-path import (matching the project's relative import pattern used
elsewhere in this file) so the module resolves consistently across build/test
environments.

In `@src/utils/model/modelStrings.ts`:
- Around line 168-183: The function refreshModelStringsForCurrentProvider calls
getBedrockModelStrings() directly which can race with the sequential-wrapped
updateBedrockModelStrings; change refreshModelStringsForCurrentProvider so that
when provider === 'bedrock' it invokes the same serialized updater
(updateBedrockModelStrings) or uses the existing sequential guard rather than
calling getBedrockModelStrings() directly, then setModelStringsState with the
returned/updated value and preserve the existing error handling path.
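The serialized-updater pattern this comment refers to can be sketched as a shared promise chain; the names below are illustrative, not the PR's actual guard:

```typescript
// Route every refresh through one promise chain so a direct fetch can
// never interleave with an in-flight update.
let queue: Promise<unknown> = Promise.resolve();

function runSequential<T>(task: () => Promise<T>): Promise<T> {
  const result = queue.then(task);
  // Keep the chain alive even when a task rejects.
  queue = result.catch(() => undefined);
  return result;
}

// Demo: a slow refresh started first still finishes before a fast one.
const order: number[] = [];
const slow = () =>
  new Promise<void>((r) => setTimeout(r, 20)).then(() => { order.push(1); });
const fast = () => Promise.resolve().then(() => { order.push(2); });

Promise.all([runSequential(slow), runSequential(fast)]).then(() => {
  console.log(order); // [1, 2]
});
```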

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8624aa90-f6ec-4bd7-ae54-43962c089ea9

📥 Commits

Reviewing files that changed from the base of the PR and between 7dc15d6 and a9ac2f7.

📒 Files selected for processing (15)
  • package.json
  • src/commands.ts
  • src/commands/provider/index.ts
  • src/commands/provider/provider.ts
  • src/services/api/bootstrap.ts
  • src/services/api/client.ts
  • src/services/api/openaiCompatible.test.ts
  • src/services/api/openaiCompatible.ts
  • src/utils/managedEnvConstants.ts
  • src/utils/model/configs.ts
  • src/utils/model/model.ts
  • src/utils/model/modelOptions.ts
  • src/utils/model/modelStrings.ts
  • src/utils/model/providers.ts
  • src/utils/status.tsx

type: 'local',
name: 'provider',
description: `Switch API provider (currently ${getAPIProvider()})`,
argumentHint: '[first-party|bedrock|vertex|foundry|openai] | show',

⚠️ Potential issue | 🟠 Major

argumentHint is missing copilot.

The PR adds copilot as a new provider, but the hint advertised to users (and shown in the slash-command UI) still lists only first-party|bedrock|vertex|foundry|openai. Users won't discover /provider copilot from the hint.

🐛 Proposed fix
-  argumentHint: '[first-party|bedrock|vertex|foundry|openai] | show',
+  argumentHint: '[first-party|bedrock|vertex|foundry|openai|copilot] | show',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/provider/index.ts` at line 8, The argumentHint string for the
provider command is missing the new "copilot" option; update the argumentHint
constant (argumentHint) in the provider command module to include "copilot" so
it reads like '[first-party|bedrock|vertex|foundry|openai|copilot] | show' (or
equivalent ordering), ensuring the slash-command UI and help text advertise the
new provider.

Comment on lines +119 to +127
if (getAPIProvider() === "openai") {
await refreshOpenAICompatibleModelOptions();
return;
}

if (getAPIProvider() === "copilot") {
await refreshCopilotModelOptions();
return;
}

⚠️ Potential issue | 🟡 Minor

Bring openai/copilot refreshes inside the error-handling boundary.

The firstParty path below is wrapped in try { … } catch (error) { logError(error) }, so bootstrap failures degrade gracefully. The new openai/copilot short-circuits are placed before that boundary — a throw from refreshOpenAICompatibleModelOptions() or refreshCopilotModelOptions() (e.g., a transient /models fetch failure) will propagate out of fetchBootstrapData() and is handled differently than the firstParty path.

🛡️ Proposed fix
 export async function fetchBootstrapData(): Promise<void> {
-  if (getAPIProvider() === "openai") {
-    await refreshOpenAICompatibleModelOptions();
-    return;
-  }
-
-  if (getAPIProvider() === "copilot") {
-    await refreshCopilotModelOptions();
-    return;
-  }
-
   try {
+    if (getAPIProvider() === "openai") {
+      await refreshOpenAICompatibleModelOptions();
+      return;
+    }
+
+    if (getAPIProvider() === "copilot") {
+      await refreshCopilotModelOptions();
+      return;
+    }
+
     const response = await fetchBootstrapAPI();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/bootstrap.ts` around lines 119 - 127, The openai/copilot
refresh calls (refreshOpenAICompatibleModelOptions and
refreshCopilotModelOptions) are outside the existing try/catch in
fetchBootstrapData so their exceptions can escape; move these calls inside the
same try block that wraps the firstParty path (or wrap each call with its own
try/catch that calls logError) so failures degrade gracefully like the
firstParty path and do not propagate out of fetchBootstrapData.

Comment on lines +122 to +160
test('clamps max_tokens for providers with smaller limits', async () => {
let capturedBody: Record<string, unknown> | null = null
process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS = '8192'

globalThis.fetch = (async (_url: RequestInfo | URL, init?: RequestInit) => {
capturedBody = JSON.parse(String(init?.body || '{}')) as Record<
string,
unknown
>
return new Response(
JSON.stringify({
id: 'resp_3',
model: 'deepseek-chat',
choices: [{ message: { content: 'ok' } }],
usage: { prompt_tokens: 1, completion_tokens: 1 },
}),
{ status: 200, headers: { 'Content-Type': 'application/json' } },
)
}) as typeof globalThis.fetch

const fetchAdapter = createOpenAICompatibleFetch({
apiKey: 'test-key',
baseUrl: 'https://provider.example/v1',
})

await fetchAdapter('https://api.anthropic.com/v1/messages', {
method: 'POST',
body: JSON.stringify({
model: 'deepseek-chat',
stream: false,
max_tokens: 64000,
messages: [{ role: 'user', content: 'hello' }],
}),
})

expect(capturedBody?.max_tokens).toBe(8192)

delete process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS
})

⚠️ Potential issue | 🟡 Minor

Env var cleanup isn't guaranteed on test failure.

If any assertion above line 159 fails, delete process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS never executes and the 8192 cap leaks into subsequent tests/suites that rely on the default. Move the cleanup into an afterEach (or wrap it in try/finally) so it runs unconditionally.

🧹 Proposed fix
+  afterEach(() => {
+    delete process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS
+  })
+
   test('clamps max_tokens for providers with smaller limits', async () => {
     let capturedBody: Record<string, unknown> | null = null
     process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS = '8192'
     ...
     expect(capturedBody?.max_tokens).toBe(8192)
-
-    delete process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS
   })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/openaiCompatible.test.ts` around lines 122 - 160, The test
"clamps max_tokens for providers with smaller limits" sets
process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS but deletes it only at the end of the
test, which can leak on failures; ensure the env var is always cleaned up by
either wrapping the test body in a try/finally that deletes
process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS in the finally block, or add an
afterEach hook that deletes process.env.CLAUDE_CODE_OPENAI_MAX_TOKENS so cleanup
runs unconditionally (refer to the test name and the environment variable
CLAUDE_CODE_OPENAI_MAX_TOKENS to locate where to apply the change).

Comment on lines +475 to +485
const providerResponse = await globalThis.fetch(
`${baseUrl}/chat/completions`,
{
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${options.apiKey}`,
},
body: JSON.stringify(openAIBody),
},
);

⚠️ Potential issue | 🟠 Major

Outgoing request drops init.headers — Copilot-specific headers never reach the provider.

When the adapter forwards to /chat/completions, it builds a fresh headers object containing only Content-Type and Authorization. The caller-supplied init?.headers (which the Anthropic SDK populates from the client's defaultHeaders) is discarded entirely.

Concrete downstream impact from src/services/api/client.ts lines 365-369: the Copilot branch carefully sets Openai-Intent: "conversation-edits" and x-initiator: "user" via defaultHeaders, but those headers are silently stripped here and never reach api.githubcopilot.com. Any other custom headers (proxy auth, User-Agent, etc.) are also lost.

🔧 Proposed fix — forward incoming headers, then override the fields the adapter owns
+    const forwardedHeaders = new Headers(init?.headers);
+    forwardedHeaders.set("Content-Type", "application/json");
+    forwardedHeaders.set("Authorization", `Bearer ${options.apiKey}`);
+
     const providerResponse = await globalThis.fetch(
       `${baseUrl}/chat/completions`,
       {
         method: "POST",
-        headers: {
-          "Content-Type": "application/json",
-          Authorization: `Bearer ${options.apiKey}`,
-        },
+        headers: forwardedHeaders,
         body: JSON.stringify(openAIBody),
       },
     );

You may also want to strip Anthropic-only headers (anthropic-version, x-api-key, x-client-request-id, CCH headers) before forwarding, since those are meaningless to the OpenAI/Copilot backend and some proxies reject unknown headers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/openaiCompatible.ts` around lines 475 - 485, The outgoing
fetch call that creates providerResponse builds a fresh headers object and drops
caller-supplied init?.headers (losing Copilot-specific and custom headers); fix
by merging the incoming headers into the request headers before overriding
adapter-owned fields: take init?.headers (or any headers passed into the
function), copy them into the headers for the fetch, then set/override
"Content-Type" and Authorization: `Bearer ${options.apiKey}` (and any
adapter-required headers like Openai-Intent) so adapter-owned values win; also
optionally strip Anthropic-only headers (e.g., "anthropic-version", "x-api-key",
"x-client-request-id", CCH headers) from the merged headers before sending.
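The merge-then-override pattern described above can be sketched with the WHATWG Headers API (global in Node 18+); the ANTHROPIC_ONLY list below names the headers cited in the comment and is an assumption, not a constant from the PR:

```typescript
// Forward caller headers, drop Anthropic-only ones, then set the
// fields the adapter owns so its values always win.
const ANTHROPIC_ONLY = ["anthropic-version", "x-api-key", "x-client-request-id"];

function buildForwardHeaders(
  callerHeaders: Record<string, string> | undefined,
  apiKey: string,
): Headers {
  const headers = new Headers(callerHeaders);
  for (const name of ANTHROPIC_ONLY) headers.delete(name);
  headers.set("Content-Type", "application/json");
  headers.set("Authorization", `Bearer ${apiKey}`);
  return headers;
}

const merged = buildForwardHeaders(
  { "Openai-Intent": "conversation-edits", "x-api-key": "secret" },
  "tok",
);
console.log(merged.get("Openai-Intent")); // "conversation-edits"
console.log(merged.get("x-api-key"));     // null (stripped)
```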

Comment on lines +588 to +599
const baseUrl = (
process.env.COPILOT_BASE_URL?.trim() || "https://api.githubcopilot.com"
).replace(/\/+$/, "");
try {
const response = await globalThis.fetch(`${baseUrl}/models`, {
method: "GET",
headers: {
Authorization: `Bearer ${apiKey}`,
"User-Agent": "free-code/1.0",
"Openai-Intent": "conversation-edits",
},
});

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

GitHub Copilot API required headers chat completions Editor-Version Copilot-Integration-Id

💡 Result:

The GitHub Copilot Chat Completions API, accessible at endpoints like https://api.githubcopilot.com/chat/completions, requires specific headers for authentication and IDE context, particularly when used from custom extensions or proxies mimicking IDE clients. Required headers include:

  • Authorization: Bearer <token> (obtained via GitHub token exchange, e.g., https://api.github.com/copilot_internal/v2/token)
  • Content-Type: application/json
  • Editor-Version: e.g., "vscode/1.107.0" (identifies the IDE/editor version; omitting it causes HTTP 400 "bad request: missing Editor-Version header for IDE auth")
  • Copilot-Integration-Id: e.g., "vscode-chat" (identifies the integration/client; frequently required alongside Editor-Version)
  • Editor-Plugin-Version: e.g., "copilot-chat/0.35.0" (plugin version, often paired with the above)
  • User-Agent: e.g., "GitHubCopilotChat/0.35.0" (common in examples)

These IDE-specific headers (Editor-Version, Copilot-Integration-Id, Editor-Plugin-Version) are enforced for IDE authentication and are not documented in the public GitHub Docs, which cover management APIs rather than this internal chat endpoint. Evidence comes from real-world usage in extensions, proxies (LiteLLM, OpenClaw), and token-exchange code, where omitting them triggers explicit 400 errors. Values should match the target IDE (e.g., VS Code versions). No official public spec was found; the behavior is reverse-engineered from errors and working examples as of 2026.


🏁 Script executed:

# First, check the file size and read the relevant section
wc -l src/services/api/openaiCompatible.ts

Repository: paoloanzn/free-code

Length of output: 103


🏁 Script executed:

# Read the section around lines 588-599
sed -n '580,620p' src/services/api/openaiCompatible.ts

Repository: paoloanzn/free-code

Length of output: 1204


🏁 Script executed:

# Search for trimTrailingSlash usage and definition in this file
rg 'trimTrailingSlash' src/services/api/openaiCompatible.ts -B 2 -A 2

Repository: paoloanzn/free-code

Length of output: 592


🏁 Script executed:

# Find the createOpenAICompatibleFetch function to see what headers it sends
rg 'createOpenAICompatibleFetch|function createOpenAICompatibleFetch' src/services/api/openaiCompatible.ts -A 30

Repository: paoloanzn/free-code

Length of output: 1091


🏁 Script executed:

# Get the full createOpenAICompatibleFetch function
sed -n '/export function createOpenAICompatibleFetch/,/^}/p' src/services/api/openaiCompatible.ts | head -80

Repository: paoloanzn/free-code

Length of output: 2329


🏁 Script executed:

# Also search for where headers are built/merged in that function
rg 'createOpenAICompatibleFetch' src/services/api/openaiCompatible.ts -A 60 | head -100

Repository: paoloanzn/free-code

Length of output: 2010


Add required GitHub Copilot headers to /chat/completions requests.

The /chat/completions call currently sends only Content-Type and Authorization headers, but GitHub Copilot's API requires additional identification headers: Editor-Version and Copilot-Integration-Id are mandatory (requests omitting Editor-Version return HTTP 400). Editor-Plugin-Version is also commonly required. Add these headers to the fetch call in createOpenAICompatibleFetch to match the pattern used in refreshCopilotModelOptions (which already sends User-Agent and Openai-Intent).

Also, use trimTrailingSlash() instead of .replace(/\/+$/, "") for baseUrl normalization in refreshCopilotModelOptions (line ~590) — the utility is already available in this file and is used consistently elsewhere.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/openaiCompatible.ts` around lines 588 - 599, The fetch used
in createOpenAICompatibleFetch (and specifically the /chat/completions request)
is missing required GitHub Copilot identification headers; update the request
headers to include Editor-Version and Copilot-Integration-Id (and also include
Editor-Plugin-Version where appropriate) alongside existing Authorization,
Content-Type, User-Agent and Openai-Intent so the Copilot API accepts the call;
also replace the manual .replace(/\/+$/, "") normalization in
refreshCopilotModelOptions with the existing trimTrailingSlash() utility to
normalize baseUrl consistently.
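The trimTrailingSlash utility itself is not shown in this thread; per the review its behavior is equivalent to the inline regex being replaced, so a minimal sketch would be:

```typescript
// Equivalent of the .replace(/\/+$/, "") normalization the review asks
// to centralize; the utility name comes from the comment above.
const trimTrailingSlash = (url: string): string => url.replace(/\/+$/, "");

// Same usage shape as refreshCopilotModelOptions:
//   trimTrailingSlash(process.env.COPILOT_BASE_URL?.trim() || "https://api.githubcopilot.com")
console.log(trimTrailingSlash("https://api.githubcopilot.com///"));
// "https://api.githubcopilot.com"
```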

Comment on lines +154 to +159
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_OPENAI_BASE_URL',
'CLAUDE_CODE_OPENAI_API_KEY',
'CLAUDE_CODE_OPENAI_DEFAULT_MODEL',
'OPENAI_BASE_URL',
'OPENAI_API_KEY',

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n src/utils/managedEnvConstants.ts

Repository: paoloanzn/free-code

Length of output: 8630


🏁 Script executed:

rg -i "copilot" --type ts --type js | head -50

Repository: paoloanzn/free-code

Length of output: 3778


Add Copilot provider vars to SAFE_ENV_VARS and reconcile security documentation for OpenAI equivalents.

Two critical issues with this section:

  1. Asymmetry in provider selection: CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, and CLAUDE_CODE_USE_FOUNDRY are all in SAFE_ENV_VARS alongside CLAUDE_CODE_USE_OPENAI, but CLAUDE_CODE_USE_COPILOT and COPILOT_BASE_URL are missing. This means managed settings rollouts of Copilot will trigger the security dialog while OpenAI rollouts won't, creating a problematic UX inconsistency. Both CLAUDE_CODE_USE_COPILOT and COPILOT_BASE_URL need to be added.

  2. Undocumented security decision for OpenAI vars: The file header (lines 100–113) explicitly marks ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY as dangerous because they can redirect traffic to an attacker server or swap API keys. Yet OPENAI_BASE_URL, CLAUDE_CODE_OPENAI_BASE_URL, OPENAI_API_KEY, and CLAUDE_CODE_OPENAI_API_KEY are in SAFE_ENV_VARS despite having the identical threat model. The comment block must be updated to document why OpenAI providers are intentionally treated differently from Anthropic providers in this regard (or the decision should be reconsidered).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/managedEnvConstants.ts` around lines 154 - 159, Add the missing
Copilot flags to the SAFE_ENV_VARS array by including 'CLAUDE_CODE_USE_COPILOT'
and 'COPILOT_BASE_URL' so provider-selection parity is preserved with other
CLAUDE_CODE_USE_* entries; locate the SAFE_ENV_VARS definition in
managedEnvConstants.ts to insert these exact symbols. Also update the header
comment block (around the existing note that marks ANTHROPIC_* as dangerous) to
either document the explicit rationale why OPENAI_* and CLAUDE_CODE_OPENAI_* are
treated as safe despite the same threat model or move those OpenAI-related
symbols out of SAFE_ENV_VARS to match the documented security stance — ensure
the comment references the exact environment names (OPENAI_BASE_URL,
OPENAI_API_KEY, CLAUDE_CODE_OPENAI_BASE_URL, CLAUDE_CODE_OPENAI_API_KEY) so the
reasoning and list remain consistent.

Comment thread src/utils/status.tsx
Comment on lines +309 to 320
  if (apiProvider !== "firstParty") {
    const providerLabel = {
-     bedrock: 'AWS Bedrock',
-     vertex: 'Google Vertex AI',
-     foundry: 'Microsoft Foundry'
+     bedrock: "AWS Bedrock",
+     vertex: "Google Vertex AI",
+     foundry: "Microsoft Foundry",
+     openai: "OpenAI-compatible",
    }[apiProvider];
    properties.push({
-     label: 'API provider',
-     value: providerLabel
+     label: "API provider",
+     value: providerLabel,
    });
  }

⚠️ Potential issue | 🟠 Major

providerLabel map is missing copilot — status will show "API provider: undefined".

apiProvider can be "copilot" (the else-branch at line 399 handles it), but the label lookup at lines 310-315 only covers bedrock/vertex/foundry/openai. When a user runs /provider copilot or sets CLAUDE_CODE_USE_COPILOT=1, this block falls through and pushes { label: "API provider", value: undefined }, producing a blank/undefined row in the status pane.

🛠️ Proposed fix
     const providerLabel = {
       bedrock: "AWS Bedrock",
       vertex: "Google Vertex AI",
       foundry: "Microsoft Foundry",
       openai: "OpenAI-compatible",
+      copilot: "GitHub Copilot",
     }[apiProvider];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/status.tsx` around lines 309 - 320, The providerLabel lookup for
apiProvider in the status rendering is missing the "copilot" key, causing
undefined values; update the map used to compute providerLabel (the object
literal assigned to providerLabel) to include copilot: "Copilot" (or the desired
display string) so that when apiProvider === "copilot" the properties.push({
label: "API provider", value: providerLabel }) receives a proper string; ensure
this change is made in the same conditional block that references apiProvider
and providerLabel in src/utils/status.tsx so the status pane no longer shows an
undefined value.
