
@sergei-bronnikov sergei-bronnikov commented Jan 30, 2026

https://bugtracker.codiodev.com/issue/codio-17369/Upgrade-BricksLLM-to-Support-OpenAI-Image-Generation

Summary by CodeRabbit

  • New Features
    • Added image response metadata tracking to capture token usage, quality, and image dimensions.
    • Introduced support for new GPT image generation models (gpt-image-1, gpt-image-1.5, and variants).
    • Enhanced image cost estimation to use metadata-based calculations when available, with fallback to resolution-based pricing.

@destitutus (Member) commented

@coderabbitai full review


coderabbitai bot commented Feb 2, 2026

✅ Actions performed

Full review triggered.


coderabbitai bot commented Feb 2, 2026

Walkthrough

Added support for capturing and leveraging OpenAI image response metadata to enable more accurate cost estimation. The changes introduce new types to represent image usage data, extend cost estimation interfaces to accept metadata, and implement logic to extract and propagate metadata through HTTP proxying and event logging.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Type Definitions**<br>`internal/provider/openai/types.go` | Added three new types (`ImageResponseMetadata`, `ImageResponseUsage`, `ImageResponseInputTokensDetails`) to represent image API response metadata, including token usage and image dimensions. |
| **Cost Estimation**<br>`internal/provider/openai/cost.go`, `internal/message/handler.go` | Extended image cost estimation to accept a metadata parameter; added token-based cost maps for image models (`gpt-image-1`, `gpt-image-1.5`, etc.); implemented an `estimateImageByMetadata` function for metadata-driven cost calculation with fallback to resolution-based pricing. |
| **Metadata Extraction & Propagation**<br>`internal/server/web/proxy/image.go`, `internal/server/web/proxy/proxy.go`, `internal/server/web/proxy/middleware.go` | Added helpers to parse image response metadata from JSON, store it in the request context, and propagate it to event logging; integrated metadata extraction into three image endpoints (generations, edits, variations). |
| **Event Tracking**<br>`internal/event/event_with_request_and_response.go` | Added an `ImageResponseMetadata` field to the `EventWithRequestAndContent` struct to carry metadata through the event system. |
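Based on the OpenAI image API's documented response shape, the three new types and a decoding helper might look like the sketch below. The field names mirror the API's `usage` block; the actual struct layouts in `internal/provider/openai/types.go` may differ, and `parseImageResponseMetadata` is a hypothetical helper for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical layouts for the three new types; field names follow the
// OpenAI image API's "usage" block, but the real structs may differ.
type ImageResponseInputTokensDetails struct {
	TextTokens  int `json:"text_tokens"`
	ImageTokens int `json:"image_tokens"`
}

type ImageResponseUsage struct {
	InputTokens        int                             `json:"input_tokens"`
	OutputTokens       int                             `json:"output_tokens"`
	TotalTokens        int                             `json:"total_tokens"`
	InputTokensDetails ImageResponseInputTokensDetails `json:"input_tokens_details"`
}

type ImageResponseMetadata struct {
	Usage   ImageResponseUsage `json:"usage"`
	Size    string             `json:"size"`
	Quality string             `json:"quality"`
}

// parseImageResponseMetadata decodes an image API response body into the
// metadata struct, returning an error on malformed JSON.
func parseImageResponseMetadata(raw []byte) (*ImageResponseMetadata, error) {
	var meta ImageResponseMetadata
	if err := json.Unmarshal(raw, &meta); err != nil {
		return nil, err
	}
	return &meta, nil
}

func main() {
	raw := []byte(`{"usage":{"input_tokens":50,"output_tokens":1056,"total_tokens":1106,` +
		`"input_tokens_details":{"text_tokens":10,"image_tokens":40}},"size":"1024x1024","quality":"high"}`)
	meta, err := parseImageResponseMetadata(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(meta.Usage.TotalTokens, meta.Size, meta.Quality) // prints: 1106 1024x1024 high
}
```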

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Proxy
    participant OpenAI as OpenAI API
    participant Handler
    participant CostEstimator
    participant EventLog

    Client->>Proxy: POST /images/generations
    Proxy->>OpenAI: Forward request
    OpenAI->>Proxy: JSON response with metadata
    Proxy->>Proxy: Extract metadata (imageResponseMetadataFromBytes)
    Proxy->>Proxy: Store in context (setCtxImageResponseMetadata)
    Proxy->>Client: Return response

    Proxy->>Handler: Process event with metadata
    Handler->>CostEstimator: EstimateImagesCost(model, quality, resolution, metadata)

    alt Metadata available
        CostEstimator->>CostEstimator: estimateImageByMetadata (text + image tokens)
    else Fallback
        CostEstimator->>CostEstimator: Resolution-based pricing
    end

    CostEstimator->>Handler: Cost calculated
    Handler->>EventLog: Store event with ImageResponseMetadata
    EventLog->>EventLog: Log complete
```
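The extract-and-store hop in the sequence diagram can be sketched as follows. The helper name `imageResponseMetadataFromBytes` comes from the walkthrough, but its body is an assumption, the metadata type is trimmed to two fields, and a plain map stands in for `gin.Context`'s key/value storage.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down stand-in for the metadata type in types.go.
type ImageResponseMetadata struct {
	Usage struct {
		InputTokens  int `json:"input_tokens"`
		OutputTokens int `json:"output_tokens"`
	} `json:"usage"`
}

const imageResponseMetadataKey = "image_response_metadata"

// imageResponseMetadataFromBytes parses the upstream JSON response body,
// returning nil when the body carries no usable metadata.
func imageResponseMetadataFromBytes(body []byte) *ImageResponseMetadata {
	var meta ImageResponseMetadata
	if err := json.Unmarshal(body, &meta); err != nil {
		return nil
	}
	return &meta
}

func main() {
	// A plain map stands in for gin.Context's key/value storage.
	ctx := map[string]any{}
	body := []byte(`{"usage":{"input_tokens":25,"output_tokens":1056}}`)
	if meta := imageResponseMetadataFromBytes(body); meta != nil {
		// Store the pointer, so a later *ImageResponseMetadata type
		// assertion (the getter) matches what was stored.
		ctx[imageResponseMetadataKey] = meta
	}
	stored, ok := ctx[imageResponseMetadataKey].(*ImageResponseMetadata)
	fmt.Println(ok, stored.Usage.OutputTokens) // prints: true 1056
}
```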

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Title check | ❓ Inconclusive | The pull request title 'images/' is too vague and does not clearly convey the main change, using only a generic prefix without descriptive content. | Use a more descriptive title that captures the main change, such as 'Add OpenAI image generation cost estimation and metadata tracking' or 'Extend support for OpenAI image operations with token-based cost calculation'. |

✅ Passed checks (1 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped because CodeRabbit's high-level summary is enabled. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings

🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch 17369_extend_support_image

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai bot left a comment
Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@internal/server/web/proxy/image.go`:
- Around line 126-143: The setter and getter use mismatched types:
setCtxImageResponseMetadata stores imageResponse.Usage (ImageResponseUsage) but
getCtxImageResponseMetadata type-asserts to openai.ImageResponseMetadata,
causing the assertion to fail; fix by storing the full metadata object in
setCtxImageResponseMetadata (store imageResponse itself) and update the getter's
type assertion to expect a *openai.ImageResponseMetadata (or the concrete value
you store) so getCtxImageResponseMetadata returns the stored metadata pointer
correctly.
🧹 Nitpick comments (1)
internal/provider/openai/cost.go (1)

472-506: Metadata-based cost estimation looks good, but silently ignores missing cost entries.

The function correctly calculates costs from text tokens, image tokens, and output tokens. However, the ok return values from cost map lookups (lines 486, 494, 502) are ignored. If a model is in imageModelsWithTokensCost but missing from one of the token cost maps, the cost for that component will silently be 0.

Consider adding validation or logging when a model passes the initial check but is missing from a specific cost map.

♻️ Optional: Add logging for missing cost entries
```diff
 	textInputCost, _ := textInputCostMap[model]
+	// Note: If model is missing, textInputCost defaults to 0
 	totalCost += (float64(textInputTokens) / 1000) * textInputCost
```

Or return an error if the lookup fails:

```diff
-	textInputCost, _ := textInputCostMap[model]
+	textInputCost, ok := textInputCostMap[model]
+	if !ok {
+		return 0, fmt.Errorf("model %s not found in prompt cost map", model)
+	}
```
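Taken together, the reviewer's stricter variant of `estimateImageByMetadata` might look like the sketch below. The map names and per-1K-token prices are illustrative placeholders, not the real entries in `cost.go`.

```go
package main

import "fmt"

// Hypothetical per-1K-token cost maps; model names and prices are
// placeholders, not the actual values in cost.go.
var (
	textInputCostMap  = map[string]float64{"gpt-image-1": 0.005}
	imageInputCostMap = map[string]float64{"gpt-image-1": 0.01}
	outputCostMap     = map[string]float64{"gpt-image-1": 0.04}
)

// estimateImageByMetadata applies the reviewer's suggestion: fail fast with
// an error instead of silently pricing a missing component at zero.
func estimateImageByMetadata(model string, textTokens, imageTokens, outputTokens int) (float64, error) {
	textCost, ok := textInputCostMap[model]
	if !ok {
		return 0, fmt.Errorf("model %s not found in text input cost map", model)
	}
	imageCost, ok := imageInputCostMap[model]
	if !ok {
		return 0, fmt.Errorf("model %s not found in image input cost map", model)
	}
	outputCost, ok := outputCostMap[model]
	if !ok {
		return 0, fmt.Errorf("model %s not found in output cost map", model)
	}
	total := (float64(textTokens)/1000)*textCost +
		(float64(imageTokens)/1000)*imageCost +
		(float64(outputTokens)/1000)*outputCost
	return total, nil
}

func main() {
	cost, err := estimateImageByMetadata("gpt-image-1", 10, 40, 1056)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.6f\n", cost) // prints: 0.042690
}
```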

Comment on lines 126 to 143
```go
func setCtxImageResponseMetadata(ctx *gin.Context, imageResponse *openai.ImageResponseMetadata) {
	if imageResponse == nil {
		return
	}
	ctx.Set(imageResponseMetadataKey, imageResponse.Usage)
}

func getCtxImageResponseMetadata(ctx *gin.Context) *openai.ImageResponseMetadata {
	usage, exists := ctx.Get(imageResponseMetadataKey)
	if !exists {
		return nil
	}
	iru, ok := usage.(openai.ImageResponseMetadata)
	if !ok {
		return nil
	}
	return &iru
}
```
⚠️ Potential issue | 🔴 Critical

Type mismatch between set and get functions will cause metadata to always be nil.

setCtxImageResponseMetadata stores imageResponse.Usage (type ImageResponseUsage) at line 130, but getCtxImageResponseMetadata attempts to type-assert to openai.ImageResponseMetadata at line 138. These are different types, so the type assertion will always fail, returning nil.

Either store the full ImageResponseMetadata or retrieve ImageResponseUsage and reconstruct the metadata.

🐛 Proposed fix: Store the full metadata object
```diff
 func setCtxImageResponseMetadata(ctx *gin.Context, imageResponse *openai.ImageResponseMetadata) {
 	if imageResponse == nil {
 		return
 	}
-	ctx.Set(imageResponseMetadataKey, imageResponse.Usage)
+	ctx.Set(imageResponseMetadataKey, imageResponse)
 }

 func getCtxImageResponseMetadata(ctx *gin.Context) *openai.ImageResponseMetadata {
 	usage, exists := ctx.Get(imageResponseMetadataKey)
 	if !exists {
 		return nil
 	}
-	iru, ok := usage.(openai.ImageResponseMetadata)
+	iru, ok := usage.(*openai.ImageResponseMetadata)
 	if !ok {
 		return nil
 	}
-	return &iru
+	return iru
 }
```
