Merged
1 change: 1 addition & 0 deletions docs/DOCS_INDEX.md
@@ -9,6 +9,7 @@
 - [Open-Core Packaging](OPEN_CORE_PACKAGING.md)
 - [Open-Core Roadmap](OPEN_CORE_ROADMAP.md)
 - [Full Economics Cost Layers](full-economics-cost-layers.md)
+- [IntelIP Pricing Recommendation (Apr 2026)](intelip-pricing-recommendation-apr-2026.md)
 - [IntelIP Modeling Backlog](profitctl-bug-backlog-from-intelip-modeling.md)
 - [Benchmark Scenarios](../benchmark_scenarios/README.md)
 - [Growth Assets](growth/README.md)
99 changes: 99 additions & 0 deletions docs/intelip-pricing-recommendation-apr-2026.md
@@ -0,0 +1,99 @@
# IntelIP Pricing Recommendation (Apr 2026)

This note turns the merged IntelIP scenario pack into an executable launch recommendation.

## What changed

Before running the scenarios, ProfitCtl needed one follow-up fix: `pricing.mode: mix` now accepts `limits.users` as reference metadata instead of rejecting those files during validation. That keeps the scenario pack aligned with its documented modeling posture.
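For reference, the now-accepted shape looks like this (a minimal sketch based on the parser fixture added in this PR; the plan names and numbers are illustrative, not the shipped scenario values):

```yaml
pricing:
  mode: mix
  plans:
    - name: free
      price: 0
      share: 0.7
      limits:
        users: 10    # reference metadata; not enforced in mix mode
    - name: pro
      price: 29
      share: 0.3
      limits:
        users: 100
```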

## Scenarios run

All commands were run from the ProfitCtl repo on Apr 13, 2026:

```bash
go run . simulate -f benchmark_scenarios/intelip_ops_conservative.yml --quiet
go run . simulate -f benchmark_scenarios/intelip_ops_target.yml --quiet
go run . simulate -f benchmark_scenarios/intelip_ops_stress.yml --quiet
go run . simulate -f benchmark_scenarios/intelip_rollout_paid_pilot.yml --quiet
go run . simulate -f benchmark_scenarios/intelip_rollout_tight_free.yml --quiet
go run . simulate -f benchmark_scenarios/intelip_rollout_workspace_minimum_proxy.yml --quiet
```

## Results

| Scenario | Users | Revenue | Delivery Cost | Full Cost | Full Margin | Status |
|---|---:|---:|---:|---:|---:|---|
| Conservative mix | 30 | $700 | $92.10 | $402.10 | 42.56% | Failed |
| Target mix | 90 | $3,661 | $164.35 | $599.35 | 83.63% | Passed |
| Stress mix | 180 | $11,655 | $325.91 | $1,000.91 | 91.41% | Passed |
| Paid pilot | 15 | $2,685 | $97.05 | $407.05 | 84.84% | Passed |
| Tight free tier | 30 | $1,834 | $108.10 | $418.10 | 77.20% | Passed |
| Workspace minimum hybrid | 20 | $1,790 | $102.07 | $412.07 | 76.98% | Passed |

Notes:

- The conservative mix is still the most important launch signal because it approximates the first real cohort rather than a later, healthier mix.
- The stress mix remains useful as an upper-bound sanity check, but it is not the launch decision case.
- The paid-pilot, tight-free, and workspace-minimum shapes all clear the modeled thresholds. The difference is packaging and go-to-market complexity, not raw survivability.

## Recommendation

Use a paid pilot as the default first launch motion.

Why:

- It is the strongest model outcome at the smallest active-user count.
- It protects IntelIP from the exact failure zone exposed by the conservative free-heavy mix.
- It aligns with the current onboarding reality: IntelIP still carries meaningful adoption overhead, and that cost is what erodes the early free-heavy scenario.
- It gives IntelIP the fastest clean path to collect real pricing, billing, and onboarding calibration data.

## Backup recommendation

If IntelIP wants broader top-of-funnel learning instead of the safest launch shape, use the tight free tier as the fallback.

Why:

- It preserves self-serve evaluation.
- It materially improves the low-scale economics versus the current free-heavy mix.
- It is simpler to explain than a workspace-minimum package during the first rollout.

## What not to do

Do not launch with the current free-heavy conservative mix as the default posture.

Reason:

- The scenario fails the margin covenant at the earliest meaningful cohort.
- Delivery cost is healthy, but adoption overhead dominates the full-economics view.
- That means the real problem is not runtime COGS. The problem is allowing too much unpaid evaluation while onboarding is still high-touch.

## When to use workspace minimum

Workspace minimum remains a viable second-stage package, not the best first-stage package.

Why:

- It is economically safe in the model.
- It captures team-level value better than seat-only pricing.
- But it adds packaging complexity before IntelIP has enough live calibration data to know whether the extra pricing surface is worth it.

## Recommended launch sequence

1. Start with a paid pilot.
2. Define the pilot exit criteria into Starter or Pro packaging.
3. Keep the tight free tier ready as the fallback if top-of-funnel learning becomes more important than early monetization certainty.
4. Revisit workspace minimum after the first real design-partner and billing observations are in hand.

## Immediate product and GTM follow-up

1. Write the paid-pilot package in user-facing terms:
- onboarding scope
- included workflows
- expected time-to-value
- conversion target into recurring packaging
2. Record the first real calibration inputs:
- paid invoice count
- observed Stripe fee burden
- onboarding/support hours
- workspace size and active-seat shape
3. Re-run the scenario pack after the first real cohort so the recommendation moves from modeled launch guidance to measured pricing guidance.
38 changes: 38 additions & 0 deletions internal/config/parser_test.go
@@ -263,6 +263,44 @@ variable_costs: []
 	assert.Equal(t, 0.7, *cfg.Pricing.Plans[0].Share)
 }
 
+func TestParseConfig_MixModePreservesReferenceLimits(t *testing.T) {
+	configYAML := `
+simulation:
+  base_users: 100
+  growth_factor: 1.5
+  iterations: 10000
+
+pricing:
+  mode: mix
+  plans:
+    - name: free
+      price: 0
+      share: 0.7
+      limits:
+        users: 10
+    - name: pro
+      price: 29
+      share: 0.3
+      limits:
+        users: 100
+
+fixed_costs: []
+variable_costs: []
+`
+
+	tmpFile := createTempConfigFile(t, configYAML)
+	cfg, err := ParseConfig(tmpFile)
+
+	assert.NoError(t, err)
+	assert.NotNil(t, cfg)
+	assert.NotNil(t, cfg.Pricing)
+	assert.Equal(t, PricingModeMix, cfg.Pricing.Mode)
+	assert.NotNil(t, cfg.Pricing.Plans[0].Limits)
+	assert.Equal(t, 10, cfg.Pricing.Plans[0].Limits.Users)
+	assert.NotNil(t, cfg.Pricing.Plans[1].Limits)
+	assert.Equal(t, 100, cfg.Pricing.Plans[1].Limits.Users)
+}
+
 func TestParseConfig_HybridMode(t *testing.T) {
 	configYAML := `
 simulation:
3 changes: 0 additions & 3 deletions internal/config/validator.go
@@ -225,9 +225,6 @@ func validateMixPricing(pricing *PricingConfig) error {
 		if err := validatePricingPlanCohort(plan); err != nil {
 			return err
 		}
-		if plan.Limits != nil {
-			return fmt.Errorf("pricing mode mix does not support limits on plan %s: %w", plan.Name, ErrPricingInconsistency)
-		}
 		if plan.Share == nil {
 			return fmt.Errorf("pricing mode mix requires share on each plan: %w", ErrPricingInconsistency)
 		}
11 changes: 5 additions & 6 deletions internal/config/validator_test.go
@@ -180,20 +180,19 @@ func TestValidateConfig_MixModeValid(t *testing.T) {
 	assert.NoError(t, err, "Mix mode config should pass when shares sum to 1")
 }
 
-func TestValidateConfig_MixModeRejectsLimits(t *testing.T) {
-	invalidConfig := &Config{
+func TestValidateConfig_MixModeAllowsReferenceLimits(t *testing.T) {
+	validConfig := &Config{
 		Pricing: &PricingConfig{
 			Mode: "mix",
 			Plans: []PricingPlan{
-				{Name: "Free", Price: 0, Share: float64Ptr(0.7)},
+				{Name: "Free", Price: 0, Share: float64Ptr(0.7), Limits: &PlanLimits{Users: 10}},
 				{Name: "Pro", Price: 29, Share: float64Ptr(0.3), Limits: &PlanLimits{Users: 100}},
 			},
 		},
 	}
 
-	err := ValidateConfig(invalidConfig)
-	assert.Error(t, err, "Mix mode should reject limits")
-	assert.Contains(t, err.Error(), "does not support limits")
+	err := ValidateConfig(validConfig)
+	assert.NoError(t, err, "Mix mode should allow limits as reference metadata")
 }
 
 func TestValidateConfig_MixModeRejectsShareTotalMismatch(t *testing.T) {