2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/config.yml
@@ -5,7 +5,7 @@ contact_links:
    about: Read the install, quick start, and release guidance before opening a new issue.
  - name: Design-partner intake
    url: https://github.com/IntelIP/ProfitCtl/tree/main/docs/growth/design-partner-intake.md
**P2: Use `/blob/`, not `/tree/`, for file links**

The URL uses `/tree/main/` to link directly to a file. GitHub treats this as a tree (directory) path and issues a redirect, which works but adds an extra hop. For direct file links, `/blob/main/` is the canonical form and avoids the redirect.

Suggested change:

```diff
-    url: https://github.com/IntelIP/ProfitCtl/tree/main/docs/growth/design-partner-intake.md
+    url: https://github.com/IntelIP/ProfitCtl/blob/main/docs/growth/design-partner-intake.md
```
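This `/tree/`-to-`/blob/` normalization is mechanical enough to script when sweeping a docs tree for stale links. A minimal sketch (a hypothetical helper, not part of ProfitCtl, using a file-extension heuristic to decide what counts as a file link):

```python
import re

# Hypothetical helper, not part of ProfitCtl: rewrites GitHub
# /tree/<ref>/<path> links that appear to point at files (heuristic: the
# path ends in a file extension) into the canonical /blob/ form.
# Directory links are left untouched.
_FILE_LINK = re.compile(
    r"^(https://github\.com/[^/]+/[^/]+)/tree/([^/]+)/(.+\.[A-Za-z0-9]+)$"
)

def normalize_file_link(url: str) -> str:
    return _FILE_LINK.sub(r"\1/blob/\2/\3", url)
```

Under this heuristic, `.../tree/main/docs` stays a tree link, while `.../tree/main/docs/growth/design-partner-intake.md` is rewritten to its `/blob/` form; refs containing slashes would need a smarter pattern.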

    about: Use the design-partner guide if you want to evaluate ProfitCtl for pricing or contract review workflows.
    about: Use the design-partner guide if you want to evaluate ProfitCtl for pricing or contract review workflows before opening the design-partner request template.
  - name: Security report
    url: https://github.com/IntelIP/ProfitCtl/security/policy
    about: Report security issues through the documented security policy instead of opening a public issue.
93 changes: 93 additions & 0 deletions .github/ISSUE_TEMPLATE/design_partner_request.yml
@@ -0,0 +1,93 @@
name: Design-partner request
description: Request a design-partner evaluation loop for a real pricing or contract decision.
title: "[design-partner] "
body:
  - type: markdown
    attributes:
      value: |
        Use this form if you want to evaluate `ProfitCtl` against a real pricing, packaging, or contract question.

        Before opening this request, install the CLI and run one matching benchmark pair if possible:
        - `profitctl compare benchmark_scenarios/open_core_tiered.yml benchmark_scenarios/open_core_mix.yml`
        - `profitctl compare benchmark_scenarios/hybrid_steady_contract.yml benchmark_scenarios/hybrid_pilot_contract.yml`
        - `profitctl compare benchmark_scenarios/hybrid_operating_safe.yml benchmark_scenarios/hybrid_operating_breach.yml`
  - type: input
    id: company
    attributes:
      label: Company
      description: What company or team is evaluating ProfitCtl?
      placeholder: Acme SaaS
    validations:
      required: true
  - type: input
    id: role
    attributes:
      label: Your role
      description: Founder, product, finance, RevOps, sales, CS, or similar.
      placeholder: Founder
    validations:
      required: true
  - type: textarea
    id: decision
    attributes:
      label: Pricing or contract decision
      description: What real decision are you trying to make right now?
      placeholder: We are deciding whether to keep a pilot-heavy hybrid enterprise contract or move to a steadier recurring structure.
    validations:
      required: true
  - type: dropdown
    id: revenue_shape
    attributes:
      label: Revenue shape
      options:
        - Mostly recurring
        - Mostly one-time
        - Mixed
    validations:
      required: true
  - type: dropdown
    id: benchmark_pair
    attributes:
      label: Closest benchmark pair
      description: Pick the starting point that matches your question best.
      options:
        - open_core_tiered vs open_core_mix
        - hybrid_steady_contract vs hybrid_pilot_contract
        - hybrid_operating_safe vs hybrid_operating_breach
        - Not sure yet
    validations:
      required: true
  - type: dropdown
    id: install_path
    attributes:
      label: Install status
      options:
        - Installed via install script
        - Installed via Homebrew
        - Not installed yet
        - Need help installing
    validations:
      required: true
  - type: textarea
    id: real_inputs
    attributes:
      label: Inputs you can share
      description: Keep this high level if needed. What real inputs could you provide for calibration?
      placeholder: Current pricing shape, free vs paid monthly vs paid annual mix, one recurring-cost assumption, and the target contract shape.
    validations:
      required: true
  - type: textarea
    id: success
    attributes:
      label: Success criterion
      description: What would make this evaluation useful enough to continue?
      placeholder: We want to know whether the recurring margin still holds up before we commit to the contract.
    validations:
      required: true
  - type: textarea
    id: context
    attributes:
      label: Additional context
      description: Optional links, notes, or output snippets.
    validations:
      required: false
2 changes: 2 additions & 0 deletions README.md
@@ -117,6 +117,8 @@ The paid layer should sit on top of that core through hosted workflows, collabor
- [Benchmark Outreach Pack](docs/growth/benchmark-outreach-pack.md)
- [Value Proposition](docs/growth/value-proposition.md)
- [Design-Partner Offer](docs/growth/design-partner-offer.md)
- [Outreach Message Templates](docs/growth/outreach-message-templates.md)
- [Evaluator Session Template](docs/growth/evaluator-session-template.md)
- [Adoption Dashboard](docs/growth/adoption-dashboard.md)
- [Design-Partner Operating System](docs/growth/design-partner-operating-system.md)
- [MVP Pricing Release Guide](docs/release/MVP_PRICING_RELEASE.md)
2 changes: 2 additions & 0 deletions docs/DOCS_INDEX.md
@@ -13,6 +13,8 @@
- [Benchmark Outreach Pack](growth/benchmark-outreach-pack.md)
- [Value Proposition](growth/value-proposition.md)
- [Design-Partner Offer](growth/design-partner-offer.md)
- [Outreach Message Templates](growth/outreach-message-templates.md)
- [Evaluator Session Template](growth/evaluator-session-template.md)
- [Design-Partner Operating System](growth/design-partner-operating-system.md)
- [Adoption Dashboard](growth/adoption-dashboard.md)
- [MVP Pricing Release Guide](release/MVP_PRICING_RELEASE.md)
23 changes: 13 additions & 10 deletions docs/growth/README.md
@@ -10,21 +10,24 @@ This folder holds the minimum set of docs needed to turn ProfitCtl into somethin
4. Use [Benchmark Outreach Pack](benchmark-outreach-pack.md) to choose the one approved benchmark asset for the conversation.
5. Read [Compare Reporting Guide](compare-reporting-guide.md) before using `profitctl compare` in demos or content.
6. Read [Design-Partner Intake](design-partner-intake.md) before asking anyone to pilot the tool.
7. Read [Design-Partner Targeting Rubric](design-partner-targeting-rubric.md) before adding anyone to the first 10 targets.
8. Use [First 10 Design-Partner Targets](design-partner-target-list-template.md) as the working list.
9. Follow [Design-Partner Outreach Sequencing](design-partner-outreach-sequencing.md) for first touch through commitment.
10. Apply [Design-Partner Qualification Criteria](design-partner-qualification-criteria.md) before advancing a target.
11. Track movement in [Design-Partner Progress Tracking](design-partner-progress-tracking.md).
12. Keep the weekly dashboard in [Adoption Dashboard](adoption-dashboard.md).
13. Use [Weekly Review Template](weekly-review-template.md) during Friday review.
14. Follow [Source-Of-Truth Guidance](source-of-truth-guidance.md) before updating any tracker.
15. Run the weekly motion from [Design-Partner Operating System](design-partner-operating-system.md).
16. Capture each serious evaluator with [Design-Partner Evaluation Template](design-partner-evaluation-template.md).
7. Follow [Outreach Message Templates](outreach-message-templates.md) for first live contact.
8. Use [Evaluator Session Template](evaluator-session-template.md) for the first real conversation.
9. Read [Design-Partner Targeting Rubric](design-partner-targeting-rubric.md) before adding anyone to the first 10 targets.
10. Use [First 10 Design-Partner Targets](design-partner-target-list-template.md) as the working list.
11. Follow [Design-Partner Outreach Sequencing](design-partner-outreach-sequencing.md) for first touch through commitment.
12. Apply [Design-Partner Qualification Criteria](design-partner-qualification-criteria.md) before advancing a target.
13. Track movement in [Design-Partner Progress Tracking](design-partner-progress-tracking.md).
14. Keep the weekly dashboard in [Adoption Dashboard](adoption-dashboard.md).
15. Use [Weekly Review Template](weekly-review-template.md) during Friday review.
16. Follow [Source-Of-Truth Guidance](source-of-truth-guidance.md) before updating any tracker.
17. Run the weekly motion from [Design-Partner Operating System](design-partner-operating-system.md).
18. Capture each serious evaluator with [Design-Partner Evaluation Template](design-partner-evaluation-template.md).

## What These Docs Are For

- making the benchmark scenarios part of the product story
- packaging the approved benchmark outreach assets into one reusable set
- giving the team standard first-contact and first-session scripts
- making the value proposition explicit for buyers
- defining a clear design-partner offer
- keeping demo output short, repeatable, and decision-oriented
2 changes: 2 additions & 0 deletions docs/growth/design-partner-intake.md
@@ -10,6 +10,8 @@ Turn one useful benchmark run into one of three outcomes:
- a calibration request
- a design-partner pilot

For inbound evaluators from the public repo, use the GitHub `Design-partner request` issue template as the intake entry point, then move serious conversations into the private operating tracker.

## 15-Minute Evaluation Path

1. Send the benchmark comparison that matches their problem.
77 changes: 77 additions & 0 deletions docs/growth/evaluator-session-template.md
@@ -0,0 +1,77 @@
# Evaluator Session Template

Use this template for the first live conversation after a target installs or agrees to run a benchmark.

## Session Goal

Determine whether `ProfitCtl` helps the evaluator make a real pricing or contract decision with more confidence.

## Before The Session

- confirm the evaluator's decision question
- choose one benchmark pair only
- prepare the exact install path and `compare` command
- know the one metric you will lead with first

## Session Flow

### 1. Confirm The Decision

Ask:

- what pricing or contract decision are you making?
- what is the realistic alternative?
- who else needs to trust this output?

### 2. Confirm The Revenue Shape

Ask:

- is the revenue mostly recurring, mostly one-time, or mixed?
- where do you think the current risk actually is?

### 3. Run One Comparison

Use one benchmark pair or the closest mapped scenario.

Lead with:

- recurring margin
- operating cost per user
- covenant status

### 4. Test Trust

Ask:

- which output did you trust first?
- what would you want explained more clearly?
- would you use this before committing to the decision?

### 5. Decide The Next Step

Pick one:

- stop because it is not a fit
- install follow-up
- calibration request
- design-partner follow-up loop

## Notes To Capture

- company
- contact
- role
- benchmark pair used
- install path used
- trusted metric
- friction theme
- follow-up action

## Exit Condition

The session is strong only if it ends with one of these:

- the evaluator says the comparison changed or clarified the decision
- the evaluator agrees to provide inputs for calibration
- the evaluator commits to one design-partner follow-up
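The "Notes To Capture" checklist above maps onto a fixed-shape record. A minimal sketch (hypothetical structure, not part of any ProfitCtl schema; field names are assumptions) for keeping session logs uniform across evaluators:

```python
from dataclasses import dataclass, asdict

# Hypothetical record mirroring the "Notes To Capture" checklist; the field
# names and the flattening helper are assumptions, not a ProfitCtl API.
@dataclass
class EvaluatorSessionNotes:
    company: str
    contact: str
    role: str
    benchmark_pair: str
    install_path: str
    trusted_metric: str
    friction_theme: str
    follow_up_action: str  # stop / install follow-up / calibration / design-partner loop

def as_row(notes: EvaluatorSessionNotes) -> dict:
    """Flatten one session into a dict row for a tracker or CSV export."""
    return asdict(notes)
```

Keeping every session in the same shape makes the weekly tracker update a copy-paste rather than a rewrite.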
94 changes: 94 additions & 0 deletions docs/growth/outreach-message-templates.md
@@ -0,0 +1,94 @@
# Outreach Message Templates

Use these templates for the first live `ProfitCtl` evaluator loops.

Keep them short. The goal is to move a target to one install and one `compare` run, not to explain the whole product.

## Rules

- pick one benchmark pair only
- name the evaluator's likely decision
- lead with recurring economics, not simulation internals
- end with one next step

## Cold Or Warm Intro

Use when the target likely has a pricing or contract decision in flight.

```text
We built ProfitCtl to help SaaS teams compare pricing and contract shapes before they commit to them.

I thought of you because your team looks like a fit for one concrete question: [insert pricing or contract question].

The closest benchmark pair is [insert benchmark pair]. If it is useful, the fastest path is to install ProfitCtl and run one compare command against that benchmark.

If that sounds relevant, I can point you to the exact install path and the one benchmark run that matches your decision.
```

## Follow-Up After Interest

Use when they reply and want to know what the evaluation looks like.

```text
The simplest path is:

1. install ProfitCtl
2. run one compare command against the closest benchmark pair
3. decide whether the output is strong enough to justify calibrating your own inputs

For your case, I would start with:
[insert benchmark command]

What I want to learn first is whether the comparison changes how you think about the decision you are about to make.
```

## Install Push

Use when the target is qualified and you want a concrete action.

```text
Best next step is to install and run one compare.

Install:
curl -fsSL https://raw.githubusercontent.com/IntelIP/ProfitCtl/main/scripts/install.sh | bash

Or:
brew tap IntelIP/profitctl
brew install profitctl

Then run:
[insert benchmark command]

If the result is useful, we can map the closest version of your own scenario next.
```

## Calibration Ask

Use only after the target says the first comparison was useful.

```text
If you want to test your own shape next, I only need the minimum real inputs:

- current pricing or contract shape
- whether users behave differently across free, paid monthly, and paid annual
- one recurring-cost assumption
- the success criterion you care about most

That is enough to calibrate a first pass without turning this into a long consulting loop.
```

## Design-Partner Commitment Ask

Use when the evaluator saw value and is open to one follow-up loop.

```text
If this first comparison helped, the next useful step is a design-partner loop around one real decision.

The scope is intentionally small:
- one pricing or contract question
- one calibrated scenario
- one follow-up review
- one success criterion

If we can help you keep, revise, or reject the change with more confidence, the loop is doing its job.
```