docs: add design-partner intake and outreach ops #38
Merged
**New file: design-partner request issue form**

```yaml
name: Design-partner request
description: Request a design-partner evaluation loop for a real pricing or contract decision.
title: "[design-partner] "
body:
  - type: markdown
    attributes:
      value: |
        Use this form if you want to evaluate `ProfitCtl` against a real pricing, packaging, or contract question.

        Before opening this request, install the CLI and run one matching benchmark pair if possible:
        - `profitctl compare benchmark_scenarios/open_core_tiered.yml benchmark_scenarios/open_core_mix.yml`
        - `profitctl compare benchmark_scenarios/hybrid_steady_contract.yml benchmark_scenarios/hybrid_pilot_contract.yml`
        - `profitctl compare benchmark_scenarios/hybrid_operating_safe.yml benchmark_scenarios/hybrid_operating_breach.yml`
  - type: input
    id: company
    attributes:
      label: Company
      description: What company or team is evaluating ProfitCtl?
      placeholder: Acme SaaS
    validations:
      required: true
  - type: input
    id: role
    attributes:
      label: Your role
      description: Founder, product, finance, RevOps, sales, CS, or similar.
      placeholder: Founder
    validations:
      required: true
  - type: textarea
    id: decision
    attributes:
      label: Pricing or contract decision
      description: What real decision are you trying to make right now?
      placeholder: We are deciding whether to keep a pilot-heavy hybrid enterprise contract or move to a steadier recurring structure.
    validations:
      required: true
  - type: dropdown
    id: revenue_shape
    attributes:
      label: Revenue shape
      options:
        - Mostly recurring
        - Mostly one-time
        - Mixed
    validations:
      required: true
  - type: dropdown
    id: benchmark_pair
    attributes:
      label: Closest benchmark pair
      description: Pick the starting point that matches your question best.
      options:
        - open_core_tiered vs open_core_mix
        - hybrid_steady_contract vs hybrid_pilot_contract
        - hybrid_operating_safe vs hybrid_operating_breach
        - Not sure yet
    validations:
      required: true
  - type: dropdown
    id: install_path
    attributes:
      label: Install status
      options:
        - Installed via install script
        - Installed via Homebrew
        - Not installed yet
        - Need help installing
    validations:
      required: true
  - type: textarea
    id: real_inputs
    attributes:
      label: Inputs you can share
      description: Keep this high level if needed. What real inputs could you provide for calibration?
      placeholder: Current pricing shape, free vs paid monthly vs paid annual mix, one recurring-cost assumption, and the target contract shape.
    validations:
      required: true
  - type: textarea
    id: success
    attributes:
      label: Success criterion
      description: What would make this evaluation useful enough to continue?
      placeholder: We want to know whether the recurring margin still holds up before we commit to the contract.
    validations:
      required: true
  - type: textarea
    id: context
    attributes:
      label: Additional context
      description: Optional links, notes, or output snippets.
    validations:
      required: false
```
**New file: evaluator session template**

# Evaluator Session Template

Use this template for the first live conversation after a target installs or agrees to run a benchmark.

## Session Goal

Determine whether `ProfitCtl` helps the evaluator make a real pricing or contract decision with more confidence.

## Before The Session

- confirm the evaluator's decision question
- choose one benchmark pair only
- prepare the exact install path and `compare` command
- know the one metric you will lead with first

## Session Flow

### 1. Confirm The Decision

Ask:

- what pricing or contract decision are you making?
- what is the realistic alternative?
- who else needs to trust this output?

### 2. Confirm The Revenue Shape

Ask:

- are the economics mostly recurring, mostly one-time, or mixed?
- where do you think the current risk actually is?

### 3. Run One Comparison

Use one benchmark pair or the closest mapped scenario.

Lead with:

- recurring margin
- operating cost per user
- covenant status
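As an illustration of how these three lead metrics can be put side by side for two scenarios, here is a hedged sketch. The field names, numbers, and covenant threshold below are hypothetical, and this is not ProfitCtl's actual model or output format:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """Hypothetical scenario summary; fields are illustrative, not ProfitCtl's schema."""
    name: str
    recurring_revenue: float    # monthly recurring revenue
    recurring_cost: float       # monthly cost to serve that recurring revenue
    operating_cost: float       # total monthly operating cost
    users: int
    min_margin_covenant: float  # e.g. contract requires recurring margin >= 0.60

    def recurring_margin(self) -> float:
        return (self.recurring_revenue - self.recurring_cost) / self.recurring_revenue

    def operating_cost_per_user(self) -> float:
        return self.operating_cost / self.users

    def covenant_status(self) -> str:
        return "safe" if self.recurring_margin() >= self.min_margin_covenant else "breach"

steady = Scenario("steady_contract", recurring_revenue=40_000, recurring_cost=12_000,
                  operating_cost=18_000, users=900, min_margin_covenant=0.60)
pilot = Scenario("pilot_contract", recurring_revenue=25_000, recurring_cost=11_000,
                 operating_cost=18_000, users=900, min_margin_covenant=0.60)

for s in (steady, pilot):
    print(f"{s.name}: margin={s.recurring_margin():.2f} "
          f"cost/user={s.operating_cost_per_user():.2f} covenant={s.covenant_status()}")
```

Leading with a three-line summary like this keeps the session on the decision rather than on simulation internals.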
### 4. Test Trust

Ask:

- which output did you trust first?
- what would you want explained more clearly?
- would you use this before committing to the decision?

### 5. Decide The Next Step

Pick one:

- stop because it is not a fit
- install follow-up
- calibration request
- design-partner follow-up loop

## Notes To Capture

- company
- contact
- role
- benchmark pair used
- install path used
- trusted metric
- friction theme
- follow-up action

## Exit Condition

The session is strong only if it ends with one of these:

- the evaluator says the comparison changed or clarified the decision
- the evaluator agrees to provide inputs for calibration
- the evaluator commits to one design-partner follow-up
**New file: outreach message templates**

# Outreach Message Templates

Use these templates for the first live `ProfitCtl` evaluator loops.

Keep them short. The goal is to move a target to one install and one `compare` run, not to explain the whole product.

## Rules

- pick one benchmark pair only
- name the evaluator's likely decision
- lead with recurring economics, not simulation internals
- end with one next step

## Cold Or Warm Intro

Use when the target likely has a pricing or contract decision in flight.

```text
We built ProfitCtl to help SaaS teams compare pricing and contract shapes before they commit to them.

I thought of you because your team looks like a fit for one concrete question: [insert pricing or contract question].

The closest benchmark pair is [insert benchmark pair]. If it is useful, the fastest path is to install ProfitCtl and run one compare command against that benchmark.

If that sounds relevant, I can point you to the exact install path and the one benchmark run that matches your decision.
```

## Follow-Up After Interest

Use when they reply and want to know what the evaluation looks like.

```text
The simplest path is:

1. install ProfitCtl
2. run one compare command against the closest benchmark pair
3. decide whether the output is strong enough to justify calibrating your own inputs

For your case, I would start with:
[insert benchmark command]

What I want to learn first is whether the comparison changes how you think about the decision you are about to make.
```

## Install Push

Use when the target is qualified and you want a concrete action.

```text
Best next step is to install and run one compare.

Install:
curl -fsSL https://raw.githubusercontent.com/IntelIP/ProfitCtl/main/scripts/install.sh | bash

Or:
brew tap IntelIP/profitctl
brew install profitctl

Then run:
[insert benchmark command]

If the result is useful, we can map the closest version of your own scenario next.
```

## Calibration Ask

Use only after the target says the first comparison was useful.

```text
If you want to test your own shape next, I only need the minimum real inputs:

- current pricing or contract shape
- whether users behave differently across free, paid monthly, and paid annual
- one recurring-cost assumption
- the success criterion you care about most

That is enough to calibrate a first pass without turning this into a long consulting loop.
```
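Those four minimum inputs are enough for a rough first pass. A hedged sketch of the arithmetic, with hypothetical tier names, prices, and mix numbers (this is illustrative only, not ProfitCtl's calibration model):

```python
# Hypothetical first-pass calibration: blended monthly recurring margin
# from a pricing shape, a user mix, and one recurring-cost assumption.
price_per_user = {"free": 0.0, "paid_monthly": 30.0, "paid_annual": 25.0}  # annual as monthly-equivalent
user_mix = {"free": 5_000, "paid_monthly": 400, "paid_annual": 250}        # users in each tier
recurring_cost_per_user = 2.0  # the single recurring-cost assumption, applied to every user

revenue = sum(price_per_user[tier] * n for tier, n in user_mix.items())
cost = recurring_cost_per_user * sum(user_mix.values())
margin = (revenue - cost) / revenue

print(f"monthly recurring revenue: {revenue:.0f}")
print(f"recurring cost:            {cost:.0f}")
print(f"recurring margin:          {margin:.2%}")
```

A back-of-envelope pass like this is the level of fidelity the calibration ask is targeting: enough to sanity-check the shape before anyone invests in a fuller scenario.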
## Design-Partner Commitment Ask

Use when the evaluator saw value and is open to one follow-up loop.

```text
If this first comparison helped, the next useful step is a design-partner loop around one real decision.

The scope is intentionally small:
- one pricing or contract question
- one calibrated scenario
- one follow-up review
- one success criterion

If we can help you keep, revise, or reject the change with more confidence, the loop is doing its job.
```
**Review comment:** Use `/blob/`, not `/tree/`, for file links. The URL uses `/tree/main/` to link directly to a file. GitHub treats this as a tree (directory) path and redirects it, which works but may cause a brief redirect. For direct file links, `/blob/main/` is the canonical form and avoids the redirect.