18 changes: 18 additions & 0 deletions workflows/prd-rfe-workflow/.ambient/ambient.json
@@ -9,4 +9,22 @@
"RFE Tasks": "artifacts/rfe-tasks/*.md",
"Prioritization Matrix": "artifacts/prioritization.md"
}
"rubric": {
"activationPrompt": "After creating rfe.md, evaluate the quality of the RFEs. Utilize the evaluator.md to better understand the criteria of a quality RFE. Utilize that rubric to rate the RFEs and produce a score out of 25, an aggregate of each score for each criteria.",
"schema": {
"type": "object",
"properties": {
"completeness": {"type": "number", "description": "Structural Completeness and Organization score (1-5)"},
"professionalism": {"type": "number", "description": "Professional Perspective and Strategic Depth score (1-5)"},
"tone": {"type": "number", "description": "Language Quality and Communicative Tone score (1-5)"},
"purpose": {"type": "number", "description": "Clarity of Purpose and Stakeholder Alignment score (1-5)"},
"actionability": {"type": "number", "description": "Actionability and Testability score (1-5)"},
"criteria": {"type": "string", "description": "The criteria that was scored"},
"rfe_count": {"type": "integer", "description": "Number of RFEs produced"}
}
}
}

}
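For illustration only (this is not part of the config), a result shaped like the "rubric" schema above might look as follows; the field values, and the derived 25-point total, are hypothetical:

```python
# Hypothetical result matching the "rubric" schema above; all values are illustrative.
result = {
    "completeness": 4,
    "professionalism": 3,
    "tone": 5,
    "purpose": 4,
    "actionability": 3,
    "criteria": "Scored against all five rubric criteria",
    "rfe_count": 2,
}

# The 25-point aggregate is derived by summing the five criterion scores;
# it is not itself a field in the schema.
score_keys = ("completeness", "professionalism", "tone", "purpose", "actionability")
total = sum(result[k] for k in score_keys)
print(f"Aggregate rubric score: {total}/25")  # -> Aggregate rubric score: 19/25
```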


54 changes: 54 additions & 0 deletions workflows/prd-rfe-workflow/.ambient/rubric.md
@@ -0,0 +1,54 @@
This tool is triggered every time an RFE file is created in Ambient. Its purpose is to assess the newly created RFE file against 5 criteria, assigning a score out of 5 for each. The output is a score out of 25 (the aggregate across all 5 criteria) and a one-sentence explanation/feedback on the score.

**Structural Completeness and Organization**
The RFE Council reviews these documents asynchronously. A standard structure allows for rapid "evaluation and triage" within the 1-hour weekly time limit per member. If the structure is poor, the RFE is rejected for revision.
Mandatory Sections: Ensure the presence of Problem Statement, Business Alignment, Proposed Solution, and Acceptance Criteria.
Negative Space Check: If a header like "Risks" or "Scope" is present but says "TBD" or is empty, the judge must penalize the score (a minimal check is sketched after the scoring tiers below).
Score 1: Unformatted "wall of text". No discernible template usage.
Score 2: Uses headers, but mandatory sections are empty or contain "TBD".
Score 3: Contains the "Big 4" sections (Problem, Business, Solution, Acceptance). Optional sections (Alternatives, Affected Customers) are ignored even when contextually relevant.
Score 4: Logical flow. Uses Markdown for scannability. Separates the "high-level description" from "user scenarios" and "assumptions".
Score 5: Perfectly organized. Includes optional sections like "Alternative Approaches Considered" and "Reference Documents" to provide a 360-degree view.
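As a loose illustration of the Mandatory Sections and Negative Space checks above, a helper along these lines could flag missing or placeholder sections. It assumes the RFE is plain Markdown with one header per section; the function itself is hypothetical and not part of the workflow.

```python
import re

# Mandatory sections named in the Structural Completeness criterion above.
MANDATORY_SECTIONS = ["Problem Statement", "Business Alignment",
                      "Proposed Solution", "Acceptance Criteria"]

def structural_report(rfe_markdown: str) -> dict:
    """Report, per mandatory section, whether it exists and whether it is empty/TBD."""
    report = {}
    for section in MANDATORY_SECTIONS:
        # Capture the text between this header and the next header (or end of file).
        match = re.search(
            rf"^#+\s*{re.escape(section)}\s*\n(.*?)(?=^#+\s|\Z)",
            rfe_markdown,
            re.MULTILINE | re.DOTALL,
        )
        body = match.group(1).strip() if match else ""
        report[section] = {
            "present": match is not None,
            "empty_or_tbd": body == "" or body.upper() == "TBD",
        }
    return report
```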


**Professional Perspective and Strategic Depth**
High-quality RFEs are not just "feature requests"; they are strategic documents. Even without knowing a specific persona, the RFE should demonstrate an expert's understanding of the Red Hat AI ecosystem. It must move beyond "what" is being built to "how" it impacts the broader portfolio, technical feasibility, and alignment with a cohesive vision.
Depth of Insight: Does the document demonstrate a nuanced understanding of technical trade-offs, architectural impacts, or market dynamics?
Strategic Alignment: Does the RFE explicitly map the request to the product roadmap, company strategy, or specific Red Hat AI outcomes?
Score 1 (Generic/Naive): The RFE is written from a "default" perspective. It lacks any professional nuance and could have been generated by someone with no knowledge of the product or its strategic goals.
Score 2 (Surface Level): Identifies a feature but fails to consider the broader context, such as technical feasibility or impact on existing systems. It treats the enhancement as an isolated task.
Score 3 (Professional Standard): Demonstrates a clear understanding of the "Red Hat AI Outcome" being targeted. It moves past basic functionality to discuss why this specific approach aligns with the product's mission.
Score 4 (Expert Framing): Shows significant depth by identifying potential "Impacted Components" and providing a "High-level architecture plan" or rationale that considers the Red Hat AI portfolio holistically.
Score 5 (Visionary/Strategic): Masterfully frames the RFE within the context of the entire ecosystem. It anticipates architectural bottlenecks, addresses "Alternative Approaches" with expertise, and provides the "Business Value" data required for high-level executive prioritization.


**Language Quality and Communicative Tone**
RFE justifications and descriptions are read by stakeholders internal and external to Red Hat (e.g., IBM, customers). Professional, objective language is mandatory for maintaining transparency and credibility.
Objectivity: The judge should look for factual, to-the-point descriptions.
Prescriptive vs. High-Level: If the RFE is broad, it should NOT be prescriptive about solutions. If it is prescriptive, it must be for a well-defined domain.
Score 1: Unprofessional, uses casual slang or overly verbose "word salad".
Score 2: Professional but overly prescriptive on a broad topic, violating RFE guiding principles.
Score 3: Standard technical writing; functional, but relies on implied information that should be stated explicitly.
Score 4: Concise and objective. Maintains a professional tone suitable for external stakeholders.
Score 5: Masterful technical prose. Perfectly balances high-level requirements with necessary detail, strictly following the "objective and to the point" rule.


**Clarity of Purpose and Stakeholder Alignment**
The RFE Council must balance competing priorities across the Red Hat AI portfolio. Without a clear persona and pain point, the Council cannot evaluate the "Business Value" or "Impact" required for prioritization. This criterion ensures the RFE solves a real-world problem rather than just being a "cool idea".
Scoring Tiers
Score 1: No identifiable persona or problem. The text is purely technical without context.
Score 2: Identifies a generic problem (e.g., "users want X") but lacks a specific business impact or data on market opportunity.
Score 3: Mentions a user role and pain point. Alignment to organizational goals is stated but lacks specific details on the current workflow challenges.
Score 4: Explicitly identifies the customer/partner who benefits. Clearly states the expected impact.
Score 5: Comprehensive. Includes diagrams (may be described in text), specific market opportunity data, and maps the request to the technical vision and product roadmap.

**Actionability and Testability**
An approved RFE is cloned into JIRA Feature tickets. If the requirements aren't "testable," the engineering team cannot perform "feature refinement" or determine when the work is "done".
Quantifiable Metrics: Look for numbers (latency, throughput, specific accelerator types); a rough keyword heuristic is sketched at the end of this section.
Scoped Features: Check if the RFE is "self-contained" or too broad.
Clarity of "Done": Does it list criteria like package dependencies or product documentation?
Score 1: No acceptance criteria. Impossible to validate.
Score 2: General direction provided, but lacks any technical metrics requested by the prompt (e.g., missing a required latency target).
Score 3: Includes generic success criteria (e.g., "system is stable") but lacks the specific "done" definitions required for engineering triage.
Score 4: Includes precise, testable requirements. The scope is limited enough to be actionable by a single component team.
Score 5: Comprehensive "done" definition including accelerator support, dependencies, and documentation. Ready for immediate cloning into JIRA.
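As a very rough sketch of the quantifiable-metrics check above, a keyword heuristic like the following could flag whether acceptance criteria mention any concrete targets. The unit list and accelerator keywords are assumptions for illustration; actual triage would still rest on the judge's reading of the RFE.

```python
import re

# Illustrative heuristic only: looks for quantified targets (e.g. "200 ms",
# "500 req/s", "99.9%") or accelerator keywords. Both lists are assumptions.
METRIC_PATTERN = re.compile(
    r"\d+(\.\d+)?\s*(ms|sec|req/s|qps|tokens/s|gb|gib|%)", re.IGNORECASE)
ACCELERATOR_PATTERN = re.compile(
    r"\b(gpu|accelerator|cuda|rocm|gaudi)\b", re.IGNORECASE)

def has_testable_metrics(acceptance_criteria: str) -> bool:
    """True if the text contains at least one quantified metric or accelerator mention."""
    return bool(METRIC_PATTERN.search(acceptance_criteria)
                or ACCELERATOR_PATTERN.search(acceptance_criteria))
```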