
Commit 9934c80

Author: Pierre
Update README.md
1 parent 57baccc commit 9934c80

File tree

1 file changed (+139, -73 lines)


README.md

Lines changed: 139 additions & 73 deletions
@@ -1,10 +1,10 @@
 # WorkflowAI Python
 
-A library to use WorkflowAI with Python
+A library to use [WorkflowAI](https://workflowai.com) with Python.
 
 ## Context
 
-WorkflowAI is a platform for building agents.
+[WorkflowAI](https://workflowai.com) is a platform for designing, building, and deploying agents.
 
 ## Installation

@@ -79,21 +79,50 @@ An agent is in essence an async function with the added constraints that:
 > [Pydantic](https://docs.pydantic.dev/latest/) is a very popular and powerful library for data validation and
 > parsing. It allows us to extract the input and output schema in a simple way
 
-Below is an agent that says hello:
+Below is an agent that analyzes customer feedback from call transcripts:
 
 ```python
 import workflowai
-from pydantic import BaseModel
-
-class Input(BaseModel):
-    name: str
-
-class Output(BaseModel):
-    greeting: str
-
-@workflowai.agent()
-async def say_hello(input: Input) -> Output:
-    """Say hello"""
+from pydantic import BaseModel, Field
+from typing import List
+from datetime import date
+
+# Input model for the call feedback analysis
+class CallFeedbackInput(BaseModel):
+    """Input for analyzing a customer feedback call."""
+    transcript: str = Field(description="The full transcript of the customer feedback call.")
+    call_date: date = Field(description="The date when the call took place.")
+
+# Model representing a single feedback point with supporting evidence
+class FeedbackPoint(BaseModel):
+    """A specific feedback point with its supporting quote."""
+    point: str = Field(description="The main point or insight from the feedback.")
+    quote: str = Field(description="The exact quote from the transcript supporting this point.")
+    timestamp: str = Field(description="The timestamp or context of when this was mentioned in the call.")
+
+# Model representing the structured analysis of the customer feedback call
+class CallFeedbackOutput(BaseModel):
+    """Structured analysis of the customer feedback call."""
+    positive_points: List[FeedbackPoint] = Field(
+        default_factory=list,
+        description="List of positive feedback points, each with a supporting quote."
+    )
+    negative_points: List[FeedbackPoint] = Field(
+        default_factory=list,
+        description="List of negative feedback points, each with a supporting quote."
+    )
+
+@workflowai.agent(id="analyze-call-feedback", model=Model.GPT_4O_LATEST)
+async def analyze_call_feedback(input: CallFeedbackInput) -> CallFeedbackOutput:
+    """
+    Analyze a customer feedback call transcript to extract key insights:
+    1. Identify positive feedback points with supporting quotes
+    2. Identify negative feedback points with supporting quotes
+    3. Include timestamp/context for each point
+
+    Be specific and objective in the analysis. Use exact quotes from the transcript.
+    Maintain the customer's original wording in quotes.
+    """
     ...
 ```

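The `Field` descriptions in the new example are not decorative: as noted above, the Pydantic models are what the input and output schemas are extracted from, so the descriptions travel with the schema. This can be checked with Pydantic alone — a standalone sketch in which `CallFeedbackInput` is re-declared so the snippet runs without the WorkflowAI SDK (Pydantic v2 assumed):

```python
from datetime import date
from pydantic import BaseModel, Field

# Re-declaration of the diff's input model so this runs on its own
class CallFeedbackInput(BaseModel):
    """Input for analyzing a customer feedback call."""
    transcript: str = Field(description="The full transcript of the customer feedback call.")
    call_date: date = Field(description="The date when the call took place.")

# The JSON schema derived from the model carries the field descriptions
schema = CallFeedbackInput.model_json_schema()
print(schema["properties"]["transcript"]["description"])
# → The full transcript of the customer feedback call.
```

Both fields have no defaults, so they also appear in the schema's `required` list, which is how the required/optional distinction discussed later reaches the model.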
@@ -102,7 +131,41 @@ run will be created. By default:
 
 - the docstring will be used as instructions for the agent
 - the default model (`workflowai.DEFAULT_MODEL`) is used to run the agent
-- the agent id will be a slugified version of the function name (i-e `say-hello`) in this case
+- the agent id will be a slugified version of the function name unless specified explicitly
+
+Example usage:
+
+```python
+# Example transcript
+transcript = '''
+[00:01:15] Customer: I've been using your software for about 3 months now, and I have to say the new dashboard feature is really impressive. It's saving me at least an hour each day on reporting.
+
+[00:02:30] Customer: However, I'm really frustrated with the export functionality. It crashed twice this week when I tried to export large reports, and I lost all my work.
+
+[00:03:45] Customer: On a positive note, your support team, especially Sarah, was very responsive when I reported the issue. She got back to me within minutes.
+
+[00:04:30] Customer: But I think the pricing for additional users is a bit steep compared to other solutions we looked at.
+'''
+
+# Analyze the feedback
+result = await analyze_call_feedback(
+    CallFeedbackInput(
+        transcript=transcript,
+        call_date=date(2024, 1, 15)
+    )
+)
+
+# Print the analysis
+print("\nPositive Points:")
+for point in result.positive_points:
+    print(f"\n{point.point}")
+    print(f" Quote [{point.timestamp}]: \"{point.quote}\"")
+
+print("\nNegative Points:")
+for point in result.negative_points:
+    print(f"\n{point.point}")
+    print(f" Quote [{point.timestamp}]: \"{point.quote}\"")
+```
 
 > **What is "..." ?**
 >
@@ -122,7 +185,7 @@ You can set the model explicitly in the agent decorator:
 
 ```python
 @workflowai.agent(model=Model.GPT_4O_LATEST)
-def say_hello(input: Input) -> Output:
+async def analyze_call_feedback(input: CallFeedbackInput) -> CallFeedbackOutput:
     ...
 ```

@@ -147,7 +210,7 @@ more flexible than changing the function parameters when running in production.
 
 ```python
 @workflowai.agent(deployment="production") # or simply @workflowai.agent()
-def say_hello(input: Input) -> AsyncIterator[Run[Output]]:
+async def analyze_call_feedback(input: CallFeedbackInput) -> AsyncIterator[Run[CallFeedbackOutput]]:
     ...
 ```

@@ -158,17 +221,17 @@ You can configure the agent function to stream or return the full run object, si
 
 ```python
 # Return the full run object, useful if you want to extract metadata like cost or duration
 @workflowai.agent()
-async def say_hello(input: Input) -> Run[Output]:
+async def analyze_call_feedback(input: CallFeedbackInput) -> Run[CallFeedbackOutput]:
     ...
 
 # Stream the output, the output is filled as it is generated
 @workflowai.agent()
-def say_hello(input: Input) -> AsyncIterator[Output]:
+async def analyze_call_feedback(input: CallFeedbackInput) -> AsyncIterator[CallFeedbackOutput]:
     ...
 
 # Stream the run object, the output is filled as it is generated
 @workflowai.agent()
-def say_hello(input: Input) -> AsyncIterator[Run[Output]]:
+async def analyze_call_feedback(input: CallFeedbackInput) -> AsyncIterator[Run[CallFeedbackOutput]]:
     ...
 ```

@@ -192,7 +255,7 @@ To use a tool, simply add it's handles to the instructions (the function docstri
 
 ```python
 @workflowai.agent()
-def say_hello(input: Input) -> Output:
+async def analyze_call_feedback(input: CallFeedbackInput) -> CallFeedbackOutput:
     """
     You can use @search and @browser-text to retrieve information about the name.
     """
@@ -255,13 +318,18 @@ The `WorkflowAIError` is raised when the agent is called, so you can handle it l
 
 ```python
 try:
-    await say_hello(Input(name="John"))
+    await analyze_call_feedback(
+        CallFeedbackInput(
+            transcript="[00:01:15] Customer: The product is great!",
+            call_date=date(2024, 1, 15)
+        )
+    )
 except WorkflowAIError as e:
     print(e.code)
     print(e.message)
 ```
 
-### Definining input and output types
+### Defining input and output types
 
 There are some important subtleties when defining input and output types.

@@ -271,17 +339,25 @@ Field description and examples are passed to the model and can help stir the out
 use case is to describe a format or style for a string field
 
 ```python
-# summary has no examples or description so the model will likely return a block of text
-class SummaryOutput(BaseModel):
-    summary: str
+# point has no examples or description so the model will be less guided
+class BasicFeedbackPoint(BaseModel):
+    point: str
 
-# passing the description will help the model return a summary formatted as bullet points
-class SummaryOutput(BaseModel):
-    summary: str = Field(description="A summary, formatted as bullet points")
+# passing the description helps guide the model's output format
+class DetailedFeedbackPoint(BaseModel):
+    point: str = Field(
+        description="A clear, specific point of feedback extracted from the transcript."
+    )
 
 # passing examples can help as well
-class SummaryOutput(BaseModel):
-    summary: str = Field(examples=["- Paris is a city in France\n- London is a city in England"])
+class FeedbackPoint(BaseModel):
+    point: str = Field(
+        description="A clear, specific point of feedback extracted from the transcript.",
+        examples=[
+            "Dashboard feature saves significant time on reporting",
+            "Export functionality is unstable with large reports"
+        ]
+    )
 ```
 
 Some notes:
@@ -299,35 +375,41 @@ Although the fact that a field is required is passed to the model, the generatio
 values.
 
 ```python
-class Input(BaseModel):
-    name: str
-
-class OutputStrict(BaseModel):
-    greeting: str
+class CallFeedbackOutputStrict(BaseModel):
+    positive_points: List[FeedbackPoint]
+    negative_points: List[FeedbackPoint]
 
 @workflowai.agent()
-async def say_hello_strict(_: Input) -> OutputStrict:
+async def analyze_call_feedback_strict(input: CallFeedbackInput) -> CallFeedbackOutputStrict:
     ...
 
 try:
-    run = await say_hello(Input(name="John"))
-    print(run.output.greeting) # "Hello, John!"
+    result = await analyze_call_feedback_strict(
+        CallFeedbackInput(
+            transcript="[00:01:15] Customer: The product is great!",
+            call_date=date(2024, 1, 15)
+        )
+    )
 except WorkflowAIError as e:
     print(e.code) # "invalid_generation" error code means that the generation did not match the schema
 
-class OutputTolerant(BaseModel):
-    greeting: str = ""
+class CallFeedbackOutputTolerant(BaseModel):
+    positive_points: List[FeedbackPoint] = Field(default_factory=list)
+    negative_points: List[FeedbackPoint] = Field(default_factory=list)
 
 @workflowai.agent()
-async def say_hello_tolerant(_: Input) -> OutputTolerant:
+async def analyze_call_feedback_tolerant(input: CallFeedbackInput) -> CallFeedbackOutputTolerant:
     ...
 
 # The invalid_generation is less likely
-run = await say_hello_tolerant(Input(name="John"))
-if not run.output.greeting:
-    print("No greeting was generated !")
-print(run.output.greeting) # "Hello, John!"
-
+result = await analyze_call_feedback_tolerant(
+    CallFeedbackInput(
+        transcript="[00:01:15] Customer: The product is great!",
+        call_date=date(2024, 1, 15)
+    )
+)
+if not result.positive_points and not result.negative_points:
+    print("No feedback points were generated!")
 ```
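The strict/tolerant distinction in this hunk is pure Pydantic validation behavior, so it can be reproduced without calling an agent at all. A standalone sketch (the models are re-declared here so it runs on its own; Pydantic v2 assumed):

```python
from typing import List
from pydantic import BaseModel, Field, ValidationError

class FeedbackPoint(BaseModel):
    point: str
    quote: str
    timestamp: str

# Strict: both lists are required fields
class CallFeedbackOutputStrict(BaseModel):
    positive_points: List[FeedbackPoint]
    negative_points: List[FeedbackPoint]

# Tolerant: both lists default to empty
class CallFeedbackOutputTolerant(BaseModel):
    positive_points: List[FeedbackPoint] = Field(default_factory=list)
    negative_points: List[FeedbackPoint] = Field(default_factory=list)

generation = {}  # stand-in for a model output that omitted both lists

try:
    CallFeedbackOutputStrict.model_validate(generation)
except ValidationError as e:
    # Both required fields are missing
    print(f"strict: rejected with {e.error_count()} errors")
    # → strict: rejected with 2 errors

tolerant = CallFeedbackOutputTolerant.model_validate(generation)
print(f"tolerant: {tolerant.positive_points}")  # → tolerant: []
```

The tolerant model accepts the same payload that makes the strict model raise, which is exactly why the `invalid_generation` error becomes less likely.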
 
 > WorkflowAI automatically retries invalid generations once. If a model outputs an object that does not match the
@@ -338,33 +420,17 @@ Another reason to prefer optional fields in the output is for streaming. Partial
 absent will cause `AttributeError` when queried.
 
 ```python
-class Input(BaseModel):
-    name: str
-
-class OutputStrict(BaseModel):
-    greeting1: str
-    greeting2: str
-
 @workflowai.agent()
-def say_hello_strict(_: Input) -> AsyncIterator[Output]:
+async def analyze_call_feedback_stream(input: CallFeedbackInput) -> AsyncIterator[CallFeedbackOutput]:
     ...
 
-async for run in say_hello(Input(name="John")):
-    try:
-        print(run.output.greeting1)
-    except AttributeError:
-        # run.output.greeting1 has not been generated yet
-
-
-class OutputTolerant(BaseModel):
-    greeting1: str = ""
-    greeting2: str = ""
-
-@workflowai.agent()
-def say_hello_tolerant(_: Input) -> AsyncIterator[OutputTolerant]:
-    ...
-
-async for run in say_hello(Input(name="John")):
-    print(run.output.greeting1) # will be empty if the model has not generated it yet
-
+async for result in analyze_call_feedback_stream(
+    CallFeedbackInput(
+        transcript="[00:01:15] Customer: The product is great!",
+        call_date=date(2024, 1, 15)
+    )
+):
+    # With default values, we can safely check the points as they stream in
+    print(f"Positive points so far: {len(result.positive_points)}")
+    print(f"Negative points so far: {len(result.negative_points)}")
 ```
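The streaming safety argued for in this hunk depends only on the defaults: each partial chunk is a `CallFeedbackOutput` whose not-yet-generated fields fall back to empty lists. A standalone simulation, where `fake_stream` is a hypothetical stand-in for the agent stream (not the WorkflowAI API):

```python
import asyncio
from typing import AsyncIterator, List
from pydantic import BaseModel, Field

class FeedbackPoint(BaseModel):
    point: str = ""
    quote: str = ""
    timestamp: str = ""

class CallFeedbackOutput(BaseModel):
    positive_points: List[FeedbackPoint] = Field(default_factory=list)
    negative_points: List[FeedbackPoint] = Field(default_factory=list)

async def fake_stream() -> AsyncIterator[CallFeedbackOutput]:
    # Stand-in for the agent: yields progressively filled partial outputs
    yield CallFeedbackOutput()
    yield CallFeedbackOutput(positive_points=[FeedbackPoint(point="Dashboard saves time")])

async def main() -> None:
    async for result in fake_stream():
        # Defaults make partial outputs safe to read mid-stream, no AttributeError
        print(f"Positive points so far: {len(result.positive_points)}")

asyncio.run(main())
# → Positive points so far: 0
# → Positive points so far: 1
```

Had `positive_points` been a required field with no default, the first partial chunk would fail validation instead of reading as an empty list.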
