 # WorkflowAI Python
 
-A library to use WorkflowAI with Python
+A library to use [WorkflowAI](https://workflowai.com) with Python.
 
 ## Context
 
-WorkflowAI is a platform for building agents.
+[WorkflowAI](https://workflowai.com) is a platform for designing, building, and deploying agents.
 
 ## Installation
 
@@ -79,21 +79,50 @@ An agent is in essence an async function with the added constraints that:
 > [Pydantic](https://docs.pydantic.dev/latest/) is a very popular and powerful library for data validation and
 > parsing. It allows us to extract the input and output schema in a simple way
 
-Below is an agent that says hello:
+Below is an agent that analyzes customer feedback from call transcripts:
 
 ```python
 import workflowai
-from pydantic import BaseModel
-
-class Input(BaseModel):
-    name: str
-
-class Output(BaseModel):
-    greeting: str
+from pydantic import BaseModel, Field
+from typing import List
+from datetime import date
+from workflowai import Model
+
+# Input model for the call feedback analysis
+class CallFeedbackInput(BaseModel):
+    """Input for analyzing a customer feedback call."""
+    transcript: str = Field(description="The full transcript of the customer feedback call.")
+    call_date: date = Field(description="The date when the call took place.")
+
+# Model representing a single feedback point with supporting evidence
+class FeedbackPoint(BaseModel):
+    """A specific feedback point with its supporting quote."""
+    point: str = Field(description="The main point or insight from the feedback.")
+    quote: str = Field(description="The exact quote from the transcript supporting this point.")
+    timestamp: str = Field(description="The timestamp or context of when this was mentioned in the call.")
+
+# Model representing the structured analysis of the customer feedback call
+class CallFeedbackOutput(BaseModel):
+    """Structured analysis of the customer feedback call."""
+    positive_points: List[FeedbackPoint] = Field(
+        default_factory=list,
+        description="List of positive feedback points, each with a supporting quote."
+    )
+    negative_points: List[FeedbackPoint] = Field(
+        default_factory=list,
+        description="List of negative feedback points, each with a supporting quote."
+    )
+
+@workflowai.agent(id="analyze-call-feedback", model=Model.GPT_4O_LATEST)
+async def analyze_call_feedback(input: CallFeedbackInput) -> CallFeedbackOutput:
+    """
+    Analyze a customer feedback call transcript to extract key insights:
+    1. Identify positive feedback points with supporting quotes
+    2. Identify negative feedback points with supporting quotes
+    3. Include timestamp/context for each point
 
-@workflowai.agent()
-async def say_hello(input: Input) -> Output:
-    """Say hello"""
+    Be specific and objective in the analysis. Use exact quotes from the transcript.
+    Maintain the customer's original wording in quotes.
+    """
     ...
 ```
 
@@ -102,7 +131,41 @@ run will be created. By default:
 
 - the docstring will be used as instructions for the agent
 - the default model (`workflowai.DEFAULT_MODEL`) is used to run the agent
-- the agent id will be a slugified version of the function name (i-e `say-hello`) in this case
+- the agent id will be a slugified version of the function name unless specified explicitly
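For illustration, the default-id convention can be sketched as a simple slugification of the function name. The `slugify_function_name` helper below is hypothetical, not part of the SDK; it just illustrates the assumed mapping:

```python
# Hypothetical helper illustrating the assumed naming convention:
# a function named analyze_call_feedback gets the agent id "analyze-call-feedback".
def slugify_function_name(name: str) -> str:
    return name.replace("_", "-")

print(slugify_function_name("analyze_call_feedback"))  # analyze-call-feedback
```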
+
+Example usage:
+
+```python
+# Example transcript
+transcript = '''
+[00:01:15] Customer: I've been using your software for about 3 months now, and I have to say the new dashboard feature is really impressive. It's saving me at least an hour each day on reporting.
+
+[00:02:30] Customer: However, I'm really frustrated with the export functionality. It crashed twice this week when I tried to export large reports, and I lost all my work.
+
+[00:03:45] Customer: On a positive note, your support team, especially Sarah, was very responsive when I reported the issue. She got back to me within minutes.
+
+[00:04:30] Customer: But I think the pricing for additional users is a bit steep compared to other solutions we looked at.
+'''
+
+# Analyze the feedback
+result = await analyze_call_feedback(
+    CallFeedbackInput(
+        transcript=transcript,
+        call_date=date(2024, 1, 15)
+    )
+)
+
+# Print the analysis
+print("\nPositive Points:")
+for point in result.positive_points:
+    print(f"\n• {point.point}")
+    print(f"  Quote [{point.timestamp}]: \"{point.quote}\"")
+
+print("\nNegative Points:")
+for point in result.negative_points:
+    print(f"\n• {point.point}")
+    print(f"  Quote [{point.timestamp}]: \"{point.quote}\"")
+```
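The example above uses `await` at the top level; in a plain script the call must run inside an event loop. A minimal sketch of an entry point (the `main` wrapper and its return value are illustrative, not part of the SDK):

```python
import asyncio

async def main() -> str:
    # Call the agent here, e.g.: result = await analyze_call_feedback(...)
    return "analysis complete"

if __name__ == "__main__":
    print(asyncio.run(main()))  # prints: analysis complete
```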
 
 > **What is "..."?**
 >
@@ -124,7 +187,7 @@ You can set the model explicitly in the agent decorator:
 from workflowai import Model
 
 @workflowai.agent(model=Model.GPT_4O_LATEST)
-def say_hello(input: Input) -> Output:
+async def analyze_call_feedback(input: CallFeedbackInput) -> CallFeedbackOutput:
     ...
 ```
 
@@ -149,7 +212,7 @@ more flexible than changing the function parameters when running in production.
 
 ```python
 @workflowai.agent(deployment="production") # or simply @workflowai.agent()
-def say_hello(input: Input) -> AsyncIterator[Run[Output]]:
+async def analyze_call_feedback(input: CallFeedbackInput) -> AsyncIterator[Run[CallFeedbackOutput]]:
     ...
 ```
 
@@ -163,7 +226,8 @@ the full run object.
 
 ```python
 @workflowai.agent()
-async def say_hello(input: Input) -> Run[Output]: ...
+async def analyze_call_feedback(input: CallFeedbackInput) -> Run[CallFeedbackOutput]:
+    ...
 
 
-run = await say_hello(Input(name="John"))
+run = await analyze_call_feedback(
+    CallFeedbackInput(
+        transcript="[00:01:15] Customer: The product is great!",
+        call_date=date(2024, 1, 15)
+    )
+)
@@ -180,12 +244,12 @@ You can configure the agent function to stream by changing the type annotation t
 ```python
 # Stream the output, the output is filled as it is generated
 @workflowai.agent()
-def say_hello(input: Input) -> AsyncIterator[Output]:
+async def analyze_call_feedback(input: CallFeedbackInput) -> AsyncIterator[CallFeedbackOutput]:
     ...
 
 # Stream the run object, the output is filled as it is generated
 @workflowai.agent()
-def say_hello(input: Input) -> AsyncIterator[Run[Output]]:
+async def analyze_call_feedback(input: CallFeedbackInput) -> AsyncIterator[Run[CallFeedbackOutput]]:
     ...
 ```
 
@@ -241,7 +305,7 @@ To use a tool, simply add its handle to the instructions (the function docstri
 
 ```python
 @workflowai.agent()
-def say_hello(input: Input) -> Output:
+async def analyze_call_feedback(input: CallFeedbackInput) -> CallFeedbackOutput:
     """
     You can use @search and @browser-text to retrieve information about the name.
     """
@@ -311,7 +375,12 @@ The `WorkflowAIError` is raised when the agent is called, so you can handle it l
 
 ```python
 try:
-    await say_hello(Input(name="John"))
+    await analyze_call_feedback(
+        CallFeedbackInput(
+            transcript="[00:01:15] Customer: The product is great!",
+            call_date=date(2024, 1, 15)
+        )
+    )
 except WorkflowAIError as e:
     print(e.code)
     print(e.message)
@@ -340,7 +409,7 @@ assert run.error is not None
 assert run.output is not None
 ```
 
-### Definining input and output types
+### Defining input and output types
 
 There are some important subtleties when defining input and output types.
 
@@ -350,17 +419,25 @@ Field description and examples are passed to the model and can help steer the out
 use case is to describe a format or style for a string field
 
 ```python
-# summary has no examples or description so the model will likely return a block of text
-class SummaryOutput(BaseModel):
-    summary: str
+# point has no examples or description so the model will be less guided
+class BasicFeedbackPoint(BaseModel):
+    point: str
 
-# passing the description will help the model return a summary formatted as bullet points
-class SummaryOutput(BaseModel):
-    summary: str = Field(description="A summary, formatted as bullet points")
+# passing the description helps guide the model's output format
+class DetailedFeedbackPoint(BaseModel):
+    point: str = Field(
+        description="A clear, specific point of feedback extracted from the transcript."
+    )
 
 # passing examples can help as well
-class SummaryOutput(BaseModel):
-    summary: str = Field(examples=["- Paris is a city in France\n- London is a city in England"])
+class FeedbackPoint(BaseModel):
+    point: str = Field(
+        description="A clear, specific point of feedback extracted from the transcript.",
+        examples=[
+            "Dashboard feature saves significant time on reporting",
+            "Export functionality is unstable with large reports"
+        ]
+    )
 ```
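These descriptions and examples reach the model through the generated JSON schema. A quick sketch (assuming Pydantic v2) showing where the field metadata ends up:

```python
from pydantic import BaseModel, Field

class FeedbackPoint(BaseModel):
    point: str = Field(
        description="A clear, specific point of feedback extracted from the transcript.",
        examples=["Dashboard feature saves significant time on reporting"],
    )

# The field metadata is embedded in the schema describing the output
schema = FeedbackPoint.model_json_schema()
print(schema["properties"]["point"]["description"])
print(schema["properties"]["point"]["examples"])
```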
 
 Some notes:
@@ -378,35 +455,41 @@ Although the fact that a field is required is passed to the model, the generatio
 values.
 
 ```python
-class Input(BaseModel):
-    name: str
-
-class OutputStrict(BaseModel):
-    greeting: str
+class CallFeedbackOutputStrict(BaseModel):
+    positive_points: List[FeedbackPoint]
+    negative_points: List[FeedbackPoint]
 
 @workflowai.agent()
-async def say_hello_strict(_: Input) -> OutputStrict:
+async def analyze_call_feedback_strict(input: CallFeedbackInput) -> CallFeedbackOutputStrict:
     ...
 
 try:
-    run = await say_hello(Input(name="John"))
-    print(run.output.greeting) # "Hello, John!"
+    result = await analyze_call_feedback_strict(
+        CallFeedbackInput(
+            transcript="[00:01:15] Customer: The product is great!",
+            call_date=date(2024, 1, 15)
+        )
+    )
 except WorkflowAIError as e:
     print(e.code) # "invalid_generation" error code means that the generation did not match the schema
 
-class OutputTolerant(BaseModel):
-    greeting: str = ""
+class CallFeedbackOutputTolerant(BaseModel):
+    positive_points: List[FeedbackPoint] = Field(default_factory=list)
+    negative_points: List[FeedbackPoint] = Field(default_factory=list)
 
 @workflowai.agent()
-async def say_hello_tolerant(_: Input) -> OutputTolerant:
+async def analyze_call_feedback_tolerant(input: CallFeedbackInput) -> CallFeedbackOutputTolerant:
     ...
 
 # The invalid_generation is less likely
-run = await say_hello_tolerant(Input(name="John"))
-if not run.output.greeting:
-    print("No greeting was generated!")
-print(run.output.greeting) # "Hello, John!"
-
+result = await analyze_call_feedback_tolerant(
+    CallFeedbackInput(
+        transcript="[00:01:15] Customer: The product is great!",
+        call_date=date(2024, 1, 15)
+    )
+)
+if not result.positive_points and not result.negative_points:
+    print("No feedback points were generated!")
 ```
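The strict-versus-tolerant difference above is plain Pydantic validation behavior. A small self-contained sketch (the model names here are illustrative): without defaults an empty generation fails validation, while defaults let it validate with empty lists.

```python
from typing import List
from pydantic import BaseModel, Field, ValidationError

class FeedbackPoint(BaseModel):
    point: str = ""

class StrictOutput(BaseModel):
    positive_points: List[FeedbackPoint]  # required, no default

class TolerantOutput(BaseModel):
    positive_points: List[FeedbackPoint] = Field(default_factory=list)

# An empty generation fails strict validation...
try:
    StrictOutput.model_validate({})
except ValidationError:
    print("strict model rejected the empty generation")

# ...but passes tolerant validation with an empty list
print(TolerantOutput.model_validate({}).positive_points)  # []
```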
 
 > WorkflowAI automatically retries invalid generations once. If a model outputs an object that does not match the
@@ -417,35 +500,19 @@ Another reason to prefer optional fields in the output is for streaming. Partial
 absent will cause `AttributeError` when queried.
 
 ```python
-class Input(BaseModel):
-    name: str
-
-class OutputStrict(BaseModel):
-    greeting1: str
-    greeting2: str
-
-@workflowai.agent()
-def say_hello_strict(_: Input) -> AsyncIterator[Output]:
-    ...
-
-async for run in say_hello(Input(name="John")):
-    try:
-        print(run.output.greeting1)
-    except AttributeError:
-        # run.output.greeting1 has not been generated yet
-
-
-class OutputTolerant(BaseModel):
-    greeting1: str = ""
-    greeting2: str = ""
-
 @workflowai.agent()
-def say_hello_tolerant(_: Input) -> AsyncIterator[OutputTolerant]:
+async def analyze_call_feedback_stream(input: CallFeedbackInput) -> AsyncIterator[CallFeedbackOutput]:
     ...
 
-async for run in say_hello(Input(name="John")):
-    print(run.output.greeting1) # will be empty if the model has not generated it yet
-
+async for result in analyze_call_feedback_stream(
+    CallFeedbackInput(
+        transcript="[00:01:15] Customer: The product is great!",
+        call_date=date(2024, 1, 15)
+    )
+):
+    # With default values, we can safely check the points as they stream in
+    print(f"Positive points so far: {len(result.positive_points)}")
+    print(f"Negative points so far: {len(result.negative_points)}")
 ```
 
 #### Field properties