@@ -121,6 +121,8 @@ WorkflowAI supports a long list of models. The source of truth for models we sup
 You can set the model explicitly in the agent decorator:
 
 ```python
+from workflowai import Model
+
 @workflowai.agent(model=Model.GPT_4O_LATEST)
 def say_hello(input: Input) -> Output:
     ...
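The decorator call above is a standard parameterized decorator. As a minimal, self-contained sketch of that pattern (the `Model` enum and `agent` function below are simplified stand-ins, not the actual WorkflowAI implementation):

```python
import functools
from enum import Enum

# Hypothetical stand-ins illustrating the decorator-with-model pattern;
# not the actual WorkflowAI SDK.
class Model(str, Enum):
    GPT_4O_LATEST = "gpt-4o-latest"

def agent(model: Model):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        wrapper.model = model  # the decorator records which model to run on
        return wrapper
    return decorator

@agent(model=Model.GPT_4O_LATEST)
def say_hello(name: str) -> str:
    return f"Hello, {name}!"

print(say_hello.model.value)  # gpt-4o-latest
```

The real SDK would dispatch the call to the recorded model instead of running the function body, but the decorator mechanics are the same.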
@@ -151,18 +153,31 @@ def say_hello(input: Input) -> AsyncIterator[Run[Output]]:
     ...
 ```
 
-### Streaming and advanced usage
+### The Run object
+
+Although having an agent return only the run output covers most use cases, some use cases require more
+information about the run.
 
-You can configure the agent function to stream or return the full run object, simply by changing the type annotation.
+By changing the return type annotation of the agent function to `Run[Output]`, the generated function will return
+the full run object.
 
158164``` python
159- # Return the full run object, useful if you want to extract metadata like cost or duration
160- # The generated function also tries to recover from errors in the generation process and will attempt to process final
161- # outputs even when there is a partial error.
162165@workflowai.agent ()
163- async def say_hello (input : Input) -> Run[Output]:
164- ...
166+ async def say_hello (input : Input) -> Run[Output]: ...
167+
165168
169+ run = await say_hello(Input(name = " John" ))
170+ print (run.output) # the output, as before
171+ print (run.model) # the model used for the run
172+ print (run.cost_usd) # the cost of the run in USD
173+ print (run.duration_seconds) # the duration of the inference in seconds
174+ ```
+
+### Streaming
+
+You can configure the agent function to stream by changing the return type annotation to an `AsyncIterator`.
+
+```python
 # Stream the output, the output is filled as it is generated
 @workflowai.agent()
 def say_hello(input: Input) -> AsyncIterator[Output]:
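The streamed chunks from an `AsyncIterator`-annotated agent are consumed with `async for`. As a minimal, self-contained sketch of that consumption pattern (`fake_say_hello` is a hypothetical stand-in for a streaming agent, not part of the WorkflowAI SDK):

```python
import asyncio
from typing import AsyncIterator

# Hypothetical stand-in for a streaming agent: yields progressively
# more complete outputs, the way an agent fills fields as they are generated.
async def fake_say_hello(name: str) -> AsyncIterator[str]:
    for partial in ("Hel", "Hello, ", f"Hello, {name}!"):
        yield partial

async def collect(name: str) -> list[str]:
    chunks: list[str] = []
    async for chunk in fake_say_hello(name):
        chunks.append(chunk)  # each chunk supersedes the previous one
    return chunks

print(asyncio.run(collect("John"))[-1])  # Hello, John!
```

Each yielded chunk is a progressively more complete output, so callers typically keep only the latest one.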