
Commit 71da663

Author: Pierre
Commit message: Update name_extractor.py
1 parent 4e24f35

File tree: 1 file changed (+20, −1 lines)

examples/reply/name_extractor.py

Lines changed: 20 additions & 1 deletion
@@ -1,3 +1,19 @@
+"""
+This example demonstrates how to use the reply() method to have a conversation with the agent/LLM.
+After getting an initial response, you can use reply() to ask follow-up questions or request
+confirmation. The agent/LLM maintains context from the previous interaction, allowing it to:
+
+1. Confirm its previous output
+2. Correct mistakes if needed
+3. Provide additional explanation
+4. Refine its response based on new information
+
+Example:
+    run = await my_agent(input)  # Initial response
+    run = await run.reply(user_message="Are you sure?")  # Ask for confirmation
+    ...
+"""
+
 import asyncio
 
 from dotenv import load_dotenv
@@ -52,7 +68,10 @@ async def main():
 
     print(f"Extracted: {run.output.first_name} {run.output.last_name}")
 
-    # Double check with a simple confirmation
+    # The reply() method allows you to continue the conversation with the LLM
+    # by sending a follow-up message. The LLM will maintain context from the
+    # previous interaction and can confirm or revise its previous output.
+    # Here we ask it to double check its extraction.
     run = await run.reply(user_message="Are you sure?")
 
     print("\nAfter double-checking:")
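The pattern this diff documents — a run object that carries conversation state and returns a new run on each `reply()` — can be sketched with a minimal, self-contained mock. Note that `FakeAgent`, the `Run` dataclass, and the canned responses below are hypothetical stand-ins for illustration, not the library's actual API:

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class Run:
    """Holds the conversation history so reply() can maintain context."""
    agent: "FakeAgent"
    messages: list = field(default_factory=list)
    output: str = ""

    async def reply(self, user_message: str) -> "Run":
        # Continue the conversation: prior messages are carried forward,
        # so the "model" sees the full context on every follow-up turn.
        history = self.messages + [("user", user_message)]
        return await self.agent._complete(history)


class FakeAgent:
    async def __call__(self, user_input: str) -> Run:
        return await self._complete([("user", user_input)])

    async def _complete(self, messages: list) -> Run:
        # A real agent would call an LLM here; we return a canned answer
        # that reflects how many user turns the conversation has seen.
        turns = sum(1 for role, _ in messages if role == "user")
        answer = f"turn {turns}"
        return Run(
            agent=self,
            messages=messages + [("assistant", answer)],
            output=answer,
        )


async def main():
    agent = FakeAgent()
    run = await agent("Extract the name from: Pierre Curie")  # initial response
    run = await run.reply(user_message="Are you sure?")       # follow-up keeps context
    print(run.output)         # turn 2
    print(len(run.messages))  # 4: two user turns, two assistant turns


asyncio.run(main())
```

The key design choice mirrored from the example: `reply()` returns a new run rather than mutating the old one, so each follow-up sees the full accumulated history and intermediate runs remain inspectable.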
