Tonic committed on
Commit 967138a · verified · 1 Parent(s): 00ef5a4

Update README.md

Files changed (1): README.md +0 -372
README.md CHANGED
@@ -7,375 +7,3 @@ sdk: static
pinned: false
short_description: Langchain / LangGraph Chat UI
---
## LangGraph Agent Chat UI: Your Gateway to Agent Interaction

The Agent Chat UI is a React/Vite application that provides a clean, chat-based interface for interacting with your LangGraph agents. Here's why it's a valuable tool:

* **Easy Connection:** Connect to local or deployed LangGraph agents with a simple URL and graph ID.
* **Intuitive Chat:** Interact naturally with your agents, sending and receiving messages in a familiar chat format.
* **Visualize Agent Actions:** See tool calls and their results rendered directly in the UI.
* **Human-in-the-Loop Made Easy:** Seamlessly integrate human input using LangGraph's `interrupt` feature. The UI handles the presentation and interaction, allowing for approvals, edits, and responses.
* **Explore Execution Paths:** Use the UI to travel through time, inspect checkpoints, and fork conversations, all powered by LangGraph's state management.
* **Debug and Understand:** Inspect the full state of your LangGraph thread at any point.
## Get Started with the Agent Chat UI (and LangGraph!)

You have several options to start using the UI:

### 1. Try the Deployed Version (No Setup Required!)

* **Visit:** [agentchat.vercel.app](https://agentchat.vercel.app/)
* **Connect:** Enter your LangGraph deployment URL and graph ID (the `path` you set with `langserve.add_routes`). If using a production deployment, also include your LangSmith API key.
* **Chat!** You're ready to interact with your agent.

### 2. Run Locally (for Development and Customization)

* **Option A: Clone the Repository:**

  ```bash
  git clone https://github.com/langchain-ai/agent-chat-ui.git
  cd agent-chat-ui
  pnpm install # Or npm install / yarn install
  pnpm dev     # Or npm run dev / yarn dev
  ```

* **Option B: Quickstart with `npx`:**

  ```bash
  npx create-agent-chat-app
  cd agent-chat-app
  pnpm install # Or npm install / yarn install
  pnpm dev     # Or npm run dev / yarn dev
  ```

Open your browser to `http://localhost:5173` (or the port indicated in your terminal).
# LangGraph Agent Chat UI

This project provides a simple, intuitive user interface (UI) for interacting with LangGraph agents. It's built with React and Vite, offering a responsive chat-like experience for testing and demonstrating your LangGraph deployments. It's designed to work seamlessly with LangGraph's core concepts, including checkpoints, thread management, and human-in-the-loop capabilities.
## Features

* **Easy Connection:** Connect to both local and production LangGraph deployments by simply providing the deployment URL and graph ID (the path used when defining the graph).
* **Chat Interface:** Interact with your agents through a familiar chat interface, sending and receiving messages in real-time. The UI manages the conversation thread, automatically using checkpoints for persistence.
* **Tool Call Rendering:** The UI automatically renders tool calls and their results, making it easy to visualize the agent's actions. This is compatible with LangGraph's [tool calling and function calling capabilities](https://python.langchain.com/docs/guides/tools/custom_tools).
* **Human-in-the-Loop Support:** Seamlessly integrate human intervention using LangGraph's `interrupt` function. The UI presents a dedicated interface for reviewing, editing, and responding to interrupt requests (e.g., for approval or modification of agent actions), following the standardized schema.
* **Thread History:** View and navigate through past chat threads, enabling you to review previous interactions. This leverages LangGraph's checkpointing for persistent conversation history.
* **Time Travel and Forking:** Leverage LangGraph's powerful state management features, including [checkpointing](https://python.langchain.com/docs/modules/agents/concepts#checkpointing) and thread manipulation. Run the graph from specific checkpoints, explore different execution paths, and edit previous messages.
* **State Inspection:** Examine the current state of your LangGraph thread for debugging and understanding the agent's internal workings. This allows you to inspect the full state object managed by LangGraph.
* **Multiple Deployment Options:**
  * **Deployed Site:** Use the hosted version at [agentchat.vercel.app](https://agentchat.vercel.app/)
  * **Local Development:** Clone the repository and run it locally for development and customization.
  * **Quick Setup:** Use `npx create-agent-chat-app` for a fast, streamlined setup.
* **LangSmith API Key:** When using a production deployment, you must provide a LangSmith API key.
## Getting Started

There are three main ways to use the Agent Chat UI:

### 1. Using the Deployed Site (Easiest)

1. **Navigate:** Go to [agentchat.vercel.app](https://agentchat.vercel.app/).
2. **Enter Details:**
   * **Deployment URL:** The URL of your LangGraph deployment (e.g., `http://localhost:2024` for a local deployment using LangServe, or the URL provided by LangSmith for a production deployment).
   * **Assistant / Graph ID:** The path of the graph you want to interact with (e.g., `chat`, `email_agent`). This is defined when adding routes with `add_routes(..., path="/your_path")`.
   * **LangSmith API Key** (Production Deployments Only): If you are connecting to a deployment hosted on LangSmith, you will need to provide your LangSmith API key for authentication. *This is NOT required for local LangGraph servers.* The key is stored locally in your browser's storage.
3. **Click "Continue":** You'll be taken to the chat interface, ready to interact with your agent.
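In code terms, the three connection fields above amount to a small bundle of settings. Here is a minimal sketch, assuming plain-string handling of the URL and graph ID; the `ConnectionSettings` class and `endpoint` helper are illustrative names, not part of the UI's actual codebase:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ConnectionSettings:
    """Illustrative bundle of the three values the UI asks for."""

    deployment_url: str                      # e.g. "http://localhost:2024"
    graph_id: str                            # the `path` given to add_routes, e.g. "chat"
    langsmith_api_key: Optional[str] = None  # production deployments only

    def endpoint(self) -> str:
        # The graph is served under the deployment URL at its `path`.
        return self.deployment_url.rstrip("/") + "/" + self.graph_id.strip("/")


settings = ConnectionSettings("http://localhost:2024", "chat")
print(settings.endpoint())  # http://localhost:2024/chat
```

The API key stays optional because, as noted above, it is only required for LangSmith-hosted deployments.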
### 2. Local Development (Full Control)

1. **Clone the Repository:**

   ```bash
   git clone https://github.com/langchain-ai/agent-chat-ui.git
   cd agent-chat-ui
   ```

2. **Install Dependencies:**

   ```bash
   pnpm install # Or npm install, or yarn install
   ```

3. **Start the Development Server:**

   ```bash
   pnpm dev # Or npm run dev, or yarn dev
   ```

4. **Open in Browser:** The application will typically be available at `http://localhost:5173` (the port may vary; check your terminal output). Follow the instructions in "Using the Deployed Site" to connect to your LangGraph.
### 3. Quick Setup with `npx create-agent-chat-app`

This method creates a new project directory with the Agent Chat UI already set up.

1. **Run the Command:**

   ```bash
   npx create-agent-chat-app
   ```

2. **Follow Prompts:** You'll be prompted for a project name (default is `agent-chat-app`).

3. **Navigate to Project Directory:**

   ```bash
   cd agent-chat-app
   ```

4. **Install and Run:**

   ```bash
   pnpm install # Or npm install, or yarn install
   pnpm dev     # Or npm run dev, or yarn dev
   ```

5. **Open in Browser:** The application will be available at `http://localhost:5173`. Follow the instructions in "Using the Deployed Site" to connect.
## LangGraph Setup (Prerequisites)

Before using the Agent Chat UI, you need a running LangGraph agent served via LangServe. Below are examples using both a simple agent and an agent with human-in-the-loop.
### Basic LangGraph Example (Python)

```python
# agent.py (Example LangGraph agent - Python)
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import chain
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.prebuilt import create_agent_executor
from langchain_core.tools import tool

# FastAPI and LangServe for serving the graph
from fastapi import FastAPI
from langserve import add_routes


@tool
def get_weather(city: str):
    """Gets the weather for a specified city."""
    if city.lower() == "new york":
        return "The weather in New York is nice today with a high of 75F."
    else:
        return "The weather for that city is not supported"


# Define the tools
tools = [get_weather]

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

model = ChatOpenAI(temperature=0).bind_tools(tools)


@chain
def transform_messages(data):
    messages = data["messages"]
    if not isinstance(messages[-1], HumanMessage):
        messages.append(
            AIMessage(
                content="I don't know how to respond to messages other than a final answer"
            )
        )
    return {"messages": messages}


agent = (
    {
        "messages": transform_messages,
        "agent_scratchpad": lambda x: [],  # Start with an empty scratchpad; the executor fills it in
    }
    | prompt
    | model
)

# Wrap the agent in an executor graph
app = create_agent_executor(agent, tools)

# Serve the graph using FastAPI and langserve
fastapi_app = FastAPI(
    title="LangGraph Agent",
    version="1.0",
    description="A simple LangGraph agent server",
)

# Mount LangServe at the /chat endpoint
add_routes(
    fastapi_app,
    app,
    path="/chat",  # Matches the graph ID we'll use in the UI
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(fastapi_app, host="localhost", port=2024)
```
To run this example:

1. Save the code as `agent.py`.
2. Install the necessary packages: `pip install langchain langchain-core langchain-openai langgraph fastapi uvicorn "langserve[all]"` (add any other packages your tools need).
3. Set your OpenAI API key: `export OPENAI_API_KEY="your-openai-api-key"`
4. Run the script: `python agent.py`
5. Your LangGraph agent will be running at `http://localhost:2024/chat`, and the graph ID to enter into the UI is `chat`.
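Once the server is running, clients such as the Agent Chat UI talk to LangServe's standard endpoints under the graph's path (e.g. `/invoke`). As a hedged illustration, the sketch below only *builds* such a request with the standard library and does not send it; the `build_invoke_request` helper and the payload shape are assumptions for illustration, not the UI's actual wire code:

```python
import json
import urllib.request


def build_invoke_request(base_url: str, graph_path: str, messages: list) -> urllib.request.Request:
    """Build (but do not send) a POST to the graph's LangServe /invoke endpoint."""
    url = f"{base_url.rstrip('/')}/{graph_path.strip('/')}/invoke"
    body = json.dumps({"input": {"messages": messages}}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_invoke_request(
    "http://localhost:2024",
    "chat",
    [{"type": "human", "content": "What's the weather in New York?"}],
)
print(req.full_url)  # http://localhost:2024/chat/invoke
# urllib.request.urlopen(req) would send it once the server from agent.py is running.
```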
### LangGraph with Human-in-the-Loop Example (Python)

```python
# agent.py (Example LangGraph agent with human-in-the-loop - Python)
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import chain
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.prebuilt import create_agent_executor, ToolInvocation, interrupt
from langchain_core.tools import tool
from fastapi import FastAPI
from langserve import add_routes


@tool
def write_email(subject: str, body: str, to: str):
    """Drafts an email with a specified subject, body and recipient."""
    print(f"Writing email with subject '{subject}' to '{to}'")  # Debugging
    return f"Draft email to {to} with subject {subject} sent."


tools = [write_email]

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant that drafts emails."),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

model = ChatOpenAI(temperature=0, model="gpt-4-turbo-preview").bind_tools(tools)


@chain
def transform_messages(data):
    messages = data["messages"]
    if not isinstance(messages[-1], HumanMessage):
        messages.append(
            AIMessage(
                content="I don't know how to respond to messages other than a final answer"
            )
        )
    return {"messages": messages}


def handle_interrupt(state):
    """Handles human-in-the-loop interruptions."""
    print("---INTERRUPT---")  # Debugging
    messages = state["messages"]
    last_message = messages[-1]

    if isinstance(last_message, AIMessage) and isinstance(last_message.content, list):
        # Find the tool call
        for msg in last_message.content:
            if isinstance(msg, ToolInvocation):
                tool_name = msg.name
                tool_args = msg.args
                if tool_name == "write_email":
                    # Construct the human interrupt request
                    interrupt_data = {
                        "type": "interrupt",
                        "args": {
                            "type": "response",
                            "studio": {  # optional
                                "subject": tool_args["subject"],
                                "body": tool_args["body"],
                                "to": tool_args["to"],
                            },
                            "description": "Response Instruction: \n\n- **Response**: Any response submitted will be passed to an LLM to rewrite the email. It can rewrite the email body, subject, or recipient.\n\n- **Edit or Accept**: Editing/Accepting the email.",
                        },
                    }
                    # Call the interrupt function and return the new state
                    return interrupt(messages, interrupt_data)
    return {"messages": messages}


agent = (
    {
        "messages": transform_messages,
        "agent_scratchpad": lambda x: x.get("agent_scratchpad", []),
    }
    | prompt
    | model
    | handle_interrupt  # Add the interrupt handler
)

# Wrap the agent in an executor graph
app = create_agent_executor(agent, tools)

# Serve the graph using FastAPI and langserve
fastapi_app = FastAPI(
    title="LangGraph Agent",
    version="1.0",
    description="A simple LangGraph agent server",
)

# Mount LangServe at the /email_agent endpoint
add_routes(
    fastapi_app,
    app,
    path="/email_agent",  # Matches the graph ID we'll use in the UI
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(fastapi_app, host="localhost", port=2024)
```
To run this example:

1. Save the code as `agent.py`.
2. Install the necessary packages: `pip install langchain langchain-core langchain-openai langgraph fastapi uvicorn "langserve[all]"` (add any other packages your tools need).
3. Set your OpenAI API key: `export OPENAI_API_KEY="your-openai-api-key"`
4. Run the script: `python agent.py`
5. Your LangGraph agent will be running at `http://localhost:2024/email_agent`, and the graph ID to enter into the UI is `email_agent`.
## Key Concepts (LangGraph Integration)

* **Messages Key:** The Agent Chat UI expects your LangGraph state to include a `messages` key, which holds a list of `langchain_core.messages.BaseMessage` instances (e.g., `HumanMessage`, `AIMessage`, `SystemMessage`, `ToolMessage`). This is standard practice in LangChain and LangGraph for conversational agents.
* **Checkpoints:** The UI automatically utilizes LangGraph's checkpointing mechanism to save and restore the conversation state. This ensures that you can resume conversations and explore different branches without losing progress.
* **`add_routes` and `path`:** The `path` argument in `add_routes` (from `langserve`) determines the "Graph ID" that you'll enter in the UI. This is crucial for the UI to connect to the correct LangGraph endpoint.
* **Tool Calling:** If you use `bind_tools` with your LLM, tool calls and tool results will be rendered in the UI, with clear labels showing the function call and the response.
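The `messages` contract above can be illustrated with plain dictionaries standing in for `BaseMessage` objects. This is a minimal sketch of what a thread containing a tool call might look like, plus a toy helper in the spirit of the UI's tool-call rendering; the dict shapes and the `tool_calls` function are illustrative assumptions, not the UI's real internals:

```python
# Plain-dict stand-ins for langchain_core message objects, for illustration only.
thread_state = {
    "messages": [
        {"type": "human", "content": "What's the weather in New York?"},
        {
            "type": "ai",
            "content": "",
            "tool_calls": [  # rendered by the UI as a tool-call card
                {"id": "call_1", "name": "get_weather", "args": {"city": "new york"}}
            ],
        },
        {
            "type": "tool",
            "tool_call_id": "call_1",
            "content": "The weather in New York is nice today with a high of 75F.",
        },
        {"type": "ai", "content": "It's nice in New York today, with a high of 75F."},
    ]
}


def tool_calls(state: dict) -> list:
    """Collect every tool call in the thread, roughly as a renderer would."""
    calls = []
    for msg in state["messages"]:
        calls.extend(msg.get("tool_calls", []))
    return calls


print([c["name"] for c in tool_calls(thread_state)])  # ['get_weather']
```

Because the full state (not just the latest message) is checkpointed, the same structure is what you see when inspecting a thread or forking from an earlier checkpoint.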
## Human-in-the-Loop Details

The Agent Chat UI supports human-in-the-loop interactions using the standard LangGraph interrupt schema. Here's how it works:

1. **Interrupt Schema:** Your LangGraph agent should call the `interrupt` function (from `langgraph.prebuilt`) with a specific schema to pause execution and request human input. The schema should include:
   * `type`: `interrupt`.
   * `args`: A dictionary containing information about the interruption. This is where you provide the data the human needs to review (e.g., a draft email, a proposed action).
     * `type`: Can be one of `"response"`, `"accept"`, or `"ignore"`. This indicates the type of human interaction expected.
     * `args`: Further arguments specific to the interrupt type. For instance, if the interrupt type is `response`, the `args` could contain a message to show to the user.
     * `studio`: *Optional.* If included, this must contain the `subject`, `body`, and `to` keys for email-style interrupt requests.
     * `description`: *Optional.* If used, this provides a static prompt telling the user which fields need to be completed.
   * `name` (optional): A name for the interrupt.
   * `id` (optional): A unique identifier for the interrupt.

2. **UI Rendering:** When the Agent Chat UI detects an interrupt with this schema, it automatically renders a user-friendly interface for human interaction. This interface allows the user to:
   * **Inspect:** View the data provided in the `args` of the interrupt (e.g., the content of a draft email).
   * **Edit:** Modify the data (if the interrupt schema allows for it).
   * **Respond:** Provide a response (if the interrupt type is `"response"`).
   * **Accept/Reject:** Approve or reject the proposed action (if the interrupt type is `"accept"`).
   * **Ignore:** Ignore the interrupt (if the interrupt type is `"ignore"`).

3. **Resuming Execution:** After the human interacts with the interrupt, the UI sends the response back to your LangGraph agent via LangServe, and execution resumes.
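Putting the schema and the resume flow together, here is a minimal stdlib sketch. The `make_interrupt` builder mirrors the schema described above, and the `resume` dispatcher is an illustrative assumption about how the three interaction types differ; the real UI and LangGraph's `interrupt` function implement this machinery themselves:

```python
from typing import Optional

ALLOWED_TYPES = {"response", "accept", "ignore"}


def make_interrupt(request_type: str, description: str, **fields) -> dict:
    """Build a payload shaped like the interrupt schema described above."""
    if request_type not in ALLOWED_TYPES:
        raise ValueError(f"unknown interrupt type: {request_type}")
    return {
        "type": "interrupt",
        "args": {"type": request_type, "args": fields, "description": description},
    }


def resume(interrupt_payload: dict, human_reply: Optional[str]) -> str:
    """Decide how execution resumes once the human has interacted with the UI."""
    kind = interrupt_payload["args"]["type"]
    if kind == "ignore":
        return "continue unchanged"
    if kind == "accept":
        return "run the proposed action"
    # "response": the human's text is fed back to the agent (e.g. to rewrite a draft).
    return f"rewrite using human response: {human_reply}"


payload = make_interrupt(
    "response",
    "Review the draft email",
    subject="Hello",
    body="Hi there",
    to="a@example.com",
)
print(resume(payload, "Make it more formal"))  # rewrite using human response: Make it more formal
```

The `write_email` example earlier builds essentially this payload inline; validating the `type` field up front, as here, makes malformed interrupts fail loudly rather than render incorrectly.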
 