Building an Agentic Workflow with OpenAI, LangGraph, and Python

How to architect intelligent, structured LLM systems the way real humans think

👋 A Real-Life Analogy to Start With


Imagine you’ve just called customer support.

You: “Hey, I need help!”
Agent: “Sure! Is this about a technical issue or something general?”
You: “It’s about a code bug.”
Agent: “Let me transfer you to a tech expert.”
Tech Agent: “Please describe the issue…”
(Verifies your query, confirms it’s a real bug, gives you a fix.)

That’s exactly how our LLM agents should behave too:
Listen → Understand → Route → Validate → Respond

Now, what if I told you that you could build this same thinking process using Python, OpenAI, LangGraph, and LangChain?

Let’s build it together. Line by line. Brain by brain.

🧰 What We’re Building

We’ll build a modular LLM agentic workflow that:

  1. Accepts a user query.
  2. Uses GPT to classify it: coding-related or general?
  3. Based on the result:
  • Routes to either coding_query() or general_query().
  • If coding, it goes through an additional validation step.

Here's the shape of your agent's brain: query → classify → route → (general answer, or coding answer → validation).

🛠 Tech Stack

  • OpenAI GPT: handles classification & answering
  • LangGraph: manages structured flows
  • LangChain: simplifies LLM integration
  • Pydantic: validates GPT responses
  • Python: orchestrates everything

🧪 Let’s Start Coding

1. Install Dependencies

pip install langgraph langchain openai pydantic python-dotenv

2. Project Structure

agentic-workflow/
├── main.py
├── .env

🧾 main.py — Full Code with Explanations

import os
from dotenv import load_dotenv
from openai import OpenAI
from pydantic import BaseModel
from typing import Literal, TypedDict
from langgraph.graph import StateGraph, START, END
load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

🔹 What is StateGraph?

Think of it as a smart flowchart in code. Each node is a step (or decision), and the edges define how you move from one node to another.

💡 It’s great for LLM systems that mimic human workflows like: ask → analyze → act.
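Before reaching for LangGraph, the "smart flowchart" idea can be sketched in plain Python: a dict of node functions plus a dict of edges. All names below (`NODES`, `EDGES`, `run`) are illustrative, not LangGraph APIs:

```python
# A toy "state graph": each node is a function that takes state and returns
# state, and edges say which node runs next. (Sketch, not LangGraph API.)
def ask(state):
    state["question"] = state["raw_input"].strip()
    return state

def analyze(state):
    state["is_code"] = "bug" in state["question"].lower()
    return state

def act(state):
    state["answer"] = "route to tech" if state["is_code"] else "route to general"
    return state

NODES = {"ask": ask, "analyze": analyze, "act": act}
EDGES = {"ask": "analyze", "analyze": "act", "act": None}  # None marks END

def run(state, start="ask"):
    node = start
    while node is not None:
        state = NODES[node](state)
        node = EDGES[node]
    return state

result = run({"raw_input": "  I found a bug in my code "})
print(result["answer"])  # route to tech
```

LangGraph gives you the same shape, plus typed state, conditional edges, and compilation, which is why we use it below.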

🔹 Why TypedDict for State?

class State(TypedDict):
    user_query: str
    is_coding_question: bool

State defines what data flows between steps. It’s your global whiteboard that each node can read/write.
Using TypedDict ensures:

  • 🧼 Type-safety
  • 🤖 Better autocomplete
  • 🛡 Prevents accidental bugs
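For instance, a type checker like mypy will flag a misspelled key before the program ever runs, where a plain dict would fail silently:

```python
from typing import TypedDict

class State(TypedDict):
    user_query: str

state: State = {"user_query": "How do I reverse a list?"}

# Reading a declared key is safe, and editors can autocomplete it:
print(state["user_query"])

# A typo such as state["user_qeury"] = "..." would be caught by mypy
# during static checking; a plain dict would accept it without complaint.
```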

🔹 classify_message() — Uses GPT to classify

class ClassifyMessageResponse(BaseModel):
    is_coding_question: bool

def classify_message(state: State) -> State:
    query = state["user_query"]
    # Structured output: the SDK parses the reply into our Pydantic model.
    response = client.beta.chat.completions.parse(
        model="gpt-4.1-nano",
        messages=[
            {"role": "system", "content": "Classify if this is a programming question."},
            {"role": "user", "content": query},
        ],
        response_format=ClassifyMessageResponse,
    )
    result = response.choices[0].message.parsed
    return {**state, "is_coding_question": result.is_coding_question}

🔹 Why Use Pydantic?

You want structured output from GPT, not guesswork. Pydantic:

  • 📦 Parses JSON
  • ✅ Validates keys & types
  • 🤝 Returns Python objects (not messy strings)
  • 🤝 Plays the same role as Zod in JavaScript
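Here's that validation in isolation, with no LLM involved (using the same `ClassifyMessageResponse` model defined above):

```python
from pydantic import BaseModel, ValidationError

class ClassifyMessageResponse(BaseModel):
    is_coding_question: bool

# Well-formed JSON parses into a typed Python object:
ok = ClassifyMessageResponse.model_validate_json('{"is_coding_question": true}')
print(ok.is_coding_question)  # True

# Malformed output is rejected instead of silently passed through:
rejected = None
try:
    ClassifyMessageResponse.model_validate_json('{"is_coding_question": "maybe"}')
except ValidationError as err:
    rejected = err
print("rejected:", rejected is not None)
```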

🔹 route_query() — Branches without using LLM

def route_query(state: State) -> Literal["general", "coding"]:
    return "coding" if state.get("is_coding_question") else "general"
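Because `route_query` is plain Python with no LLM call, it is deterministic and trivially testable on hand-built states:

```python
from typing import Literal

def route_query(state: dict) -> Literal["general", "coding"]:
    # No LLM needed: the classification step already stored the flag.
    return "coding" if state.get("is_coding_question") else "general"

print(route_query({"user_query": "fix my bug", "is_coding_question": True}))    # coding
print(route_query({"user_query": "tell me a joke", "is_coding_question": False}))  # general
print(route_query({"user_query": "no flag set"}))  # general (missing key falls back)
```

Keeping branch logic out of the LLM like this makes routing free, instant, and reproducible.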

🔹 general_query() — Answers non-tech queries

def general_query(state: State) -> State:
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {"role": "system", "content": "Answer the following as a helpful assistant."},
            {"role": "user", "content": state["user_query"]},
        ],
    )
    print("General Answer:", response.choices[0].message.content)
    return state

🔹 coding_query() — Answers technical queries

def coding_query(state: State) -> State:
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[
            {"role": "system", "content": "Answer programming questions with examples."},
            {"role": "user", "content": state["user_query"]},
        ],
    )
    print("Coding Answer:", response.choices[0].message.content)
    return state

🔹 coding_validate_query() — Extra layer for validation

def coding_validate_query(state: State) -> State:
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[
            {"role": "system", "content": "Is this a valid coding question? Justify shortly."},
            {"role": "user", "content": state["user_query"]},
        ],
    )
    print("Validation:", response.choices[0].message.content)
    return state

🔀 Connecting the Brain (LangGraph Flow)

workflow = StateGraph(State)

# Add nodes. Note that route_query is NOT a node: it returns a label,
# not state, so it gets wired in via add_conditional_edges instead.
workflow.add_node("classify_message", classify_message)
workflow.add_node("general", general_query)
workflow.add_node("coding", coding_query)
workflow.add_node("coding_validate", coding_validate_query)

# Connect nodes
workflow.add_edge(START, "classify_message")
workflow.add_conditional_edges("classify_message", route_query, {
    "general": "general",
    "coding": "coding",
})
workflow.add_edge("general", END)
workflow.add_edge("coding", "coding_validate")
workflow.add_edge("coding_validate", END)

# Compile the graph
graph = workflow.compile()

🚀 Run the Agent!

if __name__ == "__main__":
    user_input = input("Ask me anything: ")
    graph.invoke({"user_query": user_input})

🤯 Behind the Magic: Why This Works So Well

  • classify_message(): the receptionist, first line of triage
  • route_query(): the decision-making brain, decides who answers
  • general_query(): the helpful human, answers non-tech questions
  • coding_query(): the developer friend, answers code queries
  • coding_validate_query(): the quality checker, ensures the code query is valid

🧠 Recap

  • StateGraph: structures your LLM agent like a flowchart
  • TypedDict: defines what gets passed between nodes
  • Pydantic: validates structured GPT responses
  • LangChain: makes prompting and model interaction smooth
  • LangGraph: visual and code-based reasoning agent framework

🛣️ What’s Next?

This is just the beginning.

In upcoming parts, we’ll:

  • Make state persistent using checkpointers
  • Add memory and conversation history
  • Integrate streaming outputs
  • Visualize this with graph renderers

🧡 Final Thoughts

Building agentic workflows doesn’t just mean stringing GPT calls together. It’s about mimicking how humans make decisions — structured, conditional, logical.

LangGraph gives you that power. OpenAI gives you the intelligence. Python gives you control.

So go ahead. Build your thinking AI agents. Design flows that work like brains.

Because the future of AI isn’t just smart — it’s structured.

Want a GitHub repo?
Just drop a comment, and I’ll ship it for you. 🚀

#LangGraph #OpenAI #LangChain #AgenticAI #LLMWorkflow #PythonAI #PromptEngineering #LLMArchitecture


Building an Agentic Workflow with OpenAI, LangGraph, and Python was originally published in Javarevisited on Medium.
