Building AI Personas: The Magic Behind Personality-Packed Chatbots

Have you ever wished you could chat with your favorite tech expert or business guru anytime? That’s exactly what we set out to achieve by creating AI versions of Hitesh Chaudhary (the beloved tech educator) and Piyush Garg (the insightful educator). Here’s the fascinating story of how we brought these digital personas to life!

This comprehensive guide explores the technical implementation of a sophisticated persona chatbot system, specifically focusing on creating digital personas of Hitesh Chaudhary and Piyush Garg using YouTube transcripts and advanced prompting techniques.

System Architecture Overview

Our persona chatbot system is built on a modular architecture that supports multiple AI providers and customizable personas. The core components include:

  1. Client Management Layer — Handles API connections to different AI providers
  2. Model Interface Layer — Manages prompt processing and response generation
  3. Persona Configuration — Stores and processes persona-specific prompts
  4. Application Layer — Provides the user interface and conversation management

The Three Key Components

  1. The Brain (AI Models): We used cutting-edge AI from OpenAI and Google Gemini
  2. The Personality (Prompts): Carefully crafted instructions that shape how the AI responds
  3. The Conversation Manager: Keeps track of discussions for natural back-and-forth

Step 1: Teaching the AI to Think Like a Human

We implemented a special technique called Chain-of-Thought (CoT) prompting that makes the AI:

  1. Pause and reflect before answering (just like humans do)
  2. Show its work so we understand how it reached conclusions
  3. Structure responses in a clear, logical way

Here’s how we built this thinking process:

# The AI's thought process looks like this internally
{
    "step": "think",
    "content": "This question is about teaching React concepts. I should break it down simply."
}
{
    "step": "result",
    "content": "React is like building with LEGO blocks..."
}

Step 2: Capturing Unique Personalities Through Data Collection and Processing

To create authentic personas of Hitesh Chaudhary and Piyush Garg, we begin by processing their YouTube transcripts.
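The article doesn't reproduce transcript_processor.py itself, but the fetching step might look something like this sketch. It assumes the third-party youtube-transcript-api package (`pip install youtube-transcript-api`); the sample segments below are invented for illustration:

```python
# Hypothetical sketch of transcript_processor.py's fetching step; the real
# file's contents aren't shown in the article.

def segments_to_text(segments):
    """Join raw transcript segments into one clean block of text."""
    return " ".join(seg["text"].strip() for seg in segments if seg["text"].strip())

def fetch_transcript(video_id):
    """Fetch a video's transcript (requires network and youtube-transcript-api)."""
    from youtube_transcript_api import YouTubeTranscriptApi
    return segments_to_text(YouTubeTranscriptApi.get_transcript(video_id))

if __name__ == "__main__":
    sample = [  # invented segments in the shape the library returns
        {"text": " Haanji, welcome back! ", "start": 0.0, "duration": 2.1},
        {"text": "Today we talk about React.", "start": 2.1, "duration": 3.0},
    ]
    print(segments_to_text(sample))  # → Haanji, welcome back! Today we talk about React.
```

The cleaned text from many videos can then be mined for recurring phrases and explanations when writing the persona prompts.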

Step 3: Persona Prompt Engineering

For Hitesh Chaudhary’s persona, we focus on:

  1. Technical Expertise: Emphasizing his software development and teaching experience
  2. Communication Style: Direct, enthusiastic, with practical examples
  3. Signature Phrases: Incorporating frequently used expressions

For Piyush Garg’s persona:

  • Approach: Strategic frameworks
  • Trademarks: Clear structures and action steps
  • Special sauce: Relates everything to business fundamentals
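The real prompt files aren't reproduced in this article, but combining these traits with the Chain-of-Thought format from Step 1, a persona prompt might open along these lines (a hypothetical excerpt, not the actual hitesh-prompt.txt):

```
You are an AI persona of Hitesh Chaudhary: a software developer and teacher
who explains concepts directly, with enthusiasm and practical examples.

Always respond with one JSON object per step:
{"step": "think", "content": "<your reasoning about the question>"}
{"step": "result", "content": "<your final answer, in persona>"}
```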

The Technology Powering the Magic

Backstage Components:

  1. Client Handlers — Our multilingual translators:
# Handles communication with different AI services
def get_openai_client():
    return OpenAI(api_key="your_key_here")  # Like getting a special phone

def get_gemini_client():
    genai.configure(api_key="your_key_here")  # Another type of phone
    return genai

  2. Persona Loader — The personality implant:

# Loads the specific persona instructions
def load_prompt():
    with open('prompts/hitesh-prompt.txt') as f:
        return f.read()  # Like giving the AI a script to follow

  3. Conversation Manager — The memory keeper:

messages = [
    {"role": "system", "content": "You are Hitesh..."},
    {"role": "user", "content": "How do I learn programming?"}
]

File Structure Deep Dive

Persona-Chatbot/
├── main.py # Application entry point and chat loop
├── models.py # Core AI model interactions
├── clients.py # API client configurations
├── prompts/ # Persona definitions
│ ├── hitesh-prompt.txt # Hitesh Chaudhary persona
│ └── piyush-prompt.txt # Piyush Garg persona
├── transcript_processor.py # YouTube transcript processing
└── persona.md # System documentation

clients.py

Purpose: Handles all communication with external AI providers (OpenAI and Google Gemini)


from openai import OpenAI
from google import generativeai as genai
import os
from dotenv import load_dotenv

load_dotenv()

def get_openai_client():
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise ValueError("OPENAI_API_KEY environment variable is not set")
    return OpenAI(api_key=api_key)



def get_gemini_client():
    api_key = os.getenv("GEMINI_API_KEY")
    if not api_key:
        raise ValueError("GEMINI_API_KEY environment variable is not set")
    genai.configure(api_key=api_key)
    return genai  # Return the configured genai module

Key Features:

  • Dual-Provider Support: Works with both OpenAI and Google’s Gemini
  • Secure Configuration: Uses environment variables to protect API keys
  • Error Handling: Validates credentials before establishing connections

Why It Matters:

This is like having a universal remote control that can operate different brands of AI services. Whether we want to use GPT-4 or Gemini Pro, clients.py handles the differences seamlessly.

models.py: The Brain of the Operation 🧠

Purpose: Manages AI model interactions and response generation

import os
from dotenv import load_dotenv
from clients import get_openai_client, get_gemini_client
import google.generativeai as genai

load_dotenv()

PROVIDER = os.getenv('PROVIDER', 'openai').lower()
OPENAI_MODEL = os.getenv('OPENAI_MODEL')
GEMINI_MODEL = os.getenv('GEMINI_MODEL')
PROMPT_FILE = os.getenv('PROMPT_FILE')

def load_prompt():
    """
    Loads the prompt content from the file path specified in the PROMPT_FILE
    environment variable. Supports .txt (plain text) and .py (a Python file
    with a variable named 'prompt' or ending with '_prompt').
    """
    if not PROMPT_FILE:
        raise Exception("PROMPT_FILE environment variable not set.")

    prompt_path = os.path.join(os.path.dirname(__file__), PROMPT_FILE)
    if not os.path.exists(prompt_path):
        raise Exception(f"Prompt file '{PROMPT_FILE}' not found at '{prompt_path}'.")
    with open(prompt_path, 'r', encoding='utf-8') as f:
        prompt = f.read()
    print(f"Loaded prompt: {prompt}")
    return prompt



def get_response(messages):
    provider = PROVIDER
    if provider == 'openai':
        client = get_openai_client()
        try:
            response = client.chat.completions.create(
                model=OPENAI_MODEL,
                messages=messages
            )
            return response.choices[0].message.content
        except Exception as e:
            raise RuntimeError(f"OpenAI Error: {e}")
    elif provider == 'gemini':
        genai_module = get_gemini_client()
        try:
            # Gemini takes a single prompt string, so flatten the chat history
            prompt = ""
            for msg in messages:
                prompt += f"{msg['role'].capitalize()}: {msg['content']}\n"
            model = genai_module.GenerativeModel(GEMINI_MODEL)  # e.g., 'gemini-pro'
            response = model.generate_content(prompt.strip())
            return response.text if response.text else "No response"
        except Exception as e:
            raise RuntimeError(f"Gemini Error: {e}")
    else:
        raise ValueError(f"Unsupported provider: {provider}")

Key Features:

  • Persona Loading: Reads the specific personality instructions from prompt files
  • Provider Agnostic: Works with either AI service through a unified interface
  • Error Handling: Provides clear error messages for troubleshooting

The Magic Happens Here:

This is where the AI gets its personality. By loading different prompt files, we can make the same underlying model behave like Hitesh Chaudhary, Piyush Garg, or any other persona we design.

main.py: The Conversation Conductor

Purpose: Manages the chat interface and conversation flow


import json
import os
from dotenv import load_dotenv
from models import load_prompt, get_response

# Load .env from current directory
dotenv_path = os.path.join(os.path.dirname(__file__), '.env')
load_dotenv(dotenv_path)

def main():
    system_prompt = load_prompt()

    messages = [
        {"role": "system", "content": system_prompt}
    ]

    print("Assistant is ready! Type your message below (type 'exit' to quit):")
    while True:
        user_input = input("You: ")
        if user_input.strip().lower() in ["exit", "quit", "bye", "see you later"]:
            print("Exiting chat. Goodbye!")
            break
        messages.append({"role": "user", "content": user_input})

        try:
            raw_response = get_response(messages)
            # Try to parse as JSON
            try:
                parsed_response = json.loads(raw_response)
                step = parsed_response.get("step")
                content = parsed_response.get("content")

                if step != "result":
                    # Intermediate steps ("think", etc.): show the reasoning
                    print(" 🧠:", content)
                    messages.append({"role": "assistant", "content": raw_response})
                    continue
                else:
                    print("🤖:", content)
                    messages.append({"role": "assistant", "content": raw_response})
            except json.JSONDecodeError:
                # If not JSON, just print the raw response
                print("🤖 (raw):", raw_response)
                messages.append({"role": "assistant", "content": raw_response})

        except Exception as e:
            print(f"Error: {str(e)}")
            break

if __name__ == "__main__":
    main()

Key Features:

  • Interactive Interface: Provides a clean command-line chat experience
  • Context Maintenance: Keeps track of the entire conversation history
  • Response Processing: Handles both structured (JSON) and raw text responses
  • User-Friendly: Includes clear instructions and exit commands

The User Experience:

This file creates the illusion of talking to a real person by:

  1. Maintaining conversation context
  2. Showing the AI’s “thinking process” when using Chain-of-Thought
  3. Providing natural ways to end the conversation

How They Work Together

  1. Initialization:
  • main.py loads the persona prompt
  • Creates the initial system message

  2. Conversation Flow:

User Input → main.py → models.py → client.py → AI Service
Response → client.py → models.py → main.py → User

  3. Context Maintenance:

  • Each exchange gets added to the message history
  • Subsequent responses consider the full conversation context
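As a minimal illustration of that growth (the message contents here are invented):

```python
# Illustrative only: how the shared history accumulates across turns.
messages = [{"role": "system", "content": "You are Hitesh..."}]

# Turn 1: append the user's question before calling get_response(messages)...
messages.append({"role": "user", "content": "How do I learn programming?"})
# ...then append the model's reply, so the next call sees the whole exchange.
messages.append({"role": "assistant", "content": "Start with one language..."})

print([m["role"] for m in messages])  # → ['system', 'user', 'assistant']
```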

Customization Options

  1. Switching Personas:
     Change the PROMPT_FILE in your .env to:

PROMPT_FILE=prompts/hitesh-prompt.txt # or piyush-prompt.txt

  2. Changing AI Providers:
     Modify the .env file:

PROVIDER=gemini # or 'openai'

  3. Adjusting Models:

OPENAI_MODEL=gpt-4-turbo
GEMINI_MODEL=gemini-1.5-pro
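Putting the options together, a full .env for this project might look like this (the variable names match the code above; the key values are placeholders, not real credentials):

```
PROVIDER=openai                        # or 'gemini'
OPENAI_MODEL=gpt-4-turbo
GEMINI_MODEL=gemini-1.5-pro
PROMPT_FILE=prompts/hitesh-prompt.txt  # or prompts/piyush-prompt.txt
OPENAI_API_KEY=your_openai_key_here    # never commit real keys
GEMINI_API_KEY=your_gemini_key_here
```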

Why This Architecture Works

  1. Separation of Concerns:
  • Client handling separate from business logic
  • Persona definitions separate from chat interface

  2. Extensibility:
  • Easy to add new AI providers
  • Simple to create new personas

  3. Maintainability:
  • Clear error handling at each layer
  • Environment-based configuration

This three-file structure creates a robust foundation for personality-driven chatbots that can scale from simple demonstrations to production-grade applications.

Final Thoughts

Creating these AI personas has been like teaching talented students to think and communicate like specific experts. The Chain-of-Thought approach makes the conversations feel remarkably human-like, while the careful persona crafting ensures each AI stays true to its real-world counterpart’s style.

Whether you’re looking to learn, brainstorm, or just have an interesting conversation, these personality-powered chatbots open up exciting new possibilities for human-AI interaction!

GITHUB LINK — PERSONA AI


Building AI Personas: The Magic Behind Personality-Packed Chatbots was originally published in Javarevisited on Medium, where people are continuing the conversation by highlighting and responding to this story.
