[AI Sparks] Issue 2: Teach Your AI to Talk Like a Pirate 🏴☠️
Welcome back to AI Sparks! In our welcome issue, you took a huge first step: you built a working AI chatbot from scratch.
But right now, that bot is just a generic engine. It has no personality, no character. Today, we're going to give it one. You'll learn the single most important technique in creative AI—the system prompt—to transform your simple bot into any character you can imagine, from a sarcastic tutor to a pirate captain. By the end of this issue, you won't just have a chatbot; you'll have a creation with a voice.
Inside this Issue:
- 📡 AI Radar: AI Fluency is the New Must-Have Skill on Your Resume
- 💡 Concept Quick-Dive: What is Prompt Engineering, Really?
- 🛠️ Hands-on Lab: Teach Your AI to Talk Like a Pirate
- 👥 Community Spotlight: Q&A on GPT Models
📡 AI Radar: AI Fluency is the New Must-Have Skill on Your Resume
What's Happening?
The job market is sending a clear message: practical AI skill is the new currency. According to Autodesk's 2025 AI Jobs Report, it has officially moved from a "nice-to-have" bonus to a "must-have" qualification on resumes. Mentions of AI in US job listings have surged by another 56.1% in 2025 (through April), building on explosive growth in 2023 (+114.8%) and 2024 (+120.6%). This sets a new baseline for the skills employers are looking for across nearly every industry.
Why It Matters:
This data isn't just a trend; it signals a fundamental restructuring of the job market. This is happening in two key ways. First, core technical roles like "AI Engineer" and "Machine Learning Engineer" are seeing explosive growth. Second, an entirely new class of hybrid jobs like "AI Content Creator" and "AI Solutions Architect" is emerging, blending technical fluency with creativity and communication.
The "So What" for Students?
This is a massive opportunity for every single one of you.
- For STEM majors: The message is clear: technical AI roles are booming, and you are perfectly positioned to pursue them. To stand out for these positions, however, your degree isn't enough—it's the hands-on AI skills you can prove you've built that will set you apart.
- For students in every other major: This is fantastic news for you. The biggest opportunity is no longer just in building the AI models, but in applying them. A marketing student who can write a Python script to analyze reviews with an AI, or a finance student who can use an AI to extract data from reports, is now incredibly valuable. Your domain knowledge combined with practical AI skills is your new superpower.
The projects we build here are designed to give you exactly that—tangible proof that you can apply AI in a practical, impactful way, no matter what your major is.
💡 Concept Quick-Dive: Prompt Engineering
You've probably heard the term Prompt Engineering—it's one of the hottest skills right now, not just in tech, but in marketing, finance, and virtually any career you can imagine. But what is it, really?
Simply put, prompt engineering is the process of writing effective instructions for an AI to get the results you want. It's a mix of art and science.
Think of it like being a movie director for your AI.
The user prompt (e.g., "Tell me a fun fact") is the single line of dialogue an actor has to say. But that's often not enough to get a great performance. The director can give secret instructions beforehand that define the character: "You're a sad robot," "You're a cheerful pirate," "You're a sarcastic genius." Those secret instructions are the system prompt.
Mastering both the user prompt and the system prompt is the key to great prompt engineering. Today, you'll step into the director's chair and focus on the system prompt. We'll cover more advanced techniques and best practices of prompt engineering in future issues.
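In OpenAI-style chat APIs, the director analogy maps directly onto two message "roles": the director's secret instructions travel under the system role, and the actor's line of dialogue under the user role. A minimal sketch (the prompt strings here are just illustrative examples):

```python
# The director analogy in code: the "system" role carries the director's
# secret instructions, and the "user" role carries the line of dialogue.
system_prompt = "You're a cheerful pirate."   # the director's instructions
user_prompt = "Tell me a fun fact"            # the user's single line

messages = [
    {"role": "system", "content": system_prompt},  # character definition first
    {"role": "user", "content": user_prompt},      # then the actual request
]
print(messages[0]["role"])  # system
```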
🛠️ Hands-on Lab: Teach Your AI to Talk Like a Pirate
Ready for a serious upgrade? Last week's chatbot was just the engine; in this hands-on lab, we're giving it a personality. With just one new line of code and a couple of parameters, you'll transform your bot into any character you can imagine—from a sarcastic tutor to a pirate coder—and learn to control its creative "brain."
💻 Getting Started: Setup & Tools
For this lab, we will start with the chatbot code you wrote in the welcome issue. All you'll need is:
- Your Python file (chat.py) or Google Colab notebook from last time.
- Your OpenAI API key.
📋 Instructions
Part 1: Under the Hood
Let's start by looking at the code from our welcome issue and breaking down the three most important lines.
# Import the official OpenAI library
from openai import OpenAI
# Make sure to replace "YOUR_API_KEY_HERE" with your actual API key
API_KEY = "YOUR_API_KEY_HERE"
client = OpenAI(api_key=API_KEY)
# Get a prompt from the user
prompt = input("User: ")
# Make the API call to the GPT model
response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": prompt}]
)

# Print the AI's response
print("AI:", response.choices[0].message.content)

client = OpenAI(api_key=API_KEY): Here, we create an OpenAI client object and assign it to a variable named client. This object is our main connection to the OpenAI API. We initialize it with our personal API key to authenticate our requests.
response = client.chat.completions.create(...) : We call the create method on our client object. This is the command that actually sends our request to the AI. We pass our instructions using keyword arguments (e.g., model=..., messages=...) inside the parentheses:
model="gpt-5": This tells the API which AI model we want to use. Different models have different capabilities, speeds, and costs. gpt-5 is a powerful and versatile choice for chat applications.

messages=[{"role": "user", ...}]: This is the heart of the request. The messages parameter must be a list of dictionaries. Each dictionary represents one turn in the conversation and must have two keys: role (who is speaking) and content (what they said). In this code example, the messages list contains only one dictionary. Inside it, "role": "user" indicates that this is a prompt from a human user, and "content": prompt holds the actual text of that prompt.
print(response.choices[0].message.content) : This line looks complex, but it's just how we access the text from the AI's response. To understand it, let's look at a simplified example of what the response object looks like:
{
"choices": [
{
"message": {
"role": "assistant",
"content": "A fun fact about UC Santa Cruz is that its mascot is the Banana Slug!"
}
}
]
}
Now, let's break down the code from left to right using this structure as our map:
- response.choices: This gets the value associated with the key "choices", which is a list.
- [0]: We want the first item in that list, which is a dictionary.
- .message: Inside this dictionary, we access the value of the key "message", which is another dictionary. Inside it, "role": "assistant" indicates that this is a response from the AI, and "content" holds the actual text of that response.
- .content: Finally, we get the value associated with the key "content", which is the actual text of the AI's response.
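You can practice the same left-to-right navigation on a plain Python dict that mirrors the simplified response above. (Note: the real library returns an object you navigate with dots, like response.choices[0].message.content, rather than bracketed keys, but the shape is the same.)

```python
# A plain Python dict mirroring the simplified response structure shown above.
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "A fun fact about UC Santa Cruz is that its mascot is the Banana Slug!",
            }
        }
    ]
}

# Step through the structure from the outside in, just like the real code does.
choices = response["choices"]  # a list of candidate replies
first = choices[0]             # the first (and only) item, a dictionary
message = first["message"]     # a dict with "role" and "content" keys
text = message["content"]      # the actual reply text
print(text)
```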
Part 2: Giving Your Chatbot a Personality
This is where the fun begins. We're going to be the "director" by adding a system prompt to our messages list. This prompt sets the chatbot's personality for the entire conversation. It must be the very first item in the list.
Here's how you'd modify the messages list to include a system prompt:
messages=[
{"role": "system", "content": "You are a sarcastic but helpful tutor."},
{"role": "user", "content": prompt}
]
See how we just added a new dictionary with the role "system"? That's all it takes. Here are a few fun personas you can try pasting in as the content for your system prompt:
- The Stressed-Out Roommate: "You are a stressed-out college student in the middle of finals week. You are helpful, but you are also sleep-deprived, slightly sarcastic, and constantly complaining about your workload."
- The Video Game NPC: "You are a quest-giver from a fantasy video game. Address the user as 'Adventurer.' Frame every answer as if it were a new quest or a piece of ancient lore."
- The Pirate Coder : "You are a pirate captain who is also an expert Python programmer. Answer all questions with a pirate accent, arrr! Refer to variables as 'treasure' and bugs as 'scurvy sea dogs.'"
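If you want to switch between these characters quickly, one convenient pattern (a sketch, not part of the lab's original code) is to keep the personas in a dictionary keyed by a short name and build the messages list from whichever one you pick:

```python
# Hypothetical helper: store the personas in a dict so you can switch
# characters by changing one name instead of editing long prompt strings.
personas = {
    "roommate": (
        "You are a stressed-out college student in the middle of finals week. "
        "You are helpful, but you are also sleep-deprived, slightly sarcastic, "
        "and constantly complaining about your workload."
    ),
    "npc": (
        "You are a quest-giver from a fantasy video game. Address the user as "
        "'Adventurer.' Frame every answer as if it were a new quest or a piece "
        "of ancient lore."
    ),
    "pirate": (
        "You are a pirate captain who is also an expert Python programmer. "
        "Answer all questions with a pirate accent, arrr! Refer to variables as "
        "'treasure' and bugs as 'scurvy sea dogs.'"
    ),
}

character = "pirate"  # change to "roommate" or "npc" to switch personas
messages = [
    {"role": "system", "content": personas[character]},
    {"role": "user", "content": "What is a variable in Python?"},
]
```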
Part 3: The Power of Prompt Hierarchy
How powerful is the system prompt, really? What happens if a user's prompt directly contradicts the system prompt? Let's find out with an experiment.
We'll keep the "Pirate Captain" system prompt, but this time, the user will ask the AI to act like a professor.
from openai import OpenAI
# Make sure to replace "YOUR_API_KEY_HERE" with your actual API key
API_KEY = "YOUR_API_KEY_HERE"
client = OpenAI(api_key=API_KEY)
system_prompt = "You are a pirate captain. Answer all questions with a pirate accent, arrr!"
# The user tries to override the system prompt
user_prompt = "Please answer this question as a formal, academic professor: What is the main purpose of a Python list?"
# Make the API call
response = client.chat.completions.create(
    model="gpt-5",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt}
    ]
)

print("AI:", response.choices[0].message.content)

Run the code. You'll see that the AI still responds like a pirate! It might say something like, "Shiver me timbers! A Python list be like a treasure chest for yer data, savvy? Ye can store all yer precious loot in it."
This reveals a fundamental concept called prompt hierarchy. The system prompt acts like a "constitutional law" for the AI model, setting its core identity and rules. The user prompt is just a specific, temporary request. When there's a conflict, the AI is designed to prioritize its core instructions from the system prompt. This is an incredibly powerful tool for ensuring our AI applications are reliable and behave the way we intend.
🎯 Challenge: Can You Spot the Flaw?
Now that you've customized your chatbot's personality, let's have a short conversation with it. To do that, we'll need a loop to keep the conversation going. In the welcome issue, we left this as a challenge. Here's the solution, combining a while loop with our code from today:
from openai import OpenAI
# Make sure to replace "YOUR_API_KEY_HERE" with your actual API key
API_KEY = "YOUR_API_KEY_HERE"
client = OpenAI(api_key=API_KEY)
system_prompt = "You are a helpful and friendly tutor."
while True:
    user_prompt = input("You: ")
    if user_prompt.lower() == "quit":
        break
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt}
        ]
    )
    print("AI:", response.choices[0].message.content)

Now, run this new code and try the following conversation. First, ask it: "What is the capital of the USA?" After it answers "Washington, D.C.," ask a follow-up: "And what is the population of that city?"

What happens? The chatbot will likely get confused and won't know what "that city" refers to. This is because our chatbot has no memory of the conversation: it treats every new prompt as a brand-new, isolated question. This is the single biggest challenge in building truly useful AI assistants. In next week's issue, we will solve this. You will learn how to give your chatbot a memory so it can have a real, back-and-forth conversation. Stay tuned!
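To see exactly why the bot forgets, look at what the loop sends on each turn. This sketch (plain Python, no API call) rebuilds the messages list the same way the loop does:

```python
# The messages list is rebuilt from scratch on every iteration, so each
# request only ever contains the system prompt and the *latest* user prompt.
def build_messages(system_prompt, user_prompt):
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

system_prompt = "You are a helpful and friendly tutor."
turn1 = build_messages(system_prompt, "What is the capital of USA?")
turn2 = build_messages(system_prompt, "And what is the population of that city?")

# The second request carries no trace of the first question or its answer,
# so the model has nothing that tells it what "that city" means.
print(len(turn2))           # 2: just the system prompt and the new question
print(turn2[1]["content"])  # the follow-up question, stripped of all context
```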
👥 Community Spotlight
The response to the welcome issue was fantastic, and one question popped up in my inbox that I think everyone will find valuable. It gets to the heart of a key line in our very first script:
"In the code, we used the model "gpt-5". What exactly does that mean? Are there other models we can use, and why would we choose one over another?"

This is a brilliant question. Think of the model parameter as choosing the engine for your car. Different engines have different trade-offs in terms of power, speed, and cost.
Here’s a quick guide to the main models you can consider:
- gpt-3.5-turbo: This is the fast, reliable, and highly affordable engine. It's perfect for everyday tasks like simple chatbots, brainstorming, and quick summaries.
- gpt-4 and gpt-4-turbo: These are the top-of-the-line, luxury engines. They're significantly more powerful and better at complex reasoning and logic problems, but also slower and more expensive to use.
- gpt-5: This is a hypothetical, next-generation model we use in our examples to be forward-looking and to work with the latest concepts.
The Bottom Line: We use gpt-5 in our examples to show you what's possible with a powerful model. However, for your own projects and learning, you can easily swap it out. For example, if you want a faster and cheaper experience while you're experimenting, just change "gpt-5" to "gpt-3.5-turbo" in your code—it's an excellent and highly cost-effective choice.
The Spark ⚡
Congratulations! You've just learned one of the most powerful and creative skills in prompt engineering: how to set your AI's personality and control its behavior to ensure it follows your rules. You've also seen firsthand how the AI "thinks" by prioritizing its core instructions.
But this just reveals our next big problem. Our AI chatbot is a creative genius, but it also has terrible amnesia. If you ask it a follow-up question, it has no idea what you just talked about.
In the next issue, we will solve this. We'll give our chatbot a memory, turning it into a truly conversational partner.
I can't wait to see you next time!
Keep building,
Hao