Pair Prompting: How AI Changed the Way I Mentor and Collaborate
As a senior SRE, mentoring is a big part of the job. You help people get unstuck, walk them through problems, and pair up on tricky code. Pair programming has been part of my workflow for as long as I can remember. Two people, one screen, talking through the logic together. It's how I learned, and it's how I've helped others learn.
But recently I noticed something. I haven't actually been pair programming. Not in the traditional sense. And I didn't even realize it until one specific moment made it impossible to ignore.
The Call
I got a call from a teammate. Something was wrong with our Terraform. It was blocking an upcoming release, and we needed to move fast.
Here's the context: this Terraform hadn't been run in at least eight months. The azurerm provider was outdated. Every resource was lumped together in a way that made it painful to work with. My teammate had been grinding through the update and was close to finishing, but he hit a wall and called me to figure out how to move forward.
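To make the shape of the problem concrete, here's a hypothetical sketch of the kind of drift we were facing. The provider source is the real hashicorp/azurerm address, but the version numbers, file layout, and resource names are illustrative, not the actual code:

```hcl
# Illustrative only: a root module pinned to a provider release
# several versions behind, with every resource in one flat main.tf.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0" # months stale; newer major versions had breaking changes
    }
  }
}

provider "azurerm" {
  features {}
}

# Dozens of unrelated resources jammed into the same file below this point...
resource "azurerm_resource_group" "rg" {
  name     = "app-rg"
  location = "eastus"
}
```

The cleanup amounted to bumping the version constraint, working through the breaking changes, and splitting resources into files by concern: mechanical but tedious work.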
In the old days, I would have pulled up the code, started reading through it line by line, and we would have worked through it together. Classic pair programming.
Instead, without even thinking about it, I said: "Open Copilot and ask it to clean this up."
The Moment It Clicked
We started iterating through prompts together. He'd type a prompt, Copilot would generate output, and we'd review it. Not quite right? Tweak the prompt, run it again. Refactor too aggressive? Ask it to be more conservative. Need it to handle a specific resource differently? Tell it exactly what we need.
We went back and forth like this for a while, and that's when it hit me.
We're not coding. We're prompting.
The work is the same. We're still solving the same problem, still collaborating, still making decisions together. But the mechanics of how we do it have completely changed. Instead of typing Terraform blocks by hand, we're describing what we want and guiding an AI to produce it. Instead of debugging syntax, we're refining instructions.
My mind was blown. Not because the technology is new. I've been using Copilot daily for a while now. But because I realized the collaboration model itself had shifted without me even noticing. Pair programming had become pair prompting.
The Result
Copilot cleaned up the Terraform. We got the provider updated, the resources properly organized, and the release was unblocked. What could have been a multi-day slog through eight months of technical debt turned into a focused session of iterating on prompts together.
That's the part that still gets me. This work used to take hours, sometimes days. Writing scripts, refactoring infrastructure code, untangling resources that someone jammed together months ago. Now it's done in a fraction of the time. The problem-solving is the same. The execution speed is on another level.
Looking Back
I'd be lying if I said I don't miss the old way sometimes. There was something satisfying about digging through Stack Overflow, reading blog posts, piecing together a solution from five different sources, and finally getting it to work. That exploration was part of the craft.
But we have to adjust to the new way of working. The engineers who resist AI aren't going to be more productive. They're going to be slower. And the ones who learn to use it well, who prompt effectively and treat AI as a collaborative tool, are going to operate at a completely different level.
That doesn't mean the old skills don't matter. You still need to understand what good Terraform looks like. You still need to know what the right architecture is. You still need to be able to read the output and say "no, that's wrong, try again." AI doesn't replace judgment. It just changes the interface.
How to Pair Prompt Effectively
Since that moment, I've been thinking a lot about what makes pair prompting work well. Here's what I've found:
Start With Context, Not Code
Before you start prompting, make sure both people understand the problem. Just like pair programming, if one person doesn't know what they're looking at, the session is going to be frustrating. Align on the goal first, then start prompting.
Take Turns Driving
One person types the prompts, the other reviews the output. Swap roles regularly. The person driving focuses on crafting clear prompts. The person reviewing focuses on whether the output is actually correct. Two sets of eyes on AI output catches mistakes that one person might miss.
Iterate, Don't Accept
The first output is rarely perfect. Treat it as a starting point. Refine the prompt, add constraints, ask for a different approach. The magic of pair prompting is that two people can iterate faster because you're bouncing ideas off each other about what to ask next.
Know When It Works Best
Pair prompting shines in specific situations:
- Messy codebases: When you're dealing with code that hasn't been touched in months and needs cleanup
- Migrations and upgrades: Provider updates, framework upgrades, dependency bumps
- Unfamiliar territory: When neither person is an expert in the specific tool or language
- Boilerplate-heavy work: Generating repetitive code that follows a pattern
For deeply creative or architectural decisions, you're still going to need human conversation. AI is great at execution. Humans are better at deciding what to execute.
Use It as a Mentoring Tool
This is the part I'm most excited about. Pair prompting is an incredible way to teach people. Instead of telling a junior engineer "here's how you write a Terraform module," you can sit with them and say "let's prompt Copilot to generate one, and then I'll walk you through why the output is good or where it's wrong."
The learning shifts from memorizing syntax to understanding concepts. They learn what good infrastructure looks like by evaluating AI output, not by copying examples from documentation. That's a deeper kind of understanding.
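As a concrete example of what that evaluation looks like, here's the kind of minimal module skeleton a prompt like "generate a Terraform module for an Azure storage account" might produce. The variable names and defaults are hypothetical; the teaching moment is in the review questions, not the code itself:

```hcl
# Hypothetical Copilot output. Reviewing it with a junior engineer, you'd ask:
# Are inputs typed and described? Are outputs exposed? Is anything hardcoded
# that should be a variable?

variable "name" {
  type        = string
  description = "Storage account name (3-24 lowercase alphanumeric characters)."
}

variable "resource_group_name" {
  type        = string
  description = "Resource group to deploy into."
}

variable "location" {
  type        = string
  description = "Azure region."
}

resource "azurerm_storage_account" "this" {
  name                     = var.name
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS" # a reviewer might flag this default for prod
}

output "id" {
  value = azurerm_storage_account.this.id
}
```

Walking through why the replication default is questionable, or why the outputs matter for composing modules, teaches more than dictating the syntax ever did.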
Don't Trust Blindly
This is the most important one. AI output needs to be reviewed. Always. Every time. Copilot doesn't know your environment, your constraints, your edge cases. It generates plausible code, not guaranteed-correct code. Pair prompting helps with this because two people reviewing output catch more issues than one.
The Bigger Picture
Pair programming didn't die. It evolved. The collaboration is still there. The mentoring is still there. The problem-solving is still there. The only thing that changed is the tool we're using to write the code.
I didn't plan to coin a term that day on the call with my teammate. But "pair prompting" is exactly what we were doing, and I think it's what a lot of engineers are already doing without realizing it. The sooner we name it and get intentional about it, the better we'll be at it.
The craft of software engineering isn't going away. It's just changing shape. And honestly, I'm here for it.