I Finally Subscribed to Claude, And It Immediately Made Me Build a GTA Clone in Go
I've been subscribed to Gemini AI Pro and ChatGPT Plus for a while now. Both are good. I use them daily. But if you spend any amount of time in developer circles (Reddit threads, YouTube videos, Discord servers, random comments under a Stack Overflow answer), you will eventually hit a wall of the same opinion: if you're writing code, Claude is on another level.
I held out for a while. Two AI subscriptions already felt like enough. But the consensus was relentless. So I finally did it. I subscribed to Claude Pro and immediately decided I wasn't going to ease into it with something simple.
The Reputation Was Hard to Ignore
The thing about Claude's reputation for code is that it doesn't come from marketing. It comes from developers. The people who have actually thrown hard problems at it, compared outputs side by side, and walked away with an opinion.
I'd seen enough comparisons. Code reviews where Claude caught things the others missed. Refactors that actually made sense. Explanations that read like they came from a senior engineer who genuinely understood the problem, not just the syntax. The pattern kept showing up: when the task was code, Claude was the name that came up first.
That kind of word-of-mouth carries a lot more weight than a benchmark graphic on a landing page. So when I finally decided to try it, I wasn't going in with low expectations. I wanted to see if it lived up to the hype.
Skipping the Small Stuff
I didn't start with "write me a function" or "explain this error." If I was going to form a real opinion, I needed to throw something substantial at it right away. Something that required understanding game logic, Go's standard library, rendering concepts, and the ability to structure a non-trivial project from scratch.
The project I had in mind: a remake of the original Grand Theft Auto, the top-down version.
If you weren't gaming in the late 90s, the original GTA wasn't the open-world third-person game the franchise became famous for. It was a top-down, bird's-eye view game where you drove cars through city streets, took on missions, evaded cops, and caused mayhem from a camera angle looking straight down at the world. It was chaotic, addictive, and genuinely ahead of its time. It also has a surprisingly elegant set of mechanics when you break it down: a grid-based city, a player character that can walk or drive, basic collision detection, a wanted level system.
That felt like a solid test. Not a toy problem, not a tutorial project, but something with real moving parts.
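Broken down that way, the core state of the game is surprisingly small. Here's a rough sketch of how I'd model the grid and wanted level in Go; the type and field names are mine, not from the original game or from Claude's output:

```go
// Tile is one cell of the top-down city grid.
type Tile int

const (
	TileRoad Tile = iota
	TileBuilding
)

// World holds the city grid plus the player's wanted level.
type World struct {
	Tiles  [][]Tile // indexed as Tiles[y][x]
	Wanted int      // 0–5, raised as the player causes mayhem
}

// Solid reports whether the tile at (x, y) blocks movement.
// Anything outside the grid is treated as solid.
func (w *World) Solid(x, y int) bool {
	if y < 0 || y >= len(w.Tiles) || x < 0 || x >= len(w.Tiles[y]) {
		return true
	}
	return w.Tiles[y][x] == TileBuilding
}
```

A grid of enums plus a handful of player fields is genuinely most of the original game's world state, which is what makes it such a tractable test project.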
What I Asked Claude to Build
I gave Claude the prompt: build me a Go project that recreates the original GTA with the top-down view. I wanted the game to actually run, have a city grid you could drive through, a player you could move around, and the basic feel of that original top-down perspective.
The model I was using was Claude Sonnet 4.5, the latest at the time of writing this.
What it came back with was a structured Go project using the Ebiten game library, a solid choice for 2D games in Go. It generated the project layout, the main game loop, a tile-based city grid, a player struct with movement and rotation, basic car handling, and a simple camera system that kept the player centered on screen as they moved through the world.
```go
type Player struct {
	X, Y  float64
	Angle float64 // heading in radians
	Speed float64
	InCar bool
}

func (p *Player) Update() {
	// W moves forward along the current heading.
	if ebiten.IsKeyPressed(ebiten.KeyW) {
		p.X += math.Cos(p.Angle) * p.Speed
		p.Y += math.Sin(p.Angle) * p.Speed
	}
	// A and D rotate the heading left and right.
	if ebiten.IsKeyPressed(ebiten.KeyA) {
		p.Angle -= 0.05
	}
	if ebiten.IsKeyPressed(ebiten.KeyD) {
		p.Angle += 0.05
	}
}
```
It wasn't a finished AAA game. But it was a running, structured Go project with real game logic, and it looked like something a developer had actually put thought into, not a pile of spaghetti that technically compiled.
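For what it's worth, a player-centered camera in a 2D game like this usually reduces to a single translation: shift the world by the negative of the player's position, then push the origin to the middle of the screen. A minimal sketch (the function name and signature are mine, not what Claude generated):

```go
// CameraOffset returns the translation to apply when drawing the world
// so the player stays centered. screenW and screenH are in pixels.
func CameraOffset(playerX, playerY, screenW, screenH float64) (ox, oy float64) {
	// Move the world opposite to the player, then re-center on screen.
	return screenW/2 - playerX, screenH/2 - playerY
}
```

In an Ebiten Draw method, you'd typically feed this into `op.GeoM.Translate(ox, oy)` before drawing each tile and sprite.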
Two Corrections. That's It.
Here's the part that actually impressed me.
I ran the project and hit two issues. The first was a rendering bug where the camera offset wasn't being applied correctly, which caused the player to drift toward the edge of the screen instead of staying centered. The second was a tile collision issue where the player could pass through certain city blocks that should have been solid.
I went back to Claude with both. Described what was happening, pasted the relevant functions, and asked it to fix them.
Both corrections were clean. Not "here's a hacky workaround" fixes, but actual corrections that addressed the root cause. The camera fix adjusted how the draw offset was calculated relative to the player's world position. The collision fix corrected a boundary check that had an off-by-one error in how it was reading the tile map.
After those two changes, the project ran cleanly. A top-down city grid, a player character that moved and rotated correctly, collision with buildings, the basic feel of navigating a city from above.
Two corrections. I've burned more back-and-forth than that debugging a single function with other models.
Comparing It to What I've Used Before
I want to be fair here. ChatGPT and Gemini are both capable tools and I still use them. But there's a pattern I've noticed when using them for larger code tasks.
ChatGPT tends to be confident in ways that bite you. It'll generate code that looks right, compiles, and then fails in subtle ways that take time to untangle. You end up spending rounds of conversation fixing things that shouldn't have been wrong in the first place.
Gemini has gotten better with code over time, but it sometimes feels like it's optimizing for an answer that looks good rather than one that runs correctly. The structure is there, the comments explain things clearly, and then you run it and find that the logic doesn't quite hold up under real conditions.
With Claude, the code felt more considered. Like it was thinking through the problem rather than pattern-matching to something that resembled a solution. The corrections it made were precise. It didn't rewrite half the file to fix a camera offset. It found the issue, fixed the issue, and left everything else alone.
That's not a small thing. That's the difference between an AI that understands your code and one that's approximating it.
Is It Worth the Extra Subscription?
Honest answer: yes, if you're writing code regularly.
The Go GTA project was one session. One test. But it was the kind of test that tells you a lot. Complex enough to expose limitations, concrete enough to evaluate the output objectively. Two corrections to go from prompt to working project is a good result no matter what tool you're using.
I'm not canceling my other subscriptions. Each tool has its strengths, and there are tasks where ChatGPT or Gemini is genuinely the right tool: creative writing, research, summarization, working with Google's ecosystem. But for coding, Claude has earned its spot in the rotation. I might even argue it's moved to the top.
What I'm Building Next
The Go GTA project is sitting on my machine as a foundation. The city grid is there, the player movement is there, the collision system is there. The obvious next step is to add vehicles you can enter and exit, a basic wanted level system, and maybe a few mission triggers to give the world some structure.
I'm also curious to push Claude on bigger Go projects. More complex architecture, more moving pieces, closer to production-quality code. If it handles those the same way it handled the game (well-structured output, fast and precise corrections), it's going to change how I approach side projects.
That's the thing about actually testing a tool instead of just reading about it. You stop debating the hype and start finding out what's real. For me, the verdict after day one was clear enough: Claude earns the subscription.