foo for thought

Imagine the worst fast food chain you can think of. This ultimately comes down to personal preference, but pick the one you don’t care for one bit. Now imagine going to a maximum-efficiency version of that restaurant. You now get the food you don’t like, just much faster. That’s what Google’s new Antigravity is, except instead of food you don’t like, it produces code you don’t like.

Gemini 3 Pro, and Antigravity for that matter, are both impressive feats of technology. Just like it’s impressive how my dog correctly responds to “Left” when I put my hand directly in front of her left paw. But just like my dog, any slight deviation from expectation results in a completely butchered output. If my hand is not right next to her left paw, saying “Left” prompts her to badger me with her right paw, or sometimes even both. Similarly, any runtime or compilation error throws Antigravity into the same loops that Cursor, GitHub Copilot, Claude Code, and every other AI editor that supposedly “kills manual coding” gets stuck in.

It seems like Google put a lot of work into two things. The first is streamlining the developer + agent co-op experience. Similar to Cursor, Antigravity has a dedicated Agent portal where you can manage all of the tasks you’ve delegated to the AI. The second is the prompt engineering around how that code actually gets injected. With Antigravity, you do get more of a “magical” feeling, like you’re quite literally telling your computer to do something and it bounces away on a trail of rainbows and green builds.

Except, that’s not actually what happens. In reality, we end up with a faster version of Cursor. The same loops, the same roadblocks are violently shoved in our faces as we try to “vibe code”, or, as the cool kids in FAANG management tell us, witness “the death of coding”. A faster version of the same AI IDE that doesn’t remember to add “use client” in Server Components, that doesn’t know which libraries we’re dealing with, or for some reason still can’t process that we’re in the year 2025? I asked Gemini 3 Pro to code a Gemini 3 Pro model implementation, and it spent quite a lot of tokens deciding that, “I don’t know of this strange model you speak of, so we’ll just pretend it exists and stub it”. As if I’m not directly talking to it.

Anyway, all of this to say, I’m not that impressed. Will I be using Antigravity instead of Cursor for everything from now on? Probably, but that’s because it’s better at what it does. Gemini 3 Pro is also pretty impressive – for the first time, I can say I would actually trust Gemini with my code (which is saying something, because despite all of the glaze Gemini 2.5 got when it came out, I was pretty adamantly against its coding capabilities).

However, I don’t think it marks anything “new” in software engineering. It’s the best of what we’ve got, but somehow, I can’t help but think we’re converging towards an invisible asymptote here. Similar to how the iPhone literally hasn’t changed in like 10 years, I don’t think any of this AI stuff, in its current LLM form, is going to change for a while. I just think it’s going to get “slimmer”, “faster”, and “shinier” as big tech finds new clever ways to jam more efficiency into LLMs.

Until then, “vibe coding” is still, fortunately for my career as a software engineer, non-existent. But the obvious “upside” of AI is still extremely prevalent. It just doesn’t put me out of a job yet. And, as much as everyone around me says it will, I don’t know. I think it’s going to take more than terrible incremental iterations (cough cough, iPhone, cough) to cause a massive paradigm shift into the “AI age” or whatever these people call it. It’s going to take a change in computing and in application, not just nudging stock prices slightly up!

Until next time,
Jared