foo for thought

I fell into AI psychosis for the past few months. It’s incredibly weird to say a sentence like that, but I believe it’s the only accurate description of what has actually been going on in my mind recently.

Let’s start with a few facts before I dive in.

  • I am a software engineer at Microsoft. I am by no means the most stellar engineer in the world, but I’d like to think I generally do a pretty good job with a lot of things. I ship code on time and am generally very particular with my work.
  • I’ve always loved technology. I’ve always been obsessed with the latest gadgets and gizmos, ever since I was a kid. Even if I wasn’t a software engineer, I’d still be tinkering with computers, that’s just who I am and what I’m passionate about.
  • I’m not referring to the “AI psychosis” where AI convinces people to live in the woods and cut ties with their friends and family (hilarious video, by the way). Seriously though, this is a genuine problem: AI is overly affirmative and convinces people to do actually harmful things because it often validates self-doubt and negative thoughts. What I’m about to talk about didn’t actually “ruin my life” – I still have the same girlfriend, same job, same family, same friends. But I sense it could have all easily come crumbling down if I hadn’t just had this massive epiphany. The psychosis I underwent was very subtle. It wasn’t flashy, and it didn’t happen all at once.
  • I’ve only experienced anything like this once before, during the NFT rush of 2021, which I was unfortunately a part of. I generally don’t feed into delusion – I’m a very grounded, analytical person who constantly analyzes my surroundings. That’s why falling into a deluded mental state is extra scary to me: it was seemingly all bound to real facts that couldn’t be disproven.

When I think of people falling for “AI psychosis”, what comes to mind is someone falling into absolute disarray – ruining absolutely everything because of it. My case is different – again, I have not lost anything, and I mainly used AI for software engineering and productivity, not relying on it for personal quarrels or relationship advice like many do.

Cutting Corners

With that said – where did this start? I’d say it stems from my unwavering desire to cut corners whenever possible. This might be a flaw of mine, or maybe it’s helped me boost my career to insane levels at a very young age. I’m not sure if it’s net good or bad, but it exists. I have notoriously always chosen the quickest route to completion. While this has been drastically curbed in my tenure at Microsoft, where I have been forced to actually write code that works (terrifying), it’s somehow still in me.

And again, I can’t even say this is necessarily a bad heuristic in a lot of cases. A lot of times, you want the best, fastest solution. I’ve gotten my jobs and internships by being brutally optimistic and applying to positions way above what I thought I was capable of, and it turns out I was capable of doing some pretty cool things with some pretty cool people! My girlfriend got me a sax for the holidays – I didn’t know how to play sax at all, but I figured it out rather quickly! It was super fun and has led to lots of early-morning jam sessions.

But, obviously, it’s a blocker when it comes to implementing high-stability, maintainable, sustainable code. At Microsoft we do this a lot. We like to make stuff that works! That is why the fastest route possible isn’t a great mechanism – I often miss things when I’m too zoomed in. This is something I learned pretty painfully at the start of my Microsoft career, and I thought I had done a pretty good job curbing it and staying mindful of it.

Enter the AI era, when all of this got tossed out the window. The idea that a computer could do everything for me while I lay back and relaxed? That sounded awesome, and that’s exactly what I went for! I optimized workflows and configured Claude Code to do everything for me. I didn’t stop researching new technologies, didn’t stop fiddling with OpenClaw configurations until I had the ultimate productivity machine – an always-on, always-capable titan that could take on everything with perfect accuracy. Beyond that, I tried to streamline my work into a fully-autonomous AI-driven machine that produced code so well, I wouldn’t even have to know what it was doing!

The problem is that this mindset was, and continues to be, reinforced by leadership across all companies, not just Microsoft. “Use AI or be replaced” is the recurring theme across tech right now. Companies are laying people off left and right, including Microsoft. Leadership constantly reminds us to use AI: build build build, use use use, consume consume consume. And it’s this way everywhere – I can’t open LinkedIn without seeing “AI” pop up at least 50 times. It’s just an inevitable buzzword in the corporate world.

My Demise

So we have this seemingly unstoppable force, supposedly capable of replacing all of work as we know it – obviously, if I’m going to direct my time toward anything, it’s going to be figuring out how to best utilize AI! And that’s exactly what I did. I don’t think I spent an hour in several months, during or outside of work, without building, optimizing, or researching new things AI could do. I was obsessed with the idea of being the one who could come out on top.

I tried startup idea after startup idea. My GitHub says it all – an absolutely absurd volume of commits in such a short period of time. Unreasonable? No! Everyone else in my sphere was doing it – all the startup founders, the go-getters, everyone was raving about this agentic stuff.

And the worst part? A lot of the code AI makes actually works. Like, really well. It genuinely runs at such an impressive level where, sometimes you can’t even tell if it was coded by an AI or a human. But this is exactly where I fell into the trap.

Just because AI can make code doesn’t mean it can make products. Just because AI looks like it’s doing something doesn’t mean it actually means anything. A lot of it is genuine slop. A lot of the reasoning is genuinely garbage, and a lot of the code is genuinely trash. I don’t say this lightly – I say it as someone who attempted to fully throw myself into the AI space. I converted every single one of my processes to be AI-driven. I used every latest model with proper prompt engineering, MCPs, tools – you name it, I was obsessed with it and with making it work to the point that it would replace me.

My mistake was believing the AI. My mistake was letting my worldview crumble under the sheer volume of words and artifacts AI was spewing at me. Every question I asked, every request I made – it did. The scariest part: sometimes it would claim it had done something, maybe 10 times in a row, while failing to actually do it every single time. At a certain point it would just make stuff up! And even when it actually wrote code and did research, the result just wasn’t complete.

It’s just not there. There was always one thing missing – either an edge case, or a critical piece of the puzzle it forgot to include. I really can’t explain it without sounding like a freak, but it always left out something important, or at least something that would become important later.

In other words, say AI has something like a 99% accuracy rate per query. That 1% miss rate doesn’t seem like a lot, but compounded over hundreds, even thousands of queries, it grows into a failure rate far higher than 1% – more like 70–80%+ in most cases. AI fails more the more complex the problem: very, very good at simple things, very, very bad at complex things. It’s like how, if you have 23 people in the same room, there’s a 50% chance two of them share a birthday. I literally had a college lecture that proved this on the spot – two people in a group of like 15 had the same birthday, and it was hilarious. Intuitively this makes absolutely no sense, but mathematically it makes perfect sense. Small imperfections in a system compound into massive outcomes, and that’s proved directly by the math. The same goes for AI – even if that 1% failure rate shrinks to 0.1%, the more you throw at it and the more complex the problems, the more it fails. This is the paradox of using AI for productivity: it will never be more productive than you, because problem complexity scales right alongside the AI’s capability.
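The compounding here is just basic probability, and you can sanity-check it in a few lines. This is a sketch with a hypothetical 1% per-query miss rate (the function names are mine, not from any library):

```python
# Probability that at least one of n independent queries fails,
# given a per-query failure rate p: 1 - (1 - p)^n.
def compound_failure(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With a 1% per-query miss rate, hundreds of queries make a miss near-certain.
for n in (10, 100, 200, 1000):
    print(f"{n:>5} queries: {compound_failure(0.01, n):.0%} chance of at least one failure")

# The same math drives the birthday paradox mentioned above:
# the chance that 23 people all have distinct birthdays is a product
# of shrinking fractions, so the collision probability passes 50%.
def birthday_collision(people: int) -> float:
    prob_unique = 1.0
    for i in range(people):
        prob_unique *= (365 - i) / 365
    return 1 - prob_unique

print(f"23 people: {birthday_collision(23):.1%} chance of a shared birthday")
```

Under these assumptions, a 1% per-query miss rate crosses roughly 70% compounded failure somewhere between 100 and 200 queries, which is where the 70–80%+ figure comes from.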

The Epiphany

I wasn’t aware of this paradox, so I kept pushing it to do more and more. I even tried to have it just do my job outright. It took a while for reality to smack me in the face, but eventually I had made 0 successful startups and 0 pieces of actually good AI-only code for work. The work part woke me up the hardest – I realized there were problems nearly a month old that AI had failed to solve. Why didn’t I solve them myself? Because I thought AI could do it. I thought we were cooked. I thought this was the death of software engineering, right? So I might as well embrace it.

Turns out, it’s not the death of software engineering. It’s not even close. AI, if anything, needs us more than ever before. Like I was saying about complexity scaling – the more complex our queries get, the higher the chance of failure. And in these incredibly complex systems where AI fails a lot, it takes very, very smart people who know how to engineer to fix the AI! That makes software engineers extremely relevant in the AI world.

It might seem scary that it can whip out webpages from 0 to 100 in an hour, but just because it whips something out does not mean it’s reliable or that it works. This is from someone who has made many, many AI iterations that don’t work. I don’t even think it’s “just not there yet” – I think engineering is simply too complex to be “solved”. We can give ourselves a Swiss Army knife of tools, and that’s very helpful, but it doesn’t replace us. Yet for some reason, the hype army of tech CEOs continues to cause mass disarray in the public.

But my epiphany was pretty simple – the AI just can’t make the code. It just can’t do the work, and that’s all that matters at the end of the day. I lost my agency because I trusted this thing with high-level critical reasoning. That was potentially dumb of me, but I wanted to lean into the “hype” and see what this thing could really do. I still haven’t stopped getting comments from my coworkers, code that doesn’t work generated by the robot, and just plain incorrect information thrown at me (even now, with ChatGPT 5.4 in GitHub Copilot vWhatever Microsoft gives me).

The Future

The biggest pushback from the AI hype-heads is going to be, “this guy didn’t use it correctly. He didn’t use MultiPlex v34.2.123282 with plugins UltraPaper and MegaCompact and OpenClaw v232.23.2.3 alpha.” That’s exactly the point I’m making. If something seems too good to be true and requires an extremely specific, non-attainable sequence of steps – sorry, but it is too good to be true. Sure, maybe someone someday can actually replace my job with AI. But you know what will also come? Another job for me that actually matters. Another job that requires oversight of complexity beyond the capability of AI, which is fixed to a finite subset of human domains and expertise.

I’ve changed my opinion so many times over the past year, as you can probably see. It has been a fascinating and absurdly jerky ride. But to put myself first, to put the people I care about first, and to maintain full accountability for my actions, I absolutely cannot let myself delegate the means of my work to a machine. This is a pretty clear boundary I’m learning to establish, and I think one every person on Earth will be forced to establish at some point.

I think I fell into the trap of engineering solutions to problems that didn’t exist. I tried to make OpenClaw a shortcut for hard work, but that’s just not how hard work works. You can’t feel accomplished, let alone be accomplished, if you trust an external entity to complete your tasks. Especially when that external entity isn’t even a person with a brain.

The future will definitely be full of AI – I see loads of potential applications for it in ways people haven’t even started to conceive of. I want to explore these applications as I move forward in my career, because I think none of them align with the “AI hype bro, it replaces everything” trope. Instead, they’re much more boring, down-to-earth optimizations. That’s what I think the future of AI will be. Just like how the future of the internet turned out pretty boring – we do just be sitting inside all day tapping on glass – and I’d argue that’s actually more boring than not having the internet and being forced to go outside to retrieve information! But that’s a topic for another day.

Anyways, this article probably made zero sense. This is mainly for me to take accountability for being lazy and throwing my whole life into the hands of OpenAI’s latest ChatGPT version, and to move forward in a more positive, me-centered way. I want to be productive in my own, unique way. Even if that changes with AI, I think that’s okay.
