foo for thought

This is me, Jared again – promise, no more AI slop (sorry claw). This is something I’ve had to explain a lot to people recently, and a revelation I only came to very recently. Everyone’s freaking out about AI replacing white-collar work. Obviously I’ve had a very altruistic take on all this – “it can’t recreate the fiery human spirit,” that sort of thing. I’ll actually go as far as to say I don’t even think it can get white-collar work done, period. It’s incapable of executive function at its core, something that’s integral to white-collar work: the ability to not only make sense of large amounts of data, but derive tangible direction from it.

Executive function

What’s my experience in the executive function division? Actually, negative. I have ADHD, which severely impacts my executive function. Think of it as constant white noise blasting in your brain at all times. You can still technically do things with it – it’s just probably a bit more difficult than if you didn’t have that white noise blasting all the time. I struggle to do things day to day not because I’m lazy, but because I’m neurologically wired to not care about anything – except the things I really, really like doing, because those are the only ones I can concentrate on.

ADHD is obviously a huge problem for white-collar work. I’ve always struggled to see tasks through to the end and to do all the boring “plumbing” work of software engineering – maintenance, testing, metrics – all of which are like watching paint dry to me. But they are all undeniably required to produce quality software, especially software at Microsoft, so I do have to work around it. I am fortunate that coding is something I find a lot of joy in – if not for that, I’d probably be a flop.

Recently, I’ve been persistent in trying to get AI to do my work for me – more specifically OpenClaw, the always-on “oracle” that can “do anything”. Really, it’s just an always-on version of the latest AI models – Claude Sonnet 4.6 was my driver for a while, now it’s Codex 5.3. And what I’ve come to learn is extremely fascinating – there’s a distinct correlation between the executive function the human performs and the quality of the AI’s output. This is a truth I really, really did not want to admit. But it’s true – the projects I spent the most time and effort on myself were the ones where the AI produced miles better results. Whenever I wanted to offload anything that wasn’t “do this exactly this way,” it produced very poor output.

I’ve tried everything. Getting AI to improve code for me, getting AI to research for me, getting AI to market for me. Any and every possible way I could get myself out of the picture – I tried. What I found was this – the AI is only as good as you are. If you don’t know how to actually follow through a task from start to finish, the AI is just as clueless.

AI can’t do things

The problem is actually how we perceive the AI – the problem is us! Whenever someone tells us “Hey, I did this” – what do we do as humans? We… trust them, or at least try to. When people say they did something, we implicitly trust that they carried some level of autonomous thought to ensure that the thing is done. AI, believe it or not, does not think for itself. It cannot tell whether something is done with high quality unless it explicitly checks. If you prompt it to write an extensive test suite covering cases ABCDEFG? Then yes, it will execute perfectly on that. But if you don’t? Then, who knows!

It’s like taking one of those genius kids who graduate college at 12 and sticking them in a CEO position. No doubt they’d have the technical expertise to theoretically figure out a lot of stuff. But, no offense to the kid – I doubt they’d be able to get the job done, no matter how smart they are. There’s a difference, as I’m sure you’re all aware, between hard skills and soft skills. Technical expertise cannot make up for the lack of actually doing the thing. What makes an excellent employee is, at the end of the day, actually doing the thing!

All of this is to say that – it doesn’t matter how much the genius kid knows, you’d probably rather entrust your company to someone who’s been actually doing this for years. Someone who has the ability to do, not just know.

This is precisely the problem I see with AI. It cannot “do”. It can know every language, every conceivable subtopic on everything, and yet still produce useless work. It’s fascinating, and to be honest, it’s pretty alarming that technology has come far enough that we can start thinking about this stuff.

A fascinating case study is the home robot thing from a while back. You could “rent” these guys to do your chores for you. Sounds too good to be true, right? That’s because it is. They can’t actually do anything besides, like, open a door. The rest of it comes from a human remote-controlling the robot through a VR headset! You book an appointment, and a person remote-controls your robot around your house to do the task. Because the AI isn’t there yet. Could be someday, but not today.

The acts of “knowing” and “doing” are two entirely different things. AI can do surface level tasks insanely well, but it can’t execute on critical thinking.

The future of work

Now obviously, everyone’s freaking out that AI could take jobs – and, no doubt, it will. There are objectively some white-collar jobs requiring very minimal executive function that “even a robot could do”. Some examples might be customer service reps, cashiers at grocery stores, secretaries, intro software engineers – not diminishing the work these people do, but they’re all pretty much within the scope of what AI can do now. Which is definitely scary.

But it’s not over for the rest of us. In fact, I’d argue it’s not even close. Until I can fall asleep and AI can maintain an end-to-end project for me with zero intervention, I’m not buying it. It’s like self-driving cars – yeah, it’s most of the way there, but you should still probably be in the car. We’re just not at the point where people trust the computer to drive the car for them, because life is complex and things get thrown at you sometimes (occasionally, literally).

You should start using AI. Try to understand what it can and can’t do. I promise – it looks impressive, but say “ChatGPT, replace my office job right now” and watch it absolutely not do that. The technology might come someday to sweep up more complex critical-thinking jobs in the white-collar sector, and it’s especially scary for me as someone whose entire job of “write code” is now becoming “watch robot write code for you”. It doesn’t feel like I’m really valued at all.

But the truth is, it’s just a new way to do things. The internet didn’t wipe out every job and make all of us lazy; it just made new jobs and rendered some others obsolete. The printing press didn’t kill anything apart from the people who rode on horses and yelled the news, or whatever they used to do. Sure, some things might die, but much more will be born from AI than killed. For now though, people will panic, stocks will tank, the usual. People are scared, they don’t like their thing changing – it’s me, hi, I’m people.

So if you’re skeptical about AI, keep being skeptical. Use it every now and then, though. Push the limits of what it can and can’t do. It’s scary watching it do some things with unparalleled efficiency, but you cannot forget who’s behind the screen prompting it. Without you, it’s literally an empty web page with a prompt input field.
