Foo for thought

Two heads are better than one. That’s how the saying goes, I think. History is filled with collaboration toward shared goals: many people working on different tasks, all to accomplish the same thing. Arguably, a lot of the methods used to get people working on the same thing were pretty messed up. But even in modern times, in companies, we all work together toward a common goal. Whether it’s building a product or providing a service, we each play our role individually to fulfill a purpose bigger than ourselves.

What if all that collaboration could now be performed by just you? You don’t even have to involve another person anymore. You can do everything yourself. Not only that, you can parallelize the work so that you do multiple things at the same time. This has been romanticized in science fiction novels and films, where you could clone yourself and theoretically do multiple things at once with your own level of cognition.

It’s really weird to be saying this, but I think the day of cloning yourself is upon us. Not only that, it actually works.

The concept of “AI coworkers”

Anthropic just released their Claude co-worker tool, which everyone jokes is the death of white-collar jobs. I thought this too, so I gave it a test run while making a new startup yesterday. The truth is it’s still in beta or preview, and it doesn’t fully deliver on the promise yet. It can really only navigate websites, just like ChatGPT Atlas can. It doesn’t yet have a fundamental understanding of how to use tools to get things done.

But the very concept of this made me think. This fictitious world that leaders keep throwing around with agentic AI experiences, where agents basically do everything for us: it’s not only going to happen, it will happen to everybody. That’s not to say that we don’t need people. We still do. We’re the only ones with senses, with brains, and with consciousness (that we know of). We can think of this agentic hive mind as an extension of us. It’s not that AI agents will take us over, so much as we will learn to harness them, like many other tools and technologies throughout history.

I saw this one tweet comparing Claude Opus to the printing press or the photo camera, and I kind of laughed because that sounds ridiculous. But when you think about it, these are technologies that fundamentally changed the art of doing something. Photographs changed how we capture moments instead of painting or drawing. We can now click a button and immediately capture a moment with high fidelity and accuracy. But even though we don’t have to paint portraits of people anymore, not only do we still paint them, there is still an art in photography. Not everyone with a camera is a good photographer; in fact, I’d argue most people with cameras are bad photographers.

Why is that? Does a good tool immediately make the user an expert at whatever art they’re attempting? No. No, it doesn’t. Just like the printing press doesn’t make somebody who wants to write an article a writer. It just makes it easier to try, the same way the internet technically gives everybody a voice. But that doesn’t mean people are going to listen. Just because the means of communication exists does not mean the goal can actually be accomplished, and it surely doesn’t mean people will listen.

Obviously, we’re talking about a different scenario here with AI agents and replacing human work, but really we’re not. We’re trying to go from point A to point B. We’re either trying to build something or create a service for people. Even if we can do that in less time, we’re still going to have to know how to do it. We’re still going to have to be masters of our craft. Engineers are still going to have to know how to code if they want to successfully operate agents, just like a pilot needs to know how to fly a plane in order to effectively use autopilot. If you put a person who doesn’t know how to fly behind the controls, I think the plane is going down.

Strategically working with agentic experiences

Now that we’ve firmly established AI and agentic experiences as nothing more than an extension of our own selves, we can start to take this cloning analogy more seriously. We just become more powerful, like the Power Rangers linking their mecha suits together into one giant robot so they can fight the bad guys more easily. That’s kind of what we’re doing when we use AI, except it obviously doesn’t look as cool or feel as heroic. It extends our capabilities past what we could do by ourselves.

The best way to use AI and agentic experiences, I think, is to work in parallel with them. People parallelize work all the time. That’s what a job is. You’re basically doing a task in parallel with other people, and then you combine the work and bam, you have a product or a service. You can think of controlling a fleet of AI agents the same way. If you’ve ever been a manager, you probably already know how to do this, but for people like me, engineers who have always been individual contributors, this is kind of a new concept. Rest assured, though, the concept itself is not new at all. It’s existed for tens of thousands of years.

Because taking on multiple tasks in parallel is an entirely new mode of work for an individual contributor, we need to go over exactly what it entails and exactly how you, as an individual contributor, can have more impact.

I like to first take whatever task or assignment I have at hand and separate it into individual action items. If you’ve ever made a to-do list, this should be pretty familiar. Then I sort these action items by complexity. Complexity as in: how much higher-level thinking would you need to do, in your role, to get that task done? If an action item could be passed off to a junior engineer and they could do it no problem, chances are it falls on the lower end of the scale. That’s not to say it isn’t important to do. It’s just that more people could theoretically do it, because it doesn’t require a specialized skill set or knowledge pool. And on the other end, you have tasks that maybe you can’t even figure out yourself and need to phone a friend about: the really complex problems that require you to think outside the box and force you outside your comfort zone.

All of engineering fits on this scale. The more senior you become as an engineer, the further along the scale you end up. Junior engineers worry about implementing straightforward tasks, while senior and principal engineers worry about higher-level, complex problems.

Once you’re done sorting everything on the scale, you can start attacking it from both sides: an easy 2x multiplier on your efficiency. Whatever you can do without even thinking about it is probably something an AI can do without even thinking about it, and probably faster than you can. The further along the scale you get, the higher the complexity of the problem, the less likely it is that an AI agent can successfully complete the task, which means it’s probably more efficient anyway for you to do it by hand. So the easy way to 2x your efficiency is to tackle the complex action items yourself while a single AI agent tackles a clear-cut implementation path in parallel. As each side completes its work, it inches closer toward the center of the scale. If the AI completes its task before you do, which it probably will, it can start taking on harder and more complex tasks while you catch up.
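To make this concrete, here’s a minimal Python sketch of the both-ends split. The task names and complexity scores are made up for illustration, and the cutoff for what an agent can handle is an assumption you’d tune per project.

```python
# Hypothetical backlog: (task, complexity score). Scores are illustrative.
tasks = [
    ("redesign auth architecture", 8),
    ("rename config fields", 1),
    ("design caching layer", 5),
    ("add unit tests for parser", 2),
    ("build CRUD endpoints", 3),
]

# Sort by complexity so the scale has a low end and a high end.
tasks.sort(key=lambda t: t[1])

# Assumption: anything at or below this score is safe to hand to an agent.
AGENT_CUTOFF = 3

# The agent starts from the easy end; you start from the hard end.
# As either side finishes, it pulls the next item nearest the middle.
agent_queue = [name for name, score in tasks if score <= AGENT_CUTOFF]
human_queue = [name for name, score in tasks if score > AGENT_CUTOFF]
```

The sort is the whole trick: once the backlog is ordered, “attack from both sides” is just two queues consuming the same list from opposite ends.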

However, a lot of the time engineering has more than one dimension. You might have to tackle more than a single problem: lots of different problems in different domains that all need to be connected together somehow. For this, we can do the exact same thing, taking the action items and ordering them by complexity. Except this time, there are a couple of key differences.

  1. There will be multiple domains, each requiring tasks at the same complexity level. For example, to complete a front-end feature you might need to build an API and build the components, both of which have the same layers of complexity: an easy part, a harder part, and a more theoretical part. In this case, we can use AI to tackle the easier parts in parallel, so that for the N domains our work covers, we are N times as effective on the lower-level tasks. The catch, obviously, is that we ourselves cannot tackle multiple higher-level problems at the same time. This is where the parallelization falls a bit short, as we still have to do the harder tasks in sequence.
  2. With multiple domains, we also have tasks not only to interconnect the domains but to tie everything into the final product. Believe it or not, this is also something we can parallelize with AI. AI handles tasks like importing libraries and wiring up integrations well; all you have to do is specify exactly what you want, and it will go for it. This does have the prerequisite that the individual pillars are complete. At least, you should probably wait until they are if you want the AI to actually know what it’s doing.
  3. As for tying it all into the final product, this is often something performed better by you. The AI doesn’t have feelings; it doesn’t know when something should be done. So that needs to be on you.
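The three points above can be sketched as a small Python structure. Everything here is hypothetical (the domain names, task labels, and gating logic are illustrative, not from a real project); the point is just to show where the N-way parallelism lives and where the sequential bottleneck sits.

```python
# Hypothetical two-domain feature: each domain has an easy and a hard layer.
domains = {
    "api":      {"easy": "scaffold endpoints", "hard": "design data model"},
    "frontend": {"easy": "build components",   "hard": "design state flow"},
}

# (1) Agents take the easy task in every domain at once: N-way parallel.
agent_work = [layers["easy"] for layers in domains.values()]

# (1, continued) You take the hard tasks, but in sequence; one brain, one task.
human_work = [layers["hard"] for layers in domains.values()]

# (2) Integration is also agent work, but gated on every pillar being done.
pillars_done = True  # placeholder; in practice, check both work lists
integration = "wire api to frontend" if pillars_done else None

# (3) Deciding when the final product is actually done stays with you.
final_review_owner = "human"
```

With N domains, `agent_work` grows with N while `human_work` still gets consumed one item at a time, which is exactly the shortfall point 1 describes.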

Overall, even though less parallelization is possible as problems climb the complexity scale, you will shave off a good amount of time if you work in parallel with AI. Funnily enough, this actually makes feature work way easier than bug fixes, because bug fixes require a high level of critical thinking and awareness of the entire project, while feature work doesn’t.

Closing thoughts

I’m curious to see how this strategy of tackling engineering with AI will pan out for corporations. With human engineers, it’s always cheaper and more efficient to use an existing system than to make a new one. However, because AI can code new systems so fast, we might be entering a world where it’s actually advantageous to tear the whole building down and rebuild it, because we can now put the building back up almost instantly.

This raises the question: if we’re going to redo everything every time, how are we going to improve? How are we going to iterate and get better over time? This is why I almost think code isn’t the final medium. The knowledge and understanding that AI has won’t be fully reflected in the codebase the way it is for human engineers. It might be reflected in how the model is trained. It might be reflected in Markdown files. But it will certainly be independent of the code it touches. Think of it like how an artist’s ability is independent of the work they make, or a musician’s ability is independent of the concerts they perform.

I’m a firm believer in harnessing the technology that’s available to us in the most efficient way. So if that interests you, feel free to subscribe to this blog. I post philosophical takes like this every now and again.

Until next time –
Jared