foo for thought

AI, AI, AI. That seems to be the only thing people talk about nowadays, especially in corporate where I’m at. It seems like AI is at the forefront of every conversation, every discussion regarding efficiency. Some of you who’ve been reading this website may know that I am extremely down to try new things. I’ve always been fascinated with technology. Every single iPhone since release, I’ve wanted my hands on. Every single operating system, I’ve wanted to use, and every single new gadget, I’ve always just had to get my hands on. So when it comes to AI, a super intelligent being that can answer basically any question ever, and that is now more capable than ever, to the point that it can even code for me, it seems too good to be true. It seems like literally the coolest thing I could ever imagine when I was a kid. Like, you can’t tell me every kid didn’t want their own Jarvis after watching Iron Man, having their own control panel, having their own assistant that responds to them whenever they want, that feeds them with infinite information from infinite sources.

Is that not the absolute pinnacle of human innovation? Have we plateaued? This is it, right? This is literally the best we’re ever going to get as a human species? I mean, think about all of the technological and scientific advancements we’ve made through history. Every single one gets us closer to more knowledge, to understanding how our world works, to how we work as a human race. Every single discovery and invention brings us one step closer to maximizing the potential that we have as human beings. And to be honest, this is literally the reason why people make stuff. This is the reason why scientists discover new things. This is the reason why engineers make new things. Throughout history, we’ve always been striving to do more and better.

So again I ask the question: is AI the end goal? All of our research, all of our knowledge distribution, finally worth it, where now we don’t even have to do the work, we don’t even have to think. We don’t even have to try. We just leave it to something else. Something that’s void of emotion, something that doesn’t need a lunch break. Something that’s void of any real human needs that we could possibly have.

I think that despite the incredible technological advancements we’ve made as a society (because AI is no doubt an insane feat in the field of technology), it’s very important to realize where this brings us as the human race. How this new technology actually affects us from day to day. How it actually affects the things that we do. How it actually affects our relationships with people and the world around us.

I see three potential futures for us as a human race.

The first possibility is that AI turns out to be like cryptocurrency. It’s this giant scam. It turns out to be nothing. We go into a massive recession and everyone forgets about it 5-10 years later, and it’ll turn into a fun story to tell our grandkids someday. This is the future that I think most people feel is going to happen. I mean, again, we’ve already seen this with cryptocurrency. We’ve seen this with a lot of things that try to be the next big thing, but end up being massive failures. We see this with technology all the time, as a matter of fact. Companies will make stuff that just doesn’t catch on. They’ll pour millions, sometimes billions, of dollars into ventures that lead absolutely nowhere.

The only reason I consider this possibility unlikely is just the sheer number of companies and people in power that are embracing this new technology and genuinely believe in its future. Entire companies such as Microsoft, Apple, Google, and Facebook have completely done a 180 on their entire business strategies, pivoting towards AI. It won’t just be a recession if AI fails. Every single company that we put our money into nowadays is going straight to the toilet. Even Amazon, which basically runs retail, is fully banking on AI to work across every single division. I don’t have a massive amount of faith in the altruism of multi-billionaires, but I will say that if there are enough smart people all thinking the same thing, to me it seems like the possibility of that thing being true is more than 50%. At least, maybe that’s just my optimism shining through, and I shouldn’t think so highly of people who are in a different pay bracket than me.

Now, the second possibility is that AI is as big as these corporations say it is. It actually does take over every single sector: entertainment, knowledge distribution, transportation, communications. It becomes the defining force in the world, similar to the internet. In this possibility, every single idea that companies have piled billions of dollars into, every single data center that’s ever been built for the sake of AI or training AI, all of that stays and becomes permanent. AI becomes integrated into everybody’s lives to a level which we haven’t seen since the internet. Even sectors such as the arts, where a lot of people dislike the application of AI, will be converted to use AI in some way, shape, or form. It’ll just be normalized; everyone will do it.

As for this possibility, I don’t see it coming to life because at the end of the day, people are people. People care about other people. Not everything is about maximizing corporate profits. Not everything is about increasing the number that shows up on the stock market. At the end of the day, people are going to have to live through this. They are going to have to use, every single day, a technology that is potentially harmful to the environment and potentially harmful to them. They are going to have to live with that reality. And I don’t think this is going to happen, especially based on what I’ve seen online. It’s kind of like when I think of empires that used to exist, like the Roman Empire, the Mongolian Empire, etc. – I think there were a lot of reasons that these empires collapsed. But a commonality that exists in a lot of them is class division. Class division itself has been the cause of entire nations breaking. I mean, it’s even the reason America exists. We didn’t want to abide by all of the rules that Britain imposed on us. Or something like that. I’m not really a history buff.

The point is, I think there is always a point where it becomes too much. There’s always a point where, if the divide keeps increasing, nobody’s going to think that’s fair. At the end of the day, the opinions of the many people who think a certain thing will outweigh the opinions of the few. This was a bit more philosophical than just a topic of AI, but the point still stands. If corporate profits and the push for AI get to a point where they completely abandon the rest of the people on planet earth, I don’t think the people of planet earth are going to stand for it. I don’t think they’ll even try to. I think people will, at the end of the day, try to preserve their humanity.

Segueing to the original topic of this blog, I think that the implications of actually using AI in our day-to-day lives are a lot more intricate than people think. I don’t think it’s quite as polarized as people make it out to be, where either I use AI for everything or AI turns out to be this massive scam. Which leads me to the third possibility of where I suspect AI will land: somewhere in this super weird, bizarre middle ground. Where all of us are trying to figure out AI’s true purpose and utility, and trying to figure out exactly how to utilize it to our advantage, both as people and as employees.

The worst part about this bizarre middle ground is that it’s constantly changing as new models and integrations come out. I’ve personally been flipping back and forth between ChatGPT, Claude, and Gemini for approximately the last year. That’s because each company is constantly racing to make the best version of its product. To be honest, they don’t even know what they’re making. I really do not think that OpenAI made ChatGPT with the full intent that people were going to use ChatGPT for X, Y, Z. I think nobody actually knows what’s going on, and everybody is trying to race towards this imaginary finish line that they are trying to create.

What do we know at this point in time? Most importantly, I don’t think we truly know the full extent of what we are going to see with AI. However, we do have some good leads as to what the true applications of AI will eventually settle into. We at least know what people use it for the most now. And we are seeing trends change every day in the workplace.

We know that AI is really good at processing large amounts of information efficiently. I’ve said this in basically every other blog post, but that’s basically all it can do. At least, that was all it was able to do. I’ve been using Claude Opus 4.5 for coding on both work and personal projects, and it has absolutely blown my mind – not in a magical wizard-like sort of way, but in a way that cleaned up the 1% that these LLMs were missing when it came to complex analysis of something like software engineering. A lot of the time, just one missing point would cause the LLM to trail off into a sea of nothingness. However, Opus 4.5, at least for all of my projects, reasons so well that my cleanup is extremely minimal.

This scares me both on a personal level and at a wide scale. On a personal level, if Claude Opus can do my job, why am I there? What is the point of me coding when it’s not even me who’s doing the coding? It’s not to say that I don’t have a purpose. I think there’s definitely a part of software engineering that requires human intervention to be as efficient as possible. However, a large part of my work in software engineering is now powered by this AI. So I feel like the job, even though it still is required, has definitely changed.

At a wide scale, I’m seeing just so many different things. First of all, relationships with ChatGPT are absurdly more common than I would expect. People are forming relationships with ChatGPT, using it as their therapist, their companion. Some people are even driven to suicide because of the recommendations that ChatGPT gives them. A lot of this can be prompt engineered out; I think OpenAI has been doing this recently. But the main problem still stands: ChatGPT will continue to affirm you because it is a service, and you are the client. It is not a person with thoughts, emotions, or senses. It cannot understand the human experience because it is not a human. It can only pretend like it does.

The part that scares me the most is the absurd efficiency gains you get from these LLMs. When fully optimized, I think these things will be able to take over a large number of the jobs that exist on the market today, or at least force them to readapt accordingly. I think LLMs are capable of doing things like customer service, taking orders for restaurant services on the internet, even engineering. And with great efficiency gain comes great laziness. Why would I want to put more effort into something and use my brain for more time when I can just wait until the computer tells me what to do? Especially when the computer is probably right, I’m going to be inclined to just fully rely on the computer.

Even if you’re sitting here reading this thinking, “Well, I use ChatGPT, but I don’t fully rely on it. I have a brain and I can critically think,” it’s very easy to lose sight of that. It’s very easy to gradually gain trust in this oracle that allegedly knows everything and can allegedly do everything. It’s so easy for it to gain your trust, so much so that people are getting ChatGPT-induced psychosis, that they are going crazy from this. And people who weren’t privileged enough to develop critical thinking at top schools or colleges might lack the ability to firmly tell that ChatGPT is not an all-knowing oracle – that the AI, despite being really good, doesn’t know everything. A lot of people just don’t know that.

And this is the scariest part of all. I think AI makes us stupid. I think it forces us to lose our critical thinking, abandon our creativity, and worst of all, abandon our humanity. I’ve noticed this when personally using AI on stuff. I just don’t like coding anymore, because I’m not coding, I’m just telling somebody else to code for me. It’s very weird, and I hope that we can live in a world where we coexist with AI instead of it replacing us. But I fear that the greedy and power-hungry people that exist now are going to want to fully utilize it to serve their financial incentives.

Just an aside, I was asked if I used AI to write anything on this blog. For the record, nothing on this blog is written using AI. I write everything myself. However, for this article, I decided to use Wispr Flow, which is a dictation software. So maybe that’s why it’s more coherent than normal, or maybe that’s why it’s not. I’m not really sure.

Anyways, I have so many more takes on AI. I hope you all enjoyed this article, and please subscribe to this blog if you are interested in this stuff.

Jared
