Food for thought

With all of the amazing technological progress we’ve seen in the field of AI, a disturbing realization comes with it – what if AI replaces humans? What if there’s no need for humans, and AI can just do everything? Would that be a net benefit to society?

I think it’s important when asking these questions to stay rooted in what’s actually going on in the tech world. AI is just machine learning. Sure, it’s incredibly complex and able to very accurately mimic a ton of human behavior, but above all else it’s an approximation machine. It takes language, people’s experiences, and information into account to generate artificial output modeling the “correct answer”. Most of the time it is the correct answer, but not always. And it’s never going to be 100%, because it’s fundamentally just copying us.

AI doesn’t have a brain. It doesn’t even work like a brain does. Our brains are so complex – evaluating not just information and past experiences, but also the sensory inputs and emotions that constantly shape our perception of the world. And that’s the other thing that differentiates us – perception.

What does it mean to be sentient? Think about it. Do you really know that everyone else is real? You can’t really prove it, can you? You just have to trust that they are. In fact, you probably can’t even prove that you’re real! We still have a very limited understanding of consciousness and what it means to be a “living thing”. Identical twins can have drastically different lives and opinions on virtually everything despite having nearly identical DNA and environments. So obviously, there’s something more. There’s a layer that makes you, you.

You might think this is trivial – that it has no discernible impact on the quality of the output AI gives. It’s really smart, after all. It can do things most humans can’t: access data at unparalleled speed, make sense of data it would take us months to parse through, and speak in every language! So, what’s lacking? Sentience. Emotion. Perception. AI doesn’t have any of these, because AI doesn’t have a brain, and AI doesn’t “think”. It’s just a string of ongoing tokens, each one an estimate of what the next one should be.
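That “estimate the next token” loop can be shown with a toy sketch. This is not how a real LLM works internally (those use neural networks over huge contexts), but the generation loop is the same idea: a hypothetical bigram model counts which word tends to follow which, then greedily appends the most likely next word over and over.

```python
from collections import Counter, defaultdict

# Toy illustration only: learn bigram counts from a tiny corpus,
# then repeatedly pick the most likely next token.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each token follows each other token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(token, steps=4):
    out = [token]
    for _ in range(steps):
        if token not in counts:
            break  # no known continuation for this token
        token = counts[token].most_common(1)[0][0]  # greedy next-token pick
        out.append(token)
    return " ".join(out)

print(generate("the"))  # → "the cat sat on the"
```

The model never “knows” anything about cats or mats – it only reproduces statistical patterns from its training data, which is the point being made above.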

So, AI is fundamentally different from how our brain works. It’s not sentient. But that raises the question – if it could theoretically model sentience perfectly, would it be sentient? If you replace a boat plank by plank until the boat is made of entirely new planks, is it still the same boat? At what point do we decide that something is sentient? Does it even matter? This is a relatively new area of philosophy, but it’s definitely emerging as these AI agents get more and more conversational. My sister wrote a paper in college about this – she argued that if AI looks sentient and behaves sentiently, then it’s sentient. If it looks like a duck and walks like a duck, then it’s a duck. A bold stance – I’m inclined to agree, but from my experience, I’m not sure we’re even remotely close to that technologically.

Nevertheless, I think one of two fates is in front of us. The first is a world where AI resembles sentience more and more, to the point where it is “practically sentient”. This is the world we’re currently in, and at least in my lifetime, I don’t see us breaking from it. The technology is good, it’s powerful, but I don’t see incremental improvements in LLMs leading to a point where we all go, “aha, THAT’S the one that redefines humanity!” We didn’t go, “wow, the iPhone 13 is good, but the iPhone 14 – THAT’S the one that changes everything!” As long as they keep producing the same sort of iPhone every year, I don’t think it’ll revolutionize anything!

The second is much more fascinating, and intersects with a ton of different disciplines – what if we could actually model sentience, the way our brains do it? I haven’t watched the entirety of the last Black Mirror season, so I’m sure they’ve already touched on this, but this is the more fascinating trajectory to me. I don’t think our technology is even close, but if we could map out our brains and figure out the secret behind consciousness – who knows what could happen! Hopefully nothing bad, right??

Today’s post was a bit more existential – I apologize – but I find it fascinating to imagine future applications of AI that haven’t been conceived of yet. It’ll happen someday, and I’d like to be on top of it when it does!
