The buzzword of the year – AI. I’ve been particularly interested in applications of this new technology, but it doesn’t seem like there’s a lot to work with. Companies seem to push AI in ways that, as a consumer, make absolutely no sense. No, I don’t want an AI bombarding me with terrible branding suggestions every time I try to make a website. No, I don’t want to generate an AI slop graphic for my new business.
Why is everyone so obsessed with AI when it doesn’t seem to do anything special?
What is AI?
To better understand the craze, we need to break down exactly what AI is at a fundamental level. Since the dawn of computing, people have been trying to make computers smarter. The very first computers took up entire rooms and could only perform basic arithmetic, yet engineers wouldn’t shut up about them and kept improving them. Before you knew it, we all had little glass chunks in our hands that can instantly contact anyone around the entire world. Pretty cool, right? It’s because of our ambition to accomplish more. Recently, we’ve gotten pretty good at making fast computers. People have always been the operators behind computers, but with this new technology, maybe the computer could operate itself.
That’s the premise behind artificial intelligence. The AI you’re probably familiar with operates on an LLM – a large language model. Basically, that means it has learned our speech patterns from a massive amount of text, mostly thanks to the internet.
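Under the hood, the core trick is next-word prediction: given what came before, guess what comes next. Here’s a toy sketch in Python – just a bigram counter, nowhere near a real LLM’s scale or architecture – purely to illustrate the idea:

```python
from collections import defaultdict, Counter

# A toy "language model": count which word tends to follow which,
# then predict the most likely next word. Real LLMs do this with
# billions of parameters over tokens, but the core loop is the same.
corpus = "the cat sat on the mat the cat ran on the grass".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    # Pick the most frequent follower seen in the training text.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" (seen twice, vs "mat"/"grass" once)
```

A real LLM swaps the counting table for a neural network trained on trillions of tokens, but the “predict the next token, then repeat” loop is the same.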
Think about the decisions you make every day. You make these decisions based on two things – your past knowledge, and your brain’s ability to make decisions. AI doesn’t really “have” a brain, but it can have way more past knowledge than we ever could. What you end up with is a robot that can answer pretty much anything surface-level, but struggles with more complex problems. That’s the current state of AI as of writing this article. Sure, there are reasoning models that do quite well on complex problems, but they crack under just a little bit of pressure.
I firmly believe there’s a hard limit on AI’s ability to reason. As the problems we try to solve become more and more complicated, the AI requires exponentially more sample data and computing power to reason correctly. We’re at the point where we’re shelling out hundreds of billions to OpenAI just to have a couple of decently functioning models. Not to say this isn’t impressive, but getting the actual thinking capacity of a human would take so much more money.
My experience with AI in coding
I’ve been trying to use AI for coding, and it does a pretty decent job, but only at the surface level. If I know exactly what I’m going for and exactly how to solve it, AI is brilliant at cutting out all of the work I’d otherwise have to do manually – sometimes reducing my work time by a factor of ten. However, the second I run into a roadblock with just a little bit of complexity, it freaks out and comes up with an answer that honestly doesn’t even come close to working.
What I’ve found from using different AI models for coding is that AI particularly excels at:
- Initializing projects. This is a lifesaver when dealing with many different frameworks.
- Dealing with coding languages whose syntax I’m not very familiar with. The fact that you can be a programmer and, by default, have a working grasp of every language is amazing.
- Writing test cases, styling, and other overhead I really don’t want to do. Not having to deal with all of this is such a time saver and makes coding so much more enjoyable.
And, what it’s terrible at:
- Recommending higher-level infrastructure design. It has absolutely no clue how to think for itself, and usually just “assumes” a ton that isn’t necessarily what you want.
- Solving complex problems optimally. It’ll get a lot of the way there, but will usually fail. Even if it’s just a one-line change, these are always faster for me to tackle myself.
- Dealing with a lot of code. Hallucinations oftentimes cause parts I’ve worked hard on to get randomly wiped when the AI dreams up some “optimization” during a prompt that isn’t even remotely related. I’m not sure exactly why this happens, but given how complex code can get, it’s not surprising.
I think better AI models can solve these pitfalls somewhat; however, I think you’re always going to need a qualified operator to make the most of a tool like this. If you don’t know how to fly a plane, it doesn’t matter how good the autopilot system is – you’ll probably still have a rough time.
AI in consumer tech
“Consumer tech” is all of the technology being marketed and sold to just normal people. The iPhone, personal laptops, headphones – all of the tech that you’d go out and buy yourself.
Now tell me, why would you go out and willingly look for AI to buy for yourself? There’s a massive push to market AI to consumers: Apple with Apple Intelligence, Microsoft with Copilot, Google with Gemini, and Facebook with Meta AI. Every single app that can use AI in some way is using AI in some way. But for some reason, I get the feeling that none of these is particularly good.
Despite working in the AI industry myself, I can’t help but critique the way AI is getting pushed in the consumer industry. Notably, it doesn’t feel like it really does anything. Am I going to buy a phone because it has AI and another phone doesn’t? Why would I buy an AI TV over a TV without AI? Me personally, I don’t think I care that much. And I’m honestly not sure a ton of other people care either. As a consumer, I want a solid reason to buy new tech, which means it fundamentally has to be backed by some real value or utility for me.
Along with coding, I will say I’ve also tried my best to use AI as a consumer of technology. What I found is actually a good number of things AI does really well:
- Dealing with random electronics and ports. Google is often terrible at honing in on specific electronic details. AI can aggregate this data into an easy-to-read summary, which makes figuring out what cable you need for some random monitor super easy.
- It’s a “better Siri”. I’ve had a lot of experiences with Siri where it either responds completely incorrectly or doesn’t even know how to interpret my question. When I have a query about something random that I want to know, AI usually gets the job done very well.
- Learning about something new. AI is very good at explaining things and acts as your own personal tutor. I’ve been learning a couple of languages on the side, and Microsoft Copilot actually does a really good job of being a conversational language tutor. It corrects my mistakes and even gives suggestions for improvements and more things to learn.
- Dealing with a ton of data in general. If you have lots of text in a document or something, upload it to AI and it’ll quickly analyze the entire thing. Super useful.
Notably, all of these use cases treat AI as a “better search engine” or “better information aggregator”. Unfortunately, I think I see AI being marketed as something more than this. Apple has run such an aggressive ad campaign for Apple Intelligence – touting it as something that will “save your family” or “land you that job”. When, in reality, you are the one who has to do those things. Perhaps AI can help with that, but in all honesty, it can’t do a better job than, say, Google can.
The UI of AI
This leads me to the UI of AI in the consumer realm – which is primarily a chatbot. I think some companies are catching on to more agentic experiences, which is really cool for enterprise-related fields. But apart from the chatbot, there really is no UI for AI. You ask it a question, and it gives you an answer.
Certain applications, such as “Tasks” from ChatGPT, let you turn it into a “better Siri” – you can ask it to remind you of things in plain English. And there are some other interesting applications from them, such as Operator, which will perform actual tasks for you on your computer with just a prompt. I think these are purely experimental, but they’re extremely important because they push the limits of AI’s UI beyond just being a chatbot.
The Rabbit R1 actually caught my eye when it released for this reason. It made AI into a way to access apps and information, not just a chatbot that retrieves data in one dimension. Unfortunately, I think the project kinda flopped – but the point is that it still pushed the boundary of an intelligent user interface that isn’t just a chatbot.
I think we have a ton of work to do as a tech industry to define how people use AI. We’re not quite there yet – it’s not getting as much attention as I think it deserves. The technology is so incredibly powerful, but consumer applications have yet to prove that it’s something that can truly change the world.
Perhaps I’ll see the refinement of AI as a UI in my lifetime, or perhaps it’s no more than a “better Google”. I’m worried that, as companies race to beat each other, the innovative spirit will be lost in the process. Only time will tell, I guess.