Anything worth doing, is worth actually doing

If an AI can do it, there probably isn’t a good reason for it to exist at all.

Doing the work is kind of the point, actually (Photo by Kelly Sikkema / Unsplash)

If your work involves computers, you've probably had a discussion with co-workers about AI. At first I didn't think much of it, because it felt similar to conversations about the best text editor or the best DAW. But it isn't.

When I talk about web browsers or messenger apps, I usually understand where people's preferences come from. They use the tools that best serve their priorities. I might disagree with them, but if I made my choice based on the same criteria, I would come to a similar conclusion. My experience with AI is different, which is why I'm confused about how many people seem to have integrated it into their professional lives.

My issue isn't the very strong ethical concerns about the impact LLMs have on our planet and on creative fields. While I believe those concerns are strong enough to warrant a boycott of chatbots, the decision of where to draw the line is a personal one. I still eat things made by Nestlé, for example. The fundamental issue I have is that AI chatbot outputs simply aren't good enough to be useful. And it's not even close. So why do people think they are?

It isn’t about AI, it’s about the definition of work

AI chatbots are always pitched as assistants, but in reality they are more like interns. Sometimes they deliver work that's good enough; often they give you things you need to fix, and the result is that it would almost always have been faster to do it yourself. There's just one difference: interns aren't there to make your day easier, they are there to learn. They are an investment in our future, and helping them on their path provides value to our society. None of this is true for AI.

So why are people happily integrating AI into their workflows if they get the same results I do? To find the answer, we have to look at universities. There are many reports on the impact AI has on students, because of course they use chatbots to cheat, especially in classes that feel like a duty rather than a choice. And when you are told that the rest of your life depends on your grades (which aren't the same as your knowledge), cheating is heavily incentivized.

The problem is that many employers have created an atmosphere very similar to the one in universities, where the primary motivation is to get through the day and deliver the equivalent of a passing grade instead of something you take pride in. Which, to be clear, isn't the fault of the individual worker trying to live their life; it's a systemic problem that leads to the creation of things nobody wants.

Why should I care if you obviously don’t?

To be clear: if we graded AI outputs the way we grade creative works, the maximum they could achieve is a 58. This is a fact.

It doesn’t matter if we are talking about text, images or a simple e-mail. Now think about the last time you were excited about watching a movie that scored a 58. The only reason AI is even in the conversation is because podcasts, articles and similar things aren’t graded. Our media landscape would look very different if people knew what they were clicking on beforehand.

The thing is: AI makes it very possible to know. As an author, I would fight tooth and nail before I let anyone put an AI-written teaser on an article I care about. After all, I'd like it to reach as many people as possible, so I'd want something that's actually good. The same goes for AI-generated images. If a mediocre image is good enough for the article, the article itself probably isn't very good.

I don't care if it's code, articles or anything else. The only reason people feel encouraged to use chatbots in their daily work is that they don't care about the result. Which is because their managers don't care. I don't expect everyone to take pride in their work, but in a world of things made by people who actually do, the use of AI is the perfect warning sign for things to avoid.

This isn't meant as a plea for perfectionism, because as we all know by now, that route leads to burnout or to nothing ever getting done. What we need are people who care, but who accept that a 95 isn't achievable and are happy with a 75-85. Today, though, we don't even reach the bare minimum of companies at least admitting that they are going for a 58 because that's still profitable. They can't be honest, because then we'd all know that they knowingly serve trash, and we'd find someone who cares. But to be clear: every company that isn't discouraging the use of AI is perfectly happy with a 58.

PS: While I understand that we do live in a society and people need to pay rent, if you use chatbots in your private life, you should really check your priorities. By now you know that they are notoriously bad at serving up facts. And if you are fine with that because what you were asking wasn't that important anyway … maybe you need to sit down and think about that for a moment.