After the dust settled, AI became the enemy.

Roscoe Rubin-Rottenberg

September 19, 2024

By November of 2022, AI companies had created technology more powerful than most people believed would be achieved in their lifetimes. These advancements were hidden, shrouded in the secrecy of research labs inside the tech giants, and any opinion a member of the public could form about the technology was mere speculation. When OpenAI launched ChatGPT in November of 2022, the curtain shielding not just OpenAI but all AI companies was pulled back. What it revealed was a half dozen companies that had fully intended to continue their research away from the pressure of the public eye. Now, they had to dance. From then on, anyone and everyone could see for themselves what the technology offered and form criticism and commentary that went beyond speculation.

In the days after ChatGPT launched, there was some fear and uncertainty, but the main reaction was awe. I remember using ChatGPT for the first time, my eyes widening as it generated multiple intelligent paragraphs in a matter of seconds. It was the "oh shit" watershed moment that showed how much potential this technology had.

Now the curtain has been gone for two years. None of the companies, including OpenAI, were prepared for it to come down in the way it did, but all of them have been doing their best to perform for the audience.

The public has watched the show for two years now. The novelty of those first few days is gone. In its place is distrust and aversion.

There is a small group that is incredibly enthusiastic and bullish on AI, holding onto that feeling they had when they first tried it, even if they don't fully understand the technology. But this group has grown largely alienated from the everyday person.

The rest are either deeply skeptical or actively anti-AI. I've noticed many factions, tribes if you will, within this AI criticism, each with its own points and arguments, and each with its own strengths and flaws.

The creatives

This is the most vocal and passionate group you'll encounter. They are staunchly against generative AI and all its uses. Their main concern is the unethical acquisition of training data. Most will argue that AI is incapable of creating anything original because of how it's trained. They also emphasize drawbacks unrelated to creativity that all the other tribes point out, such as hallucinations and obvious mistakes.

This group has a lot to say that I agree with. I think people should be paid if their work is being used for AI training.

However, I think that those who assert that nothing AI produces is truly original should consider what definition of 'original' they're using and whether any human-made work can fit that definition. AI is much more similar to humans than we would like to admit. It is fed data much as we are fed experiences, conversations, and the opinions of others. Just like us, everything it produces draws on this pre-existing knowledge base, and because of those influences, nothing either of us produces can be truly original.

This group also led the way in the broader aversion to AI. Much of it is reasonable: AI is being pushed on consumers incredibly hard as a revolution before they can even tell whether it's helpful. But what I take issue with is the idea, held by many of these people, that no generative AI tool, from grammar check to image editing, can be viewed as helpful or useful simply because it carries the label of AI.

The tech critics

This group is much less prominent but has reached a similarly broad consensus about AI. To be clear, I'm mainly talking about journalists and reviewers who look at this technology from the outside in, with a bit more knowledge than the average non-enthusiast but not as much as an engineer would have.

The main criticisms you'll hear among this tribe are that AI is overhyped or has stagnated, that AI companies promised more than they could deliver, and that AI can't get much better than it is now. You'll hear the common complaints about hallucinations, but here they're talked about in a very different way: with much less hatred and disgust than the creatives, but with a twinge of what feels like betrayal.

The criticism I take the most issue with is that AI has stagnated, has fallen behind schedule, or won't get better. It stems from the perception that ChatGPT was the first technology of its kind, the first model as powerful as it was at the time, when in reality the model powering ChatGPT was a fine-tuned descendant of GPT-3, which had already existed for roughly two and a half years before ChatGPT launched.

[Graph: Before ChatGPT, hardly anyone knew how good these models were. Source]

However, it appeared to the public as if GPT-4 came out a mere four months after GPT-3, because the public had only known about GPT-3 for that long, even though it had existed for years. This is why tech enthusiasts expect GPT-5 to come out sometime this year, so much so that Sam Altman had to clarify at an event in April of this year that GPT-5 would not be released, even though the previous release cadence would line it up for late 2025 or early 2026. People assume the technology only got good when they first heard about it.

The teachers

While a tiny number of teachers want to embrace and work with AI, most see it as a non-starter. This makes me pretty sad as a student, because for a moment it seemed like teachers would work with students to integrate AI into the classroom. Now we're back to teachers vs. students: teachers fighting to stop the use of the technology, and students sneakily using AI and finding ways to avoid detection.

This leads to teachers making students write long essays on paper, which makes it harder for students to write and edit and for teachers to grade. Where I live in New York City, the DOE made a version of ChatGPT specifically for schooling, and even that is banned at my school. I hope teachers will eventually realize that the way forward is to embrace AI for tasks like memorization, brainstorming, and getting started on assignments, not to write it off as a cheating mechanism.

The takeaway

It's not unnatural to be whipped into a frenzy when you see companies you have no reason to trust all pushing the same scary technology, one you've seen is more flawed than they admit. But I think there is a balance to be struck.

The dot-com bubble burst not because internet companies weren't the future but because investors were throwing money at any company with a domain, regardless of whether it had potential. The AI bubble will burst not because AI doesn't have massive potential but because neither companies nor investors fully understand the best ways to use that potential, so they slap 'AI' onto anything they can think of.

The distrust and hatred toward the companies that take work without asking, do incredibly creepy things with people's likenesses, or build stupid scams with no purpose other than taking your money makes complete sense. I completely agree that AI is being stuffed into places where it has no business being (Excel???) and isn't yet powerful enough for the mass consumer base it's being pushed to. But falling into the trap of villainizing the technology itself, hating a product just because it uses any form of AI, or arguing that AI will never get better creates an us-vs.-them divide that is completely unnecessary.

It's important not to forget that original feeling you had when you first tried ChatGPT, and how powerful and somewhat magical the technology can be, while still remembering its pitfalls and how companies use it, and keeping in mind that it will improve with time.
