My vision for AI


Artificial intelligence is an incredible technology. Its potential is vast: it can help us understand complex topics, learn new skills, and completely transform the way we work.

But as with any powerful tool, my concern has never been about the technology itself, but about how it's used. And right now, I believe it's being misused.


A recent experience brought this into sharp focus. Do you know the YouTuber Enderman? He runs a popular tech channel, and I practically grew up watching his videos; he's a big part of why I understand technology as deeply as I do.

For a long time, his content was a huge part of my life—his skills, his humor, his sheer creativity. He was the one who sparked my deep interest in technology, and I genuinely believe his influence is a big reason I'm a security researcher today.


But a while ago, something incredibly frustrating happened.

His channel was banned by an AI for alleged "violations." Despite numerous appeals, the ban remained. (The channel is technically still live, and he can upload, but the constant threat of being unfairly demonetized or restricted is a nightmare.)


This is a massive YouTube channel we're talking about. If a creator with his reach and resources can be wrongly targeted by an automated system, what chance do smaller channels have? They're completely helpless against this kind of arbitrary enforcement.


This isn't just a one-off mistake; it feels like a systemic failure.

It reminds me of what happened to Quora. In its early days, Quora was a haven for brilliant minds and deep thinkers. But it lost its way when it started prioritizing advertisers over its users.

YouTube seems to be making the same mistake. They're so focused on creating a sanitized, advertiser-friendly platform that they've handed the keys to an overzealous, deeply flawed AI.

The AI can't truly understand a video's context, humor, or educational value. So why is it trusted to make irreversible management decisions?

It's a massive risk for the creators who built the platform. We see the consequences: educational, family-friendly tech creators are being targeted, while countless blatantly policy-violating channels (like many porn channels) seem to slip through the cracks.

It's not right.


Enderman left his audience with a piece of valuable advice that has stuck with me:

"Do not ever make YouTube your main income."


He's right—it's an incredibly unstable foundation. People think being a YouTuber is a dream job, but the reality for many is a constant battle against an opaque and unpredictable system.

I understand this instability in my own field, too.

As a bug bounty hunter and security researcher, the surface-level glamour fades quickly. You have to deal with dismissive analysts, bad-faith actors, and a system that isn't always fair.

The struggle is real, whether you're creating content or finding vulnerabilities.


These big corporations are going overboard with AI. They see a shiny new technology, suffer from severe FOMO (fear of missing out), and ship features before they're properly tested. It's like building a skyscraper on a foundation you haven't even checked.

Using an AI that cannot understand nuance to make final termination calls is a recipe for disaster.


Take OpenAI, for example.

Their mission statement is to ensure that artificial general intelligence (AGI) "benefits all of humanity." It's a noble sentiment, but the moment you look closer, you see the problem.

"Benefits all of humanity" is a dangerously broad statement. If someone wants to use your AI tool to plan something harmful, and they claim it's for their own "benefit," where does that leave you? You can't make a mission that broad and expect it to guide you through every ethical dilemma.


I get it. Moderating a platform as huge as YouTube is an impossibly difficult job.

But Google has billions in profit. They have the money to do it right.

Instead of using AI to make the final, aggressive decision to ban a channel, use it for what it's good at: triage.

Let the AI report and flag potentially problematic videos, sort them by severity, and then—and this is the crucial part—let a trained human reviewer make the final call.

That is a far more effective, nuanced, and less idiotic approach than the one YouTube is currently using.


Imagine if platforms cared about their creators as much as they care about their advertisers.

It would be a beautiful thing. The platform would be more vibrant, more creative, and more fun for everyone.

Look at Discord: it's a massive platform, but they don't act as irresponsibly as Google, nuking accounts with an automated hammer.

My hope is that more platforms will learn from these mistakes.


"A computer can never be held accountable, therefore a computer must never make a management decision" - IBM, 1979

AI is an amazing technology, but in the hands of corporations who only see the bottom line, it becomes a blunt instrument that causes real harm.

It's been a while since these problems started, and I'm still waiting for things to get better.
