points by Kwpolska 2 months ago

The story is credited to Benj Edwards and Kyle Orland. I filtered Edwards out of my RSS reader a long time ago; his writing is terrible and extremely AI-enthusiastic. No surprise he's behind an AI-generated story.

christkv 2 months ago

Is he even a real person, I wonder.

  • morkalork 2 months ago

    He was murdered on a Condé Nast corporate retreat and they have been using an AI in his likeness to write articles ever since!

    • christkv 2 months ago

      Would make for a good book: a company hires a famous writer, trains an AI on them, tortures them into signing over their likeness rights, and then murders them. It keeps up the appearance of life via video gen, voice gen, and writing gen.

cubefox 2 months ago

> his writing is terrible and extremely AI-enthusiastic

I disagree; his writing is generally quite good. For example, in a recent article [1] on a hostile Gemini distillation attempt, he gives a significant amount of background, including the relevant historical precedent of Alpaca, which almost any other journalist wouldn't even know about.

1: https://arstechnica.com/ai/2026/02/attackers-prompted-gemini...

  • lich_king 2 months ago

    For what it's worth, both the article you're linking to and the one this story is about are immediately flagged by AI text checkers as LLM-generated. These tools are not perfect, but they're right more often than they're wrong.

    • GaggiX 2 months ago

      >These tools are not perfect, but they're right more often than they're wrong.

      Based on what in particular? The only time I have used them is to have a laugh.

      • lich_king 2 months ago

        Based on experience, including a good number of experiments I've done with known-LLM output and contemporary, known-human text. Try them for real and be surprised. Some of the good, state-of-the-art tools include originality.ai and Pangram.

        A lot of people on HN have preconceived notions here based on stories they read about someone being unfairly accused of plagiarism or people deliberately triggering failure modes in these programs, and that's basically like dismissing the potential of LLMs because you read they suggested putting glue on a pizza once.

        • GaggiX 2 months ago

          I've had fun with AI detectors, particularly for images; even the best one (Hive, in my opinion) failed miserably on my tests. Maybe the ones trained on text are better, but I find them hard to trust, particularly if someone knows how to fiddle with them.

        • cubefox 2 months ago

          I just tested originality.ai and it claimed 100% probability that the editor's note on the Ars retraction [1] was itself AI-generated. For the Gemini article by Benj Edwards it was "only" 56%.

          I think your tools need a lot more evidence to be considered reliable.

          1: https://arstechnica.com/staff/2026/02/editors-note-retractio...

    • cubefox 2 months ago

      > immediately flagged by AI text checkers as LLM-generated

      Proof? Which one? I would like to run a few other articles through your checker to test its accuracy.

      • greenfrogs 2 months ago

        Hey! I'm not OP, but I've used originality.ai before and it saved my ass. It's super sensitive, but also super accurate.

        • cubefox 2 months ago

          I tested it, I think it's super inaccurate.

tocitadel 2 months ago

Also filtered out the following slop generators from my RSS feed, which significantly enhanced my reading experience:

Jonathan M. Gitlin

Ashley Belanger

Jon Brodkin

I wonder how soon I will be forced to whitelist only a handful of seasoned authors.

  • stavros 2 months ago

    > I wonder how soon I will be forced to whitelist only a handful of seasoned authors.

    Twenty years ago?