
Misuse of AI isn’t worth it

I stumbled across my first AI-generated YouTube video last night (at least, the first that I identified as such). It was rubbish. The subject was relatively sensible – “the latest trends in modern kitchen design”. The video was from a proper kitchen furniture supplier, and the presentation was decent enough that I watched for a couple of minutes before I realised how shallow it all was. The creator had clearly asked ChatGPT for a 500-word article on trends in kitchen design, had it read out by a (very good) text-to-speech service, and combined the result with a nice slideshow of showroom kitchens.

What made me realise that something was up? Firstly, although the narration was good, the words weren’t what a human would say. They were what ChatGPT might write. That’s not a criticism of ChatGPT in this case; the video creators had clearly not told it to write in the appropriate style, nor had they told it to stop making each paragraph sound like a standalone answer to the question “what are the latest trends in modern kitchen design?”

Worse though was that the slideshow had nothing to do with the narration. It just showed a series of nice kitchens, not illustrating what was being said. Why were they zooming in on worktops when the narration was talking about drawers?

The outcome? I was a bit annoyed with myself, but more irritated with the company that had wasted my time. I’m unsure what the objective of the video was; the only one I can think of would be to pull in viewers in the hope they’d convert to subscribers to the company’s channel, and perhaps receive more sales-oriented suggestions in future. But I’m probably giving them more credit than is due.

The moral of the story? If anyone, internally or externally, suggests that ‘AI’ content creation is the way ahead, remain sceptical. As I’ve written before, it’s a great tool for getting things kicked off, but speed is not a substitute for quality.