AI Content Publishing Lacked Transparency: CNET Editor-in-Chief Defends Quiet Approach

Companies embracing AI must establish policies for labelling content, despite the complexity of classifying AI-generated, mixed, or biased output and the political considerations involved.

There is an obvious need to standardise content labelling, both for human and machine consumption.

What appears simple on the surface quickly gets complicated.

  • Is it AI generated?
  • Is it AI generated + human edited?
  • Is it a mix - some paragraphs one way, some another?
  • How about that image?
  • Plus, do we need to denote the heritage of the training set?
  • What about bias - do we need to mark whether the training set behind the content was unbiased?
  • And what does that really mean?
  • What about business application code generated by AI that makes decisions in sensitive areas?
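To make the problem concrete, here is a minimal sketch of what a machine-readable label answering some of the questions above might look like. Everything here is hypothetical: no standard schema exists yet, and the class names, provenance categories, and fields are assumptions for illustration only.

```python
from dataclasses import dataclass, field, asdict
from enum import Enum
from typing import List, Optional
import json

class Provenance(str, Enum):
    # Hypothetical categories mirroring the questions above.
    HUMAN = "human"
    AI_GENERATED = "ai_generated"
    AI_ASSISTED = "ai_assisted"   # AI generated + human edited

@dataclass
class SegmentLabel:
    # Label one piece of a document (paragraph, image, etc.),
    # since a mix of provenances within one document is likely.
    segment_id: str
    provenance: Provenance
    model: Optional[str] = None               # which system produced it
    training_data_note: Optional[str] = None  # heritage/bias disclosure

@dataclass
class ContentLabel:
    document_id: str
    segments: List[SegmentLabel] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialise for machine consumption; Provenance is a str
        # subclass, so it serialises as its string value.
        return json.dumps(asdict(self))
```

Even this toy version shows where the hard questions surface: `training_data_note` is free text because nobody has agreed what a bias or heritage disclosure should formally contain.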

Watch this topic hot up fast, struggle with complexity...and get politicised.

Companies embracing AI for content will find themselves compelled to establish policies on how they mark AI output. I believe this one example is just the tip of a very big iceberg.