AI Content Publishing Lacked Transparency: CNET Editor-in-Chief Defends Quiet Approach
There is an obvious need to standardise content labelling for both human and machine consumption.
What appears simple on the surface quickly gets complicated:
- Is it AI-generated?
- Is it AI-generated and then human-edited?
- Is it a mix, with some paragraphs produced one way and some another?
- How about that image?
- Plus, do we need to denote the provenance of the training set?
- What about bias: do we need to mark whether the content was generated from a training set that was audited for bias?
- And what does that really mean?
- What about business application code generated by AI that makes decisions in sensitive areas?
Watch this topic hot up fast, struggle with complexity... and get politicised.
Companies embracing AI for content will find themselves compelled to establish policies for how they mark AI output. I believe this one example is just the tip of a very big iceberg.
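To make the questions above concrete, here is a minimal sketch in TypeScript of what a machine-readable label schema might look like. Every name in it (SectionLabel, GenerationMode, the example model names) is hypothetical; it simply maps the questions in the list above onto a data shape, and does not reflect any existing standard.

```typescript
// Hypothetical provenance labels for a published piece of content.
// None of these names come from an existing standard; they mirror the
// questions in the list above in a machine-readable form.

type GenerationMode =
  | "human"                // written entirely by a person
  | "ai"                   // AI-generated, unedited
  | "ai-human-edited";     // AI-generated, then edited by a person

interface TrainingDataDisclosure {
  sourceDescription: string; // provenance of the training set, in prose
  biasAudited: boolean;      // was the set audited for bias at all?
  auditMethod?: string;      // ...and what did that audit actually mean?
}

interface SectionLabel {
  sectionId: string;         // e.g. a paragraph or image identifier
  kind: "text" | "image" | "code";
  mode: GenerationMode;
  model?: string;            // which model, if any, produced it
  training?: TrainingDataDisclosure;
}

// A mixed article: one human paragraph, one AI-assisted paragraph, one AI image.
const articleLabels: SectionLabel[] = [
  { sectionId: "para-1", kind: "text", mode: "human" },
  {
    sectionId: "para-2",
    kind: "text",
    mode: "ai-human-edited",
    model: "example-llm-v1", // hypothetical model name
    training: {
      sourceDescription: "public web crawl, cutoff 2022",
      biasAudited: false,
    },
  },
  { sectionId: "hero-image", kind: "image", mode: "ai", model: "example-image-gen" },
];

// Quick check: which sections would need an "AI-generated" badge?
const needsBadge = articleLabels.filter((s) => s.mode !== "human");
console.log(needsBadge.map((s) => s.sectionId)); // ["para-2", "hero-image"]
```

Even a toy schema like this leaves the hard questions open: how much human editing flips a label, who attests the bias audit, and whether the labels survive syndication.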
Related Posts
- How To Apply Policy to an LLM powered chat: ChatGPT gains a new guardian_tool, a policy enforcement tool
- Chat Markup Language (ChatML): Establishing Conversational Roles and Addressing Syntax-Level Prompt Injections
- Learn how hackers bypass GPT-4 controls with the first jailbreak: Can an AI be kept in its box?