• dan1101@lemm.ee
    1 year ago

    That’s good, but considering that (last I heard) months of content are uploaded every day, I think enforcement will be spotty at best.

  • AutoTL;DR@lemmings.world
    1 year ago

    This is the best summary I could come up with:
    “We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools,” the company wrote in a statement.

    The move by YouTube comes as part of a series of efforts by the platform to address challenges posed by generative AI in content creation, including deepfakes, voice cloning, and disinformation.

    In the detailed announcement, Jennifer Flannery O’Connor and Emily Moxley, vice presidents of product management at YouTube, explained that the policy update aims to maintain a positive ecosystem in the face of generative AI.

    Also, content created by YouTube’s own generative AI products, such as AI-powered video creator Dream Screen, will be automatically labeled as altered or synthetic.

    Creators who fail to disclose their use of AI may be subject to penalties, including content removal or suspension from the YouTube Partner Program.

    “This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar.”


    The original article contains 612 words, the summary contains 175 words. Saved 71%. I’m a bot and I’m open source!