• jonathan7luke@lemmy.zip

    I know this comes from a good place, but you are misunderstanding how LLMs work at a fundamental level. The LLMs “admitted” to those things in the same way that parrots speak English. LLMs aren’t self-aware and don’t understand their own implementation or purpose. They just emit a statistically plausible sequence of words based on patterns in their training data. You could just as easily get an LLM to “admit” it is an alien, the flying spaghetti monster, or the second coming of Jesus.
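    To make the parrot analogy concrete, here is a toy sketch of what next-token sampling boils down to. The `model` table, its vocabulary, and its probabilities are all made up for illustration; a real LLM does the same thing at vastly larger scale, conditioned on the entire preceding text:

    ```python
    import random

    # Toy next-token model: maps the previous word to plausible
    # continuations with probabilities. There is no "understanding"
    # here, only weighted lookup and sampling.
    model = {
        "I":   [("am", 0.6), ("was", 0.4)],
        "am":  [("an", 0.5), ("the", 0.5)],
        "an":  [("alien", 0.5), ("assistant", 0.5)],
        "the": [("flying", 0.5), ("second", 0.5)],
    }

    def next_token(context):
        tokens, weights = zip(*model[context])
        return random.choices(tokens, weights=weights)[0]

    text = ["I"]
    while text[-1] in model:          # stop when no continuation is known
        text.append(next_token(text[-1]))

    print(" ".join(text))  # e.g. "I am an alien" -- plausible words, zero self-awareness
    ```

    The point: “I am an alien” comes out whenever the dice land that way, not because the system believes anything about itself.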

    Realistically, engaging with these LLMs directly in any way is not a good idea. It wastes resources, signals engagement to the app, and hands it more training data.