You can take “justifiable” to mean whatever you feel it means in this context. e.g. Morally, artistically, environmentally, etc.

  • goat@sh.itjust.works
    18 hours ago

    It’s as useful as a rubber duck. Decent for bouncing ideas off when no one is available, or when you can’t be bothered to bother people with dumb ideas.

    But at the moment, no, it’s not justifiable as it directly fuels oligarchies, fascism in the US, and tech bros. Perhaps when the bubble pops.

      • AA5B@lemmy.world
        3 hours ago

        To do what? I’m fairly optimistic about narrower LLMs embedded into tools. They don’t need to be as comprehensive, so they’re more easily self-hosted. For more complex tools, they can tie together search, database queries, and reporting, and make it easier to find a setting when you don’t know the terminology for it.

        I’ve had some luck self-hosting a small AI to interpret natural-language voice commands for home automation.
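
        The voice-command idea above boils down to mapping free-form text to a structured intent your automation system can act on. Here is a minimal Python sketch of that mapping step; a toy keyword matcher stands in for the local LLM, and the device/action names are made up for illustration:

        ```python
        import re

        # Hypothetical device and action vocabularies; a real setup would
        # hand the text to a small local model instead of this matcher.
        DEVICES = {"lights": "light.living_room", "thermostat": "climate.home"}
        ACTIONS = {"on": "turn_on", "off": "turn_off", "set": "set_value"}

        def parse_command(text: str):
            """Map a spoken phrase to {"device", "action", "value"} or None."""
            text = text.lower()
            device = next((d for name, d in DEVICES.items() if name in text), None)
            action = next((a for word, a in ACTIONS.items()
                           if re.search(rf"\b{word}\b", text)), None)
            if device is None or action is None:
                return None  # could not understand the command
            value = re.search(r"\b(\d+)\b", text)
            return {"device": device, "action": action,
                    "value": int(value.group(1)) if value else None}

        print(parse_command("turn the lights off"))
        print(parse_command("set the thermostat to 72"))
        ```

        The structured dict is the part a home-automation hub actually consumes; the LLM’s only job is producing it reliably from messy phrasing, which is why a narrow, self-hosted model can be enough.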

        • epicshepich@programming.dev
          16 hours ago

          Can the rubber ducky use case really be considered plagiarism? I think it’s unequivocal that the models were trained on copyrighted data in a way that, if not illegal, is at the very least unethical. Letting AI write stuff for you seems a lot more problematic than using it to bounce ideas off of or talk things through.

          • goat@sh.itjust.works
            16 hours ago

            Plagiarism if it uses art, yeah.

            For LLMs, not so much, since you can’t really own Reddit comments.