• 1 Post
• 886 Comments
Joined 1 year ago · Cake day: February 10th, 2025
  • FauxLiving@lemmy.world to memes@lemmy.world · Fuck LLMs · 3 hours ago

    Intrusive? You’re not talking about AI itself. I have an 8GB model file, and it is not intruding on anything. It’s actually just sitting on the hard drive, not doing anything intrusive at all.

    What you’re talking about are things like Microsoft’s Copilot AI, Apple’s Siri integration, or whatever other chatbot service people pay for. Those services are intrusive, but they were intrusive before AI was invented.
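
    To be concrete about the “just a file on disk” point: a model file does nothing until something loads and runs it. Here is a minimal sketch, assuming the llama-cpp-python package and a local GGUF file; the path and filename are placeholders, not the actual file mentioned above.

```python
# Minimal sketch: a local model file is inert until you explicitly load and run it.
# Assumes llama-cpp-python is installed; the model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="/home/user/models/local-8b.Q4_K_M.gguf", n_ctx=2048)

# Nothing leaves the machine: inference runs entirely against the local weights.
out = llm("Explain why local inference involves no network calls.", max_tokens=128)
print(out["choices"][0]["text"])
```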



  • FauxLiving@lemmy.world to memes@lemmy.world · Fuck LLMs · 18 hours ago

    I mean, you did imply that I make people who disagree with me my personal enemies based on me commenting “Fuck gen-AI though”.

    I didn’t say you were not a bot; I only allowed that you were possibly a regular human. Though it is sus that you’re anti-AI and also offended on behalf of bots, hmmmm

    “And why should LLM-bots post anti-AI messages?”

    The same reason an LLM does anything: because a human prompted it to.



  • FauxLiving@lemmy.world to memes@lemmy.world · Fuck LLMs · 18 hours ago (edited)

    I sure did insult the anti-AI bots; you are right about that.

    That should not offend people that are not bots.

    You may have your opinions and be a human, but that is not true of everyone who posts on this topic.

    If you’re reading ‘bots’ as ‘people I think are dumb’ or ‘NPCs IRL’ instead of ‘automated posting done with the use of LLM-augmented human agents coordinating in teams’, then we’re probably having two different conversations.



  • It didn’t, but EAC added Linux support a while ago… so any game dev can choose to enable Linux support (and most do, in my experience). I play many EAC games on Arch (btw) with an NVIDIA card, HDMI 2.1, working HDR, etc. I have a working VR (Index) setup, a gaming mouse with better customization software (imo) than on Windows, etc.

    Most of these things had various minor issues even a year ago, and now the only thing I can think of that is non-standard/requires tinkering is that I’m using beta drivers to get Vulkan support on NVIDIA, which provides a good HDR implementation. Once that Vulkan support is released in the official driver, a user will be able to get all of the same features without ever needing to do anything but update their system and install Steam.

    The Linux gaming space advances every week. Things are approaching perfect, outside of structural issues (such as kernel anticheat). I have 213 games in my Steam library, and the only game that I cannot play is Apex Legends.

    Apex runs just fine, but EAC is configured to kick Linux clients when you try to connect to a match. This isn’t a Linux issue that can be patched; it’s a developer choosing not to allow Linux.

    If you haven’t tried gaming on Linux in a while, you should give it a shot. I’ve long since ditched Windows in order to have more free space.




  • “A leader doesn’t seem necessary. The leaderless nonviolent resistance movement has been winning in the court of public opinion.”

    100%

    In some sense, they’re using modern technology to mass-produce propaganda, but the people actually directing things are still stuck in a 1900s mindset when it comes to thinking about power.

    Communication technology has made these kinds of diffuse movements possible; that’s why they’re trying desperately to create an ‘antifa’ to fight against. They want a conflict with a target that they can slander and attack, and instead they’re just getting shit spontaneously from every possible angle.

    They’re fighting a 20th century battle with 21st century technology. Like Russia using armor to invade a country armed with Javelins.


  • FauxLiving@lemmy.world to linuxmemes@lemmy.world · Peasants... · 20 hours ago (edited)

    Steam? We had Wine launch scripts AND WE LOVED IT.

    If our DXVK and Mesa versions were not compatible we just kernel panicked like a real OS. Kids these days with their GE-Proton and NTSYNC don’t know how good they have it.

    Kernel synchronization primitives? ABSOLUTELY NOT, we’ll use file mutexes in userspace like Linus intended.
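
    For anyone who never suffered through that era, those launch scripts amounted to roughly the following. This is a sketch in Python rather than the usual shell script; the paths are placeholders, while WINEPREFIX, WINEDEBUG, and DXVK_HUD are real Wine/DXVK environment variables people actually set.

```python
# Rough sketch of an old-style Wine launch wrapper (normally a shell script).
# Paths are placeholders; WINEPREFIX/WINEDEBUG/DXVK_HUD are standard Wine/DXVK env vars.
import os
import subprocess

env = os.environ.copy()
env.update({
    "WINEPREFIX": os.path.expanduser("~/.wine-games"),  # per-game prefix
    "WINEDEBUG": "-all",                                 # silence Wine's debug output
    "DXVK_HUD": "fps",                                   # DXVK on-screen FPS counter
})

# Launch the Windows binary through Wine with the customized environment.
subprocess.run(["wine", os.path.expanduser("~/games/Game/Game.exe")], env=env)
```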



  • Space Marine 2 works just fine on Linux; I was just playing it last weekend. It has a Gold rating on ProtonDB.
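
    (If you want to check a rating like that programmatically rather than through the website, something like the sketch below works against ProtonDB’s unofficial summaries endpoint. The URL pattern and the “tier” field are assumptions based on how third-party tools query it and may change, and the app ID below is just a placeholder.)

```python
# Sketch: look up a game's ProtonDB tier by Steam app ID.
# The summaries endpoint is unofficial and may change; the app ID is a placeholder.
import requests

APP_ID = 1245620  # replace with the Steam app ID of the game you care about

url = f"https://www.protondb.com/api/v1/reports/summaries/{APP_ID}.json"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

summary = resp.json()
print(summary.get("tier"))  # e.g. "gold" or "platinum" (field name observed informally)
```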

    Kernel anticheat games can die in a fire, with all due respect to them.

    I’ll worry about them when I get through my backlog of games, which grows faster than my completed game list.


  • The progress in the last 2 years has been nothing short of amazing.

    The KDE team, Wine, Proton, TKG/GE, etc. have worked miracles for the Linux community.

    Also, shout out to Microsoft for spectacularly face-planting in their move to Windows 11/Copilot/vibe-coded OS development. Nothing deserves more credit for Linux’s growth than Microsoft’s complete failure to innovate as an operating system developer.





  • The case against Meta, where they ‘lost’ the copyright claim, was one of the biggest recent cases in which Authors Guild v. Google was applied. The judge dismissed one of the complaints (about training) while citing Authors Guild v. Google. Meta did have to pay for the books, but once they paid for them they were free to train their models without violating copyright.

    Now, there are some differences, so the litigation is still ongoing. For example, one of the key elements was that Google Books and an actual book serve two different purposes/commercial markets, so Google Books isn’t stealing market share from a written novel.

    However, for LLMs and image generators this isn’t as true, so there is the possibility that a future judge will carve out an exception for this kind of case… it just hasn’t happened yet.


  • “The argument is that the initial training data is sufficiently altered and ‘transformed’ so as not to be breaking copyright. If the model is capable of reproducing the majority of the book unaltered, then we know that is not the case.”

    We know that the current case law on the topic, which has been applied to the specific case of training a model on copyrighted material (including books), is that training a model on copyrighted material is ‘highly transformative’.

    Some models are capable of reproducing the majority of some books, after hundreds or thousands of prompts (not counting the tens of thousands of prompts required to defeat the explicit safeguards preventing this exact kind of copyright violation), as long as you make the definition of ‘reproduce’ broader (measuring non-contiguous matching, allowing near edits, etc.).
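
    To make that concrete, here is a rough sketch of the kind of broadened matching being described: counting scattered, non-contiguous verbatim runs between model output and the source text instead of requiring one exact contiguous copy. It uses the standard-library difflib; the filenames are placeholders.

```python
# Sketch: a "broad" notion of reproduction, summing non-contiguous matching runs
# between model output and source text instead of requiring one exact contiguous copy.
# The filenames are placeholders.
from difflib import SequenceMatcher

book_text = open("book.txt").read()
model_output = open("model_output.txt").read()

matcher = SequenceMatcher(None, book_text, model_output, autojunk=False)
matched = sum(block.size for block in matcher.get_matching_blocks())

# Fraction of the book covered by (possibly scattered) verbatim runs in the output.
print(f"matched characters: {matched} ({matched / len(book_text):.1%} of the book)")
```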

    Compare that level of ‘copyright violation’ with how the standard in Authors Guild v. Google, Inc. was applied. In that case, Google had OCR’d copies of books and allowed users (it is still a service you can use now) to full-text search books, returning a sentence or two of text around the search term.

    Not ‘kind of similar text that has some areas where the tokens match several times in a row’, but an exact 1:1 copy of text taken directly from a scan of the physical book. In addition, the service also hosts high-quality scans of the book covers.
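
    For contrast, the behavior described above (returning a verbatim sentence or two around a search hit) is a trivial exact-substring operation. A rough sketch, with a placeholder file and search term:

```python
# Sketch: exact-text snippet retrieval around a search term, roughly the behavior
# of the full-text search described above. The scanned text file is a placeholder.
def snippet(scanned_text: str, term: str, context: int = 120) -> str:
    idx = scanned_text.find(term)
    if idx == -1:
        return ""
    start = max(0, idx - context)
    end = min(len(scanned_text), idx + len(term) + context)
    return scanned_text[start:end]  # verbatim 1:1 text from the scan

print(snippet(open("scanned_book.txt").read(), "whale"))
```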

    Google’s use was considered highly transformative, and it returns far more accurate copies of the exact same books, with far less effort, than a language model that is trained, in many cases, to resist doing the very thing Google Books has been doing openly and legally for a decade.

    LLMs don’t get close to this level of fidelity in reproducing a book:

    [screenshot comparison: Google Books result vs LLM output]