“It’s safe to say that the people who volunteered to “shape” the initiative want it dead and buried. Of the 52 responses at the time of writing, all rejected the idea and asked Mozilla to stop shoving AI features into Firefox.”

  • golden_zealot@lemmy.ml (mod) · +58/−9 · 23 days ago

    Hey all, just a reminder to keep the community rules in mind when commenting on this thread. Criticism in any direction is fine, but please maintain your civility and don’t stoop to ad-hominem etc. Thanks.

    • Wooki@lemmy.world · +8/−3 · 23 days ago

      don’t stoop to ad-hominem

      At this point, ad hominem is practically the polite name for the business model of “enshittification”.

      • golden_zealot@lemmy.ml (mod) · +18 · 23 days ago

        If it can be proven that an LLM bot account is present on the instance masquerading as a human user, I would recommend you report the account for that reason/spam so that it can be investigated and removed per instance rule 4 after evidence is found.

        Since they aren’t people, I’d say it’s pointless to reply to them with ad-hominem in the first place since it means nothing to them, and therefore reporting it would be the more effective action to take in any event.

  • Hirom@beehaw.org · +67 · edited · 23 days ago

    The more AI is being pushed into my face, the more it pisses me off.

    Mozilla could have made an extension and promoted it on their extension store, rather than adding cruft to their browser and turning it on by default.

    The list of things to turn off to get a pleasant experience in Firefox is getting longer by the day. Not as bad as Chrome, but still.

    • incompetent@programming.dev · +4 · 22 days ago

      Rather than adding cruft to their browser and turning it on by default.

      The second paragraph of the article:

      The post stresses the feature will be opt-in and that the user “is in control.”

      That being said, I agree with you that they should have made it an extension if they really wanted to make sure the user “is in control.”

    • pory@lemmy.world · +1 · 22 days ago

      Switching to de-Mozilla’d Firefox (Waterfox) is as simple as copying your profile folder from FF to WF. Everything transfers over, and I mean everything. No Mozilla Corp, no opting out of shit in menus at all.

  • balsoft@lemmy.ml · +56 · edited · 24 days ago

    You want AI in your browser? Just add <your favourite spying ad machine> as a “search engine” option with a URL like

    https://chatgpt.com/?q=%s

    and a shortcut like @ai. You can then ask it anything right there in your search bar.

    Maybe also add one with some query pre-written in the URL, like

    https://chatgpt.com/?q=summarize this page for me: %s

    as @ais or something; modern chatbots have the ability to make HTTP requests for you. Then if you want to summarize the page you’re on, you do Ctrl+L Ctrl+C @ais Ctrl+V Enter. There, I solved all your AI needs with four shortcuts and literally no client-side code.
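    For the curious, the keyword-search substitution described above boils down to swapping %s for the URL-encoded query. A minimal sketch (illustrative only, not Firefox’s actual code; the helper name is made up):

```python
from urllib.parse import quote

def keyword_search_url(template: str, query: str) -> str:
    # Mimic a Firefox-style keyword search: replace the %s placeholder
    # with the percent-encoded query text.
    return template.replace("%s", quote(query, safe=""))

print(keyword_search_url("https://chatgpt.com/?q=%s", "summarize this page for me: https://example.com"))
```

    The browser does exactly this kind of substitution when you type the shortcut followed by your query.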

  • brucethemoose@lemmy.world · +41/−3 · edited · 23 days ago

    Hear me out.

    This could actually be cool:

    • If I could, say, mash in “get rid of the junk in this page” or “turn the page this color” or “navigate this form for me”

    • If it could block SEO and AI slop from search/pages, including images.

    • If I can pick my own API (including local) and sampling parameters

    • If it doesn’t preload any model in RAM.

    …That’d be neat.

    What I don’t want is a chatbot or summarizer or deep researcher because there are 7000 bajillion of those, and there is literally no advantage to FF baking it in like every other service on the planet.


    And… Honestly, PCs are not ready for local LLMs. Not even the most exotic experimental quantization of Qwen3 30B is ‘good enough’ to be reliable for the average person, and it still takes too much CPU/RAM. And whatever Mozilla ships would be way worse.

    That could change with a good bitnet model, but no one with money has pursued it yet.

    • azertyfun@sh.itjust.works · +3 · 22 days ago

      Honestly, PCs are not ready for local LLMs

      The auto-translation LLM runs locally and works fine. Not quite as good as deepl but perfectly competent. That’s the one “AI” feature which is largely uncontroversial because it’s actually useful, unobtrusive, and privacy-enhancing.

      Local LLMs (and related transformer-based models) can work, they just need a narrow focus. Unfortunately they’re not getting much love because cloud chatbots can generate a lot of incoherent bullshit really quickly and that’s a party trick that’s got all the CEOs creaming their pants at the ungrounded fantasy of being just another trillion dollars away from AGI.

      • brucethemoose@lemmy.world · +1 · edited · 21 days ago

        Yeah that’s really awesome.

        …But it’s also something the anti-AI crowd, which is a large part of FF’s userbase, would hate once they realize it’s an “LLM” doing the translation. The well has been poisoned by said CEOs.

        • azertyfun@sh.itjust.works · +2 · 21 days ago

          I don’t think that’s really fair. There are cranky contrarians everywhere, but in my experience that feature has been well received even in the AI-skeptic tech circles that are well educated on the matter.

          Besides, the technical “concerns” are only the tip of the iceberg. The reality is that people complaining about AI often fall back to those concerns because they can’t articulate how most AI fucking sucks to use. It’s an eldritch version of Clippy. It’s inhuman and creepy in an uncanny valley kind of way, half the time it doesn’t even fucking work right, and even if it does it’s less efficient than having a competent person (usually me) do the work.

          Auto translation or live transcription tools are narrowly-focused tools that just work, don’t get in the way, and don’t try to get me to talk to them like they are a person. Who cares whether it’s an LLM. What matters is that it’s a completely different vibe. It’s useful, out of my way when I don’t need it, and isn’t pretending to have a first name. That’s what I want from my computer. And I haven’t seen significant backlash to that sentiment even in very left-wing tech circles.

    • Professorozone@lemmy.world · +3/−1 · 23 days ago

      You know what would be really cool? If I could just ask AI to turn off the AI in my browser. Now that would be cool.

          • cassandrafatigue@lemmy.dbzer0.com · +1/−3 · 22 days ago

            Your server has not a monopoly on, but a majority of, the worst shitlibs and other chuds. To the point that I’m genuinely surprised by agreeing with someone there, and am worried that when I examine it closely you’ll be agreeing with me for some unthinkably horrible reason.

            • Professorozone@lemmy.world · +2 · edited · 22 days ago

              The problem is I fundamentally do not understand how Lemmy works, so I just picked what seemed obvious. Like why wouldn’t I want the world.

              Also I thought from just reading sub-Lemmies? that .ml was the crap hole.

              Also, I looked up Chud and that’s really mean.

              • golden_zealot@lemmy.ml (mod) · +1 · 21 days ago

                I would say that while there are general rules of thumb, it’s generally good to never assume the intentions or beliefs of another user based solely on their home server. There are nice people all over, and there are also a lot of assholes all over.

                By the way, as to your question mark, they are just called “Communities” on Lemmy typically, though I think some instances call them something different occasionally.

    • sudo@programming.dev · +1 · edited · 22 days ago

      If I can pick my own API (including local) and sampling parameters

      You can do this now:

      • selfhost ollama.
      • selfhost open-webui and point it to ollama
      • enable local models in about:config
      • select “local” instead of ChatGPT or w/e.

      Hardest part is hosting open-webui because AFAIK it only ships as a docker image.

      Edit: s/openai/open-webui
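      For reference, a sketch of the about:config side of this (pref names from memory; double-check them in your Firefox version before relying on them):

```
browser.ml.chat.enabled = true
browser.ml.chat.hideLocalhost = false        ; reveals the "localhost" provider option
browser.ml.chat.provider = http://localhost:3000   ; URL of your self-hosted open-webui
```

      With those set, the sidebar chatbot picker should offer your local endpoint alongside the hosted providers.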

      • brucethemoose@lemmy.world · +1 · edited · 21 days ago

        Open WebUI isn’t very ‘open’ and kinda problematic last I saw. Same with ollama; you should absolutely avoid either.

        …And actually, why is open web ui even needed? For an embeddings model or something? All the browser should need is an openai compatible endpoint.

        • sudo@programming.dev · +2 · 21 days ago

          The firefox AI sidebar embeds an external open-webui. It doesn’t roll its own ui for chat. Everything with AI is done in the quickest laziest way.

          What exactly isn’t very open about open-webui or ollama? Are there some binary blobs or weird copyright licensing? What alternatives are you suggesting?

          • brucethemoose@lemmy.world · +2 · edited · 20 days ago

            https://old.reddit.com/r/opensource/comments/1kfhkal/open_webui_is_no_longer_open_source/

            https://old.reddit.com/r/LocalLLaMA/comments/1mncrqp/ollama/

            Basically, they’re both using their popularity to push proprietary bits, which their development is shifting to. They’re enshittifying.

            In addition, ollama is just a demanding leech on llama.cpp that contributes nothing back, while hiding the connection to the underlying library at every opportunity. They do scummy things like:

            • Rename models for SEO, like “Deepseek R1” which is really the 7b distill.

            • It has really bad default settings (like a 2K default context limit, and default imatrix free quants) which give local LLM runners bad impressions of the whole ecosystem.

            • They mess with chat templates, and on top of that, create other bugs that don’t exist in base llama.cpp

            • Sometimes, they lag behind GGUF support.

            • And other times, they make their own sloppy implementations for ‘day 1’ support of trending models. They often work poorly; the support’s just there for SEO. But this also leads to some public GGUFs not working with the underlying llama.cpp library, or working inexplicably badly, polluting the issue tracker of llama.cpp.

            I could go on and on with examples of their drama, but needless to say most everyone in localllama hates them. The base llama.cpp maintainers hate them, and they’re nice devs.

            You should use llama.cpp llama-server as an API endpoint. Or, alternatively, the ik_llama.cpp fork, kobold.cpp, or croco.cpp. Or TabbyAPI as an ‘alternate’ GPU-focused quantized runtime. Or SGLang if you just batch small models. llama-cpp-python, LM Studio; literally anything but ollama.

            As for the UI, that’s a muddier answer and totally depends on what you use LLMs for. I use mikupad for its ‘raw’ notebook mode and logit displays, but there are many options. Llama.cpp has a pretty nice built-in one now.
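            To illustrate the “API endpoint” approach: llama-server exposes an OpenAI-compatible HTTP API, so any generic client code can talk to it. A minimal sketch (the port and model name are assumptions for a default local setup):

```python
import json
import urllib.request

def build_chat_request(prompt: str, base_url: str = "http://localhost:8080") -> urllib.request.Request:
    # Build an OpenAI-style chat completion request for a local llama-server.
    payload = {
        "model": "local",  # llama-server serves whatever model it was launched with
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Usage (requires a running server, e.g. `llama-server -m model.gguf`):
# with urllib.request.urlopen(build_chat_request("Hello!")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

            Anything that speaks this API (a browser sidebar, a UI, a script) can be pointed at the same endpoint, which is why a separate chat frontend shouldn’t strictly be necessary.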

  • voodooattack@lemmy.world · +22 · 23 days ago

    Why not just distribute a separate build and call it “Firefox AI Edition” or something? Making this available in the base binary is a big mistake, at least doing so immediately and without testing the waters first.

  • 1984@lemmy.today · +21/−2 · 23 days ago

    I think I’ve lost hope at this point of seeing AI be actually useful in any application except ChatGPT and code editors.

    Companies are struggling to figure out how to use AI in their products because it doesn’t actually improve them, but they really, really want it to.

  • PearOfJudes@lemmy.ml · +12 · 23 days ago

    I think Mozilla’s base is privacy-focused individuals, a lot of them appreciating Firefox’s open-source nature and the privacy-hardened Firefox forks. From a PR perspective, Firefox will gain users by adamantly going against AI tech.

    • JackbyDev@programming.dev · +5/−3 · 23 days ago

      Maybe their thought process is they’ll gain more users by adopting AI while knowing they’re still the most privacy focused of the major browsers. Where have I seen this mentality before?

      Spoiler

      The American Democratic Party often believes it can get more votes by shifting conservative, believing the more progressive voters will stick with them because they’re still more progressive than not.

      • Niquarl@lemmy.ml · +1 · 21 days ago

        Yeah, but you can choose not to vote; can you really choose not to use a browser? At the end of the day, all the browsers are either Chrome or Firefox forks.

    • Vincent@feddit.nl · +1 · 22 days ago

      It’s interesting that so many of those privacy-focused individuals use Windows and don’t have a single extension installed though.

  • m-p{3}@lemmy.ca · +13/−2 · 22 days ago

    It depends. If it’s just for the sake of plugging AI because it’s cool and trendy, fuck no.

    If it’s to improve privacy, accessibility and minimize our dependency on big tech, then I think it’s a good idea.

    A good example of AI in Firefox is the Translate feature (Project Bergamot). It works entirely locally, but relies on trained models to provide translation on demand, without having Google etc. as the middle-man, and Mozilla has no idea what you translate, just which language model(s) you downloaded.

    Another example is local alt-text generation for images, which also requires a trained model. Again, it works entirely locally, and provides some accessibility to users with a vision impairment when an image doesn’t provide a caption.

    • arkitectnaut@lemmy.ml · +3/−1 · 22 days ago

      Totally agree. Just because AI is generally bad and used in stupid ways doesn’t mean that all AI is useless or without meaning. Clearly, if you look at the trends, people are using chatbots as search engines. This is not Mozilla forcing anything on us; we are doing this. At that point I much prefer them to develop a system that lets us use GPTs to surf the web in the most convenient and private way possible. So far I have been very happy with how Mozilla has implemented AI in Firefox. I don’t feel the bloat, it is not shoved in my face, and it is under my control. We don’t have to make it a witch hunt. Not everything is either horrible or beautiful.

  • blackroses97@lemmy.zip · +10 · edited · 23 days ago

    I am not really liking AI. Sure, it’s good for some things, but in the last 2 weeks I’ve seen some very negative and destructive outcomes from AI. I am so tired of everything being AI. It can have good potential, but what are the risks to the user experience?

      • sudo@programming.dev · +1 · 22 days ago

        Basically everything its used for that isn’t being shoved in your face 24/7.

        • speech to text
        • image recognition
        • image to text (includes OCR)
        • language translation
        • text to speech
        • protein folding
          • lots of other bio/chem problems

        Lots of these existed before the AI hype, to the point they’re taken for granted, but they are as much AI as an LLM or image generator. All the consumer-level AI services range from annoying to dangerous.

        • cassandrafatigue@lemmy.dbzer0.com · +1 · 22 days ago

          Is it actually both good and efficient for that crap, though? Or is it just capable of doing it?

          Is it efficient at simulating protein folding, or does it constantly hallucinate impossible bullshit that has to be filtered out, burning a mountain and a lake for what a super computer circa 2010 would have just crunched through?

          Does the speech to text actually work efficiently? On a variety of accents and voices? Compared to the same resources without the bullshit machine?

          I feel like i need to ask all these questions because there are so many cultists out there contriving places to put this shit. I’m not opposed to a huge chunky ‘nuclear option’ for computing existing, I just think we need to actually think before we burn the kinds of resources this shit takes on something my phone could have done in 2017.

          • sudo@programming.dev · +1 · 21 days ago

            All of the AI uses I’ve listed have been around for almost a decade or more and are the only computational solutions to those problems. If you’ve ever used speech to text that wasn’t a Speak & Spell, you were using a very basic AI model. If you ever scanned a document and had the text be recognized, that was an AI model.

            The catch here is I’m not talking about ChatGPT or anything trying to be very “general”. These are all highly specialized AI models that serve a very specific function.

  • Sam_Bass@lemmy.world · +10/−1 · 23 days ago

    Doesn’t matter what the end-user wants. Corporate greed feeding into technological ignorance is gonna shove it down our throats anyway

    • Alaknár@sopuli.xyz · +3 · 22 days ago

      My worry about AI built into my browser is that it’ll be turned into data mining, training, and revenue generation

      Isn’t the AI Mozilla is talking about all run locally?

      • 52fighters@lemmy.sdf.org · +1 · 12 days ago

        I’ll be honest, I do not know, but I’m always more worried about where it’ll end up than where it is right now. Even if it is all local for now, it is a small tweak for that to change. Just a small decision by a few people and everything changes. I don’t have enough trust to believe that decision won’t be made.