• Kissaki@programming.dev · ↑4 · 11 hours ago

    The HackerOne report that does not even apply has 44 upvotes.

    What do upvotes mean on HackerOne?

    I guess, at least here, they’re mindless “looks interesting” or “looks well worded” votes, or something?

  • macniel@feddit.org · ↑76 ↓3 · edited · 2 days ago

    Those who use AI to file reports against open source projects, flooding the volunteer devs who keep the world going, should be disqualified from using those open source projects to begin with (even though that’s not feasible).

    • teawrecks@sopuli.xyz · ↑21 · 2 days ago

      Consider that it’s not intended to be helpful, but could actually be a malicious DDoS attempt. If it slows devs down from fixing real vulnerabilities, then it empowers those holding zero-days for a widely used package (like curl).

  • Akatsuki Levi@lemmy.world · ↑42 ↓2 · 2 days ago

    I still don’t get it, like, why tf would you use AI for this kind of thing? It can barely write a basic Python script, let alone actually handle a proper codebase or detect a vulnerability, even if it’s the most obvious vulnerability ever.

    • emzili@programming.dev · ↑33 · 2 days ago

      It’s simple, actually: curl has a bug bounty program where reporting even a minor legitimate vulnerability can land you a minimum of $540.

      • zygo_histo_morpheus@programming.dev · ↑1 · 1 day ago

        What are the odds that you’re actually going to get a bounty out of it? Seems unlikely that an AI would hallucinate an actually correct bug.

        Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am, but it’s possible that there’s some more malicious idea behind it.

        • psivchaz@reddthat.com · ↑1 · 3 hours ago

          AI could probably find the occasional real bug. If you use AI to file 500 bug reports in the time it takes a researcher to find and report one, and only 2 of them pay out, you’ve still come out ahead.

          But in the process, you’ve wasted tons of time for the developers, who have to actually sort through the reports, read them, and verify the validity of each issue. I think that’s part of the problem: even if it sometimes finds a legitimate issue, these people are making it someone else’s problem to do the real work.
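
          To put rough numbers on those economics (a back-of-the-envelope sketch; the 500-report, 2-payout, and $540 figures come from this thread, everything else is illustrative):

            # Rough expected-value sketch of the spam-report economics above.
            reports_filed = 500    # AI-generated reports, per the example above
            valid_reports = 2      # the handful that happen to pay out
            min_bounty = 540       # curl's minimum bounty in USD, per this thread

            spam_income = valid_reports * min_bounty  # 2 * 540 = 1080 USD
            honest_income = 1 * min_bounty            # one careful report, one payout

            hit_rate = valid_reports / reports_filed  # 0.004, i.e. 0.4%
            print(f"spammer: ${spam_income}, researcher: ${honest_income}, "
                  f"hit rate: {hit_rate:.1%}")
            # Even at a 0.4% hit rate the spammer comes out ahead, because filing
            # each report costs them almost nothing; the cost lands on maintainers.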

        • BatmanAoD@programming.dev · ↑1 · 15 hours ago

          The user who submitted the report that Stenberg considered the “last straw” seems to have a history of getting bounty payouts; I have no idea how many of those were AI-assisted, but it’s possible that by using an LLM to automate making reports, they’re making some money despite having a low success rate.

        • CandleTiger@programming.dev · ↑1 · 20 hours ago

          “Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am”

          Yes. That is the problem being reported in this article. There are many, many people who have complete and unblemished optimism about how useful LLMs are, to the point where they don’t understand that it’s optimism and don’t understand why other people won’t take them seriously.

          Some of them are professionals in related fields.

    • kadup@lemmy.world · ↑10 ↓1 · edited · 2 days ago

      We’ve seen several scientific articles get published and later be found to have been generated via AI.

      If somebody is willing to ruin their academic reputation, something that takes years to build, don’t you think people are also using AI to cheat at job interviews and land high-paying IT jobs?

    • milicent_bystandr@lemm.ee · ↑6 ↓2 · 2 days ago

      I think it might be the developers of that AI, letting their system file bug reports to train it, seeing what works and what doesn’t (as is the way with training AI), and not caring about the people hurt in the process.

  • zarathustra0@lemmy.world · ↑30 ↓1 · 2 days ago

    I have a dream that one day it will be possible to identify which AI a given piece of slop came from, and so to charge the owners of said slop generator for releasing such a defective product, uncontrolled, on the world.

  • sp3ctr4l@lemmy.dbzer0.com · ↑15 ↓1 · 2 days ago

    On a barely related note:

    It would be funny to watch Markiplier try to take out a Tesla Bot, then Asimo, and then a humanoid Boston Dynamics robot, in hand-to-hand combat.

      • sp3ctr4l@lemmy.dbzer0.com · ↑3 · 2 days ago

        I mean… the thumbnail looks almost exactly like Markiplier to me.

        All these years later, I still can’t get his damn voice out of my head, purely from clicking on “really great vid” links from randos on Discord… bleck.

  • TheTechnician27@lemmy.world · ↑21 ↓5 · 2 days ago

    Just rewrite curl in Rust so you can immediately close any AI slop reports talking about memory safety issues. /s