• CodexArcanum@lemmy.dbzer0.com
    4 days ago

    I used to work for one of the nation’s largest survey marketplaces. Y’all have no idea how deep this hole goes.

    Surveys/polls are largely requested by political polling groups, research teams, and ad agencies. They put those up on an auction block just like ads, and then we would route traffic into them from various places. Mostly the survey takers come from mobile games (take this 3-question survey for 20 Blorp Points kind of stuff) or survey-taker apps that give you points for gift cards and such.

    So even before bots, most polls are taken by “professional” survey takers who use banks of phones to maximize their point earnings. We spent a lot of energy on “proving” to the survey provider side that real humans were answering, and not using scripts or bots to just rapid finish them (answer B to everything kind of stuff). Using sophisticated bots to randomly answer was super common.
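    A minimal sketch of the kind of "straight-lining" check described above (flag respondents who give the same answer to nearly every question, or who finish implausibly fast). The field names and thresholds here are hypothetical illustrations, not the company's actual system:

    ```python
    # Flag responses that look scripted: "answer B to everything" patterns,
    # or completion times too fast for a human to have read the questions.
    from collections import Counter

    def looks_like_straight_lining(answers, seconds_taken,
                                   max_same_ratio=0.9, min_seconds_per_q=2.0):
        """Return True if the response pattern suggests a script, not a human."""
        if not answers:
            return False
        # Share of responses taken up by the single most common answer.
        most_common_count = Counter(answers).most_common(1)[0][1]
        too_uniform = most_common_count / len(answers) >= max_same_ratio
        # A human needs at least a couple of seconds per question.
        too_fast = seconds_taken < min_seconds_per_q * len(answers)
        return too_uniform or too_fast

    # "Answer B to everything" in 5 seconds flat:
    print(looks_like_straight_lining(["B"] * 10, 5))                  # True
    # Varied answers at a plausible pace:
    print(looks_like_straight_lining(["B", "A", "C", "B", "D"], 45))  # False
    ```

    Real systems layer many more signals on top (device fingerprints, response consistency across surveys), but simple uniformity and speed checks are the first line of defense, and they are exactly what "sophisticated bots that randomly answer" are built to evade.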

    They were super ready for AI. We talked about it every day, game-planned how it would work, designed systems around it. “Synthetic survey” was the buzzword. Why ask humans for answers if the statistics machine can convincingly predict the answer for you? We proposed ideas like generating the prediction fast and early, then using actual polls to adjust the result towards reality over time. We had tools to track people and connect their spending to poll questions so we could ask follow-up questions on purchases, to provide “lift” metrics to agencies on whether their ads were working. We were working on the “verification can” tech, only it would have been “Answer this 10 question survey to continue watching your movie.”

    I was so glad to leave that place. They got bought and consolidated into the world’s largest survey company a year later and they fired everyone else that had been left. All they wanted was the tech and the customers.

  • sudo@programming.dev
    5 days ago

    “The idea behind silicon sampling is simple and tantalizing,” they write. “Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use AI agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.”

    Somebody invested money into this company. And there are at least hundreds, maybe thousands, of other businesses with these asinine ideas about how to use AI. They’re all getting capital from someone who’s supposed to be smart because they have capital. Remember that when LLM providers cost-correct token prices.

  • scarabic@lemmy.world
    4 days ago

    Ah yes, “synthetic users.” This is being pushed at my job as well. We’re supposed to use AI to design the next feature for our website, then ask AI “users” what they think of it.

    That’s not our entire vetting process - it’s supposed to replace someone just writing down an idea and saying “I think this is good.” And I agree that just firing from the hip like that is dumb. We want our product managers to do more research into their ideas before they get greenlit to be built.

    The question is whether AI “synthetic users” add anything of value. The team that put this tool into service noted it has a “positivity bias,” aka “you’re absolutely right!” So we feed it an idea we think is good, and it says oh yes it’s very good.

    It’s read every customer email we’ve ever received and every user research report ever conducted by our human UX researchers. But it’s still just not that useful. I think AI is very useful for summarization, searching, and collation of information, but this goes beyond that, asking AI to imagine it is a person and then come up with things to say about an entirely novel concept. And AI is not good at that.

    • Buddahriffic@lemmy.world
      3 days ago

      You might as well just put all those emails into a hat and pull out random ones. Or maybe categorize them first and pick from the hats your feature falls under.

      Try this: ask the AI how useful it is to ask an AI for “synthetic user feedback,” and it will probably even tell you why this particular task is particularly stupid for an LLM. OK, I tried it with Haiku. You might need to follow up with a question pointing out that experience and implementation specifics matter but aren’t going to be in the context window. After that, it gave an in-depth explanation of why this approach is a waste of resources. Using an AI to help summarize the important problem areas users want addressed can work; it just won’t be able to tell you how you did.

  • wampus@lemmy.ca
    4 days ago

    Seeing as the vast majority of “polls” I’ve seen in the last 5-6 years have been “this was a poll done online, so we can’t assign any certainty or margin of error, because we have no idea who actually responded (it could’ve been just, like, two dickheads with bots spamming nonsense), but the results were clickbaity enough for us to run a story” … I don’t see how them cutting out the two dickhead middlemen and just using their own bots is really that much different.

    • BarneyPiccolo@lemmy.cafe
      4 days ago

      The only math class I ever enjoyed was a college statistics class, which actually made sense to me. So I spent my life reading polls, always checking the sample size and the margin of error, because I knew how important those are to accuracy. But that same knowledge also served to let me know that modern polls are becoming horseshit.
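      The sample-size check described above is just the standard margin-of-error formula for a proportion. A quick sketch (using the conservative worst case p = 0.5, the source of the usual “±3%” headline figure):

      ```python
      # 95% margin of error for a simple random sample of size n.
      # z = 1.96 is the standard normal critical value for 95% confidence.
      import math

      def margin_of_error(n, p=0.5, z=1.96):
          return z * math.sqrt(p * (1 - p) / n)

      # A typical 1,000-person poll, in percentage points:
      print(round(100 * margin_of_error(1000), 1))  # prints 3.1
      ```

      Of course, this formula only means anything for a genuine random sample, which is exactly what online opt-in polls and bot-stuffed panels don’t have: no amount of sample size fixes a non-random sample.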

      I remember hearing Rush Limbaugh telling his listeners to either refuse to take polls, or to lie on them and say the opposite. He, and others, taught MAGAs to disrespect polls (cuz polls are the enemy of predatory politics).

      Also, many of the “pollsters” are MAGA operatives in disguise. Add unreliable pollsters to unreliable respondents, and you end up with a weird poll that doesn’t reflect reality at all.

      I no longer enjoy tracking polls. Too many of them have been gamed.

      • wampus@lemmy.ca
        4 days ago

        Yeah – I think a lot of people who took even just one stats course are in a similar boat. Though I think it’s a bit easier to understand the shift if you frame it within the context of Social Media sites controlling the population’s opinions / propaganda.

        Most govts understand at this point, internally at least, that if a message is repeated often and loudly, and it saturates a people’s media, they start to believe it / agree with it. The survey, and the reporting storm surrounding it, isn’t so much about showing people an accurate representation of how viewpoints vary as it is a vehicle for govts/companies to tell people how to think. Sites like Facebook don’t so much sell advertising as sell the ability to socially engineer their users into liking your product / political stance: make enough general noise about a niche position, and people will think it’s a majority opinion.

        Where the bots get used in the workflow isn’t really that big of a concern.

        • BarneyPiccolo@lemmy.cafe
          4 days ago

          Valid. Media Manipulation is the name of the game right now, but that’s so inefficient. You throw out a bunch of propaganda, and hope something gets traction.

          Soon, Data Manipulation will be more important. They’ll start tracking all your data points, and soon they’ll find something they can manipulate you over. They’ll find some obscure crime to leverage you with, like a wrong statement on a Federal form interpreted as a deliberate lie rather than a confused wrong answer. Or they’ll identify patterns, and create laws to make those patterns a crime, and get you on that.

          They’ll threaten your family, job, your healthcare, your money, your home, your freedom, and they’ll eventually get you to dance to their tune.

  • givesomefucks@lemmy.world
    5 days ago

    First off, got a chuckle from the bot check…

    The story quoted new poll findings by a company called Aaru, representing them as research based on the feedback of American adults. But according to an editor’s note, the piece had to be “updated to note that Aaru is an AI simulation research firm.”

    In other words, Axios had failed to disclose that it was citing alleged “polling data” that wasn’t drawn from human respondents at all. Instead, it was dreamed up by a large language model, yet another sign of every imaginable industry trying to leverage AI, even when doing so makes absolutely no sense.

    This was/is a problem, but giving up on stats because bad stats exist is like refusing to ever eat food again because someone got you to try a sardine-and-spinach chocolate cupcake one time.

    In fact, the first, last, and most often brought-up topic in graduate-level statistical analysis isn’t getting numbers; that’s easy. The hard part is finding the flaws in numbers, even the flaws in your own that prove you wrong.

    The vast majority of people never learn that, or never learn that bad stats have been a problem for as long as stats has existed. Even making it through peer review doesn’t always mean anything.

    Like, every single time an article links to a study, do the due diligence and click through: see what’s going on, what the numbers really say, and search for who funds them.

    It’s not like you’ll even know what to look for at first, but if you never try you’ll never improve.

    • PinkyPromise@lemmy.world
      4 days ago

      Some products come with a card promising a sort of cashback in exchange for a positive review.

  • betanumerus@lemmy.ca
    4 days ago

    Sounds like something the GOP and O&G industry would do, with all their bots commenting horse dumpings all the time.