• amzd@kbin.social · 6 months ago

    AI is insanely bad at distinguishing fact from hallucination, which seems like a terrible match for math

    • blargerer@kbin.social · 6 months ago

      I haven’t read this article, but the one place machine learning is really, really good is narrowing down a very large solution space where false negatives and false positives are cheap. Frankly, I’m not sure how you’d go about training an AI to solve math problems, but if you could figure that out, it sounds roughly like it would fit the bill. You just need human verification as the final step, with the understanding that humans will rule out something like 90% of the tries; but if you only need one success, that’s fine. As a real-world example, machine learning is routinely used in astronomy to narrow down candidate stars or galaxies from potentially millions of options to maybe 200 that can then undergo human review.
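The filter-then-verify workflow described here can be sketched in a few lines. This is a minimal toy, assuming some trained model that assigns each candidate a score; the random scorer below is a hypothetical stand-in for it:

```python
import random

random.seed(0)


def model_score(candidate):
    """Hypothetical stand-in for a trained classifier's score in [0, 1].

    In practice this would be a model trained on labeled examples;
    false positives and false negatives here are cheap, because a
    human reviews everything that survives the cut.
    """
    return random.random()


def shortlist(candidates, keep=200):
    """Narrow a huge candidate pool down to a small human-review queue."""
    ranked = sorted(candidates, key=model_score, reverse=True)
    return ranked[:keep]


# 100,000 synthetic "candidate stars" reduced to 200 for human review.
candidates = range(100_000)
review_queue = shortlist(candidates, keep=200)
```

The point is only the shape of the pipeline: the model prunes the space, and human verification is the final, authoritative step.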

      • dragontamer@lemmy.world · 6 months ago

        No one is talking about automated theorem provers (see the four color theorem) or symbolic solvers (see Mathematica). Those tools already revolutionized math decades ago.

        The only thing that has come out in the past year or two is LLMs, which are clearly overhyped bullshit.
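For contrast, the symbolic solvers mentioned above compute exact answers rather than statistical guesses. A minimal sketch using SymPy, a free computer algebra system standing in here for Mathematica:

```python
# Symbolic (exact, not numeric) equation solving with SymPy.
from sympy import Eq, solve, symbols

x = symbols("x")

# Solve x^2 - 5x + 6 = 0 exactly; the roots come back as
# symbolic integers, not floating-point approximations.
roots = solve(Eq(x**2 - 5 * x + 6, 0), x)
# set(roots) == {2, 3}
```

Tools like this are deterministic: given a well-posed equation they either return an exact answer or none, which is the opposite of a model that hallucinates plausible-looking output.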

        • Even_Adder@lemmy.dbzer0.com · 6 months ago

          The article doesn’t mention LLMs, and many ML-related things came out in the last year or two that aren’t LLMs.

  • EarthShipTechIntern@lemm.ee · 6 months ago

    Set to be revolutionized by AI because AI can’t do math.

    Says my brother, a math professor who works with people trying to develop AI.

        • misk@sopuli.xyz · 6 months ago

          It can do statistics and probability incredibly well. Chatbots are a gross waste of that capability, but it’s proving to be quite capable in areas where lots of brute-force computation was required before (like in biotech).

        • misk@sopuli.xyz · 6 months ago

          That’s because ChatGPT and the like use machine learning to calculate the odds of word combinations that make up a plausible sentence in a given context. There are scientific studies that postulate we’ll never have enough data to train those models properly, not to mention the exponential energy consumption required. But this is not the only application of this technology.
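The "odds of word combinations" idea can be illustrated with a deliberately tiny sketch: a bigram model that scores sentence plausibility from raw counts. Real LLMs use neural networks over vast corpora, but the underlying principle, estimating P(word | context), is the same:

```python
from collections import Counter, defaultdict

# A toy training corpus; real models train on trillions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1


def plausibility(sentence):
    """Product of P(word | previous word) under the bigram counts.

    Sentences that follow observed patterns score high; sentences
    containing never-seen transitions score zero.
    """
    words = sentence.split()
    p = 1.0
    for prev, word in zip(words, words[1:]):
        total = sum(bigrams[prev].values())
        p *= bigrams[prev][word] / total if total else 0.0
    return p
```

Here `plausibility("the cat sat on the mat")` is positive because every word pair was seen in training, while a scrambled sentence like `"the mat sat on the cat"` scores zero: the model judges plausibility, not truth.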

  • Audalin@lemmy.world · 6 months ago

    The article isn’t about automatic proofs, but it’d be interesting to see an LLM that can write formal proofs in Coq/Lean/whatever and call external computer algebra systems like SageMath or Mathematica.
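For a sense of what such output would look like, here is a trivial machine-checkable Lean 4 proof using only the core library (anything interesting would pull in Mathlib, and possibly results from a CAS as suggested above):

```lean
-- A complete, machine-checkable Lean 4 proof: commutativity of
-- natural-number addition, discharged by an existing core lemma.
theorem addComm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The appeal of this pipeline is that the proof assistant, not the LLM, is the arbiter of correctness: a hallucinated proof simply fails to type-check.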

    • CapeWearingAeroplane@sopuli.xyz · 6 months ago

      I was thinking something similar: if you have the computer write in a formal language, designed in such a way that it is impossible to make an incorrect statement, I guess it could be possible to get somewhere with this.