Cheap data and the absence of coincidences make maths an ideal testing ground for AI-assisted discovery — but only humans will be able to tell good conjectures from bad ones.
I haven’t read this article, but the one place machine learning is really good is narrowing down a huge solution space where false negatives and false positives are cheap. Frankly, I’m not sure how you’d go about training an AI to solve math problems, but if you could figure that out, it sounds like it would fit the bill. You just need human verification as the final step, with the understanding that humans will rule out maybe 90% of the attempts — but if you only need one success, that’s fine. As a real-world example, machine learning is routinely used in astronomy to narrow down candidate stars or galaxies from potentially millions of options to around 200 that can then undergo human review.
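The filtering workflow described here is easy to sketch: score every candidate with a model, keep only the top handful for human review. A minimal sketch in Python, where `model_score` is a random stand-in for a real trained classifier:

```python
import heapq
import random

random.seed(0)

def model_score(candidate):
    # Stand-in for a trained model's confidence score; in a real pipeline
    # this would be a classifier trained on labeled examples.
    return random.random()

# e.g. a million star/galaxy IDs from a survey
candidates = range(1_000_000)
scored = ((model_score(c), c) for c in candidates)

# Keep only the 200 highest-scoring candidates for human review.
# False positives in this shortlist are cheap: a human vets every
# survivor before anything is claimed.
top = heapq.nlargest(200, scored)
shortlist = [c for _, c in top]
print(len(shortlist))
```

`heapq.nlargest` keeps memory bounded at the shortlist size, which matters when the candidate pool is in the millions.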
No one is talking about automated theorem provers (see the four color theorem) or symbolic solvers (see Mathematica). Those tools already revolutionized math decades ago.
The only thing that has come out in the past year or two is LLMs, which are clearly overhyped bullshit.
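For anyone who hasn’t used a proof assistant: the point is that every step is checked mechanically, so there is no room for hallucination once the claim is stated formally. A toy example in Lean 4 (not related to the four color work, just the kind of statement such tools verify):

```lean
-- The checker verifies that Nat.add_comm really proves this statement;
-- an unjustified step would be rejected, not quietly accepted.
theorem add_comm_toy (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```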
AI is insanely bad at distinguishing fact from hallucination, which seems like a terrible match for math.
AI isn’t just LLMs.
The article doesn’t mention LLMs, and many ML-related things have come out in the last year or two that aren’t LLMs.