To be fair, an argument can be made for the Lego-block one: using a novel combination of existing technologies to get better results is how nearly all innovation in machine learning happens.
Especially in ML. It’s currently easier to integrate several small specialised models than to train one big model for every use case. If I understand correctly, that was one of the main motivations for Anthropic developing the Model Context Protocol, including making it easier to interact with LLMs from front-end clients.
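For anyone curious what that kind of composition looks like, here’s a minimal sketch using the official mcp Python SDK’s FastMCP helper (the server name and the add tool are made-up placeholders for illustration, not anything from Anthropic’s docs):

    # pip install mcp  (the official Model Context Protocol Python SDK)
    from mcp.server.fastmcp import FastMCP

    # A tiny MCP server exposing one tool; any MCP-aware client
    # (including a front-end driving an LLM) can discover and call it.
    mcp = FastMCP("demo-server")  # hypothetical server name

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two numbers."""  # hypothetical example tool
        return a + b

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default

The point being: each small specialised capability becomes a tool a client can wire in, rather than something you retrain a big model to do.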
Proving a thing that’s only known empirically is extremely valuable, too. We have an enormous amount of evidence that the Riemann hypothesis is correct - infinitely many zeros are in fact known to lie on the critical line - but proving it holds for all of them is a different matter.
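To make the empirical side concrete, here’s a quick sketch in Python with the mpmath library (assuming it’s installed via pip install mpmath); its zetazero(n) returns the nth nontrivial zero of the zeta function, and every zero computed to date has real part exactly 1/2:

    from mpmath import zetazero

    # Print the first five nontrivial zeros of the Riemann zeta function.
    # Each comes back as a complex number on the critical line Re(s) = 1/2.
    for n in range(1, 6):
        rho = zetazero(n)
        print(n, rho)  # e.g. 1  (0.5 + 14.134725141734693j)

Checking zeros like this is exactly the kind of evidence that piles up indefinitely without ever becoming a proof.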
And for the kid challenging the 0.1% result, that’s about as close to pure scientific method as you can get.