Researchers used AI to design a new material that they used to build a working battery – it requires up to 70 percent less lithium than some competing designs.
Also, the AI would just have sped up a plan the researchers already had to try new approaches, because AI doesn’t create new ideas or think things up out of nowhere.
If you tell an AI to work within a certain range and it gives you results, then the AI “came up with” a design about as much as Google came up with search results when you typed something into the search bar.
That’s not true at all. AI can in fact generate novel techniques and solutions, and it has already done so in biotech and electrical engineering. I don’t think you understand how AI works or what it is.
I think maybe people are running into a misunderstanding between LLMs and neural nets, or machine learning in general? AI has become too big of an umbrella term. We’ve been using NNs for a while now to produce entirely new ways to go about things. They can find bugs in games that humans can’t, they’ve been used to design new wind turbine blades (even several asymmetrical ones, which human designers just don’t really do), and they can plot out entirely new ways of locomotion when given physical bodies. Machine learning is fascinating and can produce very unique results, partly because it can be set up without the existing design biases humans have.
And the nature of computers is that they are orders of magnitude better than humans at brute forcing. Machine learning can brute-force its way through (or, depending on the technique, search more efficiently than brute force) many, many more designs and techniques than we could ever test manually. Sure, it’ll fail many times, but it’s just a numbers game, and it can pump those numbers. It’ll try a lot of weird and unique stuff we wouldn’t even think to try, with varying degrees of success.
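To make that concrete, here’s a minimal sketch of what “brute forcing a design space” can look like. Everything in it is invented for illustration (the three design knobs and the score() function are placeholders); a real pipeline would score candidates with an expensive simulation, a lab test, or a trained surrogate model instead:

```python
import random

def score(design):
    # Hypothetical stand-in for an expensive simulation or lab measurement:
    # reward a made-up "capacity" while penalizing lithium content.
    lithium, spacing, dopant = design
    capacity = 1.0 - (lithium - 0.3) ** 2 - (spacing - 0.5) ** 2 - 0.1 * dopant
    return capacity - 0.5 * lithium

def random_design():
    # Three invented knobs in [0, 1): lithium fraction, lattice spacing, dopant level.
    return (random.random(), random.random(), random.random())

# "Brute force": evaluate a huge number of random designs and keep the best one.
best = max((random_design() for _ in range(100_000)), key=score)
print("best toy design:", best, "score:", round(score(best), 4))
```

Swap the random sampling for an evolutionary or Bayesian search loop and you get the “smarter than brute force” versions; the point is just that the machine can evaluate orders of magnitude more candidates than a person could.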
Name one that wasn’t just the system doing the thing it was told and the users being surprised by the result. You know, the same way people are surprised when research turns up results they didn’t expect using other approaches.
It’s a weird way of asking this. Of course it’s going to do what it’s told; the alternative is that it spits out a battery design out of the blue, for no reason. If it somehow found a way to make batteries with less lithium in a way that had never been done before, isn’t that an unexpected result from another approach?
This is not general artificial intelligence. Everything we have is narrow AI, focused on solving one specific problem, from identifying birds to understanding interactions between drugs.
Yeah, that would be coming up with a battery design.
What novel solutions has AI produced in electrical engineering?
That’s the point: it takes all the factors we know about and speed-runs through all the possible ways it could work. Humans don’t have the time to look at every single possible way a battery could be constructed, but an ML model can just work its way through the problem faster and without human intervention.
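If it helps, that “speed running through the possibilities” is basically a big screening loop. Here’s a toy sketch; the features, the predict_score() stand-in, and the cutoffs are all invented, where a real pipeline would use a trained model plus physics simulations to narrow millions of candidates down to a short list:

```python
import numpy as np

rng = np.random.default_rng(0)

# One row per hypothetical candidate material; the four columns are invented
# features (pretend column 0 is the lithium fraction).
candidates = rng.random((1_000_000, 4))

def predict_score(x):
    # Stand-in for a trained surrogate model (e.g. one fit to simulation data);
    # here it is just a fixed weighted sum so the example runs on its own.
    return x @ np.array([0.5, -0.2, 0.8, -0.1])

scores = predict_score(candidates)
keep = (scores > 0.6) & (candidates[:, 0] < 0.3)   # decent score AND low lithium

# Hand the best few survivors to humans for real simulation and lab work.
shortlist = candidates[keep][np.argsort(-scores[keep])][:20]
print(f"{keep.sum()} of {len(candidates)} candidates pass; keeping {len(shortlist)} for review")
```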
Plus, just like with the new group of antibiotics we just used AI to discover, it will allow truly thinking humans to expand upon it.
Really sick of this “oh, but you don’t realize AI doesn’t actually think! Therefore it’s all worthless!” smug bullshit, like you think you’re bringing anything of value to the conversation.
I didn’t say it was worthless. In fact, I said the exact same things you just said in another post, but with the additional detail that the name actually does matter when it is clearly misleading people into thinking it is something it is not.
What a terribly ignorant thing to say. When people make these armchair comments, they’re only hurting ordinary people who could get real benefits from using the technology.
What a giant leap you’ve taken there. Speeding up existing processes is extremely helpful for average people, just like weather models, which also did things we were already doing, only far faster and with more variables than people could handle without the automation.
AI will be very helpful. It will not magically solve all of our problems on its own, which is how ‘AI comes up with’ is being presented.
My favorite part is the one where you skipped over exactly what I was talking about
My favorite part was where you accused me of hurting people because I said AI does what we already do faster.
You compared AI to googling bro
I’m done with this convo lmao
By this very same logic, nobody has ever discovered anything, because they were just speeding up someone else’s plans, or improving on or deriving from someone else’s findings.
Genius.
At their core, weather models, web searches, and AI are all pattern recognition at various levels of complexity and scope. Just like a bicycle is comparable to a motorcycle because they both have two wheels, even though one is powered and can go faster and farther without wearing out the rider.
AI is not a person capable of coming up with something on its own.
I never claimed that. Human discoveries are just new permutations of observed phenomena. Every single mechanism of the universe is a permutation of the baseline functionality: physics. Therefore, if we’re shitting on permutations, you’re shitting on all of science. AI can do what we do, faster. It’s just applied “knowledge”, no different from humans. In fact, that’s the whole point of neural networks: to emulate what our brains literally do right now.
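In case it helps ground the terminology: at its smallest, a neural network is just weighted sums passed through simple nonlinearities, with the weights fitted to data. A minimal sketch, where all the sizes and numbers are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer "neural network": each layer is a weighted sum followed by
# a nonlinearity. The weights are random here; training would fit them to data.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)   # ReLU activation
    return hidden @ W2 + b2               # a single predicted number

x = rng.random(4)       # a made-up input, e.g. four measured properties
print(forward(x))       # untrained, so the output is meaningless until the weights are fitted
```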
Not even close to true
Do you think AI just does things unprompted?
No one said anything about unprompted
😏
😳
Only a small subset of AI uses prompts.
Think of prompts as input
Deep learning, the most impressive type of AI, doesn’t use inputs.