I know current learning models work a little like neurons, but why not just make a sim that works exactly like how we understand neurons to work?

  • x86x87@lemmy.one

    Simulating even one neuron is very complex. The neurons in artificial neural nets used in machine learning are a gross oversimplification. On top of this you need to get the wiring right. On top of that you need to get the sensory system right (a brain without input is worthless). On top of that you need an environment. So it’s multiple layers of complexity that we don’t have.
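
    To make the gap concrete, here is a rough sketch (Python, with made-up parameter values) of a leaky integrate-and-fire model, itself already a heavy simplification of a real neuron, next to the one-line “neuron” used in machine learning:

    ```python
    import math

    # Leaky integrate-and-fire: still a huge simplification of a real neuron,
    # but already far richer than the "neuron" in an artificial network.
    def lif_step(v, input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r=1.0):
        """Advance the membrane potential v by one time step; return (new_v, spiked)."""
        dv = (-(v - v_rest) + r * input_current) / tau
        v = v + dv * dt
        if v >= v_thresh:                  # threshold crossed -> emit a spike, reset
            return v_reset, True
        return v, False

    # The machine-learning "neuron": a weighted sum squashed by a sigmoid.
    def ann_neuron(inputs, weights, bias):
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))

    # Simulate 100 ms of the spiking neuron under a constant input current.
    v, spikes = -65.0, 0
    for _ in range(1000):
        v, spiked = lif_step(v, input_current=20.0)
        spikes += spiked
    print(f"LIF spikes in 100 ms: {spikes}")
    print(f"ANN 'neuron' output: {ann_neuron([0.5, -1.0], [0.8, 0.3], 0.1):.3f}")
    ```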

    • givesomefucks@lemmy.world

      To clarify:

      We don’t even know how human intelligence/consciousness works, let alone how to simulate it.

      But we know how an individual neuron works.

      The issue with OP’s idea is that we don’t know how to tell a computer what a bunch of neurons do to create an intelligence/consciousness.

    • IvanOverdrive@lemm.ee

      To understand the complexity of the human brain, you need a brain more complex than the human brain.

  • WolfLink@lemmy.ml

    Short answer: Neural networks and other “machine learning” technologies are inspired by the brain, but they focus on taking advantage of what computers are good at. Simulating actual neurons is possible, but it isn’t something computers are good at, so it will be slow and resource-intensive.

    Long Answer:

    1. Simulating neurons is fairly complex. Not impossible; we can simulate microscopic worms, but simulating a human brain of 100 billion neurons would be a bit much even for modern supercomputers (a rough back-of-the-envelope estimate is sketched after this list)
    2. Even if we had such a simulation, it would run much slower than realtime. Note that such a simulation would involve data sent between networked computers in a supercomputing cluster, while in the brain signals only have to travel short distances. Also what happens in the brain as a simple chemical release would be many calculations in a simulation.
    3. “Training” a human brain takes years of constant input to go from a baby that isn’t capable of much to a child capable of speech and basic reasoning. Training an AI simulation of a human brain is at least going to take that long (plus longer given that the simulation will be slower)
    4. That human brain starts with some basic programming that we don’t fully understand
    5. There’s a lot more about the human brain that we don’t fully understand
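
    To put rough numbers on points 1 and 2, here is a back-of-the-envelope sketch; every figure in it (synapses per neuron, cost per synapse update, time step, machine speed) is an assumption chosen only for illustration:

    ```python
    # Every figure here is an assumption, chosen only to show the orders of magnitude.
    neurons = 100e9                 # the 100 billion figure used above
    synapses_per_neuron = 7_000     # commonly quoted ballpark
    flops_per_synapse_step = 10     # made-up cost of updating one synapse once
    dt = 1e-4                       # 0.1 ms simulation time step
    steps_per_second = 1.0 / dt

    total_synapses = neurons * synapses_per_neuron
    flops_per_simulated_second = total_synapses * flops_per_synapse_step * steps_per_second

    print(f"synapses: {total_synapses:.1e}")
    print(f"FLOPs per simulated second: {flops_per_simulated_second:.1e}")

    # Against a hypothetical machine sustaining one exaFLOP per second:
    machine_flops = 1e18
    print(f"slowdown vs. real time: ~{flops_per_simulated_second / machine_flops:.0f}x")
    ```
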
  • los_chill@programming.dev

    Neurons undergo physical change in their interconnectivity. New connections (synapses) are created, strengthened, and lost over time. We don’t have circuits that can do that.
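
    We can approximate that rewiring in software, though. Below is a toy sketch of Hebbian-style plasticity (illustrative rates, not a biological model): connections that are active together strengthen, idle ones decay, faded ones get pruned, and new ones occasionally sprout.

    ```python
    import random

    # Toy Hebbian-style plasticity: weights strengthen when both ends are active,
    # decay otherwise, get pruned when they fade, and new ones occasionally sprout.
    def update_synapses(weights, pre_active, post_active,
                        learn_rate=0.1, decay=0.02, prune_below=0.01):
        new_weights = {}
        for (i, j), w in weights.items():
            if pre_active[i] and post_active[j]:
                w += learn_rate * (1.0 - w)   # "fire together, wire together"
            else:
                w -= decay * w                # unused connections weaken
            if w >= prune_below:              # weak connections disappear entirely
                new_weights[(i, j)] = w
        if random.random() < 0.1:             # occasionally sprout a new connection
            new_weights[(random.randrange(5), random.randrange(5))] = 0.05
        return new_weights

    weights = {(0, 1): 0.2, (1, 2): 0.5, (3, 4): 0.05}
    for _ in range(50):
        pre = [random.random() < 0.5 for _ in range(5)]
        post = [random.random() < 0.5 for _ in range(5)]
        weights = update_synapses(weights, pre, post)
    print(weights)
    ```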

    • RememberTheApollo_@lemmy.world

      Did OP mean accomplishing the connectivity with software rather than hardware? No, we don’t have hardware that can modify itself like a brain does, but I think it is possible to accomplish that in code.

      • palebluethought@lemmy.world

        Sure, but now you’re talking about running a physical simulation of neurons. Real neurons aren’t just electrical circuits. Not only do they evolve rapidly over time, they’re powerfully influenced by their chemical environment, which is controlled by your body’s other systems, and so on. These aren’t just minor factors, they’re central parts of how your brain works.

        Yes, in principle, we can (and have, to some extent) run physical simulations of neurons down to the molecular resolution necessary to accomplish this. But the computational power required to do that is massively, like billions of times, more expensive than the “neural networks” we have today, which are really just us anthropomorphizing a bunch of matrix multiplication.
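
        The “bunch of matrix multiplication” part is not an exaggeration. Here is a minimal sketch of one layer of such a network (plain Python, no frameworks, made-up numbers):

        ```python
        # One "layer" of an artificial neural network: multiply the input vector by a
        # weight matrix, add a bias, apply a nonlinearity. That's the whole trick.
        def layer(x, weights, biases):
            out = []
            for row, b in zip(weights, biases):
                z = sum(w * xi for w, xi in zip(row, x)) + b   # one row of the matrix product
                out.append(max(0.0, z))                        # ReLU nonlinearity
            return out

        x = [0.2, -1.3, 0.7]            # input activations
        W = [[0.1, 0.4, -0.2],          # each row produces one output "neuron"
             [0.7, -0.5, 0.3]]
        b = [0.0, 0.1]
        print(layer(x, W, b))           # prints approximately [0.0, 1.1]
        ```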

        It’s simply not feasible to do this at a scale large enough to be useful, even with all the computation on Earth.

      • Dkarma@lemmy.world

        Performance suffers. Basically, we don’t have the computing power to scale the software to the performance level of the human brain.

    • masterspace@lemmy.ca

      Yes we do. FPGAs and memristors can both recreate those effects at the hardware level. The problem is scaling them, and their necessary number of interconnections, to the number of neurons in the human brain, on top of getting their base wiring and connections close to how our genetics build and wire our brains.
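
      For a sense of what hardware-level plasticity means, here is a toy version of the classic linear-drift memristor model (all parameter values are made up and real devices are messier): the device’s resistance depends on the charge that has flowed through it, so the “weight” lives in, and is updated by, the same physical element.

      ```python
      # Toy linear-drift memristor: its resistance depends on the history of current
      # through it, so the device itself "remembers" like a synaptic weight.
      # All parameter values below are made up for illustration.
      class Memristor:
          def __init__(self, r_on=100.0, r_off=16_000.0, k=1e8, x=0.1):
              self.r_on, self.r_off, self.k = r_on, r_off, k
              self.x = x                            # internal state in [0, 1]

          def resistance(self):
              return self.r_on * self.x + self.r_off * (1.0 - self.x)

          def apply_voltage(self, v, dt=1e-6):
              i = v / self.resistance()             # Ohm's law at this instant
              self.x += self.k * i * dt             # state drifts with the charge that flowed
              self.x = min(1.0, max(0.0, self.x))
              return i

      m = Memristor()
      for _ in range(1000):                         # positive pulses lower the resistance...
          m.apply_voltage(1.0)
      print(f"after positive pulses: {m.resistance():.0f} ohms")
      for _ in range(1000):                         # ...negative pulses raise it again
          m.apply_voltage(-1.0)
      print(f"after negative pulses: {m.resistance():.0f} ohms")
      ```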

  • PhlubbaDubba@lemm.ee

    That’s kinda the idea of neural network AI

    The problem is that neurons aren’t transistors: they don’t operate in base-2 arithmetic, and they’re basically an example of chaos theory, where a system is narrow enough for its outer bounds to be defined, yet complex enough that the amount of “picture resolution” needed to accurately predict how it will behave is currently beyond our ability to replicate or even theorize about.
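
    The “picture resolution” problem shows up even in toy chaotic systems. As an illustration (the logistic map, not a model of a neuron), two trajectories that start one part in ten million apart become completely different within a few dozen steps, so any finite measurement precision eventually stops predicting the real system:

    ```python
    # Logistic map: a one-line chaotic system. Two starting points that differ by
    # one part in ten million end up on completely different trajectories.
    def trajectory(x, steps, r=3.9):
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    a, b = 0.5, 0.5 + 1e-7
    for steps in (10, 30, 50):
        print(steps, trajectory(a, steps), trajectory(b, steps))
    ```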

    This is basically the realm where you’re no longer asking math to fetch a logical answer to a question and are instead trying to use it to perfectly calculate the future, like an oracle trying to divine one’s own fate from the stars. It even comes with its own system of cool runes!

    I fully imagine we will have a precise calculation of Rayo’s Number before we have a binary computer capable of being raised as a human with a fully human intelligence and emotional depth.

    More likely I see the “singularity” coming in the form of someone who figures out how to augment human intelligence with an AI neural implant capable of the sorts of complex calculations that are impossible for a human mind to fathom while benefiting from human abilities for pattern recognition to build more accurate models.

    If someone figures out how to do this without accidentally creating a cheap 80’s slasher villain, it will immediately become the single most sought after medical device in human history, as these new augmented mind humans will instantly become a major competitive pressure for even most manual labor jobs.

  • rtfm_modular@lemmy.world

    First, we don’t understand our own neurons enough to model them.

    AI’s “neuron,” or node, is a math equation that takes a numeric input with a variable “weight” that affects the output. An actual neuron is a cell with something like 6,000 synaptic connections each, roughly 600 trillion synapses in total. How do you simulate that? I’d argue the magic of AI is how much more efficient it is by comparison, with only 176 billion parameters in GPT-4.
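
    Putting those two scales side by side, using the rough (and in GPT-4’s case unconfirmed) figures above:

    ```python
    # Figures from the paragraph above, treated as rough ballpark numbers.
    synapses_in_brain = 600e12      # ~600 trillion synapses
    parameters_in_model = 176e9     # the GPT-4 parameter count cited above (unconfirmed)
    print(f"brain synapses per model parameter: ~{synapses_in_brain / parameters_in_model:,.0f}")
    ```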

    They’re two fundamentally different systems, and so is the resulting knowledge. AI doesn’t need to learn like a baby, because the model is the brain. The magic of our neurons is their plasticity and our ability to move freely around this world and be creative. AI is just a model of what it’s been fed, so how do you get new ideas? But it seems that with LLMs, the more data and parameters, the more emergent abilities, so maybe we just need to keep scaling it up.

    AI does pretty amazing and bizarre things today that we don’t understand, and it already takes giant, expensive server farms to do it. AI is extremely compute-heavy and requires a ton of energy to run, so cost is rate-limiting the scale of AI.

    There are also issues with getting more data. Generative AI output is already everywhere, and what good is it to train on its own shit? Also, how do you ethically or legally get that data? Does that data violate our right to privacy?

    Finally, I think AI actually possesses an intelligence with an ability to reason, like us. But it’s fundamentally a different form of intelligence.

    • Phanatik@kbin.social

      I mainly disagree with the final statement, on the basis that LLMs are more advanced predictive-text algorithms. The way they’ve been set up, with a chat box where you’re interacting directly with something that attempts human-like responses, gives the misconception that the thing you’re talking to is more intelligent than it actually is. It gives off a strong appearance of intelligence, but at the end of the day it predicts the next word in a sentence based on what was said previously, and it doesn’t do that good a job of comprehending what exactly it’s telling you. It’s very confident when it gives responses, which also means that when it’s wrong, it delivers the incorrect response very confidently.

      • rtfm_modular@lemmy.world

        Talk to anyone who consumes Fox News daily and you’ll get incorrect predictive text generated quite confidently. You may also deny them their intelligence and humanity for the fallacies they uphold.

        I also think intelligence is a gradient—is an ant intelligent? What about a dog? Chimp? Who gets to draw the line?

        It may very well be a very complex predictive text generator that hallucinates, but I’m concerned that framing minimizes its capabilities, for better or worse. Its ability to maintain context, and enough plasticity to reason and change its responses, points to something more, even if we’re at an early stage.

        • Phanatik@kbin.social

          What you’re alluding to is the Turing test, and it hasn’t been proven that any LLM would pass it. At this moment, there are people who have failed the inverse Turing test, being unable to ascertain whether what they’re speaking to is a machine or a human. The latter can be done, and has been done, by things less complex than LLMs, and isn’t proof of an LLM’s capabilities over more rudimentary chatbots.

          You’re also suggesting that this framing minimises the complexity of its outputs. My determination is that what we’re getting is the limit of what it can achieve. You’d have to prove that any allusion to higher intelligence can’t be attributed to coercion by the user, or to the model just hallucinating an imitation of artificial intelligence from media.

          There are elements of the model that are very fascinating, like how it organises language into these contextual buckets, but this is still a predictive model. Understanding that certain words appear near each other in certain contexts is hardly intelligence; it’s a sophisticated machine learning algorithm.
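
          To make the “words appear near each other in certain contexts” point concrete, here is a toy co-occurrence count over a made-up three-sentence corpus; real models learn dense vectors rather than raw counts, but the underlying idea is similar:

          ```python
          from collections import defaultdict
          from itertools import combinations

          # Count how often pairs of words share a sentence in a tiny made-up corpus.
          corpus = [
              "the cat sat on the mat",
              "the dog sat on the rug",
              "the cat chased the dog",
          ]
          cooccur = defaultdict(int)
          for sentence in corpus:
              for a, b in combinations(sorted(set(sentence.split())), 2):
                  cooccur[(a, b)] += 1

          # Words that keep showing up in the same contexts ("cat"/"dog", "mat"/"rug")
          # end up associated; LLMs learn a dense, continuous version of this idea.
          for pair, count in sorted(cooccur.items(), key=lambda kv: -kv[1])[:5]:
              print(pair, count)
          ```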

          • rtfm_modular@lemmy.world

            All fair points, and I don’t deny that predictive text generation is at the core of what’s happening. I think it’s a fair statement that most people hear “predictive text” and think of the suggested words in a text message, when it’s more than that.

            I also don’t think Turing Tests are particularly useful long term because humans are so fallible. We too hallucinate all the time with our convictions based on false memories. Getting an AI to have what seems like an emotional response or show uncertainty or confusion in a Turing test is a great way to trick people.

            The algorithm is already a black box, as are the mechanics of our own intelligence. We have no idea where the ceiling is for this technology yet. This debate quickly goes into the ontological and epistemological discussion about what it means to be intelligent: if the AI’s predictive text generation is complex enough that you simply cannot tell a difference, then is there a meaningful difference? What if we are just insanely complex algorithms?

            I also don’t trust that what the market sees in AI products is indicative of the current limits. AGI isn’t here yet, but LLMs are a scary big step in that direction.

            Pragmatically, I will maintain that AI is a different form of intelligence because I think it shortcuts to better discussions around policy and how we want this tech in our lives. I would gladly welcome the news that tells me I’m wrong.

  • swiftcasty@kbin.social

    Hardware limitations. A model that big would require millions of video cards, thousands of terabytes of storage, and hundreds of terabytes of RAM.
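
    Rough numbers, just for scale; every figure here is an assumption (~600 trillion synaptic weights, 2 bytes each, ~80 GB of memory per card):

    ```python
    # Every figure here is a rough assumption, purely to get a sense of scale.
    synapses = 600e12               # ~600 trillion synaptic weights
    bytes_per_weight = 2            # 16-bit weights
    gpu_memory_bytes = 80e9         # one high-end accelerator card (~80 GB)

    total_bytes = synapses * bytes_per_weight
    print(f"weights alone: ~{total_bytes / 1e12:,.0f} TB")
    print(f"cards needed just to hold them: ~{total_bytes / gpu_memory_bytes:,.0f}")
    ```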

    This is also where AI ethics plays into whether such a model should exist in the first place. People are really scared of AI but they don’t know that ethics standards are being enforced at the top level.

    Edit: get Elon Musk on the phone, he’s deranged enough to spend that much money on something like this while ignoring the ethical and moral implications /s

    • seaQueue@lemmy.world

      Edit: get Elon Musk on the phone, he’s deranged enough to spend that much money on something like this while ignoring the ethical and moral implications /s

      You joke but he’d probably traumatize a synthetic intelligence enough that it’d think 4chan user behavior is the baseline human standard

  • rufus@discuss.tchncs.de

    Simple answer: we don’t have any computer to run that on. I don’t see any absolute limitations ruling out the approach, but the human brain seems to have hundreds or even thousands of trillions of connections, with analog electrical impulses and chemistry on top. That’s still sci-fi; even the largest supercomputers can’t do it as of today. I think scientists have already done it for smaller brains, like those of flies(?), so the concept should work.

    And then there is the question of what you’re going to do with it. You can’t just kill a human, freeze the brain, slice it, and digitize it by looking through a microscope a trillion times. So you have to make it learn from the ground up, and that requires a connection to a body. So you also need to simulate a whole body and the world it’s in on top, to make it learn anything and not just activate random neurons. That’s going to be sci-fi (like the Matrix) for the near and mid-term future.

  • letsgo@lemm.ee

    A programmer’s pet peeve is someone who says “why can’t you just…”.

    But the fundamental problem with your plan, assuming it’s possible at all (it’s been said that if the brain were simple enough for us to understand, then we’d be too simple to understand it), is that you’re going to want to make your AI at least as smart as someone who’s 30-40 years old, which by definition would take 30-40 years.

  • Caveman@lemmy.world

    AI is still a very slow learner. The base OS for humans is really advanced, with hormonal biases built in and an initial structure connected to inputs and outputs.

    Sure, it’s possible, but we’re not there yet. It could still be 10-100 years until we manage to get a good one, depending on things we don’t know yet.

  • themeatbridge@lemmy.world

    Learning models operate like neurons in that they make connections based on experiences (data). But that’s like saying a microwave works like a chef in that it heats up food. We can’t build a microwave that can run a kitchen, design a menu, take a bump in the walk-in, and fire off dishes the way a chef will.

  • cygon@lemmy.world

    Just some thoughts:

    • Current LLMs (chat AIs) are “frozen brains.” (Over-)simplified: the AI’s input neurons are fed the 2048 prior words (the “context”), each output corresponds to a different word, and the output that lights up most strongly is the next word the AI will say. The picked word is then added to the “context” and the neural network is run once more for the word after that (a minimal sketch of this loop follows the list).

    • Coming up with the weights of the synapses takes insane effort (run millions of books through the “context” and check whether the AI predicts the next word correctly; if not, adjust the weights). Afaik, GPT-4 was trained on more than 2000 Nvidia A100 GPUs for somewhere around 4 to 7 months; I think they mentioned paying for 7.5 megawatt-hours.

    • If you had a supercomputer that could keep running the AI with live training, the AI’s ability to string words together would likely, and quickly, degrade into incoherence, because it would just ingest and repeat whatever went into it. Existing biological brains have complex mechanisms for distilling experiences and evaluating them in terms of the usefulness/success of their own actions.
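
    Here is the generation loop from the first bullet as a minimal sketch; predict_next_word is a purely hypothetical stand-in for the entire trained network:

    ```python
    # Sketch of the generation loop described above. `predict_next_word` stands in
    # for the entire trained network and is purely hypothetical here.
    def predict_next_word(context):
        # A real LLM scores every possible next token with a huge neural network;
        # this stub just returns a canned continuation for illustration.
        canned = {"the cat sat on": "the", "cat sat on the": "mat"}
        return canned.get(" ".join(context[-4:]), "<end>")

    context = "the cat sat on".split()
    while len(context) < 2048:                # the fixed-size "context" window
        word = predict_next_word(context)     # take the strongest output
        if word == "<end>":
            break
        context.append(word)                  # feed it back in and repeat
    print(" ".join(context))                  # -> "the cat sat on the mat"
    ```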


    I think that foundation, that part that makes biological brains put the action/consequence in the foreground of the learning experience, rather than just ingesting, is what eludes us. Perhaps at some future point in time, we could take the initial brain structure that grows in a human as the seed for an AI (but I guess then we’d likely have to simulate all the highly complex traits of real neurons, including mixed chemical and electrical signaling and possibly even quantum-level effects that have been theorized).