• INeedMana@lemmy.world
    11 months ago

    It’s cool, but my question is (I did not see this addressed in the article or the video, but I might have missed it): did it learn to win the game in general terms, or only this one example? I mean, if the layout of the board was changed, would it still solve it?

    • just_another_person@lemmy.world
      11 months ago

      They don’t discuss it here, but it’s most likely a reinforcement-learning model that compares successive generations of learned behavior to decide whether it’s improving or not.

      It would know that the ball going in the hole is “bad”, and then try to avoid that happening. Each move that is “good” is then kept in a list of moves it should perform in the next generation of its plan to avoid the “bad” things. Loop -> fail -> logic build -> retry. After 6 hours, it has mapped a complete list of “good” moves to affect its final outcome.
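      A minimal tabular Q-learning sketch of that loop -> fail -> retry idea (my own toy example, not the system from the article): the maze is shrunk to a hypothetical 1-D track with one hole and one goal, and each episode plays the role of one "generation".

```python
import random

# Toy stand-in for the tilt maze: a 1-D track of cells (hypothetical layout).
N_CELLS = 6
HOLE, GOAL = 0, N_CELLS - 1
ACTIONS = (-1, +1)  # tilt left / tilt right

# Q-table: learned value of each move in each position.
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_CELLS - 1, state + action))
    if nxt == HOLE:
        return nxt, -1.0, True   # ball fell in the hole: "bad", episode ends
    if nxt == GOAL:
        return nxt, +1.0, True   # reached the goal: "good", episode ends
    return nxt, -0.01, False     # small step cost favors shorter runs

alpha, gamma, eps = 0.5, 0.9, 0.1
random.seed(0)
for episode in range(500):       # generations of loop -> fail -> retry
    state, done = N_CELLS // 2, False
    while not done:
        if random.random() < eps:    # occasionally explore a random move
            action = random.choice(ACTIONS)
        else:                        # otherwise replay the best-known move
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# The learned "list of good moves": the greedy action per position.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, GOAL)}
```

      After training, the greedy policy tilts toward the goal from every position between the hole and the goal. Note the table is indexed by position on this board, which is exactly why it doesn’t transfer to a different board.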

      To answer your question: no, it would not be able to use what it learned here on a different map of the board. It’s building reactions to events based on this one board, and bound by its rules. You could use the ruleset with another board, but it would need to learn it all again, just as a human would.

      The thing about these models is less whether they will work (it is assumed they eventually will, through trial and error) than how efficiently they will work. The number of generational cycles and retries is usually the benchmark when dealing with reinforcement learning, but they don’t discuss that data here either.

      • INeedMana@lemmy.world
        11 months ago

        Yes, but that’s kind of my point

        We see it learn something with insane precision, but most often that is close to an effect of over-training. It would probably take less time to learn another layout, but it’s not learning the general rules (can’t go through walls, holes are bad, we want to get to X); it learns the specific layout. Each time the layout changes, it has to re-learn it

        It is impressive and enables automation in a lot of areas, but in the end it is still only machine learning, adapting weights to a specific scenario
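        To make that layout-specific point concrete, here is a toy illustration (hypothetical layouts, nothing from the article): a memorized per-position move table that wins on the board it was trained on steers straight into the hole on a mirrored board, because it stored positions and moves, not the rule "avoid the hole".

```python
# Two toy layouts with the same rules (avoid hole, reach goal),
# but hole and goal in different positions.
layout_a = {"hole": 0, "goal": 5}
layout_b = {"hole": 5, "goal": 0}   # mirrored board, identical rules

# Policy memorized on layout A: always tilt right, toward cell 5.
policy_a = {s: +1 for s in range(1, 5)}

def rollout(policy, layout, start=2, max_steps=10):
    """Replay the memorized moves on a given layout and report the outcome."""
    s = start
    for _ in range(max_steps):
        s = max(0, min(5, s + policy.get(s, 0)))
        if s == layout["hole"]:
            return "hole"
        if s == layout["goal"]:
            return "goal"
    return "timeout"

print(rollout(policy_a, layout_a))  # 'goal'  — the board it trained on
print(rollout(policy_a, layout_b))  # 'hole'  — memorized moves, not rules
```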

    • indomara@lemmy.world
      11 months ago

      It did learn to use shortcuts to skip parts of the maze, and had to be told not to. Super interesting!

      • INeedMana@lemmy.world
        11 months ago

        Yes, but that’s only because a generation found some random, specific motion that scored better, not because it analyzed that a skip should be possible