• ContrarianTrail@lemm.ee · 1 month ago

    What benefit would a lidar bring that they haven’t already achieved with cameras and radar? The car not seeing where it’s going is not exactly an issue they’re having with FSD.

    • Num10ck@lemmy.world · 1 month ago

      A lidar could tell the difference between a person on a bus billboard and an actual person. It brings 3D to a 2D party.

      • ContrarianTrail@lemm.ee · 1 month ago

        A lidar alone can’t do that. It’ll just build a 3D point cloud. You still need software to detect the individual objects in that cloud, and that’s easier said than done. So far Tesla seems to be achieving this just fine by using cameras alone. Human eyes can tell the difference between an actual person and a picture of a person too, so I don’t see how this is supposed to be something you can’t do with just cameras.
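        The point about a lidar frame being "just" a 3D point cloud is worth making concrete: turning raw (x, y, z) returns into candidate objects takes a separate software step. A toy sketch of that step, using naive greedy Euclidean clustering (real perception stacks use things like DBSCAN or voxel grids; the numbers and radius here are illustrative assumptions):

        ```python
        # Toy clustering pass: group lidar points that lie within `radius`
        # metres of an existing cluster member; everything else starts a
        # new cluster. Illustrative only, not any vendor's actual pipeline.

        def cluster_points(points, radius=0.5):
            """Greedy single-linkage clustering of 3D points by Euclidean distance."""
            clusters = []
            for p in points:
                placed = False
                for c in clusters:
                    if any(sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2
                           for q in c):
                        c.append(p)
                        placed = True
                        break
                if not placed:
                    clusters.append([p])
            return clusters

        # Two well-separated blobs of returns -> two candidate "objects":
        frame = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0),
                 (5.0, 0.0, 0.0), (5.1, 0.0, 0.0)]
        print(len(cluster_points(frame)))  # 2
        ```

        Even this toy version shows the gap between "the sensor sees depth" and "the car knows there is a motorcycle": the clustering, classification, and tracking still have to be written and tuned.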

        • Buffalox@lemmy.world · 1 month ago (edited)

          So far Tesla seems to be achieving this just fine by using cameras alone.

          Funny, last I heard, Tesla FSD has a tendency to run into motorcycles.
          With lidar there would be no doubt that there is an actual object there, and obviously you don’t drive into it.

          • ContrarianTrail@lemm.ee · 1 month ago (edited)

            No, and neither are your eyes, but you can still see the world in 3D.

            You can use normal cameras to create 3D images by placing two cameras next to each other and creating a stereogram. Alternatively, you can do this with just one camera by taking a photo, moving the camera slightly, and then taking another photo - exactly what the cameras on a moving vehicle are doing all the time. Objects closer to the camera move differently than the background. With a billboard showing a person, the background in that picture moves relative to the person differently than the background behind an actual person would.
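            The parallax argument above can be put in numbers. For a rectified stereo pair (or one camera that moved a known baseline between frames), triangulated depth is Z = f · B / d, where f is the focal length in pixels, B the baseline in metres, and d the disparity in pixels. A minimal sketch, with all values illustrative assumptions rather than any real vehicle's parameters:

            ```python
            # Depth from disparity for a rectified pair: Z = f * B / d.
            # Near objects shift more between the two views (large d),
            # far objects shift less (small d) - which is exactly how a
            # flat billboard betrays itself as a single distant plane.

            def depth_from_disparity(focal_px: float, baseline_m: float,
                                     disparity_px: float) -> float:
                """Triangulated depth in metres from pixel disparity."""
                if disparity_px <= 0:
                    return float("inf")  # no parallax => point at infinity
                return focal_px * baseline_m / disparity_px

            # Assumed 1000 px focal length, camera moved 0.5 m between frames.
            # A pedestrian 10 m away produces 50 px of parallax; a billboard
            # 50 m away produces only 10 px:
            print(depth_from_disparity(1000, 0.5, 50))  # 10.0
            print(depth_from_disparity(1000, 0.5, 10))  # 50.0
            ```

            The catch, and the counterpoint to the comment, is that d must be recovered by matching pixels between frames, which is exactly the software problem that lidar sidesteps by measuring range directly.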

            • Buffalox@lemmy.world · 1 month ago

              neither are your eyes

              That’s a grossly misleading statement.
              We definitely use two eyes to achieve a 3D image with depth perception.

              So the question is obviously whether Tesla does the same with their Camera AI for FSD.

              IDK if they do, but if they do, they apparently do it poorly, because FSD has a history of driving into things that are obviously (to a human) right in front of it.