Firm predicts it will cost $28 billion to build a 2nm fab and $30,000 per wafer, a 50 percent increase in chipmaking costs as complexity rises :: As wafer fab tools get more expensive, so do fabs and, ultimately, chips. A new report claims that

    • tailiat@lemmy.ml · 11 months ago

      The ratio of people who are capable of writing less-shitty software to the number of things we want to do with software ensures this problem will not get solved anytime soon.

      • go_go_gadget@lemmy.world · 11 months ago

        > The ratio of people who are capable of writing less-shitty software to the number of things we want to do with software ensures this problem will not get solved anytime soon.

        Eh, I disagree. Every software engineer I’ve ever worked with knows how to make some optimizations to their code base, but it’s literally never prioritized by the business. I suspect this will shift as IaaS takes over and it becomes a lot easier to generate the graphs showing your product’s stability being maintained while the resources it consumes have been reduced.

    • winterayars@sh.itjust.works · 11 months ago

      But what if i want to do all my work inside a JavaScript “application” inside a web browser inside a desktop?

      (We really do have so much CPU power these days that we’re inventing new ways to waste it…)

    • SynonymousStoat@lemmy.world · 11 months ago

      As long as humans have some hand in writing and designing software we’ll always have shitty software.

      • AA5B@lemmy.world · 11 months ago

        While I agree with the cynical view of humans and shortcuts, I think it’s actually the “automated” part of the process that’s to blame. If you develop an app from scratch, there’s only so much you can code. But if you start with a framework, you’ve automated part of your job for huge efficiency gains, and you’re also starting off with a much bigger app, likely with lots of functionality you aren’t really using.

        • SynonymousStoat@lemmy.world · 11 months ago

          I was more getting at the fact that with software development it’s never just the developers making all of the decisions. There are always stakeholders who force time and attention onto other things and set unrealistic deadlines, while most software developers I know would love to take the time to do everything the right way first.

          I also agree with the example you provided. Back when I used to work on more personal projects I loved it when I found a good minimal framework that allowed you to expand it as needed so you rarely ever had unused bloat.

        • go_go_gadget@lemmy.world · 11 months ago

          If you’re not using the functionality, it’s probably not contributing significantly to the required CPU/GPU cycles. Though I would welcome a counterexample.

  • filister@lemmy.world · 11 months ago

    And NVIDIA will use this as an excuse to hike up their prices by 100+%.

    On a serious note, this will progressively come down in price as time passes, and not everyone needs cutting-edge 2nm technology. The transition to 2nm will also increase transistor density, so comparing wafer prices without acknowledging the increased density doesn’t give you the whole picture.

    Plus DRAM scaling is becoming cumbersome and a lot of components can’t scale to 2nm, so “2nm” is mostly a marketing term, and there are a lot of challenges that make this tech so expensive and difficult to design and produce.
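    The density point can be made concrete with a back-of-the-envelope cost-per-transistor sketch. Only the $30,000 2nm wafer price comes from the article; the previous-node wafer price and the density gain below are illustrative assumptions:

```python
# Back-of-the-envelope cost per transistor. Only the $30,000 2nm wafer
# price comes from the article; the 3nm wafer price and the density gain
# are illustrative assumptions, not published figures.
wafer_price_3nm = 20_000   # USD per wafer (hypothetical previous-node price)
wafer_price_2nm = 30_000   # USD per wafer (figure cited in the article)
density_gain = 1.28        # assumed transistors-per-area improvement, 3nm -> 2nm

# On equal-sized wafers, transistors per wafer scale with density, so:
relative_cost = (wafer_price_2nm / density_gain) / wafer_price_3nm
print(f"2nm costs {relative_cost:.2f}x as much per transistor as 3nm")
```

    Under these made-up numbers, a 2nm transistor still costs about 17% more than its 3nm counterpart, which is why density gains alone don’t guarantee cheaper chips.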

  • Dr. Moose@lemmy.world · 11 months ago

    Afaik 2nm is near the theoretical limit for current transistor tech, so this is sort of the end-game for this type of tech.

    • Earthwormjim91@lemmy.world · 11 months ago

      2nm process doesn’t actually mean 2nm though. Hasn’t in over a decade.

      The current 3nm process has a 48nm gate pitch and a 24nm metal pitch. The 2nm process will have a 45nm gate pitch and a 20nm metal pitch.

      “nm” is just a generation label today. After 5nm came 3nm, next is 2nm, then 1nm. They’ll change the naming scheme after that even though they’re still nowhere near actual nanometer sizes.
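      Those pitch numbers also show roughly how much density the new node buys, even though the “2nm” label isn’t a physical size. A crude sketch, treating gate pitch × metal pitch as a stand-in for standard-cell area (real density also depends on cell height, track count, and so on):

```python
# Crude density comparison from the pitches quoted above, using
# gate pitch x metal pitch as a proxy for standard-cell area.
# Real density also depends on cell height, track count, etc.
area_3nm = 48 * 24   # nm^2 proxy: 48nm gate pitch x 24nm metal pitch
area_2nm = 45 * 20   # nm^2 proxy: 45nm gate pitch x 20nm metal pitch

density_gain = area_3nm / area_2nm
print(f"~{density_gain:.2f}x density gain")   # prints ~1.28x density gain
```

      So the quoted pitches imply only about a 1.28× density improvement from one “full node” name to the next.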

      • weew@lemmy.ca · 11 months ago

        Intel already has plans to name further generations xxA, after angstroms, e.g. its 20A and 18A nodes.

      • AA5B@lemmy.world · 11 months ago

        Yeah, I’m a bit curious what the marketing will be as chips have to get more vertical, 3D. Will there be naming to reflect that, or will they just follow the existing naming, 0.5nm?

      • Nora@lemmy.ml · 11 months ago

        The nm number is just the smallest part on the wafer. It’s not actually the transistor.

      • foggy@lemmy.world · 11 months ago

        This was my understanding as well: that beyond ~7nm, reliability begins to lose value because the diameter of an electron ‘orbit’ or whatever becomes a factor.

        Admittedly I’m not an expert. But my understanding was that to break this limitation and keep Moore’s law going, we’re kinda leaning into quantum computation to eventually fill the incoming void.

        • Kyrgizion@lemmy.world · 11 months ago

          The phenomenon you mean is quantum tunneling. Essentially, at that small a scale an electron can ‘teleport’ outside of the system, which is obviously a big no-no for computing.
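          The distance dependence is brutal: in the standard WKB approximation, the probability of tunneling through a rectangular barrier falls off exponentially with thickness. A minimal sketch, assuming a 1 eV barrier height (an illustrative number, not any real device’s):

```python
import math

# WKB-style estimate of electron tunneling through a rectangular barrier:
# T ~ exp(-2 * kappa * d), with kappa = sqrt(2 * m_e * E_barrier) / hbar.
# The 1 eV barrier height is an illustrative assumption, not a device value.
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_E = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19      # joules per electronvolt

def tunneling_probability(thickness_nm: float, barrier_ev: float = 1.0) -> float:
    """Approximate probability that an electron tunnels through the barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

# Leakage grows explosively as the barrier thins.
for d_nm in (3.0, 2.0, 1.0):
    print(f"{d_nm:.0f} nm barrier: T ~ {tunneling_probability(d_nm):.1e}")
```

          Under these assumptions, halving the barrier from 2nm to 1nm raises the leakage probability by roughly four orders of magnitude, which is why ever-thinner structures leak.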

  • OrangeCorvus@lemmy.world · 11 months ago

    Your device will be 11% faster and the battery will last 6% more but it will dramatically change the way you interact with your device.

    • AA5B@lemmy.world · 11 months ago

      If it’s enough to run on-device AI, it’s a win. Imagine autocorrect being able to mangle your texting without ever connecting to the cloud. Huge privacy win.

      With the goggles coming soon, I think they’ll focus chip improvements on the GPU and neural engine to better support that.

  • Sensitivezombie@lemmy.zip · 11 months ago

    Use that money to speed up the development of quantum computing so it makes these transistor chips obsolete.

    • QuadratureSurfer@lemmy.world · 11 months ago

      Quantum computing wouldn’t make these transistors obsolete.

      Quantum computing is only really good at very specific types of calculations. You wouldn’t want it being used for the same type of job that the CPU handles.

    • BetaDoggo_@lemmy.world · 11 months ago

      Quantum computing is useless in most cases because of how fragile and inaccurate it can be, due in part to the near-zero temperatures quantum computers are required to operate at.