• RandomVideos@programming.dev (+34/−7) · 1 year ago

    Why did it change from 64 GB of RAM to 1.268869321E+89 (64!) GB of RAM?

    Also, 2.092278988E+13 (16!) GB is a lot more than 64 GB

  • yukichigai@kbin.social (+25/−1) · 1 year ago

    Bonus if the vendor refuses to provide any further support until your department signs off on the resource expansion.

    In a just world that’s when you drop the vendor. In a just world.

  • marcos@lemmy.world (+21/−3) · 1 year ago

    Yeah, almost certainly the software only uses 4 GB because it limits itself to the memory it has available.

    I have seen this conversation play out a few times already. It has always been because of that, and once the memory is expanded things work much better. (Personally I have never taken part in one; I guess that’s luck.)

  • nieceandtows@programming.dev (+15) · edited · 1 year ago

    Flip side of the coin: I had a sysadmin who wouldn’t increase the tmp size beyond 1 GB because ‘I don’t need more than that recommended size’. I deploy tons of ETL jobs, and they download GBs of files for processing to this globally known temp storage. After much back and forth I got it changed on one server, but on the other one I just overrode it in my config files for every script.
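    The per-script override can be sketched in Python with nothing but the standard library. This is a hypothetical setup, not the poster's actual config; the scratch path is a stand-in:

```python
import os
import tempfile

# Hypothetical scratch directory with more room than the cramped 1 GB /tmp;
# the path is a stand-in for whatever big volume the ETL host actually has.
SCRATCH_DIR = os.path.abspath("etl_scratch")
os.makedirs(SCRATCH_DIR, exist_ok=True)

# Point both the environment (honoured by child processes and many tools)
# and Python's tempfile module at the roomier directory.
os.environ["TMPDIR"] = SCRATCH_DIR
tempfile.tempdir = SCRATCH_DIR

# Every temp file the script creates from here on lands in the scratch
# directory instead of the system /tmp.
with tempfile.NamedTemporaryFile() as tmp:
    scratch_file = tmp.name  # created under SCRATCH_DIR
```

    Setting `tempfile.tempdir` only affects the current process, which is exactly why it works as a per-script escape hatch when the system-wide setting can't be changed.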

    • stevecrox@kbin.social (+10) · 1 year ago

      This is why Java rocks for ETL: the language is built to access files via input/output streams.

      It means you don’t need to download a local copy of a file, you can drop it into a data lake (S3, HDFS, etc…) and pass around a URI reference.

      Considering the size of large language models, I really am surprised at how poorly streaming is handled in Python.
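      For comparison, the keep-peak-memory-bounded pattern the comment credits to Java streams can at least be sketched in stdlib Python with a generator. A minimal illustration (names are mine, not from any library):

```python
from typing import Iterator


def stream_chunks(path: str, chunk_size: int = 1 << 16) -> Iterator[bytes]:
    """Yield fixed-size chunks so peak memory stays near chunk_size,
    instead of reading the whole file into RAM."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk


def total_bytes(path: str) -> int:
    # Consume the stream without ever holding the full file in memory.
    return sum(len(chunk) for chunk in stream_chunks(path))
```

      The gap the comment points at is less about local files and more that passing around a URI and streaming from a data lake is first-class in the Java ecosystem, while in Python it typically needs third-party glue.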

      • nieceandtows@programming.dev (+7) · 1 year ago

        Yeah, Python does fall short there. Half a decade ago, I set up an ML model for Tableau using Python, and things were fine until one day it just wouldn’t finish anymore. Turns out the model had grown, and Python filled up the RAM and the swap trying to load the whole model into memory.
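        One stdlib way around that failure mode is to memory-map the model file and let the OS page pieces in on demand, so resident memory tracks what you touch rather than the file size. A hedged sketch with illustrative names (not the original Tableau setup):

```python
import mmap


def read_slice(path: str, offset: int, length: int) -> bytes:
    """Read only a window of a large file. mmap pages data in lazily,
    so a multi-GB model file never has to fit in RAM at once."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return bytes(mm[offset:offset + length])
```

        This only helps if the model format supports random access; a pickled blob that must be fully deserialized still hits the load-it-all wall.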

        • stevecrox@kbin.social (+4) · 1 year ago

          During the pandemic I had some unoccupied Python graduates I wanted to teach data engineering.

          Initially I had them implement REST wrappers around Apache OpenNLP and spaCy and then compare the results on random data sets (Project Gutenberg, SharePoint, etc.).

          I ended up stealing a grad data scientist because we couldn’t find a difference (while there was a difference in confidence, the actual matches were identical).

          spaCy required 1 vCPU and 12 GiB of RAM to produce the same result as OpenNLP running on 0.5 vCPU and 4.5 GiB of RAM.

          Two grads were assigned a Spring Boot/Camel/OpenNLP stack and two a spaCy/Flask application. It took both groups 4 weeks to get a working result.

          The team slowly acquired lockdown staff, so I introduced MinIO/RabbitMQ/NiFi/Hadoop/Express/React and then different file types (not just raw UTF-8: doc, PDF, etc.) for the NLP pipelines. They built a fairly complex NLP processing system with a data exploration UI.

          I figured I had a group to help me work out the best Python approach in the space, but Python’s limitations just led to things like needing a Kubernetes volume to host the data.

          Conversely, none of the data scientists we acquired were willing to code in anything but Python.

          I tried arguing at my company at the time that there was a huge unsolved slice of the market there (e.g. MLOps).

          Alas, unless you can show a profit on the first customer, no business will invest. Which is why I am trying to start a business.

  • ericbomb@lemmy.world (+7) · 1 year ago

    narrows eyes

    Look I don’t “think” that was me this last few weeks. I’m pretty sure my support engineer butt was smart enough to check resources before blaming RAM…

    But it totally could have been me, and in that case I blame dev.

  • mvirts@lemmy.world (+4) · 1 year ago

    I loathe memory-reservation-based scheduling. It’s always a lie, always. Looking at you, Hadoop.