• msage@programming.dev · 14 days ago

    This is literally me at every possible discussion regarding any other RDBMS.

    My coworkers joked that I got paid for promoting Postgres.

    Then we switched from Percona to Patroni and everyone agreed that… fuck yes, PostgreSQL is the best.

    • sp3ctr4l@lemmy.dbzer0.com · 14 days ago

      After having suffered with T-SQL at MSFT for a number of years… yep, Postgres is almost always the best for almost any enterprise setup, despite what most other corpos seem to think.

      Usually their reasons for not using it boil down to:

      We would rather pay exorbitant licensing fees of some kind, forever, than rework a few APIs.

      Those few APIs already had a fully compatible rewrite, done by me and working in test, prior to that meeting.

      Gotta love corpo logic.

      • msage@programming.dev · 14 days ago

        Yes, I’ve had those issues as well, though lately at a mid-sized company rather than a big corp.

        One manager just wanted MySQL. We had trouble getting the required performance out of MySQL, while Postgres had good numbers. I had the app fully ready, only to be told: no, you make it work in MySQL. So we dropped some ‘useless stuff’ like flushing to disk on commit, deferring it instead (roughly the kind of settings sketched below).
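        A minimal sketch of that durability-for-speed trade-off, assuming InnoDB; the exact settings used weren’t given in the thread:

            -- Hypothetical MySQL/InnoDB tuning: defer disk flushes for speed.
            -- Write the redo log at commit but flush it to disk only about
            -- once per second; a crash can lose up to ~1s of transactions.
            SET GLOBAL innodb_flush_log_at_trx_commit = 2;
            -- Let the OS decide when to sync the binlog instead of syncing
            -- on every commit.
            SET GLOBAL sync_binlog = 0;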

    • locuester@lemmy.zip · 14 days ago

      I used to agree, but recently tried out Clickhouse for high-ingestion-rate time-series data in the financial sector and I’m super impressed by it. Postgres was struggling and we migrated.

      This isn’t to say that it’s better overall by any means, but simply that I did actually find a better tool at a certain limit.

      • qaz@lemmy.world · 13 days ago

        I’ve been using ClickHouse too and it’s significantly faster than Postgres for certain analytical workloads. I benchmarked it, and while Postgres took 47 seconds, ClickHouse finished within 700 ms when performing a query on the OpenFoodFacts dataset (~9 GB). Interestingly enough, TimescaleDB (a Postgres extension) took 6 seconds.

                      Insertion      Query speed
        Clickhouse    23.65 MB/s     ≈650 ms
        TimescaleDB   12.79 MB/s     ≈6 s
        Postgres      -              ≈47 s
        SQLite        45.77 MB/s¹    ≈22 s
        DuckDB        8.27 MB/s¹     crashed

        All actions were performed through DataGrip.

        ¹ Insertion speed is influenced by reduced networking overhead due to the databases being in-process.
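        To give a feel for the workload, here is a hypothetical single-table aggregation of the kind that favors columnar engines; the actual benchmark query wasn’t shared, and the table and column names below are invented:

            -- Hypothetical full-scan aggregation; runs as-is on both
            -- Postgres and ClickHouse. Names are made up for illustration.
            SELECT brand, count(*) AS products, avg(energy_kcal) AS avg_kcal
            FROM food_products
            GROUP BY brand
            ORDER BY products DESC
            LIMIT 20;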

        Updates and deletes don’t work as well, and not being able to perform an upsert can be quite annoying. However, I found the ReplacingMergeTree and AggregatingMergeTree table engines to be good replacements so far; a sketch of the former follows.
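        A minimal sketch of the ReplacingMergeTree pattern, with a hypothetical prices table (names and types are assumptions, not from the thread):

            -- Hypothetical table: rows sharing the ORDER BY key collapse to
            -- the one with the highest version during background merges,
            -- which stands in for an upsert.
            CREATE TABLE prices (
                symbol  String,
                ts      DateTime,
                price   Float64,
                version UInt64
            )
            ENGINE = ReplacingMergeTree(version)
            ORDER BY (symbol, ts);

            -- "Upsert" by inserting a newer version of the same key.
            INSERT INTO prices VALUES ('ACME', now(), 101.5, 2);

            -- FINAL forces deduplication at read time (slower, but exact).
            SELECT * FROM prices FINAL WHERE symbol = 'ACME';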

        Also there’s !clickhouse@programming.dev

      • msage@programming.dev · 14 days ago

        If you can, share your experience!

        I also do finance, so if there is anything more to explore, I’m here to listen and learn.

        • locuester@lemmy.zip · 13 days ago

          Clickhouse has a unique performance gain when your system isn’t normalized operational data that gets updated often, but rather tables of time-series data being ingested write-only.

          An example: stock prices or order books in real time, tens of thousands of records per second. Clickhouse can write, merge, and aggregate records really nicely.

          Selects against ordered data with aggregates are then lightning fast. It has lots of nuances to learn and really powerful capabilities, but only for this type of use case (see the sketch below).
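          As an illustration, a minimal sketch of that write-only pattern, assuming a hypothetical ticks table:

              -- Hypothetical tick stream; MergeTree stores data sorted by
              -- (symbol, ts), so time-range scans and aggregates stay cheap.
              CREATE TABLE ticks (
                  symbol String,
                  ts     DateTime64(3),
                  price  Float64,
                  qty    UInt32
              )
              ENGINE = MergeTree
              ORDER BY (symbol, ts);

              -- Typical read: per-minute aggregate over the ordered data.
              SELECT
                  toStartOfMinute(ts) AS minute,
                  min(price) AS low,
                  max(price) AS high
              FROM ticks
              WHERE symbol = 'ACME' AND ts >= now() - INTERVAL 1 HOUR
              GROUP BY minute
              ORDER BY minute;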

          It doesn’t have atomic transactions, and updates and deletes perform very poorly.

        • Tja@programming.dev · 13 days ago

          For high ingestion (really high) you have to start sharding. It’s nice to have a DB that can do that natively; MongoDB and Influx are very popular, depending on the exact application.
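          For a concrete flavor of native sharding, here is a sketch using ClickHouse’s Distributed engine, since that is the engine under discussion upthread; the cluster name and tables are assumptions:

              -- Hypothetical: ticks_all fans writes and reads out across the
              -- shards defined for my_cluster in the server config; rand()
              -- picks the target shard for each insert.
              CREATE TABLE ticks_all AS ticks
              ENGINE = Distributed(my_cluster, currentDatabase(), ticks, rand());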

      • msage@programming.dev · 12 days ago

        I mean, with mysql_fdw, I migrated the data quickly, and apart from manual ON DUPLICATE KEY UPDATE queries (or the rare FORCE INDEX) it works the same; a sketch of the approach follows.
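        A minimal sketch of that mysql_fdw migration path (the host, credentials, and table names are made up):

            -- Hypothetical: expose the MySQL tables as foreign tables in
            -- Postgres, then materialize them as native tables.
            CREATE EXTENSION mysql_fdw;

            CREATE SERVER legacy_mysql
                FOREIGN DATA WRAPPER mysql_fdw
                OPTIONS (host '10.0.0.5', port '3306');

            CREATE USER MAPPING FOR CURRENT_USER
                SERVER legacy_mysql
                OPTIONS (username 'migrator', password 'secret');

            -- Pull the table definitions across, then copy the data.
            CREATE SCHEMA staging;
            IMPORT FOREIGN SCHEMA legacy_db
                FROM SERVER legacy_mysql INTO staging;

            CREATE TABLE orders AS SELECT * FROM staging.orders;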