I don’t think that casting a range of bits as some other arbitrary type “is a bug nobody sees coming”.

C++ compilers also warn you that this is likely an issue and will fail to compile if configured to do so. But they will let you do it if you really want to.

That’s why I love C++

  • panda_abyss@lemmy.ca · +78/−2 · 4 days ago

    I actually do like that C/C++ let you do this stuff.

    Sometimes it’s nice to acknowledge that I’m writing software for a computer and it’s all just bytes. Sometimes I don’t really want to wrestle with the ivory tower of abstract type theory mixed with vague compiler errors; I just want to allocate a block of memory and apply a minimal set of rules on top.
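    A minimal Rust sketch of that “it’s all just bytes” view (the buffer contents here are invented for illustration): reading a machine word straight out of a plain byte buffer, no type hierarchy involved.

    ```rust
    fn main() {
        // A raw block of memory: four little-endian bytes (values made up).
        let buf: Vec<u8> = vec![0x78, 0x56, 0x34, 0x12];

        // Reinterpret the first four bytes as a u32: no abstraction, just bytes.
        let word = u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]);
        println!("{:#010x}", word); // prints 0x12345678
    }
    ```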

    • jkercher@programming.dev · +4 · 3 days ago

      100%. In my opinion, the whole “build your program around your model of the world” mantra has caused more harm than good. Lots of “best practices” seem to be accepted without any quantitative measurement to prove they’re actually better. I want to think it’s just the growing pains of a young field.

      • SpaceCowboy@lemmy.ca · +4/−1 · 3 days ago

        Even with quantitative measurements, they can do stupid things.

        For work I have to write code in C#, and Microsoft found that null reference exceptions were a common issue. They actually calculated how much these issues cost the industry (some big number) and put a lot of effort into changing the language so there are a lot of warnings when something might be null.

        But the end result is that people just set things to an empty value instead of leaving them null, to avoid the warnings. And sure, great, you don’t get null reference exceptions from a value that defaulted to null and never got set. But now you have bugs where a value is an empty string when it should have been set.

        The exception message would tell you exactly where in the code the mistake is, you’d know immediately there’s a problem, and it’s more likely to be caught by unit tests or QA. An empty value that was supposed to be set may go unnoticed for a while and is much harder to track down.

        So their research identified a costly issue (which is ultimately a dev making a mistake), and they fixed it by creating an even more costly one.

        There are always going to be things that are the developer’s responsibility to deal with, and there’s no fix for them at the language level. Trying to fix them with language changes can just make things worse.
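        The empty-default-versus-explicit-absence trade-off described above can be sketched in Rust (the `UserWithDefault`/`UserWithOption` structs are hypothetical, purely for illustration):

        ```rust
        // Hypothetical user record, only to illustrate the trade-off above.
        struct UserWithDefault {
            name: String, // silently "" if the caller forgets to set it
        }

        struct UserWithOption {
            name: Option<String>, // a forgotten field is a visible None
        }

        fn main() {
            // Silencing the "might be null" warning with an empty default:
            // the program keeps running and the bug is invisible.
            let silent = UserWithDefault { name: String::new() };
            println!("silent name: {:?}", silent.name);

            // Keeping the absence explicit: the caller must decide what None
            // means, and can choose to fail fast instead of carrying "" around.
            let explicit = UserWithOption { name: None };
            match explicit.name {
                Some(n) => println!("name: {n}"),
                None => println!("name was never set"),
            }
        }
        ```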

        • HER0@beehaw.org · +5 · 3 days ago

          For this example, I feel that it is actually fairly ergonomic in languages that have an Option type (like Rust), which can either be Some value or no value (None), and don’t normally have null as a concept. It normalizes explicitly dealing with the None instead of having null or hidden empty strings and such.
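          A small sketch of those ergonomics (the `find_user` lookup is made up): the compiler refuses to hand over the value without the None case being handled.

          ```rust
          // A hypothetical lookup that may or may not find a value.
          fn find_user(id: u32) -> Option<&'static str> {
              if id == 1 { Some("alice") } else { None }
          }

          fn main() {
              // The compiler will not let us use the result without handling None.
              match find_user(2) {
                  Some(name) => println!("found {name}"),
                  None => println!("no such user"), // this arm is mandatory
              }

              // Combinators keep the common cases terse without hiding the None.
              let display = find_user(1).unwrap_or("anonymous");
              println!("{display}");
          }
          ```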

          • SpaceCowboy@lemmy.ca · +2/−1 · 2 days ago

            I just prefer an exception be thrown if I forget to set something so it’s likely to happen as soon as I test it and will be easy to find where I missed something.

            I don’t think a language is going to prevent someone from making a human error when writing code, but it should make it easy to diagnose and fix it when it happens. If you call it null, “”, empty, None, undefined or anything else, it doesn’t change the fact that sometimes the person writing the code just forgot something.

            Abstracting away from the problem just makes it fuzzier where I forgot a line of code. Throwing an exception means I know immediately that I missed something, and also which part of the code has the mistake. Eliminating the exception doesn’t actually solve the problem; it just hides it and makes it more difficult to track down when someone eventually notices something wasn’t populated.

            Sometimes you want the program to fail, and fail fast (while testing) and in a very obvious way. Trying to make the language more “reliable” instead of having the reliability of the software be the responsibility of the developer can mean the software always “works”, but it doesn’t actually do what it’s supposed to do.

            Is the software really working if it never throws an exception but doesn’t actually do what it’s supposed to do?

            • HER0@beehaw.org · +1 · 2 days ago

              It is fair to have a preference for exceptions. It sounds like there may be a misunderstanding on how Option works.

              Have you used languages that don’t have null and have Option instead? If we look at Rust, you can’t forget to check it: it is impossible to get the Some value out of an Option without dealing with the None case. You can mess up in a lot of other ways, but you explicitly have to decide how to handle that potential None.

              If you want it to fail fast and obviously, there are ways to do that. For example, you can use the unwrap() method to get the contained Some value or panic if it is None, expect() to do the same but with a custom panic message, the ? operator to get the contained Some value or return early from the function with None, etc. Tangentially, these also work for Result, which can be Ok or Err.

              It is pretty common to use these methods in places where you do want to fail, somewhere you don’t expect a None, or where you don’t want your code to deal with the consequences of something unexpected. You have decided this and live with the consequences, instead of it happening implicitly or you forgetting to deal with it.
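              Those fail-fast tools can be sketched like this (the `port_from_config` helper is a hypothetical example, not a real API):

              ```rust
              // A hypothetical config parser, showing the fail-fast tools on Option.
              fn port_from_config(cfg: &str) -> Option<u16> {
                  // `?` returns None early if the prefix is missing.
                  cfg.strip_prefix("port=")?.parse().ok()
              }

              fn main() {
                  // expect(): take the Some value, or panic with a message
                  // pointing straight at the mistake.
                  let port = port_from_config("port=8080")
                      .expect("config is missing a valid port");
                  println!("listening on {port}");

                  // unwrap() does the same with a generic message, e.g.:
                  // port_from_config("oops").unwrap(); // panics on the None
              }
              ```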

    • Kairos@lemmy.today · +8/−29 · 3 days ago

      People just think that applying arbitrary rules somehow makes software magically more secure, like with Rust, as if the compiler won’t just “let you” do the exact same fucking thing if you type the unsafe keyword.

      • BatmanAoD@programming.dev · +23 · 3 days ago

        It’s neither arbitrary nor magic; it’s math. And unsafe doesn’t disable the type system, it just lets you dereference raw pointers.
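        A minimal illustration of that point (nothing here is from the thread beyond the claim itself): the `unsafe` block gates the raw-pointer dereference, while type checking continues to apply inside it.

        ```rust
        fn main() {
            let x: u64 = 42;
            let p: *const u64 = &x;

            // Dereferencing a raw pointer is what needs the `unsafe` block;
            // the expression inside is still fully type checked.
            let y = unsafe { *p };
            println!("{y}");

            // The type system is NOT disabled in there: this still fails to compile.
            // let s: String = unsafe { *p }; // error[E0308]: mismatched types
        }
        ```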

        • Kairos@lemmy.today · +7/−1 · 3 days ago

          That’s not what I meant. I understand that Rust forces things to be more secure. It’s not like there’s some guarantee that Rust is automatically safe and C++ is automatically unsafe.

            • vivendi@programming.dev · +1/−2 · edited · 3 days ago

              No there is not. Borrow checking and RAII existed in C++ too and there is no formal axiomatic proof of their safety in a general sense. Only to a very clearly defined degree.

              In fact, someone found memory bugs in Rust, again, because it is NOT soundly memory safe.

              Dart is soundly null-safe, meaning it is guaranteed never to compile null-unsafe code unless you explicitly say you’re OK with it. Kotlin is simply null-safe, meaning it can still run into bullshit null conditions at runtime.

              The same thing with Rust: don’t let it lull you into a sense of security that doesn’t exist.

              • BatmanAoD@programming.dev · +2/−1 · 3 days ago

                Borrow checking…existed in C++ too

                Wat? That’s absolutely not true; even today lifetime-tracking in C++ tools is still basically a research topic.

                …someone found memory bugs in Rust, again, because it is NOT soundly memory safe.

                It’s not clear what you’re talking about here. In general, there are two ways that a language promising soundness can be unsound: a bug in the compiler, or a problem in the language definition itself permitting unsound code. (unsafe changes the prerequisites for unsoundness, placing more burden on the user to ensure that certain invariants are upheld; if the code upholds these invariants, but there’s still unsoundness, then that falls into the “bug in Rust” category, but unsoundness of incorrect unsafe code is not a bug in Rust.)

                Rust has had both types of bugs. Compiler bugs can be (and are) fixed without breaking (correct) user code. Bugs in the language definition are, fortunately, fixable at edition boundaries (or in rare cases by making a small breaking change, as when the behavior of extern "C" changed).

      • Speiser0@feddit.org · +11 · 3 days ago

        You don’t even need unsafe, you can just take user input and execute it in a shell and rust will let you do it. Totally insecure!

        • Ignotum@lemmy.world · +12/−1 · 3 days ago

          Rust isn’t memory safe because you can invoke another program that isn’t memory safe?

          • Speiser0@feddit.org · +8 · 3 days ago

            My comment is sarcastic, obviously. The argument Kairos gave is similar to this. You can still introduce vulnerabilities. The issue is normally that you introduce them accidentally. Rust gives you safety, but does not put your code into a sandbox. It looked to me like they weren’t aware of this difference.
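            The safety-versus-sandbox distinction can be shown in a few lines (the `user_input` string is invented, and the dangerous variant is left commented out): Rust happily compiles code that hands user input to a shell; its guarantees are about memory, not about what your program chooses to execute.

            ```rust
            use std::process::Command;

            fn main() {
                // Invented "user input" carrying a shell metacharacter.
                let user_input = "hello; rm -rf /tmp/important";

                // Handing it to a shell IS the vulnerability: the `;` gets
                // interpreted. Rust compiles this happily, because memory
                // safety is not a sandbox.
                // Command::new("sh").arg("-c")
                //     .arg(format!("echo {user_input}")).status().unwrap();

                // Passing it as one argument keeps it inert: no shell parses it.
                let out = Command::new("echo")
                    .arg(user_input)
                    .output()
                    .expect("failed to run echo");
                print!("{}", String::from_utf8_lossy(&out.stdout));
            }
            ```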

      • panda_abyss@lemmy.ca · +2/−1 · 3 days ago

        I don’t know Rust, but for example in Swift the type system can make things way more difficult.

        Before they added macros, if you wanted to write ORM code against a SQL database it was brutal, and if you need to go into raw buffers it’s generally easier to just write C/Obj-C code and a bridging header. The type system can also make it harder to reason about performance, because you lose some visibility into what actually gets compiled.

        The Swift type system has improved, but I’ve spent a lot of time fighting with it. I just try to avoid generics and type erasure now.

        I’ve had similar experiences with Java and Scala.

        That’s what I mean about it being nice to drop out of setting up some type hierarchy and interfaces and just work with raw buffers or function pointers.