• million@lemmy.world · +152/-1 · 1 year ago

    Refactoring is something that should be done constantly in a code base, for every story. As soon as people get scared of changing things, the codebase is on the road to becoming legacy.

  • fubo@lemmy.world · +126/-2 · 1 year ago

    Until you know a few very different languages, you don’t know what a good language is, so just relax on having opinions about which languages are better. You don’t need those opinions. They just get in your way.

    Don’t even worry about what your first language is. The CS snobs used to say BASIC causes brain damage and that we ’80s microcomputer kids were permanently ruined … but that was wrong. JavaScript is fine, C# is fine … as long as you don’t stop there.

    (One of my first programming languages after BASIC was ZZT-OOP, the scripting language for Tim Sweeney’s first published game, back when Epic Games was called Potomac Computer Systems. It doesn’t have numbers. If you want to count something, you can move objects around on the game board to count it. If ZZT-OOP doesn’t cause brain damage, no language will.)


    Please don’t say the new language you’re being asked to learn is “unintuitive”. That’s just a rude word for “not yet familiar to me”. So what if the first language you used required curly braces, and the next one you learn doesn’t? So what if type inference means that you don’t have to write int on your ints? You’ll get used to it.

    You learned how to use curly braces, and you’ll learn how to use something else too. You’re smart. You can cope with indentation rules or significant capitalization or funny punctuation. The idea that some features are “unintuitive” rather than merely temporarily unfamiliar is just getting in your way.

    • Walnut356@programming.dev · +34/-1 · 1 year ago

      Please don’t say the new language you’re being asked to learn is “unintuitive”. That’s just a rude word for “not yet familiar to me”. … The idea that some features are “unintuitive” rather than merely temporarily unfamiliar is just getting in your way.

      Well, I mean… that’s kinda what “unintuitive” means. Intuitive, i.e. natural/obvious/without effort. Having to gain familiarity sorta literally means it’s not that, thus unintuitive.

      I don’t disagree with your sentiment, but these people are using the correct term. For example, Python’s len(object) instead of obj.len() trips me up to this day, because 99% of the time I think [thing] -> [action], and most language constructs encourage that. If I still regularly type an object name and then have to move the cursor back over and type “len(”, I can’t possibly be using my intuition. It’s not the language’s “fault” - because it’s not really “wrong” - but it is unintuitive.
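
      (For what it’s worth, Python’s len() is itself method dispatch under the hood: the free function calls the object’s __len__, so the [action](thing) spelling is a protocol choice rather than a missing method. A toy illustration, class name invented:)

      class Playlist:
          def __init__(self, tracks):
              self._tracks = list(tracks)

          def __len__(self):
              # len(playlist) dispatches here
              return len(self._tracks)

      playlist = Playlist(["intro", "verse", "chorus"])
      print(len(playlist))  # 3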

      • fubo@lemmy.world · +14/-1 · 1 year ago

        If you only know C and you’re looking at Python, the absence of curly braces on code blocks is temporarily unfamiliar to you.

        But if you only know Python and you’re looking at C, the fact that indentation doesn’t matter is temporarily unfamiliar to you.
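
        Concretely, the whole difference is a pair of braces versus an indented suite (Python shown, with the C spelling in a comment for contrast):

        x = 5

        # C:      if (x > 0) { printf("positive\n"); }
        # Python: indentation alone delimits the block
        if x > 0:
            print("positive")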

        Once you learn the new language, it’s not unfamiliar to you anymore.

        “Unintuitive” often suggests that there’s something wrong with the language in a global sense, just because it doesn’t look like the last one you used — as if the choice to use (or not use) curly braces is natural and anything else is willfully perverse on the part of the language designer.

        • Walnut356@programming.dev · +7 · 1 year ago

          “Unintuitive” often suggests that there’s something wrong with the language in a global sense

          I mean, only if you consider “intuition” to be some monolithic, static thing that’s also identical for everyone. Everyone has their own intuition, and their intuition changes over time. Intuition is akin to an opinion - it’s built up based on your own past experiences.

          just because it doesn’t look like the last one you used — as if the choice to use (or not use) curly braces is natural and anything else is willfully perverse on the part of the language designer.

          I don’t think it’s that deep. All people mean when they say it is that “[thing] defied my expectation/prior experience”. It’s like saying “seafood tastes bad”: there’s an implicit “to me” at the end. It’s obvious I’m not saying “seafood factually tastes bad, and anyone who says they like it is wrong or lying”.

        • Walnut356@programming.dev · +4 · 1 year ago

          You could say that about anything. Of course you have to learn something the first time and it’s “unintuitive” then. Intuition is literally an expectation based on prior experience.

          Intuitive patterns exist in programming languages. For example, most conditionals and loops are denoted with “if”, “else”, and “while”. You would find it intuitive if a new programming language adhered to that. You’d find it unintuitive if they were denoted with “dnwwkcoeo”, “wowpekg cneo”, and “coebemal”.

        • 257m@lemmy.ml (OP) · +1 · 1 year ago

          But there are languages that require varying degrees of effort to become natural. Something like Malbolge will pretty much never feel natural, while something like Python can become natural to you in a few days.

          • xigoi@lemmy.sdf.org · +1 · 1 year ago

            Yeah. The original comment was about programmers who say that a language is “unintuitive” because it doesn’t look like another language they know.

        • kaba0@programming.dev · +1 · 1 year ago

          Languages also have inner consistency. E.g. the aforementioned Python len function is inconsistent with the rest of the language - and that is a statement that is true in itself, without an external reference point.

          • xigoi@lemmy.sdf.org · +1 · 1 year ago

            Yes, I agree that the len() thing in Python, and inconsistency in general, is bad. But pretty much all popular languages have many inconsistencies.

    • Cratermaker@discuss.tchncs.de · +9 · 1 year ago

      Idk, I don’t see a problem with saying a new language is unintuitive. For example, in JS I still consider the horrible type coercion and the “fix” with the triple-equals very unintuitive indeed. On the flip side, when learning C# I found the multiple ways of making comparisons pretty intuitive, and not footguns.

    • FlumPHP@programming.dev · +6 · 1 year ago

      Please don’t say the new language you’re being asked to learn is “unintuitive”. That’s just a rude word for “not yet familiar to me”.

      Yeah. I’ve written in six or so different languages and am now using Go for the first time. Even so, I’m trying to be optimistic and acknowledge that things are just different or annoying for me. It doesn’t mean anything is wrong with the language.

    • AlexWIWA@lemmy.ml · +3/-1 · 1 year ago

      I still think Ruby is a bad language, even though I agree with you.

      • morrowind@lemmy.ml · +1 · 1 year ago

        I found Ruby horribly confusing until I got over the initial learning bump.

        Now I love it. It really is lovely - in terms of design, that is. Not sure about the monkeypatching.

        • AlexWIWA@lemmy.ml · +1 · 1 year ago

          I really don’t like how Rails brings things into scope where you just have no idea what’s there or how it got there unless you know all of the conventions. I guess that’s a Rails issue and not a Ruby one, though.

          I learned in Python and C++, so I’m biased towards things that are extremely specific. Definitely doesn’t mean Ruby is necessarily bad, I just don’t like it.

          • morrowind@lemmy.ml · +1 · 1 year ago

            I’m one of those weirdos who likes Ruby and has never used Rails, so no opinion there.

    • Konlanx@lemmy.ml · +1 · 1 year ago

      This is very true! Languages being unintuitive also becomes less of an issue the more languages you look into. Many concepts recur across languages, since ultimately they are all trying to do similar things; the more you learn, the more you will recognize, which makes it easier to get into even more languages.

    • IonAddis@lemmy.world · +1 · 1 year ago

      Until you know a few very different languages, you don’t know what a good language is, so just relax on having opinions about which languages are better. You don’t need those opinions. They just get in your way.

      This is wise advice for ANY domain of knowledge.

      Lotta people get a little fragment of knowledge on something, then shut down their brain and stop accepting new input. But life is change, and to be able to change and learn new things you need to keep your mind open. Being able to relax on having opinions and keep learning and moving along is very important.

  • argv_minus_one@beehaw.org · +94/-2 · 1 year ago

    Dynamic typing is insane. You have to keep track of the type of absolutely everything, in your head. It’s like the assembly of type systems, except it makes your program slower instead of faster.
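
    To make the bookkeeping concrete, a small Python sketch (function invented; mypy is one example of a static checker):

    def total_cents(prices: list[float]) -> int:
        # without the annotation, "list of floats, in dollars" lives only
        # in your head at every single call site
        return round(sum(prices) * 100)

    print(total_cents([1.50, 2.25]))  # 375

    # total_cents("1.50") fails only at run time, deep inside sum();
    # a static checker rejects the call before the program ever runs.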

    • Cratermaker@discuss.tchncs.de · +18 · 1 year ago

      Nothing like trying to make sense of code where all the function parameters have unhelpful names, are not primitive types, and carry no type information whatsoever. Then you get to crawl through the entire thing to figure out what anything is.

    • uniqueid198x@lemmy.dbzer0.com · +3 · 1 year ago

      You can do typing through the compiler at build time, or you can do typing with guard statements at run time. You always end up doing typing, though.
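
      A minimal sketch of the run-time version in Python (function invented):

      def set_port(port):
          # guard statement: the same check a compiler would do at build time
          if not isinstance(port, int):
              raise TypeError(f"port must be an int, got {type(port).__name__}")
          return {"port": port}

      set_port(8080)      # fine
      # set_port("8080")  # TypeError, but only once this line actually runs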

    • Olissipo@programming.dev · +1 · 1 year ago

      I like it in modern PHP; it’s balanced: as strict or as loose as you need in each context.

      Typed function parameters, function returns and object properties.

      But otherwise I can make a DateTime object become a string and vice-versa, for example.

      • argv_minus_one@beehaw.org · +1 · 1 year ago

        What happens when you coerce a string to a date-and-time but it’s not valid?

        Where I’m from (Rust), error handling is very strict and very explicit, and that’s how it should be. It forces you to properly handle everything that can potentially go wrong, instead of just crashing and looking like a fool.

        • Olissipo@programming.dev · +1 · 1 year ago

          My point is, you won’t ever try. You’d only use “weak” variables inside the function you’re working on.

          It’s explicit when you absolutely need it to be: when the function is being called and you need to know what arguments to pass and what it’ll return.

            • Olissipo@programming.dev · +1 · 1 year ago

              When you say user, you mean a user of a function? In that case PHP would throw a TypeError, which presumably only happens when developing/testing.

              If you mean in production, like when submitting a form, an Exception may be thrown. In which case you catch it and return some error message to the user saying the date string is invalid.

              • argv_minus_one@beehaw.org · +1 · 1 year ago

                By “user” I mean the person who is using the application.

                Using exceptions for handling unexceptional errors (like invalid user input) is a footgun. You don’t know when one might be raised, nor what type it will have, so you can easily forget to catch it and handle it properly, and then your app crashes.

                • Olissipo@programming.dev · +1 · 1 year ago

                  you can easily forget to catch it and handle it properly

                  Even if I coded the form by hand and that happened, it’s on me, not on the programming language.

                  But I don’t; I use a framework which handles all that boilerplate validation for me.

  • AdmiralShat@programming.dev · +87/-4 · 1 year ago

    If you don’t add comments, even rudimentary ones, or you don’t use a naming convention that accurately describes the variables or the functions, you’re a bad programmer. It doesn’t matter if you know what it does now; just wait until you need to know what it does in 6 months and you have to stop what you’re doing and decipher it.

    • A_Porcupine@lemmy.world · +33/-1 · 1 year ago

      However, engineers who rely solely on comments to explain their code are bad at writing readable code.

    • fkn@lemmy.world · +8/-1 · 1 year ago

      Self-documenting code is infinitely more valuable than comments, because code spreads with its use, whereas the comments stay behind.

      I got roasted at my company when I first joined because my naming conventions are a little extra. That lasted for about 2 months before people started to see the difference in legibility as the code started to change.

      One of the things I tell my juniors is: “This isn’t the 80s. There isn’t an 80-character line limit. The computer doesn’t benefit from your short variable names. I should be able to read most lines of code as a single non-compound English sentence with only minor tweaks, and that sentence should be what is happening in that line of code.”
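
      For instance (Python, all names invented for illustration):

      from dataclasses import dataclass

      @dataclass
      class User:
          login_count: int
          is_blocked: bool

      INACTIVITY_THRESHOLD = 5
      reactivation_queue = []

      user = User(login_count=2, is_blocked=False)

      # Reads as a sentence: if the user's login count is below the
      # inactivity threshold and the user is not blocked, queue the user
      # for reactivation.
      if user.login_count < INACTIVITY_THRESHOLD and not user.is_blocked:
          reactivation_queue.append(user)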

      • tatterdemalion@programming.dev · +2 · 1 year ago

        An 80-character limit is helpful, though, when you need to have many files open at a time. Maybe 100 is more reasonable. Fighting indentation is important too.

        • fkn@lemmy.world · +2/-2 · 1 year ago

        I, too, remember the days before ultra-high-definition ultrawide monitors.

        I thought this argument was bogus in the 90s on a 21" CRT, and it has gotten even less valid since then. There are so many solutions to these problems that increase productivity for paltry sums of money that it’s insane to me that companies don’t immediately purchase them for all developers.

          • tatterdemalion@programming.dev · +2 · 1 year ago

            You have a point, devs should be using multiple large monitors. I will often need to have 3-4 files open at once, plus some browser windows. Having some limit on line length helps with this and for fighting code complexity.

            • fkn@lemmy.world · +1/-1 · 1 year ago

              The most important thing is comprehension. If something is too long and the length makes it less readable, then it is too long.

              But if having 3-4 files open at the same time makes it harder for you to comprehend a single file because you can’t get the full picture, that’s on you.

          • icesentry@lemmy.ca · +1 · 1 year ago

            I have a massive ultrawide and I still 100% believe in line limits. Long lines are harder to read in general, but even with a limit of 100 I frequently have 3 files open next to each other and I can’t read entire lines easily. Line limits just aren’t about the size of the monitor, and I can’t believe people still say that.

            • fkn@lemmy.world · +1/-1 · 1 year ago

              I understand the concern, but readability and comprehension are way more important than line length. If the length impairs readability, it’s too long. Explicit limits are terrible; guidelines, fine.

              Ultimately, you do you. I still think you’re crazy and I think your argument is poor.

              • icesentry@lemmy.ca · +1 · 1 year ago

                Yes, a strict 80-character limit would be bad, but that’s why modern formatters aren’t strict and default to 90-100.

                I’ve pretty much never seen code that would have been more readable had the lines been longer than that.

                My main argument is still that shorter lines are more readable. I just think it’s a bullshit argument to say that long lines are fine because large monitors exists. I don’t see how that makes me crazy.

                • fkn@lemmy.world · +1/-1 · 1 year ago

                  See, I think length limits and readability are sometimes at odds. To say that you 100% believe in length limits means that you would prefer the length limit over a readable line of code in those situations.

                  I agree that shorter lines are often more readable. I also think artificial limits on length are crazy. Guidelines, fine. Verbosity for the sake of verbosity isn’t valuable… But to say never is a huge stretch. There are always those weird edge cases that everyone hates.

      • MajorHavoc@lemmy.world · +1 · 1 year ago

        There’s no such thing as self-documenting code, unless every method and variable name has the word “because” in it.

        Anyone can read what the code does. The comments are there to answer why it does what it does the way it does.

        “Why” is invariably lost to time if it’s not committed to a comment here and there.
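
        A hypothetical example of a “why” that no amount of naming can carry (Python; the incident in the comment is invented):

        import time

        def fetch_with_retry(fetch, attempts=3):
            # Why: the upstream vendor API intermittently drops connections
            # during its nightly failover, and removing this retry loop once
            # broke checkout for a weekend. No variable name can say that.
            for attempt in range(attempts):
                try:
                    return fetch()
                except ConnectionError:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(2 ** attempt)  # simple exponential backoff

        print(fetch_with_retry(lambda: "ok"))  # "ok" on the first try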

        • fkn@lemmy.world · +3/-1 · 1 year ago

          This is a pretty ridiculous position to take but if you believe it then I’m glad you write the comments you do.

          There is an argument that commenting on the lack of expected code is valuable for this reason, but it certainly isn’t true in all situations.

          • MajorHavoc@lemmy.world · +1 · 1 year ago

            We can agree on “not all situations”. Often the answer to “why did we do it this way?” is blazingly obvious, and no one wants a comment.

            But we all know that sometimes the “why” isn’t obvious at all.

            As far as I can tell, developers who do believe in self-documenting code either haven’t learned the power of “why?”, or they have a secret technique for encoding “why?” into their code structure.

            If it’s the second thing, I would be delighted to be brought in on it. (No sarcasm. Maybe I’ve missed a trick here.)

            • fkn@lemmy.world · +1/-1 · 1 year ago

              I’ll answer in a couple of different ways.

              1. If I am writing library code, my why is that you have an end use: I don’t care why you use it, and you don’t care why I wrote it. You only care about what my code does so you can achieve your why.

              2. If we are working on the same code, we have different whys but the same what. Then your comment as to why isn’t the same as mine, which makes the comment incorrect.

              3. We are looking at a piece of code and you want to know how it works, because the stated what is wrong (bugs). This might be the “why” you are looking for, but I call this a “how”. This is the case where self documenting code is most important. Code should tell a second programmer how the code achieves the what without needing an additional set of verbose comments. The great thing about code is that it is literally the instructions on the how. The problem is conveying the how to other programmers.

              There are three kinds of how: self-evident ones; complex hows requiring multiple levels of abstraction and lots of code; and complex short hows that are not apparent.

              The third is where most people get into trouble. Almost all of these cases of complexity can be solved with only a single layer of abstraction and still achieve easily readable, self-documenting code. The problem in many cases is that they start as a one-off, and people are lousy at putting in the work on a one-off solution. Sometimes the added work of abstraction, and of building a performant abstraction, turns a small task into a large one. In those cases comments can make sense.

              Sometimes these short, complex hows require specialists. Database queries, performant Perl/functional queries, algorithmic operations, complex compile-time-optimized templates (or other language-specific optimizations) and the like are some of the most common examples. This category of problem benefits most from a well-defined interface with examples for use (which might be comments). The “how” of these is not as valuable for the average developer, and often requires specialist knowledge to understand regardless of comments. In these cases, what they do is far more valuable than how or why.

              • MajorHavoc@lemmy.world · +1 · 1 year ago

                You’ve given a lot of consideration to modern, recently created code. But the best of that code goes on to become someone’s legacy nightmare. (The most fit and correct code survives long after anyone really wishes it would.)

                In high-quality legacy nightmare code, the “why” is lost unless someone wrote it down.

                I’ve been on both sides of that mystery. “Why didn’t they just do X?”

                • Sometimes it was because X didn’t exist yet, or wasn’t mature enough.
                • Sometimes it was because X is fundamentally the wrong solution, in a very subtle way.

                There’s two ways to know the difference:

                  1. Painful trial and error.
                  2. A comment (or document) answering “why”.

                I prefer the second way, but I happily charge more for the first way.

    • tatterdemalion@programming.dev · +2 · 1 year ago

      This is why code review exists. Writers can’t always see what’s wrong with their work, because they have the bias of knowing what was intended. You need a reader to see it with fresh eyes and tell you what parts are confusing.

      That’s not to say you shouldn’t try to make it readable in the first place. But reviewing and reading other people’s code is how you get better.

    • rolaulten@startrek.website · +1 · 1 year ago

      Let’s take this one step further: I should be able to get the core ideas in your code from the comments and CS-101-level constructs (e.g. basic data structures, loops, and if/then).

      • Carol2852@discuss.tchncs.de · +2 · 8 months ago

        Sure, try to replace the one or two people that hold the whole team together. I’ve seen it a couple of times: a good team disintegrates right after one or two key people leave.

        Also, if you replace half the team, prepare for some major learning time whenever the next change is being made. Or after the next deployment. 🤷‍♂️

  • BrotherL0v3@lemmy.world · +78/-13 · 1 year ago

    Tools that use a GUI are just as good (if not better) than their CLI equivalents in most cases. There’s a certain kind of dev that just gets a superiority complex about using CLI stuff.

      • russ@programming.dev · +1 · 1 year ago

        Indeed, the problem with GUI apps is when you can’t script them!

        I always loved Alfred on OS X, then loved scripting rofi on Linux, only to come back to OS X years later and find Alfred can’t be invoked with stdin options. It’s a damn shame…

    • brettvitaz@programming.dev · +37 · 1 year ago

      I used to think something like this when I was younger. I spent an inordinate amount of time looking for good GUI versions of CLI tools. I have come to understand that this is not usually the case, and CLI tools are more convenient much of the time. I would not classify this as a superiority complex, unless I’m being a jerk about it. I don’t care what you use; I just use whatever has the lowest barrier to entry with the most standardization, which is usually the original CLI tool.

      That said, JetBrains’ Git integration is awesome.

        • kaba0@programming.dev · +1 · 1 year ago

          It also depends on the specifics — in many cases when a GUI is just a wrapper over the CLI tool, it is instructive to learn the CLI, similarly to how you are a better programmer if you know at least a layer beneath the one you are programming at (e.g. you can reason about this usage of a hashmap because you roughly know what it does).

          It is probably the most visible with git, but if you can only do commit and push from a GUI, please learn the CLI as well. You don’t have to use it, but understanding it is important, and the GUI may abstract away too much from you.

      • fkn@lemmy.world · +2 · 1 year ago

        I agree only when your job function is specifically geared around those tools… otherwise high-quality GUIs are more valuable.

        Just because I can do everything in GDB that I can do in Visual Studio doesn’t mean 99% of debugging tasks aren’t easier and faster in Visual Studio. Now, if my job were specifically aimed at debugging/reverse engineering, there are certain things GDB does better on the CLI… but for most software devs, CLI GDB isn’t valuable.

    • stilgar [he/him] @infosec.pub · +26/-2 · 1 year ago

      There are some massive intrinsic advantages to the CLI, though, that apply to everyone, not just leetcoders:

      • The terminal can remember everything you ever did. Forgotten the command you wrote 2 months ago? You can search for it with a tool like fzf and run the exact same command again.
      • Communicating with others. GUI programs require step-by-step instructions, often accompanied by screenshots, while CLI commands can simply be copy/pasted.
      • Combining programs together. There are a few different techniques for combining CLI programs to search/format output, use secrets without ever having them in the clipboard or on disk, monitor something frequently/constantly, etc.

      So while I agree with you that there’s plenty of elitism around the CLI, you do yourself a disservice if you try to avoid it.

    • bouh@lemmy.world · +14/-4 · 1 year ago

      Just no. CLI can be automated, which makes it superior. It’s not a superiority complex; it’s a fact. I’m not a minimum-wage worker pushing buttons I don’t understand. I’m not a technician who learnt your shitty software to do the most basic tasks.
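
      To make the automation point concrete, a Python sketch scripting a real CLI (git here, run inside any git repo; any scriptable tool works the same way):

      import subprocess

      # ask git for the current branch, exactly as you would interactively
      result = subprocess.run(
          ["git", "rev-parse", "--abbrev-ref", "HEAD"],
          capture_output=True, text=True, check=True,
      )
      print(result.stdout.strip())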

    • intelati@programming.dev · +9 · 1 year ago

      My gold standard app is a CLI where I have the option to visually add the flags. I’m thinking of the ytdlp-gui type programs.

    • OsrsNeedsF2P@lemmy.ml · +6/-1 · 1 year ago

      Aside from automation, CLI can support significantly more complicated apps reliably. It can also be tested more reliably.

      GUIs are better for anything simple, and good UX designers can make a moderately complex one, but anything like server administration/git/configs is 100x better on the CLI.

    • adambard@lemmy.ca · +3 · 1 year ago

      This depends a lot on the GUI and the tool. Some CLI tools are great alone or for scripting; others benefit from the extra attention to UX and exposure of options that a GUI can offer.

      For git in particular, I encourage juniors to learn and use the CLI. I find that GUI git clients often do some or all of the following:

      • Use non-git terminology that ends up being confusing. “Sync” comes to mind as a frequent offender; I can think of several incompatible things it could refer to.

      • Ignore the useful ability to stage your changes

      • Don’t permit or encourage a review of the changes

      • Implement only the basics and make remediation of branching issues difficult

      In the worst case, I’ve seen people end up using the git GUI like a “save” button, blindly committing and pushing the current state of their code, including to-be-removed print statements and other cruft. Yeah, the git CLI is a bit complex compared to that, but you gain a lot for that added complexity.

      That said, I’ve definitely jumped into a git GUI from time to time just for a visualization of whatever branching snafu I’m trying to untangle. None of the above invalidates GUIs if you take care to still understand the underlying tool properly!

  • MrTallyman@programming.dev · +65/-1 · 1 year ago

    My take is that no matter which language you are using, and no matter the field you work in, you will always have something to learn.

    After 4 years of professional development, I rated my knowledge of C++ at 7/10. After 8 years, I rated it 4/10. After 15 years, I can confidently say 6.5/10.

    • BaskinRobbins@sh.itjust.works · +16 · 1 year ago

      Amen. I once had an interview where they asked what my skill is with .net on a scale of 1 - 10. I answered 6.5 even though at the time I had been doing it for 7 years. They looked annoyed and said they were looking for someone who was a 10. I countered with nobody is a 10, not them or even the people working on the framework itself. I didn’t pass the interview and I think this question was why.

      • fkn@lemmy.world · +7/-1 · 1 year ago

        Your mistake was giving them an answer instead of asking how the scale was set up before giving them a number. Psychologically, by answering first you established that the question was valid as presented, and it anchored their expectations as the ones you had to live up to. By questioning it, you get to anchor your response to a different point.

        Sometimes questions like this can be used to see how effective a person will be in certain lead roles. Recognizing, explaining and disambiguating the trap question is a valuable lead skill in some roles. Not all, mind you… and maybe not ones most people would want.

        But most likely you dodged a bullet.

        • BaskinRobbins@sh.itjust.works · +3 · 1 year ago

          I was kicking myself for days afterwards for not doing exactly as you said. I’m not good at these types of interview questions in the moment. Also, before that was the tech-interview classic of asking a bunch of random trivia questions, which I actually nailed. And this was for a dev II position.

          I definitely dodged a bullet though. Some months later I got hired at a different company for 30k more.

      • BilboBargains@lemmy.world · +3 · 1 year ago

        Did your interviewer profess to be a 10 in .NET? Otherwise, how would they know what that looks like? I was told that I’m unsuitable as a programmer of PLCs because I had never used their software before. That I write the algorithms that go into a PLC was not sufficient. These people are looking for unicorns but find donkeys everywhere they look.

      • CodeBlooded@programming.dev · +2 · 6 months ago

        As a hiring manager, I can understand why you didn’t get the job. I agree that it’s not a “good” question, sure, but when you’re hiring for a job where the demand is high because a lot is on the line, the last thing you’re going to do is hire someone who says their skills are “6.5/10” after almost a decade of experience. They wanted to hear how confident you were in your ability to solve problems with .NET. They didn’t want to hear “aCtUaLlY, nO oNe Is PeRfEcT.” They likely hired the person who said “gee, I feel like my skills are 10/10 after all these years of experience of problem solving. So far there hasn’t been a problem I couldn’t solve with .NET!” That gives the hiring manager way more confidence than something along the lines of “6.5/10 after almost a decade, but hire me because no one is perfect.” (I am over simplifying what you said, because this is potentially how they remembered you.)

        Unfortunately, interviews for developer jobs can be a bit of a crap shoot.

        • BaskinRobbins@sh.itjust.works · +1 · 6 months ago

          They wanted to hear how confident you were in your ability to solve problems with .NET. They didn’t want to hear “aCtUaLlY, nO oNe Is PeRfEcT.”

          Yeah, I mean, no shit, with hindsight it’s obvious they were looking for the 10/10 answer. I was kicking myself for days afterwards because that’s the only question I felt I answered “wrong”. Tech interviews are such a shit show, though, that you can start to overthink things as an interviewee. Also, an important aspect of the question that I didn’t mention was that they specified “1 is completely new, and 10 is working at Microsoft on the .NET framework itself”. The question caught me off guard. I have literally no idea what working at Microsoft on the framework is like. In that context, being a 10/10 felt like claiming to be among the most knowledgeable C# programmers of all time. Could I work on the framework itself? Idk, maybe; I’ve never thought about it, and I don’t even know what their day-to-day is. I should’ve just said 10/10 though; it was a dev II position to work on a web app, it wouldn’t have been that hard.

          • CodeBlooded@programming.dev · +1 · 6 months ago

            10 is working at Microsoft on the .net framework itself.

            An interesting spin. I like to imagine that you could have answered “10/10,” taken a pause, and declared that you’re leaving the interview early to apply directly to Microsoft to “work on the .net framework itself.” 🤓

            dev II position to work on a web app

            ”we want you to tell us that you’re overqualified for the role”

  • idunnololz@lemmy.world · +56/-1 · 1 year ago

    You can always solve a problem by adding more layers of abstraction. Good software design isn’t about adding more layers of abstraction; it’s about solving problems with the minimum number of abstractions necessary while still having maintainable, scalable code.

    Abstractions have benefits, but they also have downsides: they can complicate code and make it harder to read.
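
    An invented before/after in Python, for flavor:

    # Indirection for its own sake: a factory and a class for one behavior
    class GreeterFactory:
        def create(self):
            return Greeter()

    class Greeter:
        def greet(self, name):
            return f"Hello, {name}"

    print(GreeterFactory().create().greet("Ada"))

    # The minimum abstraction that stays maintainable: a plain function
    def greet(name):
        return f"Hello, {name}"

    print(greet("Ada"))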

  • asyncrosaurus@programming.dev · +51/-6 · 1 year ago

    SPAs are mostly garbage, and the internet has been irreparably damaged by lazy devs chasing trends just to build simple sites with overly complicated frontend frameworks.

    90% of the internet really should just be rendered server-side with a bit of JS for interactivity. jQuery was fine at the time, JavaScript is better now, and Alpine.js is actually awesome. Nowadays, REST with HTMX and HATEOAS is about as productive, painless and enjoyable as web development gets. Minimal dependencies, tiny file sizes, fast and simple.

    Unless your web site needs to work offline (it probably doesn’t), or it has to manage client state for dozens/hundreds of data points (e.g. Google Maps), you don’t need a SPA. If your site only needs to track minimal state, just use a good SSR web framework (Rails, ASP.NET, Django, whatever).

    • derpgon@programming.dev · +3 · 1 year ago

      I do a lot of PHP, so naturally my small projects are PHP. I use a framework called Laravel, and while it is possible to use SPAs or other kinds of shit, I usually choose pure server-side rendering with a little bit of Vue.js to make some parts reactive. Other than that it is usually just pure HTML forms for submitting data. And it works really well.

      Yeah yeah, they push the Livewire shit, which I absolutely hate and think is a bad idea, but nobody is forcing me, so that’s nice.

    • nayminlwin@lemmy.ml · +3 · 1 year ago

      I’m still hoping for browsers to become some kind of open-standard application environment, and for web apps to become actual apps running in this environment.

      • icesentry@lemmy.ca · +2 · 1 year ago

        How are browsers not that already? What’s missing?

        They are an open standard and are used to make many thousands of apps.

        • nayminlwin@lemmy.ml · +1 · 1 year ago

          I’m thinking more along the lines of ubiquitous offline-first PWAs. Imagine Google Docs running offline in a browser and being able to edit local docs directly. I guess secure file-system access is one of the major roadblocks, though I’m not sure of the challenges associated with coming up with a standard for this.

    • Hotzilla@sopuli.xyz · +1 · 1 year ago

      Actual hot take: Blazor is awesome. It is as if Microsoft looked at ASP.NET Forms, ASP.NET MVC and Razor, and bundled them into one quick framework for simple web apps.

      • asyncrosaurus@programming.dev · +2 · 1 year ago

        Counter hot take: I do actually like Blazor, but it has limitations due to how immature WebAssembly still is. It also does not solve the problem of being a big, complex platform that isn’t needed for small, simple apps. Of the half dozen projects I’ve written in Blazor, I’d personally rewrite 3 or so in just Razor Pages with HTMX.

        • Hotzilla@sopuli.xyz · +2 · 1 year ago

          Server-side works better; WebAssembly and fat clients in general IMO aren’t worth it. Their benefits require millions of users.

  • OADINC@feddit.nl · +47/-5 · 1 year ago

    This is the only way:

    if (condition) {
        code
    }
    

    Not

    if (condition)
    {
        code
    }
    

    Also, because of my dyslexia, I prefer variable & function names like this: ‘File_Access’. I find it easier to read than ‘fileAccess’.

  • Crisps@lemmy.world · +43/-1 · 1 year ago

    Dynamically typed languages don’t scale. Large code bases become hard to maintain, read and refactor.

    Basic type errors which should be caught at compile time become runtime errors or unexpected behavior.

  • hansl@lemmy.ml · +42/-1 · 1 year ago

    Hot take: people who don’t like code reviews have never been part of a good code review culture.