Refactoring is something that should be constantly done in a code base, for every story. As soon as people get scared about changing things the codebase is on the road to being legacy.
Only if the code base is well tested.
Edit: always add tests when you change code that doesn’t have tests.
And also try to make tests that don’t have to change if you refactor in future (although there are some exceptions)
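As a minimal pytest-style sketch of what that looks like (apply_discount is a hypothetical function): the tests pin down observable behaviour through the public interface only, so an internal refactor shouldn’t force them to change.

```python
import pytest

def apply_discount(price: float, customer_is_member: bool) -> float:
    """Hypothetical example: members get 10% off."""
    return price * 0.9 if customer_is_member else price

# These tests only touch the public interface, so rewriting the internals
# of apply_discount later should not require touching them.
def test_members_get_ten_percent_off():
    assert apply_discount(100.0, customer_is_member=True) == pytest.approx(90.0)

def test_non_members_pay_full_price():
    assert apply_discount(100.0, customer_is_member=False) == pytest.approx(100.0)
```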
Doesn’t everybody agree with this? I really never thought of it as a hot take.
I highly doubt most corps do
Corps != people.
People just pass the buck and nobody stands up for what is most correct
This
Most calls I have at work are like group therapy sessions, as everyone has ideas of what they believe is correct, but they know if they keep pressing with management or take the time to do what is right, it won’t go well for them.
This is coming from a guy who lasted a year and a half in the office. Sounds like it’s a systemic issue…
Today I removed code from a codebase that was added in 2021 and never ever used. Sadly, some people are as content to litter in their repo as they are in the woods.
Our company motto is: “leave it cleaner than you found it”
Yes please. Many times when I add a feature I end up refactoring some of the code first to better accommodate it.
thank_you_michael_scott.gif
We used to call this ‘Code is Cheap’ at my last job - you’re spot on about the value of it
Until you know a few very different languages, you don’t know what a good language is, so just relax on having opinions about which languages are better. You don’t need those opinions. They just get in your way.
Don’t even worry about what your first language is. The CS snobs used to say BASIC causes brain damage and that us '80s microcomputer kids were permanently ruined … but that was wrong. JavaScript is fine, C# is fine … as long as you don’t stop there.
(One of my first programming languages after BASIC was ZZT-OOP, the scripting language for Tim Sweeney’s first published game, back when Epic Games was called Potomac Computer Systems. It doesn’t have numbers. If you want to count something, you can move objects around on the game board to count it. If ZZT-OOP doesn’t cause brain damage, no language will.)
Please don’t say the new language you’re being asked to learn is “unintuitive”. That’s just a rude word for “not yet familiar to me”. So what if the first language you used required curly braces, and the next one you learn doesn’t? So what if type inference means that you don’t have to write int on your ints? You’ll get used to it. You learned how to use curly braces, and you’ll learn how to use something else too. You’re smart. You can cope with indentation rules or significant capitalization or funny punctuation. The idea that some features are “unintuitive” rather than merely temporarily unfamiliar is just getting in your way.
Please don’t say the new language you’re being asked to learn is “unintuitive”. That’s just a rude word for “not yet familiar to me”…The idea that some features are “unintuitive” rather than merely temporarily unfamiliar is just getting in your way.
Well, I mean… that’s kinda what “unintuitive” means. Intuitive, i.e. natural/obvious/without effort. Having to gain familiarity sorta literally means it’s not that, thus unintuitive.
I don’t disagree with your sentiment, but these people are using the correct term. For example, Python’s len(object) instead of obj.len() trips me up to this day, because 99% of the time I think [thing] -> [action], and most language constructs encourage that. If I still regularly type an object name, and then have to scroll the cursor back over and type “len(”, I can’t possibly be using my intuition. It’s not the language’s “fault” - because it’s not really “wrong” - but it is unintuitive.
If you only know C and you’re looking at Python, the absence of curly braces on code blocks is temporarily unfamiliar to you.
But if you only know Python and you’re looking at C, the fact that indentation doesn’t matter is temporarily unfamiliar to you.
Once you learn the new language, it’s not unfamiliar to you anymore.
“Unintuitive” often suggests that there’s something wrong with the language in a global sense, just because it doesn’t look like the last one you used — as if the choice to use (or not use) curly braces is natural and anything else is willfully perverse on the part of the language designer.
“Unintuitive” often suggests that there’s something wrong with the language in a global sense
I mean only if you consider “Intuition” to be some monolithic, static thing that’s also identical for everyone. Everyone has their own intuition, and their intuition changes over time. Intuition is akin to an opinion - it’s built up based on your own past experiences.
just because it doesn’t look like the last one you used — as if the choice to use (or not use) curly braces is natural and anything else is willfully perverse on the part of the language designer.
I don’t think it’s that deep. All people mean when they say it is that “[thing] defied my expectation/prior experience”. It’s like saying “seafood tastes bad”. There’s an implicit “to me” at the end; it’s obvious I’m not saying “seafood factually tastes bad, and anyone who says they like it is wrong or lying”.
No programming language is “natural/obvious/without effort”.
You could say that about anything. Of course you have to learn something the first time and it’s “unintuitive” then. Intuition is literally an expectation based on prior experience.
Intuitive patterns exist in programming languages. For example, most conditionals are denoted with “if”, “else”, and “while”. You would find it intuitive if a new programming language adhered to that. You’d find it unintuitive if the conditionals were denoted with “dnwwkcoeo”, “wowpekg cneo”, and “coebemal”.
But there are languages that require varying degrees of effort to become natural. Something like Malbolge will pretty much never be natural while something like Python can become natural to you in a few days.
Yeah. The original comment was about programmers who say that a language is “unintuitive” because it doesn’t look like another language they know.
Languages also have inner consistency. E.g. the mentioned python len function is inconsistent with the rest of the same language - and that is a statement that is true in itself, without an external reference point.
Yes, I agree that the len() thing in Python, and inconsistency in general, is bad. But pretty much all popular languages have many inconsistencies.
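For what it’s worth, the inconsistency is mostly on the surface: len() is a free function, but it just delegates to the object’s __len__ method. A small illustrative Python sketch (Playlist is a made-up class):

```python
# len() is a built-in function, but it simply delegates to __len__,
# so the method-style spelling does exist under the hood.
class Playlist:
    def __init__(self, tracks):
        self.tracks = list(tracks)

    def __len__(self):
        return len(self.tracks)

songs = Playlist(["a", "b", "c"])
print(len(songs))        # 3 -- the idiomatic spelling
print(songs.__len__())   # 3 -- what len() calls behind the scenes
```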
Idk, I don’t see a problem with saying a new language is unintuitive. For example, in js I still consider the horrible type coercion and the “fix” with the triple-equals very unintuitive indeed. On the flip side, when learning C# I found the multiple ways of making comparisons to be pretty intuitive, and not footguns.
Please don’t say the new language you’re being asked to learn is “unintuitive”. That’s just a rude word for “not yet familiar to me”.
Yeah. I’ve written in six or so different languages and am using Go now for the first time. Even then, I’m trying to be optimistic and acknowledge things are just different or annoying for me. It doesn’t mean anything is wrong with the language.
I still think ruby is a bad language, even though I agree with you
I found ruby horribly confusing until I got over the initial learning bump.
Now I love it. It really is lovely. In terms of design that is. Not sure about the monkeypatching
I really don’t like how rails brings things into scope and you just have no idea what’s there or how it got there unless you know all of the conventions. I guess that’s a rails issue and not ruby though.
I learned in python and C++ so I’m biased towards things that are extremely specific. Definitely doesn’t mean ruby is necessarily bad, I just don’t like it.
I’m one of those weirdoes who likes ruby and has never used rails, so no opinion there.
This is very true! Languages being unintuitive also becomes less of an issue the more languages you look into. There will be many concepts that multiple languages share, since ultimately they are all trying to do similar things, and the more you learn, the more you will recognize, making it easier to get into even more languages.
Until you know a few very different languages, you don’t know what a good language is, so just relax on having opinions about which languages are better. You don’t need those opinions. They just get in your way.
This is wise advice for ANY domain of knowledge.
Lotta people get a little fragment of knowledge on something, then shut down their brain and stop accepting new input. But life is change, and to be able to change and learn new things you need to keep your mind open. Being able to relax on having opinions and keep learning and moving along is very important.
Dynamic typing is insane. You have to keep track of the type of absolutely everything, in your head. It’s like the assembly of type systems, except it makes your program slower instead of faster.
Nothing like trying to make sense of code you come across and all the function parameters have unhelpful names, are not primitive types, and have no type information whatsoever. Then you get to crawl through the entire thing to make sense of it.
I’m not sure that’s a hot take outside early uni programmers.
You can do typing through the compiler at build time, or you can do typing with guard statements at run time. You always end up doing typing tho
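A rough Python sketch of the two options (hypothetical function names; the static route assumes you run a checker such as mypy):

```python
# Static route: the annotation lets a checker like mypy flag bad calls
# before the program ever runs.
def total_cents(prices: list[int]) -> int:
    return sum(prices)

# Runtime route: without a checker, the same guarantee has to be
# enforced with guard statements every time the function is called.
def total_cents_guarded(prices):
    if not isinstance(prices, list) or not all(isinstance(p, int) for p in prices):
        raise TypeError("prices must be a list of ints")
    return sum(prices)
```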
I like it in modern PHP, it’s balanced. As strict or as loose as you need in each context.
Typed function parameters, function returns and object properties.
But otherwise I can make a DateTime object become a string and vice-versa, for example.
What happens when you coerce a string to a date-and-time but it’s not valid?
Where I’m from (Rust), error handling is very strict and very explicit, and that’s how it should be. It forces you to properly handle everything that can potentially go wrong, instead of just crashing and looking like a fool.
My point is, you won’t ever try. You’d only use “weak” variables inside the function you’re working on.
It’s explicit when you absolutely need it to be, when the function is being called and you need to know what arguments to pass and what it’ll return
A string being parsed as a date-time is presumably user input, which is potentially invalid.
When you say user, you mean a user of a function? In that case PHP would throw a TypeError, and presumably only happens when developing/testing.
If you mean in production, like when submitting a form, an Exception may be thrown. In which case you catch it and return some error message to the user saying the date string is invalid.
By “user” I mean the person who is using the application.
Using exceptions for handling unexceptional errors (like invalid user input) is a footgun. You don’t know when one might be raised, nor what type it will have, so you can easily forget to catch it and handle it properly, and then your app crashes.
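As a hedged Python sketch of the alternative (parse_user_date is a hypothetical helper): validate the user’s string at the boundary and turn the exception into an explicit error value, instead of letting it escape and crash:

```python
from datetime import datetime

# Convert the exception into an explicit (value, error) pair at the
# input boundary, so callers are forced to deal with the failure case.
def parse_user_date(raw: str):
    try:
        return datetime.strptime(raw, "%Y-%m-%d"), None
    except ValueError:
        return None, f"'{raw}' is not a valid date (expected YYYY-MM-DD)"

date, error = parse_user_date("2024-02-30")
if error:
    print(error)  # shown to the user instead of crashing
```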
you can easily forget to catch it and handle it properly
Even if I coded the form by hand and that happened, it’s on me, not on the programming language.
But I don’t, I use a framework which handles all that boilerplate validation for me.
If you don’t add comments, even rudimentary ones, or you don’t use a naming convention that accurately describes the variables or the functions, you’re a bad programmer. It doesn’t matter if you know what it does now; just wait until you need to know what it does in 6 months and you have to stop what you’re doing and decipher it.
However, engineers who rely solely on comments to explain their code are bad at writing readable code.
Possibly, but I’d prefer this scenario over the other.
Self-documenting code is infinitely more valuable than comments, because then the code spreads with its use, whereas the comments stay behind.
I got roasted at my company when I first joined because my naming conventions are a little extra. That lasted for about 2 months before people started to see the difference in legibility as the code started to change.
One of the things I tell my juniors is, “this isn’t the 80s. There isn’t an 80 character line limit. The computer doesn’t benefit from your short variable names. I should be able to read most lines of code as a single non-compound sentence in English with only minor tweaks and the English sentence should be what is happening in most of those lines of code.”
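Something like this hypothetical before/after, sketched in Python (the users and access_level names are made up):

```python
# Terse: correct, but the reader has to decode it.
def chk(us, ml):
    return [u for u in us if u.access_level >= ml]

# Descriptive: each line reads roughly as a plain English sentence.
def users_with_minimum_access_level(users, minimum_level):
    return [user for user in users if user.access_level >= minimum_level]
```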
Absolutely agree, yeah
80 character limit is helpful though when you need to have many files open at a time. Maybe 100 is more reasonable. Fighting indentation is important too.
I, too, remember the days before ultra high definition ultra wide monitors.
I thought this argument was bogus in the 90s on a 21" CRT and the argument has gotten even less valid since then. There are so many solutions to these problems that increase productivity for paltry sums of money it’s insane to me that companies don’t immediately purchase these for all developers.
You have a point, devs should be using multiple large monitors. I will often need to have 3-4 files open at once, plus some browser windows. Having some limit on line length helps with this and for fighting code complexity.
The most important thing is comprehension. If something is too long and the length makes it less readable then it is too long.
But if having 3-4 files open at the same time makes it harder for you to comprehend a single file because you can’t get the full picture, that’s on you.
I have a massive ultrawide and I still 100% believe in line limits. Long lines are harder to read in general but even with a limit of 100 I frequently have 3 files opened next to each other and I can’t read entire lines easily. Line limits just aren’t about the size of the monitor and I can’t believe people still say that.
I understand the concern, but readability and comprehension are way more important than line length. If the length impairs readability, it’s too long. Explicit limits are terrible. Guidelines, fine.
Ultimately, you do you. I still think you’re crazy and I think your argument is poor.
Yes a strict 80 character limit would be bad but that’s why modern formatters aren’t strict and default to 90-100.
I’ve pretty much never seen code that would have been more readable had the lines been longer than that.
My main argument is still that shorter lines are more readable. I just think it’s a bullshit argument to say that long lines are fine because large monitors exists. I don’t see how that makes me crazy.
See, I think length limits and readability are sometimes at odds. To say that you 100% believe in length limits means that you would prefer the length limit over a readable line of code in those situations.
I agree that shorter lines are often more readable. I also think artificial limits on length are crazy. Guidelines, fine. Verbosity for the sake of verbosity isn’t valuable… But to say never is a huge stretch. There are always those weird edge cases that everyone hates.
There’s no such thing as self documenting code, unless every method and variable name has the word “because” in it.
Anyone can read what the code does. The comments are there to answer why it does what it does the way it does.
Why is invariably lost to time, if it’s not committed to a comment here and there.
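A small, hypothetical Python illustration: the code already states the what, and only a comment can record the why:

```python
import random
import time

def submit_invoice(invoice_id: str) -> bool:
    """Stand-in for a flaky upstream call (hypothetical)."""
    return random.random() > 0.3

# WHY three attempts: the (hypothetical) upstream billing API tends to
# drop the first request right after a deploy, and two retries proved
# enough in practice. The code alone could never tell you that.
for attempt in range(3):
    if submit_invoice("INV-001"):
        break
    time.sleep(0.5)
```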
This is a pretty ridiculous position to take but if you believe it then I’m glad you write the comments you do.
There is an argument that commenting on the lack of expected code is valuable for this reason, but it certainly isn’t true in all situations.
We can agree on “not all situations”. Often the answer to “why did we do it this way?” is blazingly obvious, and no one wants a comment.
But we all know that sometimes the “why” isn’t obvious at all.
As far as I can tell, developers who do believe in self-documenting code either haven’t learned the power of “why?”, or they have a secret technique for encoding “why?” into their code structure.
If it’s the second thing, I would be delighted to be brought in on it. (No sarcasm. Maybe I’ve missed a trick here.)
I’ll answer in a couple of different ways.
- If I am writing library code, my why is that you have an end use; I don’t care why you use it and you don’t care why I wrote it. You only care about what my code does so you can achieve your why.
- If we are working on the same code, we have different whys but the same what. Then your comment as to why isn’t the same as mine, which makes the comment incorrect.
- We are looking at a piece of code and you want to know how it works, because the stated what is wrong (bugs). This might be the “why” you are looking for, but I call this a “how”. This is the case where self-documenting code is most important. Code should tell a second programmer how the code achieves the what without needing an additional set of verbose comments. The great thing about code is that it is literally the instructions on the how. The problem is conveying the how to other programmers.
There are three kinds of how: self-evident ones; complex hows requiring multiple levels of abstraction and lots of code; and complex, short hows that are not apparent.
The third is where most people get into trouble. Almost all of these cases of complexity can be solved with only a single layer of abstraction and achieve easily readable self documenting code. The problem for many cases is that they start as a one off and people are lousy at putting in the work on a one-off solution. Sometimes the added work of abstraction, and building a performant abstraction, makes a small task a large one. In these cases comments can make sense.
Sometimes these short, complex how’s require specialists. Database queries, performant perl/functional queries, algorithmic operations, complex compile time optimized templates (or other language specific optimizations) and the like are some of the most common examples of these. This category of problem benefits most from a well defined interface with examples for use (which might be comments). The “how” of these are not as valuable for the average developer and often require specialist knowledge regardless of comments for understanding how they work. In these cases what they do is far more valuable than how or why.
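A rough Python sketch of that idea (hypothetical names): hide the dense “how” behind a small, well-named interface whose docstring carries an example of use:

```python
from collections import defaultdict

def group_by_extension(paths):
    """Group file paths by their extension.

    Example:
        >>> group_by_extension(["a.py", "b.py", "notes.txt"])
        {'py': ['a.py', 'b.py'], 'txt': ['notes.txt']}
    """
    groups = defaultdict(list)
    for path in paths:
        groups[path.rsplit(".", 1)[-1]].append(path)
    return dict(groups)
```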
You’ve given a lot of consideration to modern recently created code. But the best modern recent code goes on to become someone’s legacy nightmare. (The most fit and correct code survives long after anyone really wishes it would.)
In high quality legacy nightmare code “why” is lost, unless someone wrote it down.
I’ve been on both sides of that mystery. “Why didn’t they just do X?”
- Sometimes it was because X didn’t exist yet, or wasn’t mature enough.
- Sometimes it was because X is fundamentally the wrong solution, in a very subtle way.
There are two ways to know the difference:
- Painful trial and error.
- A comment (or document) answering “why”.
I prefer the second way, but I happily charge more for the first way.
This is why code review exists. Writers can’t always see what’s wrong with their work, because they have the bias of knowing what was intended. You need a reader to see it with fresh eyes and tell you what parts are confusing.
That’s not to say you shouldn’t try to make it readable in the first place. But reviewing and reading other people’s code is how you get better.
Let’s take this one step further. I should be able to get the core ideas in your code by comments and cs 101 level coding (eg basic data structures, loops, and if/then).
Programming is a lot less important than people and team dynamics
People can always be replaced, they’re irrelevant.
The code can always be rewritten, it is irrelevant.
And it will change next week anyway. 😁
Sure try to replace the one or two people that hold the whole team together. I’ve seen it a couple times, a good team disintegrates right after one or two key people leave.
Also, if you replace half the team, prepare for some major learning time whenever the next change is being made. Or after the next deployment. 🤷♂️
Tools that use a GUI are just as good (if not better) than their CLI equivalents in most cases. There’s a certain kind of dev that just gets a superiority complex about using CLI stuff.
The big thing you can do from the command line is script it.
Indeed, the problem with gui apps is when you can’t script them!
I always loved alfred on osx, then loved scripting rofi on linux, only to come back to osx years later and find alfred can’t be invoked with stdin options. It’s a damn shame…
I used to think something like this when I was younger. I spent an inordinate amount of time looking for good gui versions of cli tools. I have come to understand that this is not usually the case and cli tools are more convenient much of the time. I would not classify this as superiority complex, unless I’m being a jerk about it. I don’t care what you use, I just use whatever has the lowest barrier to entry with the most standardization, which is usually the original cli tool.
That said, jetbrains git integration is awesome.
It also depends on the specifics — in many cases when a GUI is just a wrapper over the CLI tool, it is instructive to learn the CLI, similarly how you are a better programmer if you know about at least a layer beneath the one you are programming at (e.g. you can reason about this usage of hashmap because you roughly know what it does).
It is probably the most visible in git, but if you can only do commit and push from a GUI, just please learn the CLI as well. You don’t have to use it, but understanding it is important and the GUI may abstract away too much from you.
I agree only when your job function is specifically geared around those tools… Otherwise high quality guis are more valuable.
Just because I can do everything in gdb that I can do in Visual Studio doesn’t mean 99% of debugging tasks aren’t easier and faster in Visual Studio. Now if my job were specifically aimed at debugging/reverse engineering, there are certain things that gdb does better on the CLI… but for most software devs, CLI gdb isn’t valuable.
There are some massive intrinsic advantages of the CLI though, that apply for everyone, not just leetcoders:
- The terminal can remember everything you ever did. Forgotten the command you wrote 2 months ago? You can do a search for it with a tool like fzf and run the exact same command again.
- Communicating with others. GUI programs require step-by-step instructions, often accompanied by screenshots, while CLI commands can just be copy/pasted.
- Combining programs together. There are a few different techniques for combining CLI programs to search/format output, use secrets without ever having them in the clipboard or on disk, monitor something frequently/constantly, etc.
So while I agree with you that there’s plenty of elitism around the CLI, you do yourself a disservice if you try to avoid it.
Just no. CLI can be automated, which makes it superior. It’s not a superiority complex, it’s a fact. I’m not a minimum-wage worker pushing buttons I don’t understand. I’m not a technician who learnt your shitty software to do the most basic tasks.
My gold standard app is a CLI where I have the option to visually add the flags. I’m thinking of the ytdlp-gui type programs.
Which yt-dlp GUI do you use?
On windows I was using youtube-dl-wpf
That’s the gold standard as far as I’m concerned. Haven’t used the ytdlp-gui yet, but it’s simple stupid… I might want a few more switches to turn on (specifically extract audio/subtitles).
Aside from automation, CLI can support significantly more complicated apps reliably. It can also be tested more reliably.
GUIs are better for anything simple, and good UX designers can make a moderately complex one, but anything like server administration/git/configs is 100x better on the CLI
This depends a lot on the GUI and the tool. Some cli tools are great alone or for scripting, others benefit from the extra attention to ux and exposure of options that a GUI can offer
For git in particular, I encourage juniors to learn and use the CLI. I find that GUI git clients often do some or all of the following:
- Use non-git terminology that ends up being confusing. “Sync” comes to mind as a frequent offender; I can think of several incompatible things it could refer to.
- Ignore the useful ability to stage your changes
- Don’t permit or encourage a review of the changes
- Implement only the basics and make remediation of branching issues difficult
In the worst case, I’ve seen people end up using the git GUI like a “save” button, blindly committing and pushing the current state of their code, including to-be-removed print statements and other cruft. Yeah, git cli is a bit complex compared to that, but you gain a lot for that added complexity.
That said, I’ve definitely jumped into a git GUI from time to time just for a visualization of whatever branching snafu I’m trying to untangle. None of the above invalidates GUIs if you take care to still understand the underlying tool properly!
I don’t know, a tool we use at my work has a git GUI integrated, and it breaks all the time, lol.
My take is that no matter which language you are using, and no matter the field you work in, you will always have something to learn.
After 4 years of professional development, I rated my knowledge of C++ at 7/10. After 8 years, I rated it 4/10. After 15 years, I can confidently say 6.5/10.
Amen. I once had an interview where they asked what my skill is with .net on a scale of 1 - 10. I answered 6.5 even though at the time I had been doing it for 7 years. They looked annoyed and said they were looking for someone who was a 10. I countered with nobody is a 10, not them or even the people working on the framework itself. I didn’t pass the interview and I think this question was why.
Your mistake was giving them an answer instead of asking how the scale was set up before giving them a number. Psychologically, by answering first you established that the question was valid as presented, and it anchored their expectations as the ones you had to live up to. By questioning it, you get to anchor your response to a different point.
Sometimes questions like this can be used to see how effective a person will be in certain lead roles. Recognizing, explaining and disambiguating the trap question is a valuable lead skill in some roles. Not all mind you… And maybe not ones most people would want.
But most likely you dodged a bullet.
I was kicking myself for days afterwards for not doing exactly as you said. I’m not good at these types of interview questions in the moment. Also before that was the tech interview classic of asking a bunch of random trivia questions, which I actually nailed. Also this was for a dev II position.
I definitely dodged a bullet though. Some months later I got hired at a different company for 30k more.
Did your interviewer profess to be a 10 in .net? Otherwise how would they know what that looks like? I was told that I’m unsuitable as a PLC programmer because I had never used their software before. That I write the algorithms that go into a PLC was not sufficient. These people are looking for unicorns but find donkeys everywhere they look.
He claimed everyone at dev II and higher was a 10 in their company. Complete Dunning–Kruger. I have no doubt I could’ve understood and worked on whatever software they have.
Ouch! Red flag. Sucks to get rejected, but maybe you dodged a bullet.
That’s a good way of looking at it.
As a hiring manager, I can understand why you didn’t get the job. I agree that it’s not a “good” question, sure, but when you’re hiring for a job where the demand is high because a lot is on the line, the last thing you’re going to do is hire someone who says their skills are “6.5/10” after almost a decade of experience. They wanted to hear how confident you were in your ability to solve problems with .NET. They didn’t want to hear “aCtUaLlY, nO oNe Is PeRfEcT.” They likely hired the person who said “gee, I feel like my skills are 10/10 after all these years of experience of problem solving. So far there hasn’t been a problem I couldn’t solve with .NET!” That gives the hiring manager way more confidence than something along the lines of “6.5/10 after almost a decade, but hire me because no one is perfect.” (I am over simplifying what you said, because this is potentially how they remembered you.)
Unfortunately, interviews for developer jobs can be a bit of a crap shoot.
They wanted to hear how confident you were in your ability to solve problems with .NET. They didn’t want to hear “aCtUaLlY, nO oNe Is PeRfEcT.”
Yeah, I mean no shit, with hindsight it’s obvious they were looking for the 10/10 answer. I was kicking myself for days afterwards because that’s the only question I felt I answered “wrong”. Tech interviews are such a shit show though that you can start to overthink things as an interviewee. Also, an important aspect of the question that I didn’t mention was they specified “1 is completely new, and 10 is working at Microsoft on the .net framework itself”. The question caught me off guard. I have literally no idea what working at Microsoft on the framework is like. In that context being a 10/10 felt like being among the most knowledgeable person of c# of all time. Could I work on the framework itself? Idk maybe, I’ve never thought about it, I don’t even know what their day to day is. I should’ve just said 10/10 though, it was a dev II position to work on a web app, it wouldn’t have been that hard.
10 is working at Microsoft on the .net framework itself.
An interesting spin. I like to imagine that you could have answered “10/10,” taken a pause, and declared that you’re leaving the interview early to apply directly to Microsoft to “work on the .net framework itself.” 🤓
dev II position to work on a web app
”we want you to tell us that you’re over qualified for the role”
Hahaha man I feel you
The mark of a true master.
Most modern software is way too complex for what it actually does.
Python is only good for short programs
You can always solve a problem by adding more layers of abstraction. Good software design isn’t to add more layers of abstractions, it’s to solve problems with the minimum amount of abstractions necessary while still having maintainable, scalable code.
There are benefits to abstraction but they also have downsides. They can complicate code and make code harder to read.
SPAs are mostly garbage, and the internet has been irreparably damaged by lazy devs chasing trends just to build simple sites with overly complicated FE frameworks.
90% of the internet actually should just be rendered server side with a bit of js for interactivity. JQuery was fine at the time, Javascript is better now and Alpinejs is actually awesome. Nowadays, REST w/HTMX and HATEOAS is the most productive, painless and enjoyable web development can get. Minimal dependencies, tiny file sizes, fast and simple.
Unless your web site needs to work offline (it probably doesn’t), or it has to manage client state for dozen/hundreds of data points (e.g. Google Maps), you don’t need a SPA. If your site only needs to track minimal state, just use a good SSR web framework (Rails, asp.net, Django, whatever).
I do a lot of PHP, so naturally my small projects are PHP. I use a framework called Laravel, and while it is possible to use SPAs or other kinds of shit, I usually choose pure SS rendering with a little bit of VueJS to make some parts reactive. Other than that, it is usually, just pure HTML forms for submitting data. And it works really well.
Yeah yeah, they push the Livewire shit, which I absolutely hate and think is a bad idea, but nobody is forcing me, so that’s nice.
I’m still hoping for browsers to become some kind of open standard application environments and web apps to become actual apps running on this environment.
How are browser not that already? What’s missing?
They are an open standard and used to make many thousands of apps.
I’m thinking more along the lines of ubiquitous offline-first PWAs. Imagine Google Docs running offline in a browser and being able to edit local docs directly. I guess secure file system access is one of the major road blocks, though I’m not sure of the challenges associated with coming up with a standard for this.
Counter hot take, I do actually like Blazor but it has limitations due to how immature web assembly still is. It also does not solve the problem of being a big complex platform that isn’t needed for small simple apps. Of the half dozen projects I’ve written in Blazor, I’d personally re-write 3 or so in just Razor Pages with Htmx.
Server-side works better; WebAssembly and fat clients in general imo aren’t worth it. Their benefits require millions of users.
This is the only way:

```
if (condition)
{
    code
}
```

Not

```
if (condition) {
    code
}
```
Also, because of my dyslexia, I prefer variable & function names like this: ‘File_Access’. I find it easier to read than ‘fileAccess’.
Dynamically typed languages don’t scale. Large codebases become hard to maintain, read, and refactor.
Basic type errors which should be found in compilation become runtime errors or unexpected behavior.
Hot take: people who don’t like code reviews have never been part of a good code review culture.
Agile in its current implementation, with excessive meetings, wastes more time than the mistakes it tries to avoid.