• 0 Posts
  • 74 Comments
Joined 3 years ago
Cake day: June 14th, 2023




  • In Rust you’re kind of stuck with it, but at the end of the day combined return types are just syntactic sugar for something a lot of languages can do. Even in plain old C there’s a pattern where you pass pointers to your return and/or error variables. In many languages you can return structs or similar. In some I’d argue it looks nicer than having to write Result<>; in Python or Swift, for example, you can just return a tuple by putting things in parentheses. (Of course you can still use something more explicit too. But if every function returned (result, error) by default and every call looked like result, error = fn(), I don’t think the explicit type would be necessary.)
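
    As a minimal sketch of that tuple convention in Swift (parseAge is made up here just for illustration, not anything from a real library):

        // Return a (result, error) tuple instead of throwing or using Result.
        func parseAge(_ text: String) -> (result: Int?, error: String?) {
            guard let age = Int(text), age >= 0 else {
                return (nil, "'\(text)' is not a valid age")
            }
            return (age, nil)
        }

        // Every call site destructures the pair and checks the error slot.
        let (age, error) = parseAge("42")
        if let error {
            print("failed: \(error)")
        } else if let age {
            print("age is \(age)")
        }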

    However, I don’t really know of any language where people prefer this over exceptions when exceptions are available. Even in C, some people used to wrap setjmp/longjmp in macros to implement exceptions. Exceptions have their problems, but people seem to be overwhelmingly in favor of them.

    Personally I like exceptions in languages that have some kind of built-in “finally” for functions, like defer in Swift. You can get proper error handling with a lot less typing in many cases, because letting exceptions pass through is fine as long as your defer blocks handle the cleanup. And if you do want to handle an exception, Swift also has optionals, a try? that transparently converts a throwing call into an optional that’s nil when an exception was thrown, and the nil-coalescing operator ??. Together they let you catch an exception and provide a default value on one line, instead of a 4-5 line try…catch/except block or an error-checking conditional for every call.
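
    Here’s a minimal sketch of that combination (loadTimeout, ConfigError, and the config path are all made up for illustration):

        import Foundation

        enum ConfigError: Error { case missing }

        func loadTimeout(from path: String) throws -> Int {
            defer {
                // Runs on every exit path, whether we return normally or throw,
                // so cleanup/logging doesn't need its own catch block.
                print("finished reading \(path)")
            }
            guard FileManager.default.fileExists(atPath: path) else {
                throw ConfigError.missing
            }
            return 30
        }

        // try? turns a thrown error into nil, and ?? supplies the fallback,
        // so "catch and use a default" fits on one line.
        let timeout = (try? loadTimeout(from: "/etc/myapp.conf")) ?? 10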



  • It would have been crazy in the CRT age, and maybe in the early LCD age. But then we got screens that need significant electronics just to show an image, built into smart internet-connected TVs that sometimes also have microphones and cameras built in. At the same time we more or less dropped actual TV and switched to streaming, where the streaming provider automatically and necessarily knows exactly what you watch, when you’re awake, what languages you speak, and so on.

    Which IMHO makes it even crazier to say. Like, why would a sane person claim any of this is secret?





  • It depends on the task. As an extreme example, I can get AI to create a complete application in a language I don’t know. There’s no way that’s not more productive than first learning the language to the point where I can build apps in it myself. You just have to pick something simple enough for the AI.

    Of course the opposite extreme also exists. I’ve found that when I demand something impossible, AI will often just try to implement it anyway. It can easily get into an endless cycle where it keeps optimistically declaring that it has identified the issue and fixed it with one more small change. That includes cases where the real problem is a bug in the underlying OS or similar. You can waste a huge amount of time going down an entirely wrong path if you don’t realize that an idea doesn’t work.

    In my real work, neither of these extremes really happens, so the actual impact is much smaller. A lot of my work is not coding in the first place. And I’ve been writing code since I was a little kid, for almost 40 years now, so even the fast scaffolding I can do with AI is not that exciting; I can do that pretty quickly without AI too. When AI coding tools appeared, my bosses started asking if I was fast because I was using one. No, I’m fast because some people ask for a new demo every week. That kind of speed causes the same problems later, too.

    But I also think we all still need to learn how to use AI properly. That applies to any tool, but I think it’s harder here than with other tools. If I try to use a hammer on something other than a nail, it won’t enthusiastically tell me it can do it with just one more small change. AI tools absolutely will, and it’s easy to just let them try, because it only takes a few seconds to see what they come up with. But that’s a trap that leads to those productivity-wasting spirals. Especially if the result somehow still works at first, so we have to fix it half a year later instead of right away.

    At my work there are some other things that I feel limit the productivity potential of AI tools. First of all, we’re only allowed to use a very limited set of tools, some of them made in-house. Then we’re not really allowed to integrate them into our workflows beyond the part where we write code. For example, I could trivially write an MCP server that talks to our (custom, in-house) CI system, and it would actually increase my productivity, because I could very often save a little time by telling an AI to find builds for me for integration or QA work. But it’s not allowed. We’re all being pushed to use AI, while the company makes it really difficult at the same time.

    So when I play around with AI in my spare time, I actually do feel like I’m getting a huge boost. Not just because I can use a Claude model instead of the ones allowed at work, but also because of basic things like being able to turn on AI in Xcode at all when working on software for Apple platforms. On my work MacBook I can’t enable any Apple AI features, so even tab completion is worse. In other words, the realities of working on serious projects at a serious company with serious security policies can also kill any potential productivity boost from AI. They basically expect us to be productive with only the features our non-developer CEO likes, even though he doesn’t have to follow any of our development processes…