• 0 Posts
  • 74 Comments
Joined 1 year ago
Cake day: June 14th, 2023


  • Because if you disable browser autocomplete, what’s obviously going to happen is that everyone will keep a text file open with every single one of their passwords in it, so that they can copy-paste them in. So you prevent pasting, too. But if you prevent that, everyone will just choose terrible, weak passwords instead. Something like September2025! probably meets the ‘complexity’ requirement…



  • A bit like when we renamed all the master/slave terminology using different phrasing that’s frankly more useful a lot of the time, I think it’s about time we got rid of this “child” task nonsense. I suggest “subtask”. Then we can reword these books into something that no-one can make stupid jokes about any more, like “how to keep your subs in line” and “how to punish your subs when they’ve misbehaved”.


  • Well now. When enforcing password requirements at work, we’ve had to apply a bizarre combination of “you must have a certain level of complexity”, but also “you must be slightly vague about what the requirements actually are, because otherwise it lets an attacker tune a dictionary attack against you”. Which just strikes me as a way to piss off our users, but the security team say it’s a requirement, therefore it’s a requirement, no arguing.

    “One” special character is crazy; I’d have guessed that was a catch-all for the other strange password requirements (rough sketch of a checker just below):

    • can’t have the same character more than twice in a row
    • can’t be one of the ten thousand most popular passwords (which is mostly a big list of swears in Russian)
    • all whitespace must be condensed into a single character before checking against the other rules
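
    Just to show how daft the combination ends up, here’s roughly that rule set as a Python checker - COMMON_PASSWORDS, the length threshold and the function name are all invented for the sketch, not anyone’s real policy:

        import re

        # Stand-in for the ten-thousand-entry 'most popular passwords' list.
        COMMON_PASSWORDS = {"password", "123456", "qwerty"}

        def password_ok(password: str) -> bool:
            # Condense runs of whitespace into a single space before the other checks.
            condensed = re.sub(r"\s+", " ", password)

            if condensed.lower() in COMMON_PASSWORDS:
                return False                       # on the 'most popular' list
            if re.search(r"(.)\1\1", condensed):
                return False                       # same character more than twice in a row
            if not re.search(r"[^A-Za-z0-9 ]", condensed):
                return False                       # needs at least one special character
            return len(condensed) >= 12            # assumed minimum length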

    We’ve had customers’ own security teams asking us if we can enforce “no right click” / “no autocomplete” to stop their in-house users doing such things; I’ve been trying to push back on that as a security misfeature, but you can’t question the cult thinking.


  • We’ve found it to be the “least bad option” for DnD. Have a Discord window open for everyone to video chat in, have a browser window open with Owlbear Rodeo or Foundry / Forge for your tokens and character sheets, and it all works smoothly enough. The text chat is sufficient for sending the DM a private message, and for group chat to share art of the things you’ve just run into or to organise the next session.

    Completely agree that for anything “less transient”, the UX is beyond awful and trying to find anything historical is a massive PITA.




  • If you move past the ‘brute force’ method of solving into the ‘constraints’ level, it’s fairly easy to check whether there are multiple possible valid solutions. Using a programming language with a good sets implementation (Python!) makes this easy - for each cell, generate a set of all the values that could possibly go there. If there’s only one, fill it in and remove that value from all the sets in the same row/column/block. If there are no cells left that take only a single value, choose the cell with the fewest possibilities and try each of them, recursively. Even a fairly dumb implementation will do the whole problem space in milliseconds. This is a very easy problem to parallelize, too, but it’s hardly worth it for 9x9 sudokus - maybe if you’re generating 16x16 or 25x25 ‘alphabet’ puzzles, but you’ll quickly generate problems beyond the ability of humans to solve.
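
    Rough sketch of that in Python - the grid as a list of nine rows of nine ints, 0 for an empty cell; candidates() and solve() are just names picked for the example:

        def candidates(grid, r, c):
            """Set of digits that could legally go in empty cell (r, c)."""
            used = set(grid[r]) | {grid[i][c] for i in range(9)}
            br, bc = 3 * (r // 3), 3 * (c // 3)
            used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
            return set(range(1, 10)) - used

        def solve(grid):
            """Fill the grid in place; returns True if a solution was found."""
            empties = [(r, c) for r in range(9) for c in range(9) if not grid[r][c]]
            if not empties:
                return True
            # Most-constrained cell first: a forced cell (one candidate) gets picked up here for free.
            (r, c), vals = min(((cell, candidates(grid, *cell)) for cell in empties),
                               key=lambda kv: len(kv[1]))
            for v in vals:             # an empty candidate set is a dead end - the loop never runs
                grid[r][c] = v
                if solve(grid):
                    return True
            grid[r][c] = 0             # backtrack
            return False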

    The method in the article for generating ‘difficult’ puzzles seems mighty inefficient to me - generate a valid solution, and then randomly remove numbers until the puzzle is no longer ‘unique’. That’s a very calculation-heavy way of doing it - you need to re-check the whole puzzle for uniqueness at every step. It must be the case that a ‘unique’ sudoku has at least eight distinct digits among its starting clues, because otherwise there would be at least two solutions, with the two missing digits swapped over. Preferring to remove numbers equal to values that you’ve already removed ought to get you to a hard puzzle faster?
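
    For comparison, the remove-and-re-check approach is something like this (reusing candidates() from the sketch above; count_solutions() and thin_out() are invented names) - you can see where the cost goes, since every removal pays for a full solution count:

        import random

        def count_solutions(grid, limit=2):
            """Count solutions, giving up once `limit` have been found."""
            empties = [(r, c) for r in range(9) for c in range(9) if not grid[r][c]]
            if not empties:
                return 1
            (r, c), vals = min(((cell, candidates(grid, *cell)) for cell in empties),
                               key=lambda kv: len(kv[1]))
            total = 0
            for v in vals:
                grid[r][c] = v
                total += count_solutions(grid, limit - total)
                if total >= limit:
                    break
            grid[r][c] = 0
            return total

        def thin_out(solution):
            """Remove clues from a full solution while the puzzle stays unique."""
            puzzle = [row[:] for row in solution]
            cells = [(r, c) for r in range(9) for c in range(9)]
            random.shuffle(cells)
            for r, c in cells:
                kept = puzzle[r][c]
                puzzle[r][c] = 0
                if count_solutions(puzzle) != 1:   # the expensive bit, once per removal
                    puzzle[r][c] = kept            # that clue was load-bearing; put it back
            return puzzle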





  • You can write an unmaintainable fucking mess in any language. Rust won’t save you from cryptic variable naming, copy-paste code, a complete absence of design patterns, dreadful algorithms, large classes of security issues, unfathomable UX, or a hundred other things. “Clean code” is (mostly) a separate issue from choice of language.

    Don’t get me wrong - I don’t like this book. It manages to be both long-winded and facile at the same time. A lot of people seem to read it and take exactly the wrong lessons about maintainability from it. I think it would mostly benefit from being written in pseudocode - concentrating on any particular language might distract from the message. But having a few examples of what a shitfest looks like in a few specific languages might help.



  • My old job had a lot of embedded programming - hard real-time Z80 programming, for processors like Z800s and eZ80s to control industrial devices. Actually quite pleasant languages to do bit-twiddling in, and it’s great to be able to step through the debugger and see that what the CPU is running is literally your source code, opcode by opcode.

    Back when computers were very simple things - I’m thinking of a ZX Spectrum, where you can read directly from the input ports and write directly into the framebuffer, no OS in your way, just code - assembly made a lot of sense, and was even fun. On modern computers, it is not so fun:

    • x64 is just a fucking mess

    • you cannot just read and write what you want; the kernel won’t let you. So you’re going to be spending a lot of your time calling system routines.

    • 99% of your code will just be arranging data to suit the calling convention of your OS, and doing pointless busywork like stack pointer alignment. Writing some macros to do it for you makes your code look like C. Might as well just use C, in that case.

    Writing assembly makes sense sometimes - it’s required for embedded, you might be writing something very security-conscious where timing is essential, or you might be lining up some data for vectorisation where higher-level languages don’t have the constructs to get it right - but these are very small bits of code. You would be mad to consider “making the whole apple pie” in assembly.


  • Genesis is a different style of game tho, isn’t it? Diablo-like rather than third-person hack and slash?

    Love the series. Personally prefer 3 due to its more limited scope; the other two are great, but go on for a very long time, and I really can’t be bothered playing through the Portal-like bits again. Happy if 4 is the same length as 3.


  • The PS3 most certainly had a separate GPU - it was based on the GeForce 7800 GTX. Console GPUs tend to be a little faster than their desktop equivalents, as they share the same memory. Rather than the CPU having to send e.g. model updates across a bus to update what the GPU is going to draw in the next frame, it can change the values directly in the GPU’s memory. And of course, the CPU can read the GPU framebuffer and make tweaks to it - that’s incredibly slow on desktop PCs, but console games can do things like tone mapping whenever they like, and it’s been a big problem for the RPCS3 developers to make that kind of thing run quickly.

    The Cell cores are a bit more like the ‘tensor’ cores that you’d get on an AI accelerator than a full-blown CPU core. They can’t talk to main RAM directly, just exchange data between themselves - the CPU has to copy data in and out of them, and schedule any jobs that must run on them; they can’t do it themselves. They’re also a lot more limited in what they can do than a main CPU core, but they are very, very fast at what they can do.

    If you are doing the kind of calculations where you’ve a small amount of data that needs a lot of repetitive maths done on it, they’re ideal. Bitcoin mining or crypto breaking for instance - set them up, let them go, check in on them occasionally. The main CPU acts as an orchestrator, keeping all the cell cores filled up with work to do and processing the end results. But if that’s not what you’re trying to do, then they’re borderline useless, and that’s a problem for the PS3, because most of its processing power is tied up in those cores.
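
    Nothing to do with the actual Cell SDK, but the shape of that orchestrator pattern in Python terms is roughly this - spu_style_job() and orchestrate() are invented names, with a process pool standing in for the SPE cores:

        from multiprocessing import Pool

        def spu_style_job(chunk):
            """Stand-in for a small kernel: lots of repetitive maths over a little data."""
            return sum(x * x for x in chunk)

        def orchestrate(data, workers=6, chunk_size=4096):
            """The main-CPU role: slice the work up, keep the workers fed, combine the results."""
            chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
            with Pool(workers) as pool:
                return sum(pool.map(spu_style_job, chunks))

        if __name__ == "__main__":   # multiprocessing needs this guard on spawn-based platforms
            print(orchestrate(list(range(1_000_000))))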

    Some games have a somewhat predictable workload where offloading makes sense. Got some particle effects - some smoke where you need to do some complicated fluid-and-gravity simulations before copying the end result to the GPU? Maybe your main villain has a very dramatic cape that they like to twirl, and you need to run the simulation on that separately from everything else that you’re doing? Problem is, working out what you can and can’t offload is a massive pain in the ass; it requires a lot of developer time to optimise, when really you’d want the design team implementing that kind of thing; and slightly newer GPUs are a lot more programmable and can do the simpler versions of that kind of calculation both faster and much more in parallel.

    The Cell processor turned out to be an evolutionary dead end. The resources needed to work on it (expensive developer time) just didn’t really make sense for a gaming machine. The things it was better at, it just wasn’t quite good enough at - modern GPUs are Bitcoin monsters, far exceeding what the Cell can do, and if you’re really serious about crypto breaking then you probably have your own ASICs. Lots of identical, fast CPU cores are what developers want to work on - they’re much easier to reason about.


  • Yeah. Unless they’ve some ulterior motive for porting their RE Engine to iOS, this is insane. That kind of cash will barely fund a senior engineer for a month once you’ve paid out overheads as well.

    If they’re planning some kind of phone tie-in to the next Resi game, then it might have made sense to work the compatibility issues out. An app that runs on your phone and makes it “your phone in game”, so you can receive texts from the president’s daughter while shooting some definitely-not-Spaniards on your PlayStation - bit of an augmented-reality thing. Could be a laugh to have your phone be in control of a drone so that you can see round corners, while juggling the other things you’re doing? But probably mostly so that you can get dinged for microtransactions.



  • Any decent conductor is going to vary the beat based on how long it takes for sound to fill the venue in question. Beethoven’s choices for the music halls in Vienna might have made sense then, but not so much today.

    One of the things that’s always annoyed the conductors that I’ve worked with is that we always ignore the dynamics in his music. Beethoven’s markings are expressive, subtle. And we always play his stuff louder than indicated.