• 0 Posts
  • 58 Comments
Joined 2 years ago
Cake day: August 7th, 2023



  • Agreed — this is overall a really, really good thing for consumers. Now that my MacBook Pro, iPad Pro, and iPhone Pro all use USB-C, it’s trivial to swap devices between them and generally they all just work. The USB-C Ethernet adaptor I have for my MBP works with my iPad Pro and iPhone Pro. As do Apple’s USB-A/USB-C/HDMI adaptors. And my USB-C external drives and USB sticks. And my PS5 DualSense controllers. And the 100W lithium battery pack with 60W USB-PD output. Heck, even the latest Apple TV remote is USB-C.

    AFAIK, this is the first time ever that there is one single connector that works across their entire lineup of devices. Even if you go back to the original Apple 1 (when it was the only device they sold), it had several different connector types. Now we have one connector to rule them all, and while the standard has its issues, it’s quite a bit better than the old days when everything had a different connector.


  • It’s worth remembering however that there weren’t a lot of options for a standardized connector back when Apple made the first switch in 2012. The USB-C connector wasn’t published for another two years after Lightning was released to the public. Lightning was much better than the then-available standard of micro USB-B, allowed for thinner phones and devices, and was able to carry video and audio (which was only achieved on Android phones of the time with micro USB-B by violating the USB standard).

    Also worth noting here is that the various Macs made the switch to USB-C before most PCs did, and the iPad Pro made the switch all the way back in 2018 — long before the EU started making noise about forcing everyone to use USB-C. So Apple has a history of pushing USB-C, at least for devices where there wasn’t a mass market of bespoke docks that people were going to be pissed off at having to scrap and replace.

    I’ll readily agree we’re in a better place today — I’m now nearly 100% USB-C for all my modern devices (with the one big holdout being my car — even though it was an expensive 2024 EV model, it still came with USB-A. I have several USB-A to USB-C cables in the car for charging small devices, but can’t take advantage of USB-PD to charge and run my MacBook Pro). But I suspect Apple isn’t as bothered by this change as everyone thinks they are. They finally get to standardize on one connector across their entire lineup of devices for the first time ever, and don’t have to take the blame for it. Sounds like a win-win to me.


  • I’m still of the opinion that Apple benefitted from this legislation, and that they know it. They never fought this decision particularly hard — and ultimately, it’s only going to help Apple move forward.

    I’m more than old enough to remember the last time Apple tried changing connectors from the 30-pin connector to the Lightning connector. People (and the press) were apoplectic that Apple changed the connector. Everything from cables to external speakers to alarm clocks and other accessories became useless as soon as you upgraded your iPod/iPhone — the 30-pin connector had been the standard connector since the original iPod, and millions of devices used it. Apple took a ton of flak for changing it — even though Lightning was a pretty significant improvement.

    That’s not happening this time, as Apple (and everyone else) can point to and blame the EU instead. If Apple had made this change on their own, they would likely have been pilloried in the press (again) for making so many devices and cables obsolete nearly overnight — but at least this way they can point at the EU and say “they’re the ones making us do this” and escape criticism.


  • Depends on what you mean by “back in the day”. So far as I know you could be ~30, and “back in the day” for you is the 2005 era.

    For some of us “back in the day” is more like the early 90’s (and even earlier than that if we want to include other online services, like BBS’s) — and the difference since Eternal September is pretty stark (in both good and bad ways).


  • There are a lot of manufacturer-agnostic smart home devices out there, and with just a tiny bit of research online it’s not difficult to avoid anything that is overly tied to a cloud service. Z-wave, ZigBee, and Thread/Matter devices are all locally controlled and don’t require a specific company’s app or environment — it’s really only the cheapest, bottom-of-the-barrel WiFi-based devices relying on cloud services that you have to be careful of. As with anything, you get what you pay for.

    Even if the Internet were destroyed tomorrow, my smart door locks would continue to function — not only are they Z-wave based (so local control using a documented protocol which has Open Source drivers available), but they work even if not “connected”. I can even add new door codes via the touchscreen interface if I want to.

    The garage door scenario can be a bit more tricky, as there aren’t a lot of good “open” options out there. However, AFAIK all of them continue to work as traditional garage door openers if the online service becomes unavailable. I have a smart Liftmaster garage door opener (which came with the house when we bought it), and while its manufacturer has done some shenanigans with regard to their API to force everyone to use their app (which doesn’t integrate with anything), it still works as a traditional non-smart garage door opener. The button in the garage still works, as does the remote on the outside of the garage, the remotes it came with, and the Homelink integration in both of our vehicles.

    With my IONIQ 5, the online features, while nice, are mostly just a bonus. The car still drives without them, and the climate control still works without being online — most of what I lose are “nice-to-have” features like remote door lock/unlock, live weather forecasts, calendar integration, and remote climate control. But it isn’t as if the car stops being drivable if the online service goes down. Besides, so long as CarPlay and Android Auto are supported, I can always rely on them instead for many of the same functions.

    Some cars have much more integration than mine — and the loss of those services may be more annoying.


  • …until the CrowdStrike agent updated, and you wind up dead in the water again.

    The whole point of CrowdStrike is to be able to detect and prevent security vulnerabilities, including zero-days. As such, they can release updates multiple times per day. Rebooting into a known-safe state is great, but unless you follow that up by preventing the agent from redownloading the sensor configuration update, you’re just going to wind up in a BSOD loop.

    A better architectural solution would have been to have Windows drivers run in Ring 1, giving the kernel the ability to isolate those that are misbehaving. But that risks a small decrease in performance, and Microsoft didn’t want that, so we’re stuck with a Ring 0/Ring 3-only architecture in Windows that can cause issues like this.


  • “Along came Creative Labs with their AWE32, a synthesizer card that used wavetable synthesis instead of FM.”

    Creative Labs did wavetable synthesis well before the AWE32 — they released the Wave Blaster daughter board for the Sound Blaster 16, two full years before the AWE32 was released.

    (FWIW, I’m not familiar with any motherboards that had built-in FM synthesis in the mid 90’s. By that time, computers were getting fast enough to do software-driven wavetable synthesis, so motherboards just came with a DAC.)

    Where the Sound Blaster really shined was that the early models were effectively three cards in one — an Adlib card, a CMS card, and a DAC/ADC card (with models a year or two later also acting as CD-ROM interface cards). Everyone forgets about CMS because Adlib was more popular at the time, but it was capable of stereo FM synthesis, whereas the Adlib was only ever mono.

    (As publisher of The Sound Blaster Digest way back then, I had all of these cards and more. For a few years, Creative sent me virtually everything they made for review. AMA).


  • I certainly wouldn’t run to HR right away — but unfortunately, it’s sometimes true that people just aren’t a good fit, for whatever reason. Deadweight that isn’t able to accomplish the tasks that need to be done doesn’t do you any favours — if you’re doing your job and their jobs because they just can’t handle the tasks, that’s hardly fair to you, and it isn’t doing the organization any good. Eventually you’ll burn out, nobody will pick up the slack, and everyone will suffer for it.

    My first instinct in your situation, however, would be that everyone has got used to the status quo, including the staff you have to constantly mentor. Hopefully, coaching them into doing the work for themselves and keeping them accountable to tasks and completion dates will help change the dynamic.


  • I’m a tech manager with a 100% remote team of seven employees. We’re a very high performing team overall, and I give minimal hand-holding while still fostering a collaborative working environment.

    First off, you need to make outcomes clear. Assign tasks, and expect them to get done in a reasonable timeframe. But beyond that, there should be no reason to micro-manage actual working hours. If a developer needs some time during the day to run an errand and wants to catch up in the evening, fine by me. I don’t need them to be glued to their desk 9-5/10-6 or for some set part of the day — so long as the tasks are getting done in reasonable time, I let my employees structure their working hours as they see fit.

    Three times a week (MWF) we have regular whole-team check-ins, where everyone can give a status update on their tasks. This helps maintain accountability.

    Once a month I reserve an hour with each employee for a general sync-up. I let the employee guide how this time is used — whether they want to talk about issues with outstanding tasks, problems they’re encountering, their personal lives, or just “shoot the shit”. I generally keep these meetings light and employee-directed, and it gives me a chance both to stay connected with them on a social level and to understand what challenges they might be facing.

    And that’s it. I’ve actually gone as far as having certain employees who were being threatened with back-to-office mandates reclassified as “remote employee” in the HR database so whoever was doing the threatening would have to lay off — only 2 of my 7 employees are even in the same general area of the globe (they’re spread across 3 different countries at the moment), and I don’t live somewhere with an office, so having some employees forced to report to an office doesn’t help me in the slightest (I can’t be in 6 places at once, and I live far enough away that I can’t be in any of those places on a regular basis!).

    Your employees may have got used to you micro-managing them. Changing this won’t happen overnight. Change from a micro-manager into a coach, and set them free. And if they fail…then it’s time to talk to HR and to see about making some changes. HTH!




  • Does this pump also dispense marked fuels through the same hose?

    In my province of residence, gas stations near farming communities often sell “marked fuel” (fuel with a red dye added to it) that is taxed less and is intended only for farming machinery, road work equipment, boats, and other non-highway uses. If you’re caught using red-dyed fuel for any other purpose you can be charged with an offence and levied fines or other penalties.

    If you dispense a small amount of regular gasoline after another purchaser has bought marked gasoline, the dye from the marked fuel remaining in the lines likely isn’t diluted enough to escape detection — and you could (hypothetically) then be charged with possessing marked fuel without the proper paperwork.

    (Everywhere I’ve ever seen marked fuels sold has a separate hose for dispensing the marked fuel, to prevent this from happening — but I don’t know your gas station or where you live, so maybe they rely on dilution rather than separation to differentiate?)




  • To put things into context, IBM didn’t get ripped off in any way (at least not from DOS; the whole IBM/Microsoft OS/2 debacle is a different story). The earliest PCs (IBM PC, IBM PC XT, IBM PC Jr., and associated clones) didn’t really have the hardware capabilities needed to permit a more advanced operating system. There was no flat memory model, no protection rings, and no Translation Look-aside Buffer (TLB). The low maximum unpaged memory addressing limit (1MB; see the address arithmetic sketch at the end of this comment) made it difficult to run more than one process at a time, and really limited how much OS you could have active on the machine (modern Windows, by way of example, reserves 1GB of virtual RAM per process just for kernel memory mapping).

    These things did exist on mainframe and mini computers of the day — so the ideas and techniques weren’t unknown — but the cheaper IBM PCs had so many limitations that those techniques were mostly detrimental (there were some pre-emptive OSs for 8086/8088 based PCs, but they had a lot of limitations, particularly around memory management and protection), if not outright impossible. Hence the popularity of DOS in its day — it was simple, cheap, didn’t require a lot of resources, and mostly stayed out of the way of application development. It worked reasonably well given the limitations of the platforms it ran on, and the expectations of users.

    So IBM did just fine from that deal — it was when they went in with Microsoft to replace DOS with a new OS that did feature pre-emptive multitasking, memory protection, and other modern techniques that they got royally screwed over by Microsoft (viz. the history of OS/2 development).
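
    To make that 1MB limit concrete, here is a quick sketch of the real-mode segment:offset arithmetic the 8086/8088 used (just illustrative C for this comment, not anything from IBM or Microsoft):

        #include <stdio.h>

        /* Real-mode 8086 addressing: physical = segment * 16 + offset.
           Both registers are 16 bits wide, so the highest reachable address
           is FFFF:FFFF = 0x10FFEF, i.e. barely past the 1 MB mark. */
        int main(void)
        {
            unsigned long segment  = 0xFFFF;
            unsigned long offset   = 0xFFFF;
            unsigned long physical = segment * 16 + offset;

            printf("FFFF:FFFF -> physical 0x%lX (just past 1 MB)\n", physical);
            return 0;
        }

    With no MMU behind that arithmetic, every running program sees (and can scribble over) the same physical megabyte, which is why pre-emptive multitasking on those machines was so impractical.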


  • As someone who has done some OS dev, it’s not likely to be of much help. DOS had hardly any of the defining features of a modern OS — it barely had a kernel, there was no multitasking, no memory management, no memory protection, no networking, and everything ran at the same privilege level. What little API there was came purely through a handful of software interrupts (see the sketch at the end of this comment) — otherwise, it was up to your code to communicate with nearly all the hardware directly (or to communicate with whatever bespoke device driver your hardware required).

    This is great for anyone that wants to provide old-school DOS compatibility, and could be useful in the far future to aid in “digital archaeology” (i.e.: being able to run old 80’s and early 90’s software for research and archival purposes on “real DOS”) — but that’s about it. DOS wasn’t even all that modern for its time — we have much better tools to use and learn from for designing OS’s today.

    As a sort of historical perspective this is useful, but not likely for anything else.
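
    As a concrete example of that “handful of software interrupts”, here is roughly what calling the DOS API looked like from a 16-bit compiler such as Turbo C or Microsoft C (a hedged sketch; helper names and headers varied slightly by compiler):

        #include <dos.h>

        /* Write a string one character at a time via INT 21h, function 02h
           ("write character to standard output"). The compiler's intdos()
           helper loads the CPU registers and issues the software interrupt. */
        int main(void)
        {
            union REGS regs;
            const char *msg = "Hello from DOS\r\n";

            while (*msg) {
                regs.h.ah = 0x02;       /* DOS function number goes in AH */
                regs.h.dl = *msg++;     /* character to print goes in DL  */
                intdos(&regs, &regs);   /* fire INT 21h                   */
            }
            return 0;
        }

    That register-and-interrupt convention was essentially the whole “API”; anything much beyond files, console I/O, and memory allocation meant poking the hardware yourself, which is exactly why DOS is of limited value as a modern OS-design reference.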