If the screensaver is saving the information of what a pixel has been on average, there’s all sorts of potential for leakage of sensitive information onto a part of the computer that shouldn’t have that information.
I liken it to a professional basketball player with a low free throw percentage. If they’re still on the team and in the league despite missing 3 free throws a game, they must be really good at the other stuff.
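For a sense of what that averaging looks like in practice, here's a rough sketch (Python with NumPy, not any real screensaver's code, with the screen capture faked by random frames): the running per-pixel average is itself a ghost image of whatever has been on screen, which is exactly why persisting it anywhere is a leak risk.

```python
import numpy as np

# Hypothetical sketch only: an exponential moving average of captured screen
# frames, the kind of per-pixel history a burn-in-aware screensaver might
# keep. Real screen capture is replaced with random frames here.

ALPHA = 0.01  # weight given to each new frame in the running average

def update_average(avg: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Blend the latest frame into the running per-pixel average."""
    return (1.0 - ALPHA) * avg + ALPHA * frame.astype(np.float64)

avg = np.zeros((4, 4))  # a tiny 4x4 grayscale "screen" for illustration
for _ in range(1000):
    frame = np.random.randint(0, 256, size=(4, 4))  # stand-in for a captured frame
    avg = update_average(avg, frame)

# `avg` now approximates what each pixel has shown on average; if sensitive
# content dominated the screen, a trace of it lives on in this buffer.
print(avg.round(1))
```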


JPEG Organisation?
The G in JPEG already stands for “Group.”


It’s not feasible for a mass market consumer product like Starlink.
Why not? That's a service designed to serve millions of simultaneous users from nearly 10,000 satellites. These systems have to be designed to be at least somewhat resistant to unintentional interference, which means they're usually quite resistant to intentional jamming as well.
Any modern RF protocol is going to use multiple frequencies, timing slots, and physical locations in three dimensional space.
And so the reports out of Iran are that Starlink service is degraded in places but not fully blocked. It's a cat and mouse game out there.
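Starlink's actual air interface isn't public, so purely as an illustration of the general idea behind the first two items above (time-slotted frequency hopping driven by a shared secret), here's a toy Python sketch; the channel count, slot length, and key are all made up.

```python
import hashlib

# Toy illustration of time-slotted frequency hopping; this is NOT Starlink's
# real scheme (which isn't public), just the generic technique. Both ends
# derive the same channel for each time slot from a shared key, so a jammer
# without the key has to cover every channel at once instead of chasing one.

NUM_CHANNELS = 64                    # made-up channel count
SLOT_MS = 10                         # made-up slot length, milliseconds
SHARED_KEY = b"example-session-key"  # hypothetical shared secret

def channel_for_slot(key: bytes, slot_index: int) -> int:
    """Deterministically map a time slot to a channel using the shared key."""
    digest = hashlib.sha256(key + slot_index.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_CHANNELS

# Transmitter and receiver compute the same hop sequence independently.
for slot in range(5):
    print(f"slot {slot} (t={slot * SLOT_MS} ms): channel {channel_for_slot(SHARED_KEY, slot)}")
```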


I’d think that there are practical limits to jamming. After all, jamming doesn’t make radio impossible; it just forces the transmitter and receiver to get closer together (so that the signal strength over that shorter distance is enough to overcome the jamming coming from farther away). Most receivers filter out the frequencies they’re not looking for, so any jammer needs to actually hit that receiver on that specific frequency. And many modern antenna arrays rely on beamforming techniques that are less susceptible to unintentional interference or intentional jamming coming from a direction other than the one they’re pointed at. Even less modern antennas can be heavily directional based on their physical design.
If you’re trying to jam a city block, say a 100 m radius, across any and all frequencies that radios use, that’s going to take some serious power, which in turn means cooling equipment if you want to keep it running continuously.
If you’re trying to jam an entire city, though, it just might not be practical to hit literally every frequency a satellite might be using.
I don’t know enough about the actual power and equipment requirements, but it seems like blocking satellite communications between satellites you don’t control and transceivers scattered throughout a large territory is more difficult than you’re making it sound.
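To put rough numbers on the distance point: in free space, path loss grows with 20·log10(distance), so halving the distance between transmitter and receiver recovers about 6 dB, and a jammer twice as far away loses about 6 dB. A back-of-envelope Python sketch, with an arbitrary example frequency:

```python
import math

# Back-of-envelope only, not a real jamming analysis. The frequency and
# distances below are arbitrary examples.

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB between isotropic antennas."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

FREQ = 12e9  # roughly Ku-band, chosen only as an example

for d in (100, 200, 1_000, 10_000):
    print(f"{d:>6} m: {fspl_db(d, FREQ):6.1f} dB path loss")

# Each doubling of distance costs ~6 dB. Whether a jammer wins then comes
# down to its power, how far away it is, and how much the receiving
# antenna's directionality rejects energy arriving from its direction.
```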


Everything else that you said seems to fit the general thesis that they’re making a lot more money selling to AI companies.
If those reasons were all still true but the memory companies didn’t stand to make as much money on those deals, I guarantee they wouldn’t have taken them. They only care about money, and the other reasons you list are just the mechanisms for making more of it.


What’s crazy is that they aren’t just doing this because they make more money with AI.
No, they really are making more money by selling whole wafers rather than packaging and soldering onto DIMMs. The AI companies are throwing so much money at this that it’s just much more profitable for the memory companies to sell directly to them.


In terms of usage of AI, I’m thinking “doing something a million people already know how to do” is probably on more secure footing than trying to go out and pioneer something new. When you’re in the realm of copying and maybe remixing things for which there are lots of examples and lots of documentation (presumably in the training data), I’d bet large language models stay within a normal framework.


The hot concept around the late 2000s and early 2010s was crowdsourcing: leveraging the expertise of volunteers to build consensus. Quora, Stack Overflow, Reddit, and similar sites came up in that time frame, where people would freely lend their expertise on a platform because that platform had a pretty good rule set for encouraging that kind of collaboration and consensus building.
Monetizing that goodwill didn’t just ruin the look and feel of the sites: it permanently altered people’s willingness to participate in those communities. Some, of course, don’t mind contributing. But many do choose to sit things out when they see the whole arrangement as enriching an undeserving middleman.


Most Android phones with always-on displays show a mostly black, grayscale screen. But iPhones introduced always-on with 1 Hz panels and still show a less saturated, less bright version of the color wallpaper on the lock screen.


It’s actually pretty funny to think about other AI scrapers ingesting this nonsense into the training data for future models, too, where the last line isn’t enough to get the model to discard the earlier false text.


On phones and tablets, variable refresh rates make an “always on” display feasible in terms of battery budget, where you can have something like a lock screen turned on at all times without burning through too much power.
On laptops, this might open up some possibilities of the lock screen or some kind of static or slideshow screensaver staying on longer while idle, before turning off the display.


Apple supports its devices for a lot longer than most OEMs after release (a minimum of 5 years from when a device was last available for sale from Apple, and that sales window itself might run 2 years), but the impact of dropped support is much more pronounced, as you note. Apple usually announces obsolescence 2 years after support ends, too, and stops selling parts and repair manuals, except for a few batteries that stay supported out to the 10-year mark. On the software/OS side, that usually means OS upgrades for 5-7 years, then 2 more years of security updates, for a total of 7-9 years of keeping a device reasonably up to date.
So if you’re holding onto a 5-year-old laptop, Apple’s support tends to be much better than what you’d get for a 5-year-old laptop from a Windows OEM (especially with Windows 11’s upgrade requirements failing to support some devices that were still on sale at the time of Windows 11’s release).
But if you’ve got a 10-year-old Apple laptop, it’s harder to use normally than a 10-year-old Windows laptop.
Also, don’t use the Mac App Store for software on your laptop. Use a reasonable package manager like Homebrew that doesn’t have the problems you describe. Or go find a mirror that hosts old macOS packages and install them yourself.


Most Costco-specific products, sold under their Kirkland brand, are pretty good. They’re always a good value, and sometimes they’re among the best in class even setting cost aside.
I think Apple’s products improved when they started designing their own silicon chips for phones, then tablets, then laptops and desktops. I have beef with their operating systems but there’s no question that they’re better able to squeeze battery life out of their hardware because of that tight control.
In the restaurant world, there are plenty of examples of a restaurant having a better product because they make something in house: sauces, breads, butchery, pickling, desserts, etc. There are counterexamples, too, but sometimes that kind of vertical integration can result in a better end product.


Yeah, getting too close turns into an uncanny valley of sorts, where people expect all the edge cases to work the same. Making it familiar, while staying within its own design language and paradigms, strikes the right balance.


Even the human eye basically follows the same principle. We have three types of cones, each sensitive to a different range of wavelengths, and our visual cortex combines each cone cell’s one-dimensional input (the intensity of the light hitting that cell within its sensitivity range), from both eyes, plus the information from the color-blind rods, into a single seamless image.
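Loosely sketched in Python below: the bell-curve “sensitivity curves” are crude made-up stand-ins rather than real L/M/S cone data, but they show the key idea that each cone type collapses an entire spectrum into a single number, and color only exists in the comparison between those three numbers.

```python
import numpy as np

# Crude illustration of trichromacy. The Gaussian "sensitivity curves" are
# made-up stand-ins for real L/M/S cone responses, not measured data.

wavelengths = np.arange(400, 701, 5)  # nm, roughly the visible range

def sensitivity(peak_nm: float, width_nm: float) -> np.ndarray:
    """Toy bell-shaped sensitivity curve centered on peak_nm."""
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

cones = {
    "L (long)":   sensitivity(565, 45),
    "M (medium)": sensitivity(535, 40),
    "S (short)":  sensitivity(445, 25),
}

# A made-up incoming spectrum: broad light plus a yellow-ish spike.
spectrum = 0.6 + 0.4 * np.exp(-0.5 * ((wavelengths - 580) / 15) ** 2)

for name, curve in cones.items():
    response = float(np.sum(curve * spectrum))  # one scalar per cone type
    print(f"{name:>10}: {response:7.2f}")

# Three scalars per retinal location (plus the rods' single luminance
# channel) are all that leave the eye; the visual cortex does the rest.
```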


This write-up is really, really good. I think about these concepts whenever people dismiss astrophotography or other computation-heavy photography as fake, software-generated images, when in reality translating sensor data into a graphical representation for the human eye (with all the quirks of human vision, especially around brightness and color) requires conscious decisions about how the charges or voltages on a sensor should become pixels in a digital file.
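As a bare-bones sketch of the kind of decisions involved (black level, white balance, tone curve), with made-up numbers and with demosaicing, noise reduction, the color matrix, and any astro stacking all omitted:

```python
import numpy as np

# Minimal sketch of "sensor charge -> displayable pixel", with made-up
# numbers and most real steps omitted. Every step here is a judgment call,
# which is the point: there is no single "true" rendering of the raw data.

BLACK_LEVEL = 512                      # assumed sensor offset with no light
WHITE_LEVEL = 16383                    # assumed 14-bit full scale
WB_GAINS = np.array([2.0, 1.0, 1.6])   # chosen white balance gains for R, G, B

def raw_to_pixel(raw_rgb: np.ndarray) -> np.ndarray:
    """Turn raw per-channel counts into an 8-bit sRGB-ish pixel."""
    x = (raw_rgb.astype(np.float64) - BLACK_LEVEL) / (WHITE_LEVEL - BLACK_LEVEL)
    x = np.clip(x * WB_GAINS, 0.0, 1.0)        # white balance (a choice)
    x = np.where(x <= 0.0031308,               # sRGB-style gamma (a choice)
                 12.92 * x,
                 1.055 * x ** (1 / 2.4) - 0.055)
    return np.round(x * 255).astype(np.uint8)

print(raw_to_pixel(np.array([3000, 5200, 2400])))  # arbitrary raw counts
```

Every constant in there is a choice someone made; change the white balance gains or the tone curve and you get a different, but equally “real,” picture from the same sensor data.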


Do MSI and ASUS have enough corporate/enterprise sales to offset the loss of consumer demand? With the RAM companies, the consumer crunch is caused by AI companies bidding up the price of raw memory silicon well beyond what makes financial sense to package and solder onto DIMMs (or to solder the packages directly onto boards for ultra-thin laptops).


Cutting edge chip making is several different processes all stacked together. The nations that are roughly aligned with the western capitalist order have split up responsibilities across many, many different parts of this, among many different companies with global presence.
The fabrication itself needs to tie together several different processes controlled by different companies. TSMC in Taiwan is the current dominant fab company, but it’s not like there isn’t a wave of companies closely behind them (Intel in the US, Samsung in South Korea).
There’s the chip design itself. Nvidia, Intel, AMD, Apple, Qualcomm, Samsung, and a bunch of other ARM licensees are designing chips, sometimes with the help of ARM itself. Many of these leaders are still American companies developing the design in American offices. ARM is British. Samsung is South Korean.
Then there’s the actual equipment used in the fabs. The Dutch company ASML is the most famous, as they have a huge lead on the competition in manufacturing photolithography machines (although old Japanese competitors like Nikon and Canon want to get back in the game). But there are a lot of other companies specializing in specific equipment found in those fabs. The Japanese company Tokyo Electron and the American companies Applied Materials and Lam Research are in almost every fab in the West.
Once the silicon is fabricated, the actual packaging of that silicon into the little black packages to be soldered onto boards is a bunch of other steps with different companies specializing in different processes relevant to that.
Plus, advanced logic chips aren’t the only type of chip out there. There are analog and signal-processing chips, power chips, and other useful sensor chips for embedded applications, where companies like Texas Instruments dominate on less cutting-edge nodes, and memory/storage chips, where the market is dominated by three companies: the South Korean Samsung and SK Hynix, and the American Micron.
TSMC is only one of several players, standing on top of a tightly integrated ecosystem that it depends on. And it isn’t limited to Taiwan, either: it owns fabs that are starting production in the US, Japan, and Germany.
China is working on trying to replace literally every part of the chain with domestic manufacturing. Some parts are easier to replace than others, but trying to insource the whole thing is going to be expensive, inefficient, and risky. Time will tell whether those costs and risks are worth it, but there’s by no means a guarantee that they can succeed.


I would argue that desktop software capable of doing this (storing and using past pixel values to calculate some sort of output) violates the principle of least privilege, so an OS that makes this kind of screensaver possible shouldn’t be used for sensitive data, even if that particular screensaver is disabled.
Better to harden the OS so that programs (including screensavers) can’t access and store the continuous screen output.
That’s one of the problems we have with Windows Recall. We don’t even want the OS to have the capability, because we don’t want that data being copied and processed somewhere on the machine.