

It’s unfortunate that it has come to this, since BCacheFS seems like a promising filesystem, but it is also wholly unsurprising: Kent Overstreet seemingly has a knack for driving away people who try to work with him.
For example, the dd problem that prompted all this noise is that uutils was enforcing the full block parameter in slow pipe writes while GNU was not.
So, now uutils matches GNU and the “bug” is gone.
No, the issue was a genuine bug:
The fullblock option is an input flag (iflag=fullblock) to ensure that dd will always read a full block’s worth of data before writing it. Its absence means that dd only performs count reads and hence might read less than blocksize x count worth of data. That is according to the documentation for every other implementation I could find, with uutils currently lacking documentation, and there is nothing to suggest that dd might not write the data that it did read without fullblock.
Until recently it was also an extension to the POSIX standard, with none of the tools that I am aware of behaving like uutils, but as of the POSIX.1-2024 standard the option is described as follows (source):
iflags=fullblock
Perform as many reads as required to reach the full input block size or end of file, rather than acting on partial reads. If this operand is in effect, then the count= operand refers to the number of full input blocks rather than reads. The behavior is unspecified if iflags=fullblock is requested alongside the sync, block, or unblock conversions.
I also cannot conceive of a situation in which you would want a program like dd to silently drop data in the middle of a stream, certainly not as the default behavior, so conditioning writes on this flag didn’t make any sense in the first place.
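To make the distinction concrete, here is a minimal sketch in Rust of the two read strategies. This is not uutils’ actual code, and the function names are mine:

```rust
use std::io::{self, Read};

/// With iflag=fullblock: keep issuing reads until the buffer is full or EOF is
/// reached, so a slow pipe that returns short reads still yields full blocks.
fn read_full_block<R: Read>(input: &mut R, buf: &mut [u8]) -> io::Result<usize> {
    let mut filled = 0;
    while filled < buf.len() {
        match input.read(&mut buf[filled..])? {
            0 => break, // EOF
            n => filled += n,
        }
    }
    Ok(filled)
}

/// Without fullblock: exactly one read per block, which may return fewer bytes
/// than requested. dd must still write whatever it did read; the only thing
/// that changes is how much data ends up in each block.
fn read_block<R: Read>(input: &mut R, buf: &mut [u8]) -> io::Result<usize> {
    input.read(buf)
}
```

In both cases every byte that was read gets written out; the flag only changes whether short reads are topped up to a full block before the write.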
Like, one of the issues that Linus yelled at Kent about was that bcachefs would fail on big endian machines. You could spend your limited time and energy setting up an emulator of the powerPC architecture, or you could buy it at pretty absurd prices — I checked ebay, and it was $2000 for 8 GB of ram…
It’s not that BCacheFS would fail on big endian machines, it’s that it would fail to even compile, and therefore impacted everyone who had it enabled in their build. And you don’t need actual big endian hardware to compile something for that arch: just now it took me a few minutes to figure out what tools to install for cross-compilation, download the latest kernel, and compile it for a big endian arch with BCacheFS enabled. Surely a more talented developer than I could easily do the same, and save everyone else the trouble of broken builds.
ETA: And as pointed out in the email thread, Overstreet had bypassed the linux-next mailing list, which would have allowed other people to test his code before it got pulled into the mainline tree. So he had multiple options that did not necessitate the purchase of expensive hardware.
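For illustration only (BCacheFS is C code and this is not it), here is a minimal Rust sketch of the class of failure being described: it builds and runs on a little endian host, but the build breaks for any big endian target, which is exactly the kind of thing a quick cross-build catches without buying hardware:

```rust
// Hypothetical example: this compiles and runs on little endian hosts, but the
// build fails for every big endian target, the kind of breakage that only a
// big endian build (native or cross-compiled) will surface.
#[cfg(target_endian = "big")]
compile_error!("this code has not been ported to big endian targets");

/// Writing fields through an explicit byte order is the usual way to keep an
/// on-disk format independent of the host's endianness.
fn encode_block_size(block_size: u32) -> [u8; 4] {
    block_size.to_le_bytes()
}

fn decode_block_size(raw: [u8; 4]) -> u32 {
    u32::from_le_bytes(raw)
}

fn main() {
    let raw = encode_block_size(4096);
    assert_eq!(decode_block_size(raw), 4096);
    println!("block size round-trips as {}", decode_block_size(raw));
}
```

On my understanding, something like cargo build --target powerpc64-unknown-linux-gnu (a big endian target) is enough to trigger the error; the kernel equivalent is the cross toolchain plus a config with BCacheFS enabled, as described above.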
One option is to drop standards. The Asahi developers were allowed to just merge code without being subjected to the scrutiny that Overstreet has been subjected to. This was in part due to having stuff in rust, and under the rust subsystem — they had a lot more control over the parts of Linux they could merge too. The other was being specific to macbooks. No point testing the mac book-specific patches on non-mac CPU’s.
It does not sound to me like standards were dropped for Asahi, nor that their use of Rust had any influence on the standards that were applied to them. It is simply as you said: What’s the point of testing code on architectures that it explicitly does not and cannot support? As long as changes that touch generic code are tested, then there is no problem, but that is probably the minority of changes introduced by the Asahi developers.
I did enjoy this comment:
C code with a test suite that is run through valgrind is more trustworthy than any Rust app written by some confused n00b who thinks that writing it in Rust was actually a competitive advantage. The C tooling for profiling and checking for memory errors is the best in the business, nothing else like it.
In other words, a small subset of C code is more trustworthy than Rust code written by “some confused n00b”, which I would argue is quite the feather in Rust’s cap.
IMO, variables being const/immutable by default is just good practice codified in the language and says nothing about Rust being “functional-first”:
Most variables are only written once, and then read one or more times, especially so when you remove the need for manually updated loop counters. Because of that, it results in less noisy/more readable code when you only need to mark the subset of variables that are going to be updated later, rather than the inverse. Moreover, when variables are immutable by default, you cannot forget to mark them appropriately, unlike when they are mutable by default.
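A small Rust illustration of that last point (nothing here is from the original discussion):

```rust
fn main() {
    // Immutable by default: most bindings are written once and only read after.
    let greeting = String::from("hello");
    let total: i32 = (1..=10).sum(); // no manual loop counter to mutate

    // Only the minority of bindings that actually change need to be marked.
    let mut attempts = 0;
    attempts += 1;

    // Forgetting the marker is a compile error, not a silent surprise:
    // greeting.push_str(", world"); // error[E0596]: cannot borrow `greeting` as mutable

    println!("{greeting}: total = {total}, attempts = {attempts}");
}
```

The default does the bookkeeping for you: the compiler, rather than the reader, keeps track of which bindings can change.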
No gods, no masters