Given these positive signals
Those idiots waited for 4 years because they followed the hype of the moment. I’m glad I removed Google from my life.
This must be your first time seeing what Google support looks like
This is pretty standard unless you can get an exec’s personal attention.
something tells me they wanted their own formats to catch on.
I just want a picture of a got-dang hot dog.
Everyone should just be using AV1 at this point. https://en.wikipedia.org/wiki/AV1
I assume you mean AVIF? Because AV1 is not an image (file) format but a video compression format (that needs to be wrapped in container file formats to be storable).
“AVIF is an image file format that uses AV1 compression algorithms.” yes i mean that
I just use old JPEGs. Not JPEG2000, not PNG, not WebP, not JPEG XL.
Feel free to use floppy disks. Btw if you are online, you use WebP and PNG all the time 🤣
Sir, don’t you dare encroach on those Lynx and W3M users. They don’t need no stinking images!
Lynx is the best browser.
I prefer offpunk.
Not if they use wget to only download the HTML!
If you are using Firefox:
- Enter the following in the address bar: about:config
- Search for: image.webp.enabled
- Set it to false

Websites will deliver JPG/PNG instead of WebP again.
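The reason the sites switch back: with image.webp.enabled set to false, Firefox stops advertising `image/webp` in its `Accept` header, and servers that content-negotiate fall back to JPEG/PNG. A minimal sketch of that server-side logic (the function name is hypothetical):

```python
def pick_image_type(accept_header: str) -> str:
    """Choose an image MIME type based on the client's Accept header.

    Content-negotiating servers check whether the browser advertises
    WebP support; a browser with WebP disabled omits "image/webp".
    """
    if "image/webp" in accept_header:
        return "image/webp"
    return "image/jpeg"  # fall back to a universally supported format

# A typical image Accept header with WebP enabled:
print(pick_image_type("image/avif,image/webp,*/*"))  # image/webp
# With image.webp.enabled=false, the header no longer lists it:
print(pick_image_type("image/png,image/svg+xml,*/*;q=0.5"))  # image/jpeg
```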
Maybe this should come with a warning. The purpose of WebP is to serve images to the user quickly without grabbing the entire image data. Without WebP, all images will be fully loaded, and under the right conditions a page could load really slowly.
I love webp, but your explanation is a bit confused. WebP is typically lossy, just like JPEG; it’s only more efficiently compressed, meaning a smaller size for the same image quality. So there’s no ‘entire image data’, there are only different approximations of the original image and different compressed files. Full-blown lossless images in PNG or other formats take several times more data.
Disabling webp in favor of jpeg would use like 20-40% more data, in comparison. Which still sucks, but not as much.
Edit: maybe more than 40%, actually. Iirc I’ve seen webps that were half the size of jpegs. It’s a good format, shame it’s adopted rather poorly.
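The “20–40% more” and “half the size” figures are easy to mix up: if a WebP is half the size of the equivalent JPEG, switching back to JPEG costs 100% more data, not 40%. A quick sanity check (the file size is made up for illustration):

```python
webp_kb = 50.0         # hypothetical WebP file size
jpeg_kb = webp_kb * 2  # "webps that were half the size of jpegs"

# Relative overhead of serving the JPEG instead of the WebP.
overhead = (jpeg_kb - webp_kb) / webp_kb
print(f"JPEG uses {overhead:.0%} more data than WebP")  # 100% more

# Conversely, a 20-40% JPEG overhead corresponds to WebP being
# roughly 71-83% of the JPEG's size:
for pct in (0.20, 0.40):
    print(f"{pct:.0%} overhead -> WebP is {1 / (1 + pct):.0%} of the JPEG")
```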
I wasn’t going to get into the whole lossiness of the formats, so I simplified it to ‘full image’ versus ‘compressed format’. It’s interesting that it only saves 20–40%. I was under the impression that the page only rendered the image at the size necessary to fit the layout, not the full-resolution image. Forcing it to less-lossy or lossless would mean the larger image would always be available to render without an additional web request.
That’s a rather interesting consideration as to whether rendering at smaller sizes skips decoding parts of the image.
First, the presented file is normally always loaded in full, because that’s how file transfer works over the web. Until lately, there were no different sizes available, and that only became widely-ish spread because of Apple’s ‘Retina’ displays with different dots-per-inch resolution, mostly hidpi being two times the linear size of the standard dpi. Some sites, like Wikipedia, also support resizing images on the fly to some target dimensions, which results in a new image of the JPEG or other format. In any case, to my somewhat experienced knowledge, JPEG itself doesn’t support sending every second row or anything like that, so you always get a file of a predetermined size.
First-and-a-half, various web apps can implement their own methods for loading lower- or higher-res images, which they prepare in advance. E.g. a local analogue to Facebook almost certainly loads various prepared-in-advance low-res images for viewing in the apps or on the site, but has the full-res images available on request, via a menu.
Second, I would imagine that JPEG decoding always results in the image of the original size, which is then dynamically resized to the viewport of the target display — particularly since many apps allow zooming in or out of the image on the fly. Specifically, I think decoding the JPEG image creates a native lossless image similar to BMP or somesuch (essentially just a 2D array of pixel colors), which is then fed to the OS’s rendering capabilities, taking quite a chunk of memory. Of course, by now this is all accelerated by the hardware a whole lot, with the common algorithms being prepared to render raw pixels, JPEG, and a whole bunch of other formats.
It would be quite interesting if file decoding itself could just skip some part of the rows or columns, but I don’t think that’s quite how the compression works in current formats (at least in lossy ones, which depend on the previous data to encode later data). Although afaik JPEG encodes the image in blocks of 8x8 pixels (up to 16x16 with chroma subsampling), so it could be that whole chunks could be skipped altogether.
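Something close to the “skip whole chunks” idea does exist for JPEG: libjpeg can decode at 1/2, 1/4, or 1/8 scale directly in the DCT domain, skipping most of the inverse-transform work instead of decoding full-size and downscaling. Pillow exposes this via `Image.draft()`; a sketch, assuming Pillow is installed:

```python
import io

from PIL import Image

# Make a throwaway 256x256 JPEG in memory to decode.
buf = io.BytesIO()
Image.new("RGB", (256, 256), "orange").save(buf, format="JPEG", quality=85)
buf.seek(0)

img = Image.open(buf)
# Ask the decoder for roughly a 64x64 result; libjpeg picks the nearest
# power-of-two reduction (1/1, 1/2, 1/4, 1/8) in the DCT domain.
img.draft("RGB", (64, 64))
img.load()
print(img.size)  # (64, 64): decoded small, not decoded full then resized
```

The result is approximate (only power-of-two scales are available), which is why draft mode is mostly used for fast thumbnailing rather than general viewport fitting.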
Your name is amazing
No, I have WebP blocked in my about:config. And I use Pale Moon, which actually blocks the things unlike modern FF. And I don’t load PNG either.
Do you also hit yourself in the nuts every morning to show the world how tough you are?
lmao
Oh yeah? Well I named my firstborn child JPEG!
I would be more excited about JPEG XL if it was backward compatible. Not looking forward to yet another image standard that requires OS and hardware upgrades simply so servers can save a few bytes.
How would a new format be backwards-compatible? At least JPEG-XL can losslessly compress standard jpg for a bit of space savings, and servers can choose to deliver the decompressed jpg to clients that don’t support JPEG-XL.
Also from Wikipedia:
Computationally efficient encoding and decoding without requiring specialized hardware: JPEG XL is about as fast to encode and decode as old JPEG using libjpeg-turbo
Being a JPEG superset, JXL provides efficient lossless recompression options for images in the traditional/legacy JPEG format that can represent JPEG data in a more space-efficient way (~20% size reduction due to the better entropy coder) and can easily be reversed, e.g. on the fly. Wrapped inside a JPEG XL file/stream, it can be combined with additional elements, e.g. an alpha channel.
All you have to do is add a small traditional JPEG image at the start of the file. It doesn’t have to be high resolution or more than a couple of kb. The new format decoder would know this, and skip the traditional jpeg “header”, rendering the newer file format embedded in the image.
It would completely defeat the purpose of making a new, smaller file format if we prefix it with the old format.
That would have been a brilliant move with wav vs MP3
If you’re really saving 20% in file size with XL, adding back a very compressed preview image that takes up one or two percent isn’t going to cost you much.
It requires neither of those upgrades, though? Unless you’re still using Windows XP for some reason, I guess. It’s just an update to the image decoder.
What does backward compatibility in an image format even mean? Being able to open it in the Windows image viewer?
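In practice it comes down to which leading bytes a decoder recognizes. Every format has distinct magic bytes, which is how an old decoder knows it can’t handle a JXL file at all — and why the “JPEG header first” trick above would make legacy viewers show the embedded preview. A hypothetical sniffing sketch (signatures taken from the respective format specs):

```python
def sniff_image(data: bytes) -> str:
    """Identify an image format from its leading magic bytes."""
    if data.startswith(b"\xff\xd8\xff"):
        return "JPEG"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "PNG"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "WebP"
    # JPEG XL comes in two flavors: a bare codestream,
    # or an ISOBMFF container with a signature box.
    if data.startswith(b"\xff\x0a"):
        return "JPEG XL (codestream)"
    if data.startswith(b"\x00\x00\x00\x0cJXL \r\n\x87\n"):
        return "JPEG XL (container)"
    return "unknown"

print(sniff_image(b"\xff\xd8\xff\xe0" + b"\x00" * 16))  # JPEG
print(sniff_image(b"\xff\x0a" + b"\x00" * 16))          # JPEG XL (codestream)
```

An old viewer handed a JXL file sees bytes it doesn’t recognize and gives up, which is why “backward compatibility” for a genuinely new format mostly means shipping a new decoder, not tweaking the file.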