Alt text for blind people in images, a la Mastodon.
Could this be solved with an app?
I’m not sure, but I don’t think so. It would require the server to store the alt text for the picture.
And it would also require people to actually use the feature. I still don’t know how Mastodon managed to pull this off…
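For what it’s worth, Mastodon’s REST API already does the server-side part: every media attachment carries a `description` field, which is exactly the alt text, stored with the image. A minimal Python sketch of the flow (the instance URL, token, file name, and example strings are placeholders of mine, not anything from this thread):

```python
import requests

INSTANCE = "https://mastodon.example"  # placeholder instance URL
TOKEN = "YOUR_ACCESS_TOKEN"            # placeholder OAuth token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Upload the image; `description` is the alt text, and the server
# stores it alongside the media attachment.
with open("cat.jpg", "rb") as f:
    media = requests.post(
        f"{INSTANCE}/api/v2/media",
        headers=HEADERS,
        files={"file": f},
        data={"description": "A playful tabby cat leaning against the back of a chair."},
    ).json()

# Publish a status that references the stored attachment; clients then
# render the saved description as the image's alt text.
requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers=HEADERS,
    data={"status": "Gotcha!", "media_ids[]": media["id"]},
)
```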
I do like that Mastodon reminds you to add alt text before posting an image. People think alt text is just for blind or near-blind users, but sometimes I have a hard time figuring out why a picture was posted, and the alt text clears that up. All that to say, its reminders help create the habit of adding text descriptors, which helps everyone.
Would adding ML generated descriptions of images help here? Would be trivial to add in a third party client.
Perhaps it would help a bit, I don’t know. Even if it does, it would help far less than having the sharer actually write something, telling the reader the focus of the picture.
I’ll give you a personal, albeit real, example of that. I posted this picture on Mastodon some time ago:
A machine learning model could theoretically say something like: there’s a tabby cat in the picture, one semi-abstract acrylic painting, and one figurative oil painting, both paintings resting on a white wall… except that most of those things don’t matter; what matters is what the cat is doing toward the viewer.
Contrast it with the translated version of the alt text that I provided: A playful tabby cat, leaning against the back of a chair, looking at the viewer. Her head, upper thorax, and paws are visible. One paw is holding the back of the chair; the other paw is in the air, in an “I got you!” movement toward the viewer. It’s completely different, and when I wrote this, I hoped that both blind and non-blind users could get something out of the picture that they wouldn’t without the alt text.
And it’s the same deal with other Mastodon posters, not just me. This system - where the user is expected to provide alt text - works well, IMO.
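As for the earlier question about ML-generated descriptions in a third-party client: wiring one in really is a small amount of code. A hedged sketch, assuming the `transformers` image-to-text pipeline with the BLIP captioning model (the model choice and file name are my assumptions, not something from this thread):

```python
from transformers import pipeline

# Off-the-shelf image captioning; any image-to-text model would do here.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

result = captioner("cat.jpg")  # hypothetical local image file
caption = result[0]["generated_text"]

# A client could prefill the alt-text field with this caption. Expect the
# generic "a cat sitting on a chair" kind of output, though, not the
# "I got you!" framing that only the person posting can supply.
print(caption)
```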