• Lvxferre@lemmy.ml

Perhaps it would help a bit, I don’t know. Even if it does, it would do far less than having the sharer actually write something, telling the reader the focus of the picture.

I’ll give you a personal, albeit real, example of that. I posted this picture on Mastodon some time ago:

[Image: see the rest of the text for a description.]

A machine learning model could theoretically say something like: there’s a tabby cat in the picture, one semi-abstract acrylic painting, and one figurative oil painting, both paintings resting on a white wall… except that most of those things don’t matter; what matters is what the cat is doing towards the viewer.

Contrast that with the translated version of the alt text I provided: “A playful tabby cat, leaning against the back of a chair, looking at the viewer. Her head, upper thorax, and paws are visible. One paw is holding the back of the chair; the other paw is in the air, in an ‘I got you!’ movement towards the viewer.” It’s completely different and, when I wrote it, I hoped that both blind and non-blind users could get something out of the picture that they wouldn’t get without the alt text.

And it’s the same deal with other Mastodon posters, not just me. This system, where the user is expected to provide the alt text, works well, IMO.