You 𝘵𝘩𝘪𝘯𝘬 it's 𝒸𝓊𝓉ℯ to 𝘄𝗿𝗶𝘁𝗲 your tweets and usernames 𝖙𝖍𝖎𝖘 𝖜𝖆𝖞. But have you 𝙡𝙞𝙨𝙩𝙚𝙣𝙚𝙙 to what it 𝘴𝘰𝘶𝘯𝘥𝘴 𝘭𝘪𝘬𝘦 with assistive technologies like 𝓥𝓸𝓲𝓬𝓮𝓞𝓿𝓮𝓻?
Here's the thing: the fancy-looking Unicode characters almost never form words on their own, especially not in text from the web, Gemini, or IMs, nor in STEM writing.
A rudimentary look-ahead "algorithm" can toggle transliteration on and off. Every web CMS that handles user-uploaded files already ships transliteration mappings, because they need clean paths for SEO; just take the mappings from Drupal.
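For the mathematical-alphanumeric "fonts" specifically, you may not even need a hand-built mapping table: Unicode's NFKC compatibility normalization already folds them back to plain letters. A minimal sketch in Python; the block-range check is just one assumption about which characters you want to touch, and other fancy styles (fullwidth, enclosed, etc.) would need their own ranges:

```python
import unicodedata

def detranslate(text: str) -> str:
    # Fold "fancy" math-alphanumeric letters back to ASCII.
    # NFKC already carries these compatibility mappings, so no
    # hand-built table (like Drupal's) is needed for this subset.
    out = []
    for ch in text:
        if 0x1D400 <= ord(ch) <= 0x1D7FF:  # Mathematical Alphanumeric Symbols block
            out.append(unicodedata.normalize('NFKC', ch))
        else:
            out.append(ch)  # leave ordinary text (and legitimate non-ASCII) alone
    return ''.join(out)
```

Restricting the fold to that one block is the "toggle": legitimate accented text passes through untouched, only the styled letters get transliterated.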
@walter @lena @devinprater Yeah, sadly, part of the problem is that accessibility tools do little more than the bare minimum. Yet another lovely gift of capitalism where things that don’t generate huge amounts of profit find little investment and love. It’s so sad that the most effective accessibility pitches are those that aim to convince corporations that improvements will lead to higher profits.
I would suggest that claiming they do “the bare minimum” is giving them too much credit, given how many years text-to-speech has been a usable technology
that sounds kinda like saying they don’t care if the text is in blue ink or black. They don’t have the option of telling the difference. Consider “i hate cats” vs “i HATE cats”. It’s like the difference between “cats are unpleasant” and “cats are the devil!” That’s the power of italics.
Not hearing parentheses can make a sentence nonsensical or incredibly confusing
We keep using the "screen reader voice" terminology like it's the one true voice, the one and only. This is B.S.; there should be multiple tracks in parallel, something like backing vocalists or backup singers, which can add emphasis to various parts of the text, and sometimes have them take over 1/2
@masukomi @devinprater @aral @lena
In some cases, the extra voices would add emphasis, but in other cases, they would add extra words while the main one is paused, like "opening double quotes", and you would hear this with a 3rd sound in the background, one specific to that phrase, similar to that "Windows XP error" sound we were all used to. In time, you could shorten the phrase from the 2nd voice or remove it completely. But…
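One way to picture the two-voice idea is as a pre-processor that splits text into per-voice events, with the main voice pausing while an auxiliary voice names the punctuation. This is purely a hypothetical illustration of the event stream, not any real screen-reader API; the names and the punctuation table are made up:

```python
def annotate(text):
    # Split text into (voice, utterance) events: the main voice reads
    # runs of ordinary text, a second voice interjects names for
    # punctuation the main voice would otherwise skip.
    NAMES = {'"': 'double quote', '(': 'open paren', ')': 'close paren'}
    events, buf = [], []
    for ch in text:
        if ch in NAMES:
            if buf:
                events.append(('main', ''.join(buf)))
                buf = []
            events.append(('aux', NAMES[ch]))
        else:
            buf.append(ch)
    if buf:
        events.append(('main', ''.join(buf)))
    return events
```

A synthesizer backend could then route 'main' and 'aux' events to different voices, with the optional cue sound keyed to the aux phrase.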
While trying to find an example here, I remembered a clearly staged "impromptu acoustic" performance by a band. If you hear all the voices, you'll get a feel for what's happening, but if you don't see what's in the background, you'll never get the full "picture", pun intended. 3/4
(Attached: a YouTube link to Nu-Di-Ty by Kylie Minogue.)
@masukomi @devinprater @aral @lena
While the band was playing in the foreground, the people in the background had the most interesting choreography; everybody, even the janitors, was in on the action. And as the C.W. says, it's a bit lewd, but you might wanna get somebody to explain the background action; screen readers aren't gonna help here.
Not while reading a blog post, writing code, or debugging errors; we need to improve on that. Maybe we can get some marketing budget if we rephrase "accessibility" as "human-friendly interfaces".
En fin, or backslash-backslash-zero (\\0) for all the cold ones out there.
I'm not sure why you would say that "most blind people don't seem to care." The blind/low vision people I know have explicitly stated that they think it's horrible, but they don't have a choice.
The example video does an excellent job at demonstrating how inaccessible it is. While it'd be great if the tools were better, it's really not difficult to produce more accessible content.
even worse: a lot of TTS engines just... get this right now. all of it. mappings, punctuation, everything. by default. they'd just have to be used by screenreader software.
still, using Unicode fonts is likely a bad idea, precisely because a lot of accessibility tools may suck.
@LunaDragofelis @masukomi @kescher @aral @lena @walter Basically, when a screen reader is speaking one thing, like a book, and something else comes in, like a notification, you don't really want the notification interrupting the book mid-sentence. So a queue is usually made to hold the notification text to send off to the synthesizer after a time, or once the current text has been spoken. Apple doesn't really have that. And it's one reason why VoiceOver in the Terminal sucks.
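The queueing behavior described here can be sketched in a few lines; the class and method names are made up for illustration, not taken from any actual screen-reader codebase:

```python
from collections import deque

class SpeechQueue:
    # Minimal sketch of the behavior above: while an utterance is in
    # progress, incoming notifications wait instead of interrupting
    # mid-sentence.
    def __init__(self):
        self.pending = deque()
        self.speaking = None

    def say(self, text):
        if self.speaking is None:
            self.speaking = text       # idle: send straight to the synthesizer
        else:
            self.pending.append(text)  # busy: hold until the current text finishes

    def utterance_finished(self):
        # Called when the synthesizer finishes; dequeue the next item, if any.
        self.speaking = self.pending.popleft() if self.pending else None
        return self.speaking
```

The point of the complaint is that without this layer (as in the described VoiceOver/Terminal case), every notification lands mid-sentence.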
@lena Even if these symbols were used for their intended purpose, this would be a terrible way to read them.
@lena all i care about is *emphasis* and having source code rendered in monospace (or anything that makes it look different from regular text)