
Globally, almost 350 million people are blind or have some other form of visual impairment, and they need to use the internet and mobile apps like everyone else. However, this is only possible if websites and mobile apps are built with accessibility in mind from the start – and not after the fact.
The problem
Consider these two sample buttons you might find on a webpage or mobile app. Each sits on a plain background, so at first glance they appear similar.
In fact, they are worlds apart when it comes to accessibility.
It’s a matter of contrast. The text on the light blue button has low contrast, so the word “Hello” may be completely invisible to someone with a visual impairment such as color blindness or Stargardt’s disease. It turns out that there is a standard mathematical formula that defines the correct relationship between the color of the text and its background. Good designers know about this and use online calculators to work out these ratios for each element in a design.
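For readers who want to see the formula concretely, here is a minimal Python sketch of the contrast-ratio calculation as defined in WCAG 2.x (the button colors at the bottom are made up for illustration; WCAG recommends a ratio of at least 4.5:1 for normal-size text):

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color (0-255 per channel), per WCAG 2.x."""
    def linearize(channel):
        c = channel / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """Contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter = max(relative_luminance(foreground), relative_luminance(background))
    darker = min(relative_luminance(foreground), relative_luminance(background))
    return (lighter + 0.05) / (darker + 0.05)

# White text on a light blue button vs. on a dark blue button (illustrative colors).
print(round(contrast_ratio((255, 255, 255), (135, 206, 250)), 2))  # low contrast
print(round(contrast_ratio((255, 255, 255), (0, 51, 153)), 2))     # comfortably high
```

This is essentially what the online contrast calculators compute for you.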
So far, so good. But when it comes to text on a complex background like an image or a gradient, things get complicated, and helpful tools are rare. Previously, accessibility testers had to check these cases manually by sampling the background behind the text at specific points and calculating the contrast ratio for each sample. Aside from being tedious, this approach is inherently subjective: different testers may sample different points within the same area and arrive at different measurements. These cumbersome, subjective measurements have held back digital accessibility efforts for years.
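To see why the manual approach is so fragile, here is a rough sketch of it in Python (the image file, text color, and sample coordinates are hypothetical; the contrast math is the same WCAG formula as above):

```python
from PIL import Image  # assumes Pillow is installed

def luminance(rgb):
    """WCAG relative luminance of an sRGB color (0-255 per channel)."""
    lin = [(c / 255) / 12.92 if c / 255 <= 0.03928 else (((c / 255) + 0.055) / 1.055) ** 2.4
           for c in rgb]
    return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]

def contrast(fg, bg):
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Hypothetical inputs: a screenshot of the button and the text color drawn on it.
image = Image.open("button.png").convert("RGB")
text_color = (255, 255, 255)

# A tester picks a handful of background points behind the text by eye.
sample_points = [(12, 20), (40, 22), (70, 18), (95, 25)]
ratios = [contrast(text_color, image.getpixel(p)) for p in sample_points]

# A different tester would pick different points and report a different ratio.
print([round(r, 2) for r in ratios], "worst case:", round(min(ratios), 2))
```

Pick a different handful of points and you get a different worst-case ratio, which is exactly the subjectivity problem.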
Accessibility: AI to the rescue
It turns out that artificial intelligence algorithms can be trained to solve such problems and even improve automatically when exposed to more data.
For example, AI can be trained to summarize text, which helps users with cognitive impairments; to recognize images and faces, which helps people with visual impairments; or to generate real-time subtitles, which help people with hearing impairments. Apple’s VoiceOver integration on the iPhone, which is primarily used to read email or text messages aloud, also uses AI to describe app icons and report battery levels.
Guiding principles for accessibility
Smart companies are rushing to comply with the Americans with Disabilities Act (ADA) and give everyone equal access to technology. In our experience, the right technology tools can make this much easier, even for today’s modern websites with their thousands of components. For example, machine learning can scan and analyze a website’s design and then improve its accessibility through face and voice recognition, keyboard navigation, converting descriptions to audio, and even dynamically readjusting image elements.
In our work, we have identified three guiding principles that I believe are critical to digital accessibility. I’ll illustrate them here by describing how our team, led by data science team lead Asya Frumkin, solved the problem of text against complex backgrounds.

Break the big problem down into smaller problems
If we look at the text in the image below, we can see that there is some sort of readability issue, but it’s hard to quantify it overall just by looking at the whole sentence. On the other hand, if our algorithm examines each of the letters in the phrase individually – for example, the “e” on the left and the “o” on the right – we can more easily determine whether each of them is legible or not.
If our algorithm continues through all the characters in the text this way, we can count the number of readable characters and the total number of characters. In our case, four of the eight characters are readable. The resulting fraction, with the number of readable characters as the numerator and the total as the denominator, gives us a readability ratio for the entire text. We can then apply an agreed preset threshold, e.g., 0.6, below which the text is considered unreadable. The point is that we got there by running operations on each individual piece of the text and then counting up from there.
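As a minimal sketch of that counting step (the is_legible classifier and the eight stand-in results below are hypothetical; the real per-character model is what the rest of this article builds toward):

```python
def text_readability(char_images, is_legible, threshold=0.6):
    """Classify each character crop, then judge the whole text by the ratio of
    legible characters to total characters against a preset threshold."""
    results = [is_legible(char) for char in char_images]
    ratio = sum(results) / len(results)
    return ratio, ratio >= threshold

# Illustrative run: 4 of 8 characters judged legible -> ratio 0.5, below 0.6.
fake_results = [True, False, True, False, True, False, True, False]
ratio, readable = text_readability(fake_results, is_legible=lambda r: r)
print(ratio, readable)  # 0.5 False
```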

Reuse existing tools whenever possible
We all remember Optical Character Recognition (“OCR”) from the 1970s and 80s. These tools showed promise, but ended up being too complex for their originally intended purpose.
But one model that emerged from that line of work, CRAFT (Character Region Awareness For Text detection), has shown real promise for AI and accessibility. CRAFT maps each pixel in an image to the probability that it lies at the center of a letter. From this calculation, it can generate a heatmap in which high-probability areas are shown in red and low-probability areas in blue. From the heatmap, you can compute the bounding boxes of the individual characters and clip them out of the image. With this tool, we can extract individual characters from long text and run a binary classification model (as in #1 above) on each of them.
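Here is a simplified sketch of that last step, assuming you already have a CRAFT-style character heatmap as a NumPy array (the probability threshold and padding are illustrative, and this is not our production code):

```python
import numpy as np
from scipy import ndimage  # for connected-component labeling

def character_boxes(heatmap, prob_threshold=0.5, pad=2):
    """Turn a character-probability heatmap into per-character bounding boxes."""
    mask = heatmap >= prob_threshold       # keep pixels likely inside a letter
    labels, _ = ndimage.label(mask)        # group them into connected blobs
    boxes = []
    for blob in ndimage.find_objects(labels):
        if blob is None:
            continue
        y, x = blob
        boxes.append((max(x.start - pad, 0), max(y.start - pad, 0),
                      x.stop + pad, y.stop + pad))  # (left, top, right, bottom)
    return boxes

def crop_characters(image, heatmap, **kwargs):
    """Clip each detected character region out of the original image array."""
    return [image[top:bottom, left:right]
            for (left, top, right, bottom) in character_boxes(heatmap, **kwargs)]

# Each crop can then be fed to the binary legibility classifier from step #1.
```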

Find the right balance in the data set
In theory, the model classifies individual characters in a simple binary way: legible or not. In practice, there will always be challenging real-life examples that are difficult to categorize. What complicates matters further is that everyone, visually impaired or not, has a different perception of what is legible.
Here, one solution (and the one we chose) is to enrich the data set by adding objective tags to each item. For example, each image can be paired with a rendering of the same text on a solid background before analysis. This way, when the algorithm runs, it has an objective basis for comparison.
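One way to picture this enrichment (a sketch only; the font, sizes, and file names below are hypothetical) is to render each sample’s text on a solid background and store it alongside the real image, so every item in the data set carries an objective reference:

```python
from PIL import Image, ImageDraw, ImageFont

def make_reference(text, size=(320, 80), fg=(0, 0, 0), bg=(255, 255, 255)):
    """Render the sample's text on a solid background as an objective reference."""
    ref = Image.new("RGB", size, bg)
    draw = ImageDraw.Draw(ref)
    draw.text((10, 20), text, fill=fg, font=ImageFont.load_default())
    return ref

# Pair each real sample (text over a complex background) with its reference
# rendering, so labelers and the model share the same basis of comparison.
sample = {"image": "hello_over_photo.png", "text": "Hello"}
sample["reference"] = make_reference(sample["text"])
```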
For the future, for the good of the community
As the world continues to evolve, every website and mobile application must be built with accessibility in mind from the start. AI for accessibility is a technological capability, an opportunity to step off the sidelines and engage, and a chance to build a world where people’s difficulties are understood and honored. In our view, the solution to inaccessible technology is simply better technology. That way, making websites and apps accessible is part of making websites and apps work—but this time for everyone.
Navin Thadani is co-founder and CEO of Evinced.