Accessibility • Voice AI • Product Development
January 30th, 2026
Building AI for the Blind: How We Made Dreami Accessible in One Day
We didn't just check the compliance box. We sat with a blind user and fixed every annoyance in real time.
To most people, AI is just text on a screen. But for people who are blind or severely visually impaired, text on a screen is only as good as the screen reader that speaks it. That's why today we went from being merely compliant with JAWS, the leading screen reader, to making Dreami genuinely easy and non-annoying to use with it.
There's a significant difference between those two things.
The Difference Between Compliant and Usable
JAWS is notoriously difficult to optimize for. If you try to make a screen reader announce one part of the interface while staying silent on another, things break quickly. As a developer, it's easier to just let the screen reader read everything. But easier for developers isn't easier for users.
So we did something different: a one-hour live development call with an actual blind user. They would hit an issue with the screen reader, and we would fix it on the spot, in real time. The fix would surface another issue, and we would solve that too. We kept iterating until the experience was genuinely good.
When you stop and actually talk to your users, you end up building an amazing product.
Voice-to-Text in Hours, Not Months
During that call, our user mentioned they wished we had voice-to-text. A few hours later, we released voice-to-text for all modern browsers. You hit the microphone button, talk, and the message sends.
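Dreami's own code isn't shown here, but modern browsers expose speech recognition through the Web Speech API, which is one plausible way to build this. A minimal push-to-talk sketch under that assumption; `sendMessage` and the `mic-button` element are hypothetical stand-ins, not Dreami's real implementation:

```typescript
// Hypothetical stand-in for the app's real chat submission logic.
function sendMessage(text: string): void {
  console.log("sending:", text);
}

// The Web Speech API is prefixed in some browsers (e.g. webkitSpeechRecognition).
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";
recognition.interimResults = false; // only act on final transcripts

recognition.onresult = (event: any) => {
  // Take the final transcript of the latest result and send it as a message.
  const transcript = event.results[event.results.length - 1][0].transcript.trim();
  if (transcript) sendMessage(transcript);
};

// Wire the microphone button: one press captures one utterance.
document.getElementById("mic-button")?.addEventListener("click", () => {
  recognition.start();
});
```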
But then we realized something: requiring a button press every time broke our core promise. Dreami is built around natural conversation—an evolving mind you talk with, not a command line you prompt. Pressing a button after every sentence isn't a conversation.
So we added continuous conversation mode. Press the microphone once, and it stays on. Now you can have a genuine back-and-forth dialogue without touching anything.
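One way to approximate continuous mode is to keep the same recognizer running and restart it whenever the browser stops it after silence. A sketch building on the `recognition` object above, with a hypothetical toggle; again, this is an assumption about the approach, not Dreami's actual code:

```typescript
// Continuous conversation mode: press the mic once and keep listening.
// Assumes the `recognition` object from the previous sketch; the toggle
// replaces the single-shot click handler shown there.
let conversationMode = false;

recognition.continuous = true; // keep emitting results instead of stopping after one utterance

recognition.onend = () => {
  // Browsers end recognition after silence or a timeout; restart it
  // as long as continuous mode is still switched on.
  if (conversationMode) recognition.start();
};

function toggleConversationMode(): void {
  conversationMode = !conversationMode;
  if (conversationMode) {
    recognition.start();
  } else {
    recognition.stop();
  }
}

document.getElementById("mic-button")?.addEventListener("click", toggleConversationMode);
```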
One Feature, Three Use Cases
This single feature solves multiple problems at once:
- For blind users: A natural conversational flow with an AI. Your screen reader announces "Dreami is typing" when your message sends, then reads the response when it arrives (one way to wire that announcement is sketched after this list).
- For drivers: A completely hands-free AI assistant. No touching your phone, no distraction—just talk.
- For anyone who prefers voice: An alternative to typing that doesn't hide what's happening.
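Status updates like "Dreami is typing" are the kind of thing screen readers typically pick up from an ARIA live region. A sketch under that assumption; the element ID and wording are illustrative, not Dreami's real markup:

```typescript
// Announcing chat status to screen readers via an ARIA live region.
// Assumed markup somewhere in the page:
//   <div id="chat-status" aria-live="polite" class="visually-hidden"></div>
const statusRegion = document.getElementById("chat-status")!;

// JAWS and other screen readers announce changes to a polite live region
// without interrupting whatever the user is currently reading.
function announceStatus(message: string): void {
  statusRegion.textContent = message;
}

// When the user's message is sent:
announceStatus("Dreami is typing");

// Once the reply has been rendered into the chat log, clear the status
// so it isn't re-announced on the next update:
announceStatus("");
```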
Why We Show the Text
OpenAI's voice mode doesn't let you see the text on screen during voice chat. We think that's a problem for people who want to visually confirm what was said, review conversations later, or use both voice and text together.
Our approach shows everything. Sighted users can see what Dreami heard them say. Blind users get the natural conversational flow. Drivers can chat without looking at their device, then review the transcript later. Everyone gets what they need without sacrificing what others need.
About Response Time
If you notice Dreami takes fifteen to thirty seconds to reply, that's intentional. It's slower because it synthesizes your entire conversation rather than just reacting to your last message. You're not prompting an AI. You're having a conversation with one.
That tradeoff is worth it. An AI that actually processes context gives you a depth of understanding that the instant but shallow responses of prompt-and-response systems can't match.
What This Means for Accessibility
Accessibility isn't a checkbox. It's not something you achieve by meeting WCAG guidelines and moving on. Real accessibility means sitting with the people you're building for and feeling their frustration when something doesn't work.
We're extremely proud that we took a feature request from a live user call and shipped it—working and bug-free—in a matter of hours. That's the pace you can move at when you actually listen.