by A.J. Kleber
As AI and machine learning-based technologies become ever more integrated into many aspects of our lives, the need to address their weaknesses and capability gaps grows correspondingly urgent. Today's AI often assumes a "one-size-fits-all" user, ignoring individual needs and preferences. With some systems, the repercussions can be frustrating, or even dangerous.
Most users have run into at least a minor problem with misleading or confusing instructions from a standard GPS: directions that send a cyclist down an unexpected staircase, or that describe a curved roadway as a series of turns and lead a driver off-course. An AI assistant, while technically more advanced, might talk too much and become a distraction, or confuse important words like "left" and "right." For a blind user crossing a street, that's not just annoying, it's unsafe. Although these systems are designed to "learn," their "universal" preference models may lead them to ignore feedback or provide irrelevant responses.
A human-centric approach to technology
This is the problem Professor Eshed Ohn-Bar is working to address, with the support of a prestigious Faculty Early Career Development Program (CAREER) Award from the National Science Foundation (NSF). His project, Advancing AI for Accessibility with User Feedback, aims to build AI systems that adapt to the experiences of real users, beginning with a focus on low-vision individuals. "There's a critical gap in how AI understands and supports people with different abilities," Ohn-Bar explains. "We're building systems that align better with individual needs, especially in safety-critical, real-world settings."
The project will support the first publicly available, large-scale dataset of AI interactions and user preferences from a broad set of users, including many with a range of vision challenges, up to and including blindness. This dataset will capture how users actually experience assistive AI, and how they wish it worked instead. Ohn-Bar and his team will use this data to develop a new AI model trained to respond less generically. With just a few examples of a given user's preferences, the agent will adjust not only the information it provides, but also its verbosity and timing. "It's about rethinking how we build AI in the first place, starting with the user it's supposed to support," he says.
Building access into his career
Professor Ohn-Bar already has a notable track record in accessibility, inclusive design, and assistive technologies to build on. Past research projects include a system designed to help visually impaired users navigate automated transportation (such as self-driving cars); he even co-sponsored a recent ECE Senior Design team in developing a semi-autonomous bike for visually impaired riders. He has been diligent in partnering with local advocacy groups, like the Carroll Center for the Blind, to ensure that disabled people were involved throughout the development process of each project. As a researcher, he models the personal, human-centered approach he hopes to build into his new AI systems.
Assistant Professor Eshed Ohn-Bar joined BU ECE in 2020. He is the recipient of the College of Engineering’s 2025 Early Career Research Excellence Award; past accolades include a prestigious IEEE Intelligent Transportation Systems Society Best Dissertation Award (2017), a Humboldt Research Fellowship (2018), and a number of Best Paper Awards at machine learning, accessibility, and computer vision conferences.
