
Braille GPT: How a Muslim woman is building AI to reimagine accessibility

  • Nov 17, 2025
  • 4 min read

By Layal Alameddine


Dunya Hassan. Photograph: Layal Alameddine.

From her bedroom in Western Sydney, Dunya Hassan is developing a technology that could fundamentally shift how people with deaf-blindness experience communication. 


Her prototype, ‘Braille GPT’, converts spoken language into real-time Braille, enabling people with combined hearing and vision impairments to access dynamic conversations. Braille GPT isn’t just innovation; it’s intervention in a world where most technologies remain built for the fully sighted and fully hearing.


Hassan founded Braille GPT after volunteering on Be My Eyes, an app connecting blind users with sighted helpers. While helping a man match his socks, she began to wonder: if someone were both blind and deaf, who would they talk to?



“I searched it up,” she recalls, “and all that existed were big, bulky Braille devices costing over $3,000. That defeats the purpose if it’s meant to be accessible. 


“Everyone should have the right to expressively speak to someone and have that freedom of speech,” she says. “I wanted to bridge that gap with a portable device.”


In 2022, the World Health Organization (WHO) and UNICEF reported that more than 2.5 billion people worldwide require at least one assistive product, such as a wheelchair, hearing aid, or communication support app. Yet nearly one billion people, predominantly in low- and middle-income countries, still lack access to these essential technologies.


At its core, Braille GPT uses speech-to-text processing and tactile Braille mapping to translate spoken language into raised-dot output in real time. The prototype features simple controls: a record button, a confirm button, and a small moving Braille pad.


“Whatever I say is instantly translated into text and then into Braille,” Hassan explains. “The dots move up and down as the message comes through.”
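The article does not detail Hassan’s implementation, but the text-to-Braille half of such a pipeline can be illustrated in a few lines. The sketch below is hypothetical: it assumes a speech-to-text engine has already produced a transcript, and it maps that transcript to Grade 1 (uncontracted) English Braille cells, rendered here as Unicode Braille patterns in place of a physical moving-dot pad.

```python
# Hypothetical sketch of the text-to-Braille mapping stage.
# Assumes transcription has already happened; on a real device the
# dot patterns would drive tactile pins rather than print characters.

# Each of the six dots in a Braille cell corresponds to one bit in
# the Unicode Braille Patterns block (base code point U+2800).
DOT_BITS = {1: 0x01, 2: 0x02, 3: 0x04, 4: 0x08, 5: 0x10, 6: 0x20}

# Grade 1 English Braille: letters a-j use dots 1-5;
# k-t add dot 3; u-z (except w) add dots 3 and 6.
_AJ = {
    "a": [1], "b": [1, 2], "c": [1, 4], "d": [1, 4, 5], "e": [1, 5],
    "f": [1, 2, 4], "g": [1, 2, 4, 5], "h": [1, 2, 5],
    "i": [2, 4], "j": [2, 4, 5],
}

def _build_alphabet():
    dots = dict(_AJ)
    for base, letter in zip("abcdefghij", "klmnopqrst"):
        dots[letter] = _AJ[base] + [3]
    for base, letter in zip("abcde", "uvxyz"):
        dots[letter] = _AJ[base] + [3, 6]
    dots["w"] = [2, 4, 5, 6]  # w is an exception in English Braille
    dots[" "] = []            # blank cell
    return dots

ALPHABET = _build_alphabet()

def text_to_braille(text: str) -> str:
    """Map lowercase letters and spaces to Unicode Braille cells."""
    cells = []
    for ch in text.lower():
        bits = sum(DOT_BITS[d] for d in ALPHABET.get(ch, []))
        cells.append(chr(0x2800 + bits))
    return "".join(cells)

print(text_to_braille("hello"))  # ⠓⠑⠇⠇⠕
```

A production system would also handle numbers, punctuation, capitalisation indicators, and contracted (Grade 2) Braille, and would stream cells to the pad as the transcription arrives rather than converting whole sentences at once.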


The Braille GPT prototype, developed by Dunya Hassan. Photograph: Layal Alameddine.

Hassan is an engineer of culturally and linguistically diverse (CALD) and Muslim origins, and the underrepresentation of women like her in STEM is well documented.


According to Australia's 2025 STEM Equity Monitor, 28% of STEM-qualified positions are held by people of CALD backgrounds, and fewer than 15% are held by women. Despite this, Hassan is part of a growing but often overlooked cohort of innovators reshaping the field from the ground up.


For Hassan, accessibility is not just a product feature; it is a design philosophy, an ethos she describes as unique to marginalised communities who have struggled under the low expectations of Western society.


“I don’t come from a family of entrepreneurs,” she says. “Everything has been self-taught, from dealing with lawyers to meeting accountants once a month.”


What’s particularly forward-looking about Hassan’s AI is its multilingual potential. For Australia’s multicultural population and the wider global landscape, a speech-to-Braille device that can understand multiple languages could help bridge both sensory and linguistic barriers, making accessibility truly inclusive across cultures.


A multilingual speech-to-Braille system could also help deaf-blind users participate in education or work without translation bottlenecks, connecting communities that technology has historically excluded.


Australia’s disability technology landscape is undergoing a transformation, with the federal government just last year investing in disability innovation grants under the National Disability Insurance Scheme (NDIS). These grants support technology-driven projects that promote autonomy, access, and inclusion for people with disabilities. 


The Medical Research Future Fund (MRFF) also funds projects like Hassan’s, especially those focusing on emerging assistive technologies such as speech-to-text interfaces and sensory aids.


As governments grapple with how to regulate AI, Hassan makes it clear accessibility must be part of that conversation. 


On a global scale, the UN Committee on the Rights of Persons with Disabilities, in its 2022 general comment on the right to work and employment, recognised the “new barriers or forms of discrimination” introduced by AI’s integration into the workplace.


Heba Hagrass, the United Nations Special Rapporteur on the rights of persons with disabilities, has warned that without inclusive design, new technologies risk rebuilding the same barriers they promise to dismantle. 


“When computers came in and then the Internet, people did not notice that having computers and the Internet instead of creating a haven for everybody has built lots of obstacles and unbeatable barriers for many disabilities,” she said, emphasising the need to ensure accessibility from the outset.


Her point resonates with Hassan, whose speech-to-Braille AI directly responds to this challenge. According to the UN, an estimated 1.3 billion people, about 16% of the global population, live with significant disabilities, yet nearly a billion lack access to the assistive technologies they need. 


Artificial intelligence could bridge that gap by improving communication, mobility and independence if designed ethically and inclusively (UNRIC, 2024). Projects such as Hassan’s show how co-creating technology with disabled communities, rather than for them, can help realise the UN’s vision of “an accessible future for all.”


Hassan represents a new wave of AI developers who are building systems from the margins outward. Her work pushes back against the industry’s dominant design ethos, which often overlooks disabled users. 


“AI is only as inclusive as the people building it,” she says, reflecting on how her own lived experiences have shaped the way she codes. 

It’s a leap forward for accessibility tech and, more importantly, for communities often left behind in the AI boom.


