- a cost-efficient one-stop solution for barrier-free movement for the visually impaired -
Team: WONG Kwong Yat Felix (BEng (CE)), WONG Chi Ping Desmond (BEng (CE))
Project supervisor: Dr W.L. TAM (Department of Electrical and Electronic Engineering)
Supporting Fund: Tam Wing Fan Innovation Wing, H.K.U.E.A.A. Experiential Learning Fund
Guide dogs and white canes are currently the most common navigational aids for the visually impaired. Common, but not effective: beyond the severe shortage of guide dogs in Hong Kong, where only around 50 are in service, prior training is time-consuming and social stigma remains a prevalent concern. The white cane is instrumental in detecting immediate hazards, but its radius of safety is limited to its length, and the true nature of detected obstructions remains unknown to the user, leaving constant uncertainty and unresolved safety risks.
We live in an urban setting overwhelmingly designed for sighted people, making traversing the environment one of the most pressing concerns for the visually impaired. We believe that visual impairments should not restrict one’s autonomy to navigate hindrance-free. That is why we created Lumino: at once a guide dog, a white cane and much more. With smart object identification, hazard detection and real-time GPS navigation backed by a novel machine-learning algorithm, coupled with intuitive audio and haptic feedback and complemented by compact hardware, Lumino empowers the visually impaired, one confident stride at a time.
Second Prize in the Collegiate Computing Contest: Mobile Application Innovation Contest
Youth Innovation Award – Best Creative Idea Finalist in the Singapore Digital Wonderland Exhibition
Best Project Award – ELEC3442 Embedded Systems @ The 1st Engineering InnoShow
We have adopted a hybrid model: data unique to each user, such as their daily routines and idiosyncrasies, is stored locally on their device, while data beneficial to all users, such as data on typical structures and objects (walls, cars, keys), is stored in the cloud. Our self-developed algorithm processes the user’s surrounding environment captured by the live feed of their smartphone camera, cross-references objects of interest against similar data held in local and cloud storage, and produces suggestions for the user’s reference, or completes tasks for them in the background.
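The lookup order implied by this hybrid model can be sketched as below. This is an illustrative sketch only: the class, method and field names are our assumptions for exposition, not Lumino's actual API, and the stores are stand-in dictionaries rather than a real on-device database or cloud service.

```python
# Hypothetical sketch of a hybrid local-first lookup. Personal data stays on
# the device; shared knowledge about typical objects lives in the cloud store.

class HybridObjectStore:
    """Resolve a detected label: user-specific local data first, then cloud."""

    def __init__(self, local_db, cloud_db):
        self.local_db = local_db  # per-user: routines, idiosyncrasies
        self.cloud_db = cloud_db  # shared: walls, cars, keys, ...

    def identify(self, label):
        # A personal match (e.g. the user's own keys) takes priority
        # over the generic cloud description of the same object class.
        if label in self.local_db:
            return self.local_db[label]
        return self.cloud_db.get(label, "unknown object")

store = HybridObjectStore(
    local_db={"keys": "your house keys, usually on the hallway shelf"},
    cloud_db={"car": "a parked car", "wall": "a wall"},
)
print(store.identify("keys"))  # personal description from local storage
print(store.identify("car"))   # generic description from the cloud store
```

Checking local storage first is what lets per-user idiosyncrasies override the generic cloud entries while keeping private data off the server.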
Lumino’s software has three main functions: object identification, hazard detection and navigation. All three deliver feedback to the user through a virtual assistant, in the same vein as well-established assistants such as Siri and Google Assistant.
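One way the three functions could share a single spoken-feedback channel is sketched below. The function names, message templates and the `speak` stand-in are assumptions for illustration; a real implementation would call the platform's text-to-speech engine instead of printing.

```python
# Illustrative sketch: routing events from the three modules through one
# virtual-assistant voice. `speak` is a placeholder for a TTS call.

def speak(message):
    print(f"[assistant] {message}")

def announce(event_type, detail):
    """Turn a module event into one consistent spoken message."""
    templates = {
        "identify": "I can see {} ahead.",
        "hazard":   "Caution: {} detected in your path.",
        "navigate": "In ten metres, {}.",
    }
    message = templates[event_type].format(detail)
    speak(message)
    return message

announce("hazard", "a low-hanging branch")
announce("navigate", "turn left")
```

Funnelling all three modules through one announcer keeps the phrasing consistent, which matters when the user relies on audio alone.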
Lumino is also seamlessly integrated with contemporary web mapping services such as Google Maps and Apple Maps to provide its users with navigational assistance. An overlay is added on top of such mapping services to insert waypoints at the turns and crossings of the suggested route, and the virtual assistant notifies the user at such junctions. Smartphone haptic feedback also keeps the user on the route, vibrating until they face the correct direction.