
Design, development and performance analysis of cognitive assisting aid with multi sensor fused navigation for visually impaired people

Abstract

Research and innovation in wearable auxiliary devices for visually impaired and blind people play a vital role in improving their quality of life. However, in spite of promising research outcomes, existing wearable aids have several weaknesses, such as excessive weight, a limited feature set, and high cost. The main objective of this manuscript is to provide the detailed design of a novel lightweight wearable aid with a larger number of features for visually impaired and blind people. The proposed research aims to design a cognitive assistant that guides blind people while walking by sensing the environment around them. The framework includes a multi-sensor fused navigation system comprising sensor-based, vision-based, and cognitive (intelligent/smart) components. The visual features of the design include obstacle detection, uneven surface detection, slope and downward step detection, pothole detection, and hollow object detection, along with location tracking, a walking guide, image capturing, and video recording. The prototype is named the Blind's Apron after its appearance. The invention focuses on reduced size (quite handy), light weight (comfortable to wear), a larger number of detection features, and minimal user intervention (the only high-end operation is switching the device on and off). All user interactions are friendly, and the device is affordable to everyone. The results obtained in this research lead to a high-end technical intervention with ease of use. Finally, the performance of the proposed cognitive assistant is tested with a real-time user study. The feedback and corresponding results establish the effectiveness of the proposed invention: a lightweight, feature-enhanced device with easily understandable instructions.

Introduction

The development of technologies for visually impaired and blind people plays a vital role in improving their quality of life. Research and invention in this area focus mainly on wearable assistive devices for blind and visually impaired people [1]. Blind people face many problems, ranging from indoor activities such as walking freely in their own house to outdoor activities such as walking freely on the roads. Today, thanks to advances in technology, many software programs are being developed and embedded in electronic gadgets like mobile phones and computers to help blind people. Advanced systems using deep learning and artificial intelligence enable researchers to develop aids that let blind people perform almost all their activities in comfort, such as walking assistants and a low-cost, portable, easy-to-use writing pen for blind and visually impaired people [2]. Isaksson et al. developed inventions focused on the step-by-step, phase-wise decomposition of activities to support various mobility activities of blind and low-vision individuals [3]. Barontini et al. [4] proposed a user-centric approach to assist visually impaired people in indoor navigation: a travel aid system consisting of a module that computes visual information for obstacle avoidance and a wearable device for guidance in unknown indoor environments. However, adoption of these developments has met strong resistance, because most blind people are not used to handling such advanced technical gadgets. It is not surprising that the tradition followed by almost all blind people is to use a stick to guide them in their daily life. The stick is handy and helps a blind person detect the things and events around them easily. But ultimately the stick is operated by the blind person, who can only take help from it based on their own understanding of the surroundings; if the surroundings are hard to interpret, the blind person is left in difficulty.

According to global data on visual impairments, the cumulative number of blind, low-vision, and visually impaired people in India is large compared to other countries. Table 1 also reveals that blind people are concentrated in the 50-and-above age group compared to other age groups. Since people over 50 are generally not comfortable handling high-end technical gadgets, developing sophisticated gadgets with many operational steps is not a feasible way to help blind people.

Table 1 Global estimates of the numbers of blind, low-vision, and visually impaired people in different age groups, as released by the World Health Organization

This motivated the development of the proposed cognitive assistant, a gadget that is simple to use and easy to wear. The design requires no additional effort from the user, such as carrying a walking stick or holding a mobile phone: it is worn like an apron on the body of the blind person, and the rest is taken care of by the gadget itself. In view of this, the proposed gadget incorporates three major features: detection of obstacles, potholes, and objects.

Related works

To start with, Dakopoulos et al. [5] published a detailed survey of electronic travel aids for the blind and concluded that no system at the time (2010) incorporated all the desired features to a satisfactory degree. They further opined that an ideal system should offer all the features and many functionalities, and, most importantly, that visually impaired users should feel confident about its overall performance, robustness, and reliability. The categories of visual substitution systems include virtual reality tools [6], electronic travel aids, electronic orientation aids, wearable auxiliary devices [7], and position locator devices.

The performance of an assistive aid depends largely on the hardware used in it. Khanom et al. presented a comparative study of assistive devices built from Raspberry Pi, Arduino, and ultrasonic sensor hardware components, and suggested that a Raspberry Pi based system performs well [8].

Broadly, walking assistants are categorized by development method into sensor-based, computer-vision-based, and smartphone-based systems [9].

Bujacz et al. introduced a methodology for remote guidance of the blind [10]. The main idea is to assist the blind person through a remote human assistant: video collected from a camera carried by the blind person is transmitted to the remote assistant, who issues instructions. This methodology is not efficient, because instructing a blind person based only on video footage can be dangerous, and it ultimately still requires human effort to assist the blind person.

Other researchers have focused on voice-based guidance systems for blind people [11]. One such system uses ultrasonic and IR sensors along with an APR sound module that warns the user whenever the sensors detect an obstacle. This system is flexible, since it can be used indoors as well as outdoors, and with or without a guiding stick.

Nada et al. proposed a fast-response smart stick that integrates staircase detection using an infrared sensor with detection of obstacles in the blind person's path using a pair of ultrasonic sensors [12]. The stick's design uses an ISD1932 flash memory, an embedded system built on a PIC18F46K80 microcontroller, and vibrating motors. As indication methods, they used speech warning messages and vibrating motors whenever an obstacle was detected.

An intelligent guide stick for blind people was proposed by Kang et al. [13]. The design consists of two DC motors, an ultrasonic displacement sensor, and a microcontroller. The stick weighs 4 kg in total, which is heavy for any person to handle, and its height and width are 85 cm and 24 cm respectively.

Chaurasia and Kavitha proposed a methodology for guiding blind people indoors [14]. It involves a walking stick and radio-frequency signals with different carrier frequencies: each path from the user's position to a destination carries a radio signal with its own carrier frequency, and any deviation from the path produces tactile vibrations. The main drawback of this methodology is that the path will not always remain in the same state.

Wahab et al. addressed this with ultrasonic sensors to detect obstacles and water sensors to detect water [15]; the user is warned both through voice output and through vibrations.

Most of these aids use vibrations to alert users to obstacles ahead [12,13,14,15]. This is because vibrations are reliable in any situation, unlike voice-based indications, which fail to serve their purpose in noisy environments.

Aids have also been developed with a GSM-GPS module to pinpoint the blind person's location and inform them of directions [16]. Such devices use ultrasonic sensors for obstacle detection and vibrating motors to warn the user of obstacles ahead; some also use accelerometer sensors [12,13,14,15]. The functionality is extended with a major focus on guiding the blind person with directions through efficient use of GPS modules. Many researchers focus on efficient ways of measuring the distance to objects using ultrasonic and IR sensors [17, 18].

The main disadvantage of these designs is the use of ultrasonic sensors [19]. Because ultrasonic sensors have a lower reading frequency than LiDAR and Time-of-Flight sensors, a stick may not efficiently detect moving obstacles. Another drawback is that users must themselves point the stick in a particular direction to detect obstacles, which is not efficient: since obstacles come in different heights and shapes, sweeping the stick just above ground level in one direction may not protect the user. Moreover, handling the stick by hand becomes tiring after some time.

Liu et al. [20] proposed a lightweight assistive system for holistic indoor detection based on 3D point-cloud instance segmentation with a solid-state LiDAR sensor. After 3D instance segmentation, the segmented point cloud is post-processed by removing outliers and projecting all points onto a top-view 2D map representation. Their system integrates this information and interacts with users intuitively through acoustic feedback.

However, transparent objects and glazing create trouble for the mobility of visually impaired or blind people.

Zhang et al. [21] constructed a wearable system with a novel dual-head Transformer for Transparency (Trans4Trans) model, which is capable of segmenting general and transparent objects and performing real-time wayfinding to assist people walking alone more safely.

Even as technological advances bring new innovations, a person with visual impairment or blindness still needs a whole kit: wearable smart glasses, an intelligent walking stick [22], a mobile app, and a cloud-based information management platform, to cover every aspect of daily life.

Finally, it is evident that no single feature can fully assist visually impaired people. Yet availing all the features across separate devices brings the inconvenience of carrying many gadgets and demands knowledge of how to use the technology.

Hence, a fusion of sensor-based, vision-based, and cognitive (smart) aids is needed.

Proposed design and methodology

The proposed Blind's Apron, a cognitive assistive device, is a wearable hardware-embedded device with a rich feature set, low weight, and easily understandable instructions for blind or visually impaired people. It includes the visual components required to assist blind people, such as obstacle detection, pothole detection, and object detection, along with geolocation-based tracking for caretakers. The proposed design is divided into three functional modules: 1. the input module, 2. the heart of the apron, and 3. the output module.

The input module consists of sensors and a camera. It captures the specified data and feeds it into the heart of the apron for processing.

The heart of the apron consists of the Raspberry Pi, the Arduinos, and all the core circuits responsible for the apron's functionality.

The output module consists of a buzzer, vibrating motors, and an audio jack, which deliver the processed data to the user.

The model of the proposed blind’s apron is shown in Fig. 1.

Fig. 1: Proposed blind's apron modelled in CATIA software

The following components are used for various functionalities in the blind's apron:

  • Raspberry Pi: implements the entire image and object recognition pipeline using Google's cloud services.

  • Arduino Mega: handles the switching operations of all other components in the heart of the apron; it also runs the pothole detection mechanism.

  • Arduino Nano: controls the individual LiDAR sensors and vibrating motors.

  • LiDAR sensors (VL53L1X): detect obstacles near the blind person using LiDAR technology.

  • Sonic rangefinders (HC-HRF04): calculate the distance from the blind person's chest level to the ground for detecting potholes and uneven roads.

The complete workflow of the Blind’s apron is shown in Fig. 2.

Fig. 2: The integrated working flow diagram of the Blind's apron

The proposed model is a smart wearable device that uses a 5 V, 3 A supply with a rechargeable power unit. It has one micro-HDMI output port from the Raspberry Pi 4, used to configure the Raspberry Pi by connecting it to a monitor, and a single charging port connected to the rechargeable unit. The model uses the VL53L1X Time-of-Flight sensor to measure the time taken for a light pulse emitted by the sensor to reflect off a surface and return to the sensor. This time value is sent to an Arduino Nano, which computes the distance at which the light beam was reflected. A total of ten VL53L1X sensors are attached at different locations on the body, and each sensor is paired with its own Arduino Nano to minimize the response time of the vibrating motor connected to it. The motor starts vibrating as soon as its Arduino Nano detects that the reflection distance of the light beam is less than 30 cm. These vibrations at different parts of the body let the blind person easily estimate the size of the obstacle ahead without any manual effort.
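As a rough illustration of the time-of-flight principle described above, the sketch below converts a round-trip pulse time into a distance and applies the 30 cm vibration threshold. This is a minimal Python sketch of the logic only; the prototype runs equivalent firmware on each Arduino Nano, and the sensor read-out is represented here by a plain function argument.

```python
# Hedged sketch: converting a time-of-flight reading into a distance and a
# vibration decision. In the prototype this runs per sensor on an Arduino
# Nano; the round-trip time is a stand-in for the actual sensor read.
SPEED_OF_LIGHT_M_S = 299_792_458
OBSTACLE_THRESHOLD_M = 0.30  # vibrate when the obstacle is closer than 30 cm

def distance_from_flight_time(round_trip_s: float) -> float:
    # The pulse travels to the obstacle and back, so halve the path length.
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

def should_vibrate(round_trip_s: float) -> bool:
    return distance_from_flight_time(round_trip_s) < OBSTACLE_THRESHOLD_M

# Example: a 2-nanosecond round trip corresponds to roughly 0.30 m.
print(should_vibrate(2e-9))  # True (just under the threshold)
```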

Sensor fusion

The multi-sensor fused navigation system combines sensor-based, vision-based, and intelligent (smart) components. LiDAR sensors, a vision module with a Pi camera, a GPS module, and a smart component consisting of a mobile application backed by a Firebase database together provide visual assistance features such as obstacle, pothole, and uneven-surface identification, navigation, location tracking, and user instructions. The functionalities of this sensor fusion are tabulated in Table 2.

Table 2 Functionalities of multi-sensor fused navigation system

In brief, ten small LiDAR sensors detect close-range obstacles, while a Pi camera [23] connected to a cloud service captures the forward scene in front of the user. Each LiDAR sensor is also paired with a vibrating motor that informs the user about obstacles, and a GPS module handles user localization and navigation.
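A minimal fusion loop might look like the sketch below, which polls the ten LiDAR channels, drives the paired motors, and routes the camera labels and GPS fix to their respective outputs. All function names, values, and coordinates here are hypothetical placeholders for the modules described above, not the authors' code.

```python
# Illustrative fusion dispatch: per-sensor motors for near obstacles,
# voice output for vision labels, and a location upload for caretakers.
from dataclasses import dataclass

NEAR_OBSTACLE_MM = 300

@dataclass
class FusedState:
    lidar_mm: list[int]            # ten close-range distances, millimetres
    frame_labels: list[str]        # labels from the cloud vision service
    position: tuple[float, float]  # (latitude, longitude) from the GPS module

def dispatch(state: FusedState, vibrate, speak, upload_location):
    for channel, mm in enumerate(state.lidar_mm):
        vibrate(channel, mm < NEAR_OBSTACLE_MM)  # per-sensor motor on/off
    if state.frame_labels:
        speak(", ".join(state.frame_labels))     # voice feedback
    upload_location(*state.position)             # caretaker tracking

# Example wiring with stub outputs and made-up readings:
dispatch(
    FusedState([250] + [1500] * 9, ["chair"], (17.3850, 78.4867)),
    vibrate=lambda ch, on: print(f"motor {ch}: {'on' if on else 'off'}"),
    speak=print,
    upload_location=lambda lat, lon: print(f"fix {lat}, {lon}"),
)
```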

This feature makes the proposed design unique and stand out, because no other design helps the blind person detect and estimate the height and size of surrounding obstacles simultaneously without manual effort.

Security feature

The design also includes a security feature: the user must enter a four-digit PIN by pressing, in sequence, four buttons provided at the side of the main body. These buttons are connected to the Arduino Mega, whose functions, such as switching on the sensors and the Raspberry Pi, work only once the correct PIN has been entered. Entering the correct PIN drives a pin on the Arduino connected to the four buttons to the HIGH state, which the Arduino Mega requires in order to perform its functions. This security feature mainly prevents unauthenticated users from operating the system.

Detection scenarios

The design is elaborated across several scenarios: obstacle detection; uneven surface detection, which includes pothole, slope, and downward step detection; and hollow object detection.

VL53L1X Time-of-flight sensors and obstacle detection

Time-of-Flight sensors are more precise and quicker at measuring distances, and their detection range is longer, compared to the traditional ultrasonic sensors used in earlier assistive devices [12,13,14,15]. They also weigh far less than ultrasonic sensors, which matters because it adds to the blind person's comfort. The selection of appropriate sensors depends on factors such as cost, the type of obstacles to be detected, measurement precision, and detection range. A total of ten such sensors are used in the proposed design, located at different positions on the body. Each sensor weighs around 0.4 to 0.7 g and operates at a 15 mA supply current. The VL53L1X Time-of-Flight distance sensor was selected mainly for its size and accuracy: these sensors are extremely small compared to other distance-measuring sensors such as ultrasonic and IR sensors, and they are very precise, which is essential because the product cannot compromise on sensor accuracy without putting the user in danger. The working flow of the Blind's apron for obstacle detection is shown in Fig. 3.

Fig. 3: Working flow of the Blind's apron for obstacle detection

Table 3 compares three different distance-measuring sensors. The comparison concludes that the VL53L1X Time-of-Flight sensor is much better suited to the proposed design than ultrasonic or IR sensors.

Table 3 Comparison of the sensors across different factors

The VL53L1X board also carries additional components: a voltage regulator, which allows the sensor to operate from 2.2 V to 5.5 V, and a level-shifter circuit that supports I2C communication across logic levels (I2C communication is not used in the current model because the need for it is insignificant). The range of these sensors is 4 m, and they work on the time difference between the emission of a light pulse and its return to the sensor.

The sensor also offers three distance modes: short, medium, and long. Long mode allows ranging at the longest possible distance of 4 m, but the maximum range is always significantly affected by ambient light. Short mode is largely immune to ambient light, but its maximum ranging distance is limited to about 1.3 m. The maximum sampling rate in short mode is 50 Hz, while in the medium and long modes it is 30 Hz. These modes let the product configure the sensors according to the size of the environment the user is currently in; a configuration sketch follows this paragraph. Although the VL53L1X is not very sensitive to external conditions, the main aims of the proposed design remain accuracy, comfort, and fast responsiveness. Figure 4 shows a situation where the blind person can easily judge the height of different obstacles by wearing the apron.
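The configuration sketch below selects a distance mode from a Raspberry Pi, assuming the Pimoroni vl53l1x Python package; the actual prototype performs the equivalent setup from its Arduino Nanos, so this is an illustration of the idea rather than the device firmware.

```python
# Sketch of VL53L1X distance-mode selection (Pimoroni vl53l1x package).
import VL53L1X

SHORT, MEDIUM, LONG = 1, 2, 3  # the package's ranging-mode constants

tof = VL53L1X.VL53L1X(i2c_bus=1, i2c_address=0x29)
tof.open()

# Indoors, short mode trades range (~1.3 m) for ambient-light immunity and
# the 50 Hz sampling rate; outdoors, long mode extends the range to 4 m at
# 30 Hz but is more affected by sunlight.
indoors = True
tof.start_ranging(SHORT if indoors else LONG)

distance_mm = tof.get_distance()  # latest ranging result in millimetres
print(f"Obstacle at {distance_mm / 10:.1f} cm")
tof.stop_ranging()
```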

Fig. 4: VL53L1X sensors used to judge the properties of the obstacle

Once the blind person comes close to an obstacle of a particular height, the vibrating motors vibrate at the specific positions where a VL53L1X sensor is exposed to the obstacle.

Uneven surface detection, including pothole, slope, and downward step detection

Pothole detection is carried out within a sensor-based framework, in the spirit of the convolutional-neural-network-based obstacle identification of [24].

Pothole detection is designed around a single VL53L1X sensor, which detects uneven surfaces on the ground such as potholes and speed breakers. This sensor, together with the switching circuit that switches the sensors and the Raspberry Pi on and off, is controlled by the Arduino Mega. A GPS module, controlled by the Raspberry Pi, tracks the blind person's location: it sends the data to Firebase through the Raspberry Pi and its built-in WiFi module, and caretakers can track the blind person by reading this data from Firebase in a simple tracking application.

A VL53L1X sensor is placed at the user's abdomen level. The sensor measures the time difference between a light pulse hitting the ground surface and its reflection returning to the sensor. This value is sent to the Arduino Mega, which activates or deactivates the pothole indication accordingly. The working flow of the Blind's apron for uneven surface detection is shown in Fig. 5.

Fig. 5: The working flow of the Blind's apron for uneven surface detection

Algorithm 1 shows that the sensor takes a couple of seconds to configure the initial reference value against which uneven surfaces are detected, based on the difference between this reference value (as in [18]) and the subsequent values read by the sensor.

This configuration depends on the height of the user. The reference value remains constant until the user resets the system. It is the distance measured from the user's abdomen level to a point on the ground at a particular angle, forming a hypotenuse from the user's abdomen to that point. This reference value is denoted 'd'.

Algorithm 1: Uneven surface detection

Now, if a pothole or slope lies ahead of the user, the hypotenuse length measured by the sensor will either increase or go out of range. So, if calculated_distance > d or calculated_distance is out of range, the system concludes that there is uneven road ahead and alerts the user (a minimal sketch of this test follows). Figures 6 and 7 show pothole and downward-step detection together with the measured distances.
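The sketch below illustrates the test from Algorithm 1 under the stated assumptions: readings in millimetres, a reference 'd' calibrated over a couple of seconds, and a small tolerance (a hypothetical value, not specified in the text) to absorb gait-induced fluctuation.

```python
# Minimal sketch of the uneven-surface test: compare each hypotenuse
# reading from the abdomen-level sensor against the calibrated reference.
import statistics

OUT_OF_RANGE = None   # the sensor reports nothing beyond its ~4 m range
TOLERANCE_MM = 50     # assumed allowance for small walking fluctuations

def calibrate(samples_mm: list[int]) -> float:
    """Average a couple of seconds of readings to fix the reference 'd'."""
    return statistics.mean(samples_mm)

def uneven_surface_ahead(reading_mm, d: float) -> bool:
    # A pothole or downward slope lengthens the hypotenuse, or pushes the
    # reflection point out of range entirely (the downward-step case).
    return reading_mm is OUT_OF_RANGE or reading_mm > d + TOLERANCE_MM

d = calibrate([1210, 1198, 1205, 1202])       # example calibration readings
print(uneven_surface_ahead(1420, d))          # True: pothole ahead
print(uneven_surface_ahead(OUT_OF_RANGE, d))  # True: downward steps/slope
```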

Fig. 6: Pothole detection

Fig. 7: Slope and downward steps detection

In Fig. 6, the sensor encountered a pothole in the interval from 6 to 8 s, which increased the calculated distance from the user to the ground surface.

In Fig. 7, the user walked towards the downward steps; at the 7th second the readings started to increase, and at the 9th second the sensor readings went out of range.

Hollow object detection

These sensors are also used to detect objects with hollow bottoms, such as tables and chairs. For such objects, the hypotenuse length falls drastically within seconds when the user walks towards them. The processing unit identifies such a sudden drop and alerts the user. Because the initial threshold value has been configured for the sensor, once an object of some height above ground level enters the beam, the distance measured falls instantaneously; the microcontrollers detect this drop and indicate the event to the user, who can then take measures to avoid the danger.
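A hedged sketch of this drop test follows; the drop threshold is an assumed value chosen for illustration, not a parameter given in the text.

```python
# Hollow-object detection sketch: a sudden large drop between successive
# hypotenuse readings suggests an elevated surface with a hollow base
# (table top, chair seat) entering the sensor's beam.
DROP_THRESHOLD_MM = 300  # assumed size of an "instantaneous" drop

def hollow_object_ahead(previous_mm: int, current_mm: int) -> bool:
    # Successive readings arrive tens of milliseconds apart, so a drop of
    # this size between them cannot come from normal ground variation.
    return previous_mm - current_mm > DROP_THRESHOLD_MM

print(hollow_object_ahead(1205, 640))   # True: e.g. a dining-table top
print(hollow_object_ahead(1205, 1180))  # False: ordinary gait wobble
```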

Object detection is built with a Pi camera, a Raspberry Pi, and the Google Cloud API. The Pi camera captures the pictures intended by the blind person and passes them to the Raspberry Pi, which sends them to Google Cloud via the Vision API services; the picture is processed in the cloud to detect the objects in the image, and the results are sent back through the Cloud Pub/Sub service to the Raspberry Pi. The results are then conveyed to the blind person as voice output generated on the Raspberry Pi 4. The workflow of hollow object detection is shown in Fig. 8.

Fig. 8: The working flow of the Blind's apron for hollow object detection

In Fig. 9, the sensor encountered a dining table in front of the user. As soon as the user came close to the table, the measured distance kept dropping from the 7th to the 10th second; after the user walked away from the table, it returned to the reference value.

Fig. 9: Detection of a table with a hollow base

Pi camera and vision API services

Figure 10 shows the workflow of object detection, comprising the Pi camera for input images and the respective API components that control the flow. The Pi camera captures pictures of the surroundings and outputs each image to the Raspberry Pi, which sends the image data to the cloud. In the cloud, the image is processed for object, feature, landmark, and logo detection using pre-trained models, and the results are returned to the text-to-speech conversion module on the Raspberry Pi. The results are then recited for the blind person to hear and judge the surroundings.

Fig. 10: Workflow of the object detection design

Once Google Cloud receives the image data, it uses the Vision API services to detect the objects, features, and other entities in the image. Cloud Pub/Sub, a real-time asynchronous messaging service, acts as the interface between the input image data and the Vision API: it decouples the services that produce events from the services that process them, and allows one-to-many, many-to-one, or one-to-one publisher-subscriber channels. Within the Vision API services, Google's pre-trained prediction models, trained on thousands of datasets, predict the objects, features, and many other entities in the image; AutoML services can instead be used to train a custom prediction model on our own datasets. After prediction, the results are sent back to the Raspberry Pi via Pub/Sub so the user can be told about the captured object.

The Vision API offers flexible features such as object detection, feature detection, landmark detection, logo detection, and optical character recognition, and it is known for fast and accurate responses. Since blind people cannot afford delays in learning about the obstacles around them, as would occur with GPU-intensive on-device deep learning techniques, the Vision API is the better choice.
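A minimal sketch of such a cloud call, assuming the google-cloud-vision Python client and its label-detection helper; the paper's exact request path, including the Pub/Sub channel, is not reproduced here, and the image filename is a placeholder.

```python
# Hedged sketch: send one captured image to the Cloud Vision API and
# collect the label descriptions that would feed text-to-speech.
from google.cloud import vision

def describe_image(path: str) -> list[str]:
    client = vision.ImageAnnotatorClient()  # requires GCP credentials
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)  # pre-trained model
    return [label.description for label in response.label_annotations]

# The returned labels would be recited by the Pi's text-to-speech module.
for label in describe_image("capture.jpg"):
    print(label)
```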

GPS module and location tracking

Similar to the concept used in [16], the proposed design also includes a location tracking system built around an Adafruit Ultimate GPS module connected to the Raspberry Pi. The key difference is that here the GPS module is used not to guide the blind person, but to help caretakers track the person's movements. Location tracking involves the following steps; a Pi-side sketch of steps 1 and 2 follows the list.

  1. The GPS module sends the live location to the Raspberry Pi.

  2. As soon as the location data is received, the Raspberry Pi sends it to the Google Firebase DB after proper authentication.

  3. A mobile application is connected to this database with the proper API key and database URL.

  4. Once the location data from the Raspberry Pi reaches Firebase, the same data is displayed on the mobile application, helping the caretakers track the blind person's current location.
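The sketch below covers steps 1 and 2, assuming pyserial and pynmea2 for reading the GPS NMEA stream and the firebase-admin SDK for the authenticated upload; the serial device path, database URL, database path, and credential file are hypothetical placeholders.

```python
# Illustrative Pi-side loop: parse GPS fixes and push them to Firebase.
import serial
import pynmea2
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")  # placeholder file
firebase_admin.initialize_app(
    cred, {"databaseURL": "https://example-app.firebaseio.com"}
)
location_ref = db.reference("blind_apron/location")

gps = serial.Serial("/dev/serial0", baudrate=9600, timeout=1)

while True:
    line = gps.readline().decode("ascii", errors="ignore")
    if line.startswith("$GPGGA"):  # GGA sentences carry the position fix
        msg = pynmea2.parse(line)
        if msg.latitude and msg.longitude:
            # Step 2: push the fix; the mobile app (steps 3-4) subscribes
            # to this path to display the live position to caretakers.
            location_ref.set({"lat": msg.latitude, "lon": msg.longitude})
```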

Figure 11 shows the circuit diagram of the GPS module, with the pin connections GPS Vin to 3.3 V (red wire), GPS GND to ground (black wire), GPS RX to TX (green wire), and GPS TX to RX (white wire).

Fig. 11: Circuit diagram of the GPS module connected to the Raspberry Pi

Vibrating motor

The vibration motor is a direct-current motor of the kind used in mobile phones. It typically requires around 125 mA of current at a 3 V to 5 V supply. Its speed can be programmed using the pulse-width modulation (PWM) method, although the proposed system uses the default speed. The motor's diameter and thickness are 0.5 cm and 2.5 mm respectively.
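To illustrate the PWM option mentioned above, here is a sketch using gpiozero on a Raspberry Pi pin; the prototype actually drives the motors at default speed from the Arduino Nanos, and the GPIO pin number is hypothetical.

```python
# Optional PWM speed control for a vibration motor (illustrative only).
from time import sleep
from gpiozero import PWMOutputDevice

motor = PWMOutputDevice(18, frequency=100)  # hypothetical pin, 100 Hz PWM

# One could scale vibration intensity with obstacle proximity instead of
# simple on/off: the duty cycle sets the effective motor voltage.
for duty in (0.3, 0.6, 1.0):  # gentle -> strong
    motor.value = duty
    sleep(1)
motor.off()
```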

Four pin security lock

The in-built security system is provided by the Arduino board. The circuit shown in Fig. 12 gives the user four buttons plus a reset button, with the four buttons assigned the numerals 1 to 4. One terminal of each button switch is connected to its respective digital pin on the Arduino and the other is grounded, so pressing a switch pulls its digital pin LOW. Based on the pattern in which the pins go LOW, digital pin 7 takes its value: if the switches are pressed in the correct order, pin 7 goes HIGH; otherwise it stays LOW. Once pin 7 is HIGH, it sends voltage signals to the reading pins of the remaining microcontrollers, instructing them to follow the user's switching instructions. When the user presses the reset button, the system locks again. A simple LED turns ON if the entered PIN is correct. The sequence-matching logic is sketched below.
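The following Python sketch mirrors the firmware behaviour described above purely for illustration; the real logic runs on the Arduino, with pin 7 as the "unlocked" output, and the button order used as the PIN here is hypothetical.

```python
# Sequence-matching logic of the four-button lock.
CORRECT_ORDER = [3, 1, 4, 2]  # hypothetical PIN: button numbers in order

class PinLock:
    def __init__(self, correct_order: list[int]):
        self.correct_order = correct_order
        self.entered: list[int] = []
        self.unlocked = False  # mirrors digital pin 7

    def press(self, button: int) -> bool:
        self.entered.append(button)
        if len(self.entered) == len(self.correct_order):
            # Drive "pin 7" HIGH only when the whole sequence matches.
            self.unlocked = self.entered == self.correct_order
            self.entered = []
        return self.unlocked

    def reset(self) -> None:
        # The reset button locks the system again.
        self.entered = []
        self.unlocked = False

lock = PinLock(CORRECT_ORDER)
for b in (3, 1, 4, 2):
    lock.press(b)
print(lock.unlocked)  # True: LED on, other microcontrollers enabled
```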

Fig. 12: Circuit diagram of the 4-pin security system, designed in a schematic diagram maker

Communication in the Blind’s Apron

A wireless communication module based on frequency modulation (FM) is developed so that the blind person can locate the system. An RF transmitter generates the radio frequency in its circuits, as depicted in Fig. 13: the carrier signal is modulated with the information to form a composite signal, which is fed to the antenna. An RF receiver tuned to the same frequency picks up the signal from the atmosphere through its own antenna via the changing electric and magnetic fields. The receiver circuits then strip the information from the composite signal and amplify it to a useful level, producing audio signals that the blind person, or even the caretakers, can use to find the system.

Fig. 13: RF transmitter and receiver

Results and discussions

User study

The final prototype was tested in a user study. A blind person was asked to walk through an area in which different types of obstacles were placed within a 20 m range, and the walking speed was recorded before and after training. The average speed after training was 0.792 m/s, similar to the speed achieved in [12] with a blind walking stick, whereas before training it was 0.312 m/s. Compared against the average walking speed of sighted people, 1.4 m/s, training thus increased the blind person's travelling speed by more than a factor of two. It was also noted that the effective responsiveness of the system after detecting an obstacle decreased as the walking speed increased, and walking faster than 3 km/h (0.833 m/s) led to collisions with obstacles. These results were obtained with the identified design parameters and the implemented technologies. The distances measured at various obstacles (a pothole, a dining table, and steps) are treated as three different cases and shown as time-series plots in Fig. 14.

Fig. 14: Time-series plots of sensor distance after encountering (a) a pothole, (b) a dining table, (c) a downward step

The response from the Cloud Vision API services arrived within 3 s of each request; a snapshot of the response returned by the Vision API is shown in Fig. 15.

Fig. 15: Snapshot of the response returned by the Vision API

Instead of processing each image for object detection locally with custom convolutional neural networks, which took nearly 15 s because it is GPU-intensive work, using Google's cloud services to obtain quick results was the better option.

The sensors also gave accurate measurements, with immediate responses from the vibrating motors for normal obstacle detection and voice-based warnings for potholes and other uneven surfaces, since each sensor is handled by a separate Arduino Nano. Because delays in detecting obstacles cannot be afforded, LiDAR sensors were used instead of ultrasonic sensors: light waves travel faster than sound waves, and LiDAR sensors are also more accurate. The design is handy and can be carried anywhere. The total product weighs around 1.3 kg, which any person can easily handle. The system runs for 4 to 5 h of continuous use and takes 3 to 4 h to recharge; a 5 V, 3 A supply recharges it even faster.

Performance and efficiency of the proposed system

The performance and efficiency of the present system are evaluated using various factors such as the accuracy of the detection, the response time, power consumption and range of detection.

For the first factor, detection accuracy, the final prototype was tested on both trained and untrained blind persons. The corresponding plot in Fig. 16 clearly indicates the faster response achieved by the multi-sensor fused navigation system combining sensor-based, vision-based, and intelligent (smart) components.

Fig. 16: Performance and accuracy of the Blind's apron

The second factor is the response time of the system: a system detecting and responding within 0–100 ms is considered fast, within 100–200 ms medium, and above 200 ms slow. The proposed Blind's Apron falls into the fast category, as it must, since the main aim of the design is to make the system's response as fast and as accurate as possible.

The responsiveness of the system was examined through the vibrating motors, which serve the important purpose of indicating threats around the blind person. As shown in Figs. 17 and 18, there is a time gap T3 between the instant the VL53L1X sensor logs an obstacle distance below the default 30 cm into the system and the instant the vibrating motors actually start vibrating. This latency is small, averaging 710 ms. It should also be noted that the vibrating motors do not stop as soon as the obstacle disappears from the sensor's view; they take some additional time, which is, interestingly, almost equal to the latency T3.
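One simple way such a latency could be logged is sketched below, under the assumption of two hypothetical callback hooks: one fired when a below-threshold distance is registered, and one when the motor output actually changes. This is an illustration of the measurement idea, not the authors' instrumentation.

```python
# Sketch: timestamp detection and motor activation to estimate T3.
import time

pending_detection_ns = None
latencies_ms: list[float] = []

def on_distance_logged(distance_mm: int) -> None:
    """Hypothetical hook: a distance sample has been logged."""
    global pending_detection_ns
    if distance_mm < 300 and pending_detection_ns is None:
        pending_detection_ns = time.monotonic_ns()  # start of T3

def on_motor_started() -> None:
    """Hypothetical hook: the motor driver output went active."""
    global pending_detection_ns
    if pending_detection_ns is not None:
        latencies_ms.append((time.monotonic_ns() - pending_detection_ns) / 1e6)
        pending_detection_ns = None

# After a trial run over several obstacles:
# print(sum(latencies_ms) / len(latencies_ms))  # ~710 ms reported in Fig. 18
```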

Fig. 17: Responsive timing diagram

Fig. 18: Latency in milliseconds measured for a total of 7 obstacles

The third factor is the range of detection. Generally, devices that detect obstacles within 0–2 m, within 2–4 m, and beyond 4 m are considered low-range, medium-range, and high-range devices respectively. The proposed system detects obstacles at ranges up to 4 m by operating the VL53L1X in its different modes.

The fourth important factor is the system's power consumption and how long it works between recharges. Generally, systems consuming less than 0.5 W, 0.5–1 W, and more than 1 W of electrical power are considered low-, medium-, and high-power-consumption systems respectively. The proposed system falls into the high-power-consumption class. Since the system spans the whole body and performs additional activities such as cloud interactions, its high power consumption is not surprising. The operating voltage and current of the design will not harm the user, however, and the design takes care that no electrical equipment is in direct contact with the user. The working prototype of the Blind's apron is shown in Fig. 19.

Fig. 19: A real user wearing the working prototype

Conclusion

Technology interventions in the creation of assistive devices promise to improve the quality of life (QoL) of elderly and blind people.

The Blind's Apron has been designed with this in view, to assist senior citizens and people who are visually impaired or blind.

User feedback on this wearable assistive aid reports high comfort, light weight, and ease of use.

The novel multi-sensor fused navigation system, combining sensor-based, vision-based, and intelligent (smart) components, enhanced the performance of the proposed Blind's Apron.

Successful implementation has been achieved with high accuracy and a response more than 200% that of traditional assistive devices, across the scenarios of obstacle detection, uneven surface identification (including potholes, slopes, and downward steps), and hollow object detection.

The feedback from the user was positive, with a few limitations:

  1. The person cannot walk at a speed greater than 3 km/h.

  2. The person has to wait 4 to 5 s after switching on the apron while it completes its initial configuration, which calculates the hypotenuse length used in pothole and uneven-ground detection.

With minimal intervention, the aid helps the user understand the environment through voice instructions; without carrying any other gadget such as a mobile phone, the user can track and capture the surroundings.

Availability of data and materials

All the data generated or analysed during this study are included in this published article.

References

  1. Velázquez R. Wearable assistive devices for the blind. In: Lay-Ekuakille A, Mukhopadhyay SC, editors. Wearable and autonomous biomedical devices and systems for smart environment. Lecture Notes in Electrical Engineering. Berlin: Springer; 2010.

  2. Juneja S, Joshi P. Design and development of a low cost and reliable writing aid for visually impaired based on Morse code communication. Technol Disabil. 2020;32(2):59–67.

  3. Isaksson J, Jansson T, Nilsson J. Desire of use: a hierarchical decomposition of activities and its application on mobility of blind and low-vision individuals. IEEE Trans Neural Syst Rehabil Eng. 2020;28(5):1146–56.

  4. Barontini F, et al. Integrating wearable haptics and obstacle avoidance for the visually impaired in indoor navigation: a user-centered approach. IEEE Trans Haptics. 2020;14(1):109–22.

  5. Dakopoulos D, Bourbakis NG. Wearable obstacle avoidance electronic travel aids for blind: a survey. IEEE Trans Syst Man Cybern. 2010;40(1):25–35.

  6. Myneni MB, Ginnavaram SRR, Padmaja B. An intelligent assistive VR tool for elderly people with mild cognitive impairment: VR components and applications. Int J Adv Sci Technol. 2020;29(4):796–803.

  7. Chen Z, Liu X, Kojima M, Huang Q, Arai T. A wearable navigation device for visually impaired people based on the real-time semantic visual slam system. Sensors. 2021;21(4):1536.

  8. Khanom M, Sadi MS, Islam MM. A comparative study of walking assistance tools developed for the visually impaired people. In: 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT); 2019. p. 1–5.

  9. Islam MM, Sheikh Sadi M, Zamli KZ, Ahmed MM. Developing walking assistants for visually impaired people: a review. IEEE Sensors J. 2019;19(8):2814–28. https://doi.org/10.1109/JSEN.2018.2890423.

  10. Bujacz M, Barański P, Morański M, Strumillo P, Materka A. Remote mobility and navigation aid for the visually disabled. Institute of Electronics, Technical University of Łódź, 211/215 Wólczańska, Poland.

  11. Sharma A, Patidar R, Mandovara S, Rathod I. Blind audio guidance system. In: National Conference on Machine Intelligence Research and Advancement; 2013. p. 17–19.

  12. Nada A, Mashali S, Fakhr M, Seddik A. Effective fast response smart stick for blind people. In: Second International Conference on Advances in Bio-Informatics and Environmental Engineering; 2015. https://doi.org/10.15224/978-1-63248-043-9-29.

  13. Kang SJ, Ho Y, Moon IH. Development of an intelligent guide-stick for the blind. In: IEEE International Conference on Robotics and Automation; 2001. https://doi.org/10.1109/ROBOT.2001.933112.

  14. Chaurasia S, Kavitha KVN. An electronic walking stick for blinds. In: International Conference on Information Communication and Embedded Systems (ICICES); 2014. https://doi.org/10.1109/ICICES.2014.7033988.

  15. Wahab MH, Talib AA, Kadir HA, Noraziah A, Sidek RM. Smart cane: assistive cane for visually-impaired people. Int J Comp Sci Issues. 2011;8(4):21–7.

  16. Alshbatat AIN. Automated mobility and orientation system for blind or partially sighted people. Int J Smart Sensing Intell Syst. 2013;6(2):568–82. https://doi.org/10.21307/ijssis-2017-555.

  17. Mohammad T. Using ultrasonic and infrared sensors for distance measurement. World Acad Sci Eng Technol. 2009;51:293–9.

  18. Benet G, Blanes F, Simó JE, Pérez P. Using infrared sensors for distance measurement in mobile robots. Robot Auton Syst. 2002;40:255–66. https://doi.org/10.1016/S0921-8890(02)00271-3.

  19. Cardillo E, Di Mattia V, Manfredi G, Russo P, De Leo A, Caddemi A, Cerri G. An electromagnetic sensor prototype to assist visually impaired and blind people in autonomous walking. IEEE Sens J. 2018;18(6):2568–76.

  20. Liu H, et al. HIDA: towards holistic indoor understanding for the visually impaired via semantic instance segmentation with a wearable solid-state LiDAR sensor. In: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW); 2021. p. 1780–90.

  21. Zhang J, Yang K, Constantinescu A, Peng K, Müller KE, Stiefelhagen R. Trans4Trans: efficient transformer for transparent object segmentation to help visually impaired people navigate in the real world. In: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW); 2021. p. 1760–70.

  22. Chang W-J, Chen L-B, Chen M-C, Su J-P, Sie C-Y, Yang C-H. Design and implementation of an intelligent assistive system for visually impaired people for aerial obstacle avoidance and fall detection. IEEE Sens J. 2020;20(17):10199–210.

  23. Jo Y, Ryu S. Pothole detection system using a black-box camera. Sensors. 2015;15(11):29316–31. https://doi.org/10.3390/s151129316.

  24. Islam MM, Sadi MS, Bräunl T. Automated walking guide to enhance the mobility of visually impaired people. IEEE Trans Med Robot Bionics. 2020;2(3):485–96.


Acknowledgements

The authors express sincere gratitude to their affiliating institutes for providing the necessary infrastructure, support, and funding for completing the research. The authors also thank Mr. Kothacheruvu Sai Sravan, an undergraduate student associate, for supporting the study and for modelling and wearing the working prototype in Fig. 19 with full consent.

Funding

No funding is received for this study.

Author information

Authors and Affiliations

Authors

Contributions

Dr. MM conceived and designed the analysis and collected the data. Dr. DNV contributed data and analysis tools. Dr. AH contributed to executing the user study. Dr. CHVKNSNM performed the analysis and prepared the manuscript. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to D. N. Vasundhara or CH. V. K. N. S. N. Moorthy.

Ethics declarations

Ethics approval and consent to participate

This is an observational study; no clinical trials were undertaken, and consequently ethical approval is not required. Informed consent was obtained verbally from all individual participants during this observational study.

Consent for publication

As this is an observational study, not applicable.

Competing interests

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Bala, M.M., Vasundhara, D.N., Haritha, A. et al. Design, development and performance analysis of cognitive assisting aid with multi sensor fused navigation for visually impaired people. J Big Data 10, 21 (2023). https://doi.org/10.1186/s40537-023-00689-5

