1 Introduction

The field of assistive technology (AT), commonly understood as technology designed for individuals with some form of impairment (or for the elderly), is a vital, rapidly expanding field that draws on many disciplines and is driven mainly by technology. Assistive technology for the visually impaired (VI) and blind people is concerned with “technologies, equipment, devices, apparatus, services, systems, processes and environmental modifications” [43] that enable them to overcome physical, social, infrastructural and accessibility barriers to independence and to live active, productive and independent lives as equal members of society.

Fig. 1

Examples of Assistive Technology from current research for Visually Impaired and Blind people. a Assisted Vision project: award-winning augmented reality smart glass developed at the University of Oxford ([44], photo credit: Dr Stephen Hicks). b AI lenses: a prototype of Artificial Intelligence based lenses from the Center for Research and Advanced Studies (CINVESTAV), Mexico (photo credit: CINVESTAV). c BrainPort V100: a non-surgical assistive device that aids blind people in seeing with their tongues ([3, 85], photo credit: Wicab). d EyeMusic: a sensory substitution device that conveys visual information via an auditory experience of musical notes, developed at the Hebrew University of Jerusalem ([1], photo credit: Maxim Dupliy, Amir Amedi and Shelly Levy-Tzedek). e NAVI: a low-cost, Microsoft Kinect-based proof of concept of a mobile navigational aid developed at the University of Konstanz, Germany ([117], photo credit: University of Konstanz). f Anagraphs project: a Braille e-book reader from the Fraunhofer Institute, Germany, which operates by thermo-hydraulic micro-actuation (photo credit: Fraunhofer Institute; http://www.anagraphs.eu). g FingerReader: a small finger-worn form factor from the MIT Media Lab that assists blind users with reading printed text on the go ([98, 99], photo credit: Fluid Interfaces Group, MIT Media Laboratory). h 3-D smartphone for aerial obstacle detection for visually impaired and blind individuals, developed at the University of Alicante, Spain ([94], photo credit: J. M. Saez)

Vision is an extremely vital sensory modality in humans, and its loss affects the performance of almost all activities of daily living (ADL) and instrumental activities of daily living (IADLs), thereby hampering an individual’s quality of life (QoL), general lifestyle, personal relationships and career. Therefore, technology that facilitates accessibility, safety, and an improved quality of life has a very relevant social impact [77]. Moreover, with our ever-increasing ageing and blind populations, it has the potential to broadly impact our quality of life in the future. This has driven novel research across many disparate disciplines, from cognitive psychology and neuroprosthetics to computer vision and sensor processing to rehabilitation engineering. More recently, advances in computer vision, wearable technology, multisensory research, and medical interventions have facilitated the development of numerous assistive technology solutions of both kinds, invasive and non-invasive (a varied selection of compelling recent works on assistive technology for visually impaired and blind individuals is shown in Fig. 1).

Research on assistive technology for the visually impaired and blind people has traditionally focused on mobility, navigation, and object recognition, and more recently on printed information access and social interaction as well [109]. Over the last decade there has been an expansion of research interest in the field, and significant developments have taken place in the form of novel (miniaturized) wearable electronic travel aids (ETAs), smart canes, (wearable) form factors, smartphone-based devices and apps, tactile displays and interfaces, cortical and retinal implants (bionic eyes), etc. This paper presents an insight into the current state of assistive technology for the visually impaired and blind people, with an emphasis on what can be learnt from the last two decades of published research and what the future trends are. This work is a survey of the state of the art in research in the field based on information analysis of a database of scientific research publications. The publications are relevant to assistive technology for the visually impaired and blind individuals across various fields of technology, medicine and related sub-disciplines.

Table 1 Comparison with existing (subjective) surveys based on coverage of Assistive Technology for the Visually Impaired and Blind people related topics

While there are many excellent reviews: on mobile and wearable assistive devices for the blind by Tapu et al. [107], Velazquez [112] and Ye et al. [114]; reviews of electronic travel aids or locomotion aids by Dakoupoulos and Bourbakis [28], Freiberger [37] and Jacquet et al. [51]; on mobile assistive technologies by Hakobyan et al. [40]; on computer vision-based assistive technologies by Manduchi and Coughlan [77] and Terven et al. [109]; on the use of haptics for the design of aids by Levesque [68]; on sensory substitution by Maidenbaum et al. [76] and Bach-y-Rita and Kercel [7]; and systematic reviews of electronic navigation aids by Roentgen et al. [90, 91]; all of these accounts are subjective in nature and emphasize subjects and themes that are part of the authors’ expertise or research priorities. Table 1 compares and contrasts all prior subjective surveys with the current objective survey of literature, and shows how the scope and focus of our survey differ from prior surveys. At the same time, our purpose in this survey is to present a review complementary to the existing ones and an insight into the field from the viewpoint of a more objective statistical survey across the various sub-disciplines in the field. In particular, we focus on the following five key questions by analyzing the corpus: What areas does assistive technology for the visually impaired and blind people encompass? Where is research for assistive technology for visually impaired and blind people published? How rapidly is assistive technology for the visually impaired and blind people expanding as a research field? Are there research communities within the field of assistive technology for the visually impaired and blind people? Where is assistive technology for visually impaired and blind people headed in the future? Focusing on these questions will benefit the field, as we believe that the answers will clarify how the field evolved to its current state, how it derives from novel research across other fields, what the current research patterns and trends in the field are, and where they appear to be heading in the future. Although design/interface, performance and usability would make a very interesting study, we could not pursue it in the current work because the application of information analysis and network theory techniques required us to analyze the constructed publication database objectively.

The methodology employed in this paper is inspired by the one employed by Lepora et al. [67] in a prior article on the field of Biomimetics. Though this methodology was useful in guiding our approach, we differ from the Lepora et al. paper in the following ways: (i) to begin with, we consider four major international scientific databases, detailed in Appendix A (instead of three, in [67]); (ii) the publications we consider for analysis span twenty years (1994–2014) (instead of fifteen, in [67]); (iii) we present growth patterns and publication trends of journals, conferences and book chapters, and also report briefly on the growth of patents (instead of only journals and conferences, in [67]); (iv) information analysis was carried out using XPath (XML Path Language) and XSLT 2.0 (EXtensible Stylesheet Language Transformations), both of which are well suited to XML publication data (MATLAB was used in [67]); our collocation metric also differs (Appendix B); (v) we consider past and current research in the field and put forward our opinion of emerging trends, recommendations and discipline hierarchies for the near future (which we believe is one step beyond the work by Lepora et al. [67]); and finally, (vi) Lepora et al. [67] focused on Biomimetics, while our work applies the information analysis methodology and network analysis to a totally different field, i.e. assistive technology for the visually impaired and blind people. We present an insight into the past, present and future of the field, which we believe makes the resulting objective statistical survey significant and different from all prior surveys in the field.

2 Background to assistive technology for the visually impaired and blind people

The field of assistive technology for the visually impaired and blind people is quite complex. It has many facets and can be approached from different points of view. Its scope extends from the physiological factors associated with vision loss, to the psychological and human factors influencing orientation, mobility and information access for an individual with visual impairments, to the technological aspects of developing rehabilitation devices (for mobility, wayfinding, object recognition, information access, entertainment, interaction, education), to medical interventions and prostheses including both current and cutting-edge research. Hence, it is very hard to characterize or capture the essence of this field within a single snapshot.

Since the field’s inception, believed to be about five decades ago [16], when serious development of electronic mobility aids began (1950s–1960s) and continued through the 1960s–1970s [42], one of the key focus areas has been the design and development of electronic travel aids for blind or elderly people. This has resulted in the continuous evolution of ETAs, beginning with the early sonar-based mobility aids such as the Russell Pathsounder (1965) and the Mowat Sensor (1977) [16], which functioned as obstacle detectors in the traveler’s immediate path. A string of prototypes based on similar operating principles followed thereafter; however, owing mostly to their disappointing performance, user acceptance was low. Soon computerized ETAs came into the picture: NavBelt [13, 100] was a wearable, computerized, sonar-equipped system that enabled blind users to safely walk through unknown, obstacle-cluttered environments. Numerous useful ETA prototypes have resulted from efforts to leverage technology for blind mobility; we focus here on representative works in the area. An ETA that has been a commercial success story is the award-winning primary mobility aid Ultracane [47], which combined the long cane with ultrasonic sensors. Over the last decade many advanced hand-held, wearable, and embedded assistive devices have been developed, but a comprehensive device still remains an elusive goal.

Technological innovations have minimized the form factor and paved the way for miniaturized, wearable ETAs, such as those with a system on a chip (SoC) and sensors mounted on a shirt pocket, attached to a garment or disguised as a brooch; a good example is the clear-path indicator for blind people by Jameson and Manduchi [54]. With the advent of Microsoft Kinect and depth sensing, its use as a sensing modality for indoor navigation by the blind has also been explored in many systems, including work by Bhowmick et al. [11] and Filipe et al. [33]. In recent years computer vision capabilities coupled with the freedom and flexibility of mobile computing have shown great promise and resulted in many interesting developments, such as the Crosswatch system [22, 23] for providing assistance to the blind at traffic intersections; the Roshni project [52, 53] for mobility in unknown indoor environments; ARIANNA for both indoor and outdoor navigation [24]; and the 3D smartphone-based aerial obstacle detection system that is discreet and also favours social integration [94]. These ETAs assist visually impaired and blind individuals in known or unknown, indoor or outdoor environments by providing rich environmental information, obstacle avoidance, object recognition, and navigation through the use of ultrasonic systems, GPS, cameras, infrared, laser, and mobile phone technology.

Another central theme of research over the decades has been the accessibility and inclusion of visually impaired and blind individuals in mainstream society, and various projects have been developed to empower them towards accessibility, inclusion and participation. Exciting and emerging topics associated with this field include sensory substitution, visual prostheses, visual neuroprostheses, brain plasticity, brain-computer interfaces (BCI), artificial vision, tactile human-machine interfaces (HMI), accessible computing, and human factors research. Since Bach-y-Rita’s pioneering work on tactile vision substitution systems (TVSS) to translate visual information into tactile cues [8], the theoretical neuro-physiological basis of which is discussed in [6], major advancements have followed from this formative work and have steered the field into many offshoot areas of research. SmartTouch [56] is a notable example of a tactile interface that uses electrodes to activate sensory nerves under the skin. Such systems (known as electrocutaneous display systems) have been developed to work on other skin receptors, like the Optacon for fingertips [72] and the tongue display unit (TDU) [61], which creates real-time tactile images using the tongue as an HMI. BrainPort V100 [3, 85] is a recent investigational, non-surgical visual prosthetic using tactile tongue stimulation to translate information. Critics, however, consider the device overrated, pointing to its awkward camera-tongue combination and its limited value in decoding the clutter and chaos of everyday life [111]. HamsaTouch [58] is a novel smartphone-based TVSS evincing the advances of current technology. These are some noteworthy examples of the latest generation of TVSSs.

Research on auditory vision substitution attempts to create a visual prosthesis through auditory vision substitution systems (AVSS). The auditory sense has great potential as an alternate medium for meeting the needs of people with visual disabilities. Sonification focuses on the use of non-speech audio to convey information or perceptualize data. Sonification techniques and Auditory Displays (AD) have evolved in relatively few years and are seen as a standard technique on par with visualization for presenting data in a variety of contexts, including the development of interfaces for visually impaired people [41]. Keller and Stevens [60] categorized sonification into three types: direct, indirect ecological, and indirect metaphorical. Direct sonification maps data to sounds associated with certain phenomena, e.g. the sound of crackling flames represents fire. Indirect ecological sonification uses related ecological sound associations, e.g. the sound of branches snapping represents a tornado. Finally, indirect metaphorical sonification uses analogical sound associations, e.g. the sound of a mosquito buzzing to represent a helicopter. Sonification and auditory displays have achieved significant progress in the areas of computer access and mobility aids. Screen readers represent a major advance in computer access for people with visual disabilities, although their efficacy is still limited in complex tasks such as tracking focus in a windowed graphical user interface (GUI), reading spreadsheets, or accessing dynamic website content [15, 42]. For visually impaired people, computer access is still a smaller need than avoiding obstacles and navigating to places. The vOICe vision system [4, 81] converts live camera feeds into ‘soundscapes’ or sound patterns representing specific environmental information. Participants equipped with the device were able to use auditory stimulation to obtain information necessary for locomotor guidance, localization, and recognition of objects in a 3-D environment. The vOICe for Android has been available since 2008. EyeMusic [1], shown in Fig. 1, is another AVSS that conveys visual information through pleasant musical notes via bone-conduction headphones. A comprehensive list of electronic travel aids, including ETAs using sound for sensory substitution, may be found in [28] and [112].

An emergent theme in sensory substitution research is the cortical visual prosthesis, sometimes called a brain implant, which provides a visual sense directly to the brain via electrodes in contact with the primary visual cortex. Brindley and Lewin [18] are credited with producing the first functional cortical implant (consisting of 80 electrodes) in a blind subject, causing her to experience sensations of light (‘phosphenes’) given the appropriate signals. This was the first major advancement in artificial vision. Dobelle experimented with surface-electrode implants (the Dobelle artificial vision system) [30, 31] and reinforced the idea that a functional implanted visual prosthesis (IVP) could be developed. These early developments laid the foundation for the current status of “bionic vision” through brain implant technology. Modern cortical implants, such as the Monash Vision Group’s ‘Gennaris’ device, designed to bypass the retina and optic nerve and wirelessly stimulate the visual cortex of the brain with electrical signals using arrays of micro-sized electrodes, are still in the preclinical phase.

In recent years, retinal implant technology, i.e. inducing visual perception by electrical stimulation of the retina with implantable micro-electrode arrays, has also emerged as a popular alternative. There are three retinal implant strategies: (i) epiretinal implants, placed on the surface of the retina, such as the EPIRET3 device [82, 92] and the Argus II Retinal Prosthesis System [2, 32, 49], both of which have already been implanted in tens of blind subjects in clinical trials and found safe and beneficial. Argus II is currently the first and only bionic vision device licensed for use anywhere in the world, having been approved by the U.S. Food and Drug Administration (FDA) in 2013; (ii) subretinal implants, placed between the retinal pigment epithelium (RPE) and the retina, such as the Alpha IMS subretinal implant developed by Retina Implant AG, Germany [103, 104], the IMI Retinal Implant System [46], and the Boston Retinal Implant Project wireless device [89], which have ongoing implantation clinical studies; and (iii) suprachoroidal implants, placed in a more surgically accessible location than the earlier two, i.e. between the vascular choroid and the outer sclera. An example is Bionic Vision Australia’s first-in-human trial of retinal implants in the suprachoroidal space [5, 96], which has opened a new door of possibilities. IVP principles of operation, the current state of research and development, and functional aspects are excellently covered in the extensive book edited by Dagnelie [26]. While a direct interface to the cortex or retina is a big advantage, it also makes this an invasive approach requiring surgery. Audio and tactile vision substitution systems are an inexpensive and biocompatible alternative to surgical implants.

Of late, the integration of mobile devices, smartphones and computer vision has fostered exciting themes and applications in areas of independent living, accessibility, print access, social interaction, and web accessibility. With the evolution of smartphones, many smartphone-based assistive technologies have been developed that offer a better quality of life in daily living activities. Some notable examples are Carnegie Mellon University’s smartphone-based Trinetra project [65, 84], which scans product barcodes to support an independent grocery-shopping experience for blind people; StopInfo [20], which provides detailed information about bus stops tailored to the needs of blind riders; and the display reader that uses computer vision to help blind users access household appliance displays [38]. Concerning access to print information, traditional Braille devices and low-vision aids (e.g. screen readers and magnifiers) are fairly well developed and have been the primary access routes. Audio descriptions (AD) are known to make the visual content of films, theater, opera and TV programmes [97] more enjoyable, interesting, and informative for blind people. Web accessibility for the visually impaired and blind people is more complex due to the Web’s rich multimedia content [78]. The inevitable transition of the Web to rich internet applications (RIA) introduces additional accessibility challenges: screen readers still struggle to deal with dynamic Web content (AJAX content updates, drop-down menus), form filling and automatic page refreshes [15]. Lately, smartphones and wearable devices are transforming the assistive technology and artificial vision landscape by boosting independence and helping blind people overcome accessibility barriers. Some remarkable developments include Project Ray [86], GeorgiePhone, MIT Fifth Sense, and the very promising portable device OrCam, which offer a full set of ground-breaking features to the visually impaired based on just voice and touch. Exciting features include reading printed text and outdoor signs; recognizing faces, products, currency notes, credit cards and colors; knowing what’s around; avoiding obstacles and hazards; identifying bus lines; and navigating busy places like airports; some of these are already available.

Fig. 2

Popular terms in Assistive Technology for the Visually Impaired and Blind people based on our constructed publication database. The word cloud shows the breadth and focus of the terms occurring in the titles and abstracts of published research. More frequent words appear with greater prominence

3 An information analysis methodology

In order to be as objective as possible in this task we chose a methodology in which the state of the art and future trends were surveyed using techniques from information analysis. An important aspect of this type of analysis of datasets is how the results are visualized, evaluated and interpreted. Data visualization, a modern branch of descriptive statistics, is a critical component in scientific research; one of its primary goals is to communicate information clearly and efficiently to users. Accordingly, we employed a variety of conventional and modern visualization techniques, including bar charts, word clouds, and co-occurrence graphs, in order to analyze and survey the research field from a broad range of perspectives.

Our first strategy was to identify scientific publications relevant to the field of “Assistive Technology for the Visually Impaired and Blind people” and retrieve or download the source metadata of all these publications in order to construct a publications database of the chosen field. Four major international scientific databases, namely Elsevier ScienceDirect, IEEE Xplore, ACM Digital Library, and Thomson Reuters Web of Science, were used as the sources for scholarly publications in our field of interest. A wide range of keywords and phrases were used as search terms, including vision rehabilitation, visual prosthesis, sensory substitution, wearable assistive devices, electronic travel aids, navigation systems, assistive technology solutions, rehabilitation technology, and vision substitution. The focus of this research study was mainly on the past, present and future of assistive technology for the visually impaired and blind people. We narrowed the search to include publications mainly from engineering, rehabilitation engineering, assistive and rehabilitation technologies, computer vision, sensor processing and technologies, cognitive science, cognitive psychology, vision prosthesis and multisensory research. The resulting publications database was then analyzed to infer interesting patterns and statistics concerning the main research areas and underlying themes. We identify the leading journals and conferences that publish and disseminate knowledge in the field, capture the growth of the research field, and identify the active research communities. Finally, we present our opinions and predictions of the trends expected to shape the near future of the field.
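As a minimal illustrative sketch of this construction step (not the actual pipeline, which stores the XML metadata returned by the four databases and is detailed in Appendix A), the following Python fragment shows the kind of keyword filter applied to candidate records; the record fields and the matching rule here are simplifying assumptions.

```python
from dataclasses import dataclass

# Hypothetical, simplified record structure; the real database keeps the full
# XML metadata (title, abstract, venue, keywords, etc.) returned by the sources.
@dataclass
class Publication:
    title: str
    abstract: str
    venue: str
    year: int

SEARCH_TERMS = [
    "vision rehabilitation", "visual prosthesis", "sensory substitution",
    "wearable assistive devices", "electronic travel aids",
    "navigation systems", "assistive technology solutions",
    "rehabilitation technology", "vision substitution",
]

def is_relevant(pub: Publication) -> bool:
    """Keep a record if any search phrase occurs in its title or abstract."""
    text = f"{pub.title} {pub.abstract}".lower()
    return any(term in text for term in SEARCH_TERMS)

# Example record that would be retained
pub = Publication(
    title="A wearable travel aid for the blind",
    abstract="We present a sensory substitution system for obstacle avoidance.",
    venue="ASSETS", year=2012,
)
print(is_relevant(pub))  # True
```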

The implementation details of this methodology are described in the two appendices: Database Construction in Appendix A and Database Analysis in Appendix B. In particular, we describe how the publication metadata was retrieved and downloaded from the four major international scientific databases so that it could later be analyzed on a standard personal desktop computer. We retrieved a total of 3010 scientific publications on research relevant to assistive technology for the visually impaired and blind people, covering the period from 1994 to 2014. The bar chart in Fig. 3 represents the overall split of publications. Our analysis consists of a series of tests, beginning with a survey of the relevant publications by topic based on the frequency of occurrence of terms and collocations appearing in the titles and abstracts of the published research. This is followed by an analysis of the year, journal and conference in which they were published, and an analysis of the publication data in order to detect the underlying community structures. Finally, we present our opinions on where the research field is heading in the future.

The information analysis methodology we adopted has limitations which must be noted at the outset. Firstly, although we believe that the database construction is systematic, it is possible for some publications that are relevant to the field of assistive technology for the visually impaired and blind people not to be captured in the database. This can happen in rare cases: if the range of keywords and phrases used as search terms does not match any of the metadata information of the publication; if the publication is incorrectly labelled; or if the descriptive stored metadata for a published document misleads the scientific database’s search engine; all of which are unlikely, but possible. Secondly, it is possible for a publication whose subject matter is not relevant to research on assistive technology for the visually impaired and blind people to still have its source metadata captured in the database. This is again unlikely, because the descriptive stored metadata for each published document includes the title, publication title, abstract and author-defined keywords along with other information, and a scientific database’s search engine matches each keyword individually as well as the whole phrase against the metadata information. However, none of the scientific databases can be said to be fully consistent and none of their search engines can be said to be free from errors; some degree of bias is present due to different referencing styles, typos, incorrectly recorded collaborations, citations, etc., as is reported in the analysis by Subelj et al. [106]. Hence, these limitations are unavoidable. We believe that the source metadata captured in our publications database is a reliable record of research undertaken in the field of assistive technology for the visually impaired and blind people during the last two decades.

Table 2 Top 100 most common topics in Assistive Technology for the Visually Impaired and Blind people based on our constructed publication database

4 Analyzing the corpus: what areas does assistive technology for the visually impaired and blind people encompass?

A survey of topics reveals the big picture and provides a general introduction to the research field. Hence, the first question for our analysis concerning assistive technology for the visually impaired and blind people was: What are the individual topics that constitute the field of research, and how do they reflect the focus and breadth of the field? In order to answer this question, we extracted topics based on their frequency of occurrence in the titles and abstracts of the publication database using a word-frequency counter written in EXtensible Stylesheet Language Transformations (XSLT), and listed them. We present a static word cloud visualization of these topics in Fig. 2, accompanied by the statistics for the 100 most common topics (summarized in Table 2).

A word cloud is a visual representation of text data that gives greater prominence to words that are more frequent in the source text. The words are placed in the cloud randomly, and the overall layout is determined by readability. More details on our word cloud construction and generation are discussed in Appendix B. Stop words and other non-informative words were excluded from the analysis.
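The word-frequency counter itself is written in XSLT (Appendix B); purely as an illustrative sketch of the same idea, assuming a simplified tokenizer and a small stop-word list, the counting step could look like the following in Python:

```python
import re
from collections import Counter

# Small illustrative stop-word list; the list actually used (Appendix B) is larger.
STOP_WORDS = {"the", "a", "an", "and", "of", "for", "in", "on", "to", "with",
              "is", "are", "this", "that", "we", "our"}

def topic_frequencies(documents):
    """Count informative word occurrences over title+abstract strings."""
    counts = Counter()
    for doc in documents:
        words = re.findall(r"[a-z]+", doc.lower())
        counts.update(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts

docs = [
    "A wearable travel aid for blind navigation",
    "Tactile sensory substitution for the visually impaired",
]
print(topic_frequencies(docs).most_common(5))
```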

Results Given that the database was constructed from scientific publications concerned with assistive technology for the visually impaired and blind people, it is not surprising to find that many of the leading concepts in the word cloud (Fig. 2) relate directly to the overall subject/research area. Furthermore, within the assistive technology research process, words such as ‘systems’, ‘design’, ‘technologies’, ‘method’, and ‘applications’ (Table 2) are popular. Early and contemporary research recognized and published in prominent engineering, neuroscience, computer science and medicine journals and proceedings has sought to address the issues of accessibility, independent travel, information access, and indoor and outdoor navigation for the visually impaired and blind people. The occurrence of words like ‘information’, ‘navigation’, ‘environment’, ‘objects’, ‘detection’, ‘aid’, ‘accessibility’, ‘access’, ‘indoor’, ‘obstacles’, ‘travel’, and ‘reading’ in the list reflects these efforts. The fields of sensor technology and computer vision have contributed to remarkable developments in assistive technology systems for the visually impaired and blind people [77]. Leading terms from these fields, such as ‘vision’, ‘sensory’, ‘learning’, ‘model’, ‘algorithm’, and ‘camera’, also appear. The prominent appearance of the words ‘haptic’, ‘tactile’, ‘audio’, ‘auditory’, and ‘speech’ is due to the fact that these are the most popular alternate feedback modalities, though secondary auditory pathways such as bone conduction [10] have also been employed for navigation and obstacle avoidance purposes. Good user-centered design matches technology to the user’s needs, and usability testing or evaluation plays a vital role in the design and development of assistive technology. It is well known that lack of suitable interfaces, poor performance, slow user feedback and longer training times contribute to a lack of adoption [77]. Hence the words ‘interface’, ‘performance’, ‘feedback’, and ‘training’ are prominent and appear in the list.

Other common topics from the word cloud again point to inspiration from advances in computer vision, machine learning, sensor processing and technology, human-computer interaction, augmented reality, etc. Some terms are adopted directly from cognitive psychology, such as ‘cognitive’, ‘map’, ‘space’, ‘spatial’, and ‘orientation’. This emphasis on cognitive psychology is consistent with the journals Neuroscience & Biobehavioral Reviews and ACM Transactions on Accessible Computing, which have leading publications with cognition as one aspect of their research focus, specifically the psychological processes relevant to rehabilitation of visually impaired people. In addition, a rich set of concepts from human-computer interaction (HCI) is also represented, including ‘wearable’, ‘tasks’, ‘model’, ‘service’, ‘interactive’, ‘camera’, ‘algorithm’, ‘display’, and ‘features’. These concepts are consistent with research on “Assistive Technology for the Visually Impaired and Blind people” being published in Computer Science and Human-Computer Interaction journals such as Accessibility and Computing, the newsletter of ACM SIGACCESS, and the International Journal of Human Computer Studies, and in conferences such as ASSETS, CHI, and ICCHP.

5 Analyzing the corpus: where is research for assistive technology for visually impaired and blind people published?

Fig. 3

Overall split of the 3010 publications considered for our analysis. The bar chart represents the proportions of publications relevant to ‘Assistive Technology for the Visually Impaired and Blind people’ that have been published in leading journals, conferences and book chapters and indexed in the four major scientific databases—Elsevier ScienceDirect, IEEE Xplore, ACM Digital Library, and Thomson Reuters Web of Science

Table 3 Leading journals in which research on Assistive Technology for Visually Impaired and Blind people is published

Secondly, we had a strategic question: Which journals and conferences publish research on Assistive Technology for the Visually Impaired and Blind people? Identifying the leading journals and conferences which publish or disseminate knowledge in the field is crucial. First-time authors need to know the leading journals and conferences in the field (so as to follow them or attempt to publish in them), and experienced authors also look for better publication opportunities; a sensitized research community will lead to more contributions and progress. To answer the above question, we queried the XML publication database we constructed at the beginning (Appendix A) with XPath expressions. We then explored further factors such as scholarly reputation as indicated by the impact factor (IF), the proportion of a journal’s total published content that focuses on research on assistive technology for the visually impaired, and the journal’s field of study.
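As a sketch of this querying step, assuming a hypothetical element layout for the constructed XML database (the actual schema is described in Appendix A), the leading journals can be ranked with an XPath expression evaluated through lxml:

```python
from collections import Counter
from lxml import etree

# Hypothetical layout: <publications><publication venue-type="journal">
#   <venue>...</venue><year>...</year> ... </publication></publications>
tree = etree.parse("publications.xml")

# Select the venue names of all journal publications and rank them.
journal_names = tree.xpath("//publication[@venue-type='journal']/venue/text()")
for name, count in Counter(journal_names).most_common(12):
    print(f"{count:4d}  {name}")
```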

Of the 3010 publications in our database, 1106 (36.74%) were published in journals, 1723 (57.24%) in conference proceedings, and 181 (6.01%) in book chapters. Altogether, 351 distinct journals, 673 distinct conferences, and 78 distinct book titles had at least one publication relevant to assistive technology for the visually impaired and blind people. The overall 57–36% split between conferences and journals is smaller than the approximate splits of 86–14% in IEEE Xplore and 85–15% in the ACM Digital Library, while the approximate journal-to-book-chapter split is 95–5% in Thomson Reuters Web of Science and 72–28% in Elsevier ScienceDirect. Based on these statistics, “Assistive Technology for the Visually Impaired and Blind people” has a higher proportion of research published in conferences than in journals and book chapters. However, it must be noted that there are considerable differences between the scientific databases in terms of coverage: IEEE Xplore includes high-quality technical literature in engineering and technology and thus contains a high number of relevant publications disseminated via conference proceedings, while Elsevier ScienceDirect includes medical research as well and disseminates via journal articles and book chapters. The proportion of relevant publications captured from all four scientific databases is shown visually in a bar chart (Fig. 3).

Results Based on the coverage of the journals in the field in our publication database, the twelve leading journals in “Assistive Technology for the Visually Impaired and Blind people” are listed in Table 3. They were selected based on the total number of relevant publications published by these journals in the field up to 2014. Additional details, including the 2013–14 SCI impact factor (IF), the number of relevant publications up to 2013 as compared to the total journal output, and the journal’s field of study, are also shown in the table. The content of these journals is split across the general areas of human-computer interaction and vision rehabilitation. At present ACM SIGACCESS Accessibility and Computing (SIGACCESS ACCESS COMPUT) publishes the largest number of publications per year relevant to research on assistive technology for the visually impaired and blind people, followed by ACM Transactions on Accessible Computing (TACCESS). ACM SIGACCESS Accessibility and Computing is the newsletter of ACM SIGACCESS, indexed by the ACM Digital Library as a periodical/journal. The journal International Congress Series has been discontinued since 2008, while Procedia Computer Science focuses entirely on publishing high-quality conference proceedings.

Based on our publication database, the twelve leading conferences publishing research on assistive technology for the visually impaired and blind people are listed in Table 4, along with the conference full names and the number of relevant publications up to 2014. They were selected based on the total number of relevant publications published by these conferences in the field. The top three conferences are the International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS) with a total of 263 publications, the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI) with 126 publications, and the Web for All (W4A) Conference with 65 publications. Clearly, the ASSETS conference dominates the research output produced in this field, contributing about 15% of all conference publications relevant to the field. The W4A Conference focuses on wearable and mobile assistive technologies as well as Web accessibility for the blind. Six of the twelve leading conferences are ACM-sponsored and five are IEEE-sponsored, which also indicates the emphasis on applications of assistive technology for the visually impaired in the fields of engineering, technology and computing. The conferences listed have topics of interest such as assistive and rehabilitative technology, technologies for wellbeing, technologies which improve living environments, accessible computing, sensors, wearable computing, etc., and have highly competitive publication requirements, with acceptance rates of 24% for ASSETS 2015, 35% for W4A 2015, 50% for ICCHP 2015, 28% for MobileHCI 2014, and 19% for ISWC 2016.

Table 4 Leading conferences in which research on Assistive Technology for Visually Impaired and Blind people is published
Fig. 4

Growth of research in Assistive Technology for the Visually Impaired and Blind people based on the publication data retrieved during our analysis. The stacked bar chart plots the number of scientific publications published each year in the specified field starting from 1994. Proportions of journal, conference, and book chapter publications are indicated by different colors

6 How rapidly is assistive technology for the visually impaired and blind people expanding as a research field?

Fig. 5

Growth of journals and growth of conferences based on our constructed publication database. The plot shows the number of publications relevant to Assistive Technology for the Visually Impaired and Blind people published each year in leading journals (left, from year 2000) and in leading conferences (right, from year 1995). The legend gives abbreviations for these journals and conferences, with a key for the complete names provided in Tables 3 and 4

Our next question concerns: How rapidly is Assistive Technology for the Visually Impaired and Blind people expanding as a research field? Adding up the number of publications each year in our database, we note that there has been remarkable growth in the field over the last two decades. The number of research publications per year roughly doubled every 4 years (Fig. 4). From a fledgling field in the mid-1990s with fewer than 50 publications per year, Assistive Technology for the Visually Impaired and Blind people has expanded to become a mature field, reaching about 400 scientific publications per year in 2014. Over the last two decades the growth in this field has outpaced the growth of modern science as a whole, which is growing at 8–9% per year (doubling every 9 years) [14]. Today, the field is evolving swiftly: reliable and sophisticated computer vision algorithms can execute in real time on embedded computers [109]; smartphones come to the aid of the visually impaired and blind people (reducing the usual stigma attached to assistive devices); the miniaturization of processors and sensors would enable new wearable devices that may be embedded in clothing [112]; and advances in bionic vision offer hope for partial restoration of visual function in the near future [69]. This leads us to believe that the remarkable growth of the field is far from saturation. It is expected that rapid advances in computer vision, (wearable) sensors and systems, multisensory research and bionic vision will continue to drive this field further [69, 109].
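For reference, a doubling time of \(T\) years corresponds to an annual growth rate \(r\) given by \((1+r)^T = 2\), i.e. \(r = 2^{1/T} - 1\): a doubling time of 9 years gives \(r \approx 8\%\), consistent with the rate reported for science as a whole [14], while a doubling time of 4 years corresponds to \(r \approx 19\%\) per year.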

Results Based on this initial analysis, it is safe to say that there has been a remarkable growth of interest in research on assistive technology for the visually impaired and blind people. Leading developments in this field have contributed to a greater understanding of the challenges and needs of the target community and laid the foundations for broader areas of present and future research. The changing research landscape is also evident today as the field is driven by diverse experts, practitioners, researchers, research groups and departments in leading universities and institutes. The expansion of research interest and developments in this field is also reflected in the increased number of publications within individual journals and conferences (Fig. 5). To a great extent this can be attributed to some relatively new journals and conferences in this area that have flourished within a short span of years, e.g. ACM Transactions on Accessible Computing (2008), Procedia Computer Science (2009), ACM Transactions on Applied Perception (2004), and the conference proceedings of the International Conference on Pervasive Technologies Related to Assistive Environments (PETRA) (2008), the International Convention on Rehabilitation Engineering and Assistive Technology (i-CREATe) (2007), and the Web for All Conference (W4A) (2004). In other cases, this growth can be related to a change of focus of well-established peer-reviewed journals and conferences: journals such as the International Journal of Human-Computer Studies (1994) and Personal and Ubiquitous Computing (1997), and conferences such as IEEE Engineering in Medicine and Biology Society (EMBC) (1988), the IEEE International Conference on Systems, Man and Cybernetics (SMC) (1988), and the IEEE International Conference on Multimedia and Expo (ICME) (2000). The steady growth of the field appears to have been stunted in 2011; since then there has been a visible dip in journal output, although conference output is unaffected.

Figure 5 (left) reveals that ACM SIGACCESS Accessibility and Computing (SIGACCESS ACCESS COMPUT), ACM Transactions on Accessible Computing (TACCESS) and Procedia Computer Science (PROCEDIA COMPUT SCI) have been publishing consistently over the last decade: there is a dip in TACCESS output after 2010 followed by a recovery, SIGACCESS ACCESS COMPUT and the International Journal of Human Computer Studies (INT J HUM-COMPUT ST) have been consistent, while PROCEDIA COMPUT SCI shows irregular growth. The steady expansion of the International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI) and the Web for All Conference (W4A) is also clearly evident in Fig. 5 (right), compared with the modest but appreciable growth of other conferences.

As developments in wearable computers, sensor processing and visual prostheses are translated into technology and commercialized, they have the potential to make long-term social and economic impacts on the community. The WIPO Patent Landscape Report [102] identified a total of 35,251 relevant patent documents on hearing and vision impairment technologies, 30% of which were filed between 2007 and 2011. This remarkable growth in patents on assistive technology for the visually impaired and blind individuals reflects inventors’ confidence that a product has a fair chance of bringing profit, as patents can be expensive to obtain and maintain. Recent patenting and innovation trends show that annual growth rates are highest in areas including ‘voice and sound control associated with vision assistance’ (22%) and ‘sensor technology adapted for vision impaired persons’ (10%), indicating that rapid development of new technologies derived from these areas is taking place. We believe that an additional analysis of corporate developments and patents would be a useful and important study.

7 Are there research communities within the field of assistive technology for the visually impaired and blind people?

One of the major questions in this analysis relates to the inter-connectedness of assistive technology for the visually impaired and blind people as a research field. A question of interest to researchers is whether the field functions as a coherent discipline or unites many disparate fields into a coherent whole.

Fig. 6

A word co-occurrence graph showing connectedness of popular terms in Assistive Technology for the Visually Impaired and Blind people research. Each node is colorized based on its Modularity Class. Four communities are evident from colours Green, Purple, Blue, and Red

We note that there are several research groups and academic networks in assistive technology research, many of which have an active interest in the visually impaired community. To determine whether there are sub-disciplines and communities, and what these are, we applied a network theory analysis to the common nomenclature found in the titles and abstracts of published research. The result of the analysis is shown in Fig. 6. Two nodes are connected in the network if they co-occur within the database, with the size of each node proportional to its degree and the thickness of the edges proportional to the Log-likelihood Ratio scores. Modularity analysis detects the underlying community structures within the word co-occurrence graph. The coloring and positioning of the nodes in the co-occurrence graph are part of further analysis that we describe next.

Table 5 Top 50 most common word pairs

To address the question, we construct a co-occurrence graph, which is very helpful for a visual inspection of the connectedness of fields in research. Our corpus at this stage consists of titles and abstracts extracted from published research, so the word co-occurrence graph is given by all significant word co-occurrences found within these two contextual units of information; this resulted in an undirected graph with 135 nodes and 303 edges. These nodes represent more terms than those listed in Table 2. The connectedness between word pairs is quantified by their co-occurrence within the contextual units (title and abstract), defined by the log-likelihood ratio (LLR). The Log-likelihood Ratio examines the difference between alternate explanations (\(H_0\) and \(H_1\)) for the occurrence frequency of a word pair \(w_1\) \(w_2\): under the null hypothesis \(H_0\), the words occur independently; under the alternate hypothesis \(H_1\), they do not (refer to Appendix B). The frequent terms are interpreted as nodes of the graph; the LLR score defines the strength of the connections between the nodes.
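As a concrete sketch of the collocation statistic (our exact metric is defined in Appendix B and may differ in detail), a standard Dunning-style LLR can be computed from a 2 × 2 contingency table of co-occurrence counts:

```python
from math import log

def log_likelihood_ratio(k11, k12, k21, k22):
    """G-test form of the LLR for a word pair (w1, w2).
    k11: contextual units containing both w1 and w2
    k12: units containing w1 but not w2
    k21: units containing w2 but not w1
    k22: units containing neither
    """
    n = k11 + k12 + k21 + k22
    observed = [k11, k12, k21, k22]
    expected = [
        (k11 + k12) * (k11 + k21) / n,
        (k11 + k12) * (k12 + k22) / n,
        (k21 + k22) * (k11 + k21) / n,
        (k21 + k22) * (k12 + k22) / n,
    ]
    # Terms with a zero observed count contribute nothing in the limit.
    return 2 * sum(o * log(o / e) for o, e in zip(observed, expected) if o > 0)

# Illustrative (made-up) counts for a strongly associated pair
print(round(log_likelihood_ratio(120, 40, 25, 2825), 1))
```

Under \(H_0\) (independence) the observed counts match the expected counts and the score is near zero; strongly associated pairs receive large scores and hence thick edges in Fig. 6.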

In order to highlight the research areas, the top 50 most common word pairs are presented in Table 5. Some of these word pairs, e.g. “touch screen” and “stereo earphones”, do not really reveal the underlying research areas well, but they do serve to interconnect other terms and contribute to the community structure. Many of the other word pairs reveal interesting underlying themes and sub-areas. For example, “sensory substitution” is a technique for replacing a disabled sensory input with other senses that are still intact; sensory substitution research studies different non-surgical visual prostheses and is known to specialize in “object recognition”, “obstacle avoidance”, “vision rehabilitation”, etc., themes which are captured well in Table 5. There are also specific issues for visually impaired people, such as “web accessibility” for blind web users and ensuring that the blind are able to sense and walk within “pedestrian crosswalks”, as well as multidisciplinary research areas like “bionic eyeglass” or bionic eye development and “vision rehabilitation” efforts.

These related terms are positioned together in a word co-occurrence graph for network analysis using the Gephi visualization tool [9]. First, the Force Atlas algorithm was used to produce a layout with the strongly connected nodes pulled together and the weakly connected nodes drawn apart. Consequently, the terms that tend to co-occur frequently within the contextual units are positioned closely on the graph, while terms that co-occur infrequently are positioned apart. The layout result is seen in Fig. 6. The next step was to perform a modularity-based analysis of the network in order to discover and study community structures, i.e. natural divisions of nodes within which connections are dense but between which connections are sparser. Gephi implements a popular modularity optimization method, the Louvain method, for community detection. In practice, modularity score (Q) values >0.3 are known to indicate significant community structures in the graph [35].
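The layout and community detection here were done in Gephi [9]; an equivalent scripted sketch, assuming a small made-up weighted edge list (term pairs with their LLR scores) and networkx ≥ 2.8 for its Louvain implementation, could look like this:

```python
import networkx as nx

# Hypothetical weighted edges: (term1, term2, LLR score)
edges = [
    ("sensory", "substitution", 310.2),
    ("obstacle", "avoidance", 142.7),
    ("web", "accessibility", 98.4),
    ("retinal", "prosthesis", 121.9),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Louvain community detection and the modularity score Q; values > 0.3 are
# commonly taken to indicate significant community structure [35].
communities = nx.community.louvain_communities(G, weight="weight", seed=1)
Q = nx.community.modularity(G, communities, weight="weight")
print(f"{len(communities)} communities, Q = {Q:.3f}")
```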

Results The modularity analysis applied to our graph resulted in a modularity score (Q) of 0.625, which implies that there are distinct communities. Four communities (C = 4) are evident from the modularity analysis, and Fig. 6 depicts each community node with a unique color designated by the community detection algorithm. Considering the labels, connections, and color codes as cues, we have the following interpretation of the four communities:

1. Multisensory Research (Green): A strong community with emphasis on several key areas such as sensory substitution and brain processes, computer vision, image processing and sensor signal processing, visual prostheses, ophthalmology and neuroscience, etc. Strong connections are seen within the community, such as Computer-Vision, Low-Vision, Visual Impairment and Visual-Acuity, apart from regular connections among other nodes. Methods include computer vision, ultrasonic sensors, tactile substitution via skin receptors, auditory substitution via the ears, vibro-tactile substitution, electrical stimulation (e.g. of the visual cortex [31], of the forehead [57], of the tongue [3]), medical or surgical approaches, etc. Applications include wearable assistive devices, mobility aids and electronic travel aids, visual prosthetic devices, brain implants and bionic eyes, e.g. computer vision-based cortical implants for bionic vision [70, 71] and retinal implants.

2. Accessible Content Processing (Purple): A community covering pressing areas such as human-computer interaction (HCI), Braille-related technologies, printed information access (such as textbooks, street signs, product information, bar codes), optical character recognition (OCR) technology, speech synthesis technology, sign detection and recognition, web accessibility, etc. Methods include tactile access, sonification of data (e.g. use of non-speech sound for accessing geo-referenced data [115]), audio transcriptions of printed information, and audio browsers to access web sites and web content [93]. Applications include reading devices, smartphone apps (e.g. the Trinetra project [65], or the bar code reading app for the blind from the Smith-Kettlewell Eye Research Institute [108]), low vision aids, screen readers, tactile touchscreens and tactile maps.

3. Accessible User Interface Design (Blue): We interpret this small community in the co-occurrence graph as an active community involved in areas such as human-computer interaction (HCI), interface design, user interface modelling, Braille technology, ubiquitous computing, human factors and ergonomics research, etc. Methods include user-centered design, auditory interaction and feedback, multiple accessibility features, and design and usability evaluations. Applications include accessible user interface designs, frameworks for dual interfaces, non-visual interfaces to ubiquitous services (such as ATMs, kiosks, home appliances), accessible games for the visually impaired [39], etc. An interesting application of an accessible user interface on mainstream technology is the EasySnap application for blind photography [55].

4. Mobility and Accessible Environments Research (Red): A wide community that is actively involved in research areas covering human-computer interaction (HCI), electronic travel aids, rehabilitation robotics, obstacle detection and avoidance, indoor and outdoor navigation, wayfinding, cognitive psychology, augmented reality (AR), etc. Methods include pattern recognition and machine learning, computer vision methods, ultrasonic audio feedback or echolocation, sensor-based techniques, global positioning system (GPS)-based navigation, and spatial cognitive mapping and orientation. Applications include (wearable) electronic travel aids, mobility aids, smartphone apps, accessible public transport [12], navigation technologies, accessible airports and accessible offices (as proposed by the ARIANNA navigation system when GPS information is unavailable [24]), augmented reality systems, etc.

Overall, assistive technology for the visually impaired and blind people remains a very coherent discipline with growth in many sub-disciplines. Among these sub-disciplines, the field does form distinct communities, as confirmed by the observation that each of these communities has identifiable themes.

8 Where is assistive technology for visually impaired and blind people headed in the future?

Finally, we aim to identify the emerging trends and discipline hierarchies that may influence the future direction of research in this field. This section covers recent trends, together with our interpretation of developments and recommendations, opportunities and challenges that will shape the near future of this field (Table 6).

Although the advancement of technology is evident, only a limited number of assistive technology solutions have emerged that make a social or economic impact and improve quality of life. Fundamental challenges, e.g. enabling visually impaired individuals and blind people to localize themselves in an unknown indoor environment, retrieve lost or fallen objects, locate a person of interest, read notices, enjoy inclusion in society, or evacuate independently from a building in an emergency, are still to be thoroughly addressed in a practical and cost-effective manner. Researchers and technologists undoubtedly play a critical role in meeting the target community’s needs, but it is important to first understand the nature, scope, complexity, and diversity of the challenges faced by visually impaired people in order to come up with an acceptable solution. The challenge is further complicated because the target community is quite diverse in terms of degree of visual impairment, age, gender, ethnicity, and abilities. The causes of visual impairment are also diverse, including uncorrected refractive errors, cataract, glaucoma, corneal opacities, age-related macular degeneration (AMD), diabetic retinopathy, childhood blindness, trachoma, etc. [29, 36, 113].

Table 6 Summary of our opinion of future trends and discipline hierarchies for the near future in Assistive Technology for the Visually Impaired and Blind people research

Sensory substitution has enormous potential for the visual rehabilitation of visually impaired and blind individuals [75]. Sensory substitution devices have been under development for many years in controlled laboratory settings but are yet to be adopted for widespread use. Their adoption depends on deciding factors such as the performance of visual processing algorithms, interface, apparatus size, cost, amount of training, portability, etc. [28, 77]. In recent years retinal implant technology or retinal prosthesis, i.e. the development of “bionic eyes”, has been seen as a ground-breaking strategy for restoring some functional vision to visually impaired patients and thereby improving their independence and quality of life. Retinal prostheses in the suprachoroidal space are expected to drive research in the near future because they are less invasive and therefore a viable clinical option. While cortical visual prostheses (Sect. 2) appear to remain mostly experimental and uncertain in the risk versus benefit balance, retinal prostheses appear to show more promise in accelerating the development of bionic eyes. To name a few examples, Bio-Retina is an ultra-small implantable chip that attaches to the retina and is powered by its own nanoelectrodes and photosensors; PRIMA [73, 80] is another promising miniaturized retinal prosthesis. A small number of private companies and research groups in laboratories are reported to be at an advanced phase of clinical trials [27, 75]. It remains to be seen whether retinal prostheses overcome the clinical, surgical, and engineering issues and finally transition from laboratories to industry. The most notable claim so far is of a patient who was able to discern the direction of white lines on a computer screen with Argus II [74]. Despite the accomplishments made since Humayun et al. [48] pioneered retinal implant technology and Sakaguchi et al. [95] and Zhou et al. [116] independently pioneered early engineering on suprachoroidal devices, the field of bionic vision is still considered a fledgling yet promising field. The field is wide open and currently garners a lot of interest, effort and funding [27], and research trends reveal strong interest in it in the near future. There are also sobering challenges: the surgical techniques for retinal prosthetic implantation are technically demanding and take long hours, signal and image processing challenges are significant, the hardware technology is new and experimental, adverse biocompatibility issues such as risks of complications or infection still remain, and conclusive outcomes from clinical trials are still awaited. Recent advances also point to several research groups actively pursuing prostheses based on stimulation of the optic nerve, thalamus, and lateral geniculate nucleus (LGN), as these provide alternative stimulation targets [69, 88].

Computer vision integrated with assistive technology has seen considerable success recently in the development of appropriate devices and interfaces for visually impaired and blind individuals. Image description has been a long-standing challenge in computer vision; it is considered the hallmark of image understanding. A good image description is often said to “paint a picture in our mind’s eye” [21]. A blind guide describes the scene verbally (in natural language), e.g. “The escalator is directly in front of you about 10 feet away. You’ll hear it as you approach”. A meaningful description provides the spatial awareness that is crucial for the formation or reinforcement of cognitive maps in the visually impaired or blind person. Mental imagery, or the use of diagrams and pictorial representations held in human memory, is known to play a critical role in problem solving [66]. Physiologists have demonstrated that visually impaired and blind people have the capability to form cognitive maps of spaces, and this capability is present to some extent even in the congenitally blind [50, 83]. Although non-visual senses are ‘inferior’ at encoding spatial information, studies [83, 110] confirm that visually impaired and blind people have the potential to acquire concepts and develop map-like representations of space; this form of encoding, however, requires more cognitive effort. Whether the representation is equivalent to that of sighted people is addressed by three broad theoretical positions: (i) the ‘deficiency’ theory, (ii) the ‘inefficiency’ theory, and (iii) the ‘difference’ theory [34].

Kesavan and Giudice [62] were among the earliest to note that automatic and meaningful natural language (NL) image description, provided through a smartphone or other portable camera-equipped mobile device, could assist blind people in knowing the contents of an indoor scene (e.g., room structure, furniture, landmarks) and support independent and efficient navigation of the space based on these descriptions. Computer vision and image understanding have taken remarkable steps forward; recent convolutional neural network based software such as [59, 101] can accurately describe scenes shown in photos in natural language. This opens up a host of possibilities: implementing deep learning in mobile contexts can be a game-changing combination and could lead to a variety of mobile sensing tasks, such as scene, activity, emotion, and speaker recognition, powered by deep learning. Lane and Georgiev’s exploratory study [64] is an early step in this direction; the authors applied deep neural networks (DNNs) to inference tasks such as activity and emotion recognition without overburdening the mobile hardware. There are also non-neural-network approaches, such as kernel canonical correlation analysis (KCCA)-based [45] and text-mined knowledge-based [63] algorithms for mapping images and videos, respectively, to meaningful natural language descriptions. Hands-free devices such as Xowi, which engage the user in pure voice interactions, have the potential to simplify many basic activities for visually impaired and blind people. Image/scene description in natural language (i.e. sentence-based image description) is the meeting point between state-of-the-art algorithms in mobile computer vision and off-the-shelf hardware (consumer smartphones and mobile devices), and it promises to be a technology solution that helps visually impaired people ambulate independently, whether navigating online or in the real world.
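To make the idea concrete, the minimal sketch below shows how a camera frame can be turned into a spoken-ready natural language sentence using an openly available image captioning model. This is an illustrative assumption on our part: it uses the public BLIP captioning checkpoint via the Hugging Face transformers library, not the specific systems cited above [59, 101], and the image path is hypothetical.

```python
# Illustrative sketch: natural-language scene description from a single image.
# Assumes the Hugging Face `transformers` library and the public BLIP
# captioning checkpoint; this is not the software cited in [59, 101].
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def describe_scene(image_path: str) -> str:
    """Return a short natural-language description of the scene in the image."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output_ids[0], skip_special_tokens=True)

# Example with a hypothetical frame captured by a smartphone camera.
print(describe_scene("indoor_scene.jpg"))
```

In a deployed assistive pipeline, the returned sentence would typically be passed to the device’s text-to-speech engine, closing the loop from camera to spoken description.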

Navigation and wayfinding, particularly in unknown indoor scenarios, is a long-standing challenge, and there is a lack of established technologies capable of filling this need. It is dangerous for visually impaired or blind individuals to be in an unfamiliar building, e.g. a hotel or an airport, with no awareness of where the exits and elevators are or whether there is a wet floor. The NPR Labs Nipper One is a telecommunications device designed to translate emergency radio broadcasts (e.g. in the event of a fire) into on-screen text and flashing alert lights for the hearing impaired; the case is more complicated for individuals with visual impairments or blindness. Wearable and unobtrusive indoor wayfinding systems such as the smartphone-based system by Jain et al. [52, 53] (Roshni project), which downloads the floor plan of the building, localizes and tracks the user, and provides step-by-step directions to the destination from any location inside the building using voice messages and vibration, are a small step forward in this direction. Such a system could safely assist visually impaired and blind individuals in scenarios requiring emergency evacuation from a building. ARIANNA [24] is another commendable smartphone-based attempt to tackle the problem, but its downside is that it requires retrofitting the whole floor layout with colored paths. Smartphones are expected to play a crucial role in the daily lives of individuals with impairments in the future, because many accessibility and social interaction barriers still remain. ETAs must be unobtrusive, light, discreet and effective if they are to be adopted; they should require minimal training time and give fast user feedback. These challenges are sure to push the boundaries of sensory substitution research even further. Aerial or head-height obstacles (e.g. overhanging signs, branches, open awnings) pose another long-standing challenge, as they are known to cause head injuries to the visually impaired. Saez et al. [94] have tackled the problem with 3D smartphones in the context of assisting the visually impaired, but 3D smartphones are very expensive and have yet to reach the masses. Perceiving depth through the monocular vision provided by consumer smartphones and camera-equipped mobile devices, for the purpose of detecting and avoiding aerial or frontal obstacles (as in miniature unmanned aerial vehicles), is another direction of research that can empower visually impaired and blind users to be aware of surrounding objects and avoid collisions.
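As a rough illustration of the monocular-depth direction mentioned above, the sketch below flags a possible head-height obstacle from a single camera frame using a publicly available monocular depth estimator (MiDaS, loaded via torch.hub). This is an assumption for illustration only, not the 3-D smartphone approach of Saez et al. [94]; the image region and alert threshold are arbitrary placeholders.

```python
# Illustrative sketch: warning about close frontal/head-height obstacles from
# one camera frame using monocular depth estimation (public MiDaS model).
# The region of interest and threshold are hypothetical, untuned values.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

def frontal_obstacle_alert(frame_bgr, closeness_threshold=0.8):
    """Return True if the upper-central image region appears very close."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        depth = midas(transform(rgb)).squeeze()          # relative inverse depth (higher = closer)
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-6)
    h, w = depth.shape
    head_region = depth[: h // 2, w // 3 : 2 * w // 3]   # upper-central band, roughly head height
    return float(head_region.mean()) > closeness_threshold

frame = cv2.imread("camera_frame.jpg")                    # hypothetical input frame
if frontal_obstacle_alert(frame):
    print("Warning: possible head-height obstacle ahead")
```

In practice such an alert would be delivered as audio or vibration feedback, and the threshold would need calibration per device, since monocular estimates are relative rather than metric depths.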

9 Summary of main results

By applying an information analysis methodology and a set of transformations to our database (in XML) of publications on assistive technology for the visually impaired and blind people over the last two decades, we were able to answer a set of strategic questions concerning the past, present and future of the research field.

First, what areas does Assistive Technology for Visually Impaired and Blind people encompass? From a total of 3010 scholarly publications (relevant to the field of Assistive Technology for the VI and Blind people) retrieved and downloaded from four major international scientific databases, we have extracted the 100 most frequent and informative topics in order to provide a bird’s eye view of the subject. The topics reveal leading concepts and terminologies, and the impact and contribution of one field on another. The concepts are consistent with current research and also reveal the underlying thrust of the field.
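Purely as a toy illustration of this kind of term ranking (the actual pipeline applied to our XML database is described in the methodology above, and differs from this), one could score frequent and informative terms across publication abstracts with a TF-IDF weighting, as sketched below with scikit-learn; the abstracts shown are placeholders.

```python
# Toy illustration only: ranking frequent and informative terms across
# publication abstracts with TF-IDF. Not the exact pipeline used in this work.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "A wearable electronic travel aid for obstacle avoidance ...",
    "Tactile display interface for braille reading on mobile devices ...",
    # ... one entry per publication record
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(abstracts)

# Average TF-IDF weight per term across the corpus, highest first.
scores = np.asarray(tfidf.mean(axis=0)).ravel()
terms = vectorizer.get_feature_names_out()
top_terms = [terms[i] for i in scores.argsort()[::-1][:100]]
print(top_terms[:10])
```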

Second, in which journals and conferences is research on assistive technology for the visually impaired and blind people published? We identified the leading (top 12) journals and conferences which publish and disseminate knowledge in the field. We provide additional information, such as the proportion of content published in journals and conferences by year, along with their reputations and their coverage in scientific databases. In this field a higher proportion of research is published in conferences than in journals and book chapters.

Third, how rapidly is assistive technology for the visually impaired and blind people expanding as a research field? We have captured interesting patterns of growth of the field over the last two decades (1994–2014). The growth of this field has outpaced the growth of modern science during the same period, and we speculate on the causes of this expansion of research interest. The publication trends of each of the leading journals and conferences over the last two decades are presented and reveal that the field is far from saturation. The remarkable growth of patents in the field is also reported.

Fourth, what are the research communities within the field of Assistive Technology for VI and Blind people? We applied a network theory analysis of the common nomenclature to identify the top 50 most common word pairs (collocations). Word pairs reveal community structure as well as the underlying themes and subdisciplines. We quantified the connectedness between word pairs by the log-likelihood ratio. Four distinct communities are each depicted with a unique colour: multisensory research, accessible content processing, accessible user interface design, and mobility and accessible environments research. The detection of communities and the interpretation of the results are presented. We note that though there is growth in many sub-disciplines, the field remains a very coherent discipline.
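A minimal sketch of this kind of collocation scoring is shown below using NLTK’s log-likelihood ratio measure; NLTK is our assumption for illustration, and the tokenised corpus is a placeholder rather than the actual publication database.

```python
# Minimal sketch: top word pairs (collocations) scored by log-likelihood ratio.
# NLTK is used here purely for illustration; the token list is a placeholder.
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

tokens = ["sensory", "substitution", "device", "sensory", "substitution",
          "tactile", "display", "indoor", "navigation", "tactile", "display"]

bigram_measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(2)                 # ignore pairs seen fewer than twice
top_pairs = finder.nbest(bigram_measures.likelihood_ratio, 50)
print(top_pairs)
```

The resulting ranked pairs can then be treated as nodes and weighted edges of a co-occurrence network, from which community detection yields groupings of the kind listed above.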

Finally, where is assistive technology for VI and blind people headed? Based on the analysis of past developments and of the current research characterized by the sub-disciplines and communities obtained, we identified the major emerging trends and discipline hierarchies, together with our interpretation of the opportunities and challenges that will shape the near future of this field.

10 Conclusion

The needs of the visually impaired and blind people are greater than ever before. Assistive technology, as a maturing field, will continue to gain prominence and to impact the lives of visually impaired and blind individuals (and the elderly) in ways not previously possible. The increasing functionality of mainstream mobile technologies, advances in computer vision algorithms, the miniaturization of electronic devices, and cutting-edge new medical interventions are expected to drive the field further towards the challenge, and the reality, of creating successful assistive technology.