1 Introduction

Nowadays one of the most common types of vision impairment in children with other disabling conditions is cerebral visual impairment (CVI), which may coexist with other causes of visual impairment, such as ocular and ocular motor disorders [1]. Coupled impairments very often originate already in prenatal life and result from hypoxic-ischaemic brain injury, chromosomal disorders or adverse influences acting on the child during the fetal period. Complex disability may also be caused by meningitis, infections of the central nervous system or epilepsy occurring in the postnatal period. The pattern of the cerebral injury depends on the maturity of the brain at the time of damage.

Children with complex disability require systematic brain stimulation. For this reason, their attention should be engaged with many visual, auditory and tactile stimuli. The brain, receiving information from the various senses, organizes it through recognition, analysis and integration. However, the first important step is to learn to what extent children with impairments are able to use vision and for how long they can keep their attention on an environmental object. This knowledge may significantly improve the work of educational teams and therapists.

Even weak visual abilities may serve as a starting point for vision enhancement, entailing overall intellectual development. Thus, children with complex disability including visual impairment should be assessed in terms of functional vision and subsequently undergo vision therapy. The aim of vision therapy is to stimulate vision by providing conditions favorable for viewing (i.e. presenting objects that a child is able to notice) and by stimulating the development of basic skills through exercises of the visual motor system.

Physiologically, vision may be divided into the following elements: light, color, contrast, movement and stereopsis. Assessment of functional vision relies on determining the conditions in which a child is able to see and perceive objects [5]. During tests, the size, contrast and distance of an object are taken into account to check the following visual functions:

  • Fixation—the ability to keep the eyes on a visual stimulus,

  • Eyeball motility—the ability to trace a moving stimulus with the eyes,

  • Functional visual acuity—the distance from which a child recognizes a character of a given size. Visual acuity is a quantitative measure of the ability to identify black symbols on a white background at a standardized distance, as the size of the symbols varies,

  • Contrast sensitivity—the impact of the presented level of contrast on a child’s visual ability,

  • Field of vision—the area within which a child is able to see a presented object. Observing a child’s responses to an environmental object may reveal interesting information about the impairments.

Such an assessment is challenging in the case of young non-verbal children with CVI, since behavioral ability, sensitivity to environmental changes and eye movement disorders must be taken into account. Thus, the determination of CVI, influenced by different cognitive and motor levels, may be a difficult task, and flexibility during the evaluation is very important.

2 Eye Tracking Support

As a consequence of the reasons presented above, two main challenges in the treatment of children with complex impairments come to the fore. First, it is very important to determine whether a child is able to see a visual stimulus. Second, therapy requires stimuli that are interactive, i.e. that change in response to the child’s reactions. In both areas, eye tracking with a specialized device seems a promising solution.

Because an eye tracker is able to track a gaze point (the place where a child is looking), the gathered data may be used to analyze children’s reactions to visual stimuli [4, 7]. Thus, in the diagnosis phase, it is possible to present children with several stimuli differing in color, contrast and speed and to use an eye tracker to check which of the stimuli is the most interesting for them. During the therapy phase, children may work with a computer display and an application that is adapted to their abilities and actively responds to their eye movements. Another important advantage of this method is that all children’s sessions may be stored and later analyzed offline by therapists.

It is worth noticing that there are different types of eye trackers. A head-mounted eye tracker allows participants to move their head freely during recordings, yet it may not fit a child’s head and may disturb the child. This problem does not occur with remote eye trackers, but it must be remembered that turning the head away from a remote eye tracker causes loss of the eye movement signal. These problems are magnified for children with impairments such as cerebral palsy, who may lack the ability to control head movements.

The research discussed in this paper was devoted to the development of a workspace that may support the efforts of therapists working to improve the quality of life of disabled children. For this purpose, a group of therapists from an association for children with developmental disabilities was involved in elaborating an appropriate set of stimulations useful in their daily work. Initial experiments showed that the workspace may be helpful in both the diagnosis and therapy phases.

3 Workspace Description

The workspace created for the experiments with children is presented in Fig. 1. It consists of one big display (the ‘stimulation screen’), which is used to present stimulation to children, and an additional display for the operator/therapist, not visible to the children, which shows additional information including the child’s gaze position and the location of the eyes in the eye tracker’s camera. This display is called the ‘control screen’. The operator is able to invoke stimuli and observe the children’s reactions. He or she can also use the application to trigger special actions such as sounds or an object’s movement on the stimulation screen.

Fig. 1

Workspace used during experiments. It consists of the stimulation screen (right) visible to a subject and the control screen (left) visible to an operator

3.1 Calibration

Every eye tracking session should start with a calibration, during which a function mapping eye tracker output to screen coordinates is built. In this process a user is expected to follow with the eyes a stimulus appearing in various places on a screen. Because children in general, and especially children with impairments, do not tend to cooperate [10], it is not possible to use the same scenario to calibrate them. There are different ways to deal with this problem. For instance, during the studies presented in [6] the calibration was limited to the presentation of only two points. In the work described in [3], a small, visually attractive sounding toy was displayed at one of five predefined spatial positions. However, neither solution is feasible for children with CVI, as they do not cooperate at all. For this reason a new solution had to be invented.

It was observed during our previous experiments that, when an eye tracker is calibrated for one person, it is possible to utilize the output it produces when another person uses the same eye tracker. Obviously, the signal must be recalibrated to show the true gaze points of the new person; however, even before this recalibration it is possible to see that the eye is moving and to determine the direction of this movement [8].

To calibrate the eye tracker properly it is necessary to have some true gaze positions and the corresponding eye tracker output. Therefore, the idea was to perform an implicit calibration made by an operator [2]. The operator uses the control screen to observe both the stimulation and the eye tracker output. When a new object appears on the screen and the operator sees the child’s reaction (the eye tracker output changes), it may be assumed that the child is now looking at this object. The operator needs only to click the object on the control screen, and the application registers both the click coordinates and the current eye tracker output.

The gaze calculation module (GCM) has two inputs: uncalibrated gaze coordinates from the eye tracker and the coordinates of the operator’s clicks (Fig. 2). At the beginning, the GCM returns the gaze coordinates exactly as it receives them from the eye tracker. When the operator clicks any place on the screen with the mouse, this information is sent to the GCM and registered, together with the eye tracker output obtained at the same moment, as a new calibration point (CP). When at least four CPs are available, the recalibration model is calculated. This model maps the eye tracker output to gaze coordinates on the screen and consists of two functions:

$$\begin{aligned} x_s = f_x(x_e,y_e) \\ y_s = f_y(x_e,y_e) \end{aligned}$$
(1)

where \(x_e\) and \(y_e\) represent the data obtained from the eye tracker and \(x_s\) and \(y_s\) are the estimated gaze coordinates on the screen.

There are multiple regression functions that may be used [9]. In this study, a quadratic polynomial function was used (2).

$$\begin{aligned} x_s = A_xx_e^2+B_xy_e^2+C_xx_ey_e+D_xx_e+E_xy_e+F_x \\ y_s = A_yx_e^2+B_yy_e^2+C_yx_ey_e+D_yx_e+E_yy_e+F_y \end{aligned}$$
(2)

The coefficients of the function were calculated from the calibration points (CPs) using the Levenberg-Marquardt algorithm. Every new point (a new click by the operator) causes recalculation of the coefficients. After at least four such points have been collected, the application is able to calculate a calibration function and use it to show the child’s gaze point more accurately. Naturally, the accuracy increases as more points become available [8].

Fig. 2
figure 2

Simplified schema of the gaze calculation module

3.2 Working Scenario

The work starts with a classic calibration performed by the operator. Then the child is seated at the stimulation screen and the operator sits at the control screen. The operator may now start different stimulations and observe the child’s reactions. The control screen displays the same image as the stimulation screen, together with information on whether and where the eyes are visible to the eye tracker and on the gaze positions. The gaze positions presented are not accurate at first, so the operator may use the procedure described in the previous section to calibrate the signal.

Every stimulation may be static or dynamic. The operator can start and stop objects’ movement in dynamic stimulations. Most of the stimulations are able to produce additional effects; for instance, when a child looks at some object (e.g. an animal), it may produce a sound.
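Such gaze-triggered effects can be sketched as a dwell-time check. The class below is hypothetical (the paper does not specify its trigger logic, and the 0.5 s threshold is an assumption): an action fires once the gaze has stayed inside an object’s bounding box for a minimum time.

```python
class GazeTrigger:
    """Sketch of a gaze-contingent effect: invoke an action (e.g. play
    a sound) once the gaze has dwelt inside an object's bounding box
    for a minimum time. All names and thresholds are assumptions."""

    def __init__(self, bbox, action, dwell_s=0.5):
        self.bbox = bbox        # (left, top, right, bottom) in pixels
        self.action = action    # callback invoked on trigger
        self.dwell_s = dwell_s  # required dwell time in seconds
        self._enter_t = None    # time the gaze entered the box
        self.fired = False

    def update(self, x, y, t):
        """Feed one gaze sample in screen coordinates with its timestamp."""
        left, top, right, bottom = self.bbox
        inside = left <= x <= right and top <= y <= bottom
        if not inside:
            self._enter_t = None     # leaving the box resets the dwell timer
            return
        if self._enter_t is None:
            self._enter_t = t
        if not self.fired and t - self._enter_t >= self.dwell_s:
            self.fired = True
            self.action()
```

For a dynamic stimulation such as the moving car, `bbox` would simply be updated every frame to follow the object.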

A therapist may also invoke all sound effects manually by clicking a mouse button. Such functionality is needed when the eye tracker output, in conjunction with the implicit calibration, provides gaze coordinates that do not match the position of the appropriate object of the stimulation, yet the therapist is convinced that the child reacted to the stimulus.

4 Initial Experiments

The research group consisted of three children: two visually impaired children with cerebral palsy and one child without any impairment. The first two participants were chosen based on the following criteria: on one hand, they were children with complex impairments and deep visual perception problems; on the other hand, there was a conviction that the children were able to see at all. For the purpose of the experiment description, the eye movement signals gathered for these children are denoted as follows:

  • Sub1—a healthy girl (12 years old)

  • Sub2—a girl with deficits in visual attention and with cerebral palsy (5 years old)

  • Sub3—a boy with deficits in visual perception and with cerebral palsy (8 years old)

The experiments were conducted with the consent of those children’s parents.

In order to achieve the research goal, five stimuli were developed in consultation with the therapists working daily with the chosen children. Their experience with the children’s interests and abilities was helpful in preparing appropriate incentives:

  • S1—a car that moves from one side of the screen to the other. Gaze focused on the car triggers a sound effect of an engine whirr.

  • S2—a bike moving between the sides of the screen. Similarly to the previous case, focusing on the bike triggers the sound of shifting gears.

  • S3—a flower in a pot and a watering can. At the beginning the flower is withered. When a subject looks at the watering can, an animation of watering the flower is invoked, which makes the flower straighten. If the gaze is focused on the flower, its petals grow.

  • S4—a boat; when it is focused on, the sound of the sea is invoked.

  • S5—waves with a fish, which floats across the screen and disappears. Observing the waves makes the fish appear once again, but on the other side of the screen.

Each experiment consisted of two basic steps: the calibration process performed by a therapist and one of the described stimuli: S1, S2, S3, S4 or S5. The eye tracker had to be calibrated by the therapist because, in the case of two of the children, it was impossible to explain the rules of this process. After the first step, the child was placed in front of the stimulation display while the therapist took a place in front of the control display, which provided a stimulation preview and its management (see Fig. 1). The operator screen was outside the child’s field of vision, so as not to disturb him or her during a test.

5 Analysis of Results

Due to paper space limitations, the results obtained during the tests will be discussed only in regard to two stimuli: the car (S1) and the flower (S3).

5.1 Car Stimulation (S1)

One healthy child (Sub1) and one child with visual impairments and cerebral palsy (Sub2) took part in the experiments based on the car stimulation. A map of an example set of registered fixations is shown in Fig. 3 (left).

Fig. 3

Fixation maps for example stimuli S1 (left) and S3 (right)

Additionally, data from Sub1 are presented in Fig. 4: for the OX axis on the left and for the OY axis on the right. Coordinates are given in screen pixels as a function of time expressed in seconds. The green area represents the moving object, whereas the black line represents the eye position at each instant. It can be observed that for most of the experiment the eyes were visible to the recording device. Furthermore, it is visible that the child traced the moving object with her eyes during the whole experiment.

In the next figure (Fig. 5), recordings from Sub2 are shown with the same division. The red area and the black line play the same roles as in the previous charts, representing the moving object and the eye position respectively. Analyzing these charts, it may be noticed that during the first 12 s of the stimulation the eye movement coincides with the moving object’s area. After that time, a schematic search from one edge of the screen to the other is observed. It may indicate that the child became bored or that her attention was distracted.
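One simple, hypothetical way to quantify the coincidence visible in Figs. 4 and 5 is the fraction of valid gaze samples that fall inside the moving object’s span at the same instant; the function below is an illustration, not part of the described application.

```python
def on_target_ratio(gaze_x, object_spans):
    """Fraction of valid gaze samples whose horizontal coordinate falls
    inside the moving object's span at the same instant.

    gaze_x       -- gaze x coordinates per sample, None where eyes were lost
    object_spans -- (left, right) object bounds per sample, same length
    """
    # keep only samples where the eye tracker reported a position
    valid = [(g, s) for g, s in zip(gaze_x, object_spans) if g is not None]
    if not valid:
        return 0.0
    hits = sum(1 for g, (left, right) in valid if left <= g <= right)
    return hits / len(valid)
```

Computed over a sliding window, such a ratio would also expose the moment (around 12 s for Sub2) when tracing gives way to a schematic edge-to-edge search.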

Fig. 4

Eye movement recordings for stimulation S1 for Sub1 for OX (left) and OY (right) axes

Fig. 5

Eye movement recordings for stimulation S1 for Sub2 for OX (left) and OY (right) axes

5.2 Flower Stimulation (S3)

The second experiment used stimulation S3: the picture of the flower in the pot. All three children took part in this experiment, with eye movement signals denoted by Sub1, Sub2 and Sub3. The stimulation picture with example fixations for this part of the tests is presented in Fig. 3, on the right side.

To demonstrate the correspondence between the positions of the stimulation objects and the healthy child’s eye movement, the charts presenting this dependency were covered by two colors: blue, denoting the flower location, and green, denoting the watering can. Data belonging to Sub1 were divided into OX and OY axis coordinates and are shown, as a function of time, in Fig. 6. Study of these charts may lead to the conclusion that, at the beginning, the child was focusing on both objects alternately. Then, after about 10 s, the child concentrated the gaze on the watering can. This behavior triggered the animation of watering the flower, after which the can disappeared. In Fig. 6 (right) it is visible that the child then moved the eyes towards the flower, observing the effect of the watering (the flower straightening).

Fig. 6

Eye movement recordings for flower stimulation (S3) and Sub1 for OX (left) and OY (right) axes

Fig. 7

Eye movement recordings for flower stimulation (S3) and Sub2 for OX (left) and OY (right) axes

Fig. 8

Eye movement recordings for flower stimulation (S3) and Sub3 for OX (left) and OY (right) axes

Figures 7 and 8 contain the results of the measurements for Sub2 and Sub3. To distinguish the charts for the healthy child from those for the children with impairments, different colors were used for the placement of the stimulation objects: orange marks the area occupied by the flower and red the watering can. Data obtained for Sub2 (Fig. 7) indicate the child’s interest until the end of the watering animation and the disappearance of the can. The small amount of data registered for the vertical eye movement is a sign that the child looked away from the display. This could have been caused either by an external distracting factor or by a loss of interest in the experiment.

Recordings related to Sub3 (Fig. 8) show that this child often looked away from the very beginning of the stimulation. The loss of recordings in both axes indicates that the eye tracker was not able to find the position of the eyes.

6 Summary

Studying the charts concerning the healthy child (Sub1), it may be noticed that she was able to focus on the displayed objects and trace their movement. This shows that the application operated as assumed. With such results, we decided to utilize the workspace described earlier for experiments with children affected by complex impairments. However, it was expected that the results in this part of the tests would be worse, as they might be affected by the vision limitations and cerebral palsy as well as by other external factors. This is visible in the recordings for Sub2 and Sub3. When conducting tests engaging children who were wards of the Children with Disabilities Center, a lot of tension and distraction was caused by the new conditions they had been put into. Another possible problem might have been the presence of additional people, because the children were used to working with only one therapist.

Nevertheless, the studies performed during the research showed that it is possible to use an eye tracker with children with whom communication is difficult and for whom, consequently, calibration on the child is impossible. The application described in the paper allows for an initial calibration, done by a therapist, which during tests is corrected based on the behavior of the child’s eyes. Additionally, with the described workspace, a therapist gains an objective tool to assess the quality of vision.

It is important to emphasize that, due to the different levels of the children’s handicaps, a proper stimulation should be prepared individually for every child. Before any experiment, a child should be observed in terms of the things that are likely to draw his or her attention. Different image attributes, such as color, contrast and the type of objects, should be taken into account. This may help to prepare a stimulation able to activate and engage a child and to focus his or her attention on the stimulation tasks. Additionally, before the start of the ‘core’ experiments, children should be acquainted with the environmental setup to get used to the new conditions and devices. The workspace will be extended with new stimuli developed in cooperation with the therapists for other children under their care.