1 Introduction

When one uses the internet, the first thing one sees is the user interface (UI). The user interface is a notion from human–computer interaction (HCI) that applies not only to internet services but to every device or work process that humans use or come into contact with, such as cars, factory equipment, and software [1]. The internet services that people use have various flaws alongside their conveniences. Yet from a UI/UX standpoint, despite those individual differences, they all use a similar UI to provide their services. The reason is that when a UI/UX is constructed, usability and identity are developed as separate concerns [2]. Usability means that the service is designed to take the shortest route and to consider the user's convenience, for example by laying out buttons properly so the user does not make easy mistakes. Identity refers to the user's point of view: whether the service components are intuitive, easily visible, and good-looking. However, the UIs distributed by manufacturers are uniform, which creates problems for users because each individual's conditions and circumstances are not considered. For example, the senior generation tends to have weaker sight and hearing than the average adult, as well as slower reflexes. Yet the UIs distributed by manufacturers take none of these circumstances into account, making it difficult for seniors to use internet services. This widens the gap in device proficiency between the senior and younger generations; seniors cannot catch up, and the widening gap can lead to conflict between the generations.

To solve these problems, research on and structural development of a UI/UX that considers the user's cognitive responses and behavioral patterns is essential. However, existing UI/UX research has limitations: it can provide customized services only for specific users, or customized UI/UX only for specific devices.

In this paper we propose a self-customizing UI/UX based on the user's cognitive responses and behavioral information analyzed in a distributed environment. Our proposed method measures the user's cognitive ability, creates an initial profile, and provides a customized UI from it. Then, while the user uses the system, it modifies the initial profile based on the user's behavioral history and re-adjusts the UI accordingly. The initial profile is generated by a machine-learning-based classifier trained on the cognitive response measurements of 200 users.

2 Related work

Since the importance of UI/UX emerged, a variety of UI/UX research has been conducted. Park et al. [3] proposed a UI/UX smart health service method utilizing motion recognition in a machine-to-machine (M2M) environment. Unlike existing health care platforms, [3] proposed an M2M-based smart health care service, usable at home, that collects bio-information in real time. It collects data in real time through various sensors (i.e., EEG, ECG, GSR) as input devices and uses computers, tablets, smartphones, and the Kinect as output devices. It can also cope easily with dynamic changes in the wireless environment, and by combining motion recognition with the technology supporting mobility between sensor nodes on M2M, it can perform systematic management based on the user's motion recognition data. Voutilainen et al. [4] and Moon et al. [5] proposed advanced versions of the responsive UI that emerged with smart devices such as smartphones and tablet PCs. Most users expect an expandable UI compatible not only with PCs but with tablets and phones as well, without any additional configuration. Therefore, [5] proposed a new advanced responsive web framework based on an XML user interface. Moon et al. [5] claim that a UI should change automatically according to the user's psychological and aesthetic needs, and therefore researched various theories of color, shape, wave, fonts, and the design language of UI, all related to providing a more advanced form of UI. Raipal et al. [6] designed a dynamic interaction between the user and the system to develop an intuitive and familiar interface. The proposed system first observes the user's behavior and recognizes any meaningful behaviors; then, through the proposed interaction model, it predicts the user's intention.
The proposed interaction model design improves usability by understanding the user's intentions and tendencies through user learning. The system demonstrates the benefit of providing the user with appropriate information through the proposed dynamic UI/UX design. Park et al. [7] provided a user-customized bookmark application that allows users to access frequently used services on their smartphone through the application. Kim et al. [8] proposed a UX/UI method based on the user's finger-tapping pattern that provides a variety of user-friendly tapping patterns while increasing the recognition rate between patterns; it also provides effective performance and stability by shortening the delay after pattern input is completed. Chen et al. [9] propose an interactive smart watch display providing highly interactive consumer-device interaction based on research into wearable devices and interaction design. Chen et al. [9] tried to arouse users' interest by departing from previous complex and fixed UI screen forms and increasing the UI's convenience.

As shown above, although there has been a variety of UI/UX research, most of it focuses on specific environments or on the user-device relationship [10,11,12]. However, since most users access services through PCs or smart devices, research on providing customized UI/UX services for general environments and general users is much needed, yet little has been done to address this. This paper therefore proposes a model that can provide a customized UI/UX service in a general environment for general users.

3 Proposal method

In this section, we describe the overall architecture of our model: a user-based adaptable UI/UX.

3.1 Model architecture

The proposed model consists of two main parts. The first is a user-cognitive-based UI/UX model that measures the user's cognitive ability, constructs a profile accordingly, and provides a customized UI/UX. The second is a user-behavior-based UI/UX model that analyzes the behavioral information generated as the user uses the system's services, constructs a profile according to the user's behavioral pattern, and likewise provides a customized UI/UX (Fig. 1).

Initially, the model generates a general profile of the user's current status through cognitive tests and classifies the user's type with the classifier. It then provides a customized UI/UX according to that type. The behavioral information generated as the user continues to use the system is used by the user-behavior model to modify the profile according to the user's behavior, and the model then provides a customized UI/UX according to the modified profile.

Fig. 1

Architecture of user-based adaptable UI/UX model

3.2 User-cognitive model

In this section, we propose a user-cognitive modeling method based on the user's sensory and motor ability, which collects cognitive data from the user through various devices in a distributed environment. Figure 2 outlines the proposed user cognitive model. The model collects the user's cognitive data from tablets, PCs, smartphones, etc. in a distributed environment. From the collected data, the model performs user cognitive modeling and clusters the users' reaction types. The modeling follows a semi-supervised learning scheme: since there is no ground-truth answer for classifying user types, the types are formed through clustering and used as an answer set, and the classifier is then produced from this answer set through supervised learning. The clustering model uses K-means and the classifier uses a support vector machine. Cognitive responses include perception, sensory, and motor ability, and are collected through experimental measurements that take UI elements and contents into account [13]. Through this model structure, users are classified into several types, which enables the proposed model to provide a customized UI structure across various distributed environments and contents such as the web, video, etc.
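The clustering-then-classification pipeline described above can be sketched as follows. This is a minimal illustration with synthetic data standing in for the real cognitive measurements; the function name and the feature dimensions are our assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_cognitive_classifier(X, n_types=3, seed=0):
    """Cluster cognitive-response features into user types, then train
    an SVM on the cluster labels (the clustering acts as the answer set)."""
    X_std = StandardScaler().fit_transform(X)           # z-score normalization
    types = KMeans(n_clusters=n_types, n_init=10,
                   random_state=seed).fit_predict(X_std)
    clf = SVC(kernel="linear").fit(X_std, types)        # supervised step
    return clf, types

# Synthetic stand-in for the multi-feature cognitive measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
clf, types = build_cognitive_classifier(X)
```

Once trained, the classifier assigns any new user's measurement vector to one of the discovered types without re-running the clustering.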

The sensory and motor performance indicators included a vision test, a motor test using the Tap task [14], and the Edinburgh handedness inventory [15]; the cognitive tests included the Video caption location task, Word span task [16], Color lexical decision task [17], Tower of London task [18], Visual field task [19], and a satisfaction test. All tests were done online or offline. The offline tests used E-Prime to collect responses to the presented stimuli, while the online tests used an online cognitive response measurement task developed by our team. The Edinburgh handedness inventory, however, was administered as a survey.

3.2.1 Vision test

This test was conducted offline and its goal was to measure the test subject's vision. The test proceeds by responding to the presented number or symbol at each letter size. If the presented stimulus is a number, the subject is instructed to input the identical number; when a 'C' shape is presented, the subject responds by inputting the direction it points toward using the arrow keys on the keyboard. The test used a total of 12 font sizes: 1, 2, 3, 4, 6, 9, 11, 13, 15, 18, 20, and 22 pt.

3.2.2 Tap task

This test was conducted offline and its goal was to measure the test subject's motor ability. The subject is instructed to press a certain key as many times as possible within a limited time of 10 s using only the right index finger. The test consisted of eight sets of 10 s each, and we calculated the average number of key presses across the sets.

Fig. 2

Overview of user cognitive modeling on distributed environment

3.2.3 Edinburgh handedness inventory

This test was conducted as an offline survey and its goal was to determine whether the subject was right-handed or left-handed. The subject is instructed to read the items in the presented survey and check which hand he or she uses more frequently for each. From this it is possible to judge whether the subject is right-handed (i.e., left-hemisphere dominant) or left-handed (i.e., right-hemisphere dominant).

3.2.4 Video caption location task

This task was conducted offline and its goal was to find the most effective caption location on the screen. A video is presented on screen while a caption is shown at one of the measured positions, and the task is to judge whether the presented caption matches the dialogue of the video content. A total of nine positions were measured: when the video is presented, the caption appears at one specific position among left-top, center-top, right-top, left-middle, center-middle, right-middle, left-bottom, center-bottom, and right-bottom.

3.2.5 Word span task

This task was conducted offline and its goal was to measure the test subject's short-term memory span. A series of words is presented at the center of the screen, and the subject is instructed to read each word aloud and memorize it. When a certain cue is given, the subject speaks the words aloud in the memorized order, and the experimenter checks whether the answer is correct. The task was divided by the number of words presented into a total of five conditions: 3, 4, 5, 6, and 7 words.

3.2.6 Color lexical decision task

This task was conducted offline and its goal was to find which combination of font color and background color was most effective for the test subjects to recognize captions. A word or non-word is presented on screen, and the subject is instructed to recognize the given stimulus. The conditions of this task were background (yellow) \(\cdot \) font (blue), background (lime) \(\cdot \) font (red), background (black) \(\cdot \) font (white), background (blue) \(\cdot \) font (white), background (yellow) \(\cdot \) font (black), background (lime) \(\cdot \) font (white), background (white) \(\cdot \) font (black), and background (grey) \(\cdot \) font (black).

3.2.7 Tower of London

This task was conducted online and its goal was to measure the test subject's problem-solving skills. The subject is instructed to stack chips to match a given picture by dragging them with the mouse. The result is the total number of clicks the subject needed to solve the task.

3.2.8 Visual field task

This task was conducted offline and its goal was to assess the lateralization of the test subject's language processing. A word or non-word is presented on either the right or left side of the screen, and the subject is instructed to identify the presented word. A stimulus on the left side of the screen measures the subject's left visual field (i.e., right-hemisphere related), and a stimulus on the right side measures the right visual field (i.e., left-hemisphere related).

Fig. 3

Overview of user cognitive classifier

3.3 Satisfaction test

This test was conducted offline and its goal was to find the most satisfying font size. Subjects graded each presented font size on a scale from 1 to 4, where 1 means very unsatisfying, 2 unsatisfying, 3 satisfying, and 4 very satisfying. The test used the same 12 font sizes as conditions: 1, 2, 3, 4, 6, 9, 11, 13, 15, 18, 20, and 22 pt.

3.3.1 Clustering model

The proposed user cognitive modeling follows the process of Fig. 3. K-means, hierarchical clustering, and SOM clustering are used for clustering, and the best-performing of these models is adopted to cluster the user types. Clustering methods divide broadly into partitioning clustering and hierarchical clustering. For partitioning clustering we used k-means, one of the most representative techniques; in addition, SOM clustering, which is useful for visualization, was used to compare performance. The purpose of k-means clustering is to partition n observations into k clusters, where each observation belongs to the cluster with the nearest mean, which serves as the cluster's prototype.
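One plausible way to adopt "the most optimal model" among clustering candidates is to score each on the same data, for example with the silhouette coefficient. The sketch below compares K-means and agglomerative clustering on synthetic data; the selection criterion is our assumption, and SOM is omitted here because it requires a third-party library.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import silhouette_score

def compare_clusterers(X, k=3, seed=0):
    """Silhouette score for each candidate clusterer; the highest-scoring
    model would be the one adopted for clustering the user types."""
    candidates = {
        "kmeans": KMeans(n_clusters=k, n_init=10, random_state=seed),
        "agglomerative": AgglomerativeClustering(n_clusters=k),
    }
    return {name: float(silhouette_score(X, model.fit_predict(X)))
            for name, model in candidates.items()}

rng = np.random.default_rng(1)
# Three loose synthetic groups standing in for user-response types.
X = np.vstack([rng.normal(c, 1.0, size=(40, 5)) for c in (0.0, 4.0, 8.0)])
scores = compare_clusterers(X)
```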

Fig. 4

Design of online activity profile

Fig. 5

Make user classification model using

Hierarchical clustering is a method that seeks to build a hierarchy of clusters. It generally follows one of two strategies: agglomerative or divisive. In this paper the agglomerative method (i.e., bottom-up approach) was used, in which each observation starts in its own cluster and pairs of clusters are merged while moving up the hierarchy. The SOM, or self-organizing map, is a dimensionality reduction method: a type of artificial neural network (ANN) trained with unsupervised learning to produce a lower-dimensional, usually two-dimensional, representation (the 'map') of the input space of the training samples.

3.3.2 User cognitive response classifier

The user cognitive response classifier is generated through the user cognitive modeling process of Sect. 3.3.1 (Clustering model). It is a classifier that estimates what type of user a newly input data point belongs to. The model's training process is that of Fig. 3.

In this paper, the proposed user cognitive response classifier is trained through semi-supervised learning after the user cognitive modeling of Sect. 3.3.1: since there is no ground-truth answer for user types, we cluster the user types with a clustering model, use the result as an answer set, and derive the classifier through supervised learning. We use the support vector machine for the proposed classifier.

An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. This learning technique is well suited to measuring a human's cognitive state: SVMs can generate both linear and nonlinear models, and can compute both equally efficiently.

3.4 User-behavior model

3.5 Structural design of online activity profile

The user online activity profile is part of the weblog, the data generated as users use the platform on the web. Users leave various traces while acquiring information or purchasing goods, and from these traces it is possible to extract various information about them. In this study, the profile structure is designed from user activities identified by analyzing the literature and existing digital content platforms. A previous study [20] used access frequency and content view logs as an internet activity profile. Choi and Kim [21] researched online activity profiles while analyzing content categories as a major feature, and [22] researched online activity profiles using content view choices, content view counts, content evaluations, content upload counts, etc. as logs. Lee et al. [23] analyzed content purchases and similar activity logs as a major feature, and [24] conducted similar research using content view choices, platform visiting times, etc. as logs.

Fig. 6

Make user classification model

Fig. 7

Cognitive model: result of elbow method (K-means)

The user online activity profile structure of this paper, based on the prior research above, is presented in detail in Fig. 4. The online activity profile consists largely of two entities: user (t_User) and content type (t_Content Type). The user entity consists of user ID (User_ID), password (Password), email (Email), and other personal-information properties, and has a one-to-one (1:1) relationship with the user profile (t_User Profile). The content type entity consists of content ID (OID), content name (Title), content type code (Type Code), and content category (Category); the user and content type entities have a many-to-many (N:M) relationship. The content type codes are e-book (EB), game (GA), music (MU), web (WE), and video (VI). The content categories are information technology (IT), society/diplomacy (SO), science (SC), sports (SP), economy (EC), and entertainment/culture (EN). The activity log (t_Action Log), which is the relationship between the user and content type entities, consists of the user ID and content ID (the primary keys of the two entities), an action code (Action Code), and an action date (Action Date). The action codes include purchase (BU), evaluation (EV), comment (CO), post/registration (CR), etc. There is also a separate sub-entity of content type called total comment (t_Total Comment), which consists of a content ID (primary key of content type), a comment ID (CID), author ID (Register ID), post date (Register Date), and comment body (Comments). The evaluation entity (t_Total Eval) likewise consists of a content ID and account ID (Account ID) as primary keys of content type, an evaluation date (Eval Date), and an evaluation point (POINT).
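The entity layout above can be sketched as plain data classes. The attribute names follow the schema described for Fig. 4, while the Python types and the example values are our assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ContentType:            # t_Content Type
    oid: str                  # content ID (OID)
    title: str                # content name
    type_code: str            # EB, GA, MU, WE, VI
    category: str             # IT, SO, SC, SP, EC, EN

@dataclass
class ActionLog:              # t_Action Log (the N:M relationship)
    user_id: str
    oid: str
    action_code: str          # BU, EV, CO, CR, ...
    action_date: date

@dataclass
class User:                   # t_User (1:1 with t_User Profile)
    user_id: str
    email: str
    logs: list = field(default_factory=list)

# A hypothetical user evaluating one piece of content.
user = User("u001", "u001@example.com")
user.logs.append(ActionLog("u001", "c100", "EV", date(2020, 1, 1)))
```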

3.5.1 Online activity profile based user modeling

Figure 5 is a user modeling diagram based on the user online activity profile. User activity modeling based on the online activity profile models users in three steps from the activity data collected through smart devices. First, the online platform collects users' activities as data. Second, based on the online activity profile data, we cluster the users with the K-means algorithm. Finally, from the clustered user data we generate a user classification model, using various machine-learning classification algorithms, that classifies users by their activity profile.

Figure 6 is a user modeling diagram based on both online activity profiles and cognitive responses. By utilizing both the user activity data and the cognitive response data collected through smart devices, it is possible to generate an optimal user classification model that considers both the user's cognitive response and online activity.

4 Experiments

In this section, we describe the experiments and results used to evaluate the performance and user satisfaction of our proposed model. The experiments were conducted to verify the performance and accuracy of the cognitive-based model and the behavior-based model. We then evaluated the model through the actual satisfaction of users who used it.

4.1 Experiment details

In this paper, test subjects were recruited to construct the cognitive modeling and to evaluate satisfaction with our proposed model. For the cognitive modeling we recruited 122 adults and conducted the experiment to construct the model. In addition, 200 students were recruited for the satisfaction evaluation.

4.2 Result of the cognitive based model

We conducted the experiments of Sect. 3.2 on 122 adults for user cognitive modeling. The experiment was approved by Korea University's Ethics Committee; all subjects were notified about the experiment beforehand according to the procedure and signed the research participation agreement. The data consist of 85 features in total, and missing values were imputed with the feature's average value. A total of eight outliers were detected using the Bonferroni p-value. The Bonferroni p-value addresses the multiple comparison problem that occurs when performing many hypothesis tests at once: the more tests performed simultaneously, the higher the probability of falsely rejecting a null hypothesis. The Bonferroni correction adjusts the p-value to account for this. Under this criterion, an observation whose corrected p-value is less than 0.05 is determined to be an outlier.
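One way to realize the Bonferroni-based screening described above is sketched below. The use of studentized values and a t-distribution is our assumption (the paper does not spell out the test statistic), and the injected value 15.0 is synthetic.

```python
import numpy as np
from scipy import stats

def bonferroni_outliers(x, alpha=0.05):
    """Flag observations whose Bonferroni-corrected two-sided p-value,
    computed from the studentized value, falls below alpha."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    studentized = (x - x.mean()) / x.std(ddof=1)
    p_raw = 2 * stats.t.sf(np.abs(studentized), df=n - 2)
    p_bonf = np.minimum(p_raw * n, 1.0)     # Bonferroni correction
    return np.flatnonzero(p_bonf < alpha)

# 100 ordinary observations plus one injected extreme value.
data = np.concatenate([np.random.default_rng(2).normal(0, 1, 100), [15.0]])
outliers = bonferroni_outliers(data)
```

Only the injected extreme value survives the corrected 0.05 threshold; ordinary observations do not, which is the point of the correction.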

After outlier detection, we proceeded to feature selection. We computed Pearson correlations, and a total of 55 features were kept. After feature selection, we normalized the data with z-scores and selected K-means for modeling after comparing the performance of K-means, hierarchical clustering, and SOM clustering [25]. We set the initial k value to 3, and the result of the elbow method in Fig. 7 shows that this is the optimal choice. The elbow method is used to choose the number of clusters for a given dataset [26]: one monitors a cluster-quality measure while sequentially increasing the number of clusters until the improvement levels off.
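The elbow method described above can be sketched as follows: compute the within-cluster sum of squares (inertia) for increasing k and look for the bend. The data here are synthetic with three true groups, so the elbow appears at k = 3.

```python
import numpy as np
from sklearn.cluster import KMeans

def inertia_curve(X, k_max=6, seed=0):
    """Within-cluster sum of squares (inertia) for k = 1..k_max."""
    return [KMeans(n_clusters=k, n_init=10, random_state=seed)
            .fit(X).inertia_ for k in range(1, k_max + 1)]

rng = np.random.default_rng(3)
# Three well-separated synthetic groups, so the 'elbow' appears at k = 3.
X = np.vstack([rng.normal(c, 0.5, size=(40, 2)) for c in (0.0, 3.0, 6.0)])
curve = inertia_curve(X)
# Inertia drops steeply until k reaches the true number of groups,
# then flattens; the bend in the curve marks the chosen k.
```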

We performed unsupervised learning through K-means and then, using the generated cluster labels as an answer set, performed semi-supervised learning. For the supervised step, we used the support vector machine. We analyzed both a linear SVM and a non-linear SVM, using the RBF kernel for the latter. As the dataset was small (108 samples), we used ten-fold cross-validation [27]. The analysis results are shown in Table 1.

Table 1 Result of the number of clusters in each groups
Table 2 Accuracy result of users' cognitive response

According to the results in Table 2, the linear SVM performed better than the non-linear SVM. This can be interpreted to mean that the distribution of the data has a linear characteristic.
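The ten-fold cross-validated comparison of the two kernels can be sketched as follows. The data here are synthetic; on well-separated synthetic clusters both kernels score similarly, whereas on the paper's data the linear kernel came out ahead.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def compare_svm_kernels(X, y, folds=10):
    """Mean ten-fold cross-validated accuracy for linear and RBF SVMs."""
    return {kernel: float(cross_val_score(SVC(kernel=kernel), X, y,
                                          cv=folds).mean())
            for kernel in ("linear", "rbf")}

rng = np.random.default_rng(4)
# Synthetic stand-in for the cleaned cognitive dataset (3 user types).
X = np.vstack([rng.normal(c, 1.0, size=(36, 8)) for c in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 36)
scores = compare_svm_kernels(X, y)
```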

4.3 Result on behavior based model

We generated a user classifier model and classified clusters based on the user online activity profile data collected according to the designed online activity profile structure. The collected activity profile consisted of user ID, content ID, content view count, content view date, type of activity, and category of content: a total of 146,313 logs from 3,134 people. We used the K-means algorithm to cluster the users and obtained the optimal number of clusters with the elbow method, which finds an appropriate number of clusters for a dataset based on the consistency of the data within clusters. Figure 8, presenting the result of the elbow method, shows that cluster coherence improves markedly up to three clusters; based on this, we set the number of clusters for the K-means algorithm to three. The three resulting clusters were: the "Active Group", the most active group, consisting of 188 people; the "Medium Group", neither the most active nor the most inactive, consisting of 845 people; and the "In-active Group", the least active, consisting of 2,101 people. We also confirmed that each cluster's preferred activities and content categories differed.

Fig. 8

Behavior model: result of elbow method (K-means)

We then used classification algorithms to create the user classification model. The choice of algorithm may vary depending on the size, quality, and characteristics of the data. Therefore, in this study we selected the algorithm with the highest accuracy by comparing the results of the following algorithms for our smart senior classification model: k-nearest neighbor, decision tree, neural network, and support vector machine. The test data for the classification algorithms was set to 300 samples, 10% of the total data. In addition, to increase the reliability of the test results, the ratio of test data to training data was kept at the 10% level within each cluster.

4.3.1 k-Nearest neighbor

For the k-nearest neighbor algorithm [28], a total of 300 test data points were drawn according to the clusters: 20 Active Group subjects, 80 Medium Group subjects, and 200 In-active Group subjects. The value of k was set to 56, the closest integer to the square root of the total sample size of 3,134. We trained on the data of 2,834 subjects with the k-nearest neighbor algorithm and classified the remaining 300 subjects. As a result, for the Active Group test data it classified 18 out of 20 correctly (90% accuracy, misclassifying two subjects); for the Medium Group, 80 out of 80 (100%); and for the In-active Group, 198 out of 200 (99%, misclassifying two subjects), for an overall accuracy of 98.7%. The results are presented in detail in Table 3.
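The square-root rule for choosing k can be sketched as follows, on synthetic data; round(sqrt(3134)) indeed gives 56. The function name and the two-group toy data are our own.

```python
import math
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_sqrt_rule(X_train, y_train, X_test):
    """k-NN with k set near the square root of the training-set size,
    mirroring the k = 56 choice for 3,134 samples in the text."""
    k = round(math.sqrt(len(X_train)))
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    return model.predict(X_test), k

rng = np.random.default_rng(5)
# Two well-separated synthetic groups standing in for activity clusters.
X = np.vstack([rng.normal(c, 1.0, size=(100, 4)) for c in (0.0, 4.0)])
y = np.repeat([0, 1], 100)
pred, k = knn_sqrt_rule(X, y, X[:5])
```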

Table 3 Result of KNN algorithm
Table 4 Result of decision tree
Fig. 9

Result of ANN

4.3.2 Decision tree

For the decision tree algorithm [29], a total of 300 test data points were drawn according to the clusters: 19 Active Group subjects, 90 Medium Group subjects, and 191 In-active Group subjects. We trained on the data of 2,834 subjects with the decision tree algorithm and classified the remaining 300 subjects. The resulting tree size was 30. For the Active Group test data it classified 19 out of 19 correctly (100%); for the Medium Group, 85 out of 90 (94.4%, misclassifying five subjects); and for the In-active Group, 187 out of 191 (97.9%, misclassifying four subjects). The results are presented in detail in Table 4.

Table 5 Result of SVM

4.3.3 Artificial neural network

For the artificial neural network algorithm [30, 31], we trained on the data of 2,834 subjects and classified the remaining 300 subjects. Performance did not increase significantly when the number of hidden nodes exceeded 5; rather, an excessive number of hidden nodes caused a drop in accuracy. Considering also the risk of overfitting with too many hidden nodes, we set the number of hidden nodes to 5.

With 5 hidden nodes, the overall classification accuracy was 96.8%. The results of the artificial neural network algorithm are shown in Fig. 9.
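A neural network with a single hidden layer of 5 nodes, as chosen above, can be sketched as follows. The solver, iteration budget, and synthetic data are our assumptions; only the hidden-layer size follows the text.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
# Synthetic three-group data standing in for the activity profiles.
X = np.vstack([rng.normal(c, 1.0, size=(60, 6)) for c in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 60)

# A single hidden layer of 5 nodes, following the choice in the text;
# the solver and iteration budget are our assumptions.
ann = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                    random_state=0).fit(X, y)
train_acc = ann.score(X, y)
```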

4.3.4 Support vector machine

For the support vector machine algorithm [32], a total of 300 test data points were drawn according to the clusters: 20 Active Group subjects, 80 Medium Group subjects, and 200 In-active Group subjects. As most of the classification capability of a support vector machine depends on the kernel selection, we used the radial basis function kernel in this experiment [33, 34]. We trained on the data of 2,834 subjects with the support vector machine algorithm and classified the remaining 300 subjects. For the Active Group test data it classified 20 out of 20 correctly (100%); for the Medium Group, 80 out of 80 (100%); and for the In-active Group, 199 out of 200 (99.5%, misclassifying one subject), for an overall accuracy of 99.7%. The results are presented in detail in Table 5.

Across the four algorithms (k-nearest neighbor, decision tree, artificial neural network, and support vector machine), all presented an accuracy over 95%. Among them, the support vector machine showed the highest accuracy at 99.7%. The results of the four classification algorithms performed in this experiment are shown in Table 6 below.

Table 6 Result of 4 algorithms
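The four-way comparison summarized in Table 6 can be sketched as follows, with a stratified 10% held-out split mirroring the evaluation protocol. All hyperparameters other than those stated in the text (k near sqrt(n), 5 hidden nodes, RBF kernel) are our assumptions, and the data are synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare_four_classifiers(X, y, seed=0):
    """Held-out accuracy of the four candidate algorithms on a
    stratified 10% test split."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.1, stratify=y, random_state=seed)
    models = {
        "knn": KNeighborsClassifier(n_neighbors=round(len(X_tr) ** 0.5)),
        "tree": DecisionTreeClassifier(random_state=seed),
        "ann": MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                             random_state=seed),
        "svm": SVC(kernel="rbf"),
    }
    return {name: float(m.fit(X_tr, y_tr).score(X_te, y_te))
            for name, m in models.items()}

rng = np.random.default_rng(7)
# Three well-separated synthetic activity clusters.
X = np.vstack([rng.normal(c, 1.0, size=(100, 5)) for c in (0.0, 4.0, 8.0)])
y = np.repeat([0, 1, 2], 100)
accs = compare_four_classifiers(X, y)
```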

4.4 Adapted UI/UX

Item selection is a very important task in providing a customized UI/UX. In this paper, we selected five categories, 22 sub-categories, and 63 sub-sub-categories of UI/UX based on the transformation factors selected through modeling. The selected items are shown in Table 7 below.

Table 7 Factor of UI/UX

In this paper, we provided a customized UI/UX according to the items above. The UI/UX is presented for the three groups identified by the cognitive model, and the analysis of the three groups is as follows.

  1. Normal group: A group that has no problems with their cognitive responses and can use the UI/UX normally. They can use the UI/UX with ease; the items are simply adjusted according to their behavioral information.

  2. Visible supported group: A group that has no problems with their cognitive responses but needs visual customization. The UI/UX items related to vision should be adjusted more actively.

  3. Cognitively impaired group: A group with lower cognitive responses than the other groups, for whom the number of displayed and selectable items must be adjusted so that the UI/UX service becomes easier and more convenient for them.

4.5 UI/UX user evaluation

In this paper, we designed the UI/UX for the three groups described in the previous section and provided it to users according to the proposed model. The users evaluated the usability and satisfaction of the UI/UX. The features of the provided UI/UX are as follows.

  1. Normal group: a structure similar to that of a general portal service, allowing the user to customize the control items and content items according to their preferences.

  2. Visible supported group: to provide visual support, items related to the content exposure count and the font size were 30% larger than in the Normal group. The font color and other items were also customizable according to the user's preferences.

  3. Cognitively impaired group: compared to the Normal group, the control items were reduced and simplified, and instead of free customization we provided a simple, easy-to-use form of controller.
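The group-specific adjustments above can be sketched as a function from group label to a UI configuration. The base values and the exact set of adjusted fields are illustrative assumptions; only the 30% enlargement for the visible supported group is taken from the text:

```python
def adapt_ui(group, base_font_size=16, base_exposure=10):
    """Return a UI configuration adjusted per user group.

    - normal: base settings, user-customizable.
    - visible_supported: vision-related items enlarged by 30%.
    - cognitively_impaired: fewer controls, customization replaced
      by a simple fixed controller.
    (Base values here are illustrative assumptions.)
    """
    config = {
        "font_size": base_font_size,
        "exposure_count": base_exposure,
        "customizable": True,
        "simple_controller": False,
    }
    if group == "visible_supported":
        config["font_size"] = base_font_size * 1.3
        config["exposure_count"] = round(base_exposure * 1.3)
    elif group == "cognitively_impaired":
        config["exposure_count"] = base_exposure // 2
        config["customizable"] = False
        config["simple_controller"] = True
    return config

print(adapt_ui("visible_supported")["font_size"])  # 20.8
```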

The evaluation group for this experiment consisted of 121 subjects from the Normal Group, 52 from the Visible Supported Group, and 27 from the Cognitively Impaired Group. The evaluation was conducted by alternating between a regular UI/UX and the proposed UI/UX for 15 minutes, evaluating usability and satisfaction.

For the usability evaluation, subjects first used the proposed UI/UX and then answered questions about its usability. The total score was calculated by summing the scores of the questions; we asked seven questions about the usability of the UI/UX, with a maximum total of 45 points. The score distribution was divided into five quintiles, labeled Horrible, Bad, Okay, Good, and Excellent (Table 8).

Table 8 Usability evaluation

The evaluation results show that the proposed UI/UX was at a convenient level, suggesting that the proposed model was somewhat useful (21\(\sim \)30) to the users.

The satisfaction evaluation was likewise based on first using the proposed UI/UX and then answering questions about satisfaction with it. The total score was calculated by summing the scores of the questions; we asked 13 questions about satisfaction with the UI/UX, with a maximum total of 65 points. The score distribution was divided into five quintiles, labeled Horrible, Bad, Okay, Good, and Excellent (Table 9).

Table 9 Satisfaction evaluation

The evaluation results show that the proposed UI/UX was at a satisfactory level, suggesting that the proposed model was indeed useful (40\(\sim \)52) to the users.
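The mapping from a total score to one of the five labels can be sketched as follows. Equal-width bands over the questionnaire's maximum score are an assumption on our part; the paper does not state the exact band boundaries:

```python
LABELS = ["Horrible", "Bad", "Okay", "Good", "Excellent"]

def band(score, max_score):
    """Map a total score in 1..max_score to one of five
    equal-width quintile labels (an assumed banding scheme)."""
    index = min(4, (score - 1) * 5 // max_score)
    return LABELS[index]

# Satisfaction questionnaire: 13 questions, 65-point maximum.
# Under this scheme the fourth band ("Good") spans 40-52, matching
# the range the text associates with a positive result.
print(band(41, 65))  # Good
```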

5 Conclusions

In this paper, we proposed a customized UI/UX based on the user's cognitive response and behavioral information in a distributed environment. The proposed method measures the user's cognitive response, constructs an initial profile, and provides a UI accordingly; as the UI/UX is used over time, it is further customized according to the accumulated behavioral information. Using a total of 122 user data sets, we generated a cognition-based model with the proposed method, presented the resulting UI/UX to 200 users as the model prescribed, and evaluated usability and satisfaction. The evaluation showed a usability score of 26.6 and a satisfaction score of 41.1, both receiving positive reviews compared to a regular UI/UX.

In future work, we will need more users in order to generate and evaluate an improved user model. By increasing the number of experiment participants, we will re-evaluate the model and improve its performance by tracking UI/UX changes against the accumulated history information.