1 Introduction

With the global population aging, special attention must be given in the coming years to Information and Communication Technology (ICT) for health and ageing well, with its focus on prevention, health promotion and integrated health care. Due to the rapidly rising number of older people in Europe, more opportunities must be created for older people to stay in work longer. Predictions are that, by 2060, approximately 30 % of the EU population will be aged 65+. The percentage of the EU population aged 80+ is forecast to increase fourfold between 1990 and 2060. This calls for improving the quality of life of older people. At the same time, it is clear that the economic and social models of the past fifty years will not be able to cope with these changes. Although people are living longer, they are not remaining healthy longer, and the demand for health and support services will increase much faster than the number of older people. This calls for a complete renovation of our healthcare system.

Healthcare should be care for health. Many of the illnesses that cause poor health and dependency, such as cardiovascular disease, type II diabetes and mental illness, are preventable, and their consequences for older people’s wellbeing can be managed. Chronic diseases are becoming a real health problem. In 2003, there were about 600,000 Dutch persons with diabetes, and this number is expected to double by 2025. On a world scale, the number of people with diabetes is expected to grow from 250 million to 380 million by 2025. This has resulted in two trends:

  • Cost increases: The costs of healthcare have increased enormously in recent years and have become too high for various countries and groups.

  • Realization of healthcare: In several EU countries, the proportion of the workforce employed in healthcare is out of proportion. The WHO expects this share to exceed 20 % by 2025, which can no longer be realized.

Health promotion should be introduced into daily life, in order to change healthcare into care for health [1]. This can save billions of euros per year. Many discussions have taken place about self-management in healthcare, covering topics such as health systematics [2, 3] and existing problems [4]. However, self-management for healthcare should be better supported, and a smart interactive system is needed. The computer is the natural tool for this. More and more people turn to the Internet to search for healthcare information. The available information on the Internet can be sorted into three categories:

  • Search engines: A search engine like Google can return massive amounts of information. However, the amount is too large for a human being to digest, and the results are not always solutions to a medical problem. The search results might be helpful to a medical doctor, but are difficult for ordinary people to judge. Users need an authoritative system in which all information is vetted by the best medical doctors, so that the user gets pure, high-quality results.

  • Health portals: Many health portals exist, belonging to countries and organizations, and their number is still growing every day [5–8]. Unfortunately, they do not sufficiently support transparency in healthcare. On top of this, in most of these information systems we cannot find information within the context we might expect, which is needed to solve real practical problems. A second problem is that most healthcare portals are based on their own databases, which creates problems for information exchange, sharing and reuse between different portals.

  • Expert systems: Research in the last five decades has led to the development of Medical Decision Support (MDS) applications using a variety of modeling techniques [9]. Most of them rely on statistical theory, such as Bayesian networks and probability [10, 11]. However, in these mathematical models only a “virtual” or “average” patient is studied, which is very different from a real diagnosis case. A user prefers a definite conclusion: he/she wants to know why and what to do, as a doctor would suggest, rather than receive a list of ranked probability values. Moreover, a large amount of clinical data is needed to obtain a reasonable probability distribution. These data are expensive and cannot reflect new knowledge in time.

    In earlier research, we developed a medical information system [12]. This semantic system can answer user questions such as:

    • Given certain symptoms, what kind of disease can it be?

    • What kind of treatment can be done?

    • Treatment by medicine—what kind of medicine and where to buy it?

    • Personal treatment—which professionals, which doctor and how to get there?

    • Support by assistive technology—what kind of prosthesis and where to buy it?

    • Etc.

Existing methods might be good enough as a knowledge base; however, prevention is always better than cure. A passive information system simply receives user input, which is a big limitation for care of health: ordinary people start looking for medical information only after getting sick, and few of them take preventive action before illness. Recently, the awareness concept has been widely used in various domains that need high-level intelligent decisions [13, 14]. For example, a smart power grid needs awareness of the traditional power supply in order to reduce wasted energy [15]. Kainulainen et al. [16] describe awareness information conveyed with speech and sound. Data mining on social networks is also a kind of awareness [17]. We can likewise make the system aware of the user's condition, in order to realize care of health and not only healthcare. For this purpose, in this paper we propose two awareness layers to enhance the information system developed earlier [12]. As shown in Fig. 1, the inner disease awareness layer is used for disease diagnosis on the ontology level, and the outer health awareness layer is used to communicate with the user and motivate the user from sickness to health. Both layers need the user's interaction. The system is based on the following concepts:

  • The system understands the situation and knows what to do and why. This helps the user to understand the diagnostic procedure.

  • The system applies a "user in the loop" structure. It does not control the user, but communicates and exchanges information with the user and stimulates him/her to do the right thing.

  • The system is aware of the user's health condition. If there are one or more problems, the system returns feedback. This procedure continues until the system reaches the final goal: the user returns to a healthy condition.

Fig. 1

The proposed two-layer medical system. The inner layer, the disease awareness layer, is used for disease diagnosis on the ontology level. The outer layer, the health awareness layer, is used to communicate with the user and motivate him/her from sickness to health. Both layers need user interaction in the loop

To realize awareness, knowledge in the field of healthcare should be described in such a way that information systems can understand it and execute all kinds of tasks automatically [18]. Within the last decade, ontologies have emerged as a powerful standard for representing computer-tractable knowledge [19]. The Semantic Web initiative is one of the driving forces behind this development [20]; it is the concept of adding meaning to the data presented on the web. The associated semantic descriptions can be used by automated reasoners to infer implicit meaning and relations, providing more contextually relevant information. Current ontology languages such as OWL-DL have stable, well-founded and expressive semantics, using description logics (DL) as the underlying knowledge representation standard.

This paper is organized as follows: Sect. 2 provides the preliminary knowledge for this research, briefly introducing the concept of awareness and ontology technology; Sect. 3 introduces the disease awareness component; the health awareness component with a user-in-the-loop structure is discussed in Sect. 4; Sect. 5 shows the application; the summary and conclusions are given in Sect. 6.

2 Preliminaries

2.1 Awareness system

Computational awareness (CA) is an important feature in cybernetics. We define awareness as a mechanism for obtaining information or materials that are useful for human users, for other systems, or for other parts of the same system, in order to make decisions. In general, awareness does not necessarily lead immediately to understanding [13]. Figure 2 shows the structure of an awareness system. We say a system is aware only if it has the following functions:

  • An awareness system is aware of the current state or condition of the system and the environment.

  • An awareness system is aware of the goal state to reach.

  • An awareness system is aware of the method to reach the goal from any initial state.

Fig. 2

A cybernetic system with awareness. An awareness system must be aware of the current condition and the goal situation, and must know the method to reach the goal. The governor controls or changes the system state toward the goal step by step. An awareness system should be able to reach the goal from any initial condition

In plain words, an awareness system should be able to know where it is (condition awareness), where to go (goal awareness) and how to go (method awareness).

Condition awareness is basically a pattern recognition problem; the point is how to design a good pattern recognizer, which is a search problem. Goal awareness is also a pattern recognition problem when the goal is known. However, in many cases the goal of a cybernetic system may not be pre-defined, and must be found dynamically in real time. In method awareness, the governor controls the system by changing the system parameters. If there is no direct approach from the current state to the goal, the governor should be able to generate temporary goals (short-term goals) that can be reached directly, in order to drive the system towards the final goal step by step. Being aware of the method to reach the goal is thus still a search problem. Usually, the search process is iterative: the search is conducted locally, in order to find a good direction and improve the current situation. This is more difficult than condition and goal awareness.

In disease diagnosis, the goal and conditions depend highly on user interactions and may not be easily identified in advance. Therefore, we applied the user-in-the-loop structure, which means treating the user as a component of the awareness system, see Fig. 3. The user receives information from the governor and adds his/her additional information into the loop. This structure is typically used when problems and requirements cannot easily be identified by simulation alone.

Fig. 3

Awareness structure with the user in the loop. The user-in-the-loop structure can accommodate more conditions

Fig. 4

Medical ontology construction from the Internet. Step 1: Internet information is collected from different professional sites. Step 2: sub-ontologies are generated. Step 3: all sub-ontologies are merged into one mega ontology

2.2 Ontology

An ontology is an explicit specification of a conceptualization. It is a body of knowledge describing some domain, typically a common-sense knowledge domain. An ontology contains three basic concepts: class, individual and property. A class can be defined by extension or by intension. An individual is an instance or object of a class, and is the basic component of an ontology. Properties specify how objects in an ontology are related to other objects.

The ontology construction workflow is shown in Fig. 4. In the first step, we used a Web spider to collect information from different professional sites. The PHP library package simple_dom_HTML is used to convert the various site pages to XML, and dedicated PHP programs are then written for each site [21]. For example, from the Mayo Clinic site we can get relations between diseases, symptoms and medicines [5], and from the site ZorgkaartNetherland we can get institute names and professional names from all over the Netherlands [22]. Eastin.eu lists thousands of pieces of assistive equipment [23]. Concepts and relations are stored in a MySQL database [24].

After the relations have been extracted by these programs, a team of medical doctors checks the data manually, in order to keep our data authoritative.

Step 2 generates sub-ontologies for each pair of neighboring domains. In this research, we adopted a simple correspondence between the relational database and the ontology: table names are classes, records are individuals, and relationships between two tables are properties. Following this correspondence, we use OWL API 3 to convert classes, individuals and relations from MySQL into sub-ontologies programmatically [25]. Each sub-ontology describes the relations between two classes.
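
The correspondence can be illustrated with a small sketch. The following Python snippet is purely illustrative (our implementation uses OWL API 3 on the MySQL data); the table contents and the hasSymptom property below are example values, not the actual schema:

```python
# Sketch: map relational records to ontology triples (subject, property, object).
# Table names become classes, records become individuals, and the link between
# the "disease" and "symptom" tables becomes the hasSymptom property.

disease_table = [{"name": "Diabetes"}, {"name": "Prediabetes"}]
symptom_table = [{"name": "Fatigue"}, {"name": "Increased thirst"}]
disease_symptom_links = [("Diabetes", "Fatigue"),
                         ("Diabetes", "Increased thirst"),
                         ("Prediabetes", "Increased thirst")]

def build_sub_ontology(class_a, rows_a, class_b, rows_b, links, prop):
    """Build one sub-ontology relating two classes via one property."""
    individuals = {class_a: {r["name"] for r in rows_a},
                   class_b: {r["name"] for r in rows_b}}
    triples = {(subj, prop, obj) for subj, obj in links}
    return {"classes": [class_a, class_b],
            "individuals": individuals,
            "triples": triples}

disease_symptom_onto = build_sub_ontology(
    "Disease", disease_table, "Symptom", symptom_table,
    disease_symptom_links, "hasSymptom")
```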

In step 3, a mega ontology is generated by merging all sub-ontologies, in order to integrate them. We used OWL API 3 to merge all sub-ontologies into one ontology [25]. In this procedure, duplicated and synonymous individuals are merged into a unique concept. The merge operation makes information exchange between different knowledge domains possible. This mega ontology plays the role of the knowledge base for the reasoner.
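
A minimal sketch of the merge step, assuming a curated synonym map (in the real system duplicates and synonyms are identified with the help of the medical expert team; all names below are illustrative):

```python
# Sketch: merge sub-ontologies into one mega ontology, collapsing duplicated
# and synonymous individuals into a single canonical name.

SYNONYMS = {"Extreme thirst": "Increased thirst",
            "Thirst": "Increased thirst"}

def canonical(name):
    return SYNONYMS.get(name, name)

def merge_ontologies(sub_ontologies):
    mega = {"classes": set(), "individuals": {}, "triples": set()}
    for onto in sub_ontologies:
        mega["classes"].update(onto["classes"])
        for cls, members in onto["individuals"].items():
            mega["individuals"].setdefault(cls, set()).update(
                canonical(m) for m in members)
        mega["triples"].update(
            (canonical(s), p, canonical(o)) for s, p, o in onto["triples"])
    return mega
```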

2.3 Reasoner

Knowledge stored in an ontology can be expressed in DL. In DL, a fact is represented as a triple of the form "subject property object". For example, the fact "the disease diabetes is usually accompanied by fatigue and hunger" can be represented as "diabetes hasSymptom fatigue" and "diabetes hasSymptom hunger", where "diabetes" is the subject, "hasSymptom" is the property, and "fatigue" and "hunger" are the objects. A DL reasoner is a piece of software able to query and infer logical consequences from a set of asserted facts or axioms. It accepts a DL query as input and returns OWL individuals/structures. DL queries allow users to quickly test class definitions to see that they subsume the appropriate subclasses and individuals.

A simple DL query for individual retrieval contains three parameters: a property, a mode and individual names. The property specifies the relationship, and the individual names are the query inputs. Suppose the user has two symptoms, Symp 1 and Symp 2, and wants to find the related diseases. The DL query "hasSymptom value Symp 1" retrieves the set of diseases that have symptom Symp 1, and "hasSymptom value Symp 2" retrieves another disease set that has symptom Symp 2. If the user wants the diseases that have both Symp 1 and Symp 2, we calculate the intersection of the two disease sets. In the DL query, the mode should then be "and", i.e. "hasSymptom value Symp 1 and hasSymptom value Symp 2". For the set of diseases that have Symp 1 or Symp 2, we use "hasSymptom some {Symp 1, Symp 2}" to build the union set. The "and" and "some" modes are equivalent when the number of individuals is one.
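
The two query modes reduce to set intersection and union over the asserted triples. A small sketch, using plain Python sets in place of the reasoner and illustrative triples:

```python
# Sketch of the "and" and "some" DL query modes as set operations over triples.

def retrieve(triples, prop, obj):
    """All subjects s such that (s, prop, obj) is asserted."""
    return {s for (s, p, o) in triples if p == prop and o == obj}

def query_and(triples, prop, objects):          # diseases having ALL symptoms
    sets = [retrieve(triples, prop, o) for o in objects]
    return set.intersection(*sets) if sets else set()

def query_some(triples, prop, objects):         # diseases having ANY symptom
    sets = [retrieve(triples, prop, o) for o in objects]
    return set.union(*sets) if sets else set()

triples = {("Diabetes", "hasSymptom", "Fatigue"),
           ("Diabetes", "hasSymptom", "Increased thirst"),
           ("Prediabetes", "hasSymptom", "Increased thirst")}

print(query_and(triples, "hasSymptom", ["Fatigue", "Increased thirst"]))
# intersection: only Diabetes has both symptoms
print(query_some(triples, "hasSymptom", ["Fatigue", "Increased thirst"]))
# union: Diabetes and Prediabetes; with one symptom the two modes coincide
```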

FaCT++ is used as the backend reasoner; it is the new generation of the well-known FaCT OWL-DL reasoner [26]. FaCT++ is implemented in C++ in order to create a more efficient software tool and to maximize portability.

3 Disease awareness layer

Although a reasoner can infer from the current knowledge base, individuals, values and properties must be written according to description logic syntax. Users without sufficient IT and medical training may find it difficult to explore the knowledge in an ontology database. Take the symptom ⇔ disease relationship as an example: the relation between disease and symptom is "disease hasSymptom symptom", and the inverse relation from symptom to disease is "symptom isSymptomOf disease". With these relations, the layer can provide helpful disease information from symptom/cause inputs.

The disease awareness layer not only helps the user to get possible diseases from an initial symptom/cause input, but is also applicable to other relations. For example, by using the "disease treatedBy medicine" and "medicine canCure disease" relations, this awareness layer can be applied to explore disease ⇔ medicine information. In the following sections, we use the symptom ⇔ disease relation for clarity. The layer is called the disease awareness layer simply because the symptom ⇔ disease relation is the most important, the most complex and the most commonly used relationship in a diagnosis.

Figure 5 shows the workflow of the disease awareness layer. This layer receives the user's condition and outputs possible diseases. Several input components are used to understand the current state. A reasoner can output diseases from a single symptom. However, when multiple possibilities, conceptual conflicts or multi-morbidity exist, the final diagnosis usually cannot be made directly. Depending on the intermediate results, the reasoner needs additional information to reach the final goal. It is not easy to define all states based on intermediate information, so this task is left to the user. A feedback component provides helpful information to the user. The user revises the condition and starts the next query, until the final decision can be made.

Fig. 5

The disease awareness layer. The user generates inputs, including direct symptom and cause inputs and general and individual level inputs. The whole input condition is converted to standard ontology knowledge by a mapping operation. The reasoner outputs inferred diseases to the feedback component. The user edits the input condition and queries again, until the final disease is inferred

3.1 Input component

The user's condition is represented by the user's inputs. For disease diagnosis, the inputs are symptoms and disease causes. The user can state some symptoms directly, but he/she cannot be aware of all relevant inputs. For example, an older man whose mother has diabetes may not be aware that he is in a relatively high-risk group for diabetes. Another person may have just received his physical examination results, but there are too many lab data that he cannot understand. In order to make this information available to the reasoner, we use a general input component and an individual input component. All input is mapped to standard ontology knowledge before the reasoner is used.

The general input component collects the user's general health condition through a set of questions and outputs relevant cause clusters to the reasoner. The cause clusters are collected from the Mayo Clinic site and checked one by one by our medical experts [5]. As shown in Fig. 6, these clusters include heredity, age, lifestyle, sex and so on. In the ontology, a hasGeneralCause property is used to indicate the general causes of a certain disease or symptom. For example, type I diabetes has a hereditary cause, while type II diabetes has not only a hereditary cause but also a lifestyle cause. Baby acne only occurs in newborns and becomes much less common after the age of 1.

Fig. 6

General level input component. This component collects the user's general health condition and outputs relevant cause clusters to the reasoner

The general input component converts the user's input to standard ontology knowledge, in order to make the system aware of the user's general condition. Note that the general input component does not make a diagnosis directly; it plays the role of a filter. After the inferred results are obtained, only results containing causes generated by the general input component are kept; all other unrelated diseases or symptoms can be removed from the inferred set. For example, a 30-year-old man will never have baby acne, and no gynecological diseases are relevant to him. Our experiments show that the general input component can filter out up to 70 % of the diseases, which is a significant reduction of the original disease set.
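
A minimal sketch of this filtering step; the hasGeneralCause assertions and cause cluster names below are illustrative (the real cause clusters are curated from the Mayo Clinic data by our medical experts):

```python
# Sketch of the general input component acting as a filter: inferred diseases
# whose general causes do not overlap the user's cause clusters are removed.

GENERAL_CAUSES = {               # disease -> general cause clusters (illustrative)
    "Type I diabetes": {"hereditary"},
    "Type II diabetes": {"hereditary", "lifestyle"},
    "Baby acne": {"newborn"},
}

def general_filter(inferred_diseases, user_cause_clusters):
    """Keep only diseases whose general causes intersect the user's clusters."""
    return {d for d in inferred_diseases
            if GENERAL_CAUSES.get(d, set()) & user_cause_clusters}

# A 30-year-old man with a family history of diabetes:
user_clusters = {"hereditary", "lifestyle", "adult", "male"}
print(general_filter({"Type I diabetes", "Type II diabetes", "Baby acne"},
                     user_clusters))
# Baby acne is filtered out; both diabetes types remain
```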

In order to make a definite diagnosis, the system may need the patient's situation in detail, i.e. how serious the situation is. The individual level input component is used here to handle more explicit data, such as symptom descriptions and lab data. Numeric lab data can be obtained from a medical examination. Some home medical devices/sensors can also be used to collect numeric data, such as blood pressure, heart rate, and sleep time, duration and quality, see Fig. 7. Complete definitions of lab data can be found at Lab Tests Online® [27]. With these definitions, the individual input component converts these data into relevant symptom descriptions. Note that in this research we mainly focus on how to reason over the user's input; users must obtain these data themselves from clinical examinations, sensors or devices.

Fig. 7

The individual level input component is used to get more explicit data from a user

When individual inputs are received, a mapping operation is conducted so that all inputs are converted to standard ontology knowledge. This procedure uses existing text mining algorithms. First, all user inputs are stemmed and stop words are removed; second, the sentences are converted to word vectors by the term frequency-inverse document frequency (tf-idf) model; finally, cosine similarity is used to find the candidate ontology entities [28].
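
A sketch of this mapping using scikit-learn, omitting the stemming step for brevity; the entity labels are illustrative and not the system's actual configuration:

```python
# Sketch: ontology entity labels and the user's free-text input are embedded
# as tf-idf vectors, and cosine similarity ranks candidate entities.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

entity_labels = ["Increased thirst", "Extreme thirst", "Blurred vision",
                 "Abdominal pain", "Fatigue"]

def map_to_entities(user_text, labels, top_k=3):
    vectorizer = TfidfVectorizer(stop_words="english")
    n = len(labels)
    vectors = vectorizer.fit_transform(labels + [user_text])
    sims = cosine_similarity(vectors[n], vectors[:n]).ravel()
    ranked = sorted(zip(labels, sims), key=lambda x: x[1], reverse=True)
    return ranked[:top_k]

print(map_to_entities("my thirst has increased a lot", entity_labels))
# "Increased thirst" ranks first (two shared terms), "Extreme thirst" second
```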

3.2 Feedback component

In the medical ontology, a symptom such as "abdominal pain" may have too many relevant diseases. Diagnosis consists of eliminating these possibilities, and typically this is not a one-step task. In the disease awareness layer, the goal is to find the proper disease, and condition awareness has already been handled by the input components. The remaining task is how to reach the goal. The disease awareness layer imitates a doctor's diagnosis procedure through a feedback component.

The feedback component provides useful information to the user according to the inferred result of the reasoner. The schema of the feedback component is shown in Fig. 8. For a set of symptoms/causes, the reasoner first checks the inference conditions. If any condition needed for reasoning is missing, the feedback component calls a question generator, which asks the user to fill in the missing condition. After that, the reasoner retrieves the disease set for each symptom and computes their intersection. The resulting disease set is sent to the feedback component. Depending on the number of diseases in the intersection, N_d, the feedback component behaves differently.
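
The dispatch on N_d can be sketched as follows; the threshold separating "a few" from "many" diseases is illustrative (the text below uses more than 5 as an example), and the returned labels correspond to the components described in Sects. 3.2.1–3.2.3:

```python
# Sketch of the feedback component's dispatch on N_d, the size of the
# intersection of the per-symptom disease sets.

def feedback_action(per_symptom_disease_sets, few_limit=5):
    """Decide the next feedback step from per-symptom disease sets (one set per symptom)."""
    if not per_symptom_disease_sets:                      # missing conditions
        return "question_generator", set()
    diseases = set.intersection(*per_symptom_disease_sets)
    n_d = len(diseases)
    if n_d == 0:
        return "multi_morbidity_checker", diseases        # Sect. 3.2.3
    if n_d <= few_limit:
        return "symptom_pattern_controller", diseases     # Sect. 3.2.1
    return "symptom_checker", diseases                    # Sect. 3.2.2
```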

Fig. 8

The feedback component. According to the inferred result, the feedback component provides helpful information and tools to the user. The user revises his/her input and goes to the next query loop, until the final decision can be made

3.2.1 One disease or a few diseases

If the number of inferred diseases is 1, or slightly larger than 1, the feedback component calls the symptom pattern controller. The controller lists all standard symptoms for each disease and asks the user to select those that apply. The reliability of a disease increases as the number of checked symptoms increases. When the reliability of one disease exceeds a threshold, this disease is treated as the definite diagnosis of the disease awareness layer.
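
The exact reliability measure is not detailed here; a simple sketch, assuming reliability is the fraction of a disease's standard symptoms confirmed by the user and an illustrative threshold:

```python
# Sketch of the symptom pattern controller's reliability check.

def reliability(standard_symptoms, checked_symptoms):
    """Fraction of a disease's standard symptoms confirmed by the user (assumed measure)."""
    confirmed = standard_symptoms & checked_symptoms
    return len(confirmed) / len(standard_symptoms) if standard_symptoms else 0.0

def definite_diagnosis(disease_symptoms, checked, threshold=0.5):
    """Return the best disease if its reliability reaches the (illustrative) threshold."""
    scores = {d: reliability(s, checked) for d, s in disease_symptoms.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

symptoms_of = {"Diabetes": {"Increased thirst", "Blurred vision", "Fatigue"},
               "Prediabetes": {"Increased thirst", "Fatigue", "Hunger"}}
print(definite_diagnosis(symptoms_of, {"Increased thirst", "Blurred vision", "Fatigue"}))
# Diabetes (reliability 1.0 vs 0.67 for Prediabetes)
```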

3.2.2 Many diseases

If N_d is larger, for example more than 5 diseases, the workload of reliability checking becomes too high. This happens when the user inputs a common symptom such as "abdominal pain". In this situation, the reasoner needs more input to reduce the disease set. The feedback component asks the user to select candidate symptoms from a symptom checker. A symptom is a candidate symptom if it shares at least one disease with a symptom the user has input. Figure 9 shows the candidate generation method. Given the selected symptom(s), the reasoner uses the isSymptomOf property to generate a union of disease sets. The generated disease set is then used to obtain a larger union of symptom sets, using the inverse property hasSymptom.
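
The candidate generation of Fig. 9 can be sketched directly over the triple set; the diseases and symptoms below are illustrative:

```python
# Sketch of candidate symptom generation (the three steps of Fig. 9), using a
# plain triple set in place of the reasoner.

TRIPLES = {("Appendicitis", "hasSymptom", "Abdominal pain"),
           ("Appendicitis", "hasSymptom", "Fever"),
           ("Gastritis", "hasSymptom", "Abdominal pain"),
           ("Gastritis", "hasSymptom", "Nausea")}

def candidate_symptoms(selected):
    # Step 1: isSymptomOf -- diseases related to any selected symptom (union)
    diseases = {d for (d, p, s) in TRIPLES if p == "hasSymptom" and s in selected}
    # Step 2: hasSymptom -- all symptoms of those diseases (union)
    symptoms = {s for (d, p, s) in TRIPLES if p == "hasSymptom" and d in diseases}
    # Step 3: unchecked symptoms become candidates
    return symptoms - set(selected)

print(candidate_symptoms({"Abdominal pain"}))
# candidates: Fever and Nausea
```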

Fig. 9

Example of candidate symptom generation. Step 1: For each symptom, use isSymptomOf property to get relevant disease(s); Step 2: For each disease, use hasSymptom to get relevant symptom set; Step 3: Calculate the union symptom set, and then mark all unchecked symptoms as candidate symptoms

3.2.3 No disease

An empty inferred disease set means there is no single disease that has all the input symptoms. In this situation, a multi-morbidity checker is invoked. Multi-morbidity is defined as the co-occurrence of two or more chronic medical conditions in one person.

As shown in Fig. 10, if the user selects a mutual symptom from the candidate symptom list, the multi-morbidity checker recognizes the situation. This crucial function helps the user to understand multi-morbidity.
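
One possible realization of this check is sketched below; it is not the system's exact algorithm, but it illustrates the idea that two diseases jointly explain the inputs and share at least one mutual symptom (the symptom sets are illustrative):

```python
# Sketch of a multi-morbidity check: when no single disease explains all
# symptoms, look for disease pairs that together cover the inputs and share
# a mutual symptom.

DISEASE_SYMPTOMS = {"Disease 1": {"Symp A", "Symp B", "Symp M"},
                    "Disease 2": {"Symp C", "Symp M"}}

def multimorbidity_candidates(user_symptoms):
    """Pairs of diseases that together cover the inputs and share a symptom."""
    pairs = []
    names = list(DISEASE_SYMPTOMS)
    for i, d1 in enumerate(names):
        for d2 in names[i + 1:]:
            s1, s2 = DISEASE_SYMPTOMS[d1], DISEASE_SYMPTOMS[d2]
            if user_symptoms <= (s1 | s2) and (s1 & s2):
                pairs.append((d1, d2, s1 & s2))
    return pairs

print(multimorbidity_candidates({"Symp A", "Symp C", "Symp M"}))
# [('Disease 1', 'Disease 2', {'Symp M'})]
```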

Fig. 10

Multi-morbidity. Figure 9 shows an example of unrelated diseases, as there is no mutual symptom between diseases 1 and 2. In multi-morbidity, however, there is at least one mutual symptom

4 Health awareness

The outer health awareness layer is used to communicate with the user and motivate the user from sickness to health. As shown in Fig. 11, this layer contains two awareness functions. The Natural Language Interpreter (NLI) is aware of the user's intention. For health awareness, the DL query set generated by the NLI is the condition; the inferred result is the goal; depending on the result, a communication component shows the user how to reach the goal; and the user is the executor. Like the disease awareness layer, the health awareness layer is thus aware of the condition, the goal and the method to reach the goal.

Fig. 11

The health awareness layer. This layer contains two awareness functions. The Natural Language Interpreter is aware of the user's intention. For health awareness, the DL query set generated by the NLI is the condition; the inferred result is the goal; depending on the result, a communication component shows the user how to reach the goal; the user is the executor

4.1 Natural language interpreter

The NLI understands the user's intention and then uses the reasoner to infer the results the user desires. In the NLI, the ontology is treated as a directed graph, where nodes represent classes and directed edges represent properties between classes. Each node is described by several natural language sentences. Figure 12 shows the mega ontology developed in Sect. 2.2. Given an input sentence, the most similar node can be found by the k Nearest Neighbor (kNN) algorithm [29]. Initially, the user's input is mapped to a start node and an end node. A shortest route is then generated from the start node to the end node, with each sub-route representing a sub DL query. The NLI output is a DL query or a set of DL queries.
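
A sketch of the route planning over the class graph using networkx; the classes and property names are a small illustrative subset of the mega ontology, and the kNN mapping of the sentence onto start/end nodes is assumed to have been done already:

```python
# Sketch of the NLI's route planning: the class graph is searched for a
# shortest path, and each edge on the path becomes one sub DL query.

import networkx as nx

g = nx.DiGraph()
g.add_edge("Symptom", "Disease", prop="isSymptomOf")
g.add_edge("Disease", "Medicine", prop="treatedBy")
g.add_edge("Disease", "Treatment", prop="hasTreatment")   # property name illustrative

def plan_queries(start, end):
    """Turn the shortest class route into a chain of sub DL queries."""
    route = nx.shortest_path(g, start, end)
    return [(a, g.edges[a, b]["prop"], b) for a, b in zip(route, route[1:])]

print(plan_queries("Symptom", "Treatment"))
# [('Symptom', 'isSymptomOf', 'Disease'), ('Disease', 'hasTreatment', 'Treatment')]
```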

Fig. 12

The mega ontology generated in Sect. 2.2. The ontology structure can be treated as a directed graph, where a node represents a class and the edge between two nodes is the property between two domains

The NLI sends the DL queries to the reasoner and starts querying. As shown in Fig. 11, after each sub-query the system calls the disease awareness layer and asks the user to check the sub-results. If the results are not satisfactory, the user can add additional information in the disease awareness layer to improve the query quality. The result of the previous DL query is used as the input of the next query. For example, the user input "My abdominal feels pain, what medicine should I use?" aims to find a treatment from a symptom. However, there is no direct relation between Symptom and Treatment; the shortest route between Symptom and Treatment is Symptom ⇒ Disease ⇒ Treatment.

4.2 Communication component

As discussed before, an awareness system knows what is happening and what to do. The goal of our system is to create awareness and to motivate the user towards a healthy condition. However, the user is not under the direct control of the system; we cannot force a user to do something. That is the reason we treat the user as a system component. The system provides services and tries to communicate with the user. When the user has enough information and realizes its importance, he/she may start to change.

Once the reasoner obtains an inferred result, the goal of health awareness is defined. For a disease, the goal is to cure it or relieve the symptoms. For a symptom, the goal is to find out the causes and the relevant diseases. For a medical instrument, the goal is the product specification and how to buy it. Thus, reaching the goal means providing relevant information to the user, so that the user can take action.

This work is done by the communication component. However, the communication component does not simply list information to the user; it tries to communicate with the user in an interactive way. For example, if the user has a bad living habit, the communication component first shows the problem to the user, then tells the user what will happen if he/she continues this habit, and finally provides several methods to get rid of the habit. The component communicates with the user in the first person. The user can feel that the system really "understands" him/her, which creates a situation of trust. Past user data can also be used to evaluate the user's effort. If the final result is a disease, the system shows proper treatments and medicine information. If the user cannot perform self-treatment, the system can provide hospital information, professional information and preliminary knowledge before seeing a doctor. Communication with the user is not limited to plain text; all media formats can be used, such as graphs, audio and videos.

5 Self-diagnosis support system

In this section, two scenarios are shown to demonstrate the use of the proposed system. The first scenario concerns disease awareness: the user feels increased thirst and wants to find out the reason. In the proposed system, the procedure is shown in Fig. 13:

  • (a) Initial point: The user inputs "I feel increased thirsty. What is my problem?". After clicking the Check button, the system recognizes this is a symptom ⇒ disease query. The system show helping a message to ask the user if he wants to find disease from symptom description.

  • (b) Mapping function: The user confirms the question by clicking the Yes button. Based on the user's description, the system finds similar symptoms. The best-fitting symptom is "Increased thirsty", listed in the selected list box. Other similar symptoms, such as "Extreme thirst" and "Thirst", are listed in the candidate list box. This mapping function normalizes the user input to standard OWL individuals.

  • (c) Symptom checker: The user selects the default symptom to search with and gets many relevant diseases. The system generates a symptom list from the inferred diseases and asks the user to add additional information. As shown in the figure, the user selects "Blurred vision".

  • (d) Symptom pattern controller: After "Add description" is clicked, the system considers the two symptoms together and gets fewer diseases. The symptom pattern controller is activated to check the three inferred diseases: Diabetes, Hyperglycemia in diabetes and Prediabetes.

  • (e–g) Check symptoms for each disease: The user is asked to check the symptoms of each disease. The system calculates the disease reliability during this procedure. Diabetes obtains the highest reliability, 58 %, compared with 10 and 21 % for the other two diseases.

  • (h) Final result: Thus, the user has a very high probability of having diabetes. Information on relevant treatment methods, medicines and risk factors is listed in the final result panel.

Fig. 13

Scenario of disease awareness. a Initial point, b mapping, c too many results, call symptom checker, d fewer results, call symptom pattern controller, e check symptoms of diabetes, f check symptoms of hyperglycemia in diabetes, g check symptoms of prediabetes, h inferred disease: diabetes

Fig. 14

Scenario of health awareness. a Initial point, from symptom to medicine, b disease awareness, c disease awareness (continued), d inferred medicines, e final result, f information for Ifosfamide

In the second scenario, the user wants to find a medicine for his disease. As shown in Fig. 14:

  • (a) Initial point: The user inputs "I feel abdominal pain. What medicine can I use?". After the Check button is clicked, the system recognizes that this is a symptom ⇒ medicine query. There is no direct relation for this query, so the system generates an intermediate disease node to reach the final goal: it finds the symptom ⇒ disease relation first, and then finds medicine information from the inferred disease(s).

  • (b–c) Disease awareness: The disease awareness layer helps the user to find a high-reliability disease, "Wilms's tumor".

  • (d) Reasoning about medicines: Based on the intermediate result "Wilms's tumor", the system gets five related medicines. This query is similar to the disease awareness procedure; only the input and output differ.

  • (e–f) Final results: The final results are listed here. Clicking on one medicine shows more information.

The system supports more queries, such as special list queries, medical instrument queries, ISO/ICD queries and so on. More information is available in the system, such as location, price and supplier information. Due to page limitations, we skip these features here.

6 Summary and conclusion

In this paper, we proposed an awareness support system for self-diagnosis. The system receives natural language as input and outputs the medical information the user intends to find. The system is aware of the user's condition and intention. There are two circular awareness layers in the system. The inner layer, called the disease awareness layer, is used to explore information between any directly connected domains. The outer layer is used for health awareness: it uses an NLI to understand the user's intention, calls the disease awareness layer for reasoning, and uses a communication component to motivate the user towards a healthy condition. The system can help ordinary users to understand their health condition, perform self-diagnosis and, to some extent, realize disease prevention.

As future work, there is still a lot to do. First of all, the current system has only been evaluated by our medical experts. Its effectiveness may be generally acknowledged by medical doctors, but no system has been developed so far to demonstrate this effect, so we should introduce criteria for validation. Second, the user interface is designed for medical doctors; we should improve the interface and the feedback component for ordinary users. The system should be able to generate smart questions in order to reduce the operations needed to obtain the final result, which again requires awareness technology. Finally, we would like to provide an online system for self-diagnosis support. We will open the system to volunteer doctors for testing; after that, we will test the system in the Netherlands, and then worldwide. With the web-based system and a friendly graphical user interface, we believe that we can help users to improve their health condition in daily life.