
1 Introduction

1.1 The Concept of Molecular Biomarkers

The last two decades have seen extensive effort put into genome and proteome research, leading to a deeper understanding of the molecular basis of diseases: their occurrence, development, and cure. As a consequence of this knowledge, more suitable therapies are on the horizon and are widely discussed as “personalized medicine”. Molecular diagnostics will be an integral part of this concept, since the choice of medication, the monitoring of treatment success, and the detection of specific biomarkers for early or even presymptomatic diagnosis of disease will move into the focus of medical care. Genetic markers for risk screening, and all aspects of companion diagnostics that tailor medication to the genetic constitution of a patient, will likewise help to provide improved therapy.

Therefore, the molecular in vitro diagnostics market has good forecasts and is regarded as a worldwide growing market. Especially when combined with point-of-care testing (sometimes better described as point-of-need testing), in vitro diagnostics may significantly increase the benefit obtained from molecular knowledge. Biochip and Lab-on-Chip technologies designed for routine application open up the opportunity of performing complex, multiparameter analyses on a small scale. Lab-on-Chip systems thus have the potential to bring molecular diagnostics to the point-of-need.

A key component of the future development of diagnostics is the concept of biomarkers. In general, biomarkers are quantifiable parameters obtained from a patient that correlate with a particular disease. Usually a biomarker qualifies as a surrogate marker once clinical studies have shown that it represents a certain disease, a disease stage, or the patient’s response to a particular treatment or medication. While the general concept of a biomarker includes all kinds of physiological data, such as heart rate and lung volume, molecular biomarkers comprise biochemical or genetic parameters and patterns thereof, sometimes called signatures.

In addition, diagnostics becomes more complex when a deeper look at these various biomarkers is necessary (Fig. 1). Within the Human Genome Project about 25,000 genes were identified. According to the central dogma of molecular biology, these genes are transcribed into various forms of RNA, then translated into proteins and post-translationally modified. Going one step further, variations in metabolism may be linked to diseases as well. Accordingly, genomic, proteomic, glycomic, and metabolomic research has created the need to detect and quantify completely different types of analytes, ranging from genes and proteins to small molecules and combinations thereof.

Fig. 1

Differentiation of biomarkers and number of possible diagnostic targets (data retrieved from corresponding homepages [30])

A multitude of detection methods is needed to match the requirements of each analyte. However, in many cases these methods are too sophisticated for routine diagnostics and, in most cases, too expensive as well. Hence, there is a need for new, more user-friendly technologies. This will benefit many patients and may additionally help to reduce the costs caused by incorrect or delayed treatments.

1.2 Societal Needs

Once this type of biomarker-related diagnosis is established, the accumulation of relevant data over time by electronic means will lead to increasing insight into long-term effects, and eventually presymptomatic or even prognostic diagnosis may be achieved (Fig. 2).

Fig. 2

Biomarkers will help to provide diagnostics at various stages of a disease, even before a patient shows symptoms. During or after medical intervention, biomarker-based diagnostics will also help to follow the success of the treatment

Taking advantage of these new biomedical findings will provide an opportunity for improved patient-centered care. Hence, biomarker-based diagnostics will not only be used for curative purposes but also for prevention of diseases, enhancement of therapy success, and in general for increasing the quality of life.

From a more societal point of view, diagnostics can effectively reduce costs within health-care systems. In terms of personalized medicine, collecting data over time will allow more rational access to the best therapy, and for large collectives of patients the gathered data will yield valuable information for a health-care system. Biomarker-based diagnostics will thus help to reduce health-care costs by reducing the number of second-line therapies, treatment costs, the number of follow-up therapies, nursing, consequential costs, and the period of sickness absence. So far, there are only very few economic studies on the impact of point-of-care testing. One example was undertaken for emergency departments in the U.S.: it was shown that fast measurement of the diagnostic marker troponin directly at the patient’s side can reduce costs by a factor of four by shortening patient management time and improving the overall outcome for the patient [11].

From a more holistic viewpoint, and considering the aging and growing population as well as the ramifications of globalization, more versatile diagnostic technologies will be necessary. Moreover, diagnostics will have to be available more frequently, and hence technologies have to be found that enable patient-near testing with the same quality as in the laboratory.

The scenario of bedside analysis is the first field of application of point-of-care testing (POCT). A short return time from sampling to the location of the decision-maker, the so-called “turn-around time” (TAT), is of great interest to the physician during clinical rounds. In the doctor’s office, too, it would often be of great help if the doctor had access to blood parameters or other laboratory results while the patient is still present. The decision for therapy could then be better targeted, and the patient would be pleased to be well informed about the physician’s decision.

The role of biomarkers is to support the decision as to which therapy might be most promising. A well-established biomarker may also guide the choice of medication and the appropriate dose for the individual patient. This scenario is called “companion diagnostics” and refers to the need for most medications to be adjusted to the physiological and genetic constitution of each individual patient in order to be most effective, or sometimes even effective at all. Many drugs are known to be metabolized more or less effectively by different patients, but up to now this information is only seldom available and usable by a doctor in the office. Such information is of help only if it is available immediately.

The utility of biomarkers will increase if samples other than blood, such as saliva, urine, or other easily accessible body fluids, can be tested, which may help to make diagnosis less invasive.

Individual consultation and personalized therapy are major trends of modern health care; both require diagnostics at the point-of-need and constitute what economists call the “market pull” for the development of Lab-on-Chip technologies for POCT [4].

1.3 Integration as a Key Parameter

Technologies for point-of-care testing have to meet various requirements, especially because of the circumstances in which they are used. Samples are not taken in a laboratory environment where trained users (medical personnel or physicians) could perform the steps of sample preparation and purification until results are obtained. The technology therefore has to provide user-friendly devices that automatically process a sample of any body fluid and present an interpretation of the measured results on a display. With this scenario in mind, the following key features for such technologies may be defined:

  • User-friendliness: The devices have to be as easy as possible to use. This includes not only sample preparation, but also handling of the device and the small sample volumes required for testing.

  • Miniaturization: In most application scenarios the device, and also a possible base unit, have to be as small as possible. With regard to the assay itself, miniaturization decreases the amount of sample needed for a particular analysis and is beneficial in terms of faster reaction times.

  • Parallelization: Because of the increasing knowledge in biomedicine, a parallel analysis of different biomarkers can be beneficial in many cases. Therefore, technologies should deliver more than a single parameter. The possibility of determining multiple parameters at once, so that a diagnosis does not rest on one parameter alone, can lead to faster therapeutic action and hence a better patient outcome.

  • Speed: Speed of analysis is crucial, since many current point-of-care applications are directly linked to therapeutic action. For example, point-of-care testing for diabetes is directly linked to the injection of insulin, and a test for the detection of a cardiac infarction is directly linked to immediate therapeutic action. In both examples patients benefit directly from a fast diagnosis, and in the second example speed is of particular importance.

  • Interdisciplinarity: The key to building such devices and fulfilling the above-mentioned criteria is the convergence of different technologies. Hence, an interdisciplinary approach has to be chosen in which biochemistry, electrical engineering, microfabrication, materials science, and production expertise all work together.

Taking these five key features into account, it is necessary to start as early as possible within the design process to think about a holistic system solution. In this process the concept of integration is essential since integration of steps, materials and processes may lead to the desired device features. In this regard, the following sections describe different degrees of integration and try to outline necessary design rules for implementation of interdisciplinary technologies for realizing systems for point-of-care testing.

2 Integration Steps

Point-of-care testing has to integrate laboratory-like procedures and guarantee laboratory standards. Moreover, POCT has to be connected to the data management system of the clinic or of the physician who is in charge of the patient. Figure 3 shows how integration of bioanalysis proceeds and which steps have to be taken during further development. It can be regarded as a road map for integration in POCT for the upcoming years.

Fig. 3

Different steps of technological integration

2.1 Biosensors and Biochips

The concept of biosensors has a long history; Clark’s glucose electrode, proposed in 1962, is usually named as the birth of the technology [6]. Biosensors were defined by IUPAC in 1992 as “a device that uses specific biochemical reactions mediated by isolated enzymes, immune systems, tissues, organelles or whole cells to detect chemical compounds usually by electrical, thermal or optical signals” [12]. By this definition, classical biosensors consist of two components: the biological receptor molecule and the transducer, which links the biochemical reaction to a quantifiable readout such as an optical or electronic signal.

From the technological side, in the early days of biosensors there was a clear separation between the receptor molecules and the transducer, realized by physical entrapment of the receptor molecule within a membrane. The membrane also served as a separation tool that allowed only the analyte of interest to pass through. The next step in the process of integration was the generation of biosensors in which the membrane, the receptor molecule, and the transducer were all combined in one compartment [19]. Hence, the processes of separation, binding, and transduction were located next to each other, enabling faster electron transfer, better biosensor response, and higher sensitivities. These so-called membrane sensors were then replaced by second-generation biosensors in which the membrane was no longer necessary. This could be accomplished by new and more specific recognition elements, which made the first separation step redundant. As fabrication technologies in microelectronics and microsystems progressed in the late 1980s, smaller and more affordable production of microelectromechanical systems (MEMS) was achieved, and thus receptor molecules, the transducer, and the electronics necessary for data generation could be combined. The convergence of these three components led to third-generation biosensors, sometimes also termed “biochips” [19].

The literature contains a vast variety of attempts and concepts for biosensors, and the number of publications is still increasing. Owing to the impact of various technologies, improvements can be seen in all of these components, with special emphasis on their interfaces. For example, the communication between enzymes and the electrode in an electrochemical sensor is of huge importance for its performance. In the amperometric detection mode, the measured electron flow corresponds to the conversion of a substrate. To enhance the number of electrons traveling from the enzyme to the electrode, two different methods can be chosen. One is to use a sophisticated connecting layer in which the enzyme can be embedded. By adding a redox mediator to this layer, an indirect electron transfer from the enzyme over the redox mediator to the electrode becomes possible. In a recent example, Nagel et al. showed the synthesis and application of a redox polymer based on poly(N-isopropylacrylamide) (PNIPAM) with incorporated ferrocene moieties for indirect electron transfer, using NAD-dependent glucose dehydrogenase (NAD-GDH) or pyrroloquinoline quinone-dependent GDH (PQQ-GDH) and glucose as the analyte [15]. The authors measured a heterogeneous electron transfer rate of 80 s−1, twice as high as that of a conventional self-assembled monolayer of a ferrocenepentanoate. Hence, hydrogels with incorporated mediators such as ferrocene can achieve a more effective electron transfer. The other method is to modify the recognition element itself. As one example, Demin and Hall modified a glucose oxidase (GOx) [10]. Using methods such as NMR spectroscopy and in silico calculations, two findings were obtained: (i) oligosaccharide structures on the surface of GOx enlarge the gap between the enzyme and the electrode; (ii) the path of the electron through GOx could be traced, revealing the hemisphere of the enzyme through which the electron passes to the electrode. On this basis, a genetically modified GOx was produced that bears no oligosaccharide structures and carries a surface modification facilitating direct immobilization. Hence, a better, direct electron transfer from the enzyme to the electrode could be accomplished.
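The scale of such amperometric signals can be illustrated with a back-of-the-envelope calculation. The sketch below assumes a simple linear model, i = nFAΓk_et; the electrode area and enzyme coverage are hypothetical round numbers chosen only for illustration, and only the 80 s−1 transfer rate comes from the work cited above.

```python
# Rough estimate of the steady-state amperometric current from an enzyme
# layer wired to an electrode via a redox mediator.  Assuming every
# immobilized enzyme molecule transfers electrons at the heterogeneous
# rate k_et, the current is i = n * F * A * Gamma * k_et.
# A and Gamma below are hypothetical values, not data from the cited work.

F = 96485.0      # Faraday constant, C/mol
n = 2            # electrons per catalytic turnover
A = 1e-6         # electrode area, m^2 (1 mm^2) -- assumed
Gamma = 1e-11    # enzyme surface coverage, mol/m^2 -- assumed
k_et = 80.0      # heterogeneous electron-transfer rate, 1/s (from [15])

i = n * F * A * Gamma * k_et
print(f"estimated current: {i * 1e9:.2f} nA")

# The current is linear in k_et, so doubling the transfer rate (the gain
# reported over a plain ferrocene monolayer) doubles the signal.
print(f"with doubled k_et: {2 * i * 1e9:.2f} nA")
```

The linearity explains why a twofold increase in the electron-transfer rate translates directly into a twofold larger signal for the same enzyme loading.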

This is a nice example of how the modification of biological recognition elements can lead to improved biosensor performance for applications such as glucose detection. Nevertheless, there is a trend to overcome the limitations of biological recognition elements, such as stability problems under harsh conditions or batch-to-batch variation, and to replace them with artificial receptor molecules. Besides chemical synthesis, which is in most cases tedious and time-consuming, two approaches to obtaining artificial receptors have been established in the last few decades. The first is the use of artificial DNA or RNA molecules called aptamers, which may act as antibody-like recognition elements. For their generation a process called systematic evolution of ligands by exponential enrichment (SELEX), invented simultaneously by Gold and Szostak [9, 25], is used, in which the tightest-binding DNA (or RNA) strands are chosen via a selection process. Through this generic approach, aptamers against different molecules can be generated and used in biosensor applications [13]. Since the binding event is not directly linked to signal generation, most applications using aptamers are combined with an optical transducer. One example is the detection of TNT by an aptamer within a fiber-optic biosensor; because of the selectivity of the aptamer, it was possible to discriminate TNT from other explosives [7].

The second is the concept of molecularly imprinted polymers (MIPs). Here, a polymerization is carried out in the presence of the analyte, which serves as the template during the imprinting process. The polymerization mixture also contains so-called functional monomers, which can specifically interact with the template molecule by covalent or noncovalent means. After polymerization the template is extracted, leaving an artificial binding site in which the analyte may (re-)bind. The first adaptations of the MIP concept to biosensors can be traced back to the work of Mosbach [14]. This concept is likewise generic and may be used for a great variety of different analytes. Using the noncovalent approach it was possible, for example, to obtain a binding polymer against nitrofurantoin, an antibiotic formerly used widely in farming but nowadays prohibited because of toxic side effects. With these polymers it was possible to detect nitrofurantoin directly in bird seed, avoiding tedious mass-analytical measurements [2]. One prominent example of a covalently imprinted polymer is the use of boronic acids as functional monomers for the detection of saccharides such as glucose and fructose, or saccharide derivatives such as fructosyl-valine [18, 20]. Since the binding event is likewise not linked to direct detection, in many cases the transducer chosen for molecularly imprinted polymers is based either on a mass change, measured by quartz crystal microbalance (QCM) or cantilevers, or on the measurement of the heat of binding using calorimetry. For a fructosyl-valine imprinted polymer it could be shown that the thermometric response to the binding event is about forty times higher than that of a control polymer without imprinted cavities [18].
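The selectivity gain described above is often summarized as an imprinting factor, the ratio of the imprinted polymer's response to that of a non-imprinted control. The sketch below uses hypothetical response values (arbitrary thermometric units) chosen only to mirror the roughly forty-fold gain reported in [18].

```python
# Sketch of how MIP selectivity is commonly summarized: the imprinting
# factor is the response of the molecularly imprinted polymer (MIP)
# divided by that of a non-imprinted control polymer (NIP).
# The response values below are hypothetical, illustrative numbers.

def imprinting_factor(mip_response, nip_response):
    """Ratio of MIP response to control (NIP) response."""
    return mip_response / nip_response

# e.g. thermometric responses in arbitrary units:
print(imprinting_factor(80.0, 2.0))   # -> 40.0
```

A factor well above 1 indicates that binding is dominated by the imprinted cavities rather than by nonspecific adsorption to the polymer matrix.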

Not only improvements of the recognition element are responsible for better biosensors. As already mentioned, improvements in the design and production of transducers may also lead to great advancements in how biosensors perform in various applications. Miniaturization of transducers is beneficial for cost reduction as well as for user-friendliness. Because of the ongoing race to miniaturize electronics, electrochemical and MEMS-based sensors in particular can be miniaturized. This trend finds its limit where the small surface area permits only small surface loadings of enzymes, leading to small signal amplitudes. In such cases, amplification methods have to be applied to obtain signals with a high signal-to-noise ratio. Besides microelectronic devices, the manufacturing of micromechanical devices has also found its way into biosensor research. For example, a microcantilever enables the measurement of mass changes or changes in viscosity. In a biosensor for glucose detection, Birkholz et al. used resonating microcantilevers to measure the change in viscosity of a hydrogel in which glucose was bound [5].

To summarize, improvements in recognition elements as well as in transducers drive the miniaturization and integration of biosensors and biochips. With smaller devices and more specific and direct biochemical reactions, many more applications become possible. Because of the specificity of the biocomponent (or biomimetic component), biosensors are in principle capable of working in crude samples without prior purification, as exemplified by glucose sensors measuring undiluted blood samples. However, many biochemical assays, especially those related to the determination of nucleic acids, need sample preparation. The introduction of labels to improve sensitivity also requires preanalytical sample treatment. For this reason, further integration steps have to be performed to fulfill the needs of more complex analyses.

2.2 Lab-on-Chip Systems

While biosensors and biochips include data generation and amplification, preceding steps such as sample preparation, purification, and the washing steps needed to perform assays are not included. For point-of-care testing, nevertheless, these steps should be covered, for example when cells have to be disrupted to analyze their interior compounds. Especially for the analysis of nucleic acids, lysis and amplification (by the polymerase chain reaction, PCR) are usually necessary before the origin or specific sequence details can be determined. Since user-friendliness was defined as one of the key challenges, all these steps have to be automated and integrated into the analytical device.

One concept to achieve this is a Lab-on-Chip system, the term being derived from the notion of a “laboratory on a (microelectronic) chip”. All the above-mentioned preanalytical steps have to be performed within the Lab-on-Chip device at laboratory quality, and the device has to be designed in such a way that an unprocessed sample is applied and finally measured by the biosensor that is part of the Lab-on-Chip. Work on Lab-on-Chip systems in general has been reported for many years. However, most approaches lack the possibility of serial production, which hinders the development that might finally lead to commercialization. Moreover, many studies have implemented only single steps of the analytical process in a chip-like format; the whole process has not yet been covered.

Recently, the current authors published work on a Lab-on-Chip system, the “Fraunhofer ivD-platform”, designed with the potential for serial production and offering a high degree of modularity [21]. The system, which will be described in greater detail in a later chapter of this book, consists of a credit-card-sized cartridge (used as a consumable) and a read-out unit. Within the cartridge, which is the actual Lab-on-Chip, all steps from sampling to transduction are integrated. Processes such as signal amplification and display of the results are integrated into the read-out and processing unit. In this way, the combination of cartridge and read-out unit covers the whole process chain necessary for point-of-care testing.

To perform a test, blood from the finger pad is applied onto the cartridge. Besides a reservoir for the sample, the cartridge holds the reagents necessary for the particular test (assay), the sensor, and actuators for moving reagents and sample towards the sensor. After adding a drop of blood, the cartridge is inserted into the read-out unit, and processes such as washing, labeling, incubation, and read-out are performed automatically within 15 min.

The platform has been designed to be modular; it therefore allows a choice between an optical and an electrochemical sensor. Both sensors can measure different analytes in parallel, depending on the capture molecules immobilized on the sensor surface. While the electrochemical sensor can measure 16 different analytes at once, the optical sensor offers the possibility of measuring as many biomarkers as necessary for the particular application (up to 500). To achieve such high numbers of parameters, a microarray is used [3]. In a microarray, different capture molecules are deposited on a surface in small spots with diameters of around 50 μm and volumes in the nL range.

For the detection of small molecules, proteins, or antibodies, either antibodies or antigens can be spotted as capture molecules. After addition of the sample, from which the analytes bind to the capture molecules, different washing and labeling steps are performed, and the amount of bound analyte is measured by means of a fluorescence signal. This signal is quantified within the read-out and processing unit, in this case by a CCD camera and software that quantifies the intensity of each spot.
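The spot quantification step can be sketched as follows, assuming the read-out software operates on a grayscale CCD image: average the pixel intensities inside a circular spot region and subtract the local background estimated from a surrounding annulus. The image, spot position, and intensities below are synthetic; the actual Fraunhofer software is not published here.

```python
import math

# Minimal sketch of microarray spot quantification on a grayscale image
# stored as a 2-D list of pixel counts.  All numbers are hypothetical.

def quantify_spot(image, cx, cy, r):
    """Mean spot intensity minus mean local background (annulus r..2r)."""
    fg, bg = [], []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            d = math.hypot(x - cx, y - cy)
            if d <= r:
                fg.append(value)        # pixels inside the spot
            elif d <= 2 * r:
                bg.append(value)        # local background annulus
    return sum(fg) / len(fg) - sum(bg) / len(bg)

# Synthetic 64x64 image: uniform background of 100 counts plus one spot
# of radius 5 at (32, 32) that is 50 counts brighter.
image = [[100.0] * 64 for _ in range(64)]
for y in range(64):
    for x in range(64):
        if math.hypot(x - 32, y - 32) <= 5:
            image[y][x] += 50.0

print(quantify_spot(image, 32, 32, 5))   # 50.0 counts above background
```

Background subtraction of this kind is what allows spot intensities from different positions on the chip to be compared despite uneven illumination.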

For the detection of DNA, the steps of sample preparation, amplification, labeling, and detection have to be integrated within the Lab-on-Chip system. Purification of DNA may be carried out in two ways: after disruption of the cells, either nanobeads bearing DNA capture strands or silica matrices may be used. After either purification procedure, the purified DNA has to be amplified. The most common method of amplification is the polymerase chain reaction (PCR), in which heating and cooling cycles drive the denaturation, annealing, and elongation of new DNA strands by a polymerase. In the case of the ivD-platform, an external Peltier element is used for heating and cooling, since the required rates cannot be achieved within the cartridge itself. For detection, a microarray, now bearing DNA probes, is again used.
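Why amplification is indispensable before detection can be seen from a back-of-the-envelope calculation: under ideal conditions each thermal cycle doubles the number of target copies. The per-cycle efficiency model below is a standard textbook approximation; the numbers are illustrative only.

```python
# Back-of-the-envelope sketch of PCR yield: each thermal cycle
# (denaturation -> annealing -> elongation) at best doubles the number
# of target copies; real reactions fall short, modeled here by a
# per-cycle efficiency between 0 and 1.  Numbers are illustrative only.

def pcr_copies(start_copies, cycles, efficiency=1.0):
    """Copy number after the given cycles at the given per-cycle efficiency."""
    return start_copies * (1.0 + efficiency) ** cycles

# 100 template molecules after 30 cycles:
print(pcr_copies(100, 30))         # ideal doubling: 100 * 2**30
print(pcr_copies(100, 30, 0.9))    # with 90 % efficiency per cycle
```

The exponential dependence on cycle number is also why precise, rapid temperature control (here provided by the external Peltier element) matters: every inefficient cycle is paid for exponentially in the final copy number.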

Although the degree of integration of the Fraunhofer ivD-platform could be improved further, the system was deliberately kept as simple as possible. This guarantees an easy transfer of already established assays.

2.3 Integration on the Chip: “Active Arrays”

An even more elegant means of biochemical integration, especially for DNA detection, is given by the concept of “active arrays” (von Nickisch-Rosenegk et al. [29]). Here, the reverse primers are immobilized on a substrate as “capture molecules”. During the annealing step, the templates from the sample bind to the primers and are elongated. After denaturation of the elongated DNA strand, a new primer anneals and the strand is elongated again. Since the first primer is immobilized on the sensor, the final PCR product can be measured directly, spatially resolved within the microarray spot. Amplification and detection are thus combined, enabling real-time detection. However, heating and cooling cycles are still necessary for the amplification.

Moving one step further towards simplification, a procedure called helicase-dependent amplification (HDA) has been adapted to the active-array format. This isothermal amplification method was used, for example, to detect three sexually transmitted pathogens, Neisseria gonorrhoeae (NG), Chlamydia trachomatis (CT), and Herpes simplex virus (HSV), at once with OnChip-HDA [1]. In this reaction a polymerase elongates the immobilized primer. In contrast to normal PCR, denaturation of the double strand is performed by a helicase, followed by binding of a 5′-primer to the single strand for reverse elongation [28]. The main advantage of such amplification methods is that only a single temperature has to be applied, making these concepts suitable for point-of-care applications. Even so, an easy adaptation of already established assays is not possible, and the primer design, particularly for multiplexing, is challenging.

Although the concept of active arrays increases the degree of integration, data acquisition and analysis are still performed within the read-out unit: Integration has to continue.

2.4 Autonomous Biosensors

As integration proceeds, the next technological design has to integrate all process steps from sampling to data acquisition, together with wireless transmission of the data to the physician or a data hub. This type of sensor may be implanted or operate in remote settings such as the home. Hence, this stage of integration is based particularly on the miniaturization of electronic parts, such as the transducer itself or application-specific integrated circuits (ASICs) for signal processing and data acquisition. Here, however, technological development is not driven by biochemical, bioanalytical, or diagnostic needs: the production of electronic parts, and especially the reduction of their power consumption, must be named as the key drivers.

In general, all components from sample taking to data acquisition are integrated in such autonomous sensors. Recent projects have aimed at developing sensors able to measure small analytes such as ions or metabolites. These projects follow the example of capsule endoscopy, which comprises a “pill”-like camera system sending pictures from the gastrointestinal tract through the body [8]. This concept was first broadened towards sensors by reports from Philips adding pH measurement to the camera system [17], and has subsequently been pursued by a consortium working on implantable sensor concepts for intensive-care supervision [23]. Application areas for such autonomous biosensors include the time-limited implantation or ingestion of such devices. With more than three autonomous biosensors, networks can be established: in biotechnological processes such as fermentation, or in agricultural fields, they can provide added value in terms of monitoring the distribution of nutrients. Hence, much work remains for IT engineers to develop software that allows communication between sensors and a base unit, including expert systems to qualify data for decision support.

2.5 Sensor-Actor Molecules: Molecular Integration

In contrast to the technological integration that is highly developed in autonomous biosensors, biochemical integration will also contribute new routes to point-of-care testing. The first steps in this direction were accomplished in the context of the above-mentioned “active arrays”, where enzymes act on samples to derive analytical information. More artificial are newly developed molecules that generate signals upon binding, like the molecular beacons invented for nucleic acid characterization [26]. The multiplexing potential of this approach has led to a variety of applications, especially in the detection and characterization of infectious diseases [27]. The concept of molecular beacons has been adapted to aptamers [16] as well as to peptide-based systems [24]. Both of the latter, however, are less universal than the original molecular beacon concept, and much more detailed work on the recognition site is required to make aptamer- or peptide-based molecular beacons work. More general approaches may be undertaken to link binding to signaling at the molecular level. All these newly developed molecules may be integrated into future easy-to-use analytical or diagnostic systems. With this form of biochemical integration, the processes of recognition and signal generation are fully merged. Application areas are manifold; the concept of “a-lab-in-a-tissue”, which gave the corresponding project its name [22], makes the need for such testing quite clear: because of globalization and the growing world population, fast qualitative tests for viruses such as influenza are desired for improved pandemic control and governance.

3 The Goal: Systems Integration

The possibilities offered by different technologies for in vitro diagnostic testing at the patient’s bedside show a high degree of integration. From a technological point of view, integration of the various features of a POCT device may be carried out at different scales or levels, such as the molecular level, electronic integration, optimization of production steps, and packaging. By integrating process steps for detection, faster and more user-friendly diagnostic methods will become available in the near future. In addition, integration is again the key factor for their implementation into clinical workflows. In this regard it is not sufficient to supply technologies for faster analysis; the supply chain has to be covered fully, meaning that integration of the data into clinical information systems and finally into electronic health records is essential. In this way diagnostics is not only a service but becomes a directly available product in terms of the value gained from an analysis. Hence, in vitro diagnostics and point-of-care testing have to be connected with information and communication technologies to provide added value to patients, physicians, and society.