Introduction

Medical diagnosis in remote and under-resourced areas of the world has not kept pace with modern medicine as practiced in developed, industrialized countries. While medical diagnosis in under-resourced countries relies heavily on the physical examination, diagnosis in developed countries draws more on the combination of patient history, medical imaging, and laboratory test results. Many tropical infectious diseases do not require imaging for accurate diagnosis, but imaging is nonetheless very useful, particularly in identifying complications of the primary disease process. Imaging can, however, be critical to diagnosis, as described elsewhere in this text, in malignancies, in maternal-fetal conditions such as placenta previa, and in the setting of trauma or acute medical emergencies such as appendicitis.

Not only is lack of access to imaging a critical issue in developing countries, but in the small number of clinics and hospitals where imaging can be performed, there is often a critical shortage of physicians or other personnel trained to interpret the images. In the absence of reliable, high-performance electronic data transmission, the images must be physically carried to the site where interpretation will be performed, creating an additional barrier to timely diagnosis and treatment.

The goal of this chapter is to explore strategies for accomplishing the basic tasks of effectively transmitting images from the point of acquisition (a low-resource setting) to someone who can interpret them, and then providing a way for the report to get back to the treating healthcare worker. The current status of the information technology (IT) infrastructure will be reviewed, along with some methods to harness the available infrastructure most effectively. A brief discussion of strategies for future improvements to the imaging IT infrastructure will conclude the chapter.

Current Status of IT

Even though the developing world lags far behind the industrialized world in advanced information transfer, some electronic technologies, such as cellular telephony and basic Internet connections, are widespread. In fact, many developing countries never installed wired telephone service when it was the principal communication technology available, and as a result they have effectively bypassed this infrastructure-intensive stage of development in favor of more modern and less expensive wireless technologies such as Wi-Fi (wireless Ethernet) and cellular phone systems. Wireless coverage, however, is extremely variable in extent and quality. Data rates are commonly at or below 10 kb/s, equivalent to telephone data transmission speeds in the USA during the early 1980s. Cost is also quite variable, with some countries providing service at rates lower than in the USA while others charge more. Since incomes are far lower in under-resourced areas, only a wealthy few can afford cellular or Internet service even where it is available.

Using Existing Electronic Infrastructure

Electronic transmission of imagery is important in low-resource regions for the following reasons:

  1. Photographic film and processing chemistry are becoming increasingly expensive.

  2. Local expertise for image interpretation is often lacking.

  3. Poor roads and costly fuel make physical transport of film or paper images expensive and unreliable.

  4. Remote monitoring of studies, maintenance, and training can be better accomplished using electronic image and data transfer.

Given the need to use electronic transmission for medical imaging, how can the infrastructure be used most effectively? Several basic principles apply, all revolving around the idea of using the limited resources of power and data communications as efficiently as possible. The basic principles are as follows:

  1. Limit communication bandwidth use. Images must be compressed so that less data needs to be transmitted. Although some have maintained that no image data loss is acceptable, in practice many if not most “diagnostic quality” images reviewed remotely are slightly degraded compared with the original, with no clinically detrimental effect. Many high-quality “lossy” (compression causing data loss) compression schemes are available; a minimal example appears after this list.

  2. Avoid the need for constant use of the network connection. “Cloud”-hosted applications that require a constant connection not only must transmit and receive large amounts of data, but may not work at all if the network link is intermittent or unreliable.

  3. Plan for frequent periods of network unavailability and/or severely constrained data transmission rates. Even in the USA, “dropped” calls and slowdowns in data rates are the rule; in under-resourced areas, these problems are magnified greatly. Software that can still function with an intermittent connection and adjust to fluctuating data transfer rates is critical. To compensate efficiently for an intermittent connection, software that monitors the data transfer and resumes where the transmission was interrupted, rather than starting over from the beginning, is essential. Software that adjusts data packet size, so that smaller packets are sent during times of network instability, can also greatly increase effective transmission rates.

  4. Remember power limitations. If the clinic is in an isolated region running on solar power, power may be very limited, and the power consumption of all needed devices should be measured while they operate at maximum activity. Even if power lines supply the site where equipment is to be placed, interruptions in power may require some battery or generator backup capability. Accurate knowledge of power consumption makes it easier to determine the correct size of uninterruptible power supply (UPS).

  5. Avoid costly software and hardware. Higher initial hardware costs (computers, modems, Ethernet switches, wireless network adapters, etc.) limit the amount of equipment that can be purchased, and higher-priced equipment may also mean higher-cost repairs. Fortunately, hardware costs are declining rapidly, and slightly used or refurbished equipment is readily available at large discounts. Similarly, software costs must be held in check. Look for open-source software that has the required functionality and has received favorable reviews. Checking with software vendors about donations or discounts for charitable purposes is an excellent strategy. Another is to find other nonprofits using similar software and propose lowering costs for all by sharing multiuser licenses. Perhaps the best approach to adopting very expensive software, or software that requires customization and maintenance, is to contact the vendor about an ongoing partnership in which the vendor gains positive publicity in return for ongoing software updates and support.

  6. Weigh reliability against cost. High-reliability components can be extremely costly, often double the price of less durable alternatives. Weigh the cost of a more reliable component against the cost of keeping a replacement on hand in case of failure, accounting for the time and expertise required to make the swap. Since medical imaging is in many cases being offered as a new service, some lag time will probably be well tolerated by the people being served, who are accustomed to having no service at all. In most cases, replacing a low-cost component is a better option than buying a high-cost, high-reliability component, since failure rates are generally quite low for both. Situations where emergency imaging is frequently performed, or where no one is available to swap out components, may be exceptions to this suggestion.
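
As a minimal illustration of principle 1, the Python sketch below applies lossy JPEG compression to a single exported image using the open-source Pillow package. The file names and the quality setting are illustrative assumptions, not part of any particular imaging system; lower quality values yield smaller files at the cost of more visible degradation.

```python
# Minimal sketch: lossy JPEG compression with Pillow (pip install pillow).
# File names are hypothetical; quality=60 typically shrinks a photographic
# image severalfold with little visible loss.
import os
from PIL import Image

src = "ultrasound_frame.png"   # hypothetical exported image
dst = "ultrasound_frame.jpg"

img = Image.open(src).convert("L")        # grayscale, typical for ultrasound
img.save(dst, format="JPEG", quality=60)  # lower quality -> smaller file

print(f"original:   {os.path.getsize(src):,} bytes")
print(f"compressed: {os.path.getsize(dst):,} bytes")
```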

Seven Key Information Technology Steps to Deliver Medical Imaging to Remote and/or Low-Resource Environments

Step One: Select and Deploy an Imaging System

Extensive detail regarding the selection of a particular system or modality is outside the scope of this chapter and is covered elsewhere; however, a few key considerations in terms of information technology are relevant. Generally, there are several options for an imaging system, depending on many factors. However, if the key consideration for a particular imaging project is that the selected modality be low cost and consume few resources, three feasible solutions currently exist: a small ultrasound system, a webcam/cell phone camera for optical imaging, or a basic radiography (X-ray) system. The webcam or cellular phone camera is the lowest cost and draws the least power, but it also provides less diagnostic information than a system that can look inside the body (although these cameras can be used to photograph hard-copy imaging studies). Small ultrasound systems may be available for $8,000 or less (Fig. 8.1) and can draw relatively little power (75–100 W). A radiography system is the most expensive option (about $100,000 for both the X-ray machine and a digital receptor system, including the computed radiography plates and reader), and it draws considerable electrical power. There is also the issue of shielding from the emitted X-rays. If film and chemical processing are used instead of a digital receptor, operating costs increase dramatically, since film and film-developing chemicals are becoming more expensive and suppliers harder to find as fewer and fewer people use the outdated technology.

Fig. 8.1

Three small ultrasound systems ranging in price from about $2,100 to about $50,000

Other key requirements for any imaging device are that it be easy to use and maintain and that it be able to transfer images electronically. The DICOM [1] (Digital Imaging and Communications in Medicine) standard is the means by which medical imaging devices transfer images electronically. It consists of both communication protocols that allow machines to communicate with one another and a file format that allows information about the patient, the medical problem, and how the image was created to be transmitted along with the images. The DICOM image format is similar to the Tagged Image File Format (TIFF [2]) used for nonmedical images in that information about the image and how it was created resides in “tags” located in the header. Each tag carries some specific item of patient, instrument, or image format information. For example, DICOM tag (0010,0010), “Patient's Name,” has the value representation PN, contains the patient name, and can have a maximum length of 64 bytes [3]. The DICOM protocols and format are the standard for image transfer from medical devices, but unfortunately many inexpensive ultrasound imaging systems do not include support for DICOM. It can be added if the manufacturer is willing, since a number of freeware toolkits, such as DCMTK [4], can be used to help implement DICOM on a specific system.
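
As a brief sketch of how header tags are accessed in practice, the snippet below reads a few tags with the open-source pydicom package; the file name is hypothetical.

```python
# Sketch of reading DICOM header tags with pydicom (pip install pydicom).
import pydicom

ds = pydicom.dcmread("study.dcm")  # hypothetical DICOM file

# Tags can be read by keyword or by (group, element) number.
print(ds.PatientName)        # tag (0010,0010), value representation PN
print(ds[0x0010, 0x0010])    # the same tag, addressed numerically
print(ds.Modality)           # tag (0008,0060), e.g., "US" for ultrasound
```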

Other transfer formats could be used including transfer of JPEG and/or AVI files, but the appropriate patient identification information would have to be transferred as a separate file, raising the issue of having unidentified or misidentified images should the identification file get corrupted or deleted. Also, such a custom transfer process could not be expected to work correctly for any devices other than the few it was created specifically for. Once implemented, a DICOM protocol should work with any other device also using DICOM.

A physical means of transporting the image data to the next step in the image transfer chain is also needed. Most systems support wired Ethernet and universal serial bus (USB). A rapidly expanding group of devices also support wireless Ethernet, often termed Wi-Fi. Wired data transfer from the imaging device is preferable because of its higher speed unless the device must be portable. In the case of a portable device, transfer via Wi-Fi or even a 3G or 4G cellular network may be acceptable. Exporting the data to a USB hard disk or other memory device, and then physically transporting the memory device to a display workstation or the next data transfer point, is an old-fashioned but reliable way to make transfers. This is an excellent backup method even if a wireless or wired network connection is normally used.

Step Two: Apply Data Compression

The next step in transporting an image or images is to apply data compression. The purpose of this step is to conserve bandwidth as the image is transmitted to the location where it will be interpreted. If that location is local and within range of Wi-Fi, bandwidth may not be an issue, and either no compression or lossless compression may be employed. A number of lossless compression schemes exist; they achieve a modest compression level of approximately 2:1 (meaning that the compressed data is one-half the size of the uncompressed data).
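
As a rough illustration of lossless compression, the sketch below uses Python's built-in zlib module. The file name is hypothetical, and the ratio achieved depends entirely on the image content; roughly 2:1 is typical, as noted above.

```python
# Sketch: lossless compression with the standard-library zlib module.
import zlib

with open("radiograph.raw", "rb") as f:  # hypothetical uncompressed pixel data
    raw = f.read()

packed = zlib.compress(raw, level=9)
assert zlib.decompress(packed) == raw    # lossless: bit-exact round trip
print(f"ratio: {len(raw) / len(packed):.2f}:1")
```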

For transmission over longer distances, much greater compression is usually required, because maximum data rates may be very slow (as noted above) or high data rates may be achievable only at great cost. Modern lossy compression schemes employing variants of MPEG-4 are extremely useful for compressing series of images, such as those in CT and MRI (not radiography), where one image is similar to the next. The compression efficiency comes from the algorithm's ability to use adjacent pixels in both the same image and adjacent images to help reconstruct what a given pixel must be, whereas image compression protocols such as JPEG can use only nearby pixels in the same image. Coder-decoders (CODECs) that efficiently encode series of images include MPEG-4 Part 2 and its implementations DivX and XviD. These are much more effective than JPEG2000 or JPEG; they can compress 40–80 MB video files down to only about 300–400 kB while producing images that are almost indistinguishable from the original [5] (Fig. 8.2). Newer CODECs such as H.264 (MPEG-4 Part 10, AVC) and Dirac are more effective still, producing files about half the size of MPEG-4 Part 2 [6]. The new High Efficiency Video Coding codec, HEVC (H.265), is expected to cut the data rate and file size by another 50 % compared with H.264 while producing little discernible reduction in quality [7]. In practical terms, this level of compression could allow an entire series of CT or ultrasound images to be compressed down to about 100–200 kB, half the size of a single image.
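
A common practical route to series compression is to export the frames and encode them with a standard video encoder. The sketch below assumes the separately installed ffmpeg command-line tool and numbered frame files; the frame rate and CRF (quality) value are illustrative choices, with lower CRF preserving more detail at the cost of larger files.

```python
# Sketch: compress a numbered series of exported frames (frame001.png,
# frame002.png, ...) into an H.264 video using the ffmpeg CLI.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-framerate", "10",      # playback rate for the image series
        "-i", "frame%03d.png",   # input pattern (hypothetical file names)
        "-c:v", "libx264",       # H.264 (MPEG-4 Part 10 / AVC) encoder
        "-crf", "28",            # constant-quality mode; lower = higher quality
        "-pix_fmt", "yuv420p",   # widely compatible pixel format
        "series.mp4",
    ],
    check=True,
)
```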

Fig. 8.2
figure 2

Comparison of video compression to single image compression. A transverse image of the right kidney appears nearly identical to the original when compressed by XviD to a total file size of approximately 400 kB for 40 images (a). Images of the same series when compressed individually to the same total file size using JPEG compression yield images that are not even recognizable as ultrasound images (b)

Step Three: Transmit Data

Several modes of electronic transmission are potentially available in low-resource regions. Cellular network transmission is widely available although speeds can be quite low. Cellular transmission capability can be built into the imaging device, but it is often more practical to send the data to a network gateway, an inexpensive laptop PC connected via cellular modem to the network. This prevents the imaging system from getting overloaded with transmission and retransmission tasks in the event of network instability and also allows upgrading the transmission software and hardware without changing the imaging device. It also allows for the addition of imaging devices that use the same gateway. Communication between the network gateway and the imaging devices can be by wired or wireless Ethernet. Both options are low cost and low maintenance. Wireless allows the imaging device to be moved around more conveniently while leaving the network gateway in one place.

Wired Internet service is available in many low-resource environments and is a faster alternative to cellular when the cost is reasonable. Data rates between 125 kbps and 1 Mbps may be expected. For very remote sites, satellite links can be used, but the cost of such a connection may be very high and is probably not sustainable in the long term. A satellite connection that can be turned on only when needed could be a solid backup system, but negotiating a plan suitable for intermittent use at a reasonable cost can be challenging. Some additional useful characteristics for a network gateway are:

  1. Ability to back up studies to CD, DVD, or USB flash drive for “sneakernet” manual transfer to a reading site should the network connection be interrupted for an extended period.

  2. Ability to connect to an external antenna and/or signal booster to increase signal strength in borderline reception areas.

  3. Low power consumption. Less than 20 W is desirable and achievable for the gateway, modem, and switch/router combined.

  4. Adequate storage for queued studies awaiting transmission and as a backup location for studies. In addition to the system storage, a 500 GB–1.5 TB USB-powered external drive is inexpensive, requires little power, and provides massive backup capacity.

In addition to the Internet connection, suitable software will be needed to perform the data compression, transmission, and decompression. For regions where network instability is the rule, auto-restart of the modem software, as well as selective retransmission of only those data packets that failed to be received, is useful. Some modems advertise automatic restart as a feature, but most often one will have to experiment with various models to find one with appropriate software; some modems have freeware that will perform auto-restart. Basic transmission of data can be accomplished by a number of file transmission programs. The Unix utility rsync can be used, for example, to ensure that data files not already present on the receiving system are transferred, but this utility (also available for Windows) will resend a complete file if data flow is interrupted during transfer. This is not ideal for a network subject to frequent interruptions, such as many cellular networks. A more sophisticated transfer mechanism that (1) retransmits only the portions of files not properly received and (2) monitors network performance to adjust packet size and minimize packet loss during transmission is therefore desirable; a minimal sketch of such resumable transfer appears after the list below. Additional features that are useful if not critical are:

  1. A logging utility that sends the logs periodically to the receiving site for evaluation.

  2. An easy-to-use graphical user interface with an equivalent command line interface that can be used remotely using minimal bandwidth, or an alternative method of issuing remote commands to the software (and receiving responses) using minimal bandwidth.

  3. Detailed error messages that inform the system administrator of likely causes of transmission failures.

  4. Automatic correction of common problems with data transmission.

  5. Email transmission of warning messages regarding transmission failures, the number of images awaiting transmission if it exceeds a certain threshold, or other system problems.
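
As a minimal sketch of the resumable-transfer idea referenced above, the snippet below (Python, using the requests package) resumes an interrupted HTTP transfer with a Range header rather than starting over. The URL is hypothetical, the server must support range requests, and a production tool would add integrity checks and retry logic.

```python
# Sketch: resumable download. If a partial file exists, request only the
# remaining bytes instead of resending the whole file.
import os
import requests

url = "https://reading-site.example.org/studies/1234.zip"  # hypothetical
dest = "1234.zip"

offset = os.path.getsize(dest) if os.path.exists(dest) else 0
headers = {"Range": f"bytes={offset}-"} if offset else {}

with requests.get(url, headers=headers, stream=True, timeout=30) as r:
    r.raise_for_status()
    mode = "ab" if r.status_code == 206 else "wb"  # 206 = partial content
    with open(dest, mode) as f:
        for chunk in r.iter_content(chunk_size=64 * 1024):
            f.write(chunk)
```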

Step Four: Assure Access to Images by Qualified Interpreters

The physicians or other healthcare workers assigned to interpret the images may be local, regional, or remote relative to the site where images are acquired. Local interpretation can be accomplished by connecting a reading workstation to the imaging source by a wired or wireless Ethernet connection. A wired connection is faster, and with modern equipment gigabit Ethernet (approximately 1 Gbps transmission rate, known as 1000baseT) is feasible at low cost if the distance to the workstation is short (<100 m) using Category 5, 5e, or 6 Ethernet cable. For longer distances, up to 500 m or more, 1000baseSX Ethernet using fiber-optic cabling is an option, but it costs more and may not be readily available. Other versions of gigabit Ethernet using fiber-optic cables can communicate over distances of 70 km or more.

On the software side, the transmission protocol for the local (and remote) connection should be DICOM, since the DICOM standard provides proper patient and study identification, and all high-performance image viewing software supports DICOM, making daily workflow much easier. This means that the imaging device should support DICOM. The degree of DICOM support is listed in a device's DICOM conformance statement; at minimum, the imaging device should be able to connect with a DICOM service class provider (SCP) and transmit images with a properly formatted DICOM header, correctly windowed and calibrated for image measurement on the reading viewer or workstation.
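
A quick way to verify basic connectivity between a device or gateway and an SCP is a DICOM C-ECHO (verification). Below is a minimal sketch using the open-source pynetdicom package; the address, port, and application entity (AE) titles are hypothetical.

```python
# Sketch: DICOM C-ECHO connectivity test with pynetdicom
# (pip install pynetdicom).
from pynetdicom import AE

ae = AE(ae_title="FIELD_UNIT")
ae.add_requested_context("1.2.840.10008.1.1")  # Verification SOP Class UID

assoc = ae.associate("192.168.1.10", 104, ae_title="READ_SCP")
if assoc.is_established:
    status = assoc.send_c_echo()
    print("C-ECHO status:", status.Status if status else "no response")
    assoc.release()
else:
    print("Association rejected, aborted, or failed")
```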

For a local reading station, having DICOM viewing software and a DICOM server to support reception and storage of the images is the optimal approach. Both the server software and the viewer can reside on the same computer. A large number of vendors supply DICOM workstation software at varying prices, and a number of freeware packages exist. Some of the freeware packages are not cleared by the FDA, whereas others offer limited functionality and hardware support, often too limited to be of great use. Potential sources of DICOM workstation freeware for Windows include ClearCanvas [8], K-PACS [9], and Onis [10]; for the Macintosh, OsiriX [11] is a common choice. In some cases, a viewer alone can be used, but storage of cases in this situation is either nonexistent or very limited. A number of vendors also offer DICOM toolkits to help with building new DICOM applications. It may be possible to partner with a PACS vendor and obtain donated software for specific projects.

The local reading workstation may simply be a laptop running Windows, or it may be a larger system with multiple monitors. Adding a single high-definition (HD) monitor to an inexpensive laptop creates a simple and cost-effective two-screen workstation with one screen for the patient list (the laptop screen) and another for the images (Fig. 8.3). A minimal system consisting of a netbook plus a 21 in. (or larger) HD monitor will perform adequately, provided the system is upgraded from Windows 7 Starter (which does not support two independent monitors) to Windows 7 Home Premium or higher and to 2 GB of memory. A faster system with more memory would of course be more desirable, since entire image sets are often held in memory while viewing (to improve responsiveness). For viewing large radiographs, a slightly larger monitor (23–24 in.) rotated to portrait mode may be needed. A monitor with an in-plane switching (IPS) LCD panel is preferable to a “TN” type panel to avoid color and intensity shifts with changes in head position and to provide better gray scale. To add further monitors to a laptop-based system, USB video adapters capable of supporting 1080p or higher resolution are inexpensive and readily available.

Fig. 8.3

A simple two-screen workstation consisting of a laptop computer connected to a single 22 in. 1080p external monitor. The larger monitor is used for image display and the smaller for text

For images sent to regional or national readers for interpretation, electronic transfer has major advantages. Images would be uploaded to the Internet or some trusted wide area network (WAN) and transferred to a DICOM server, either at a regional center with high-speed connectivity or at the site of interpretation. The reading physician would access the DICOM server using the DICOM client workstation for that server or another third-party DICOM viewer, and reports would be returned to the ordering provider by phone, email, or text message.

The advantage of using a server located in the country where the images are acquired is that the system is not dependent on a communication link to the outside world, but the disadvantages include greater difficulty maintaining and managing the server as well as vulnerability to power instability. If the local Internet/WAN is inoperative, then images must still be sent manually via courier or must be held until the network is again operational.

Another option is locating the DICOM server in another country with power and communication infrastructure stability. The readers could access the server via the Internet to interpret the studies. This would work for regional, national, and international readers but is dependent on reliable Internet communications to the outside world for in-country readers. Since readers in multiple time zones could be recruited, 24/7 coverage might be easier to implement and maintain compared with only in-country readers. Also, the pool of potentially qualified readers is much larger. For this type of implementation, hardware requirements may be decreased since qualified international readers in urban areas would likely already have a workstation that would be usable, especially if the PACS software is all web browser-based or “zero footprint,” meaning that no client software resides locally at all—the browser is used and configured for all needed functions.

The disadvantages of locating the server in another country include the problem of local access to images if the international data link is down. This can be handled by having a backup system for transporting images to local or regional interpretation sites. Another disadvantage is the management of readers scattered all over the world with varying languages and “styles” of interpretation; merely qualifying, training, and performing QA on such a diverse group is a daunting task. It might be possible, though, to limit the interpreting specialists to those speaking a few of the most commonly spoken languages and still recruit a large number of readers for studies.

Step Five: Provide for Reporting of Results

A reliable system for reporting results of imaging studies is needed for all situations except the local situation, where the treating physician actually performs or interprets the imaging all at one location. If the interpretation of the imaging study is local (meaning the same clinic or a nearby clinic), a verbal consultation, telephone call, or a handwritten note will suffice. This is not optimal for tracking results over time but it works and is inexpensive. For studies interpreted in the same region (e.g., the same district), face-to-face conversation is usually not feasible, but a telephone call and a written note carried to the requesting provider will work. This is inexpensive, but the report may not reach the requesting provider in a timely fashion. The alternative, making multiple telephone calls to referring providers, can be excessively time-consuming for the image reader.

Email and text (SMS) messaging are technologies that are available even in under-resourced areas because of the ready availability of cell phones and the low cost of many laptop and netbook computers. The problem with both may be lack of security, but even without a high level of security, this method may be a far better option than the slow and limited alternatives. Security can be increased by sending a very limited message by text or email that includes a link to a secure server that contains the complete report and possibly some example images. Email and text messaging are vulnerable to network outages, but regular telephone and hand-carried messages can always be used as backup in case of a cellular network failure.
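
A minimal sketch of such a limited notification, using Python's standard smtplib, is shown below; the SMTP server, credentials, addresses, patient code, and link are all hypothetical placeholders. Note that the message body carries only a code and a link, not clinical content, consistent with the security approach described above.

```python
# Sketch: send a short report-ready notification by email; the full report
# stays on a secure server reached via the link.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "reports@clinic.example.org"
msg["To"] = "provider@clinic.example.org"
msg["Subject"] = "Imaging report ready: patient code A1234"
msg.set_content(
    "A report for patient code A1234 is available at:\n"
    "https://reports.example.org/view/A1234\n"
    "No clinical details are included in this message."
)

with smtplib.SMTP("smtp.example.org", 587) as s:
    s.starttls()                       # encrypt the SMTP session
    s.login("reports", "app-password") # hypothetical credentials
    s.send_message(msg)
```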

A PACS-based reporting system can be implemented although this is often a feature of higher end (more expensive) PACS. The reporting system may include voice recognition, structured reports that can be easily translated, and recording of changes in reports including who the report was sent to. A PACS-based system may require more equipment (such as a limited PACS workstation) at the report-receiving end to obtain a complete report, but most also send basic reports as text messages, email, or voicemail. Many systems also record receipt of the report message electronically, and this feature could be made available in low-resource areas since it requires nothing more than an operational cell network to transmit the messages and confirmations. A search of the Internet for the terms PACS, reporting, and tracking will indicate vendors with capacity to provide these services.

Reporting by international readers will generally be limited to email, texting, and PACS-based systems. Phone calls are possible using inexpensive services such as Skype, instant messaging (IM) services that also offer calling, or inexpensive voice over Internet Protocol (VoIP) services such as Vonage or Magic Jack. Telephoned reports will often not be practical, however, because of time zone differences between the reader and the site where the imaging study was created. On the other hand, international readers can be a major advantage for covering night hours if nighttime imaging is part of the service plan: readers four or five time zones away can cover the night hours during what for them are regular day or early evening hours. PACS-based email systems, or email alone, may work best because of the variable complexity of setting up low-cost international texting.

A significant problem for international readers is proper translation of their report into a language that can be understood by the requesting provider. At this stage automated Internet-based translation for medical terminology is not mature enough to be of use for translating free-form (i.e., dictated or typed) imaging reports. Structured reporting can help with this problem since the report can have a limited predefined vocabulary that makes automated software-based translation much easier. For example, a structured report for a right upper quadrant ultrasound that only has four diagnoses for the gallbladder could be created. The diagnosis options might be “normal,” “benign gallstones,” “cholecystitis,” and “other.” The reader picks one of the diagnoses by checking a box on the structured report menu displayed on the PACS workstation (Fig. 8.4). The diagnosis is automatically translated into the proper language and sent (along with the diagnoses for the other organs in the right upper quadrant). The structured reporting menu can be pre-translated to display in the preferred language of the reader. Freeware-based translation programs such as Google Translate can be used to lower the cost of the translation process.
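
The sketch below illustrates why a coded, structured vocabulary makes translation straightforward: each finding code maps to pre-translated text, so no free-form machine translation is needed. The finding codes and translations here are illustrative inventions, not a clinical coding standard.

```python
# Sketch: structured-report findings mapped to pre-translated text.
FINDING_TRANSLATIONS = {
    "GB_NORMAL": {"en": "Gallbladder: normal",
                  "fr": "Vésicule biliaire : normale"},
    "GB_STONES": {"en": "Gallbladder: benign gallstones",
                  "fr": "Vésicule biliaire : calculs biliaires bénins"},
    "GB_CHOLE":  {"en": "Gallbladder: cholecystitis",
                  "fr": "Vésicule biliaire : cholécystite"},
    "GB_OTHER":  {"en": "Gallbladder: other (see comment)",
                  "fr": "Vésicule biliaire : autre (voir commentaire)"},
}

def render_report(finding_codes, language="en"):
    """Render the checked findings in the requesting provider's language."""
    return "\n".join(FINDING_TRANSLATIONS[c][language] for c in finding_codes)

print(render_report(["GB_STONES"], language="fr"))
```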

Fig. 8.4

Example of online structured reporting. In this implementation, the user clicks a pull-down menu in the PACS, and a findings window opens allowing the reader to click on various possible diagnostic findings. The example in the figure is for obstetrical ultrasound, in which the reader has identified the fetus as being in the head-down (cephalic) position

Step Six: Provide for a Quality Assurance (QA) Program

All countries want their citizens to have the highest quality health care possible in their environment, so it is important to develop a QA program that properly monitors the performance of operators, readers, and equipment without consuming large amounts of resources. Traditionally, errors in technique or interpretation have been handled informally by personal communication and sporadic case review, but governments are moving toward requiring ever more stringent and systematic approaches. The extensive monitoring of performance required by regulators today is well handled by computerized systems, and hard-pressed clinical workers will welcome any automation of the tedious and unrewarding task of peer review and quality monitoring. Many PACS systems have rudimentary QA modules built in, but add-in QA modules that provide greater functionality and reporting capability are becoming very popular. Some QA modules may be configured to allow outcome reporting by the treating physician or others who are following the patient; this can be very helpful for outreach projects where continued funding may depend on showing positive results (Fig. 8.5).

Fig. 8.5

Quality monitoring. Common technologist errors can be entered by the interpreting physician for each case (a), and interpreter peer review can be accomplished using the standard American College of Radiology (ACR) categories for missed findings (b). Cases may be automatically or manually selected for peer review. The reviewer may then agree with the findings or select another category if there is disagreement

Step Seven: Provide for Network Maintenance and Redundancy

Networks in developing countries and under-resourced areas are less reliable, since many are primarily wireless cellular networks subject to variable signal transmission quality. In addition, even the wired backbones of such networks may be vulnerable to outages lasting days or even weeks. Therefore, having redundant methods of transporting images and reports is important. For example, the primary network might run over a wireless cellular data connection, with a backup transmission system using a second cellular network or even a shortwave radio link from the point of image generation to a site that has wired Ethernet or to a primary interpretation site. A secondary backup would be to have a courier carry a USB flash drive to the interpretation site. It is important to recognize that 24/7 access has never existed in many under-resourced environments, so system downtime will be more readily accepted by patients and physicians than it would be in a well-connected environment where downtime is rare.

No matter how reliable the network, problems with transmission will occur, so it is important to have the ability both to remotely monitor performance and to have a trained individual who can both troubleshoot and repair problems. Having a person conversant with the local languages is helpful in fixing problems on site, and it is ideal for larger networks consisting of several image acquisition locations. A person who can handle power problems (i.e., failure of a generator or solar array) in addition to pure network problems is ideal.

With respect to hardware components, including computers and monitors, inexpensive small computer systems make replacement a better option than repair. Preconfigured spares can be placed in the region being served to enable rapid correction of hardware problems; the pretested and preconfigured spares may also be used to help diagnose network problems. To improve the reliability and lifetime of electronics, dust control and cooling should be monitored. Simply covering systems with a clean cloth when not in use can be very helpful, and cooling fans can improve cooling dramatically in high-temperature environments. Humidity control would be helpful but is not practical in most low-resource environments.

As mentioned previously, open-source software/freeware is a low-cost but reasonably reliable option for software. Support for such software is often informal via the Internet, but having high-cost software is no guarantee of improved reliability, since support for the software may not be available in remote under-resourced areas.

Data and System Security and Privacy

Ensuring the security of the imaging systems and network must be part of any comprehensive plan or program to provide medical imaging technology. Specifically, the privacy of patient data must be safeguarded; host countries expect this of any provider permitted to operate within their borders. For a local imaging system at a single clinic, the traditional method of recording the patient information on the images and keeping all records secure by locking up both paper-based and digital records should suffice. The clinic may or may not keep a backup copy of a paper-based patient medical record, but for electronic data, including images and any reports, keeping a backup copy on a separate (secure) hard disk is good practice; a backup data drive can be very inexpensive given the low cost of high-capacity external USB disk drives.

In general, a system of backing up all electronic data collected during each day's activities, at the end of the day or the beginning of the next, is recommended; the backup software built into modern Windows computers should be sufficient for backing up the system files and performing incremental backups of the data files. The designated system maintenance person(s) should invest time in learning how to use the software and in developing a standard operating procedure for periodic backups and for recovery of data or the operating system, should it be needed. A separate backup of the images and/or reports that can be accessed by medical personnel without special training may be useful as a secondary system. All backup hardware should be stored in a locked and safe area away from temperature extremes, in a location or building different from that of the main data archive.

Workstations and imaging equipment should be protected from theft by physical security measures such as locking cables, computer case locks, and, in some cases, personnel. Access to the workstations and electronic databases should be controlled using “strong” passwords (i.e., more than eight characters with at least one capital letter, one number, and one special character such as & or ?), and Wi-Fi access to any computer systems should be disabled unless actively used for image or data transfer. Consideration should also be given to disabling unnecessary USB ports (though this may not be possible if the ports are used for backup or other peripheral hardware, such as printers).
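
A minimal Python sketch of enforcing the password rule just described is shown below; it checks only the stated policy and is not a substitute for a full account-management system.

```python
# Sketch: check the "strong password" rule (more than eight characters,
# at least one capital letter, one number, and one special character).
import re

def is_strong(password: str) -> bool:
    return (
        len(password) > 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

assert is_strong("Imaging&2024")
assert not is_strong("password")  # no capital, digit, or special character
```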

For regional, national, and international imaging networks, the status of the actual network (cellular, broadband, or Internet) used to transfer images is uncertain, and some form of data encryption during transfers will probably be necessary. The most common approaches are to use a virtual private network (VPN) or transport layer security (TLS), the successor to the secure sockets layer (SSL) protocol [12]. A VPN may securely connect an individual to a network or one network to another. It creates a secure link by encapsulating the data using one type of protocol while transporting it using a different network protocol (called “tunneling”) and adding data encryption. A number of different VPN implementations have been described [13], including Internet Protocol Security (IPSec), TLS, Datagram Transport Layer Security (DTLS, used by the Cisco AnyConnect VPN), Microsoft Point-to-Point Encryption (MPPE), Secure Shell (SSH), and others. A VPN can be very secure, but precautions must be taken: usernames and passwords must be encrypted if they are stored, and a secure method of user authentication must be implemented. One potential problem with a VPN is a loss in performance that often occurs unless the VPN is very carefully optimized. Even an optimal VPN may decrease performance by increasing network overhead by 10–15 % and by increasing network latency [14].

TLS may be used to protect an entire network stack, producing a VPN, but when TLS is implemented to protect a web server, the resulting connections are designated HTTPS (hypertext transfer protocol secure). An HTTPS connection encrypts the entire underlying HTTP protocol, including the request for a web page, query parameters, headers, and cookies. Should the encryption certificate for an HTTPS link be compromised in any way, it may be revoked; newer versions of web browsers such as Internet Explorer, Chrome, Firefox, and Opera implement the Online Certificate Status Protocol (OCSP) to verify that a certificate has not been revoked. HTTPS may also require client authentication, allowing access to be limited to authorized users. While HTTPS is generally very secure, sophisticated software may be able to infer certain personal information from the sizes of the packets being transmitted without knowing their contents [15]; this type of compromise can be defeated by correct design of the web applications being accessed over HTTPS. An HTTPS connection typically does not slow data transmission significantly, which may make it the preferred method for data-intensive traffic such as image transfer.
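
As a sketch of TLS-protected transfer in practice, the snippet below uploads a compressed study over HTTPS using the Python requests package, which validates the server certificate by default; the URL, endpoint, and bearer token are hypothetical placeholders.

```python
# Sketch: upload a compressed study over HTTPS (TLS).
import requests

with open("series.mp4", "rb") as f:  # hypothetical compressed study
    resp = requests.post(
        "https://dicom-server.example.org/upload",       # hypothetical URL
        files={"study": f},
        headers={"Authorization": "Bearer <access-token>"},
        timeout=60,   # fail fast on a stalled link
        verify=True,  # default: validate the server's TLS certificate
    )
resp.raise_for_status()
```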

To further protect patient privacy, partial de-identification of the patient images, history, and reports can be performed prior to transmission. The most common example of this is use of a patient code in research studies such as “research patient 1” which is linked to the actual patient identification information on a master data sheet held by the research study coordinator. All other researchers have access to the data gathered from the patient, but cannot identify the patient information without access to the master data key. This differs from completely de-identified data where the link to an actual person has been lost or is otherwise not available. For patient care, some organizations implement partial de-identification by keeping the master data key at the site where images are created and giving the patient a card with their patient code. This way only the person creating the images or the patient will be able to identify their images or the corresponding reports. If the patient is sent to a hospital for treatment based on images taken, the hospital uses the patient code to find the appropriate reports sent as an email with links. Without the patient code, someone accessing the reports and images would at most be able to see them but would not know who the images came from. This technique strikes a reasonable balance between privacy protection and easy data access.
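
A minimal sketch of the patient-code scheme described above follows; the code format and master-key file name are illustrative assumptions. Only the code travels with the images, while the code-to-identity link stays at the imaging site.

```python
# Sketch: partial de-identification via locally held patient codes.
import csv
import secrets

def assign_patient_code(name: str, master_key_path: str = "master_key.csv") -> str:
    """Generate a random patient code and record the code-to-identity link
    in a master key file that is kept locally and never transmitted."""
    code = "PT-" + secrets.token_hex(4).upper()  # e.g., "PT-9F3A61BC"
    with open(master_key_path, "a", newline="") as f:
        csv.writer(f).writerow([code, name])
    return code

code = assign_patient_code("Jane Doe")
print(f"Label images and reports with {code}; give the patient a card with it.")
```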

The Future

The future of imaging in low-resource environments is very bright. Infrastructure for data transmission is improving in quality and declining in cost. The electronic hardware needed for image transmission and display is dropping dramatically in cost while increasing in performance. Storage costs are already so low that they are hardly a factor in project budgeting. New and still somewhat expensive technologies that could help in the expansion of medical imaging include solar power, satellite communications, “cloud”-based software and storage, and more capable cell phones for receiving reports and reporting patient follow-up.

Solar power is already widely available and is an easily adoptable power source for a field project or for a remote location without reliable electrical infrastructure. For example, a field setup for ultrasound imaging assembled by the imaging nonprofit “Imaging the World” draws only 75 W while scanning and sending data; this amount of power can be delivered by portable solar panels that pack into a carry-on suitcase. High-output solar panels are still somewhat expensive, but continuing advances in solar panel technology, combined with the large number of competing manufacturers, will likely push prices further downward [16].

The practical and economic outlook for satellite communication is less certain, although the trend in prices appears to be downward, likely due to lower costs for competing communication technologies and decreasing launch costs as low-cost competitors such as SpaceX enter the launch vehicle market. Satellite communication is not critical for everyday use but provides a solid backup option in case of failure of the primary communication system. New variants of Wi-Fi make mesh Wi-Fi WANs, built from many small Wi-Fi routers, a possible option in the future. These trends, plus continuing advances in image compression technology, will make it ever easier to transmit large image sets.

“Cloud”-based PACS systems, which run in typical web browsers and consume little in the way of local computing resources, have clear advantages, and the implementation strategies are promising. The main advantage is that, given a reliable Internet connection, nearly all computing power and storage reside off-site. This strategy may reduce the cost of remote image storage and allow individuals to view and interpret images from virtually anywhere.

Finally, the capability of cell phones in under-resourced areas is rapidly increasing. As new low-cost “smartphones” proliferate, the ability of health professionals to receive patient information and to report on patient outcomes (including follow-up) will improve greatly; however, this is dependent on the availability of appropriate software to organize this information.

Conclusion

These and other improvements in information technology infrastructure will make it much easier for any imaging model intended for rural and under-resourced regions to succeed. Further research and objective evaluation of new field implementations will be needed to better understand the potential of these and other technologies to support medical imaging delivery in low-resource settings.