3.1 Moore’s Law

Moore’s law states that overall processing power for computers doubles roughly every 1.5–2 years [3]. A similar trend applies to telecommunications. Although a general guide rather than a fundamental law, it has proved remarkably consistent since the implementation of the first semiconductor integrated circuit in 1960 (Fig. 3.1).

Fig. 3.1

Plot of CPU transistor counts against dates of introduction. Note the logarithmic vertical scale; the line corresponds to exponential growth with transistor count doubling every two years. Source http://en.wikipedia.org/wiki/Moore%27s_law
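In quantitative terms, the doubling rule corresponds to exponential growth. A minimal formulation follows, using the two-year doubling period shown in Fig. 3.1; the 1971 Intel 4004 figure is included only as an illustrative check.

```latex
% Transistor count N(t) with doubling period T, starting from
% N_0 transistors at time t_0:
N(t) = N_0 \cdot 2^{(t - t_0)/T}
% Example with T = 2 years: the Intel 4004 (1971) had about 2300
% transistors, so N(2011) \approx 2300 \cdot 2^{(2011-1971)/2}
% \approx 2.4 \times 10^9, matching the order of magnitude of
% CPU transistor counts of that era in Fig. 3.1.
```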

It is expected that, for the immediate future, Moore’s law will ensure that computational power continues to increase at current rates, bringing more speed and capacity to handle more sophisticated applications and end-user requirements. Devices are becoming increasingly intelligent and are able to monitor data and their environment. An automobile can contain up to 100 microprocessors monitoring its various functions; a new car carries some 200 lb of electronics and over a mile of wiring. On a wider front, the Internet of Things connects embedded devices that can provide a wide variety of data and sensor information. Gartner [4] estimates that there will be 26 billion devices on the Internet by 2020. A network of autonomous smart devices will enable a whole range of operations and applications to be carried out without direct intervention by the user.

The use of digital media systems and the growth of social media appear to follow the law of sharing, an equivalent of Moore’s law in the context of social media, which states that the average amount of shared information doubles every year [5]. The analogy helps businesses remain aware of the rapidly changing environment in which they operate, define their information-handling requirements, and develop and accelerate commercial and social applications where appropriate.

Denning and Lewis [6] expect that many additional years of exponential growth are likely, even if CMOS technologies reach their limits. As alternatives become feasible, it should be possible to switch to new technologies and continue the growth path.

3.2 Computing Technology Post-silicon

Since 2002, the heat generated in circuits has limited clock speeds to around 3.5 GHz (as in the Intel Pentium 4), because the cost of the heat dissipation technology needed to allow faster clock speeds is prohibitive. The only way forward with existing technology has therefore been to use multi-core chips with CPUs running in parallel, which in turn requires the software to be parallelized. The smallest transistors in production are currently around 7 nm in size [7], although Lawrence Berkeley National Laboratory has successfully built a functional 1-nm transistor gate [8]. Transistors smaller than 7 nm have higher development costs and are expected to suffer from quantum tunneling through their logic gates. It is envisaged that a technology to replace silicon will be needed at some stage if Moore’s law is to continue. Possible alternative technologies include optical computing, quantum computing, DNA computing, germanium, carbon nanotubes, and neuromorphic computing, among others [9].
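To illustrate why the move to multiple cores pushed the burden onto software, the following minimal Python sketch (the sum-of-squares workload is purely illustrative) shows the same CPU-bound task written serially and then distributed across worker processes with the standard multiprocessing module; the serial version cannot benefit from extra cores, however many are present.

```python
# Minimal sketch: multiple cores only help if the software is
# explicitly parallelized. The workload below is illustrative only.
from multiprocessing import Pool

def sum_of_squares(n):
    """CPU-bound task: sum of squares of 0..n-1."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    workloads = [2_000_000] * 8

    # Serial version: runs on a single core regardless of how
    # many cores the machine has.
    serial = [sum_of_squares(n) for n in workloads]

    # Parallel version: the same tasks spread across four
    # worker processes, roughly one per core.
    with Pool(processes=4) as pool:
        parallel = pool.map(sum_of_squares, workloads)

    assert serial == parallel
```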

3.2.1 Optical Computing

Optical computing uses photons for computation, offering potentially higher bandwidth than current technology. However, it is currently unclear whether optical devices would improve on silicon once the full range of performance criteria, such as speed, power consumption, cost, and size, is taken into account. For optical logic to be competitive beyond a few specialized applications, major breakthroughs in nonlinear optical device technology would be required.

3.2.2 Quantum Computing

Quantum computing makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Digital computing requires that the data be encoded into binary digits, each of which is always in one of two definite states (0 or 1). Quantum computation uses quantum bits, which can be in superpositions of states. In contrast to classical computing which is based on classical Boolean logic, quantum computing is based on the Birkhoff-von Neumann quantum logic [10]. It is expected to improve computational power for particular tasks such as prime factoring, database searching, cryptography, and simulation. Various approaches are being developed but it is not yet clear which will have the best chances of success, nor the timescale required to develop a commercial product [11]. There has been recent experimental verification that quantum computation can be performed successfully [12]. The significance of quantum computing may be gauged by the recent interest in the area by Google, IBM, Microsoft and major research laboratories [13].
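As a toy illustration of superposition (a software sketch only, independent of any particular hardware approach), a single qubit can be modelled as a vector of two complex amplitudes; applying a Hadamard gate to the |0⟩ state yields an equal superposition of |0⟩ and |1⟩.

```python
# Toy single-qubit model: a state is a vector of two complex
# amplitudes over the basis states |0> and |1>.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # the definite state |0>

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0

# Measurement probabilities are the squared amplitude magnitudes:
# the qubit reads 0 or 1 with probability 0.5 each.
print(np.abs(state) ** 2)  # [0.5 0.5]
```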

3.2.3 DNA Computing

DNA computing carries information at the molecular level, using DNA to perform logic and arithmetic operations. Shapiro and Ran [14] have demonstrated that DNA molecules can be programmed to execute any dynamic process of chemical kinetics, and that they can implement an algorithm for achieving consensus between multiple agents. There is also the possibility of using nucleotides, and their pairing properties in DNA double helices, as the alphabet and basic rules of a programming language. Thus, DNA can represent both hardware and software, and can provide a direct interface for the digital control of nanoscale physical or biological systems. It can also use many different molecules simultaneously and therefore run computing operations in parallel.
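The pairing idea can be sketched in software (purely illustrative; actual DNA computing performs these operations chemically): since A binds to T and C binds to G, a strand fully determines its complement, which is the basic mechanism by which encoded information is recognized and "read".

```python
# Toy illustration of Watson-Crick pairing: each nucleotide binds
# to exactly one partner (A-T, C-G), so any strand determines the
# complementary strand that will hybridize with it.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the complementary strand, read in the same direction."""
    return "".join(PAIRING[base] for base in strand)

print(complement("ACGGTA"))  # -> TGCCAT
```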

3.2.4 Germanium

A new design for germanium nFETs that significantly improves their performance has been reported by Bourzac [15] and has generated interest in this technology.

3.2.5 Nanotubes

In theory, carbon nanotubes could be considerably more conductive than copper. They are also semiconducting. They therefore have the potential to replace silicon at the nanometer scale [16].

3.2.6 Neuromorphic Computing

Neuromorphic computing seeks to process information using artificial neural systems. Neuromorphic engineering is a new interdisciplinary subject that takes its motivation from the biological and natural sciences to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose structure and properties replicate those of biological nervous systems.
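As a minimal sketch of the kind of neuron model such systems realize (a standard leaky integrate-and-fire unit, simulated here in plain Python purely for illustration; neuromorphic hardware implements the same dynamics directly in circuitry):

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential
# leaks toward rest, integrates input current, and emits a spike
# when it crosses a threshold, after which it resets.
def simulate_lif(currents, dt=1.0, tau=10.0, threshold=1.0, v_reset=0.0):
    v = 0.0
    spikes = []
    for t, i_in in enumerate(currents):
        # Leak toward rest and integrate the input current.
        v += dt * (-v / tau + i_in)
        if v >= threshold:
            spikes.append(t)  # record the spike time
            v = v_reset       # reset after firing
    return spikes

# A constant input drives the neuron to fire at regular intervals.
print(simulate_lif([0.15] * 100))
```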

3.3 Post-WIMP User Interfaces

Post-WIMP interfaces seek to go beyond the paradigm of windows, icons, menus, and a pointing device. WIMP interfaces have traditionally suited 2D screens and 2D documents, because they operate analogously to handling physical 2D documents and diagrams. However, they are less well suited to 3D representations or interactive games, where the complexity of the image and text can obscure the interaction. Such applications normally benefit from customized interfaces that enable both representation and interaction to proceed in a beneficial and constructive manner; these interfaces are classed as post-WIMP. However, they are recognized as difficult and challenging to construct, since they involve not only technical aspects but also human perception, cognition, and social interaction. In addition, trade-offs may have to be made between rapid learning, fast performance, and low error rates.

The rationale for post-WIMP user interfaces is detailed by Gentner and Nielsen [17, 18] and van Dam [19].

3.4 Virtual Reality Interfaces

A joint European Union and National Science Foundation workshop on Human-Centered Computing, Online Communities, and Virtual Environments identified the following issues with regard to future interfaces:

7.2.13 Authoring and Development Environments

While visual programming tools, user interface toolkits and UI management systems have made the task of constructing WIMP GUIs significantly easier, they are limited to the well-understood, well-constrained set of 2D visual widget conventions. Also, they deal primarily with ‘look’ rather than sophisticated ‘behavior’ feel, which largely still has to be programmed explicitly. Building 3D widgets, let alone other UI components for the other senses, has no equivalent development/authoring environment, in part because the design space is so much larger, and in part because so little commonality has been found in post-WIMP UIs. Some (visual) authoring environments do exist for building 3D worlds (c.f. Jaron Lanier, VPL Body Electric), but those don’t help with the task of building multimodal UIs, let alone perceptual UIs, where even the component technology is still immature. We need to get beyond the point of handcrafting our post-WIMP UIs because it is a complex multi-disciplinary specialty too few developers will possess [20].

The workshop identified the following issues with regard to mobility:

7.2.17 Mobility: Not Just a Connectivity Issue; Heterogeneity

Future users will not be anchored to any particular place nor to any particular machine. They will, however, still be anchored to many of the same tasks - calculating, conceiving ideas, consuming and communicating. Moreover, they are going to want to continue to do all of these while they are moving about.

Devices carried by the user will enjoy greater or lesser degrees of connectivity depending on where the user moves. Consequently, the utility of a device for a particular task will increase or decrease depending on proximity to other elements of the computing environment. For example, the audio and video components of the UI will probably degrade when their connection switches from a wireless LAN to a longer-range cellular network. Nevertheless, these changes to the utility of the device must make intuitive sense to the user.

The fixed components of the computing environment must be similarly flexible. The mobile user will continue long-running tasks while moving from desktop to PDA to automobile to public kiosk. We desire that a single task seamlessly migrate through all of these environments. The task needs to adapt to a constantly changing set of UI capabilities throughout its duration, and this leads directly to the requirement for plasticity in the UI. A familiar health-related scenario, to some of us, is the pregnancy/labor/delivery task that, when augmented with computation and communication, still involves a home environment, followed by a mad dash via auto and a variety of hospital environments. The amount of decision making, information gathering, sensing and recording, not to mention the inappropriateness of conventional UIs, make this task a prime scenario for an interface that is mobile and flexible, and that makes no demands on the attention of the user.

Mobility also stretches our notions of ownership and membership. The most often used example is printing. Does a user gain access to a printer merely because of physical proximity? Generalize this example to all of the elements of computing, and one can begin to appreciate how ‘clunky’ the best of our current mobile services (e.g., ATMs and copy centers) really are [20].

3.5 Virtual Environments and Creativity

A virtual environment generates a 3D world and presents it to the user via a display in an enclosed headset, via a walk-in space such as a Cave in which the user is surrounded by a virtual representation, or via a combination of the real and artificial environments by means of augmented reality. Figure 3.2 shows a Cave environment.

Fig. 3.2

A Cave “CAVE Crayoland” by user: Davepape—own work (self-photograph using timer). Licensed under public domain via commons—https://commons.wikimedia.org/wiki/File:CAVE_Crayoland.jpg#/media/File:CAVE_Crayoland.jpg

The name is thought to be a reference to the allegory of the cave in Plato’s Republic, in which a philosopher contemplates perception, reality, and illusion, though that cave was a constrained environment compared to the real world. The user’s immersion in the Cave as depicted in Fig. 3.2 is thought to provide an additional sense of realism over and above that experienced by observing a 2D image, or even a 3D stereoscopic image with depth cues. It has also been suggested that immersing a human in an environment of this kind provokes a kind of “suspension of disbelief”: even though the displayed world is artificial, it feels more real because the observer feels they are a participant within it. Virtual environments have been used very successfully for flight simulators, and also for the presentation and simulation of a wide variety of objects and spaces. Do they offer any advantages in the design process? Research studies of architectural design in immersive virtual environments have demonstrated that designers perceive and understand volumes, spaces, and spatial relationships better than in 2D environments [21]. Virtual environments also assist in the exploration of 3D spaces and can provide realistic “walk-throughs” that give the user a direct experience of what a 3D building or object will look and feel like after it has been constructed. If some spaces prove too constricted for their envisaged use, the designer has the opportunity to modify them before the building is finalized and constructed. Thus, there is significant potential for producing an optimum design. In addition, color schemes and furniture can be trialled in the virtual building to determine what is most suitable for the purposes of the building.

3.6 Desktop Virtual Reality

Desktop-based VR enables a 3D virtual world to be displayed and explored on a conventional desktop monitor, rather than through an enclosed headset.

3.7 Virtual Reality Equipment

Table 3.1 provides some examples of relatively low-cost virtual reality equipment.

Table 3.1 Examples of virtual reality equipment

Google Daydream is a headset made from lightweight material into which a mobile phone is fitted [22].

3.8 Conclusions

The increasing power and reducing cost of digital technologies are bringing more speed, capacity, and connectivity to applications. They provide more opportunities for local, national, and international collaboration via networks and the Internet. Post-WIMP user interfaces can provide flexibility and innovation for a wide variety of applications, including those in art and design. The increasing availability of lower-cost virtual reality interfaces is providing new opportunities for artists and designers. Developments in computer games are driving many of the innovations in this area, which in turn can benefit art and design.
