Abstract
We discuss and present design probes investigating how pervasive displays could offer unique opportunities for enhancing discovery and learning with “big data.” Our collaboration across three universities undertook a series of design exercises investigating approaches for collaborative, interactive, tangible, and multitouch visualizations of genomic and related scientific datasets. These exercises led to several envisionments of tangible interfaces that employ active tokens and interactive surfaces to facilitate co-located and distributed engagement with large datasets. We describe some of the motivation and background for these envisioned interfaces; consider key aspects linking and distinguishing the designs; and relate these to the present and near-future state of the art for tangible and multitouch engagement with pervasive displays toward collaborative science.
1 Introduction
Both the first (2012) and latest (2019) “Pervasive Displays” conference venues framed pervasive displays in terms of a “new communication medium for public and semi-public spaces.” While a less common thread in this forum, the pursuit of science is also deeply concerned with communication in public and semi-public [1] spaces. For example, Francis Crick (co-recipient of a Nobel Prize for his contributions to the first characterization of DNA) asserted “communications is the essence of science” [2]. The context of this statement, and of DNA’s attributed early characterization, was problematic: both turned partly on Maurice Wilkins’ sharing of the transformative “Photo 51” (an X-ray diffraction image of crystallized DNA produced by Rosalind Franklin and her doctoral student) without Franklin’s approval or knowledge [3]. But Crick’s statement does speak to the frequently collaborative nature of modern science. Illustrative examples include the 5154 authors on a Higgs Boson paper [4]; more than 1000 authors on the paper reporting the LIGO consortium’s Nobel-winning observations of gravitational waves [5]; and (for one author of this paper) participation among more than 700 co-authors on two high-impact human genomics papers [6, 7] (with more than 5000 citations each).
In our experience, these large collaborative projects incorporate several facets and phases of “communication… in public and semi-public spaces.” Loosely framed in terms of “when what is communicated with whom,” early stages of scientific research can be seen as spanning a spectrum between private and semi-public. Even in the context of large consortium projects, students generally would share results with advisors and within their research group prior to sharing with wider audiences. An academic research lab could well be considered a semi-public space. Similarly, a broader research consortium (common for large science projects) that collaborates toward shared scientific ends also can be regarded as semi-public space.
Such work is often constrained by at least two forms of “embargo,” which mark one “red line” distinguishing “public” from “semi-public.” In genomics contexts, the Bermuda [8], Ft. Lauderdale [9], and Toronto [10] agreements govern conventions under which data, commonly made publicly available as it is generated, can be freely engaged but not published, sometimes for many years, without the consortium’s consent until the first major “marker” publication is realized [11, 12]. Press embargoes are also factors, but typically for much shorter periods [13].
Especially during pre-publication data embargoes, it is common for hundreds of researchers spanning dozens of institutions to collaborate vigorously for years. Tools that facilitate scientific dialog, going beyond emailed slides preceding voice conference calls, hold potential for high scientific impact. Collaborating institutions commonly bring differing disciplinary expertise, compounding the communication obstacles already posed by distance. This heterogeneity is often replicated in smaller form within individual research groups, both across seniority and across disciplinary focus (e.g., computational methods vs. basic natural science).
Once work is published, a new ecosystem of “semi-public and public spaces” can be seen to exist. Some of these are in the context of formal education, be it postgraduate, undergraduate, or K-12. Others engage “broader impacts” outreach efforts, be they through museums, non-classroom K-12 activities, or others. For both larger and more moderate government-funded scientific efforts (e.g., as with LIGO), these are often either encouraged or mandated as a condition of funding, toward eliciting greater engagement with students and the general public. Our genomics interests in particular carry stakes beyond basic “scientific literacy”: a more fundamental literacy that will reshape our medical care, and may even impact people’s ability to hold employment or to make prenatal or even preconception decisions [14, 15]. Thus, this is of profound relevance to all humans. Here, too, “new communication medium[s] for public and semi-public spaces” hold special potential for impact.
Crick’s statement about communications and science can also be partially viewed as a comment on the collaborative, facilitative role of physical manipulatives and (indirectly) tangibles [16,17,18,19,20,21]. Writing in the “Double Helix,” Watson discussed the crucial role of physical fabrication to the discovery of DNA. Watson described bringing sketches of amino acids to his machine shop collaborators, and (ca. 1953) waiting for days in anticipation of the machined metal pieces so that these could be used toward solving the three dimensional puzzle that grew into their double helix proposal [21, 22]. When their seminal article was published in Nature, Watson wrote of finally receiving “appreciation that our past hooting about model building represented a serious approach to science” [22].
(Physical molecular models are recognized as dating to ca. 1860 with the work of August Wilhelm von Hofmann [23], and have long been characterized as serving education and communication. The work of Watson and Crick, along with that of John Kendrew [24] in the late 1950s, is sometimes credited as producing among the first skeletal molecular models. These model-building efforts helped usher in decades of (e.g.) ball-and-stick physical molecular models, which have been successively complemented, but not replaced, by 2D and 3D graphical molecular models [25, 26]. We return to the intersection of interactive physical and virtual molecular building in Section 2.)
To connect these discussions to the pervasive display domain, one of our envisioned, partially prototyped interfaces is illustrated in Fig. 1. From the pervasive display vantage, we note the relative density and (in some respects) heterogeneity of the interactive displays depicted. Figure 1 represents (at least) one large, three medium, and eight small interactive displays manipulated by several individuals within a relatively small (several square meter) area. Similar technological resources might span a much larger extent within one physical site, and might bridge synchronous or asynchronous interactivity with interfaces at other physical sites (whether pairwise, tens, or even many thousands).
From 2012 to 2014, our collaboration across three universities undertook a series of design exercises investigating how we might realize interactive systems at the intersection of pervasive displays, tangible interaction, computational genomics, and collocated + distributed [1] collaboration. These efforts included development of two partially prototyped envisionments (overview images in Figs. 1 and 2) that began to flesh out specifics of how such interfaces might be brought into practice. In this manuscript, we consider some of the background and related work—from technological, scientific, and genomic vantages—which shaped these efforts. We then introduce the two envisionments, engaging their content, rationale, tradeoffs, and use cases. We conclude with a discussion and consideration of future work.
2 Background and related work
One major challenge in computational genomics relates to the scale of datasets. Many genomic research efforts involve the study of multiple genomes. Today this may involve a thousand or more genomes in parallel, each containing billions of DNA base pairs; soon, such scenarios may involve millions of genomes. There is a need for new computational tools that both support analysis and facilitate meaningful interactive engagement with these vast datasets. Present interaction tools for computational genomics rarely venture beyond traditional graphical interaction techniques, thus missing the latent potential of alternate interaction paradigms.
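To make this scale concrete, a back-of-envelope estimate (with assumed, illustrative numbers, not figures from this work) for raw sequence storage alone:

```python
# Illustrative scale estimate: a cohort of 1,000 human-scale genomes at
# ~3 billion base pairs each, stored naively at 2 bits per base (A/C/G/T).
genomes = 1_000
bases_per_genome = 3_000_000_000
bits_per_base = 2

total_bytes = genomes * bases_per_genome * bits_per_base // 8
print(f"{total_bytes / 1e12:.2f} TB")  # 0.75 TB of raw sequence alone, before
                                       # quality scores, alignments, or annotations
```

Scaling the same arithmetic to millions of genomes pushes raw sequence alone toward the petabyte range, before any derived data.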
Tangible and embodied interfaces (TEI) offer unique opportunities for enhancing the practice of computational genomics [14]. However, TEI research has not yet addressed scale and complexity of this magnitude. Understanding how to support more complex computations and flexible/scalable mappings between input and output has been recognized as an important challenge area to move the field forward [31].
Many arguments supporting tangible interfaces have been made, including cognitive, pedagogic, esthetic, kinesthetic, and cultural [19, 32,33,34]. In parallel, important limitations for tangible interfaces remain. Challenges include the development of interfaces that go beyond one-to-one mapping and provide means for searching, comparing, and sharing big data. Which representations are appropriate for large volumes of abstract data? What interaction techniques could facilitate fruitful exploration of big data? How can we effectively combine representations and manipulations to potentially reduce the mental workload associated with handling big data? Furthermore, as communication is frequently key to the success of genomic investigations, how can we best manage work across multiple co-located users, given the complex workflow and broad temporal range of interactions (from seconds to years)?
Direct touch has become a standard input method for tangible and multitouch interfaces. Yet in data-intensive applications, representations are typically small [35]; here, finger size and occlusion make direct interaction difficult [35,36,37]. Likewise, the WIMP-style control elements provided by various multitouch toolkits, such as scrollbars, sliders, checkboxes, and text fields, may often be either too small for effective and accurate touch interaction, or else consume scarce screen real estate [35, 38].
Several studies have considered novel multitouch interaction techniques for data-driven applications [35, 37,38,39]. While providing advantage over touch interaction with WIMP-style controls, multitouch gestures often suffer from low discoverability and lack of persistence [35]. We considered an alternative approach: exploring large data sets on multitouch and tangible surfaces using tangible interaction with active tokens, complemented by multitouch and gestural interaction.
Active tokens are programmable physical objects with integrated display, sensing, or actuation technologies [27, 40, 41]. Thus, they can be reconfigured over time, allowing users to dynamically modify their associations with datasets or controls. Users can thereby choose and evolve appropriate tools over successive stages of (e.g.) scientific workflows. Active tokens can also be arranged in various spatial configurations, utilizing physical syntax to represent complex information workflows. The majority of tangible interfaces to date have, from a human sensory perspective, employed passive physical tokens.
While these artifacts have often been embedded with various forms of tags and sensors, mediation has typically been via active surfaces illuminated internally, from beneath, or from above. Although passive tokens can support perceptual coupling of bits and atoms [42] or “coincidence of input and output space” [43] while on such surfaces, they are often perceptually divorced from their digital associations when in hand (above a surface) and in reserve (on or outside of surface bezels). Especially in big data domains, the number of available tangibles is likely to be dwarfed by their potential range of digital bindings (or “cyberphysical associations”). Active tokens hold the potential to address these and other important limitations.
Here, we focus on a subclass of active tokens that can be manipulated both within mechanical constraints and, using gestures, independently of such constraints. These kinds of active tokens enable the expansion of tangible interaction with multitouch and tangible surfaces beyond interaction on the surface into less explored areas such as interaction on the bezel and in air, hovering above or in front of the surface. Expanding interaction with active tokens beyond the surface could free much-needed real estate for visual data representations, among other potential benefits.
Ubiquitous computing has long highlighted interaction between multiple interactive devices of different form factors. Weiser et al.’s seminal work integrated vertical and horizontal “boards,” “pads,” and “tabs” of large, medium, and small form factor [44]. Rekimoto et al.’s multiple-device interactive surface research further illustrated how such ensembles and ecologies of interactive devices could interoperate [45]. Below, we also discuss a number of systems developed specifically to support collaboration.
2.1 Computational genomics and big data
While TEIs designed for large data sets can apply to many different areas, we have chosen computational genomics as a target domain for our research for several reasons. Advances in genomic technologies have transformed biological inquiry and have begun to revolutionize medical practice to offer much-improved healthcare [46, 47]. For example, cancer treatment is now often individually tailored toward the genetics of the cancer, highlighting the potential of precision medicine and providing a glimpse into the future of medical treatment. Also, genomic and biological technologies are positioned to address some of the most pressing problems of our times, including food and clean water shortages, as well as increased demand for alternative energy sources [39]. Further, the field of genomic technologies has opened new interfaces between biology and computer science, fueling fields such as bioinformatics that enable biological questions to be tackled computationally [46], and creating a new frontier for human-computer interaction [48].
Resonant with broader evolutions in science [49, 50], the study of genomes now engages theory, experimentation, and computation on equal footing. The combination of advanced genomic technologies (e.g., high-throughput DNA sequencing) and powerful computational tools has facilitated biological investigations in previously impossible manners and scales [51]. No longer limited to small-scale analyses (e.g., of a few genes or specific genomic regions), researchers now often conduct large-scale experiments where information from multiple genomes is measured, recorded, analyzed, and stored. The bottlenecks and challenges along the path to transforming the “big data” generated by these experiments into biological insights have shifted from data generation to data analysis [14, 46]. These have highlighted the need for new computational tools that facilitate effective, meaningful, collaborative analyses.
2.2 TEI systems for scientific understanding
A number of systems illustrate possibilities for supporting scientific discovery and higher education with TEI. Brooks et al. [52] developed the first haptic display for scientific visualization. Gillet et al. [53] presented a tangible user interface for molecular biology that used augmented reality technology to view 3D molecular models. Schkolne et al. [54] developed an immersive tangible interface for the design of DNA molecules. Grote et al. developed a tangible user interface for biodesign that supports a scientific workflow that requires the exploration of large datasets through the construction of complex queries [28]. While these systems highlight potential benefits of TEI for scientists, they mostly focus on the representation of objects with inherent physical structure. We are interested in a broader use case, where abstract information (for which no intrinsic spatial representation typically exists) is represented and manipulated.
Several projects investigate augmented capture and situated access to biological data. Labscape [55] is a smart environment for cell biology labs. ButterflyNet [56] is a mobile capture and access system for field biologists. Mackay et al. and Tabard et al. [57, 58] explore the integration of biologists’ notebooks with physical + digital information sources. While these systems demonstrate the feasibility of augmenting experimental workflows, our focus in these efforts has been upon transforming data into insights.
Other systems have been developed to facilitate collaboration among co-located teams of scientists across large displays and multitouch tables. WeSpace [59] integrates a large data wall with a multitouch table and personal laptops. TeamTag [60] allows biodiversity researchers to collaboratively search, label, and browse digital photos. Isenberg et al. studied collaborative visual analytics [61]. eLabBench [58] investigated tabletop interfaces as interactive wet lab benches. Kuznetsov et al. explored the development of artifacts for supporting DIYbio [62].
Other related resources include coordination policies and guidelines for co-located groupware [63, 64] and evaluation methodologies for collaborative environments (sometimes explicitly within CSCW contexts) [65,66,67]. Westendorf et al. introduced a methodology for studying how groups of eight users collaborate around a large-scale interactive tabletop during a data exploration task, which combines traditional video-coding methods with novel computational methods that leverage image processing to analyze collaboration around large-scale tabletops [68]. Van der Meulen et al. [69] presented a method for exploring the visual behavior of multiple users engaged in a collaborative task around a large interactive surface by synchronizing input from multiple eye trackers to identify joint attention across multiple users.
TEI systems have also demonstrated potential to support science education. Those most relevant to genomics include Augmented Chemistry [70], a tangible user interface for chemistry education; Involv [71], a tabletop interface for exploring the Encyclopedia of Life that shares our challenge of creating effective interaction techniques for large data spaces; PhyloGenie [72], a tabletop interface for collaborative learning of phylogeny through guided activity; and BacPack [73], a tangible museum exhibit that engages visitors in a playful bio-design activity—engineering bacteria for sustaining life on Mars. In contrast to these works, we are interested in the development of interfaces that empower both expert and novice researchers to conduct open-ended hands-on inquiry.
The end-user development literature also holds broad relevance to our envisionments [74,75,76]. Whether the end-users are scientists, students, librarians, or bearing countless alternate perspectives, interaction with both of our envisionments might frequently evolve between (e.g.) parametric engagement with a pre-described set of genomic content, and more fundamental user-driven reconfiguration of the cyberphysical system’s constitution and function. Similarly, end-user development could impact both the physical manifestations of such systems (as with the physically representational tokens of Figs. 2 and 3) and (whether implicitly or explicitly) the digital rebinding and associated screen, illumination, and/or digital shadow updates of the more generic tokens within both envisionments.
2.3 Tangible genomics envisionments
With this background, we turn to several envisionments of prospective tangible genomics interaction environments.
2.4 Elaboration on 2012 and 2014 envisionments
2.4.1 2012 envisionment
Our first scenario and envisionment engaged a comparative genomics analysis of different primate genomes (Figs. 2 and 3). Repetitive sequences are abundant in primate genomes; many are nearly identical to each other.
In Figs. 2 and 3a, 3D models of primates represented the associated primate genomes. Rectangular blocks surfaced with University logos represented collaborating partners and were used both to invoke and manipulate video links to the partners on the vertical displays, as well as control horizontal display overlays associated with remote partners. Extruded, skewed cubical tokens represented datasets. Truncated cylindrical tokens (highlighted in Fig. 3b) represented specific dataset parameters. Rectangular, detented-tab tangibles represented analysis workflows (Fig. 3c). Rectangular, physically slotted tangibles allowed passively constrained, back-illuminated manipulation of parameters. University ID cards were used as authentication credentials. Smartphones, Sifteos, and tablets were envisioned as providing more open-ended, dynamically evolving representations of system state—sometimes tangibly constrained and manipulated (esp. with Sifteos), and other times using more traditional and legacy multitouch interaction approaches. Interaction was envisioned as taking place on a multitouch visualization spreadsheet [77], itself containing a mixture of graphical and tangible elements (often highlighted and interpreted with varying digital shadows [78]), flanked by parameter manipulation workspaces on the left and right edges of the interaction table.
In Figs. 2 and 3, a type of repeat is selected with the truncated-cylinder parameter tokens. All sequences satisfying a certain threshold of similarity are selected from the UCSC genome browser, a common resource used in genomics. The sequences are bound to the parameter tokens and visually compared for presence or absence in a different primate genome (e.g., chimpanzee), where the primate tokens represent the other genome. The intersection of the datasets is visualized, allowing results and next steps to be discussed with collocated and distributed collaborators.
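The selection-and-intersection workflow just described can be sketched in a few lines. This is a hypothetical illustration: the repeat identifiers, similarity values, and the set-based presence/absence model are invented for clarity, and a real analysis would query annotation tracks (e.g., from the UCSC genome browser) rather than in-memory lists.

```python
# Hypothetical sketch of the envisioned workflow: filter one genome's repeat
# annotations by a similarity threshold, then check which selected repeats
# have a counterpart present in a second genome. Data format (repeat id,
# percent identity to the family consensus) is illustrative only.
def select_repeats(annotations, min_identity):
    """Keep repeats whose identity to the family consensus meets the threshold."""
    return {rid for rid, identity in annotations if identity >= min_identity}

def presence_absence(selected, other_genome_repeats):
    """Partition selected repeats by presence or absence in the other genome."""
    shared = selected & other_genome_repeats
    return shared, selected - shared

human = [("AluY_1", 98.5), ("AluY_2", 91.0), ("AluSx_1", 88.2)]
chimp = {"AluY_1", "AluSx_1"}

selected = select_repeats(human, min_identity=90.0)
shared, human_specific = presence_absence(selected, chimp)
print(shared, human_specific)  # {'AluY_1'} {'AluY_2'}
```

In the envisionment, the threshold and genome choices above are exactly the quantities bound to the parameter and primate tokens, so that turning a token re-runs this kind of selection.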
2.4.2 2014 envisionment
A large fraction (e.g., > 50%) of most eukaryotic genomes (including those of humans and other primates) is occupied by “jumping genes” (transposable elements). These elements move, and often multiply, with every transposition event. Once inserted, there is no active mechanism to remove them from the genome. Over time, elements accumulate mutations; the number of mutations per element can be used to estimate the age of that element, as shown in the envisionment.
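The mutation-based dating described here is conventionally approximated as age ≈ divergence / neutral substitution rate. A minimal sketch follows; the substitution rate (~2.2 × 10⁻⁹ per site per year, a commonly cited primate ballpark) and the example element are assumptions for illustration, not values from this work.

```python
# Illustrative dating of a transposable element copy from its divergence to
# the family consensus: age ≈ divergence / neutral substitution rate.
def element_age_years(mismatches, aligned_sites, rate_per_site_per_year=2.2e-9):
    divergence = mismatches / aligned_sites      # substitutions per site
    return divergence / rate_per_site_per_year

# An element with 6 mismatches over a 300 bp alignment is 2% diverged,
# suggesting an insertion on the order of ~9 million years ago.
age = element_age_years(mismatches=6, aligned_sites=300)
print(f"{age / 1e6:.1f} Myr")  # 9.1 Myr
```

This is the computation that a full-length-threshold token would implicitly parameterize: tightening the divergence cutoff selects younger element cohorts.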
Figures 1, 4, and 5 illustrate a prospective interaction comparing Platy-1 mobile elements (a recently discovered mobile element specific to New World monkeys [79, 80]) within the marmoset genome. Rotating physical tokens, or engaging finger-constraining interactions in the empty token wells, were envisioned as allowing the addition or changing of genomes, targeting of mobile elements, and assignment of full-length thresholds. For example, interaction with the knobs can be used to select and intersect datasets for downstream analyses. Additional visualizations, primer design, computational analyses, and other actions would be physically or graphically invoked and parametrically controlled through similar interactions. Active tokens could be lifted from the workspace and held, placed, or exchanged with other users to support varying styles of epistemic cognition [81]. Tokens might be virtually or physically brought to (e.g.) a laptop for manipulation in a conventional spreadsheet, or to a wall-scale display for presentation use.
2.4.3 Physical and virtual elements of envisioned interaction workspaces
Both of our envisionments have been framed in the context of a several square meter workspace. Each could easily (and perhaps preferably) be room-spanning. As each was anticipated to be installed within at least three different university contexts, we tended toward self-contained prototypes. Both systems reflect our interests in integrating mass-market commodity devices, including several technologies specific to the period. For example, our research programs had each engaged Sifteo Cubes [82,83,84,85,86]. These “cubes” (in actuality, 1.5″ × 1.5″ × ¾″) contain touch screens and are motion- and proximity-aware. While designed as gaming devices, Sifteo’s platform went open source, allowing the development of non-game content. Sifteo Cubes use gestures—including tilt, shake, neighbor, press, wiggle, slide, flip, and stack—as modes of interaction.
While a compelling platform for continuing work—in our case, also incorporating special capabilities coordinated with the manufacturer—the Sifteo technology was acquired and discontinued. By the time of Fig. 1’s creation, we had resolved to consider (e.g.) smart watches and small form factor smartphones as alternatives. Similarly, Fig. 2 centers around the form factor and functional properties of the Microsoft PixelSense/Samsung SUR40 device. The SUR40, too, was short-lived, and was excised from Fig. 1.
The name of our envisioned system, Tabula, was used for roughly 1000 years as a term for medieval European “counting tables”—a calculating approach somewhat reminiscent of the abacus and a predecessor to computational spreadsheets. The information visualization spreadsheet concept [77] also seemed congruent to both multitouch and tangibles use.
We sought to provide paths for employing tangibles to represent the “key objects of interest” [19, 21]. Several specific planned variations included the following:

- Some tangibles are used to represent data; others, tools;

- Some proposed tangibles are passive; others, active (e.g., incorporating sensing and displays);

- Some tangibles are physically representational (e.g., Fig. 3a, representing different kinds of primates); others, visually representational (Fig. 3b, representing different campuses); others, physically and visually abstract;

- Some interactive elements are physical, others virtual. For example, Fig. 3b illustrates both “hard” (physical) and “soft” (virtual) tokens representing different campuses.
Rather than expecting all aspects of the interface to be physically embodied, we instead envisioned many system facets at different stages flowing between representation in physical and virtual forms. Thus, we sought to take advantage of digital malleability and proactivity evident in (e.g.) predictive web search, while also engaging the benefits of tangible interaction.
2.4.4 Prospective elements
Both in research and teaching labs and in the classroom, we envisioned Tabula engaging ~ 6–12 active tokens, and one or several interactive surfaces. Each active token could take on various functional bindings. Several prospects are summarized in Table 1.
Active tokens were envisioned to combine with constraint cartouches [87] in several ways. First, they could be bound to different associations manually. For example, using a two-handed interaction, a user might touch a binding on a tablet or tabletop with one hand, and depress a target token with the other hand. Second, active tokens could be manipulated within constraints to operate upon token bindings. In Table 1’s examples, rotating token no. 1 could select between several available primate genomes (marmoset, gibbon, etc.); rotating no. 6, expressing a mobile element’s full-length threshold (a process which typically requires iterative manipulation to parametrically select anywhere from a handful to hundreds of thousands of target elements). In addition to passive haptic feedback from turning the token, we envision providing active visual feedback on the token itself, on the backing interactive surface, and on a proximal vertical display. These are intended to support evolving views by multiple collaborating users.
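As a rough sketch of the rebinding and rotation-within-constraint semantics described above, one might model each active token as carrying a mutable binding plus a detent-stepped value list. All names below are invented for illustration; no such implementation is claimed in this work.

```python
# Hedged sketch of the envisioned token semantics: an active token carries a
# rebindable association (e.g., "genome" or "full-length threshold"), and
# rotating it within a constraint steps through that binding's value list.
class ActiveToken:
    def __init__(self):
        self.binding = None
        self.values = []
        self.index = 0

    def bind(self, binding, values):
        """Rebind the token, e.g., via the two-handed touch interaction."""
        self.binding, self.values, self.index = binding, list(values), 0

    def rotate(self, detents):
        """Turn the token by a number of detents; wraps around the value list."""
        self.index = (self.index + detents) % len(self.values)
        return self.values[self.index]

token1 = ActiveToken()
token1.bind("genome", ["marmoset", "gibbon", "human", "chimpanzee"])
print(token1.rotate(2))    # human
print(token1.rotate(-3))   # chimpanzee (wraps around)
```

In the envisioned system, each `rotate` call would additionally drive the active visual feedback described above, on the token itself, the backing surface, and a proximal vertical display.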
3 Discussion and future work
In this manuscript, we have introduced two phases of a case study regarding envisionments of computational technologies toward investigating the potentials for complex pervasive display systems. These have included ecologies of large and small displays; horizontal and vertical displays; and mass-market, niche-market (e.g., gaming), and custom devices. We have also investigated and developed early proposals relating to prospective mappings for ensembles of active and passive tangibles toward diverse computational bindings in complex scientific (and specifically genomic) contexts.
In some respects, the 2014 variation of Fig. 1 expresses a subset of the representational forms and proposed functions of Fig. 2’s 2012 version. Where the 2012 version was targeted toward distributed collaboration, the 2014 version was focused on co-located interaction. Where the 2012 version incorporated both physically representational and abstracted tangibles, the 2014 version was populated primarily with abstracted tangibles. And where the 2012 version anticipated ambitious use of the tabletop display, this was removed from the 2014 version.
That said, to our knowledge, both the 2012 and 2014 interface envisionments illustrated and aspired to a more ambitious set of digital functionality (at minimum, within the context of computational science) and diversity of integrated displays than any tangible interface realized to date of which we are aware. One interpretation is a platform/content tension alluded to by Ansoff [88], articulated by Merrill, and elaborated within [34]. Implementing the hardware alone of Fig. 1 or 2 is an ambitious proposition, as would be the software alone. Especially with the resources of academic contexts, a direct ad hoc de novo creation of the full hardware and software ecosystem is likely to be challenging and fraught. Our team realized this and sought to position existing software environments like Galaxy [89,90,91]—an open-source environment for genomic analyses targeted toward non-programmers—and platforms like Sifteo and PixelSense [85, 92]. But we also noted a relatively wide functional and API gap between Galaxy and our needs. Moreover, the Sifteo and PixelSense platforms were already in rapid decline, and Sifteo currently remains without a commercially available successor.
In our 2014 envisionment, our team identified tangible reinterpretations of smart watches as active tokens (including stackable semantics and modified, custom-fabricated active bezels) to be one promising vector. To our knowledge, this was without precedent prior to our 2014 proposal. Some of us have pursued this further [93,94,95,96,97], resulting in the deployment of numerous systems. Some of us have also partially developed active tokens utilizing ePaper and NeoPixel rings, which could offer a complementary platform. In all cases, the rapid turnover of hardware platforms (as with smart watches) and the resource demands of platform development remain significant obstacles. The creation of interoperable virtual editions—both on 2D screens and in VR environments—remains one attractive path, if partly as a bootstrapping vector. At the same time, we anticipate that a careful balance must be struck. If, as in our passage from the 2012 to 2014 prototypes (or, in another example, the evolution of the Urp tangible interface from initial to classroom-deployed form [98, 99]), there is too much functional and representational dilution, the result may be insufficiently compelling to attract and sustain use and development.
Our proposed composition of large horizontal and vertical displays, tablets, smartphones, and smartwatch-based tangibles also remains a fertile one. One of our recent systems shares these elements, including some of our originally envisioned abstract and genomics-specific semantics and visualizations [94]. Recently, Brudy et al. presented an analysis and taxonomy of a corpus of 510 papers from the cross-device computing domain [100]. We suspect other such systems, both within and outside the genomics domain, will continue to bear fruit in time as well.
3.1 Tangible interaction prototyping with many-device high-fidelity simulations
The prototyping efforts we described took several forms. Our 2012 iteration involved still renders within a 3D modeler (SketchUp), including simulated screen states (as textures produced with CorelDRAW). Our 2014 iteration involved compositing photographs of physical prototypes with simulated screen states.
While facilitating much more concrete discussion and reflection than (e.g.) hand-sketched storyboards, neither of these approaches supports interactivity. Since these efforts, several technology industry developments have enabled compelling new alternatives. Consumer VR devices have become widely available, at low cost and relatively high (visual) fidelity. Device simulators and emulators are progressively more widespread, including for Sifteo Cubes and iOS and Android watches [1, 2, 3]. Some of these run within web browsers, allowing them to be more readily integrated into interactive 3D simulations (e.g., via Web browser assets in Unity). Realtime lighting simulation within complex 3D models also now exists, including within free modeling programs with AR+VR support (e.g., Eevee rendering in Blender XR). (Realtime lighting has implications for engaging with, e.g., LEDs with complex illumination behaviors.)
Taken together, these allow both passive and partially active tangibles to serve as proxies for several or dozens of active tokens—including for devices like Sifteo Cubes that are not presently commercially available. With the full Sifteo software stack publicly hosted on GitHub [], were a compelling use to be fleshed out, the product would hold prospects for reincarnation—whether in its original or a more compact form, likely in either case with greater performance and lower cost.
Simulated or emulated active tokens have strong potential not only for prototyping but also as a primary interaction genre. Interaction support has been regarded by many as a weakness for VR systems in particular. VR-mediated active tokens hold potential for substantially reduced cost, easier replication, and substantially heightened interaction support within VR environments.
In the present and near future, we see several promising prospects for catalyzing the creation of such functionalities. New mediation technologies such as LightCrafter (used for positional sensing by Zooids [101]), in combination with active tokens, could provide paths for tabletop mediation and sensing with newly compelling capabilities and economics. The combination of small, inexpensive ePaper modules and embedded computers (e.g., Wi-Fi-integrated Arduinos), together with 3D printing, could change some of the platform dynamics underlying active tokens. The rapid growth of and investment in VR, combined with compelling complementarities between VR and tangible interfaces [102], could drive the creation of software platforms that accelerate the construction of such environments.
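To make the notion of networked active tokens slightly more concrete, the sketch below illustrates one way an ePaper/Arduino-class token might report its state (identity, displayed content, and stacking relationship) over Wi-Fi. The wire format, field names, and functions here are entirely our own illustration under stated assumptions, not an existing protocol or library API.

```python
# Illustrative (hypothetical) wire format for an active token's state
# broadcast, e.g., as a UDP datagram payload on a local Wi-Fi network.
# All field names here are assumptions for illustration only.
import json

def encode_state(token_id, display_page, stacked_on=None):
    """Serialize a token's state: which token it is, which ePaper
    'page' it currently displays, and which token (if any) it sits on."""
    return json.dumps({"id": token_id,
                       "page": display_page,
                       "stack": stacked_on}).encode("utf-8")

def decode_state(payload):
    """Recover the token state dict from a received datagram payload."""
    return json.loads(payload.decode("utf-8"))

# A token stacked atop another, displaying a genomics-related view:
msg = encode_state("token-03", "variant-summary", stacked_on="token-01")
state = decode_state(msg)
print(state["id"], state["stack"])
```

A mediating tabletop or wall display could aggregate such messages to reconstruct stack compositions of the kind our envisionments employ; the point of the sketch is only that the state such tokens must convey is modest, and well within the reach of inexpensive embedded hardware.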
Continuing evolution in the smartwatch space—e.g., decreases in the cost of high-function legacy devices, perhaps as the inbuilt batteries of Apple Watches fail, or as Android variants gain functionality and traction—could reshape the active token landscape. Decreasing costs and new technologies in the sensate large-screen landscape are another driver. The accelerating trajectories of both personal genomics and genomics within academic research—and the corresponding demands for, and software platforms enabling, new interactive modes of engaging genomics—would be a powerful complementary driver as well. In a final variation, just as low-level protocols like TUIO [103,104,105] achieved substantial impact and uptake, higher-level sister APIs—perhaps initially in the context of games, music, and other mass drivers—could substantially ease system development for ambitious tangible pervasive displays.
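The gap between a low-level protocol like TUIO and a higher-level token API can be made concrete with a small sketch. Per the TUIO 1.1 specification, a tracker reports tangible objects via `/tuio/2Dobj` "set" messages carrying a session ID, a fiducial/class ID, normalized position, angle (in radians), velocities, and accelerations. The `TokenEvent` type and parser below are our own minimal sketch of what a higher-level layer might expose, not part of any existing TUIO library.

```python
# Minimal sketch: translating a low-level TUIO 1.1 /tuio/2Dobj "set"
# argument list into a higher-level "token" event. TokenEvent and
# parse_2dobj_set are illustrative names of our own, not a real API.
from dataclasses import dataclass
import math

@dataclass
class TokenEvent:
    session_id: int   # per-appearance ID assigned by the tracker
    marker_id: int    # fiducial/class ID: *which* physical token
    x: float          # normalized position in [0, 1]
    y: float
    angle_deg: float  # rotation, converted from TUIO's radians

def parse_2dobj_set(args):
    """Parse ('set', s, i, x, y, a, X, Y, A, m, r) per TUIO 1.1,
    keeping only the fields an application-level API might surface."""
    if args[0] != "set":
        raise ValueError("not a 'set' message")
    s, i, x, y, a = args[1:6]
    return TokenEvent(session_id=s, marker_id=i, x=x, y=y,
                      angle_deg=math.degrees(a))

event = parse_2dobj_set(("set", 12, 7, 0.25, 0.5, math.pi / 2,
                         0.0, 0.0, 0.0, 0.0, 0.0))
print(event.marker_id, round(event.angle_deg))  # token 7, rotated 90 degrees
```

A higher-level sister API of the kind we suggest would sit above such parsing, mapping marker IDs to application semantics (e.g., a dataset, filter, or collaborator) so that tangible pervasive display systems need not handle raw protocol traffic at all.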
3.1.1 A broader view
In 1984 [106] and 1979 [107], tangibles pioneers Robert Aish and Peter Noakes presciently wrote:
1984: It is not suggested that a layman using this system will create a ‘better’ building than an inventive architect or engineer working with a pencil, paper and a pocket calculator. Yet buildings, are not, and do not have to be, totally rational objects. Architecture is a mixture of subjective and objective decision and the designer, knowingly or unknowingly, makes ‘tradeoffs’ between aesthetic and engineering attributes. Architecture without numbers is just a more effective way of making these decisions and observing these tradeoffs. ….
This can be expected to develop into a greater understanding by both professional and laypeople of the complex underlying relationships which exist between design, performance, and perceptual variables that characterize architectural design.
1979: … as the two applications illustrated, rather than replacing conventional computer graphics, [the tangible interface] will complement the graphical channel of man-machine communication…. It is suggested that the most important potential contribution of the building block system will be to enhance the relationship between the client and the design team. Using [the tangible interface]… it will be possible for the architect to directly demonstrate to clients the advantages of alternative schemes….
Alongside excerpts from Jonathan Swift’s 1726 “Gulliver’s Travels” regarding the Sages of Lagado, we find these among the most insightful passages written about tangible interfaces in general, and their aspirational prospects for tangible genomics in particular. At the risk of verbosity, we consider a recasting of the above text into aspirations for the tangible genomics envisionments we have developed and discussed:
It is not suggested that a member of the general public using such a system will make ‘better’ genomic inferences or decisions than a skilled physician or bioinformatician working with laptop, tablet, and PC graphical interfaces. Yet genomes and humans are not, and do not have to be, totally rational objects. Human genomics and genetics is a mixture of subjective and objective decision, and even the genomics professional, knowingly or unknowingly, makes ‘tradeoffs’ between profoundly varying genomic expressions, ranging from the somewhat “known” to the often fundamentally unknown. Tangible genomics is just a more effective way of making these decisions and observing these tradeoffs. ….
This can be expected to develop into a greater understanding, by both professionals and laypeople, of the complex underlying relationships which exist between genomic sequence, biological and health expression, and the epigenetic and environmental factors that characterize human genomics.
… as the two envisionments illustrated, rather than replacing conventional computer graphics, [the tangible interface] will complement the graphical channel of man-machine communication…. It is suggested that the most important potential contribution of tangible genomics will be to enhance the relationships of patients and the general public with health professionals, scientists, and policy makers. Using [the tangible interface]… it will be possible for each of these parties to directly demonstrate to one another the different anticipated, and known unknown, implications of alternative schemes….
In the four decades since Aish built his first tangible architectural interfaces and expressed these aspirational anticipations, the impacts of graphical CAD upon architecture have diffused and democratized profoundly. But tangible expressions of architectural CAD systems in 2019 remain rudimentary and (to our knowledge) minimally deployed.
Even so, the trailing paragraphs of [42] regarding hypertext in the early 1990s remain deeply inspirational and aspirational to us. After comparable decades of steady progress, the field of hypertext blossomed from an academic curiosity into a profoundly transformative force that has radically impacted the full span of human endeavors. The field of genomics is now poised to transformatively impact, and perhaps reshape, fundamental aspects of the human condition; and the field of tangibles is, in our view, in a position today comparable to that of hypertext in 1991. As Aish aspired and anticipated for architecture, we hope both near- and farther-future evolutions of tangible genomics will enable scientists, students, senators, street people, and far beyond to enter deeply into dialogs that will reshape the future and fate of both humanity and our many sister species.
Notes
This manuscript draws in significant part from a same-authored ACM Pervasive Displays 2019 conference proceedings paper.
References
The 1000 Genomes Project Consortium (2012) An integrated map of genetic variation from 1,092 human genomes. Nature 491:56–65
Garvey WD (2014) Communication: the essence of science: facilitating information exchange among librarians, scientists, engineers and students. Elsevier. ISBN 9781483182070
Due credit (2013) Nature 496:270
ATLAS and CMS Collaborations (2015) Combined measurement of the Higgs boson mass in pp collisions at √s = 7 and 8 TeV with the ATLAS and CMS experiments. Phys Rev Lett 114(19):191803
LIGO Scientific Collaboration and Virgo Collaboration (2016) Observation of gravitational waves from a binary black hole merger. Phys Rev Lett 116(6):061102
The 1000 Genomes Project Consortium (2010) A map of human genome variation from population-scale sequencing. Nature 467(7319):1061–1073
Konkel MK, Ullmer B, Shaer O, Mazalek A (2019) Envisioning tangibles and display-rich interfaces for co-located and distributed genomics collaborations. In: Proc. of Pervasive Displays, vol 2019
Marshall E (2001) Bermuda rules: community spirit, with teeth. Science 291(5507):1192–1192
Sharing data from large-scale biological research projects: a system of tripartite responsibility. (Wellcome Trust, 2003); available at http://www.wellcome.ac.uk/stellent/groups/corporatesite/@policy_communications/documents/web_document/wtd003207.pdf
Toronto International Data Release Workshop Authors (2009) Prepublication data sharing. Nature 461:168
Nanda S, Kowalczuk MK (2014) Unpublished genomic data–how to share? BMC Genomics 15(1):5
Koboldt D (2013) Data sharing, embargo, and big science. MassGenomics Available from: http://massgenomics.org/2013/06/data-sharing-embargo.html
Siegel V (2016) The logic of journal embargoes: why we have to wait for scientific news. The Conversation. Available from: http://theconversation.com/the-logic-of-journal-embargoes-why-we-have-to-wait-for-scientific-news-53677
Shaer O et al (2013) From big data to insights: opportunities and challenges for TEI in genomics. In: Proc. of TEI’13, pp 109–116
Baldi P et al (2011) Countering GATTACA: efficient and secure testing of fully-sequenced human genomes. In: Proceedings of the 18th ACM conference on Computer and communications security. ACM, Chicago, pp 691–702
Schneider B, Sharma K, Cuendet S, Zufferey G, Dillenbourg P, Pea RD (2015) 3D tangibles facilitate joint visual attention in dyads. In: Proc. of CSCL 2015
Schneider B et al (2011) Benefits of a tangible interface for collaborative learning and interaction. IEEE Trans Learn Technol 4(3):222–232
Klemmer SR et al (2001) The designers’ outpost: a tangible interface for collaborative web site. In: Proc. of UIST’01, pp 1–10
Shaer O, Hornecker E (2010) Tangible user interfaces: past, present, and future directions. Found Trends Hum-Comput Interact 3(1–2):1–137
Everitt KM et al (2003) Two worlds apart: bridging the gap between physical and virtual media for distributed design collaboration. In: Proc. of CHI ‘03, pp 553–560
Ullmer B (2002) Tangible interfaces for manipulating aggregates of digital information. PhD thesis, Massachusetts Institute of Technology
Watson JD (1968) The double helix: a personal account of the discovery of the structure of DNA. Atheneum
Iwasa J (2010) Animating the model figure. Trends Cell Biol 20(12):699–704
Myers N (2008) Molecular embodiments and the body-work of modeling in protein crystallography. Soc Stud Sci 38(2):163–199
Höst GE, Larsson C, Olson A, Tibell LA (2013) Student learning about biomolecular self-assembly using two different external representations. CBE—Life Sciences Education 12(3):471–482
Dori YJ, Miri AB (2001) Virtual and physical molecular modeling: fostering model perception and spatial understanding. J Educ Technol Soc 4(1):61–74
Valdes C et al (2014) Exploring the design space of gestural interaction with active tokens through user-defined gestures. In: Proc. of CHI’14, pp 4107–4116
Grote C et al (2015) Eugenie: multi-touch and tangible interaction for bio-design. In: Proc. of TEI’15
Bartindale T, Harrison C (2009) Stacks on the surface: resolving physical order using fiducial markers with structured transparency. In: Proc. of ITS’09, pp 57–60
Agarawala A, Balakrishnan R (2006) Keepin’it real: pushing the desktop metaphor with physics, piles and the pen. In: Proc. of CHI’06, pp 1283–1292
Hornecker E et al (2008) TEI goes on: tangible and embedded interaction. IEEE Pervasive Computing 7(2):91–96
Mazalek A, Van den Hoven E (2009) Framing tangible interaction frameworks. AI EDAM 23(3):225–235
Ishii H, Ullmer B (1997) Tangible bits: towards seamless interfaces between people, bits and atoms. In: Proc. of CHI’97, pp 234–241
Ullmer B et al (2019) Tangible and embodied interaction. ACM Books
Drucker SM et al (2013) TouchViz: a case study comparing two interfaces for data analytics on tablets. In: Proc. of CHI’13, pp 2301–2310
Vogel D, Baudisch P (2007) Shift: a technique for operating pen-based interfaces using touch. In: Proc. of CHI’07, pp 657–666
Voida S et al (2009) Getting practical with interactive tabletop displays: designing for dense data, fat fingers, diverse interactions, and face-to-face collaboration. In: Proc. of ITS’09, pp 109–116
Block F et al (2012) The DeepTree exhibit: visualizing the tree of life to facilitate informal learning. IEEE Transactions on Visualization and Computer Graphics 18(12):2789–2798
Isenberg P et al (2013) Data visualization on interactive surfaces: a research agenda. IEEE Computer Graphics and Applications 33(2):16–24
Zigelbaum J et al (2007) The tangible video editor: collaborative video editing with active tokens. In: Proc. of TEI’07, pp 43–46
Mazalek A et al (2014) Tangible meets gestural: gesture-based interactions with active tokens. In: Gesture-Based Interaction Design: Communication and Cognition, CHI 2014 Workshop
Ullmer B, Ishii H (2000) Emerging frameworks for tangible user interfaces. IBM Syst J 39(3):915–931
Ishii H (2008) Tangible bits: beyond pixels. In: Proc. of TEI’08, pp xv–xxv
Weiser M (1991) The computer for the 21st century. Sci Am 265(3):94–104
Rekimoto J, Saitoh M (1999) Augmented surfaces: a spatially continuous work space for hybrid computing environments. In: Proc. of CHI’99, pp 378–385
Chin L et al (2011) Making sense of cancer genomic data. Genes & Development 25(6):534–555
Chen R, Mias GI, Li-Pook-Than J, Jiang L, Lam HY, Chen R, Miriami E, Karczewski KJ, Hariharan M, Dewey FE, Cheng Y, Clark MJ, Im H, Habegger L, Balasubramanian S, O’Huallachain M, Dudley JT, Hillenmeyer S, Haraksingh R, Sharon D, Euskirchen G, Lacroute P, Bettinger K, Boyle AP, Kasowski M, Grubert F, Seki S, Garcia M, Whirl-Carrillo M, Gallardo M, Blasco MA, Greenberg PL, Snyder P, Klein TE, Altman RB, Butte AJ, Ashley EA, Gerstein M, Nadeau KC, Tang H, Snyder M (2012) Personal omics profiling reveals dynamic molecular and medical phenotypes. Cell 148(6):1293–1307
Shaer O et al (2017) Communicating personal genomic information to non-experts: a new frontier for human-computer interaction. Found Trends Hum-Comput Interact 11:1–62
Benioff, M.R., et al., Computational science: ensuring America’s competitiveness. President’s Information Technology Advisory Committee (PITAC), 2005
Grand challenges to computational science. 1988
Mardis ER (2008) The impact of next-generation sequencing technology on genetics. Trends Genet 24(3):133–141
Ouh-young M et al (1988) Using a manipulator for force display in molecular docking. In: Proc. of Robotics and Automation, vol 3, pp 1824–1829
Gillet A et al (2005) Tangible augmented interfaces for structural molecular biology. IEEE Computer Graphics and Applications 25(2):13–17
Schkolne S, Ishii H, Schroder P (2004) Immersive design of DNA molecules with a tangible interface, in Proceedings of the conference on Visualization ‘04. IEEE Computer Society, Washington, pp 227–234
Arnstein L et al (2002) Labscape: a smart environment for the cell biology laboratory. IEEE Pervasive Computing 1(3):13–21
Yeh R et al (2006) ButterflyNet: a mobile capture and access system for field biology research. In: Proceedings of the SIGCHI conference on Human Factors in computing systems. ACM, New York, pp 571–580
Mackay WE et al (2002) The missing link: augmenting biology laboratory notebooks. In: Proceedings of the 15th annual ACM symposium on User interface software and technology. ACM, New York, pp 41–50
Tabard A et al (2011) The eLabBench: an interactive tabletop system for the biology laboratory. In: Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces. ACM, New York, pp 202–211
Wigdor D et al (2009) WeSpace: the design development and deployment of a walk-up and share multi-surface visual collaboration system. In: Proceedings of the 27th international conference on Human factors in computing systems. ACM, New York, pp 1237–1246
Morris MR et al (2006) TeamTag: exploring centralized versus replicated controls for co-located tabletop groupware. In: Proceedings of the SIGCHI conference on Human Factors in computing systems. ACM, New York, pp 1273–1282
Isenberg P et al (2010) An exploratory study of co-located collaborative visual analytics around a tabletop display. In: Symposium on Visual Analytics Science and Technology (VAST), pp 179–186
Kuznetsov S et al (2012) At the seams: DIYbio and opportunities for HCI. In: Proceedings of the Designing Interactive Systems Conference. ACM, New York, pp 258–267
Scott SD, Grant KD, Mandryk RL (2003) System guidelines for co-located, collaborative work on a tabletop display. In: Proc. of ECSCW’03. Kluwer Academic Publishers, Norwell, pp 159–178
Morris MR et al (2004) Beyond “social protocols”: multi-user coordination policies for co-located groupware. In: Proc. of CSCW’04, pp 262–265
Shaer O et al (2012) The design, development, and deployment of a tabletop interface for collaborative exploration of genomic data. Int J Hum Comput Stud 70(10):746–764
Kirsh, D. Methodologies for evaluating collaboration behavior in co-located environments. in CSCW 2004 Workshop: Methodologies for Evaluating Collaboration in Co-Located Environments. 2004
Inkpen, K. “Just because:” the challenges of evaluating face-to-face collaboration. in CSCW 2004 Workshop: Methodologies for Evaluating Collaboration in Co-Located Environments 2004
Westendorf L et al (2017) Understanding collaborative decision making around a large-scale interactive tabletop. Proceedings of the ACM on Human-Computer Interaction 1(CSCW):1–21
Meulen HVD et al (2016) Towards understanding collaboration around interactive surfaces: exploring joint visual attention. In: In Proc. of UIST’16. ACM, Tokyo, pp 219–220
Fjeld M et al (2007) Tangible user interface for chemistry education: comparative evaluation and re-design. In: Proc. of CHI’07, pp 805–808
Horn MS, Tobiasz M, Shen C (2009) Visualizing biodiversity with voronoi treemaps. In: Proc. of ISVD’09, pp 265–270
Schneider B et al (2012) Phylo-Genie: engaging students in collaborative ‘tree-thinking’ through tabletop techniques. In: Proc. of CHI’12, pp 3071–3080
Loparev A et al (2017) BacPack: exploring the role of tangibles in a museum exhibit for bio-design. In: Proc. of TEI’17. ACM, Yokohama, pp 111–120
Tetteroo D, Soute I, Markopoulos P (2013) Five key challenges in end-user development for tangible and embodied interaction. In: Proc. of ICMI’13. ACM, Sydney, pp 247–254
Turchi T, Malizia A (2015) Pervasive displays in the wild: employing end user programming in adaption and re-purposing. In: Díaz P et al (eds) End-User Development: 5th International Symposium, IS-EUD 2015, Madrid, Spain. Springer, Cham, pp 223–229
Turchi T, Malizia A, Dix A (2017) TAPAS: a tangible end-user development tool supporting the repurposing of pervasive displays. J Vis Lang Comput 39:66–77
Chi EH et al (1997) A spreadsheet approach to information visualization. In: Proc. of Information Visualization’97, pp 17–24
Ishii: Tangible bits: towards seamless interface – Google Scholar (2009). Available from: http://scholar.google.com/scholar?cites=16442430376593508398&hl=en
Konkel MK et al (2016) Discovery of a new repeat family in the Callithrix jacchus genome. Genome Res 26(5):649–659
Carbone L, Harris RA, Mootnick AR, Milosavljevic A, Martin DI, Rocchi M, Capozzi O, Archidiacono N, Konkel MK, Walker JA, Batzer MA, de Jong PJ (2012) Centromere remodeling in Hoolock leuconedys (Hylobatidae) by a new transposable element unique to the gibbons. Genome Biol Evol 4(7):648–658
Kirsh D, Maglio P (1994) On distinguishing epistemic from pragmatic action. Cogn Sci 18(4):513–549
Scott ME, Perry S, Staskawicz L et al (2011) GitHub: sifteo/thundercracker. Available from: https://github.com/sifteo/thundercracker
Merrill D, Kalanithi J, Maes P (2007) Siftables: towards sensor network user interfaces. In: Proc. of TEI’07. ACM, New York, pp 75–78
Merrill DJ (2009) Interaction with embodied media. PhD thesis, MIT
Merrill D, Sun E, Kalanithi J (2012) Sifteo cubes. In: CHI’12 Extended Abstracts, pp 1015–1018
Pillias C, Robert-Bouchard RE, Levieux G (2014) Designing tangible video games: lessons learned from the Sifteo cubes. In: Proc. of CHI’14, pp 3163–3166
Ullmer B et al (2010) Cartouche: conventions for tangibles bridging diverse interactive systems. In: Proc. of TEI’10, pp 93–100
Ansoff HI (1957) Strategies for diversification. Harv Bus Rev 35(5):113–124
Giardine B et al (2005) Galaxy: a platform for interactive large-scale genome analysis. Genome Res 15(10):1451–1455
Goecks J, Nekrutenko A, Taylor J (2010) Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences. Genome Biol 11:1–13
Grüning BA, Rasche E, Rebolledo-Jaramillo B, Eberhard C, Houwaart T, Chilton J, Coraor N, Backofen R, Taylor J, Nekrutenko A (2017) Jupyter and galaxy: easing entry barriers into complex data analyses for biomedical researchers. PLoS Comput Biol 13(5):e1005425
Geurts L et al (2014) Playfully learning visual perspective taking skills with sifteo cubes. In: Proceedings of the First ACM SIGCHI Annual Symposium on Computer-human Interaction in Play. ACM, New York, pp 107–113
Arif AS et al (2016) Sparse tangibles: collaborative exploration of gene networks using active tangibles and interactive tabletops. In: Proc. of TEI’16. ACM, New York, pp 287–295
Manshaei R, DeLong S, Mayat U, Patal D, Kyan M, Mazalek A (2019) Tangible BioNets: multi-surface and tangible interactions for exploring structural features of biological networks. In: Proc. of EICS’19
East B et al (2016) Actibles: open source active tangibles. In: Proc. of the ACM International Conference on Interactive Surfaces and Spaces. ACM, Niagara Falls, pp 469–472
Manshaei R, Mayat U, Tarun A, DeLong S, Chiang D, Digregorio J, Khayyer S, Gupta A, Kyan M, Mazalek A (2019) Tangible tensors: an interactive system for grasping trends in biological systems modeling. In: Proc. of C&C 2019
DeLong S, Arif AS, Mazalek A (2019) Design and evaluation of graphical feedback on tangible interactions in a low-resolution edge display. In: Proc. of Pervasive Displays 2019
Ben-Joseph E et al (2001) Urban simulation and the luminous planning table: bridging the gap between the digital and the tangible. J Plan Educ Res 21(2):196–203
Underkoffler J, Ishii H (1999) Urp: a luminous-tangible workbench for urban planning and design. In: Proc. of CHI’99, pp 386–393
Brudy F et al (2019) Cross-device taxonomy: survey, opportunities and challenges of interactions spanning across multiple devices. In: Proc. of CHI’19. ACM, Glasgow, pp 1–28
Le Goc M et al (2016) Zooids: building blocks for swarm user interfaces. In: Proc. of UIST’16
Harley D et al (2017) Tangible VR: diegetic tangible objects for virtual reality narratives. In: Proc. of DIS’17
Kaltenbrunner M, Bovermann T, Bencina R, Costanza E (2005) TUIO: a protocol for table-top tangible user interfaces. In: Proc. of GW 2005
Software implementing TUIO. http://www.tuio.org/?software
Kaltenbrunner M (2009) reacTIVision and TUIO: a tangible tabletop toolkit. In: Proc. of ITS’09, pp 9–16
Aish R, Noakes P (1984) Architecture without numbers – CAAD based on a 3D modelling system. Comput Aided Des 16(6):321–328
Aish R (1979) 3D input for CAAD systems. Comput Aided Des 11:66–70
Acknowledgments
We thank David Merrill, Liam Staskawicz, Consuelo Valdes, Casey Grote, André Wiggins, and Michael Lynn for supporting this work.
Funding
We are appreciative of NSF grants CNS-1828611, CNS-1126739, IIS-1149530, and IIS-1320350; the NSERC Discovery Grant and Canada Research Chairs programs, and the Canada Foundation for Innovation (CFI) and Ontario Ministry of Research and Innovation (MRI) Innovation Fund for partial support of this work.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Konkel, M.K., Ullmer, B., Shaer, O. et al. Toward tangibles and display-rich interfaces for co-located and distributed genomics collaborations. Pers Ubiquit Comput 26, 767–779 (2022). https://doi.org/10.1007/s00779-020-01376-5