
1 Introduction

The growing variety of data sources in the Big Data Age not only increases the challenges and opportunities for competitive technical intelligence (CTI), but also drives a revolution in business management and decision making. Of particular significance is the "data-driven" approach, which uses Information Technology (IT) to support rigorous, constant experimentation that guides decisions and innovations (Bughin et al. 2010) and has been characterized as one of the most competitive features of various R&D planning and business models (McAfee et al. 2012).

Technology roadmapping (TRM) approaches, described as a representative, prominent, and flexible instrument for long-range technological forecasting and strategic planning, are well suited to actively incorporating business data into planning procedures (Phaal et al. 2004). Quantitative data, especially Science, Technology, and Innovation (ST&I) data, have already been widely used in TRM models; however, owing to the diverse emphases of, and possible time gaps between, different ST&I data sources, traditional TRM models usually focus on a single ST&I data source, e.g., publications or patents. The booming new data sources, e.g., Twitter, news, customer comments, R&D project proposals, and product reports, also fit the scope of ST&I research well, and the rapid engagement of multiple ST&I data sources with different formats and emphases introduces new challenges for current studies. Meanwhile, the way current TRM studies transfer vague human thoughts into defined numerical values (Lee et al. 2011) remains unsatisfactory, and the combination of qualitative and quantitative methodologies is not yet as seamless as we would like. At this stage, the emerging concerns are approaches that address real-world problems, explore value-added information from complex data sets, fuse analytic results with expert knowledge effectively and reasonably, and present the results to decision makers in a visual and understandable manner.

Aiming to answer two questions—(1) how to incorporate multidimensional information from diverse ST&I data sources and (2) how to construct an intelligent TRM method—this paper develops a multiple ST&I data incorporation model and a fuzzy set-based semi-automatic TRM generation model. Building on an ST&I topic identification approach (Zhang et al. 2014b) and a traditional TRM composing method (Zhang et al. 2013), this paper designs a multilayer TRM method to arrange topics and related concepts (e.g., idea, technique, and product) and explore their potential relationships. Building an entirely intelligent TRM composing method with novel IT techniques, e.g., training machines to discover potential linguistic relationships between technological components, remains a challenge; instead, this paper introduces fuzzy sets (Zadeh 1965) to transfer vague expert knowledge into defined numeric values and to help locate technological components automatically for TRM composition. The empirical study uses the United States (US) National Science Foundation (NSF) Award data (innovative research ideas and proposals) and the Derwent Innovation Index (DII) patent data (technical products), which demonstrates the efficiency and feasibility of our methods, provides vantage points at the two ends of the R&D process, and assists decision makers in describing macro-trends in computer science.

The main contributions of this paper are as follows: (1) We provide an approach to incorporate multiple ST&I data sources into TRM; (2) we construct a systematic approach that applies fuzzy sets to the traditional TRM method and helps compose TRM in a semi-automatic fashion; and (3) the process by which we think through and solve these problems emphasizes the combination of qualitative and quantitative methodologies and is adaptive and transferable to related ST&I research.

This paper is organized as follows: The Related Works section reviews previous studies on TRM. The Methodology section presents the detailed research method for the semi-automatic TRM composing approach, involving a multiple ST&I data incorporation model and a fuzzy set-based semi-automatic TRM composing model. The Empirical Study section follows, using US NSF Awards and DII patent data as the case. Finally, we conclude our current research and outline future work.

2 Related Works

This section reviews related literature on TRM and related ST&I data incorporation studies and summarizes the limitations of these previous works.

Building on the significant work of Phaal et al. (2004), who summarized previous TRM methods and constructed an effective qualitative composing model, TRM research has been extended from purely qualitative study to a combination of qualitative and quantitative methodologies. Representatively, Huang et al. (2014) introduced a bibliometric-based four-dimensional TRM for the science and technology planning of China's solar cell industry; Zhang et al. (2014c) combined the TRM model with Triple Helix innovation and Semantic TRIZ concepts and presented an empirical study on China's dye-sensitized solar cell industry; Lee et al. (2015) proposed a scenario-based TRM that involved a plan assessment map and an activity assessment map for organizational plans and engaged a Bayesian network to define topology and causal relations. Geum et al. (2015) added association rule mining to TRM to identify relationships between items on different layers of the roadmap.

Current TRM models have been combined with various concepts and methods and applied to real ST&I assessment, forecasting, and planning. Based on previous studies, the benefits of current TRM research can be summarized as follows: (1) TRM is a visual model for presenting content, which enables easy understanding of macro-, meso-, and micro-level problems (Zhang et al. 2013); (2) the hierarchical structure of TRM helps indicate potential relationships between items on different layers (Geum et al. 2015); (3) TRM is able to take multidimensional impact factors into consideration, e.g., time, science policy, market pull, and technology push (Lee and Park 2005; Robinson and Propp 2008; Huang et al. 2014); and (4) previous TRM studies adapt well to general publication and patent data sources and match requirements from different industrial domains.

Multiple ST&I data incorporation for TRM has long held great interest for ST&I analysts. Data incorporation, also known as data integration or data fusion, is an important branch of data mining and relies on a strong computer science and IT background. Current research combines data residing in different sources and provides users with a unified view of these data (Lenzerini 2002). However, these data incorporation studies focus on data structure problems and ignore the underlying insights and interactions of different data sources, which are, in contrast, the key factors for ST&I analyses. On the other hand, although the idea of multiple ST&I data incorporation emerged long ago, related research remains rare. Robinson and Propp (2008) designed a multipath roadmapping framework in which four layers—research lines, experimental integration, integrated platform/product, and application area—were used to emphasize the different phases of technology development. Zhang et al. (2013) combined core terms derived from Web of Science (WoS) publications and International Patent Classification (IPC) codes retrieved from the United States Patent and Trademark Office (USPTO) in one TRM model to highlight technological research and applications.

Another difficult task in current TRM studies is transferring vague human thoughts into defined numerical values (Lee et al. 2011), an issue that heavily influences the efficiency and accuracy of the TRM auto-generation process. In this context, fuzzy sets are a helpful instrument for minimizing expert effort and time consumed while maximizing the usage of expert knowledge. Lu et al. (2011) constructed a novel group decision-making method with fuzzy hierarchical criteria for theme-based comprehensive evaluation in new product development, which calculated ranking results by fusing assessment data from humans and machines. Lee et al. (2011) introduced a fuzzy analytic hierarchy process into the TRM model, an innovative attempt to combine fuzzy concepts and TRM, although the empirical study applied only to a small data set (five sub-technologies of hydrogen energy) with expert ranking.

3 Methodology

To provide solutions for constructing a multilayer TRM that reveals multidimensional information from ST&I data sources with differing emphases and explores insights for technical intelligence, this paper constructs a TRM method for multiple ST&I data incorporation with the following steps:

  1. Inputs: Grouped Topics—we focus on raw ST&I data, retrieve meaningful phrases and terms via a Term Clumping process (Zhang et al. 2014a), and then apply a K-means-based clustering approach (Zhang et al. 2014b) to identify hot research topics as key technological components;

  2. Step 1: Multiple ST&I data incorporation model—we construct a design process for multiple ST&I data incorporation that introduces the Technology Readiness Level to analyze the emphases of ST&I data and proposes a questionnaire for expert consultation;

  3. Step 2: Fuzzy set-based semi-automatic TRM generation model—to combine qualitative and quantitative methodologies, we engage experts to evaluate topics by removing meaningless topics, consolidating duplicate topics, and highlighting significant ones; each topic is then evaluated and grouped into specified fuzzy sets, after which the TRM is generated automatically;

  4. Outputs: TRM.

Note that both steps consider the combination of qualitative and quantitative methodologies. Step 1 considers the diverse emphases of the applied ST&I data, and the data analyst takes an active role in the design process, while domain experts dominate Step 2 by evaluating the selected topic candidates. The framework of the TRM method for multiple ST&I data incorporation is given in Fig. 12.1.

Fig. 12.1 Framework of technology roadmapping method for multiple ST&I data incorporation

3.1 Inputs: Grouped Topics

How to retrieve meaningful phrases and terms from raw ST&I data and how to identify valuable topics via clustering analysis are certainly interesting research questions, but they are not the foci of this paper. Briefly, we apply a revised version of the Term Clumping process (Zhang et al. 2014a) to retrieve phrases and terms from ST&I textual data through term removal, consolidation, and clustering; a K-means-based clustering approach (Zhang et al. 2014b) is then used to group related linguistic elements, e.g., phrases, terms, or records, into meaningful topics. We define these grouped topics as technological components, which reflect the scientific or technical information of ST&I data.
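To convey the flavour of this input stage, the sketch below groups pre-cleaned textual records into topics using TF-IDF features and K-means. The function name, parameter choices, and use of scikit-learn are illustrative assumptions; the actual Term Clumping and clustering implementations (Zhang et al. 2014a, b) differ in detail.

```python
# Minimal sketch of the "grouped topics" input stage, assuming pre-cleaned records.
# The real Term Clumping and K-means variants (Zhang et al. 2014a, b) differ in detail.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans


def group_topics(records, n_topics=50, top_n_terms=5):
    """Cluster textual ST&I records and label each cluster with its top terms."""
    vectorizer = TfidfVectorizer(ngram_range=(1, 3), stop_words="english",
                                 max_features=5000)
    tfidf = vectorizer.fit_transform(records)
    km = KMeans(n_clusters=n_topics, n_init=10, random_state=42).fit(tfidf)

    terms = vectorizer.get_feature_names_out()
    topics = []
    for c in range(n_topics):
        # Rank terms by their weight in the cluster centroid.
        centroid = km.cluster_centers_[c]
        top_terms = [terms[i] for i in centroid.argsort()[::-1][:top_n_terms]]
        member_ids = [i for i, label in enumerate(km.labels_) if label == c]
        topics.append({"label": ", ".join(top_terms), "records": member_ids})
    return topics
```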

3.2 Step 1: Multiple ST&I Data Incorporation

It is generally understood that the emphases of different ST&I data sources are diverse. As shown in Fig. 12.2, we summarize the emphases of selected mainstream ST&I data sources as follows:

Fig. 12.2 Emphasis of selected mainstream ST&I data sources

  1. Academic proposals, e.g., NSF proposals, are usually funded by national governments to support academic institutions in basic research; their content focuses on new ideas, concepts, and as-yet-unrealized innovative actions. Discoveries derived from academic proposals offer an express route to innovation and are promising for both academia and industry.

  2. Publications contribute to both basic and applied research; in detail, conference papers, e.g., IEEE papers, mostly present draft research frameworks, experimental results, or maturing ideas, while Web of Science publications (including SCI and SSCI data) and EI Compendex-indexed papers emphasize fundamental research and applied research, respectively.

  3. Patents, e.g., DII patents, describe actual applications or products, as do detailed technical reports or guidebooks of business services. In particular, patent terms tend to be vague and oriented toward legal effect (creating patent barriers), whereas academic terms are clearer and more direct.

  4. Business news, e.g., Factiva data, highlights social significance, and common technical terms and modifying adjectives heavily influence its descriptions.

  5. Social media data, e.g., Twitter, are similar in content to news but present information in a more informal way. Sentiment analysis is sometimes applied to understand the real meaning of these free texts.

  6. National R&D program proposals, compared with the five ST&I data sources above, are complex and may cover all aspects of ST&I emphases. It is therefore not advisable to incorporate national R&D program proposals with other ST&I data; a feasible alternative is to use them as a comparative case for other ST&I data.

The time gap problem is a persistent barrier that prevents incorporating multiple ST&I data sources effectively. It takes time to transfer an innovative idea into feasible plans and valid experiments, and related patents and mature products may appear years or even decades after the idea first emerges. The gap can be shorter, but also less predictable, for emerging technologies. In this context, expert knowledge handles this situation more readily than quantitative computation. Referring to the technology readiness levels (TRL) (Mankins 1995), we design a TRL scale for ST&I data sources, as shown in Table 12.1.

Table 12.1 Technology readiness level scale for ST&I data source

Although the definition of each TRL is still a fuzzy concept and its actual application depends on the specific technological domains of the ST&I data, Table 12.1 proposes a way to transfer the emphases of ST&I data into an operable format for expert consultation. Based on Table 12.1, we recommend selecting ST&I data on neighboring TRLs, since the possible time gap is then small enough to be ignored. However, if the time gap can be fully handled and considered, a comparison between ST&I data sources at TRL 1 and TRL 5 also makes good sense for viewing an entire technological evolutionary pathway. Furthermore, based on the revised TRL scale, we design a questionnaire for multiple ST&I data incorporation in Table 12.2.
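To make the "neighboring TRLs" recommendation concrete, the sketch below encodes a hypothetical TRL assignment for a few data sources and checks whether two sources can be incorporated directly. The numeric levels and the max_gap threshold are illustrative assumptions only, not the values of Table 12.1.

```python
# Hypothetical TRL assignment for ST&I data sources; the numbers below are
# illustrative assumptions only -- the authoritative scale is Table 12.1.
SOURCE_TRL = {
    "academic_proposal": 1,    # new ideas, basic research (assumption)
    "conference_paper": 2,     # draft frameworks, experiments (assumption)
    "journal_publication": 3,  # fundamental/applied research (assumption)
    "patent": 5,               # applicable techniques, products (assumption)
}


def incorporation_advice(source_a, source_b, max_gap=1):
    """Recommend direct incorporation only for sources on neighboring TRLs;
    larger gaps require the time-gap issue to be handled explicitly."""
    gap = abs(SOURCE_TRL[source_a] - SOURCE_TRL[source_b])
    if gap <= max_gap:
        return "incorporate directly: the time gap is likely small enough to ignore"
    return "handle the time gap explicitly, e.g., contrast the sources over time"


print(incorporation_advice("academic_proposal", "patent"))
```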

Table 12.2 The questionnaire for multiple ST&I data incorporation

We attempt to provide a logical workflow to organize experts in a workshop process. The questions in Table 12.2 aim to guide the discussions and elicit valuable information effectively. Note that what we list are only options for possible case studies; the questionnaire needs to be revised and refined to match actual requirements.

Based on the hierarchical landscape and the expression of components in our previous TRM models (Zhang et al. 2013; Zhou et al. 2014), we enrich the structure by (1) distinguishing the scope of ST&I data sources by the shape of components, (2) softening the definition of the Y axis to reserve an interface for the configuration of fuzzy sets, and (3) defining linkages between components with multiple factors, e.g., semantic similarity, time, and science policy. A sample TRM for multiple ST&I data incorporation is given in Fig. 12.3.

Fig. 12.3 Sample TRM for multiple ST&I data incorporation

In Fig. 12.3, time is marked on the X axis, while the Y axis is used for the multilayer demonstration. Shaped components indicate topics derived from different ST&I data sources; they are located among the multiple layers of the TRM, and grouped components share similar topics with possible linkages. Components can also be located on separate layers to highlight their origin ST&I data and the relationships between layers. Usually, experts help identify the macro-level linkages between components to track possible evolutionary pathways.

3.3 Step 2: Fuzzy Set-Based Semi-automatic TRM Generation

The balance between qualitative and quantitative methodologies in TRM composing models is an intriguing and complicated research topic at present. Expert knowledge has been heavily engaged in results refinement, component allocation, and the understanding of TRM for decision making (Zhang et al. 2015). At this stage, we aim to minimize the effort required from experts while maximizing the usage of expert knowledge within limited time and scope, and fuzzy sets are an effective tool for this purpose. As mentioned above, the emphases of ST&I data, the definition of TRL, the evaluation of the time gap, and the criterion for classifying the layers of TRM all depend on subjective expert knowledge. These vague elements match the central concept of fuzzy sets and open possibilities for their engagement.

As a general definition, we denote "all components" of the TRM as the universe \(X = \{x_{1}, x_{2}, \ldots, x_{i}, \ldots, x_{n-1}, x_{n}\}\) and "each phase/layer" as a fuzzy set \(A_{j}\) defined on \(X\), where \(j \in [1, m]\). The membership function \(A_{j}(x_{i})\) is the degree to which the component \(x_{i}\) belongs to the phase/layer \(A_{j}\) and is decided according to the research purposes and empirical data. The detailed steps are outlined below:

  • Considering the specific case, identify \(X\), \(A_{j}\), and \(A_{j}(x_{i})\);

  • For each component \(x_{i}\), experts classify it into one of the fuzzy sets \(A_{j}\) and mark a membership grade \(A_{j}(x_{i})\) for the selected fuzzy set;

  • Based on \(A_{j}(x_{i})\), calculate \(X(x_{i})\) for each component and set it as the Y value;

  • Generate the TRM automatically via macros (a minimal sketch of this procedure is given below).
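The sketch below illustrates this generation step. The original roadmap is drawn via spreadsheet macros; matplotlib stands in here purely for illustration, and the layer bands, the grade-to-Y mapping, and the component fields are assumptions rather than the paper's exact configuration.

```python
# Sketch of fuzzy set-based semi-automatic TRM generation. The layer bands and the
# mapping from membership grade to Y value are assumptions for illustration; the
# original work draws the roadmap via macros rather than matplotlib.
import matplotlib.pyplot as plt

LAYER_BAND = {"A1": (0.00, 0.25), "A2": (0.25, 0.75), "A3": (0.75, 1.00)}  # assumed


def y_value(layer, grade):
    """Place a component inside its layer band; a higher membership grade pulls it
    toward the band centre (the drift direction is an arbitrary choice here)."""
    low, high = LAYER_BAND[layer]
    centre, half_width = (low + high) / 2.0, (high - low) / 2.0
    return centre + (1.0 - grade) * half_width


def draw_roadmap(components):
    """components: iterable of dicts {"name", "year", "layer", "grade", "source"}."""
    markers = {"NSF": "o", "DII": "s"}  # component shape encodes the origin data source
    fig, ax = plt.subplots(figsize=(10, 4))
    for comp in components:
        y = y_value(comp["layer"], comp["grade"])
        ax.scatter(comp["year"], y, marker=markers.get(comp["source"], "x"))
        ax.annotate(comp["name"], (comp["year"], y), fontsize=7,
                    xytext=(3, 3), textcoords="offset points")
    ax.set_xlabel("Time")
    ax.set_yticks([0.125, 0.5, 0.875])
    ax.set_yticklabels(["A1", "A2", "A3"])
    fig.tight_layout()
    return fig
```

Once expert evaluations are loaded as such a component list, the drawing step runs without further manual placement, which is the semi-automatic aspect of the model.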

3.4 Outputs: TRM for Multiple ST&I Data Incorporation

After these two steps, we integrate multiple ST&I data sources, fuse the analytic results with expert knowledge, and generate the graphic TRM as our final output.

4 Empirical Study

Computer science is no longer considered an emerging technology as it was decades ago; it has been integrated with IT techniques and various engineering applications and has become a fundamental instrument for multidisciplinary research. This paper focuses on technology commercialization studies for computer science, and our purpose was to incorporate multiple ST&I data sources to track the evolutionary pathway of computer science technologies toward commercialization. Owing to limited time and data sources, this paper chose only the NSF Award data (granted proposals) and the DII patent data for the empirical study. Our consideration was that the two data sources concentrate on innovative ideas and mature technical products, respectively, and that the contrast in technology commercialization level would better indicate the importance of information fusion for multiple ST&I data and demonstrate the benefits of our TRM method. In addition, we built up our expert base with the help of twelve experts (Associate Professors, Lecturers, Researchers, and Ph.D. Candidates) from the Centre for Quantum Computation & Intelligent Systems, University of Technology Sydney, Australia, and the Knowledge Management and Data Analysis Laboratory, Beijing Institute of Technology, China.

4.1 Step 1: Incorporation of NSF Awards and DII Patents

The importance of NSF Awards and DII patents has been discussed separately in our previous studies (Zhang et al. 2014b, c). As shown in Table 12.2, academic proposals sit at the bottom of the TRL scale, while patents belong at the top; thus, the large distance on the TRL scale introduces not only challenges but also promising insights, and the comparison helps identify the technology commercialization trend in a specified time interval. In this case, we aimed to track the rapid technological changes occurring with the coming of the Big Data Age—how innovative ideas boomed and how applicable techniques evolved—so we set the time interval from 2009 to 2013 to highlight Big Data-related techniques and their changes. Moreover, we consulted domain experts and decided to apply an integrated multilayer TRM to emphasize the contrast between NSF Awards and DII patents, with the layers "basic research—TRL 1 and 2," "application research—TRL 3 and 4," and "products—TRL 5."

We grouped the topics of the NSF Awards and DII patents separately. We selected 12,915 granted proposals under the Division of Computing and Communication Foundations in the NSF Awards (Zhang et al. 2014b) and 177,974 DII patents with "computer science" in the Topic and Subject Category fields and "US" as the Basic Patent Country and Priority Country. The revised Term Clumping steps were applied for feature extraction, and the process is given in Table 12.3.

Table 12.3 Steps of term clumping processing

We then applied the K-means-based clustering model (Zhang et al. 2014b) to group topics, obtaining 54 topics from the NSF Awards and 44 from the DII patents.

4.2 Step 2: Fuzzy Set-Based Semi-automatic TRM Generation

Following the three layers defined for the technology commercialization study, we defined the family of fuzzy sets \(A = \{A_{1}, A_{2}, A_{3}\}\) and introduced the Gaussian distribution to define the membership functions. The three membership functions are provided below, and the distribution curves are shown in Fig. 12.4.

Fig. 12.4 Distribution curves of membership functions

$$\begin{aligned} A_{1}(x)&{:}\; X \sim N\!\left(0, \tfrac{1}{2\pi}\right), \quad x \in [0, 1] \\ A_{2}(x)&{:}\; X \sim N\!\left(\tfrac{1}{2}, \tfrac{1}{2\pi}\right), \quad x \in [0, 1] \\ A_{3}(x)&{:}\; X \sim N\!\left(1, \tfrac{1}{2\pi}\right), \quad x \in [0, 1] \end{aligned}$$

As shown in Fig. 12.4, we divided the universe into three intervals, where [0, 0.25] was mapped to "basic research," while [0.25, 0.75] and [0.75, 1] corresponded to "application research" and "products," respectively. The experts first classified each topic \(x_{i}\) into one fuzzy set \(A_{j}\) and marked its membership grade \(A_{j}(x_{i})\) for that fuzzy set. We then calculated \(X(x_{i})\) and assigned the topic \(x_{i}\) to one of the three fuzzy sets. As a sample, we list some of the marked topics in Table 12.4, and the generated TRM is given in Fig. 12.5.

Table 12.4 Big data-related topics with membership grades of three fuzzy sets
Fig. 12.5 Technology roadmapping for computer science from 2009 to 2013 (based on the incorporation of NSF Awards and DII patents)

Although the experts assigned each topic to only one fuzzy set, we were able to calculate its membership grade for every fuzzy set. As an example, the topic "Video Frames" was assigned to \(A_{2}(x)\) with the membership grade 0.81; we obtained \(X(x)\) as 0.76, so according to the membership functions the membership grade vector of "Video Frames" could be calculated as (0.18, 0.81, 0.83). At this stage, we re-assigned the 98 topics according to the following rules: (1) classify each topic to the fuzzy set with the largest membership grade as the First Preference; if equal membership grades occurred, we preferred the fuzzy set on the lower TRL scale; and (2) if the second largest membership grade was not less than 0.7, set the related fuzzy set as the Second Preference. On this basis, we found 51 topics in \(A_{2}(x)\) (42 NSF topics) and 47 topics in \(A_{3}(x)\) (35 DII topics) as First Preference; as Second Preference, there were three topics in \(A_{1}(x)\), all belonging to the NSF Awards, 13 topics in \(A_{2}(x)\) (8 NSF topics), and 20 topics in \(A_{3}(x)\) (13 NSF topics).
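This re-assignment can be reproduced mechanically once the membership functions are fixed. The sketch below assumes the Gaussian membership functions peak at 1, i.e., \(A_{j}(x) = \exp(-\pi (x - \mu_{j})^{2})\), consistent with the \(N(\mu, 1/2\pi)\) densities given above, and encodes the First/Second Preference rules; the function names and the tie-breaking implementation are illustrative.

```python
# Sketch of the topic re-assignment rules, assuming the membership functions are the
# Gaussian densities given above, which peak at 1: A_j(x) = exp(-pi * (x - mean_j)^2).
import math

MEANS = {"A1": 0.0, "A2": 0.5, "A3": 1.0}  # centres of the three membership functions


def membership(x, mean):
    return math.exp(-math.pi * (x - mean) ** 2)


def grade_vector(x):
    """Membership grades of a topic located at X(x) for all three fuzzy sets."""
    return {name: round(membership(x, mean), 2) for name, mean in MEANS.items()}


def preferences(x, second_threshold=0.7):
    """Rule (1): First Preference is the fuzzy set with the largest grade, ties broken
    toward the lower TRL (A1 < A2 < A3). Rule (2): if the second-largest grade is at
    least 0.7, that fuzzy set becomes the Second Preference."""
    grades = grade_vector(x)
    order = ["A1", "A2", "A3"]  # lower TRL first, used for tie-breaking
    ranked = sorted(order, key=lambda name: (-grades[name], order.index(name)))
    first = ranked[0]
    second = ranked[1] if grades[ranked[1]] >= second_threshold else None
    return first, second, grades


# "Video Frames" with X(x) = 0.76 yields grades close to the (0.18, 0.81, 0.83)
# reported in the text, giving a First Preference of A3 and a Second Preference of A2.
print(preferences(0.76))
```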

4.3 Findings for the Commercialization of the Computer Science-Related Technologies

In our previous studies, single ST&I data-based TRM played an active role in technical intelligence studies and held more benefits for exploring the inner features of related technologies and identifying technology development chains. However, the incorporation of multiple ST&I data sources makes it possible to stand at a higher macro level to understand technology evolutionary pathways and discover gaps in technology development and transfer. We attempted to interpret Fig. 12.5 for R&D planning and technology management concerns, and our findings are given below:

  • The “mobile device” and related techniques were, are, and will remain a hot commercial target in the near future.

The “mobile device” and related techniques (marked by the blue solid box) keep being identified as hot topics from 2009 to 2013 in the DII patents, which means keen competition is occurring or will occur in this field, and inventors (including commercial firms) are seeking intellectual property protection through patents. At the same time, not only the mature products reflected in the DII patents but also some ongoing research in the NSF Awards (marked by the blue dashed box) can be identified in Fig. 12.5, e.g., “mobile video processing” and “wireless smart camera networks.” Therefore, it is reasonable to conjecture that such innovations and advanced techniques will be engaged and transferred as strong technical support for follow-up developments.

  • Big Data is not a creation from nothing but a result of technology evolution and fusion, and all its related techniques can be traced back to their origins.

Big Data has been an unavoidable topic in recent years. Social media, e.g., Twitter and Facebook, is more popular than in any other period in history, and the boom of various new techniques, e.g., MapReduce and Hadoop, also illustrates revolutionary changes. In this situation, the view that Big Data-related techniques are entirely new creations has its supporters. However, as shown in Fig. 12.5, Big Data can instead be considered the result of technology evolution and fusion, and all related techniques can be traced back to their origins. Extending the discussion of Zhang et al. (2014b), social media-led online social networks and web data (marked by the orange dotted box) constitute part of the foundation of Big Data, and the coming Big Data Age also rapidly increases concerns about information security (marked by the orange dashed box). On the other hand, efforts to improve existing algorithms (marked by the orange solid box) have never stopped, and these compose the mainstream techniques of the Big Data Age.

In addition, if we narrow the scope to technology commercialization, current Big Data-related research is still concentrated in the NSF Awards and stands at the fundamental stages, which include constructing concepts (e.g., Trust Worth Cyberspace, Real Time) and algorithms (e.g., Bayesian Network Computing, Large Asynchronous Multi-Channel Audio Corpora, Large Scale Hydrodynamic Brownian Simulations) and collecting data for experimental applications, while Big Data-related business models and real-world applications remain crude; there are even no directly related topics in the DII patents. Thus, we should expect that it will take time to transfer a new technique into commercial practice, and such attempts will be an obvious trend in the near future.

  • The process of technology commercialization is much faster than it was several years ago.

It is common knowledge that NSF interests include various fundamental research topics holding potential for further innovation, while DII topics concentrate only on applicable techniques with commercial benefits, e.g., software or hardware techniques. However, considering the three fuzzy sets for the technology commercialization study, only a few topics belong to the set "basic research," and most are at the intermediate level between basic research and products. A possible explanation is that the current process of technology commercialization is much faster than it was several years ago, and new techniques can be used to solve real-world problems in a short time; in other words, the experimental stage is merging into the commercialization process. In addition, more and more innovations originate from real-world needs, which might be another strong driving force.

5 Discussion and Conclusions

Highlighting real-world needs and the engagement of multiple ST&I data sources with differing emphases, this paper proposes an effective method to (1) incorporate multiple ST&I data sources to explore value-added information for R&D planning and technology innovation management and (2) introduce the fuzzy set concept to fuse analytic results and expert knowledge smoothly and then help generate the TRM in a semi-automatic mode. The thinking that combines qualitative and quantitative methodologies runs through the whole paper, and these attempts show great potential for related expert systems and decision-making processes.

We anticipate further study in the following directions: (1) introducing novel IT techniques, e.g., machine learning, to evolve the semi-automatic model into a fully automatic composing model, and (2) applying the multiple ST&I data incorporation approach to real-world applications. In addition, we will also consider the influences of different empirical domains, e.g., emerging technologies, social science, and mixed multidisciplinary data, and address these concerns with more experiments.