
1 Introduction

The software industry has witnessed a growing trend of software products being developed by small teams with limited resources and little operating history. Despite this global movement of high-tech entrepreneurship, the majority of software startups fail within two years of their creation, primarily due to self-destruction rather than competition [1]. The number is likely much higher when counting startup teams that have not reached the launch milestone. It is known that there is no common recipe for entrepreneurs to be successful, and it is difficult to frame success and failure in startups [2], as each startup follows a unique evolution path depending on an abundance of contextual factors. Lean Startup, a common methodology among entrepreneurs, emphasizes validating business ideas by building minimum viable products (MVPs). It is also common that a pivot occurs after a series of MVPs are created [3, 4]. Such a startup journey is also an artefact-creating process, given that major milestones for startups (namely pitching events, the first paid customer and fund-raising) are tied to certain artefacts. Entrepreneurship research provides a grounded foundation that a startup is an emergent sequence of events, in which an event is both path-dependent on prior processes and contingent on contemporaneous processes [1, 5, 6, 7].

While it is useful for an entrepreneur to view entrepreneurial development from an MVP-creating process perspective, it is more important for them to know what they can learn from their MVPs. Ries describes the Build-Measure-Learn loop in his method [8]. The build stage is based on a hypothesis formulated by the entrepreneur; to test the hypothesis, an experiment has to be configured, and learning is intended during the testing of the hypothesis [9]. The loop can therefore also be interpreted as a traditional scientific hypothesis-metric-experiment loop: the cycle starts with a hypothesis and ends with a prototype that tests it. While exercising the loop, the earlier a startup realizes a hypothesis is wrong, the quicker the hypothesis should be updated and retested [9]. However, the cycle does not directly indicate what software entrepreneurs actually learn from the experience embedded in their previous MVPs. Software startup teams are often excessively focused on developing a better software solution and delivering a prototype to their customers; running many experiments to meet the development timeline, they often neglect the learning involved in software startups [10]. The objective of this study is to understand entrepreneurial learning from an MVP-creation process. We assume that an entrepreneur has predetermined business ideas, which are formulated as hypotheses and validated by building MVPs. Therefore, adopting the MVP as the unit of analysis, our research questions are RQ1: Do entrepreneurs learn from formulated hypotheses about their business and product? and RQ2: Are there corresponding MVPs for each formulated hypothesis? The study is organized as follows: Sect. 2 presents background on startup development and entrepreneurial artefacts. Section 3 describes our study design, case description, data collection and data analysis. Section 4 presents the entrepreneurial journeys of two software startups: Startuppuccino and MUML AS. Finally, Sect. 5 presents the discussion and concludes the paper.

2 Background and Related Work

To explore our research questions, we articulate two theoretical fields: startup development and entrepreneurial artefacts, as illustrated in Fig. 1. From a software engineering perspective, a startup conducting experiments contributes knowledge about software development processes, techniques and their outcomes. The procedure for carrying out experimentation helps the startup team to better predict, understand and develop the software development process [11].

Fig. 1. Theoretical aspects of MVPs

2.1 Startup Development

Lean Startup [8] as a methodology for entrepreneurship has become increasingly popular in the past several years, evidenced by dedicated conferences and global Lean Startup meet-ups. As a result, it has also started to enter entrepreneurship education programs as a main topic. The Lean Startup approach was inspired by the lean concepts of focusing on the efforts that create value for customers and eliminating waste during entrepreneurial processes [8]. However, since the customers are often unknown, what customers perceive as value is also unknown. Therefore, entrepreneurs should get out of the building and involve customers from day one [12]. Lean Startup advocates building the product iteratively and delivering it to the market as quickly as possible for early feedback [8]. Lean Startup is essentially a hypothesis-driven approach [13] that bases entrepreneurial decisions on evidence and validated learning. To capture customer value, an entrepreneur should start a feedback loop that turns an idea into a product and reveals whether to pivot or persevere. This can be done by developing an MVP using agile methods to collect customer feedback about the product [8]. The feedback becomes the input to improve the product and validate the hypothesis. As a result, the startup might pursue new directions for the business or continue and scale it [14]. Figure 2 is a high-level representation of the Lean Startup methodology. Pivots occur commonly in software startups and have been discussed by various scholars. According to Ries [8], a pivot is a change made to validate the startup's hypothesis about the product, business model or engine of growth. Bajwa et al. identify various types of pivots that can happen in startups: zoom-in, zoom-out, customer segment, customer need, platform, business architecture, value capture, engine of growth, channel, technology, complete and side project [4].

A startup journey can be seen as a process of creating entrepreneurial artefacts [15]. According to the sciences of the artificial, one of the schools of theory adopted in entrepreneurship research [16], an artefact is defined as an interface between the internal team and its surrounding environment. The MVP is one type of artefact created as a result of the entrepreneurial process. As a core concept of Lean Startup [8], an MVP is a version of a new product that allows a team to collect the maximum amount of knowledge about customers with the least effort [8]. Ries lists several types of MVPs, for example an explainer video, a landing page, a wireframe and a single-feature prototype [8]. In a software engineering context, Nguyen Duc et al. discuss the throw-away prototype and the evolutionary prototype as MVPs [17]. An MVP is also considered a type of boundary object in the startup context [3].

Fig. 2. Lean startup process model [14]

2.2 Theoretical Model of Startup Evolution

Based on the Build-Measure-Learn approach, hypotheses about both the product and the customer should be formed and validated using MVPs [8]. The loop repeats and moves forward, from the problem-solution space to the product-market space and eventually to scaling. Lindgren and Münch present a study about experiment-driven product development in the startup context, describing product development as a linear series of incremental experiments [18]. Fagerholm et al. propose a framework for continuous experimentation that includes elements of Lean Startup [19]. This type of experimentation highlights the importance of continuous testing to support the development process towards a high-quality product; continuous in this context refers to running many iterations of the Build-Measure-Learn feedback loop. In addition to describing the experimentation process, Fagerholm et al. describe the required artefacts, tasks and roles [18, 19]. This experiment-driven process facilitates the development of MVPs or minimum viable features (MVFs) and supports the planning, implementation and analysis of experiments. The study by Holmström et al. describes the Hypothesis Experiment Data-Driven Development (HYPEX) model, which helps to integrate customer experiments into the software development process. The HYPEX model aims at shortening the customer feedback loop, which in turn reduces pressure in the software development process. Similar to the approaches mentioned earlier, Nguyen et al. represent the evolution of startups via a double-loop model of sense-making [20]. We formed a process-based framework to represent the entrepreneurial process, as shown in Fig. 3.
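To make this idealised, linear view concrete, the following minimal sketch (our own illustration, not a framework proposed by the cited authors; all function names are hypothetical) expresses the hypothesis-driven loop in code: each iteration builds an MVP for the current hypothesis, measures it against customer feedback, and either refines the hypothesis or pivots to a new one.

```python
# Minimal sketch of the idealised Build-Measure-Learn loop (illustrative only).
# `build_mvp`, `measure`, `refine` and `pivot` are caller-supplied callables;
# `measure` is assumed to return an object with `validated` and `promising` flags.

def build_measure_learn(hypothesis, build_mvp, measure, refine, pivot, max_iterations=10):
    """Run the idealised loop until a hypothesis is validated or iterations run out."""
    for _ in range(max_iterations):
        mvp = build_mvp(hypothesis)        # Build: one MVP per hypothesis
        evidence = measure(mvp)            # Measure: collect customer feedback
        if evidence.validated:             # Learn: persevere, refine or pivot
            return hypothesis, mvp
        hypothesis = refine(hypothesis, evidence) if evidence.promising else pivot(evidence)
    return hypothesis, None
```

As Sect. 4 shows, the journeys we observed deviate from this one-hypothesis-per-MVP assumption.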

Fig. 3. Hypothetical process of artefact-driven startup evolution

3 Research Approach

This section describes the research methodology adopted to study our cases. Given that startups are dynamic and multi-influenced environments, our initial plan was to conduct an exploratory case study. As the research progressed, however, our data became dominated by participant observations, because all of the authors were heavily involved in the startup cases. This motivated us to conduct a tailored ethnographic study [21]. Ethnography derives from traditional anthropology and aims at telling a credible, rigorous and authentic story, giving voice to people in their local context [22]. The central focus of ethnography is to provide rich, holistic insights into people's views and actions, as well as the settings in which they act, through the collection of detailed observations and interviews [23]. There have been some attempts to adopt ethnography in the software engineering context [24]. In this type of study, ethnographic methods are helpful in generating rich and detailed accounts of software project teams, their interactions with project stakeholders and their approaches to delivering products, as well as in-depth accounts of their experiences [24]. Hence, we adopted this approach to leverage all the contacts and insights we had from the cases.

3.1 Case Description

The cases were selected from a convenience sample. We defined four criteria for case selection: (1) the startup has operated for at least six months, so that its experience is relevant; (2) the startup has at least a first running prototype; (3) the startup has at least an initial customer set, first customer payments or a group of users; and (4) the startup has software as the core value of its business. We eventually decided to study the hypothesis-driven journeys of two startup cases: case 1, Startuppuccino, and case 2, MUML AS.

Case 1. The startup is named after its developed application, Startuppuccino [25], and is based at the Free University of Bozen-Bolzano in northern Italy. Startuppuccino started with the experience and observations of two team members who are also university teachers. Their initial idea was to recommend good software tools to initiate and support startups that lack key skills in their teams (e.g., design, web development) [26]. Early-stage startups commonly lack resources and look for startup tools in order to launch their idea and test the product-solution fit. Later, the idea pivoted into an educational platform that aims at helping entrepreneurship educators provide students with a better learning experience during their courses; tools were also recommended to users at this stage. So far, Startuppuccino's journey has gone through three pivots: (1) startuptools.club, (2) MineToolz and (3) the current version running as Startuppuccino [25].

Case 2. MUML AS is a spin-off from a Norwegian social media company. The CEO quit her job at the company and sought a technical team to develop a hyper-local news platform. She started with the business idea, hiring several consultants, freelancers and contractors to realize and refine it. After that, a CTO joined the team and started a prototyping contract with a Vietnamese outsourcing team, which was selected after a bidding process to ensure the lowest price quote. The contract was based on a six-milestone delivery schedule, with payment after each milestone. The outsourcing team worked in a Sprint-based manner, adopting Sprint planning and retrospective meetings, burn-down charts and communication via social media. After nine months of collaboration, the CEO regarded the collaboration as a positive experience in terms of the value perceived, and the outsourcing team was offered a place as part of the startup.

3.2 Data Collection

Semi-structured individual interviews [27] and participant observation were used to collect data, since they enable sufficient focus on the topic of interest while remaining flexible enough to discover unforeseen information. Table 1 shows an overview of the data collection instruments. The interview guide differed slightly between the two cases, between different people in the same case and even between sessions with the same interviewee. In all cases, however, we asked three types of questions: (1) warm-up questions about the current context of the interviewees related to business and product development, (2) past-experience questions to investigate how the interviewees acted in certain project scenarios in the past and (3) lessons-learnt questions to capture the beliefs that emerged or evolved from the project experiences. Most of our observations were active participation, in which the researchers were members of the startups, actively involved in business development, decision making, product development and customer interaction. Counting only observations with predefined research goals, six planned observation sessions were conducted in MUML AS and ten in Startuppuccino. The researchers came to the observed sessions with a clear research goal in mind, sometimes with a checklist, and field notes were written after each observation. In the case of Startuppuccino, observations of actions and thoughts were captured in a startup diary. Data triangulation was done by examining project artefacts, such as the project plan, meeting notes, technical documents and the project management board. By triangulating our data sources and our instruments, we addressed issues of validity and obtained comprehensive insights into the application of ethnographic methods.

Table 1. Data collection instrument

3.3 Data Analysis

Interview transcripts and the observation diary were available for analysis. We adopted a narrative analysis, going through the transcripts, identifying relevant pieces of text and labelling them with codes representing business ideas, product ideas and descriptions of MVPs. Combining these with additional materials, we compiled a list of hypotheses and MVPs. Hypotheses were either directly stated or indirectly explained by an interviewee. We also noted the timestamps at which a hypothesis or an MVP occurred. The connections among hypotheses are interpretative and were established jointly by all co-authors; for instance, connections between hypotheses were inferred from their semantic meaning. Most of the connections between hypotheses and MVPs are evident from our data. Finally, a cross-case analysis was performed on top of the per-case analysis of hypotheses and MVPs to identify commonalities and differences between the two cases.
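To illustrate the kind of structure this coding step produces, the following minimal sketch (our own illustration; the field names and example entries are hypothetical and not the study's actual data) records hypotheses and MVPs together with their timestamps, parent-child links and hypothesis-MVP associations:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hypothesis:
    hid: str                                           # e.g. "H01"
    statement: str                                     # hypothesis as stated or inferred
    noted_at: str                                      # when the hypothesis first appeared
    parents: List[str] = field(default_factory=list)   # parent hypotheses it was derived from

@dataclass
class MVP:
    mid: str                                           # e.g. "M01"
    description: str                                   # MVP type and how it was built
    noted_at: str
    tests: List[str] = field(default_factory=list)     # hypotheses this MVP helps validate
    pivot: bool = False                                 # True if this MVP marks a pivot

# Illustrative entries (hypothetical content)
h01 = Hypothesis("H01", "Early-stage teams need tool recommendations", "2016-01")
h04 = Hypothesis("H04", "Educators need a platform to run startup courses", "2016-09",
                 parents=["H02", "H03"])
m02 = MVP("M02", "Landing page with a curated tool list", "2016-03",
          tests=["H01"], pivot=True)
```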

4 Results

This section describes our findings for each case. We first present the Startuppuccino journey and then the MUML AS journey, each with the list of hypotheses formulated, the MVPs that were created, the pivots that occurred and, finally, the relationship diagram between hypotheses and MVPs.

4.1 Entrepreneurial Journey of Startuppuccino

With regard to RQ1, we found that the Startuppuccino entrepreneurs had some initial ideas and assumptions about customer problems. Table 2 shows that most of the hypotheses relate to customer problems, based on their business model canvas. Some hypotheses, for example H04, were derived after obtaining new knowledge from testing previous hypotheses, i.e. H02 and H03; we therefore formulated a parent-child relationship between these hypotheses. The hypotheses are also temporally ordered: H01 is the first hypothesis and H07 is the last hypothesis in the investigated time-frame. During the postmortem analysis, we were also able to identify the MVPs associated with these hypotheses, as described in Table 3. We identified seven MVPs (with pivots occurring at M02, M05 and M07, marked with *) and seven hypotheses, as described in Tables 3 and 2. MVPs are described with their types and how they were built in the startup, and numbered chronologically: M01 is the first MVP and M07 is the last one within the investigated time-frame. Pivots are evidence of visible knowledge and experience transfer in Startuppuccino. M02 is a zoom-in pivot, where major changes occurred in the team, the targeted market and the UX design of the product. M05 is a customer-segment pivot, accompanied by new team members and a change of vision. M07 involved the least knowledge transfer, as it was a complete pivot in which the whole business model changed.

Table 2. Hypotheses formulated in Startuppuccino journey
Table 3. MVPs built in Startuppuccino journey

4.2 Entrepreneurial Journey of MUML AS

With regard to RQ1, Table 4 shows that most of the hypotheses relate to the business objectives driven by their business model canvas. The hypotheses are chronologically ordered: H01 is the first hypothesis and H14 is the last hypothesis in the investigated time-frame. During the postmortem analysis, we were also able to identify the MVPs associated with these hypotheses, as described in Table 5. We identified 13 MVPs (with pivots occurring at M03 and M13) and 14 hypotheses, as described in Tables 5 and 4. MVPs are described with their types and how they were built in the startup, and numbered chronologically: M01 is the first MVP and M13 is the last one within the investigated time-frame. In MUML AS, two pivots happened, each marked by building a new MVP (M03 and M13) based on previous learning about customer needs and product design. M03 is a customer-need pivot, which is largely disconnected from the previous MVP; however, the learning regarding UX design and customer involvement carried over from the previous MVPs. M13 is a technology pivot, where new market research resulted in a new technical platform; only the platform changed, while the knowledge about the customer, product design and business model remained the same.

Table 4. Hypotheses formulated in MUML AS
Table 5. MVPs built in MUML AS journey
Fig. 4. Relationship between hypotheses and MVPs in Startuppuccino and MUML AS

4.3 Findings from Cross-Case Analysis

We observe some commonalities in hypothesis and MVP development across the two startup cases. With regard to RQ1, we found that startups do learn during entrepreneurial evolution, and the learning can be marked by either hypothesis testing or MVP creation. However, the overall learning does not occur systematically or linearly; the relationship between hypotheses and MVPs is non-linear. While the theoretical model of startup evolution assumes a series of incremental experiments involving hypothesis testing, in both cases we find that the actual model of hypothesis testing in startups is more complicated. It is not straightforward that one hypothesis is associated with one MVP. In some cases, a business hypothesis is tested by multiple MVPs at different times in the startup life-cycle. Validating one hypothesis can lead to another hypothesis (a parent-child relationship), and in some cases one hypothesis is derived from multiple parent hypotheses. Some hypotheses are so complex that they are only fully tested by very late MVPs. We also observe some MVPs that answer multiple hypotheses; these are often important MVPs that turn into commercial products.

With regard to RQ2, we capture the relationship between hypotheses and MVPs in Fig. 4. In the figure, the dashed links represent the temporal relationship, i.e. the evolution flow of the startup over time. The white-head arrow links represent parent-child relationships between hypotheses. The black-head arrow links represent the evolution of MVPs and are also used for the association between a hypothesis and an MVP. In the case of M1, there is no link to a hypothesis because the MVP was never validated. In the case of M2, which was built on top of M1, a pivot occurred; hence it is highlighted in green. In the case of M5 and M6, the pivot occurred during M5, but M6 was tested at the same time; both MVPs were developed in parallel.

In relation to RQ2, we found that there is no one-to-one correspondence between hypotheses and MVPs. According to Lean Startup, learning occurs while validating pre-defined hypotheses; however, in both cases we find that some MVPs are built without any association to a hypothesis. Such an MVP is built either as an extension of a previous one or as a response to customer and market demands. There are also hypotheses that are never tested: the startup founders recognized that derived hypotheses were not fully covered by MVPs, some being skipped for intuitive reasons and some by mistake. Moreover, we found that pivots can be captured from the MVP-creation perspective. A pivot marked by a new MVP often inherits learning from the previous MVPs. Typically, the pivoted MVP starts from scratch, which means the MVP preceding the pivot is effectively a throw-away prototype. There are also situations in which a pivoted MVP reuses source code from previous MVPs; in our cases, such reuse involved significant refactoring and changes to the code base. A pivoted MVP is also found to be associated with a new (sub-)hypothesis disconnected from the previous hypothesis.
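As a concrete illustration of how such a journey map can be reproduced with a generic graph tool (as suggested in Sect. 5), the sketch below emits a Graphviz DOT description of a small hypothesis-MVP graph. It is our own illustration, not tooling used in the study; the node identifiers and edges are examples in the style of Tables 2-5, not the actual case data. Edge styles follow the conventions of Fig. 4: dashed for temporal flow, empty arrowheads for parent-child hypothesis links and filled arrowheads for MVP evolution and hypothesis-MVP associations.

```python
# Emit a Graphviz DOT journey map of hypotheses (ellipses) and MVPs (boxes).
# Node identifiers and edges are illustrative only; pivoted MVPs are coloured green.

temporal = [("H01", "H02"), ("H02", "H03")]        # evolution flow over time
parent_child = [("H02", "H04"), ("H03", "H04")]    # hypothesis derived from parent hypotheses
mvp_evolution = [("M01", "M02")]                   # MVP built on top of a previous MVP
tested_by = [("H01", "M01"), ("H04", "M02")]       # hypothesis-MVP association
pivots = {"M02"}                                   # MVPs that mark a pivot

lines = ["digraph journey {", "  rankdir=LR;"]
nodes = {n for pair in temporal + parent_child + mvp_evolution + tested_by for n in pair}
for node in sorted(nodes):
    shape = "box" if node.startswith("M") else "ellipse"
    color = "green" if node in pivots else "black"
    lines.append(f'  {node} [shape={shape}, color={color}];')
for a, b in temporal:
    lines.append(f'  {a} -> {b} [style=dashed];')
for a, b in parent_child:
    lines.append(f'  {a} -> {b} [arrowhead=empty];')
for a, b in mvp_evolution + tested_by:
    lines.append(f'  {a} -> {b} [arrowhead=normal];')
lines.append("}")

print("\n".join(lines))  # pipe the output into `dot -Tpng -o journey.png` to render
```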

5 Discussion and Conclusions

This study describes the hypothesis-driven journeys of two software startups, covering the hypotheses formed, the MVPs built and the pivots that occurred. Lean Startup and previous studies on software startups have either neglected the relationship between hypotheses and MVPs or considered it in an ideal context. We found that entrepreneurs do learn from testing their hypotheses; however, they do not always focus on hypothesis formulation, and hence the relationship between the business objectives to test and the MVPs to build is not always straightforward. Through two case studies, we observed that the relationship between hypotheses and MVPs is non-linear and incomplete. We also proposed an approach to visualize the startup journey by capturing the hypothesis-MVP relationships.

From our cases, it seems that the amount of learning entrepreneurs gain depends on user involvement and on their existing knowledge about the market, industry and technology. Little user involvement might lead to little experience gained from testing hypotheses. For an entrepreneur, it is crucial to solve an urgent user problem, even if the startup has to face a complete pivot; this can be time-consuming and a big move for a startup, but beneficial too. Moreover, an entrepreneur should grab every opportunity to experiment with MVPs, and having a strong business driver for the startup is important. Last but not least, given the usefulness of visualizing startup journeys demonstrated in this paper, entrepreneurs may find journey maps a useful tool for reflecting on and reviewing possible gaps in their business and product development. We are not aware of a specific toolset for this purpose on the market; however, an entrepreneur can use generic graph tools, such as Graphviz, GraphTea and Plotly, and follow the approach described in this paper.

There are several threats to validity worth discussing [28]. One internal threat to validity is bias in data collection, as the data might not represent a comprehensive story. To mitigate this threat, we interviewed the CEOs during the postmortem analysis, as they have the best understanding of their startups, and we used every opportunity to interview relevant people in the context of the study. We also used artefacts (Trello, project charts, the Kanban board and the diary) during the postmortem to increase our understanding of the cases. In both startups we also acted as startup team members, which enabled many insights beyond the interviews. Another internal threat to validity concerns the reliability of the reported cases; we mitigated this by ensuring that all of the authors have not only a theoretical background in software startups but also hands-on experience. A construct validity threat is a possibly inadequate description of the constructs. An external validity threat is the representativeness of our selected cases: both are small startups, and their decisions on MVPs might be influenced by individual personalities.

Future research can validate the results of this work by systematically adopting the approach in a larger set of cases. We also call for the development of a dedicated toolset to visualize startups' hypotheses, MVPs and the connections among them; such a toolset would highlight the flow of learning and experience during entrepreneurial development.