
1 Introduction

Technology has become essential in organizations and businesses, and its role in their growth and efficiency is increasingly significant [1]. Technologies are also critical to a company's ability to compete with other organizations: selecting the right ones can create remarkable competitive advantages, and several characteristics need to be explicitly considered during the selection process [2]. Used in various industries, Technology Assessment (TA) consists of monitoring technologies and their relationship with their environment, thus contributing to the survival of companies in competitive markets. TA helps identify and clarify existing problems, but does not solve them [3]. Thus, TA performed prior to technology adoption reduces the risk of ineffective investment decisions and keeps systems and technologies in check [1, 4].

As Information Technology (IT) adoption has received considerable attention in the last decade, technology acceptance models have emerged [5]. The Technology Acceptance Model (TAM) is the most widely used of them [6]. It is not a descriptive model, meaning that it does not provide a diagnostic capability for specific failures in a technology; rather, it is intended to assess and predict the acceptability of a technology [7]. TAM reflects the everyday experiences of acquiring or using technology, as well as the perceived needs of the user [8, 9].

The focus of this paper is to create a global technology assessment model by studying TAM 3, an evolution of the original TAM. The ioAttend platform, developed by the technological startup IOTech to facilitate the scheduling of attendance and events, was used as a case study of the model created: the questionnaire template was adapted to the platform and responses were collected from ioAttend users to understand their evaluation of it. The results obtained in the questionnaires were then analyzed through a set of indicators.

This article is divided into seven sections. The first section introduces the paper. The second presents the background and contextualizes the concepts discussed. The third section describes the tools and methods used, and the fourth presents the global questionnaire model. The case study is presented in the fifth section, the results are discussed in the sixth, and conclusions are drawn in the seventh.

2 Background

This section presents the crucial concepts for the development of this work, such as technology assessment, its importance in industry and organizations, the Technology Acceptance Model methodology, and finally a brief presentation of the ioAttend platform that will serve as the basis for the application of this study.

2.1 Technology Assessment

Mendes and Melo [3] state that a technology is the integrated set of knowledge, techniques, tools, and work procedures. Technologies considered new are used to replace procedures previously used by the organization. Because of the impact a new technology can have on an organization, a prior analysis is necessary. The purpose of TA in companies is to provide broad and objective information about the potential consequences of actions related to technological development. It aims to find and point out the most appropriate intervention alternatives in the development of technologies, driven essentially by concern with the effects, not only the planned ones such as cost/benefit, but particularly those that are not expected [3]. Through TA, companies can assess their compatibility with a given technology, thus allowing the adoption of an action plan to prevent and reduce technology gaps [10]. According to Hamzeh and Xu [2], advanced technologies (e.g., IoT, AI, Big Data) have been transforming industries in recent years. Technology selection is the problem of identifying the best technology from a set of possible alternatives. TA performed prior to technology adoption thus reduces the risk of ineffective investment decisions and keeps systems and technologies in check [1, 4]. In addition to helping companies effectively identify strategic and operational gaps, enterprise TA also paves the way for companies to explore new concepts and ideas. By helping companies identify opportunities that leverage their interests, TA becomes central to driving the overall performance and efficiency of organizations [1].

2.2 Technology Acceptance Model

To develop a reliable model that could predict user attitudes toward, and actual use and acceptance of, any specific technology, Davis adapted the Theory of Reasoned Action (TRA) and the Theory of Planned Behavior (TPB) and proposed TAM [11]. He made two main changes to the TRA and TPB models. First, he did not consider the subjective norm in predicting actual behavior, considering only a person's attitude toward it. Second, he identified two primary factors that influence an individual's intention to use a new technology: Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) [5, 6, 11, 12, 13]. These are considered the main determinants of attitude toward a technology, which in turn predicts the behavioral intention to use and ultimately the actual use of the system [8]. In TAM, PU refers to the degree to which a user believes that using a particular system will help improve their job performance, while PEOU refers to the degree to which a person believes that using a particular system will be free of effort [5, 6, 12].

Later, to complete the model by incorporating the antecedents of the original TAM, Venkatesh and Bala developed in 2008 a model of the determinants of PEOU, TAM 3, to enable understanding of the role of interventions in technology adoption. TAM 3 comprises four constructs: PEOU, PU, Behavioral Intention (BI), and Use Behavior (UB). UB represents the actual use of the system by the individual. TAM 3 allows for a richer analysis of the relationship between users and technologies and stands out for including personal variables, thus allowing for a more human analysis, with the user being the determining actor in the decision of whether to use a technology [13].

2.3 ioAttend

The ioAttend application was developed by IOTech for iOS and Android. ioAttend is an intelligent system capable of quickly and securely recording attendance at events, activities, classes, or work. The platform's main goal is to simplify attendance booking and space management through a simple click, using triple authentication. The main functionalities of ioAttend are event management, user management, group management, space management, and saving of user history. Organizations can manage their events, their employees, and the spaces available for attendance registration quickly and digitally, avoiding the use of paper and the maintenance of electronic systems.

Based on IOTech's personal and professional experience, problems were detected in agenda and event management. ioAttend thus emerges as an intuitive, simple, and easy-to-use solution that addresses the problems detected and offers new features that streamline the user's agenda management.

ioAttend can be used by two types of people: users and administrators. The tools available and the type of use vary according to the type of each user.

3 Material and Methods

This research aims to design a global questionnaire model based on the TAM 3 methodology to evaluate several types of technology. TAM 3 was chosen because of previous experience with this approach and earlier work [14, 15].

Case Study (CS) was chosen as the scientific research methodology to guide this work. CS is an ideal methodology when a holistic and in-depth investigation is required [16]. It often uses qualitative data, collected from actual events, and aims to clarify, investigate, or report on current phenomena within their own context [17]. In the first stage of the methodology, the research design was established, the objective was outlined (in this case, the creation of a global questionnaire model), and all the necessary information was collected. In the second phase, the questionnaire model was developed. In the third phase, the model was adapted to the platform used as an example, ioAttend, and the data needed to achieve the study objective was collected through this questionnaire, which was disseminated in chats with current and former employees of the IOTech startup. In the last phase, the collected data was analyzed and the discussion and conclusions of the results were drawn up.

To develop this work, the tools described below were used:

  • mSurvey: a platform developed by IOTech for conducting questionnaires;

  • Microsoft Excel: this program was used to analyze the results obtained in the questionnaires;

  • Paleontological Statistics (PAST): a data analysis software, used for the Kendall's Tau analysis.

4 Global Assessment Model

For better organization of the questionnaire template, it has been divided into seven sections:

  1. Level of Experience in the Technological Area;
  2. Interface;
  3. Operational Characteristics;
  4. Technical Characteristics;
  5. Behavioral Characteristics;
  6. Relevance of the Platform;
  7. Evaluate whether platform usage can be advantageous.

There are three possible response types: scaled, multiple-choice, or open-ended. The first section consists of multiple-choice responses and aims to get to know the user better. From the second to the sixth section, the answers are scaled and are dedicated to exploring the platform and understanding the user's desire to use it. In turn, the seventh and last section is open-ended and allows the respondent to suggest and report positive or negative aspects of the platform.

Regarding the scaled questions, a Likert scale [18] ranging from one to five was applied. This scale captures levels of agreement that shorter or longer scales do not, while keeping dispersion in the results low, with two negative values, two positive values, and a neutral value [19].

The five levels stipulated for the scale were:

  1. Totally disagree;
  2. Disagree;
  3. Neither agree nor disagree;
  4. Agree;
  5. Totally agree.

To get a better perception of the respondents' level of attention and the veracity of the answers, screening questions should be placed in the middle of the sections, thus ensuring that the questionnaires are answered responsibly. An example of a screening question is "One + Three".

If the platform being evaluated has more than one type of user (e.g., user, administrator), a different questionnaire should be created for each user type. If the questions are the same, there should be a question about which type of user the respondent is; this question can, for example, be placed in Sect. 1.

To evaluate the functional and technical characteristics of the platform, it was necessary to understand the user’s behavior towards the technology as well as their level of use. Thus, the questionnaires were based on the TAM 3 constructs, which are PU, PEOU, BI, and UB. The Delphi methodology was also used so that the questionnaires could be formulated with quality and rigor.

Open-ended questions are not included in the analysis according to the TAM 3 constructs, nor are some multiple-choice questions, since these only serve to obtain additional data and feedback about the respondents. Thus, some questions in Sect. 1 and all questions in Sect. 7 are excluded from the TAM 3 analysis; they are presented below, and the questions of the remaining sections of the questionnaire template are then mapped to the TAM 3 constructs in a matrix. Multiple-choice questions are presented with examples of the available options.

  1. Level of experience in the technological area

     1.1. In what business sector do you work? (e.g. Education, Technology, Other sectors)
     1.2. What is your role in the company? (e.g. Administrative, Consultant, Other)
     1.3. What is your position in the company? (e.g. CEO, Team Manager, Other)
     1.4. What is your experience in the technological area?
     1.5. What type of device do you usually use to access digital information (news, e-mail, reports, others)? (e.g. Cell phone, Tablet, Computer, Other)
     1.6. What operating system do you currently use? (e.g. Windows, Android, iOS, Linux, Others)
     1.7. On average, how often do you use technological devices per day? (e.g. Less than 2 h/day, Between 2 to 4 h/day, More than 4 h/day)
     1.8. Type of user? (e.g. Full autonomy, Rarely needs technical support, Regularly needs technical support)
     1.9. Do you use the computer mainly for? (e.g. Personal production application, Handle/consult administrative information, Handle/consult management information)
     1.10. What device did you use to test the platform? (e.g. Laptop computer, Mobile Phone, Tablet, Other)
     1.11. List the technical characteristics of this device.

  7. Evaluate whether platform usage can be advantageous

     7.1. Why do you use the platform?
     7.2. Positive aspects of the platform?
     7.3. Negative aspects of the platform?
     7.4. Suggestions to make the platform more advantageous.

To get a sense of user behavior, it is necessary to map the questions to the TAM 3 constructs. Each question can evaluate more than one construct. Table 1 shows some possibilities for this mapping. For example, the question "What is your experience in the technological area?" corresponds to the constructs PU, PEOU, and UB.

Table 1. Matrix between the questions and the TAM 3 constructs.
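As an illustration of how the matrix in Table 1 can be encoded for automated analysis, a minimal Python sketch is given below. Apart from question 1.4, whose mapping to PU, PEOU, and UB is stated in the text, the question identifiers and construct assignments are hypothetical placeholders rather than the actual entries of Table 1.

```python
# Minimal sketch of a question-to-construct mapping, in the spirit of Table 1.
# Only the entry for question 1.4 follows the text; the others are hypothetical.
QUESTION_CONSTRUCTS = {
    "1.4": ["PU", "PEOU", "UB"],  # "What is your experience in the technological area?"
    "2.2": ["PEOU"],              # hypothetical assignment
    "3.3": ["PU", "BI"],          # hypothetical assignment
}

def questions_for(construct: str) -> list[str]:
    """Return the IDs of the questions mapped to a given TAM 3 construct."""
    return [q for q, constructs in QUESTION_CONSTRUCTS.items() if construct in constructs]

print(questions_for("PEOU"))  # ['1.4', '2.2'] with the placeholder mapping above
```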

To put the global model into practice, it must be adapted to the technology to be evaluated, and specific questions must be created. Using a generic application as an example, some additional questions in Sect. 3, referring to operational features, are: "Do you consider that the application improves the performance of data recording?" and "Do you consider that the application allows greater control of data recording?". In Sect. 6 (Relevance of the Platform) it is also possible to add more questions, such as "Do you believe that using the platform influences the speed at which data recording is done?" and "Do you consider that those who need to do data recording should use the platform?".

In this study, the ioAttend application was used as a proof of concept of this global model, which is put into practice in the case study. The case study helps to understand what kind of information and indicators can be obtained with the model.

5 Case Study

As a case study, the ioAttend application was used and the global questionnaire was adapted to its needs. Two questionnaires were prepared to cover all types of users of the ioAttend platform: end-users and administrators. The administrators' questionnaire contains 58 questions and the users' questionnaire contains 51.

The questionnaires were shared with ioAttend users, which makes the respondents knowledgeable about the platform and means that a large number of answers is not required for the study to be credible. A total of 37 responses were obtained, of which 51.35% came from administrators and 48.65% from users.

To obtain a more reliable and coherent analysis, the answers of respondents who missed the screening question were first excluded. The screening question was "One + Three", whose correct answer is 4; answers with the value 1, 2, 3, or 5 were considered wrong, and the questionnaires containing these values for this question were removed from the data analysis. Eight questionnaires were eliminated, corresponding to 21.62% of the total number of answers acquired, four of which were from administrators and the rest from users. Thus, a new total of 29 responses remained for analysis, of which 51.72% came from administrators and 48.28% from users.
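As an illustration, the exclusion rule described above can be sketched as follows; the tabular layout and the column name `screening` are assumptions for the example, not the actual mSurvey export schema.

```python
import pandas as pd

# Sketch of the screening rule: keep only questionnaires whose answer to the
# "One + Three" screening question equals 4. Column names are illustrative.
def filter_screening(responses: pd.DataFrame) -> pd.DataFrame:
    valid = responses[responses["screening"] == 4]
    removed = len(responses) - len(valid)
    print(f"Removed {removed} of {len(responses)} questionnaires "
          f"({removed / len(responses):.2%}) for failing the screening question.")
    return valid

# With the 37 responses collected here, the 8 failed questionnaires would be
# removed, leaving the 29 responses used in the analysis.
```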

The data was automatically exported from the mSurvey platform to Excel format, where it was inspected and the statistical indicators were computed. Table 2 presents the technology experience of the respondents. For example, for the question "In what business sector do you work?", the most chosen answer option for both administrators and users was "Technology", with 87% and 72% respectively.

Table 2. Level of experience in Information Technology.

To study the data, an overall analysis of the questionnaire and an analysis per TAM 3 construct were performed. Two types of analysis were carried out: univariate statistical analysis and correlation analysis (Kendall's Tau coefficient).

5.1 Univariate Statistical Analysis

The univariate analysis covers the minimum (Min), maximum (Max), sum (Sum), mean (Mean), mode (Mode), standard deviation (SD), variance (Var), and median (Med).
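To make these indicators concrete, the sketch below shows how they could be computed with pandas; the data layout (one row per respondent, one column per question, values from 1 to 5) is an assumed simplification of the exported Excel file, not its actual structure.

```python
import pandas as pd

# Univariate indicators per question (Min, Max, Sum, Mean, Mode, SD, Var, Med).
# `answers` is assumed to hold one row per respondent and one column per
# question, with Likert values from 1 to 5.
def univariate_per_question(answers: pd.DataFrame) -> pd.DataFrame:
    return pd.DataFrame({
        "Min":  answers.min(),
        "Max":  answers.max(),
        "Sum":  answers.sum(),
        "Mean": answers.mean(),
        "Mode": answers.mode().iloc[0],  # first mode if several values tie
        "SD":   answers.std(),
        "Var":  answers.var(),
        "Med":  answers.median(),
    })

# The analysis per respondent is obtained the same way on the transposed table:
# univariate_per_question(answers.T)
```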

Univariate Statistical Analysis per Respondent

To summarize the statistical analysis per respondent, the answers from users and administrators were grouped together. The mean of the sum of the answers per respondent is 155.862, which indicates that respondents gave high values to the questions. The average response is 4.054, which means that respondents are satisfied with the platform. The mode of the responses is 5, equivalent to "Totally agree". The standard deviation is 0.595, which is not very high but shows a slight dispersion in the responses among respondents. The average variance of 0.455 shows that individual values are not far from the mean. The median, the center value of the data set, is 4 in this analysis. In general, respondents are satisfied with ioAttend. This analysis can be seen in Table 3.

Table 3. Global univariate statistical analysis per respondent.

Univariate Statistical Analysis per Question

Table 4 shows the univariate analysis per question for the questionnaire answered by users. It shows that the average of the answers per question is approximately 4, with the value 4 accounting for 30.41% of the answers. In almost all questions, at least one respondent rated the question with a 2 or a 3; only the questions in Sect. 7 were rated with the minimum value of 1, and all questions were rated at least once with the value 5. Question 2.2, "Do you find it intuitive to change the language on the platform?", was the question with the lowest standard deviation, 0.61. The mode of this questionnaire was the value 5.

Questions 2.2 "Do you find it intuitive to change the language on the platform?", 2.5 "Do you find the navigation bar intuitive?", and 2.6 "Do you find it intuitive how to visualize an event?" present the best results statistically, with the highest response averages, above 4, and the lowest standard deviations, well below 1. The mode of response to these questions is 5. This shows that users are satisfied with the topics addressed in these questions. In turn, questions 1.4 "What is your experience in the technological area?", 6.4 "Do you think the application has been important in the digital evolution of your company?", and 7.2 "View locations?" have the lowest response means, below 4, and a high standard deviation, above 1. The mode of response is 5.

Table 4. Univariate statistical analysis per question (user)

The same analysis can be carried out for the administrators' questionnaire. It shows that the average of the answers per question lies between the values 3 and 4: 23.97% of administrators' answers had the value 3 and 25.71% the value 4. In almost all questions, at least one respondent rated the question with a 1 or a 2, and all questions were rated at least once with the value 5. The mode of this questionnaire, the most frequently given value, was 5.

Questions 2.1 "Do you consider the login page intuitive?", 2.14 "Do you consider it intuitive to log out of the platform?", and 7.1.2 "Do you select the start and end date?" show the best results statistically, with the highest response means, above 4, and the lowest standard deviations, below or very close to 1. The mode of response to these questions is 5. This shows that administrators are satisfied with the topics addressed in these questions. In turn, questions 3.3 "Do you consider that the platform improves event management?", 3.6 "Do you consider that the platform allows mitigating situations of heavy workload?", and 6.4 "Do you consider that the platform has been important in the digital evolution of your company?" have the lowest response averages, close to 3, and a high standard deviation, greater than 1. The modes of response are 4, 3, and 3, respectively.

Analysis by Construct

Previously, the questions were divided by their corresponding constructs, these being four: PU, PEOU, BI, and UB. The statistical analysis of each construct will be performed for administrators and users per question.
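A minimal sketch of such a per-construct aggregation is shown below, reusing the hypothetical question-to-construct mapping sketched in Sect. 4; the data layout is again an assumed simplification of the exported answers.

```python
import pandas as pd

# Average answer per TAM 3 construct: for each construct, pool the answers to
# all questions mapped to it and take their mean. `answers` has one column per
# question ID; `mapping` is a dict such as QUESTION_CONSTRUCTS above.
def construct_means(answers: pd.DataFrame, mapping: dict[str, list[str]]) -> pd.Series:
    means = {}
    for construct in ("PU", "PEOU", "BI", "UB"):
        cols = [q for q, cs in mapping.items()
                if construct in cs and q in answers.columns]
        if cols:
            means[construct] = answers[cols].stack().mean()
    return pd.Series(means)

# Running this separately on the users' and administrators' answers yields the
# kind of per-construct comparison summarized later in Table 8.
```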

Kendall's Tau was used to measure the degree of agreement between two ordinal variables. The Tau correlation coefficient returns a value from −1 to 1, where 0 means the two variables are independent, 1 represents a perfect relationship, and −1 corresponds to perfect disagreement [20]. Several analyses were performed, among them the mean and standard deviation for each construct per question and Kendall's Tau per construct per question. Some of these analyses are presented next; they will be presented in more detail in an extended version of this article. The statistical analysis regarding the users' PU construct can be found in Fig. 1, where it is possible to verify that the users' average answer is approximately 4.

Fig. 1. Analysis of Mean and Standard Deviation of the PU construct (User)

Table 5 shows Kendall’s Tau statistical analysis referring to the BI construct per administrator question. Most correlations have a Tau coefficient equal to or close to 0, which means that the questions are not related.

Table 5. Kendall's Tau for the BI construct (administrator)
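For reference, the sketch below illustrates how a Kendall's Tau coefficient between the answers to two questions can be computed; in this work the coefficients were obtained with PAST, and the answer vectors shown are illustrative values only, not data from the questionnaires.

```python
from scipy.stats import kendalltau

# Kendall's Tau between the answers to two questions (two ordinal variables on
# the 1-5 Likert scale). Values close to 1 indicate agreement, close to -1
# disagreement, and close to 0 independence.
q_a = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]  # illustrative answers to one question
q_b = [4, 4, 5, 5, 3, 4, 4, 2, 5, 5]  # illustrative answers to another question

tau, p_value = kendalltau(q_a, q_b)
print(f"tau = {tau:.2f}, p = {p_value:.3f}")
```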

Regarding the UB construct, its analysis per user question is shown in Fig. 2, which presents response averages between 3 and 4.

Fig. 2. Analysis of Mean and Standard Deviation of the UB construct (User)

6 Discussion of Results

When analyzing the PU construct, some standard deviation is noticeable in the answers of both groups of respondents, approximately 1.00 per question, which indicates dispersion in the answers. In the users' analysis, question 2.2, "Do you find it intuitive to change the language on the platform?", has the highest average, 4.64, and is also the question with the lowest standard deviation, 0.61. In the Kendall's Tau analysis of the PU construct, most of the respondents' answers to the questions are unrelated to each other, since the Tau coefficient is 0 or very close to it. Some correlations show a coefficient very close or equal to 1, which indicates that the questions are related to each other. The correlation between the administrators' answers to questions 1.4 "What is your experience in the technology area?" and 2.9 "Do you find the functionality of editing an event intuitive?" shows a Tau coefficient of 1, which indicates a perfect relationship. In turn, there are negative coefficients, demonstrating a small disagreement between the questions. The correlation between the administrators' questions 3.7 "Do you consider that the platform allows greater control of the various tasks?" and 1.4 is the most negative, with a coefficient of −0.45.

Analyzing the PEOU construct per question, it can be concluded that the average of the respondents' answers is approximately 4. The users' answers to questions 2.2 "Do you consider it intuitive to change the language in the platform?", 2.5 "Do you consider the navigation bar intuitive?", 2.6 "Do you consider it intuitive how to view an event?", and 2.15 "Do you consider it intuitive to log out?" have the highest average, 4.6. In turn, the lowest average, 3.6, belongs to the administrators' question 7.3.2 "Add external users or import via CSV?". As for the standard deviation, its value is very close to 1, which demonstrates some dispersion in the respondents' answers. The results of the Kendall's Tau correlation coefficient for the administrators' PEOU construct show that most of the questions are not related, since the coefficient is very close or equal to 0. Regarding coefficients close to 1, several are present, including two correlations where the coefficient is equal to 1 (1.4 "What is your experience in the technological area?" with 2.9 "Do you consider intuitive the functionality to edit an event?", and 1.4 with 2.14 "Do you consider it intuitive to log off in the platform?"), which shows a perfect relationship between the answers to these questions. Tau coefficients equal to or greater than 0.80 are underlined to facilitate the reading of the table. This analysis also shows negative coefficients, which indicate a disagreement between the answers, with the correlation between question 7.1.3 "Select the type of event and your options?" and 1.4 having the most negative coefficient, −0.47.

The analysis of the BI construct per question reveals that the respondents' average response is approximately 4, and that the users' questions 2.4 "Is the application design intuitive?" and 3.3 "Do you consider that the application improves presence marking performance?" have the highest average, 4.50. The standard deviation is high, indicating a discrepancy in the respondents' answers. The question with the lowest standard deviation is question 2.3, "Is the application interface suitable for the functions to be performed?", with a value of 0.72. Table 5, presented before, shows the Kendall's Tau analysis referring to the BI construct per administrator question. The correlation between questions 6.2 "Do you think those who deal with event management should use the platform?" and 3.3 "Do you think the platform improves event management?" has the coefficient closest to 1, 0.86, showing that they are related. There are no values below 0, which shows that there are no questions with answers in disagreement. The analysis concerning the users is very similar to this one.

Regarding the UB construct, its analysis per user question is shown in Fig. 2. The users' questions 3.3 "Do you think the application improves the performance of presence marking?" and 4.2 "Do you think that access to the application is safe?" have the highest average, 4.50. Question 4.2 of the users, "Do you consider that access to the application is secure?", shows the lowest standard deviation, 0.63. Overall, the questions show some standard deviation, very close to or greater than 1, which reveals a dispersion among the respondents' answers. According to Kendall's Tau, half of the correlations between the questions that make up the respondents' UB construct show a coefficient very close or equal to 0, which reveals that they have no relationship. In the other half, negative coefficients or coefficients higher than 0 are visible. No coefficients equal to −1 or 1 are found, which means that there are no perfect disagreements or perfect relationships between the questions. The correlation with the most negative coefficient, −0.45, is between the administrators' question 3.7 "Do you consider that the platform allows greater control of the various tasks?" and 1.4 "What is your experience in the technological area?". In turn, the correlation with the highest coefficient is between the users' questions 1.4 "What is your experience in the technological area?" and 6.2 "Do you consider that those who need to mark presence in an event should use the platform?", with a value of 0.94, which demonstrates a strong relationship between the questions.

To summarize, Table 6 shows the three questions with the highest average, all of which belong to Sect. 2 "User Interface" of the user questionnaire.

Table 6. The 3 questions with the highest average.

Table 7 shows the three questions with the lowest average, all of which belong to the administrators' questionnaire. Two of them belong to Sect. 3 "Operational characteristics" and the other to Sect. 6 "Relevance of the platform by the administrator".

Table 7. The 3 questions with the lowest average

Table 8 gives a general perception of the average and mode of each construct, both for administrators and users. It is visible that users are more satisfied with the platform than administrators, but overall both groups accept the use of ioAttend.

Table 8. Global analysis for each construct

7 Conclusions

The development of this article aimed to create a model questionnaire according to the Technology Acceptance Model (TAM 3) to evaluate the acceptance of a technology by its users. The model questionnaire consists of 7 sections (Level of Experience in the Technological Area, Interface, Operational Characteristics, Technical Characteristics, Behavioral Characteristics, Relevance of the Platform, and Evaluate if it is advantageous to use the platform) to analyze the platform and the user’s perception of it.

As a practical case, the ioAttend application was used; the questionnaire was adapted to it and the necessary statistical analysis of the data obtained was performed. It was possible to conclude from the statistical analyses and Kendall's Tau that both user types are satisfied with using ioAttend. However, users (acceptance average of 4.252) are visibly more satisfied than administrators (acceptance average of 3.775) across the four constructs evaluated (Perceived Usefulness, Perceived Ease of Use, Behavioral Intention, and Use Behavior). This indicates that improvements need to be made to the platform for administrators to feel more motivated and to continue using it. The PEOU construct obtained the best evaluation, with the highest average responses for both administrators (3.926) and users (4.295). The question with the most agreement among respondents was 2.2, "Do you find it intuitive to change the language on the platform?", with the lowest standard deviation of 0.610 in the users' questionnaire.

In the future, the results will be used to improve the system, mitigate some reported problems, and add new features. After that, a new round of questionnaires will be carried out to understand if there was any improvement for users at the level of TAM 3 constructs.