
1 Introduction

Partial Least Squares Structural Equation Modeling (PLS-SEM) is a variance-based method for estimating structural equation models with the goal of maximizing the explained variance of the dependent variables [1]. PLS-SEM is also a commonly used approach for assessing causal effects in path models with latent constructs that are measured indirectly through multiple indicators [2]. PLS-SEM originated in the seminal work of Wold [3], as noted by Vinzi et al. [4]. Extensive reviews of the PLS approach and its further developments are given by Chin [5] and by Chin and Newsted [6], covering the new graphical interface and improved verification methods. The basic idea of PLS-SEM is that the complexity of a system can be studied through causal relationships among latent concepts, called latent constructs, each measured by several observed indicators usually referred to as manifest variables [4].

There are several reasons to use PLS-SEM: (1) PLS-SEM can examine the measurement model and the structural model at the same time [7]; (2) PLS-SEM is suitable for complex models, such as those with hierarchical structures and mediator or moderator effects [7, 8]; (3) other analysis software leads to less clear conclusions and requires a greater number of separate analyses, whereas PLS-SEM provides more reliable and valid results [9, 10]; (4) PLS-SEM has become a popular technique as an alternative to other SEM tools such as LISREL and AMOS [11]; (5) PLS-SEM does not require large samples because it is a component-based approach [12, 13]; and (6) PLS-SEM provides more accurate estimates for path analysis of both direct and indirect effects [5]. In addition, Hair et al. [14] suggest that PLS-SEM should be chosen as the main analysis technique in the following cases: “When the structural model is complex and includes many constructs, indicators and/or model relationships; when the research objective is to better understand increasing complexity by exploring theoretical extensions of established theories (exploratory research for theory development); when the path model includes one or more formatively measured constructs; when the research consists of financial ratios or similar types of data artifacts”.

One of the most widely used software packages for PLS-SEM is SmartPLS. SmartPLS was developed by Ringle et al. [15] under the name “SmartPLS 2” and was later updated by Ringle et al. [16] under the name “SmartPLS 3”. At the time of writing, the latest version of SmartPLS is 3.3.7. SmartPLS has gained popularity with researchers since its launch in 2005 because it is freely available to academics and researchers and offers an easy-to-use interface and advanced reporting features. Hair et al. [1] described SmartPLS as “A milestone in latent variable modeling. It combines state of the art methods (e.g., PLS-POS, IPMA, complex bootstrapping routines) with an easy to use and intuitive graphical user interface”. A model in SmartPLS is built mainly from theory and hypotheses, which define its paths; the paths are created from the proposed constructs and their indicators. The benefit of the model is that it allows researchers to explore and simplify the structure, so that the links between indicators and constructs can be assessed for hypothesis testing [17].

Ringle et al. [18] indicated that SmartPLS is useful software for examining models proposed by researchers in many fields, especially in management (e.g., human resource management, accounting, logistics, marketing, supply chain management, production management, international business management, and operations management). SmartPLS allows researchers to draw paths between constructs and to specify the indicators of each construct [19]. SmartPLS also helps ensure that the indicators used in the model are valid and reliable for further analysis [11]. SmartPLS explains causal effects and validates hypotheses and theory [20]. In addition, SmartPLS produces a path model that describes the relationships between constructs and indicators, which is a vital point for providing an understandable picture and supporting the presentation of results [21]. Finally, when the sample size is less than 250, SmartPLS provides more accurate and valid results than other methods of testing models or explaining causal effects [18].

Despite the diversity of studies related to PLS-SEM (e.g., [1, 2, 4, 7, 11, 14, 18,19,20, 22]), there is no study showing how to deal with the results of PLS-SEM in empirical studies. Therefore, this study aims to explain briefly how to deal with the results of PLS-SEM in order to facilitate the procedure for researchers. The study includes five sections: introduction, initial description of the data, measurement model assessment (outer model), structural model assessment (inner model), and conclusion. A hypothetical example is presented in order to clarify the tests systematically. The example consists of two variables: human resource management practices (career opportunities, compensation, human resource planning, performance appraisal, promotion, recruitment and selection, and training and development) as the independent variable and organizational performance as the dependent variable.

2 Initial Description of the Data

The initial description of the data is the first step in SmartPLS. It aims to give the researcher a detailed picture of how the respondents answered the indicators in the survey. SmartPLS provides values for each predefined indicator, including missing values, mean, median, minimum, maximum, standard deviation, excess kurtosis, and skewness. Missing data result from a respondent not answering one of the indicators. The mean is the sum of the values divided by the number of values. The median is the middle number in a list of values sorted in ascending or descending order, and can describe a data set better than the mean. The minimum is the smallest value in the data set, while the maximum is the largest value. The standard deviation indicates how far the values spread around the mean. Kurtosis and skewness are statistical measures used to test the normality of the data: kurtosis measures the peakedness of the distribution, while skewness measures its asymmetry. Table 1 shows how the descriptive analysis is reported in empirical studies. All constructs achieved zero missing values, means from 3.27 to 4.02, medians of 3 and 4, a minimum of 1, a maximum of 5, and standard deviations from 0.950 to 1.182.
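
For readers who want to reproduce this kind of table outside SmartPLS, the short pandas sketch below computes the same descriptive columns for a few hypothetical indicators (the indicator names CO1–CO3 and their values are invented for illustration, not taken from the example data):

```python
# A minimal sketch of the descriptive statistics SmartPLS reports,
# computed with pandas on hypothetical 5-point survey responses.
import pandas as pd

# Hypothetical responses to three indicators of one construct.
data = pd.DataFrame({
    "CO1": [4, 5, 3, 4, 2, 5, 4, 3],
    "CO2": [3, 4, 4, 5, 2, 4, 3, 3],
    "CO3": [4, 4, 3, 5, 1, 5, 4, 2],
})

summary = pd.DataFrame({
    "missing": data.isna().sum(),
    "mean": data.mean(),
    "median": data.median(),
    "min": data.min(),
    "max": data.max(),
    "std_dev": data.std(),               # sample standard deviation
    "excess_kurtosis": data.kurtosis(),  # 0 for a normal distribution
    "skewness": data.skew(),
})
print(summary.round(3))
```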

Table 1. Descriptive analysis

Table 2 shows how kurtosis and skewness for the normality test are reported in empirical studies. To ensure that the data have a normal distribution, the values of kurtosis and skewness should range between −3 and +3. In this example, the kurtosis values for all constructs range from −1.533 to −0.564, while the skewness values range from −0.943 to −0.123. As a result, the data in this example are normally distributed.
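
The ±3 rule of thumb can be checked directly from the skewness and excess kurtosis values. The following sketch, again on hypothetical indicator scores (OP1 and OP2 are invented names), flags any indicator that falls outside that range:

```python
# A small check of the +/-3 rule of thumb for skewness and excess kurtosis,
# using pandas on hypothetical indicator scores.
import pandas as pd

scores = pd.DataFrame({
    "OP1": [4, 3, 5, 4, 2, 4, 3, 5, 4, 3],
    "OP2": [3, 3, 4, 5, 2, 4, 4, 5, 3, 2],
})

normality = pd.DataFrame({
    "excess_kurtosis": scores.kurtosis(),
    "skewness": scores.skew(),
})
# True when both statistics lie within -3 .. +3 for that indicator.
normality["within_range"] = normality.abs().le(3).all(axis=1)
print(normality.round(3))
```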

Table 2. Kurtosis and skewness for the normality test

3 Measurement Model Assessment (Outer Model)

The measurement model assessment is the second step in SmartPLS. The measurement model is generally denoted as the outer model. It describes the relationship between a latent variable and its indicators, specifying how the observed indicators relate to the underlying construct. In this context, identifying suitable indicators is an important step in operationalizing such a construct. Moreover, the measurement model assessment aims to determine how well the indicators (items) load onto the theoretically defined constructs. Through the measurement model assessment, we can ensure that the survey items measure the constructs they were designed to measure, and thus that the survey instrument is valid and reliable.

In order to assess the measurement model, researchers should examine three main aspects: (1) internal consistency reliability, (2) convergent validity, and (3) discriminant validity. Internal consistency reliability includes two main tests, Cronbach's alpha (CA) and composite reliability (CR). CA provides an estimate of reliability based on the inter-correlations of the observed indicator variables, while CR indicates the degree to which the set of items consistently indicates the latent construct. Table 3 shows how internal consistency reliability (CA and CR) is reported in empirical studies. Generally, values between 0.70 and 0.95 for CA and CR are widely accepted. In our example, all constructs achieved values ranging between 0.856 and 0.943 for CA, and between 0.903 and 0.932 for CR. As a result, the model of this study has internal consistency reliability.
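
Both reliability measures can also be computed by hand. The numpy sketch below is a minimal illustration rather than the SmartPLS implementation: Cronbach's alpha is computed from raw item scores and composite reliability from standardized outer loadings, with both the item matrix and the loadings invented for the example:

```python
# Cronbach's alpha from raw item scores and composite reliability from
# standardized outer loadings; hypothetical data for one construct.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x indicators matrix for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """loadings: standardized outer loadings of one construct's indicators."""
    numerator = loadings.sum() ** 2
    error_var = (1 - loadings ** 2).sum()
    return numerator / (numerator + error_var)

items = np.array([[4, 5, 4], [3, 3, 4], [5, 4, 5], [2, 2, 3], [4, 4, 4]])
loadings = np.array([0.82, 0.78, 0.88])
print(round(cronbach_alpha(items), 3), round(composite_reliability(loadings), 3))
```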

Table 3. Internal consistency reliability

Convergent validity refers to the extent to which indicators developed to measure a particular construct actually measure that construct. Chin and Yao [23] indicated that “Convergent validity states that tests having the same or similar constructs should be highly correlated. Two methods are often applied to test convergent validity. One is to correlate the scores between two assessment tools or tools’ sub-domains that are considered to measure the same construct. In intelligence research, two intelligence tests are supposed to share some general parts of intelligence and at least be moderately correlated with each other. Then, moderate to high correlation shows evidence of convergent validity”. Convergent validity is assessed with two main tests, factor loading (FL) and Average Variance Extracted (AVE). Table 4 shows how convergent validity (FL and AVE) is reported in empirical studies. Generally, researchers should accept and retain indicators with loadings of 0.7 or more and drop indicators with loadings below 0.4. For indicators with loadings between 0.4 and 0.7, researchers should retain the indicator if the values of CA, CR, and AVE are above the suggested thresholds, and should delete it only when doing so raises CA, CR, and AVE above the suggested thresholds. For AVE, values above 0.5 are widely accepted. In our example, all indicators have loadings above 0.7, and all constructs achieved AVE values above 0.5.
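
A hedged numpy illustration of these convergent validity checks is given below: AVE is computed as the mean of the squared standardized loadings, and the loading thresholds above are applied as a simple retention rule (the indicator names and loading values are hypothetical):

```python
# AVE and the loading retention rule of thumb for one construct,
# using hypothetical standardized loadings.
import numpy as np

loadings = {"TR1": 0.84, "TR2": 0.76, "TR3": 0.58, "TR4": 0.35}

ave = np.mean([l ** 2 for l in loadings.values()])
print(f"AVE = {ave:.3f} (rule of thumb: > 0.50)")

for indicator, loading in loadings.items():
    if loading >= 0.7:
        decision = "retain"
    elif loading < 0.4:
        decision = "delete"
    else:
        decision = "retain only if CA, CR and AVE stay above their thresholds"
    print(f"{indicator}: loading = {loading:.2f} -> {decision}")
```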

Table 4. Convergent validity

The assessment of discriminant validity aims to ensure that each construct has stronger relationships with its own indicators than with other constructs. Criticisms of the Fornell-Larcker criterion in SmartPLS led to the development of the HTMT criterion, which addresses the limitations of the Fornell-Larcker criterion and is better able to detect a lack of discriminant validity. Henseler et al. [24] indicated that “The new HTMT criteria, which are based on a comparison of the heterotrait-heteromethod correlations and the monotrait-heteromethod correlations, identify a lack of discriminant validity effectively, as evidenced by their high sensitivity rates. The main difference between the HTMT criteria lies in their specificity. Of the three approaches, HTMT 0.85 is the most conservative criterion, as it achieves the lowest specificity rates of all the simulation conditions. This means that HTMT 0.85 can point to discriminant validity problems in research situations in which HTMT 0.90 and HTMT inference indicate that discriminant validity has been established”. Table 5 shows how discriminant validity based on the HTMT criterion is reported in empirical studies. Generally, HTMT values below 0.85 are widely accepted. In our example, all construct pairs achieved HTMT values ranging between 0.038 and 0.592, demonstrating discriminant validity.
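
The HTMT ratio itself can be computed from the indicator correlation matrix: the average heterotrait-heteromethod correlation is divided by the geometric mean of the average monotrait-heteromethod correlations of the two constructs. The sketch below illustrates this on simulated scores for two hypothetical constructs (HR1–HR3 and OP1–OP3 are invented indicator names); it is a simplified illustration, not the SmartPLS routine:

```python
# A simplified HTMT calculation for two constructs from an indicator
# correlation matrix (assumes positively keyed indicators).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
f1 = rng.normal(size=n)
f2 = 0.4 * f1 + rng.normal(size=n)
items = pd.DataFrame({
    "HR1": f1 + 0.5 * rng.normal(size=n),
    "HR2": f1 + 0.5 * rng.normal(size=n),
    "HR3": f1 + 0.5 * rng.normal(size=n),
    "OP1": f2 + 0.5 * rng.normal(size=n),
    "OP2": f2 + 0.5 * rng.normal(size=n),
    "OP3": f2 + 0.5 * rng.normal(size=n),
})
corr = items.corr()

def htmt(corr: pd.DataFrame, items_a: list, items_b: list) -> float:
    # Mean heterotrait-heteromethod correlation (between the two constructs).
    hetero = corr.loc[items_a, items_b].to_numpy().mean()
    # Mean monotrait-heteromethod correlation (within one construct,
    # off-diagonal entries only).
    def mono(items_x):
        block = corr.loc[items_x, items_x].to_numpy()
        return block[np.triu_indices_from(block, k=1)].mean()
    return hetero / np.sqrt(mono(items_a) * mono(items_b))

print(round(htmt(corr, ["HR1", "HR2", "HR3"], ["OP1", "OP2", "OP3"]), 3),
      "(rule of thumb: < 0.85)")
```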

Table 5. Discriminant validity

4 Structural Model Assessment (Inner Model)

The structural model assessment is the third step in SmartPLS. The structural model is denoted as the inner model and describes the relationships among the latent (unobserved) constructs. The structural model assessment aims to examine the predictive capabilities of the model and the relationships between constructs. In order to assess the structural model, researchers should examine four main aspects: the coefficient of determination (R2), cross-validated redundancy (Q2), effect sizes (f2), and path coefficients (hypotheses testing).

Table 6 shows how the structural model assessment (R2, Q2, f2, and path coefficients) is reported in empirical studies. Generally, R2 values of 0.75, 0.50, and 0.25 are considered substantial, moderate, and weak, respectively. In our example, organizational performance achieved an R2 of 0.626 (moderate). Moreover, Q2 values greater than zero are meaningful; in our example, organizational performance achieved a Q2 of 0.315 (meaningful). Regarding effect sizes, f2 values of 0.35, 0.15, and 0.02 are considered large, medium, and small, respectively. In our example, career opportunities achieved an f2 of 0.036 (small), compensation 0.021 (small), human resource planning 0.004 (no effect), performance appraisal 0.003 (no effect), promotion 0.035 (small), recruitment and selection 0.002 (no effect), and training and development 0.047 (small).
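
For illustration, the numpy sketch below approximates R2 and f2 using ordinary least squares on hypothetical construct scores. This is a rough proxy rather than the PLS algorithm itself, and Q2 is omitted because it requires the blindfolding procedure; f2 is obtained by comparing the R2 of the model with and without each predictor:

```python
# R-squared and Cohen's f-squared via an OLS proxy on hypothetical
# construct scores (not the full PLS estimation).
import numpy as np

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))  # three hypothetical predictor constructs
y = 0.4 * X[:, 0] + 0.2 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(size=n)

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_full = r_squared(X, y)
print(f"R2 = {r2_full:.3f}")  # 0.75 substantial / 0.50 moderate / 0.25 weak

for j in range(X.shape[1]):
    r2_excl = r_squared(np.delete(X, j, axis=1), y)
    f2 = (r2_full - r2_excl) / (1 - r2_full)
    print(f"predictor {j}: f2 = {f2:.3f}")  # 0.02 small / 0.15 medium / 0.35 large
```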

In order to examine the path coefficients (hypotheses testing), researchers should follow two steps. The first step is to ensure that the p-value of the direct or indirect effect is less than 0.05. The second step is to ensure that zero does not fall within the confidence interval (between the lower and upper levels). If a path coefficient (direct or indirect) meets both requirements, the corresponding hypothesis is considered statistically supported and acceptable. In our example, hypothesis 1 was supported because its p-value was less than 0.05 and zero did not fall within the confidence interval, whereas hypothesis 3 was not supported because its p-value was greater than 0.05 and zero fell within the confidence interval.
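
The sketch below mimics these two checks with a simple percentile bootstrap of a single standardized path coefficient on hypothetical construct scores; it illustrates the logic, not the SmartPLS bootstrapping routine:

```python
# Percentile bootstrap of one standardized path coefficient, with a
# two-tailed p-value and a 95% confidence interval checked against zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 250
x = rng.normal(size=n)              # e.g., a hypothetical HRM composite score
y = 0.35 * x + rng.normal(size=n)   # e.g., hypothetical organizational performance

def path_coefficient(x, y):
    # Standardized simple-regression slope equals the correlation.
    return np.corrcoef(x, y)[0, 1]

boot = np.empty(5000)
for b in range(boot.size):
    idx = rng.integers(0, n, size=n)          # resample respondents with replacement
    boot[b] = path_coefficient(x[idx], y[idx])

estimate = path_coefficient(x, y)
lower, upper = np.percentile(boot, [2.5, 97.5])
t_stat = estimate / boot.std(ddof=1)          # bootstrap standard error
p_value = 2 * stats.norm.sf(abs(t_stat))      # two-tailed, normal approximation

supported = (p_value < 0.05) and not (lower <= 0 <= upper)
print(f"beta = {estimate:.3f}, 95% CI = [{lower:.3f}, {upper:.3f}], "
      f"p = {p_value:.4f}, supported = {supported}")
```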

Table 6. Structural model assessment

5 Conclusion and Future Directions

Despite the diversity of studies related to PLS-SEM (e.g., [1, 2, 4, 7, 11, 14, 18,19,20, 22]), there is no study showing how to deal with the results of PLS-SEM in empirical studies. Therefore, this study aims to explain briefly how to deal with the results of PLS-SEM in order to facilitate the procedure for researchers. A hypothetical example was presented in order to clarify the tests systematically. The example consists of two variables: human resource management practices as the independent variable and organizational performance as the dependent variable.

The results of the analysis using SmartPLS involve three main steps: initial description of the data, measurement model assessment (outer model), and structural model assessment (inner model). The initial description of the data is the first step; it aims to give the researcher a detailed picture of how the respondents answered the indicators in the survey, and SmartPLS provides values for each predefined indicator including missing values, mean, median, minimum, maximum, standard deviation, excess kurtosis, and skewness. The measurement model assessment is the second step; it ensures that the survey items measure the constructs they were designed to measure and, thus, that the survey instrument is valid and reliable. To assess the measurement model, researchers should examine three main aspects: internal consistency reliability (CA and CR), convergent validity (FL and AVE), and discriminant validity based on the HTMT criterion. The structural model assessment is the third step; it aims to examine the predictive capabilities of the model and the relationships between constructs. To assess the structural model, researchers should examine four main aspects: the coefficient of determination (R2), cross-validated redundancy (Q2), effect sizes (f2), and path coefficients (hypotheses testing).

Despite the important results provided by this study, it has the following limitations. First, the study was limited to how to deal with the results of PLS-SEM in empirical studies and did not provide details on how to use the SmartPLS software. As a result, future studies may provide a detailed explanation of how to use SmartPLS. Second, this study focused on the field of management, so it is difficult to generalize the results to other fields. Future studies may address this limitation by explaining how to deal with the results of PLS-SEM in empirical studies in other fields such as curricula & teaching, linguistics, and special education.