
In this section, we present a detailed introduction to the preliminary feature selection method RF-RFE (random forest-based recursive feature elimination) and the classification models used to predict passenger satisfaction in this study.

Recursive feature elimination based on random forest

RF

The RF proposed by Breiman29 is a parallel ensemble algorithm based on decision trees. Because of its relatively good accuracy, robustness and ease of use, it has become one of the most popular machine learning methods. A single decision tree can change completely with small changes in the data, so it is not sufficiently stable. RF reduces the variance introduced by a single decision tree, improves the prediction performance of the ordinary decision tree, and provides an importance measure for the variables, which brings a substantial improvement over the decision tree model.

RF uses decision trees as base learners to construct a bagging ensemble. Bagging is a parallel ensemble learning algorithm based on bootstrap sampling. Each bootstrap sample set is used to train a base learner, and these base learners are then combined. When combining the prediction outputs, simple majority voting is usually used for classification tasks.

Let the training set be \(D=\left\{\left(x_1,y_1\right),\left(x_2,y_2\right),\dots ,\left(x_n,y_n\right)\right\}\); the prediction result for a new sample is given by Eq. (1):

$$f\left( x \right) = \mathop{\mathrm{argmax}}\limits_{y \in \mathcal{Y}} \sum\limits_{t = 1}^{T} \mathbb{I}\left( h_t\left( x \right) = y \right)$$

(1)

where \(\mathcal{Y}\) is the set of output categories, T is the number of base learners, \(h_t(x)\) is the prediction of the t-th learner for the new sample x, and \(\mathbb{I}(\cdot)\) is the indicator function.

RF introduces random attribute selection on top of the bagging ensemble. Unlike a single decision tree, which selects the optimal attribute from all attributes when splitting a node, RF first randomly selects a subset of attributes at each node and then selects the optimal attribute from that subset. Therefore, in addition to the sample perturbation introduced by bagging, RF further introduces attribute perturbation, which improves the generalization performance of the ensemble. The algorithm description of RF is shown in Table 1.
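As an illustration of this procedure, the following is a minimal sketch using scikit-learn's RandomForestClassifier, which implements bagging of decision trees with a random attribute subset at each split and combines the trees by (soft) majority voting. The toy data and parameter values (100 trees, square-root feature subset) are placeholders, not the settings used in this study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data standing in for the passenger satisfaction survey features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging of decision trees with a random attribute subset at every split
# (max_features="sqrt"); predictions are combined by (soft) majority voting.
rf = RandomForestClassifier(
    n_estimators=100,      # number of base decision trees T
    max_features="sqrt",   # size of the random attribute subset per node
    bootstrap=True,        # bootstrap sampling of the training set
    random_state=0,
)
rf.fit(X_train, y_train)
print("test accuracy:", rf.score(X_test, y_test))
```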

Importance of RF characteristics

The importance measures based on RF include the mean decrease in impurity (MDI) based on the Gini index and the mean decrease in accuracy (MDA) based on out-of-bag (OOB) data30. These measures reflect how much each attribute contributes across the decision trees of the RF. This paper uses the MDI method based on the Gini index to measure feature importance.

When constructing a CART decision tree, RF computes the Gini gain of every candidate attribute at a node and takes the attribute with the largest Gini gain as the splitting attribute. The Gini index represents the probability that a randomly selected sample from the sample set is misclassified. Let \(p_k\) be the proportion of class k samples; the Gini index is calculated as Eq. (2):

$$ Gini\left( p \right) = \sum\limits_{k = 1}^{K} p_k \left( 1 - p_k \right) = 1 - \sum\limits_{k = 1}^{K} p_k^2 $$

(2)

The Gini gain obtained by dividing the data set D according to attribute a is given by Eq. (3):

$$ Gini\left( D,a \right) = Gini\left( D \right) - \sum\limits_{v = 1}^{V} \frac{\left| D^v \right|}{\left| D \right|}\, Gini\left( D^v \right) $$

(3)

where V is the number of distinct values of attribute a and \(\left|D^v\right|\) is the number of samples in D that take the v-th value of attribute a.
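To make Eqs. (2) and (3) concrete, here is a small illustrative sketch (the function and variable names are our own, not from the original study) that computes the Gini index of a label array and the Gini gain of splitting on a categorical attribute.

```python
import numpy as np

def gini_index(labels: np.ndarray) -> float:
    """Eq. (2): 1 - sum_k p_k^2 over the class proportions p_k."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_gain(attribute: np.ndarray, labels: np.ndarray) -> float:
    """Eq. (3): Gini(D) minus the size-weighted Gini of each subset D^v."""
    gain = gini_index(labels)
    for v in np.unique(attribute):
        mask = attribute == v
        gain -= mask.mean() * gini_index(labels[mask])
    return gain

# Example: an attribute that separates the classes perfectly has maximal gain.
y = np.array([0, 0, 1, 1])
a = np.array(["low", "low", "high", "high"])
print(gini_index(y), gini_gain(a, y))   # 0.5, 0.5
```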

The specific steps for calculating feature importance are as follows (a brief code sketch follows the list):

  (1) For each decision tree, let A be the set of nodes where feature \(\alpha\) appears; the change in the Gini index before and after the branching at node i is calculated as Eq. (4):

    $$ \Delta Gini_i = Gini\left( i \right) - Gini\left( l \right) - Gini\left( k \right) $$

    (4)

    where \(Gini\left(l\right)\) and \(Gini\left(k\right)\) are the Gini indices of the two new nodes obtained after branching.

  (2) The importance of feature \(\alpha\) in this tree is given by Eq. (5):

    $$ IM_{\alpha} = \sum\limits_{i \in A} \Delta Gini_i $$

    (5)

    where the sum runs over all nodes i in A at which feature \(\alpha\) appears.

  (3) Suppose N is the number of decision trees in the forest, and let \(IM_{\alpha}^{(t)}\) denote the importance of feature \(\alpha\) in the t-th tree; the importance of feature \(\alpha\) over the whole forest is given by Eq. (6):

    $$ IMPORTANCE\left( \alpha \right) = \sum\limits_{t = 1}^{N} IM_{\alpha}^{(t)} $$

    (6)

    Then, normalize the importance of all features in Eq. (7):

    $$ IM\left( \alpha \right) = \frac{IMPORTANCE\left( \alpha \right)}{\sum\nolimits_{i = 1}^{c} IMPORTANCE\left( i \right)} $$

    (7)

    where c is the number of features.

  (4) The larger the value of \(IM\left(\alpha\right)\), the more important feature \(\alpha\) is to the prediction of the result.
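As referenced above, the following is a minimal sketch of obtaining the normalized MDI importance values from a fitted random forest. In scikit-learn, `feature_importances_` is already computed from the mean impurity decrease and normalized to sum to 1, so the explicit normalization shown here only mirrors Eq. (7); the data are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Mean decrease in impurity (Gini) accumulated over all trees, Eqs. (4)-(6).
raw_importance = rf.feature_importances_

# Normalization over all features, Eq. (7) (a no-op here, since scikit-learn
# already returns normalized values).
im = raw_importance / raw_importance.sum()

# Rank features from most to least important.
for idx in np.argsort(im)[::-1]:
    print(f"feature {idx}: {im[idx]:.3f}")
```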

Recursive feature elimination based on RF

RF-RFE uses RF as the external learning algorithm for feature selection. In each round, it calculates the importance of the features in the current feature subset and removes the feature with the lowest importance, recursively reducing the size of the feature set; the feature importance values are updated in every round of model training. Among the resulting feature subsets, this study uses cross-validation to select the subset with the highest average classification accuracy. The algorithm flow chart is shown in Fig. 1.

Figure 1. Flow chart of the RF-RFE algorithm.

The RF-RFE flow is as follows (a brief implementation sketch follows the list):

  (1) Bootstrap sampling is carried out from the training set T containing all samples to obtain a training sample set \(T^*\) with a sample size of n. A decision tree is built from \(T^*\), and a total of b decision trees are generated by repeating this process;

  (2) The predictions of the decision trees are combined by voting, and the performance of the RF model is evaluated in terms of classification accuracy using fivefold cross-validation;

  (3) The MDI-based importance \(IM\left(\alpha\right)\) of each feature \(\alpha\) in the current feature set is calculated and sorted;

  (4) Following sequential backward selection, the feature with the lowest importance is deleted, and steps (1)–(3) are repeated on the remaining feature subset until the subset is empty. The feature subset with the highest cross-validated classification accuracy is then selected.
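A minimal sketch of this procedure using scikit-learn's RFECV with an RF estimator is shown below: it removes one feature per iteration and scores each subset by fivefold cross-validated accuracy. The data and parameter values are placeholders rather than the settings used in this study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

X, y = make_classification(n_samples=1000, n_features=25,
                           n_informative=8, random_state=0)

# RF supplies the MDI feature importances used to rank features;
# RFECV drops the least important feature each round and keeps the
# subset with the best mean fivefold cross-validated accuracy.
selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=100, random_state=0),
    step=1,                 # remove one feature per iteration
    cv=5,                   # fivefold cross-validation
    scoring="accuracy",
)
selector.fit(X, y)

print("optimal number of features:", selector.n_features_)
print("selected feature mask:", selector.support_)
```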

Satisfaction prediction based on machine learning algorithm

Machine learning can generally be divided into supervised learning and unsupervised learning according to whether the processed data are manually labeled. A supervised learning data set includes the initial training data and manually labeled targets. The algorithm learns from the labeled training data and tries to find the pattern that separates the objects, with the labels as the learning target. The learning effect is generally good, but labeled data are costly to obtain. Unsupervised learning processes unclassified, unlabeled sample data without prior training, aiming to discover the internal rules of the data and obtain the structural characteristics of the samples, but its learning efficiency is often low. In this study, the satisfaction status is the data set label: during training, the supervised machine learning algorithms learn the relationship between the features and the label and apply this relationship to the test set for prediction.

k-nearest neighbors (KNN)

KNN is a supervised learning algorithm. Because its training time overhead is zero, it is representative of "lazy learning"31. K-nearest neighbors has long been used as a nonparametric technique in statistical estimation and pattern recognition. The working principle is as follows: for a given new sample, find the K samples in the training set that are closest to it under a chosen distance measure, and assign the new sample to the category that occurs most frequently among those K samples. Because the samples are not processed in the training stage, the method is called "lazy learning". As shown in Fig. 2, if the nearest neighbors of a data point include 3 squares, 2 circles and 1 triangle, the data point is classified as a square. The parameter K in KNN is the number of nearest neighbors used in the majority vote.

Figure 2. Illustration of KNN classification.
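A minimal KNN sketch using scikit-learn follows; because distances are scale-sensitive, the features are standardized first. The number of neighbors K and the toy data are placeholders, not the configuration used in this study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize, then classify each test sample by the majority category
# among its K=5 nearest training samples (K is a placeholder value).
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```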

LR

LR is used to evaluate the relationship between a dependent variable and one or more independent variables, and the classification probability is obtained with a logistic function32. It is a learning algorithm with the logistic function at its core: the logistic function compresses the output of a linear equation into (0, 1) and is defined as Eq. (8):

$$ Logistic\left( z \right) = \frac{1}{1 + e^{-z}} $$

(8)

Consider the binary classification problem with the data set \(D=\left\{\left(x_1,y_1\right),\left(x_2,y_2\right),\dots ,\left(x_N,y_N\right)\right\}\), where \(x_i\in R^n\), \(y_i\in \{0,1\}\), \(i=1,2,\cdots ,N\).

Let p be the probability that a sample is a positive example. LR estimates the coefficients \(\beta _0,\beta _1,\cdots ,\beta _k\) of the following equations by the maximum likelihood method [Eqs. (9) and (10)]:

$$ logit\left( p \right) = \log \left( \frac{p}{1 - p} \right) = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k $$

(9)

$$ p = \frac{\exp \left( \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k \right)}{1 + \exp \left( \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k \right)} $$

(10)

When p is greater than a preset threshold, the sample is classified as a positive example; otherwise, it is classified as a negative example.

\(\frac{p}{1-p}\) is called the odds, which is the ratio of the probability that the event occurs to the probability that it does not occur. The logarithm of the odds is linear in the coefficients of the variables. When the features have been standardized, the greater the absolute value of a coefficient, the more important the corresponding feature. If the coefficient is positive, the feature is positively correlated with the probability that the target value is 1; if the coefficient is negative, the feature is positively correlated with the probability that the target value is 0.
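Below is a minimal logistic regression sketch in scikit-learn; after standardization, the sign and magnitude of the fitted coefficients can be read as described above. The data are an illustrative placeholder.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_std = StandardScaler().fit_transform(X)   # standardize so coefficients are comparable

lr = LogisticRegression().fit(X_std, y)

# Probability p that each sample is a positive example, Eq. (10);
# class 1 is predicted when p exceeds the default 0.5 threshold.
p = lr.predict_proba(X_std)[:, 1]

# Sign gives the direction of association; magnitude gives relative importance.
for i, coef in enumerate(lr.coef_[0]):
    print(f"feature {i}: beta = {coef:+.3f}")
```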

Gaussian Naive Bayes (GNB)

Naive Bayes (NB) is a simple and direct supervised machine learning algorithm33. The NB classifier is based on Bayes' theorem and predicts future outcomes from previous experience. NB assumes that the input variables are conditionally independent given the class [Eq. (11)]:

$$ P\left( Y = y_k \mid X_1, \ldots, X_n \right) = \frac{P\left( Y = y_k \right)P\left( X_1, \ldots, X_n \mid Y = y_k \right)}{\sum\nolimits_j P\left( Y = y_j \right)P\left( X_1, \ldots, X_n \mid Y = y_j \right)} = \frac{P\left( Y = y_k \right)\prod\nolimits_i P\left( X_i \mid Y = y_k \right)}{\sum\nolimits_j P\left( Y = y_j \right)\prod\nolimits_i P\left( X_i \mid Y = y_j \right)} $$

(11)

where X is the input vector \((X_1,X_2,\dots ,X_n)\) and Y is the output category.

On the basis of NB, GNB further assumes that, conditional on the class, each feature follows a Gaussian distribution; that is, the probability density function is given by Eq. (12):

$$ P\left( X_i = x \mid Y = y_k \right) = \frac{1}{\sqrt{2\pi \delta_{ik}^2}}\, e^{-\frac{1}{2}\left( \frac{x - \mu_{ik}}{\delta_{ik}} \right)^2} $$

(12)

where \(\mu_{ik}\) and \(\delta_{ik}\) are the mean and standard deviation of feature \(X_i\) in class \(y_k\). For a given test sample \(X=(X_1,X_2,\dots ,X_n)\), calculate [Eq. (13)]:

$$ P\left( Y = y_k \right)\prod\limits_i P\left( X_i \mid Y = y_k \right),\quad k = 1,2, \ldots ,K $$

(13)

The class y of the sample is then determined by Eq. (14):

$$ y = \mathop{\mathrm{argmax}}\limits_{y_k} P\left( Y = y_k \right)\prod\limits_i P\left( X_i \mid Y = y_k \right) $$

(14)
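A minimal Gaussian naive Bayes sketch with scikit-learn follows; GaussianNB estimates the per-class mean and variance of each feature and classifies by Eq. (14). The data are a placeholder.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gnb = GaussianNB().fit(X_train, y_train)

# predict() returns argmax_k P(Y=y_k) * prod_i P(X_i | Y=y_k), Eq. (14);
# predict_proba() exposes the normalized posterior from Eq. (11).
print("test accuracy:", gnb.score(X_test, y_test))
print("posterior for first test sample:", gnb.predict_proba(X_test[:1]))
```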

RF

The working principle of RF34 is to combine the results of the individual decision trees, as shown in Fig. 3. This strategy has better estimation performance than a single random tree: each decision tree has low bias but high variance, and aggregation achieves a trade-off between overall bias and variance while providing the importance of the predictor variables for predicting the outcome variable. RF has good prediction performance in practical applications and can be used to address multiclass classification problems, categorical variables and sample imbalance.

Figure 3. Schematic diagram of the RF model.

Backpropagation neural network (BPNN)

BPNN is one of the most widely used neural network models and is trained with the classical error backpropagation algorithm35. Since the emergence of BPNNs, much research has been done on the selection of activation functions, the design of structural parameters and the improvement of network defects. The main idea of the BP algorithm is to divide the learning process into two stages: forward propagation and reverse feedback. In the forward propagation stage, the input sample passes from the input layer through the hidden layer to the output layer, where the output signal is formed. In the reverse feedback stage, error signals that do not meet the precision requirement are propagated backward layer by layer, and the weight matrices between neurons are corrected iteratively. Learning stops when the iteration termination condition is met.

  (1) Forward propagation

First, let X be the input vector of the sample, T the corresponding target output vector, m the number of neurons in the input layer, and p the number of nodes in the output layer:

$$ \begin{aligned} X & = \left( x_1, \ldots, x_m \right) \\ T & = \left( T_1, \ldots, T_p \right) \end{aligned} $$

In forward propagation, the input of node j is calculated as Eq. (15):

$$ I_j = \sum\limits_{i = 1}^{m} w_{ij} x_i + \theta_j $$

(15)

where j denotes a node of the hidden layer, \(w_{ij}\) is the weight between input-layer node i and hidden-layer node j, \(\theta_j\) is the threshold of node j, and the output value of node j is given by Eq. (16):

$$ O_j = f\left( I_j \right) $$

(16)

where f is the activation function, which transforms the node input; it can be linear or nonlinear.

  (2) Reverse feedback

Calculate the error between the true value of the sample and its output value. For a binary classification problem, two neurons are often used as the output layer; if the output value of the first output neuron is greater than that of the second, the sample is considered to belong to the first category. The error of output node i is given by Eq. (17):

$$ E_i = O_i \left( 1 - O_i \right)\left( T_i - O_i \right) $$

(17)

The error of a node in the hidden layer is accumulated from the errors of the nodes in the next layer, weighted by the connecting weights (Eq. (18)):

$$ E_j = O_j \left( 1 - O_j \right)\sum\limits_{k} E_k W_{jk} $$

(18)

where \(E_k\) is the error of the k-th node of the next layer and \(W_{jk}\) is the weight from the j-th node of the current layer to the k-th node of the next layer.

Update the weights and thresholds, respectively (Eq. (19)):

$$ \begin{aligned} W_{ij} & = W_{ij} + \Delta W_{ij} = W_{ij} + \lambda E_j O_i \\ \theta_j & = \theta_j + \Delta \theta_j = \theta_j + \lambda E_j \end{aligned} $$

(19)

where λ is the learning rate, with a value between 0 and 1. Training stops when a specified number of iterations is reached or the accuracy exceeds a preset value.
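To tie Eqs. (15)–(19) together, the following is a minimal NumPy sketch of one BPNN training step for binary classification with a single hidden layer and sigmoid activations; the layer sizes, learning rate and data are illustrative placeholders, not the configuration used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Network sizes: m input units, h hidden units, p output units (two for binary).
m, h, p = 4, 5, 2
W1, theta1 = rng.normal(size=(m, h)) * 0.1, np.zeros(h)   # input -> hidden
W2, theta2 = rng.normal(size=(h, p)) * 0.1, np.zeros(p)   # hidden -> output
lam = 0.5                                                  # learning rate λ in (0, 1)

def train_step(x, t):
    """One forward/backward pass for a single sample x with target vector t."""
    global W1, theta1, W2, theta2
    # Forward propagation, Eqs. (15)-(16).
    O_hidden = sigmoid(x @ W1 + theta1)
    O_out = sigmoid(O_hidden @ W2 + theta2)
    # Output-layer error, Eq. (17).
    E_out = O_out * (1 - O_out) * (t - O_out)
    # Hidden-layer error, Eq. (18): weighted accumulation of next-layer errors.
    E_hidden = O_hidden * (1 - O_hidden) * (W2 @ E_out)
    # Weight and threshold updates, Eq. (19).
    W2 += lam * np.outer(O_hidden, E_out)
    theta2 += lam * E_out
    W1 += lam * np.outer(x, E_hidden)
    theta1 += lam * E_hidden
    return O_out

# Toy sample: the class is encoded as a two-unit target vector (first unit "on").
x, t = rng.normal(size=m), np.array([1.0, 0.0])
for _ in range(100):
    out = train_step(x, t)
print("output after training:", out)   # the first output unit should dominate
```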