# Variance of components

Nonlinear PCA works by minimizing the mean square error (MSE), which can be seen as minimizing the remaining (unexplained) variance and hence indirectly maximizing the variance explained by the components.

## How to get the variance that is explained by each component?

In classical linear PCA, the variance of each component is given by its eigenvalue. Since nonlinear PCA is not a simple linear rotation of the data, there are no eigenvectors and eigenvalues, so the variance explained by the first component(s) cannot be read off directly. Instead, in nonlinear PCA the explained variance is estimated from a reconstruction of the data.
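To see why the reconstruction-based estimate is a sensible replacement for eigenvalues, consider the linear case, where the two agree exactly: the variance of the data reconstructed from the first component alone equals the first eigenvalue. The NumPy sketch below (illustrative only, not part of the NLPCA toolbox; the toy data set is made up) demonstrates this:

```python
import numpy as np

# Toy data set following the toolbox convention: variables in rows,
# samples in columns. Scales are chosen so PC1 dominates.
rng = np.random.default_rng(0)
data = rng.standard_normal((3, 500)) * np.array([[3.0], [1.0], [0.3]])
data = data - data.mean(axis=1, keepdims=True)

total_variance = np.var(data, axis=1, ddof=1).sum()

# Linear PCA: eigendecomposition of the covariance matrix
evals, evecs = np.linalg.eigh(np.cov(data))
order = np.argsort(evals)[::-1]          # sort eigenvalues descending
evals, evecs = evals[order], evecs[:, order]

# Reconstruct the data from the first component only
scores = evecs.T @ data
recon_pc1 = evecs[:, :1] @ scores[:1, :]

# Variance of the reconstruction matches the first eigenvalue
eval_pc1_recon = np.var(recon_pc1, axis=1, ddof=1).sum()
perc_variance_pc1 = eval_pc1_recon / total_variance * 100
```

In nonlinear PCA the same reconstruction recipe still applies, even though no eigenvalue exists to compare against.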

The field `net.variance` contains the explained variance of each component, in percent:

```matlab
net.variance
% 92.67   2.09   0.43
```

## How to calculate the explained variance of a nonlinear principal component?

In NLPCA, the explained variance of a component is estimated as the variance of the data reconstructed from that component alone, normalized by the total variance of the original data.

```matlab
% total variance of the original data set
data = net.data_train_in;            % original data (variables in rows)
total_variance = sum(var(data'));    % or: sum(diag(cov(data')))

% variance of the data reconstructed from the first component (PC1) only
pc  = nlpca_get_components(net);
pcx = zeros(size(pc));
pcx(1,:) = pc(1,:);                  % keep only PC_1, set remaining PCs to zero
data_recon = nlpca_get_data(net, pcx);
eval_PC1 = sum(var(data_recon'))                   % estimated eigenvalue of PC1 (absolute variance)
perc_variance_PC1 = eval_PC1/total_variance*100    % explained variance in percent

% variance of the data reconstructed from the second component (PC2) only
pcx = zeros(size(pc));
pcx(2,:) = pc(2,:);                  % keep only PC_2, set remaining PCs to zero
data_recon = nlpca_get_data(net, pcx);
eval_PC2 = sum(var(data_recon'))                   % estimated eigenvalue of PC2 (absolute variance)
perc_variance_PC2 = eval_PC2/total_variance*100    % explained variance in percent
```
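The per-component snippets above repeat the same pattern, so for more components it is convenient to loop. Below is an illustrative NumPy sketch of that loop; the function name and the `reconstruct` callback are my own placeholders, with `reconstruct` standing in for the role played by `nlpca_get_data(net, pcx)` in the toolbox:

```python
import numpy as np

def explained_variance_per_component(pc, reconstruct, total_variance):
    """Explained variance (in percent) of each component, estimated by
    reconstructing the data from one component at a time.

    pc          -- component scores, one component per row
    reconstruct -- maps scores back to data space (the role played by
                   nlpca_get_data(net, pcx) in the toolbox)
    """
    percentages = []
    for i in range(pc.shape[0]):
        pcx = np.zeros_like(pc)
        pcx[i, :] = pc[i, :]                 # keep only component i
        recon = reconstruct(pcx)
        percentages.append(np.var(recon, axis=1, ddof=1).sum()
                           / total_variance * 100)
    return percentages
```

For a linear model the percentages of all components sum to exactly 100; for a nonlinear model they need not, since the nonlinear reconstructions are not additive.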