Nonlinear PCA works by minimizing the mean square error (MSE), which can be seen as minimizing the remaining variance and hence indirectly maximizing the variance explained by the components.
How to get the variance that is explained by each component?
In classical linear PCA, the variance of the components is given by their eigenvalues. Since nonlinear PCA is not a simple linear rotation, there are no eigenvectors and eigenvalues, so the variance explained by the first component(s) cannot be obtained directly. Instead, in nonlinear PCA the explained variance can be estimated from a reconstruction of the data.
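For contrast, the linear case can be sketched in a few lines of NumPy: there the explained variance of each component comes directly from the eigenvalues of the covariance matrix (the toy data and variable names below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: 3 variables x 500 samples (variables in rows, as in the MATLAB code below)
data = rng.standard_normal((3, 500)) * np.array([[3.0], [1.0], [0.3]])

# eigenvalues of the covariance matrix = variances of the linear components
eigvals = np.linalg.eigvalsh(np.cov(data))[::-1]   # eigvalsh is ascending, so reverse

total_variance = np.var(data, axis=1, ddof=1).sum()
explained_pct = eigvals / total_variance * 100     # percentages, summing to 100
print(explained_pct)
```

Because the sum of the eigenvalues equals the trace of the covariance matrix (the total variance), these percentages always add up to 100 for a full linear PCA. It is exactly this shortcut that is unavailable in the nonlinear case.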
'net.variance' contains the explained variance (in percent) of each component, e.g.:
92.67 2.09 0.43
How to calculate the explained variance of a nonlinear principal component?
In NLPCA, the explained variance of a component is estimated as the variance of the data reconstructed from that component alone, normalized by the total variance.
% get total variance of the original data set
data = net.data_train_in;             % original data set (variables x samples)
total_variance = sum(var(data'));     % or: sum(diag(cov(data')))

% get the nonlinear components of the data
pc = nlpca_get_components(net, data);

% variance of reconstructed data using only the first component PC_1
pcx = zeros(size(pc));                % set all PC's to zero
pcx(1,:) = pc(1,:);                   % keep only PC_1
data_recon = nlpca_get_data(net, pcx);              % reconstruct from PC_1 alone
eval_PC1 = sum(var(data_recon'))      % estimated eigenvalue of PC_1 (absolute variance)
perc_variance_PC1 = eval_PC1 / total_variance * 100 % variance in percent

% variance of reconstructed data using only the second component PC_2
pcx = zeros(size(pc));                % reset: set all PC's to zero again
pcx(2,:) = pc(2,:);                   % keep only PC_2
data_recon = nlpca_get_data(net, pcx);              % reconstruct from PC_2 alone
eval_PC2 = sum(var(data_recon'))      % estimated eigenvalue of PC_2 (absolute variance)
perc_variance_PC2 = eval_PC2 / total_variance * 100 % variance in percent
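The reconstruction-based estimate can also be sketched in NumPy. In the sketch below a plain linear PCA stands in for the network's reconstruction (so `eigvecs @ pcx` replaces the toolbox's reconstruction step), which has the advantage that the result can be checked against the eigenvalues; all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal((3, 1000)) * np.array([[2.5], [1.0], [0.4]])
data = data - data.mean(axis=1, keepdims=True)   # center the data

total_variance = np.var(data, axis=1, ddof=1).sum()

# linear "reconstruction model": project onto eigenvectors of the covariance
eigvals, eigvecs = np.linalg.eigh(np.cov(data))
order = np.argsort(eigvals)[::-1]                # sort components, largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
pc = eigvecs.T @ data                            # component scores (components x samples)

for k in range(2):                               # PC1 and PC2
    pcx = np.zeros_like(pc)
    pcx[k, :] = pc[k, :]                         # keep only component k, zero the rest
    data_recon = eigvecs @ pcx                   # reconstruct from that component alone
    eval_k = np.var(data_recon, axis=1, ddof=1).sum()
    print(f"PC{k + 1}: {eval_k / total_variance * 100:.2f} %")
```

For a linear model, `eval_k` equals the k-th eigenvalue exactly; for nonlinear PCA the same recipe still applies, but the per-component percentages need not sum to 100.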