Nonlinear PCA works by minimizing the mean square error (MSE), which can be seen as minimizing the remaining variance and hence indirectly maximizing the variance explained (covered) by the components. In linear PCA, the variance of each component is given by its eigenvalue. Since nonlinear PCA is not a simple linear rotation, there are no eigenvectors and eigenvalues, so the variance explained by the first component(s) cannot be obtained directly. Instead, in nonlinear PCA the explained variance can be estimated from the data reconstruction.
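The linear case can be verified numerically: projecting centered data onto the eigenvectors of its covariance matrix yields components whose variances equal the eigenvalues. A minimal numpy sketch (the toy data matrix is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated toy data: 500 samples x 3 variables (illustrative only)
data = rng.standard_normal((500, 3)) @ np.array([[2.0, 0.5, 0.1],
                                                 [0.0, 1.0, 0.3],
                                                 [0.0, 0.0, 0.2]])
data -= data.mean(axis=0)          # center the data

# eigendecomposition of the covariance matrix
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort descending

# component scores: their variances equal the eigenvalues
scores = data @ eigvecs
assert np.allclose(scores.var(axis=0, ddof=1), eigvals)
```

This identity is exactly what is lost in nonlinear PCA, which is why the reconstruction-based estimate described below is needed.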
'net.variance' contains the explained variance of each component, in percent:
92.67 2.09 0.43
In NLPCA, the explained variance of a component is estimated as the variance of the data reconstructed from that component alone, normalized by the total variance of the original data.
% get total variance of a data set
total_variance = sum(var(data'));
% variance of reconstructed data using only the first component PC1
pc = nlpca_get_components(net, data);  pc(2:end,:) = 0;  % keep only PC1, zero the rest
explained_pc1 = 100 * sum(var(nlpca_get_data(net, pc)')) / total_variance;
% variance of reconstructed data using only the second component PC2: analogous, zeroing all rows of pc except row 2
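The same reconstruction-based estimate can be sketched in numpy. For checkability this sketch uses linear PCA (via SVD) in place of the trained network, so the per-component percentages must sum to exactly 100; with a nonlinear model, projection and reconstruction would go through the network instead, and the percentages need not sum to 100. The helper name `explained_variance` and the toy data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal((400, 3)) @ np.array([[3.0, 1.0, 0.2],
                                                 [0.0, 1.0, 0.4],
                                                 [0.0, 0.0, 0.3]])
data -= data.mean(axis=0)

# total variance of the data set
total_variance = data.var(axis=0, ddof=1).sum()

# linear PCA as a stand-in for the trained network
_, _, Vt = np.linalg.svd(data, full_matrices=False)
pc = data @ Vt.T                      # all component scores

def explained_variance(k):
    """Variance of the data reconstructed from component k alone, in percent."""
    pck = np.zeros_like(pc)
    pck[:, k] = pc[:, k]              # keep only component k
    rec = pck @ Vt                    # reconstruct the data from that component
    return 100.0 * rec.var(axis=0, ddof=1).sum() / total_variance

print([round(explained_variance(k), 2) for k in range(3)])
```

Sorting the components by this estimate reproduces the usual "largest first" ordering; in the linear case the three percentages add up to 100.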