To avoid bad random initializations, NLPCA can be started from the linear PCA solution, so it only needs to optimize the transformation from a perfect linear solution into a good curved nonlinear solution. This PCA pre-processing can be done by adding the option:
"I ran the same data twice and obtained different results; the result changes from run to run with the same data. Why does NLPCA produce different results for the same data?"
The weights of the network are initialized with random values; depending on this random 'start position', NLPCA can arrive at different results in each run. If there is one clearly best solution, NLPCA finds it in most cases and only sometimes fails. But if there are multiple good solutions, NLPCA sometimes finds one and sometimes another. In the case of a very bad start, NLPCA can also become trapped in a so-called 'local minimum' of the optimization process.
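This behavior is not specific to NLPCA: any gradient-based optimization of a multi-modal cost function depends on its random start. A minimal sketch of the idea (plain NumPy/SciPy on a toy cost function, not the NLPCA toolbox itself) shows that different random starts end in different local minima, and that keeping the best of several restarts reduces the risk of a bad one:

```python
import numpy as np
from scipy.optimize import minimize

# A multi-modal toy cost surface: gradient-based minimization from
# different random starts can end in different local minima, just as
# a neural network's training depends on its random weight initialization.
def cost(w):
    return np.sin(3 * w[0]) + 0.1 * w[0] ** 2

rng = np.random.default_rng(0)
results = []
for _ in range(10):
    w0 = rng.uniform(-5, 5, size=1)  # random 'start position'
    results.append(minimize(cost, w0))

# Different restarts may report different minima; keeping the best
# of several runs guards against a single unlucky initialization.
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)
```

Running NLPCA several times and keeping the result with the lowest error follows the same logic, and starting from the linear PCA solution avoids the worst random starts altogether.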