LOBPCG is a sparse-matrix diagonalization method that can compute the minimal eigenvalue. Could this be a nice alternative to ARPACK for detecting negative modes? It's worth looking into, anyway.
I see conflicting statements on Wikipedia about the suitability of this method for our problem: (emphasis mine)
Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric positive definite generalized eigenvalue problem
A simple work-around is to negate the function, substituting -D^T(D x) for D^T(D x) and thus reversing the order of the eigenvalues, since LOBPCG does not care if the matrix of the eigenvalue problem is positive definite or not.
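SciPy ships an implementation (scipy.sparse.linalg.lobpcg), so the negation trick is easy to try. A minimal sketch on a diagonal test matrix whose smallest eigenvalue is a known "negative mode" (note: SciPy's largest=False flag would make the negation unnecessary, but the workaround from the quote looks like this):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

# Diagonal test matrix: eigenvalues are -2, 1, 2, ..., 99,
# so the smallest eigenvalue (-2.0) plays the role of a negative mode.
n = 100
d = np.arange(n, dtype=float)
d[0] = -2.0
A = diags([d], [0]).tocsr()

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 1))  # random starting block of one vector

# Negation workaround: the *largest* eigenvalue of -A is the
# negation of the *smallest* eigenvalue of A.
vals, vecs = lobpcg(-A, X, largest=True, tol=1e-8, maxiter=500)
print(-vals[0])  # ≈ -2.0
```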
(I think the first quote might just mean that B (the extra matrix in the generalized eigenvalue problem) is positive definite, not A...)
Some choice quotes (note: ARPACK uses something similar to the Lanczos method, with "implicit restarting" to reduce memory requirements):
In contrast to the Lanczos method, LOBPCG rarely exhibits asymptotic superlinear convergence in practice
LOBPCG allows warm starts and computes an approximation to the eigenvector on every iteration. It has no numerical stability issues similar to those of the Lanczos method.
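The warm-start property is easy to exercise with SciPy's implementation: the converged eigenvector block from one solve is simply fed back in as the initial guess for a nearby problem. A sketch using a diagonal matrix (chosen so the exact answers are known):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

n = 200
# Diagonal matrix with eigenvalues -1, 0, 1, ..., 198
A = diags([np.arange(n, dtype=float) - 1.0], [0]).tocsr()

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 1))

# Cold start from a random vector.
vals, vecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=500)

# Warm start: perturb A slightly and restart from the previous eigenvector,
# which should already be close to the new answer.
A2 = A + diags([1e-3 * rng.standard_normal(n)], [0]).tocsr()
vals2, vecs2 = lobpcg(A2, vecs, largest=False, tol=1e-8, maxiter=500)
```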
LOBPCG is reasonably easy to implement, so many implementations have appeared.
(Regarding that last quote: all it does is minimize the Rayleigh quotient (x^T A x) / (x^T x), with some special considerations for numerical stability.)
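To make that concrete, here is a toy single-vector sketch of the idea (assumptions mine: no preconditioner, no careful basis orthogonalization, dense NumPy arrays): each step does a Rayleigh-Ritz projection onto the span of the current iterate x, the residual r (which points along the Rayleigh-quotient gradient), and the previous search direction p.

```python
import numpy as np

def rayleigh(A, x):
    """Rayleigh quotient (x^T A x) / (x^T x)."""
    return (x @ A @ x) / (x @ x)

def lobpcg1(A, x, iters=100, tol=1e-12):
    """Toy single-vector LOBPCG-style iteration for the smallest eigenpair."""
    x = x / np.linalg.norm(x)
    p = None
    for _ in range(iters):
        rho = rayleigh(A, x)
        r = A @ x - rho * x              # residual = gradient direction of the quotient
        if np.linalg.norm(r) < tol:
            break
        cols = [x, r] if p is None else [x, r, p]
        S, _ = np.linalg.qr(np.column_stack(cols))  # orthonormal basis of the search space
        # Rayleigh-Ritz: small dense eigenproblem restricted to the subspace
        w, V = np.linalg.eigh(S.T @ A @ S)
        x_new = S @ V[:, 0]              # Ritz vector for the smallest Ritz value
        p = x_new - x * (x @ x_new)      # remember the component orthogonal to x
        x = x_new / np.linalg.norm(x_new)
    return rayleigh(A, x), x

# Quick check against a dense eigensolver on a random symmetric matrix.
B = np.random.default_rng(1).standard_normal((30, 30))
M = 0.5 * (B + B.T)
val, vec = lobpcg1(M, np.random.default_rng(2).standard_normal(30), iters=300)
```

Real implementations spend most of their effort on the "special considerations": keeping the [x, r, p] basis numerically well-conditioned and supporting blocks of vectors at once.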