Thank you for sharing your deep knowledge - very helpful.
I quickly checked the source code of SDPA-GMP.
Indeed, there are many weak points. It uses MPACK, a very buggy and unstable conversion of some LAPACK functions to arbitrary precision. I tested it myself a few years ago - it was a nightmare. For example, half of the indices in the SVD code were hand-translated from FORTRAN to 0-based indexing, while the other half still used FORTRAN's 1-based indexing. I am not sure how people use it at all. I ended up writing my own MP-LAPACK with parallelism - it is part of the toolbox now.
Another issue is that SDPA-GMP relies on SPOOLES for sparse matrix factorization. This is suboptimal; something like UMFPACK/CHOLMOD would be a much better alternative. I would also use iterative refinement to make sure the matrix solution has relative accuracy comparable to eps.
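To make the iterative-refinement point concrete, here is a minimal sketch (in Python rather than the solver's C++, with exact rational arithmetic via `fractions` standing in for extended-precision residual accumulation; the 4x4 Hilbert test matrix and all names here are my own for the illustration):

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting.
    Purely elementwise, so it runs unchanged on float or Fraction entries."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # partial pivoting
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = b[:]
    for i in reversed(range(n)):
        for j in range(i + 1, n):
            x[i] -= A[i][j] * x[j]
        x[i] /= A[i][i]
    return x

# Mildly ill-conditioned test system: 4x4 Hilbert matrix, cond ~ 1.5e4.
n = 4
H = [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]
b = [1.0] * n

x = gauss_solve(H, b)  # plain working-precision (double) solve

# Reference: exact rational solution of the *stored* floating-point system.
x_ref = gauss_solve([[Fraction(v) for v in row] for row in H],
                    [Fraction(v) for v in b])
err = lambda x: max(abs(Fraction(xi) - ri) for xi, ri in zip(x, x_ref))
e0 = err(x)

# Iterative refinement: residual computed in higher (here: exact) precision,
# correction solved in working precision, then the solution is updated.
for _ in range(3):
    r = [float(Fraction(b[i]) - sum(Fraction(H[i][j]) * Fraction(x[j])
                                    for j in range(n))) for i in range(n)]
    d = gauss_solve(H, r)
    x = [xi + di for xi, di in zip(x, d)]
e1 = err(x)  # error drops from roughly cond*eps down to the eps level
```

With residuals accumulated above the working precision, the correction loop drives the error from the cond(A)-amplified level down to rounding level; a real solver would typically accumulate the residual at twice the working precision rather than exactly.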
As for the algorithm itself, I am not sure why the code relies on the Cholesky decomposition, which is unstable in the presence of ill-conditioning: it already breaks down when cond(A) ~ 1/sqrt(eps). A rank-revealing QR (or at least QR with decent column pivoting) would be much more stable; it can handle ill-conditioning up to cond(A) ~ 1/eps. If our goal is accuracy, then the algorithm must seek the most reliable way to handle ill-conditioning, but I don't see this in SDPA-GMP.
QUADMINOS looks interesting, but I am not sure if I can access the source code.
Thanks for sharing all the links - I will study them (I wasn't aware of the SR1 option in KNITRO).
Which solvers would you suggest supporting in arbitrary precision (by priority)?
Dear Mark,
Thank you very much for your suggestion and for the summary on optimization solvers. I will use it as a reference.
In order to support multiple precision, the core C/C++ code of each solver must be modified.
The minimal modification would be just changing the data types, constants, tolerances, etc. However, in most cases these codes also rely on third-party libraries like Intel MKL/LAPACK, SuiteSparse, or even some outdated FORTRAN libraries (like NAG). This part must be modified to call the analogous functions (if present) from MCT, also at the C/C++ level.
Overall, this work requires tight collaboration with those vendors, with access to the C/C++ code on both sides.
We are open to such collaboration, but in my experience other companies are not ready to invest, because the market is really tiny and it won't pay off.
For example, I talked to Maple's top managers a few years ago. They said that fast multiple precision is a low-priority task because of low demand.
P.S.
To my knowledge, the developers of some of the libraries you mentioned (Mosek, CVX) are using MCT in their development work.
Thank you for your tests and support! This makes me happy.
@"Does it mean, that you are switching to INTEL compiler on Linux?"
Unfortunately, no. On the technical side, the Intel compiler has limited support for Linux flavors and versions.
Usually it supports only very recent versions, but in many institutions MATLAB and the toolbox are used on clusters with Linux versions released 5-10 years ago.
That is why TMW also uses GCC to build MATLAB on Linux (and a pretty old version of it, to preserve wide compatibility).
@"We upgraded to new compilers everywhere, and this opened a whole new Pandora's box of incompatibilities, etc."
Not only that, but we are also upgrading our code toward C++11 and beyond. We also switched to a new MPFR release (which now requires GCC 7) and other libraries. All of this combined has revealed lots of incompatibilities and issues.
Anyway, today we released 4.5.2 for GNU/Linux. Please test it in your environment; I hope you will see some improvement in speed. I would appreciate your comments.
Thank you,
Pavel.
We are working on this, sorry for the delay.
We upgraded to new compilers everywhere, and this opened a whole new Pandora's box of incompatibilities, etc.
@"I did not observed this problem with any other matlab algorithms (linear algebra, FFT, statistics, etc.)."
All of these are implemented in MATLAB directly in compiled languages (C++/FORTRAN) - that is why there is no difference.
In contrast, ODE-related functionality is implemented in the M-language, with all the consequences for speed.
@"Finally, to avoid this problem, you propose only remove MCT from the path? Am I right?"
I do not create MATLAB's design rules and constraints; I just follow them, as we all do.
I would be happy if someone could enlighten me with better workarounds.
MATLAB was initially designed as a procedural language, and OOP constructs were imposed later on to catch up with modern trends. As a result, OOP support in MATLAB is clumsy, slow, and inflexible.
ode45 is an M-script; it creates a lot of elementary arrays internally, and these are dispatched through the toolbox overloads (when the toolbox is on the path). That is why ode45 is slower.
Run ode45 through the performance profiler and check where the bottleneck is.
In general, the toolbox treats ODEs the same way as all other functionality - it does nothing intrusive (e.g. it does not replace standard scripts). MATLAB just picks up the toolbox overloads when it is on the path.
Hello Michal,
The toolbox overloads the standard MATLAB routines that create elementary arrays - zeros, ones, eye, etc.
This is needed to support the 'mp' type as an argument in such functions, e.g. eye(100,'mp').
If the requested type is not 'mp', the toolbox dispatches the call to the built-in MATLAB function (see arrayCreationOverload.m for more details). The dispatch is quick, but I expect it to be slower than a direct call to the double-precision functions.
MATLAB picks up the toolbox overload when it is on the path.
Probably this is what is causing the slow-down in your tests.
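For intuition, the dispatch pattern can be sketched like this (a toy Python analogue, not the actual toolbox code; the `MP` class and all names here are invented for the illustration):

```python
class MP:
    """Toy stand-in for the toolbox's extended-precision scalar type."""
    def __init__(self, v=0):
        self.v = v

def builtin_zeros(n):
    # Stand-in for MATLAB's built-in zeros(): plain double-precision entries.
    return [[0.0 for _ in range(n)] for _ in range(n)]

def zeros(n, cls="double"):
    """Overload that shadows the builtin: the type argument picks the path."""
    if cls == "mp":
        # Extended-precision path, analogous to zeros(100,'mp').
        return [[MP(0) for _ in range(n)] for _ in range(n)]
    # Any other type is forwarded to the builtin; the forwarding itself is
    # cheap, but still an extra call compared to hitting the builtin directly.
    return builtin_zeros(n)
```

So `zeros(3)` still produces plain doubles via the builtin, while `zeros(3, "mp")` takes the extended-precision path; the small forwarding overhead on the non-'mp' branch is what shows up when the overload is on the path.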
Yes, these functions try to use all available cores. If the functions were slow, it means that other processes were intensively using the same cores.
This functionality is available in the latest toolbox version, 4.6.0.