This bug has been fixed; the updated version will be available in 1-2 hours, after re-compilation.
Thank you for reporting the issue.
Hi Stefan,

Thank you very much for your excellent work on RKT and for citing the toolbox.
RKT looks very interesting and useful - congratulations on a new release!

I will also make this post public - probably somebody else will be interested in RKT.
Dear Stefan,

Thank you very much for keeping me updated, and congratulations on the progress with RKT!
I am doing fine, preparing an update for the sparse matrix engine....

I am fine with any citation format as long as one is provided. Probably the least common denominator among the different styles would be the following format (similar to the citation of MATLAB itself):

Multiprecision Computing Toolbox for MATLAB X.X.X.XXXX, Advanpix LLC., Yokohama, Japan.
BibTeX #1:
@book{mct2015,
 author = {{Multiprecision Computing Toolbox for MATLAB X.X.X.XXXX}},
 publisher = {Advanpix LLC.},
 address = {Yokohama, Japan}
}
BibTeX #2:
@software{mct2015,
  author = {{Advanpix LLC.}},
  title = {Multiprecision Computing Toolbox for MATLAB},
  url = {http://www.advanpix.com/},
  version = {X.X.X.XXXX},
  date = {YYYY-MM-DD},
}
If it is OK with you, I will make this question public, as it is one of the frequently asked questions.
Actually, for big matrices extended precision might be faster than double in some cases. For example, Krylov-type solvers (CG, GMRES, etc.) suffer from an inherent loss of orthogonality in the subspace basis as the iterations proceed - which leads to very slow convergence and the need for restarts, double application of modified Gram-Schmidt, etc.

Extended precision makes all of this disappear and can actually deliver the theoretical convergence rate, e.g. for CG (which is never seen in practice in double precision when no good preconditioner is known).
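
To make this concrete, here is a minimal sketch of such an experiment, assuming the toolbox's mp type and mp.Digits setting; the test matrix, tolerance, digit count and the hand-written CG loop are arbitrary placeholders rather than toolbox code:

% Sketch: textbook (unpreconditioned) CG in double vs. extended precision.
% All parameters below are placeholders.
n = 500;
A = gallery('lehmer', n);              % symmetric positive definite test matrix
b = ones(n, 1);

x_dbl = simple_cg(A, b, 1e-10, 2*n);   % double precision

mp.Digits(50);                         % e.g. 50 decimal digits
x_ext = simple_cg(mp(A), mp(b), mp(1e-10), 2*n);

function x = simple_cg(A, b, tol, maxit)
    % Plain conjugate gradient; inherits the precision of its inputs.
    x = 0*b;  r = b - A*x;  p = r;  rs = r'*r;
    for k = 1:maxit
        Ap    = A*p;
        alpha = rs / (p'*Ap);
        x     = x + alpha*p;
        r     = r - alpha*Ap;
        rsn   = r'*r;
        if sqrt(rsn) < tol, break; end
        p  = r + (rsn/rs)*p;
        rs = rsn;
    end
end

The same loop, run with mp inputs, keeps the basis vectors much closer to orthogonal, which is exactly where the convergence benefit comes from.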

In recent versions we have been reducing the speed difference between quadruple and the higher precisions (below 100 digits).
Precisions under 50 digits are already pretty much comparable in speed to quadruple, so we want to avoid a sharp performance divide between quadruple, octuple, etc.
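
As a rough way to see this on a particular machine (assuming the toolbox's mp and mp.Digits; the matrix size, digit counts and the benchmarked operation are arbitrary placeholders):

% Sketch: compare a few extended-precision levels against quadruple (34 digits).
n = 200;
A = rand(n);  B = rand(n);

mp.Digits(34);                         % quadruple as the baseline
A34 = mp(A);  B34 = mp(B);
t34 = timeit(@() A34 * B34);

for d = [50 100]                       % other extended-precision levels
    mp.Digits(d);
    Ad = mp(A);  Bd = mp(B);
    td = timeit(@() Ad * Bd);
    fprintf('%3d digits: %.2fx slower than quadruple\n', d, td / t34);
end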

We also have plans to implement (at least) quadruple precision on GPU - a prototype engine is tested and ready for implementation. The only question is time and development funding, as always.
MATLAB's matrix multiplication uses hardware SIMD vectorization - something we cannot employ for extended precision :(.
This makes all the difference (mtimes is also parallelized in the toolbox).
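
For example, a minimal sketch of that comparison for dense matrix multiplication (the size and digit count are placeholders, and it assumes the toolbox's mp and mp.Digits):

% Sketch: hardware double (BLAS/SIMD) vs. software quadruple for dense mtimes.
n = 500;
A = rand(n);  B = rand(n);

t_double = timeit(@() A * B);          % multithreaded BLAS with hardware SIMD

mp.Digits(34);                         % quadruple precision (34 decimal digits)
Aq = mp(A);  Bq = mp(B);
t_quad = timeit(@() Aq * Bq);          % software arithmetic, parallelized mtimes

fprintf('quadruple mtimes is about %.0fx slower than double\n', t_quad / t_double);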
Sorry, I don't collect such comparisons. Yes, quadruple has the lowest slowdown.

I think the fastest way would be to simply compare the particular algorithm(s) of interest.
The toolbox trial is fully functional and includes all the optimizations - it can be downloaded from our website and be up and running in a few minutes.
Sorry for the misunderstanding.

It is a good idea to measure toolbox timings for different levels of precision (we actually do this internally, but don't publish the results, as our website is already overfilled with tables). But again, only for the case of extended precision, not double.

I think that in the world of arbitrary-precision software we need to use a different "well-known comparison point".
Probably the toolbox might be the one, since it is now the fastest among the 3Ms (Maple, MATLAB and Mathematica) :).

Double precision is implemented in hardware, extended precision in software. The slowdown factor might be as high as 100-1000x.
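
If a single number is needed, a minimal sketch along these lines can produce it for one representative operation, a dense linear solve (the size, digit count and choice of operation are placeholders; it assumes the toolbox's mp, mp.Digits and overloaded backslash):

% Sketch: slowdown of an extended-precision dense solve vs. hardware double.
n = 300;
A = rand(n) + n*eye(n);                % diagonally dominant, well-conditioned test matrix
b = rand(n, 1);

t_double = timeit(@() A \ b);          % hardware double precision (LAPACK)

mp.Digits(34);                         % quadruple precision
Aq = mp(A);  bq = mp(b);
t_quad = timeit(@() Aq \ bq);          % software quadruple precision

fprintf('slowdown vs. double: about %.0fx\n', t_quad / t_double);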

