Decimal is one of the features I would like to add to the toolbox (in both fixed and arbitrary precision).
In comparison to double, decimals are slower since they have to be emulated in software.
I am sure there is no tolerance behind the "==".
This rather relates to the algorithms inside colon - TMW developers change them from version to version, and there is a chance that 0.02 happens to coincide with array(1,100) in this particular implementation of colon.
Anyway, surprisingly, colon is very difficult to implement correctly, and the TMW support forums have a lot of examples of degenerate cases; e.g. see the comments on this post:
http://blogs.mathworks.com/loren/2006/11/10/all-about-the-colon-operator/
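As a side note, whether the coincidence actually holds in a given MATLAB release can be checked by inspecting the bit patterns directly; this is just a quick illustrative check in plain double precision, and the outcome may differ between releases:

>> a = 2e-4:2e-4:1;                  % plain double-precision colon
>> num2hex(a(100)), num2hex(0.02)    % identical hex strings <=> bit-to-bit equal
>> a(100) == 0.02                    % true only if the bit patterns match exactly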
Dear Michael,
Thank you for your question, and sorry for the delay in replying.
This situation is not a bug.
The toolbox follows the philosophy of the IEEE standard and stores numbers in binary floating-point format.
The thing is that the number 0.02 is not exactly representable in binary floating-point format, regardless of precision.
We can only approximate it by the nearest representable number in a given precision, for example (I use 15 digits for simplicity):
>> mp.Digits(15);
>> fprintf('%.20e\n',mp('0.02'))
1.99999999999999900080e-02
The colon operation includes quite a lot of algorithms under the hood (e.g. to make sure the generated numbers are symmetric around the midpoint, etc.), and there is no guarantee that it will generate exactly the same bit-to-bit approximation to 0.02 in the middle of the generated sequence:
>> array=mp('2*10^(-4)'):mp('2*10^(-4)'):mp('1');
>> fprintf('%.20e\n',array(1,100))
2.00000000000000177636e-02
Both numbers are accurate approximations to 0.02 up to the same number of decimal digits:
>> fprintf('%.14e\n',mp('0.02'))
2.00000000000000e-02
>> fprintf('%.14e\n',array(1,100))
2.00000000000000e-02
But they are not the same in bit-to-bit binary representation (they might be in some precisions, but there is no guarantee, of course)!
This is a classical issue of decimal vs. binary floating-point numbers, and it is the reason why == shouldn't be used in the floating-point world. Please compare numbers by some tolerance instead.
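For illustration, a tolerance-based comparison could look like this (the tolerance below is just an example - it should be chosen according to the working precision and the magnitudes involved):

>> x = mp('0.02');  y = array(1,100);
>> tol = 10^(-mp.Digits()+1)*abs(x);   % relative tolerance tied to the working precision
>> abs(x - y) <= tol                   % true - the numbers agree up to working precision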
Would it be OK to make this question public, since it touches on a common issue and can be useful to others?
Dear Denis,
Thank you very much for the report!
Indeed, this is a bug, and it has been fixed in the latest build.
Please download the updated version: http://goo.gl/pMXV3
Thank you,
Pavel.
For future reference, here is the architecture of the eigensolver implemented in the toolbox:
http://www.advanpix.com/2016/10/20/architecture-of-eigenproblem-solver/
In some cases we are even faster than MATLAB's double-precision EIG.
This is a case-sensitive function:
>> FresnelS(mp(1))
ans = 0.4382591473903547660767566966251527
I have just received a response from MIT regarding FFTW licensing options for non-profit organizations (like Advanpix):
A full commercial license is $12,500.
The license price for non-profits is $8,333.
I am still waiting for a response from the NFFT team - I hope for a more sensible answer.
At the very least, I will replace the calls to FFTW with my own code.
I am trying to contact the developers to find out what licensing terms are available for closed-source projects.
Meanwhile, I found that there is a pure MATLAB implementation of NFFT:
http://web.eecs.umich.edu/~fessler/irt/irt/nufft/
It was developed by one of the original authors.
I haven't checked its source code, but chances are that it can be used with the toolbox (?).
Thank you for the important details - very interesting!
The main difficulty is not FFTW per se, but the licensing terms of NFFT and FFTW.
The GNU GPL license doesn't allow the use of these libraries in closed-source products.
Alternatively, I would have to buy extremely expensive commercial licenses for them ($5K+).
This functionality has already been requested (by Denis), and I will look into what I can do in more detail.
A full re-write is needed, at the very least, to avoid breaching the licensing terms.
I will update you on this (though I have no high hopes for the near future).
One small update: in some special cases, quadruple precision in the toolbox is now faster than MATLAB's double precision:
http://www.advanpix.com/2016/10/20/architecture-of-eigenproblem-solver/#Speed_comparison_double_vs_quadruple_precision