Calculation of Hankel Function
Hello.
I tried to calculate the Hankel function with mp objects, but as far as I can see it is not implemented yet. Therefore I used the definition to calculate it via Bessel functions. Here is an example:
First with double precision:
besselh(0,1,71.62771527592053131293674232738074i) =
0.000000000000000e+00 - 7.347258660573146e-33i
Since besselh is not implemented yet, I used besselh(0,1,x) = besselj(0,x)+1i*bessely(0,x) to calculate it.
mp.Digits(34) %Quadruple precision
besselj(0,mp('71.62771527592053131293674232738074i'))+1i*bessely(0,mp('71.62771527592053131293674232738074i'))=
-6.103515625e-05 - 0.0003108494982263580776736011003369421i
which is quite different. Now with higher precision:
mp.Digits(200) %Higher Precision
besselj(0,mp('71.62771527592053131293674232738074i'))+1i*bessely(0,mp('71.62771527592053131293674232738074i'))=
0 - 7.34725866057318428016340868623722...e-33i
seems better.
mp.Digits(500) %Higher Precision
besselj(0,mp('71.62771527592053131293674232738074i'))+1i*bessely(0,mp('71.62771527592053131293674232738074i'))=
0 - 7.34725866057318428016340868623722892521743368377951878300459469616931...e-33i
The following is also interesting (using double precision):
besselj(0,71.62771527592053131293674232738074i)+1i*bessely(0,71.62771527592053131293674232738074i)=
0.000000000000000e+00 - 7.347258660573146e-33i
I assume this is a bug, isn't it?
Thank you for your answer.
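A back-of-the-envelope aside (my own addition, not from the original posts): at z = 71.63i both besselj(0,z) and bessely(0,z) have magnitude around 6e29, while their combination is only ~7.3e-33, so roughly 62 significant digits cancel in besselj(0,z)+1i*bessely(0,z). With 34-digit arithmetic the sum is then pure rounding noise at the ~1e-5 level, which matches the spurious values above — consistent with catastrophic cancellation in the J + iY formula at fixed working precision. The same effect can be shown with only the Python standard library, using the textbook example of exp(-30) computed from its alternating Taylor series:

```python
from decimal import Decimal, getcontext

def exp_series(x, digits):
    """Sum the Taylor series of exp(x) in `digits`-digit decimal arithmetic."""
    getcontext().prec = digits
    x = Decimal(x)
    term = Decimal(1)
    total = Decimal(1)
    n = 0
    while True:
        n += 1
        term = term * x / n
        if total + term == total:  # term fell below 1 ulp of the running sum
            break
        total += term
    return total

# exp(-30) ~ 9.36e-14, but the alternating terms grow to ~7.8e11 first,
# so about 25 leading digits cancel -- same mechanism as J0 + i*Y0 above.
bad = exp_series(-30, 20)      # 20 digits: swamped by cancellation noise
good = exp_series(-30, 60)     # 60 digits: enough guard digits
safe = 1 / exp_series(30, 20)  # same 20 digits, but no cancellation
```

The 20-digit series value is wrong by several orders of magnitude, while the 60-digit run and the cancellation-free reciprocal both give exp(-30) ≈ 9.3576e-14. The toolbox evaluation faces the same trade-off, which is why mp.Digits(200) recovers the Hankel value.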
From Mathematica
N[BesselY[0, 40*I], {1000, 1000}] =
-5.34306132305811469773184318452720415...e-19 + 1.4894774793419899924224591570721184444946...e16 i
From Matlab
mp.Digits(16)
bessely(0,mp('40*i')) = -2.546479089470325 + 1.48947747934199e+16i
mp.Digits(25)
bessely(0,mp('40*i')) = 7.114780385429948677221705e-09 + 14894774793419899.92422459i
Normal Double Precision:
bessely(0,40*i) = -5.343061323058116e-19 + 1.489477479341990e+16i
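The comparison above can be reproduced outside both systems. This sketch (my addition, using Python's mpmath rather than the toolbox) agrees with the Mathematica digits, which supports the view that the small real part was being lost at 16 and 25 digits:

```python
from mpmath import mp, mpc, bessely

mp.dps = 50  # enough digits to resolve the tiny real part
y = bessely(0, mpc(0, 40))

# Y0(40i) = i*I0(40) - (2/pi)*K0(40): the real part (~ -5.3e-19) is about
# 35 orders of magnitude below the imaginary part (~ 1.5e16), so a fixed
# 16- or 25-digit evaluation of an unscaled formula cannot recover it.
print(y.real)  # should agree with Mathematica's -5.343061323058114...e-19
print(y.imag)  # should agree with Mathematica's 1.489477479341989...e16
```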
Is there any information on this problem? Or is it fixed already? I have not checked it lately.
There were no updates to this issue recently.
(I have been busy updating the dense matrices engine.)
I will resume the bessely code update shortly. Sorry for the delay.
I have just released a Linux version of the toolbox with fixed bessely: http://goo.gl/btsBt
(you use the Linux version, right?)
Now it provides results accurate up to the last digit of the specified precision (and the last digit is correctly rounded).
The main drawback is that the function is slow for large arguments. We will improve this in upcoming updates.
Please test it in your conditions.
I will test it right away once the release for Windows is available.
Would appreciate your feedback.
Thank you. It gives the correct results now. Performance is indeed about 1000x slower with 34-digit precision compared to normal double, but this is not a big deal for me at the moment.
The toolbox now uses an adaptive algorithm for series summation: every operation is analysed for cancellation/rounding errors.
If these reach an unacceptable level, the precision is boosted to compensate. At the end we re-arrange the terms to guarantee minimum overall accuracy loss [1].
It will become much faster in the future, when we add asymptotics for large arguments and (probably) simplify the algorithm.
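For readers curious what "re-arrange the terms to minimize accuracy loss" can look like, here is a deliberately simplified plain-double sketch in Python — my illustration of the compensated-summation idea behind [1], not the toolbox's actual implementation:

```python
def accurate_sum(terms):
    # Sketch of the idea in [1]: order the terms by decreasing magnitude,
    # then accumulate with Neumaier-style error compensation, so low-order
    # bits lost to rounding are recovered in a separate correction term.
    terms = sorted(terms, key=abs, reverse=True)
    s = 0.0  # running sum
    c = 0.0  # running compensation for lost low-order bits
    for x in terms:
        t = s + x
        if abs(s) >= abs(x):
            c += (s - t) + x  # low-order bits of x were lost
        else:
            c += (x - t) + s  # low-order bits of s were lost
        s = t
    return s + c

# A small term between two huge cancelling ones: naive summation drops it.
vals = [1e16, 1.0, -1e16]
print(sum(vals))           # -> 0.0  (the 1.0 is lost)
print(accurate_sum(vals))  # -> 1.0  (recovered by compensation)
```

The production algorithm described above is of course more involved (it also boosts the working precision adaptively), but the principle — track and repair cancellation instead of hoping it does not occur — is the same.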
Hope it is useful for your work.
Btw, our system solver has been updated today with faster decompositions under the hood. You might see some changes in computing time.
[1] J. Demmel, Y. Hida. Fast and accurate floating point summation with application to computational geometry.