
Updated toolbox for *NIX platforms (same links as always):

GNU/Linux:
Advanpix Toolbox for Linux

Mac OS X:
Advanpix Toolbox for Mac OSX

max/min for sparse matrices will be added shortly, in the next micro-update.
Element-wise division and multiplication with sparse matrices were not implemented - until now.

I have just released an updated version of the toolbox with these operations.
For now, Windows only: http://goo.gl/pMXV3.

Mac OSX & Linux will be available later today - after re-compilation and unit tests.

As you know, sparse matrices are a new feature in the toolbox - functions are being added (almost) every day.

Because of the limited (but growing) functionality for sparse matrices, we suggest the following workflow (where applicable):
1. Do all mathematical operations and manipulations on the non-zero elements stored as dense vectors.
2. Build the sparse matrix from the vector of prepared non-zeros and their coordinates (function sparse()).
3. Solve the resulting sparse system against different right-hand sides, or use it in iterative processes.

The toolbox has the necessary solvers and optimized SpMV operations needed for step 3 - this has been our priority from the beginning.
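
Here is a minimal sketch of that workflow (identifiers are illustrative; I assume the backslash operator as the solver in step 3):

n = 1000;
rows = (1:n)';                       % row coordinates of the non-zeros
cols = (1:n)';                       % column coordinates
vals = mp('pi')*ones(n,1);           % step 1: values prepared as a dense mp vector
A = sparse(rows, cols, vals, n, n);  % step 2: assemble the sparse matrix
b = mp(ones(n,1));
x = A\b;                             % step 3: solve against a right-hand side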

This workflow is not perfect, but we lift these limitations with every release.

Now we are working on eigs(). svds() will be added right after eigs(), as the two are related.

min/max currently have no support for sparse matrices (both convert the input to a full matrix, which is slow).
We will update both functions in the next micro-updates shortly.

Michael, thank you very much for your help!
struct34digits=mp(struct128digits,34)

Yes, this is the official way of changing the precision of a variable, and it works perfectly.
It has clear and simple semantics.

The case when the operands of a single math operation have different precisions is a totally different beast - don't go that way.
It has actually been a long-standing holy war among developers of arbitrary-precision libraries: which precision to use in that case. Both camps - "minimum" and "maximum" precision - have their arguments, and no straightforward semantics exist here.

Please also note that all variables stay in the precision they were created with, regardless of precision changes made by mp.Digits() later. The same applies to files - variables are stored/loaded with the original precision they had.
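
For example (a small sketch of the rule above):

mp.Digits(34);
a = mp('pi');      % a is created with 34 digits
mp.Digits(50);     % raising the default precision afterwards...
mp.Digits(a)       % ...a still reports 34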

So in your case it is a smart idea to store the numbers in high precision, and then round them down to a lower precision as needed upon loading.
Hello Michael,

Thank you for your questions (as always)!

1. We overload the 'num2str' function so that it supports mp(). The same goes for sprintf and fprintf.
You can use these functions with mp() exactly as with double.
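
For example (a hedged sketch - exact output formatting may differ):

x = mp('pi');
s = num2str(x)             % overloaded num2str returns the decimal string
fprintf('x = %s\n', s);    % standard printing works on the result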

2. mp() numbers are stored in binary format, not decimal.
So when the precision of an mp() number is increased, we see no decimal zeros appended.

3. 

I was wondering if it is possible to make a function which simply converts a low-precision number like
numberlowprecision = mp('1.234',9);

to a higher-precision number like
numberhighprecision = mp(numberlowprecision,34)

which is actually
numberhighprecision = mp('1.234',34);

I am afraid this is not possible for all numbers. It is possible for a narrow set of numbers which have an exact binary representation (1/2, etc.), but not in general.

When a number is stored with some precision, its accuracy is rounded to the precision specified.
We can then extend the precision of the number (represent it using a higher number of bits), but not its accuracy.
The accuracy is lost, and we can only guess what it was.
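
To illustrate (variable names are mine):

low  = mp('1.234', 9);     % parsed and rounded at 9 digits
high = mp(low, 34);        % precision extended, accuracy unchanged
ref  = mp('1.234', 34);    % parsed directly at 34 digits
high - ref                 % generally non-zero: the lost digits are not recovered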

That is why it is better to avoid any precision conversions.

We have some additional information about this in our brief user manual: http://goo.gl/NRsaRx

4. The general rule we follow when dealing with numbers of different precisions is to do the operation with the maximum precision of the two arguments.

However, as I see from your examples, this is not always the case. Checking now...

5. You can check the precision of any mp() entity using function mp.Digits():

a = mp('pi');
mp.Digits(a)     % returns the precision a was created with

6. As strange as it might seem, lower precision gives no performance benefit.
The fastest mode is quadruple - all other precisions are slower.

Quadruple uses heavy optimization tricks, since its size is known in advance.
We have no such luxury for arbitrary-precision numbers - hence the slower execution.
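
A quick way to see this for yourself (a rough micro-benchmark sketch):

mp.Digits(34); A = mp(rand(200)); tic; A*A; toc    % quadruple - the fastest mode
mp.Digits(25); B = mp(rand(200)); tic; B*B; toc    % fewer digits, yet not faster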

7. 
A question on another matter is the initialisation of mp-arrays. I did not get the syntax yet. I tried:
mparray = mp('[1.23456,1.23456]');
mparray = mp({'1.23456','1.23456'})

We do not support such conversions at the moment.

Element-wise initialization is the only way in such a case:

mparray = [mp('1.23456'), mp('1.23456')];

***

Let me know why you need to change the precision during the computations (iterative, Newton-like schemes?).

Btw, today we have moved closer to proper Bessel functions - we have developed our own FORTRAN to arbitrary-precision C++ converter. Now testing.
There are several ways to represent "quadruple".

We rely on the IEEE 754 standard - http://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format

Whereas what the authors use is not actually quadruple, but a 31-decimal-digit "double-double" surrogate with a reduced exponent range and accuracy issues for really small numbers.

However "double-double" has several advantages - it is better suitable for GPU and low-level CPU optimization tricks.

Some time ago I implemented my own "double-double" library for tests on GPU. If I find it superior in speed, I will add it to the toolbox as an option.
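
For the curious: a "double-double" value is kept as an unevaluated sum hi + lo of two doubles. A tiny sketch of the basic building block (Knuth's TwoSum; illustrative only, not toolbox code):

function [s, e] = two_sum(a, b)
    % Exact addition of two doubles: s is the rounded sum and e the
    % rounding error, so that a + b == s + e exactly.
    s = a + b;
    z = s - a;
    e = (a - (s - z)) + (b - z);
end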
Hi Michael!

Matrix multiplication is already polished to the extreme.
At the moment, the only thing we can do for a speed-up is to use a CPU with more cores :).

This will change in the future, when we finish porting to GPU. Then we can expect a 5-10x increase in performance.

The Bessel module is being actively modified. In the last release I changed the algorithms for Y0, Y1, J0 and J1. Still, I do not consider it finished or mature. The classical recommended algorithms (which I use) suffer from an accuracy drop when the argument is large and close to pure imaginary. No publicly available references mention this situation.

The main issue is that information on good algorithms for computing Bessel functions of complex arguments is scarce. There are actually only two published papers on the topic.

MATLAB itself uses a 30-year-old FORTRAN library developed by Amos (http://www.netlib.org/amos/). Now I am trying to make the underlying algorithms precision-independent and convert them to C++.

Until then, please increase the precision of the argument proportionally to its magnitude, and/or use the built-in MATLAB function.
Sorry for the inconvenience - hopefully we will fix it soon.
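
Something along these lines (the digit-scaling heuristic is only my rough suggestion; I assume besselj is called with an mp argument):

z = 1e6 + 1e6i;                   % large argument close to pure imaginary
d = 34 + ceil(log10(abs(z)));     % roughly one extra digit per decade of magnitude
y = besselj(0, mp(z, d));         % evaluate at the boosted precision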
Just a minimal example would be enough. I am very curious which toolbox routines are called from your functions.
Great, this is at least something.

I guess the major bottleneck is no longer in assignment and referencing.

By any chance, could you send me the code so that I can trace it?
Michael,

Just want to let you know that the newest version of the toolbox (available for download from the website, as usual) gives the following result:

A = mp(zeros(200,200));
p = mp('pi');
tic
for ii = 1:200
    for jj = 1:200
        A(ii,jj) = p;
    end
end
toc

Elapsed time is 2.97645 seconds.


It is approximately 5 times faster now :-). Referencing and concatenation are faster too (up to 2 times).

I think I have finally found a way to access MATLAB's internal data directly, without crashes.

Please test it in your environment. Would appreciate your feedback.