0
Under review

Saving leads to corrupt file

JFW 4 years ago updated by Pavel Holoborodko 4 years ago 3

I have a loop in which I generate approximately 100 mp matrices with sizes ranging from N = 4 to ~1500. In order to use them later, I save them as cell entries (A{i} = A). In some instances these will save; other times I get a corrupt file error warning.

It always happens with the same matrices; with some parameter choices I get the errors, with others I don't.
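
Roughly, the saving loop looks like this (a simplified sketch; the names and sizes are placeholders, not my exact code):

N = round(linspace(4,1500,100));   % matrix sizes, 4 up to ~1500
A_cell = cell(1,100);
for i = 1:100
    A_cell{i} = mp(rand(N(i)));    % store each mp matrix as a cell entry
end
save('matrices.mat','A_cell');     % intermittently yields a corrupt file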

Any ideas or suggestions?

0
Under review

Compilation in standalone application

eduardo salazar 4 years ago updated by Pavel Holoborodko 4 years ago 1

I wonder if it is currently possible to compile your libraries using MCC for eventual deployment as a standalone application.
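
For concreteness, the kind of invocation I have in mind (a sketch; main.m and the toolbox path are placeholders):

mcc -m main.m -a /path/to/AdvanpixMCT    % bundle the toolbox files into the deployable archive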

0
Fixed

eigs() support for sigma = 'sa'

Joe 4 years ago updated by Pavel Holoborodko 4 years ago 3

I've noticed that when using advanpix data structures with eigs(), I get erroneous results when requesting the smallest eigenvalue. 


Please refer to the following code using Matlab's data structure:


A_matlab = speye(25,25); %initialize Matlab sparse identity matrix
A_matlab(13,13) = 1/3;
eigs(A_matlab,1,'sa') %request smallest eigenvalue

When running this code, I receive the following output:

ans =

   0.333333333333333


Now consider a similar experiment with Advanpix:

A_advanpix = mp(speye(25,25),32); %initialize Advanpix sparse identity matrix
A_advanpix(13,13) = mp('1/3');
eigs(A_advanpix,1,'sa') %request smallest eigenvalue

When running this code, I receive the following output:  


ans = 

    0

I have noticed that if I replace '1/3' with '-1/3' in the above experiments, I obtain the correct results: both return -1/3 in the appropriate precision. I appreciate any help with this issue. Thank you for your assistance!
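
As a cross-check (a sketch, assuming the matrix is small enough to form densely), the result can be compared against the dense eigendecomposition:

min(eig(full(A_advanpix)))    % expected: 0.333... to the working precision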

0
Answered

Slow vectorized operations compared to Matlab double

Joe 4 years ago updated by Pavel Holoborodko 4 years ago 2

I've noticed that doing vectorized operations with Advanpix mp data structures is extremely slow compared to Matlab. For example, consider the following code, where I initialize 1000x1000 sparse, random matrices using both Matlab double and Advanpix mp data structures:

A_matlab = sprand(1000,1000,.2);
A_advanpix = mp(sprand(1000,1000,.2),16);

I then time the operation of "zeroing out" the first 100 rows in each

tic
A_matlab(1:100,:) = 0;
toc
Elapsed time is 0.008518 seconds.

tic
A_advanpix(1:100,:) = 0;
toc
Elapsed time is 2.534518 seconds.

Of course, this gets much worse as the size of A increases. Is this behavior expected, or am I using the Advanpix data structures incorrectly?
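
One workaround I have considered (a sketch; I have not verified that it is actually faster) is to zero the rows by multiplying with a sparse diagonal mask instead of using indexed assignment:

d = ones(1000,1); d(1:100) = 0;
S = mp(spdiags(d,0,1000,1000),16);    % diagonal mask with zeros on the first 100 rows
A_advanpix = S*A_advanpix;            % rows 1..100 become zero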

Thank you!

Joe

0
Answered

Specifying the number of bits instead of number of digits

Joe 4 years ago updated by Pavel Holoborodko 4 years ago 1

Is it possible to specify the number of bits used to represent numbers, rather than the number of decimal digits?

Answer
Pavel Holoborodko 4 years ago

Not directly, but you can convert the required number of bits to decimal digits using a simple formula:

>> d = floor(bits*log10(2));  % precision in decimal digits
>> mp.Digits(d)
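
For example, the 113-bit significand of IEEE quadruple precision maps to 34 decimal digits:

>> d = floor(113*log10(2))    % d = 34
>> mp.Digits(d)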
0
Answered

How is a double precision number extended?

kuanxu33 4 years ago updated 4 years ago 2
>> mp(1/3)            % Accuracy is limited to double precision
ans =                 % since 1/3 was evaluated in double precision first
    0.33333333333333331482961625624739099293947219848633

How are the extended digits, i.e. 1482961625624739099293947219848633, created? Are they totally random, or are some of them determined by the guard digits in the double-precision representation of 1/3?


I had been assuming that a natural extension would append trailing zeros, resulting, in the example above, in something like:

0.333333333333333300000000000000000000000000000000

Any discussion/comment is welcome. Thanks!

Answer
Pavel Holoborodko 4 years ago

The toolbox uses the binary representation of floating-point numbers.

When we extend a double-precision number with zeros, this happens in the binary representation; it does not correspond to zeros in the decimal representation (please refer to a guide on floating-point number formats for a deeper understanding).

Do not use mp(1/3), because 1/3 is computed in double precision first and only then converted to quadruple: the final value has double-precision accuracy, not the intended quadruple.

The correct way to compute constants to full accuracy is:

>> mp.Digits(34);
>> mp('1/3')
ans = 
    0.3333333333333333333333333333333333   

>> mp.Digits(50);
>> mp('1/3')
ans = 

    0.33333333333333333333333333333333333333333333333333

>> mp.Digits(70);
>> mp('1/3')

ans = 

    0.3333333333333333333333333333333333333333333333333333333333333333333333

In this case, the double-precision constant is not extended by zeros (in binary format) but computed with full accuracy from the onset (within the limits of the specified precision).
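
To see where the extra digits of mp(1/3) come from, print the double itself to many decimal places (plain Matlab, no toolbox involved): the digits match because mp() converts the exact binary value of the double.

>> fprintf('%.52f\n', 1/3)
0.3333333333333333148296162562473909929394721984863281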

***

Could you please reply to another thread you started yesterday: https://mct.userecho.com/communities/1/topics/179-it-would-be-fantastic-if-quadeig-is-supported


We prioritize only well-motivated requests to add new functionality.

0
Fixed

R2019b linux problem

Michal Kvasnicka 5 years ago updated by Pavel Holoborodko 4 years ago 14

On R2019b (Linux Mint 19.2) the following problem occurs:

>> mp.Test
mp.Digits() : <- success
mp.GuardDigits() : <- success
double() : <- success
int8() : <- success
uint8() : <- success
int16() : <- success
uint16() : <- success
int32() : <- success
uint32() : <- success
int64() : <- success
uint64() : <- success
colon() : <- success
plus() : <- success
minus() : <- success
times() : <- success
mtimes() : <- success
rdivide() : <- success
ldivide() : <- success
mldivide() : <- success
mrdivide() : <- success
mpower() : <- success
power() : <- success
realpow() : <- success
transpose() : <- success
ctranspose() : <- success
uminus() : <- success
uplus() : <- success
sin() : <- success
cos() : <- success
tan() : <- success
sec() : <- success
csc() : <- success
cot() : <- success
acos() : <- success
asin() : <- success
atan() : <- success
acot() : <- success
atan2() : <- success
hypot() : <- success
cosh() : <- success
sinh() : <- success
tanh() : <- success
sech() : <- success
csch() : <- success
coth() : <- success
acosh() : <- success
asinh() : <- success
atanh() : <- success
acoth() : <- success
asech() : <- success
acsch() : <- success
exp() : <- success
expm1() : <- success
log() : <- success
log10() : <- success
log1p() : <- success
log2() : <- success
nextpow2() : <- success
pow2() : <- success
sqrt() : <- success
reallog() : <- success
realsqrt() : <- success
nthroot() : <- success
pow2(F,E) : <- success
min(), max() : <- success
prod(matrix) : <- success
prod(vector) : <- success
sum(matrix) : <- success
sum(vector) : <- success
cumsum(matrix) : <- success
cumsum(vector) : <- success
cumprod(matrix) : <- success
cumprod(vector) : <- success
dot() : <- success
cross() : 2 <- fail
3 <- fail
9 <- fail
10 <- fail
<- success
svd() : <- success
qr() : <- success
lu(square) : <- success
lu(rect) : <- success
pinv() : <- success
null() : <- success
balance() : <- success
eig() : <- success
qz() : <- success
hess() : <- success
chol() : <- success
schur() : <- success
ordschur() : <- success
rank() : <- success
trace() : <- success
det() : <- success
inv() : <- success
sort(real) : <- success
sort(complex) : <- success
find() : <- success
<,<=,>,>=,==,~= : <- success
and,or,not,xor : <- success
all() : <- success
any() : <- success
isinf() : <- success
isnan() : <- success
isfinite() : <- success
isreal() : <- success
abs() : <- success
sign() : <- success
conj() : <- success
angle() : <- success
imag() : <- success
real() : <- success
complex() : <- success
ceil() : <- success
fix() : <- success
floor() : <- success
idivide() : <- success
round() : <- success
rem() : <- success
mod() : <- success
tril(matrix) : <- success
tril(vector) : <- success
triu(matrix) : <- success
triu(vector) : <- success
diag(matrix) : <- success
diag(vector) : <- success
norm(matrix) : <- success
norm(vector) : <- success
cond() : <- success
rcond() : <- success
factorial() : <- success
mean() : <- success
std() : <- success
erf() : <- success
erfc() : <- success
erfi() : <- success
FresnelS() : <- success
FresnelC() : <- success
gammaln() : <- success
gamma() : <- success
gammainc() : <- success
psi() : <- success
zeta() : <- success
eint() : <- success
logint() : <- success
cosint() : <- success
sinint() : <- success
besselj() : <- success
bessely() : <- success
besseli() : <- success
besselk() : <- success
besselh() : <- success
hypergeom() : <- success
KummerM() : <- success
KummerU() : <- success
expm() : <- success
logm() : <- success
sqrtm() : <- success
sinm() : <- success
cosm() : <- success
sinhm() : <- success
coshm() : <- success
fft : <- success
ifft : <- success
fft2 : <- success
ifft2 : <- success
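
For reference, a minimal probe of the failing function (a sketch; mp.Test exercises more cases than this):

>> cross(mp([1 0 0]), mp([0 1 0]))    % expected: [0 0 1]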

0
Declined

add support for subview of array

Zhen Zhong 5 years ago updated by Pavel Holoborodko 5 years ago 1

When I take a slice of an mp array, Matlab creates an additional copy of that slice, and since my array is huge, this takes a lot of time.
Is there any possibility of adding some function like the block operator in Eigen3?

Like this one

https://ch.mathworks.com/matlabcentral/fileexchange/24576-inplacearray-a-semi-pointer-package-for-matlab
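
To illustrate the cost (a sketch with arbitrary sizes):

A = mp(rand(5000));            % large mp matrix
tic; B = A(1:end-1,:); toc     % indexing materializes a full copy of the slice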

0
Answered

previous version download?

dattorro 5 years ago updated by Anuranan Das 2 years ago 4

Looking for past versions of the multiprecision toolbox on the Advanpix website.

Where are they?

0
Under review

Add precomputeLU support for dense matrix

Zhen Zhong 5 years ago updated by Pavel Holoborodko 5 years ago 3

Currently, precomputeLU only supports sparse matrices. In addition, maybe also add precomputeQR.
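
In the meantime, a dense LU can be reused manually via the standard three-output lu (a sketch, not the toolbox's precompute API):

A = mp(rand(500)); b1 = mp(rand(500,1)); b2 = mp(rand(500,1));
[L,U,P] = lu(A);        % factor once
x1 = U\(L\(P*b1));      % two triangular solves per right-hand side
x2 = U\(L\(P*b2));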