Your comments
Pavel,
Immediately above, you made a post consisting of just PLANNED in a blue rectangle. Can you please elaborate on what is planned, or what your post means? Thanks.
Michael_
Thank you. That is very helpful information. The insight about the effect of parallelization is particularly informative. Unfortunately for me, the only calculations whose computation time I really care about are matrix calculations on large matrices. That doesn't mean I'm unwilling to use quad precision when needed, but if I have a calculation that takes a day to run in MATLAB double precision, I will assess the almost-year it would take in quad precision to be unviable. Heck, even 3 weeks probably isn't going to cut it. On the other hand, I have some situations where I think only a small part of a large algorithm needs quad precision (dealing with matrices which can get very ill-conditioned), while most of the computationally intensive portions can remain in MATLAB double precision.
I wish the powers that be had moved to hardware-level support for quad precision, but alas, with the popularity of GPUs, often run in single precision, the world seems to be heading in the other direction. Of course, that would put Pavel out of business, but I guess he could implement octuple precision to keep things going.
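The mixed-precision idea above (escalating only the ill-conditioned step) can be sketched roughly as follows. This is purely illustrative Python using mpmath as a stand-in for the toolbox's mp type; the function name, condition-number threshold, and digit count are my own assumptions, not anything from the product.

```python
import numpy as np
from mpmath import mp, matrix, lu_solve

def solve_adaptive(A, b, cond_limit=1e12, digits=34):
    """Solve A x = b, escalating to ~quad precision (34 decimal
    digits) only when A looks dangerously ill-conditioned."""
    if np.linalg.cond(A) < cond_limit:
        return np.linalg.solve(A, b)   # fast hardware-double path
    mp.dps = digits                    # roughly IEEE binary128 quad
    x = lu_solve(matrix(A.tolist()), matrix(b.tolist()))
    # hand the result back to the double-precision parts of the algorithm
    return np.array([float(x[i]) for i in range(x.rows)])
```

The point of the design is that the expensive software-arithmetic path is entered only for the one step that actually needs it, so the bulk of a large algorithm keeps hardware-double speed.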
Can you provide guidance, for various computations, on the slowdown factor of quad precision vs. MATLAB double precision? Is quad precision in fact the precision setting with the least slowdown?
Pavel, you have misinterpreted my intent. I am not asking for performance comparisons against Maple's arbitrary precision to demonstrate superiority over the competition. Rather, I am interested in an idea of the computation time to expect when using the Multiprecision Computing Toolbox, with MATLAB double precision as a well-known comparison point. It would be useful to understand how computation speed in the Multiprecision Computing Toolbox varies with the precision selected. One of these precisions, so to speak, would be MATLAB double precision.
So ideally, a computation-time table with columns for MATLAB double precision and for the Multiprecision Computing Toolbox at various precisions. And if it is the case that quad precision is the fastest multiprecision setting, then certainly those results should be part of such a table, and to me would be the most important column.
Thanks.
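One way to produce the kind of timing table requested above is to benchmark the same linear solve at each precision. The sketch below is not toolbox code; it uses Python's mpmath as an assumed stand-in for software multiprecision (16 digits ~ double, 34 digits ~ quad), and the function name and digit choices are illustrative.

```python
import time
import numpy as np
from mpmath import mp, matrix, lu_solve

def benchmark(n=40, digits_list=(16, 34, 100)):
    """Return {label: seconds} for one n-by-n linear solve:
    hardware double via NumPy, then software arithmetic at each
    requested decimal-digit count (34 digits ~ IEEE quad)."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    results = {}
    t0 = time.perf_counter()
    np.linalg.solve(A, b)
    results["hardware double"] = time.perf_counter() - t0

    Am, bm = matrix(A.tolist()), matrix(b.tolist())
    for d in digits_list:
        mp.dps = d
        t0 = time.perf_counter()
        lu_solve(Am, bm)
        results[f"mpmath {d} digits"] = time.perf_counter() - t0
    return results

if __name__ == "__main__":
    times = benchmark()
    base = times["hardware double"]
    for label, secs in times.items():
        print(f"{label:>20}: {secs:.5f} s  ({secs / base:.0f}x)")
```

Dividing each row by the hardware-double time gives exactly the slowdown-factor column discussed above; actual toolbox numbers would of course differ, since mpmath is pure Python.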
Are you actually modifying the FMINCON source code, or are you independently creating code to emulate FMINCON's functionality (at least for those FMINCON algorithm options which you support)?
If the former, have you encountered algorithm assumptions predicated on all computations being done in double precision, such that, if not identified and addressed, they would prevent the MP version of FMINCON from achieving arbitrarily high accuracy, presuming that the user-settable algorithm tolerances (e.g., for feasibility and optimality) are set appropriately? This seems like it might be a non-trivial effort to implement.
Edit: I now see https://mct.userecho.com/forums/1-general/topics/125-mp-linprog-support/ . So, apparently you have to create your own implementation of FMINCON functionality. Nonlinear optimization solvers are very complicated, and there are many subtleties and refinements which are usually not documented in papers and books, but can be very significant to performance and robustness. A highly non-trivial undertaking to do well!