In rough order of decreasing priority (though other people have different priorities, I'm sure):

I'd pick KNITRO. I suppose arbitrary precision need not be implemented for all of its algorithm options, of which there are several (https://www.artelys.com/tools/knitro_doc/2_userGuide/algorithms.html), combined with the derivative options (https://www.artelys.com/tools/knitro_doc/2_userGuide/derivatives.html).

Also Mosek, which can address LMIs and other conic problems (in the forthcoming Mosek version 9, any product of second-order cones, semidefinite cones, exponential cones, and power cones, with exponential and power cones being new for Mosek 9). As you know from your eigenvalue work, sometimes you have an inherently ill-conditioned problem and need to solve semidefinite problems to high precision. I really don't like it when solvers cheat on semidefinite constraints by allowing a slightly negative eigenvalue for a psd constraint, and I don't like having to build in a buffer against that by constraining the matrix to be >= small_number_as_buffer_against_cheating. There actually is VSDP (http://www.ti3.tu-harburg.de/jansson/vsdp/VSDP2012Guide.pdf), which is built on top of INTLAB, but I haven't heard of anyone using VSDP to practical effect.

CPLEX might be a good choice, but IBM is bureaucratic as hell, so good luck with that. Gurobi (a smaller problem class than CPLEX, since it will not do non-convex QP or non-convex MIQP) also has a top reputation for what it does, and was started by the founder of CPLEX after CPLEX was sold.

Arbitrary precision seems like it might not be useful for tough mixed-integer problems (except when continuous relaxations are extremely ill-conditioned), because long run-times usually force you to specify a non-trivial gap, such as 1% or higher, as the termination criterion between the upper bound from the best integer-feasible solution and the lower bound from the continuous relaxation. But with arbitrary precision, you can reduce the integrality tolerance, allowing Big M logic constraints to have a larger M (bound on the variable) and have the logic still function correctly. Many naive users specify a very large Big M value, which, given the default integrality tolerance, means that their Big M constraints might not function as intended, thereby producing the wrong answer. But for certain very tough problems, M needs to be very big, which means the integrality tolerance needs to be very small, which can be achieved with arbitrary precision.
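To make the Big M / integrality tolerance interaction concrete, here is a toy arithmetic sketch (the function name and the specific M and tolerance values are my own illustration, not any particular solver's defaults or API):

```python
# Toy illustration (not a real solver run): the logic constraint is
# x <= M * y, where y is a 0-1 variable intended to force x = 0 when y = 0.

def max_leak(M, int_tol):
    """Largest x the constraint x <= M*y still permits when the solver
    accepts y = int_tol as "integer feasible" and reports it as y = 0."""
    return M * int_tol

# A naive, very large M with a typical default integrality tolerance:
print(max_leak(1e10, 1e-5))   # x can leak up to about 1e5 even though y "is" 0

# Tightening the tolerance (practical if the solver carries more precision)
# shrinks the leak so the Big M logic functions as intended:
print(max_leak(1e10, 1e-12))  # leak is now only about 1e-2
```

The point is simply that the leak scales as M times the integrality tolerance, so a bigger M demands a correspondingly smaller tolerance.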

Of course, there will be a huge performance penalty from doing floating point in software. I think some folks have looked at hybrid implementations which use higher precision only in the numerically vital places, and hardware floating point for everything else.
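One classic instance of that hybrid idea is iterative refinement: do the solves in hardware floats, but compute the residual (the vital place) in extended or exact arithmetic. The 2x2 example and function names below are my own sketch, not anyone's actual implementation:

```python
from fractions import Fraction

def solve2(A, b):
    """Solve a 2x2 system by Gaussian elimination in ordinary hardware
    floats (the fast path that keeps most of the speed)."""
    (a11, a12), (a21, a22) = A
    m = a21 / a11
    x2 = (b[1] - m * b[0]) / (a22 - m * a12)
    x1 = (b[0] - a12 * x2) / a11
    return [x1, x2]

def refine(A, b, x):
    """One step of iterative refinement: the residual r = b - A*x is the
    numerically vital spot, so compute it in exact rational arithmetic,
    then solve the cheap correction system A*d = r back in floats."""
    r = [float(Fraction(b[i]) - sum(Fraction(A[i][j]) * Fraction(x[j])
                                    for j in range(2)))
         for i in range(2)]
    d = solve2(A, r)
    return [x[i] + d[i] for i in range(2)]

# An ill-conditioned 2x2 system whose solution is approximately [1, 1]:
A = [[1.0, 1.0], [1.0, 1.0 + 1e-10]]
b = [2.0, 2.0 + 1e-10]
x = refine(A, b, solve2(A, b))   # refined solution, approximately [1.0, 1.0]
```

The expensive exact arithmetic touches only the residual computation; the two triangular solves stay in hardware precision.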

Well, I didn't think it would be easy, but it is perhaps much easier than writing a high-quality optimization solver from scratch. There is a tremendous number of non-obvious subtleties that make a huge difference in actual performance and reliability, and getting those right is not easy.

SDPA is available in the variant SDPA-GMP (SDPA with arbitrary precision arithmetic): http://sdpa.sourceforge.net/index.html . However, SDPA is not the most reliable LMI solver (not in Mosek's league).

quadMINOS is a quadruple (not arbitrary) precision version of MINOS, which is available commercially (https://www.gams.com/latest/docs/S_MINOS.html). See also https://web.stanford.edu/group/SOL/talks/14beijingQuadMINOS.pdf , including the FAQ on the last page, which as of Dec 2016 states there is no MATLAB version of quadMINOS, and also that there is quadSNOPT ("in f90 snopt9 we can change 1 line"), i.e., a quad precision version of the SNOPT continuous-variable smooth nonlinear optimization solver.

As for optimization solvers written in MATLAB, there is the free solver PENLAB, which can handle nonlinear SDPs (none of the solvers listed in my post above address nonlinear SDPs), including Bilinear Matrix Inequalities (BMIs), as well as nonlinear constraints such as FMINCON addresses. It is a free, lower-performing version of the commercial solver PENNON, now distributed via NAG in the NAG C Library. Unfortunately, PENLAB is riddled with bugs and frequently hangs, unlike PENNON or its other sibling, PENBMI (which addresses BMIs rather than fully general nonlinear SDPs), both written in C/C++ and having many fewer bugs than PENLAB. Supposedly a new version of PENLAB is under development, or is at least being considered, but I don't know its status. I wouldn't invest much effort in trying to make an arbitrary precision version of PENLAB until there's a relatively bug-free, or at least "usable", version available as a starting point.

A version of PENNON packaged to be called from MATLAB will supposedly be available in NAG Toolbox Mark 27 in summer 2019 - however, as of 2 years ago it was going to come out 1.3.4 years ago, so who knows. Although PENLAB and PENNON address standard continuous nonlinear programming (in addition to SDP constraints), they can't hold a candle to KNITRO when solving problems addressable by KNITRO (or FMINCON), so they are not a good choice except for problems having SDP constraints, possibly in addition to other constraints.

Also, I forgot to mention in my post above that KNITRO has an SR1 quasi-Newton option, in addition to BFGS and L-BFGS, whereas FMINCON does not offer an SR1 option. The SR1 update, when used in a trust region as in KNITRO, is better able to handle sustained non-convexities in the Lagrangian on the path to the optimum: because it does not enforce (maintain) positive semidefiniteness of the approximation of the Hessian of the Lagrangian, as BFGS does, SR1 can quickly adjust its Hessian eigenvalues to match the curvature it encounters. But if your problem is convex, you're probably better off with BFGS.
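A tiny numerical sketch of why SR1 behaves this way (my own toy code, not KNITRO's): the SR1 formula H+ = H + v v^T / (v^T s), with v = y - H s, can drive an eigenvalue of the Hessian approximation negative, which BFGS by construction never does:

```python
# Toy 2x2 demonstration that the SR1 quasi-Newton update allows the
# Hessian approximation to become indefinite (track negative curvature).

def sr1_update(H, s, y):
    """SR1: H+ = H + (v v^T) / (v^T s), where v = y - H*s.
    (Real implementations skip the update when |v^T s| is tiny.)"""
    v = [y[i] - sum(H[i][j] * s[j] for j in range(2)) for i in range(2)]
    denom = sum(v[i] * s[i] for i in range(2))
    return [[H[i][j] + v[i] * v[j] / denom for j in range(2)]
            for i in range(2)]

def eigvals2(H):
    """Eigenvalues of a symmetric 2x2 matrix via the quadratic formula."""
    tr = H[0][0] + H[1][1]
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    disc = (tr * tr / 4 - det) ** 0.5
    return (tr / 2 - disc, tr / 2 + disc)

H = [[1.0, 0.0], [0.0, 1.0]]   # start from the identity (positive definite)
s = [1.0, 0.0]                 # step taken
y = [-2.0, 0.0]                # gradient change indicating negative curvature
H = sr1_update(H, s, y)
print(eigvals2(H))             # one eigenvalue is now negative: (-2.0, 1.0)
```

A BFGS update with the same (s, y) pair would be skipped or damped, since y^T s < 0 violates the curvature condition BFGS needs to stay positive definite.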
