Quadratic Programming
PolyMPC solves quadratic programs with two-sided linear inequality constraints of the form:

\[
\begin{aligned}
\min_{x} \quad & \tfrac{1}{2} x^T P x + q^T x \\
\text{s.t.} \quad & l \leq Ax \leq u \\
& x_l \leq x \leq x_u
\end{aligned}
\]

where \(x \in \mathbb{R}^n\) is the optimisation variable, \(x_l \in (\mathbb{R} \cup \{-\infty\})^n\) and \(x_u \in (\mathbb{R} \cup \{+\infty\})^n\) are box constraints, \(P \in \mathbb{S}^n_+\) is positive semidefinite, \(q \in \mathbb{R}^n\), \(A \in \mathbb{R}^{m \times n}\), \(l \in (\mathbb{R} \cup \{-\infty\})^m\) and \(u \in (\mathbb{R} \cup \{+\infty\})^m\). Equality constraints are encoded by setting \(l_i = u_i\).
The software tool was originally designed for real-time embedded control of robotic and mechatronic systems. These design requirements motivate the choice of QP algorithms: we prefer methods with a low cost per iteration and few linear system factorisations, which are often suitable for problems where moderate accuracy is sufficient.
Numerical Methods
Alternating Direction Method of Multipliers (ADMM)
The ADMM algorithm was chosen for its cheap and well-vectorisable iterations, which are particularly suitable for embedded applications. We first present a standard splitting approach suggested in the literature [Boyd2011], [Stellato2020], [Stathopoulos2016]. The key ingredients of the method (shared with other operator splitting methods) are:
Splitting step. Assume for brevity that the box constraints are appended to the general polytopic constraints, i.e. \(\tilde{A} = [A^T \; I]^T\); the idea is to “separate” the unconstrained quadratic cost function from the inequality constraints by introducing an auxiliary variable \(z\):

\[
\begin{aligned}
\min_{x,\,z} \quad & \tfrac{1}{2} x^T P x + q^T x \\
\text{s.t.} \quad & \tilde{A}x = z, \quad z \in \mathcal{C} = [l, u]
\end{aligned}
\]
Equivalent QP. Arguably, the main complexity of quadratic programming lies in handling the inequality constraints, which gives rise to the various solution techniques. In splitting methods, we formulate an equivalent “easy” equality-constrained QP by adding indicator functions to the cost:

\[
\begin{aligned}
\min_{(x,z),\,(\tilde{x},\tilde{z})} \quad & \tfrac{1}{2} \tilde{x}^T P \tilde{x} + q^T \tilde{x} + \mathcal{I}_{\tilde{A}x=z}(\tilde{x},\tilde{z}) + \mathcal{I}_{\mathcal{C}}(z) \\
\text{s.t.} \quad & (\tilde{x},\tilde{z}) = (x,z)
\end{aligned}
\]

where \(x \in \mathbb{R}^n\), \(z \in \mathbb{R}^{m+n}\), \(\mathcal{I}_{\tilde{A}x=z}\) is the indicator function of the set \(\{(x,z) \mid \tilde{A}x = z \}\), which enforces \(\tilde{A}\tilde{x}=\tilde{z}\), and \(\mathcal{I}_{\mathcal{C}}\) is the indicator function of the set \(\mathcal{C} = [l,u] = \{z \mid l_i \leq z_i \leq u_i,\; i = 1, \dots, m+n \}\), which enforces \(l \leq z \leq u\).
The augmented Lagrangian for this equivalent problem has the following form (after some algebraic manipulations):

\[
\mathcal{L}(\tilde{x},\tilde{z},x,z,w,y) = \tfrac{1}{2}\tilde{x}^T P \tilde{x} + q^T \tilde{x} + \mathcal{I}_{\tilde{A}x=z}(\tilde{x},\tilde{z}) + \mathcal{I}_{\mathcal{C}}(z) + \tfrac{\sigma}{2}\Vert \tilde{x} - x + \sigma^{-1} w \Vert_2^2 + \tfrac{\rho}{2}\Vert \tilde{z} - z + \rho^{-1} y \Vert_2^2
\]
with dual variables \((w,y)\) and corresponding penalty parameters \((\sigma,\rho) > 0\).
Alternating Minimisation. Finally, as the name suggests, the method minimises the augmented Lagrangian by alternating between the variables \(x,\tilde{x},z, \tilde{z}\) (and the dual variables \(w,y\)). Using the augmented Lagrangian has several benefits. First, it relaxes the requirement of strict convexity of the quadratic cost, thereby allowing a positive semi-definite cost. Second, operator splitting theory provides efficient expressions for evaluating certain proximal operators; in our case, the proximal operator of the indicator functions is a simple box projection. The basic ADMM iterations can therefore be summarised as below:
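Following the splitting of [Stellato2020], one iteration can be sketched as follows (a sketch for orientation; the exact update order and notation in the implementation may differ slightly):

\[
\begin{aligned}
&\text{(1) solve} \quad
\begin{bmatrix} P + \sigma I & \tilde{A}^T \\ \tilde{A} & -\rho^{-1} I \end{bmatrix}
\begin{bmatrix} \tilde{x}^{k+1} \\ \nu^{k+1} \end{bmatrix}
=
\begin{bmatrix} \sigma x^k - q \\ z^k - \rho^{-1} y^k \end{bmatrix} \\
&\text{(2)} \quad \tilde{z}^{k+1} = z^k + \rho^{-1}(\nu^{k+1} - y^k) \\
&\text{(3)} \quad x^{k+1} = \alpha \tilde{x}^{k+1} + (1 - \alpha) x^k \\
&\text{(4)} \quad z^{k+1} = \Pi_{\mathcal{C}}\!\left(\alpha \tilde{z}^{k+1} + (1 - \alpha) z^k + \rho^{-1} y^k\right) \\
&\text{(5)} \quad y^{k+1} = y^k + \rho\left(\alpha \tilde{z}^{k+1} + (1 - \alpha) z^k - z^{k+1}\right)
\end{aligned}
\]

where \(\Pi_{\mathcal{C}}\) denotes the Euclidean projection onto the box \([l,u]\) and \(\alpha\) is the overrelaxation parameter.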
The method iterates until the primal and dual residuals satisfy user-specified tolerances: \(r_{prim} = \Vert Ax - z \Vert_\infty\) and \(r_{dual} = \Vert Px + q + A^Ty \Vert_\infty\).
boxADMM
For dense problems with box constraints, the splitting described above is not very efficient because of the potentially large number of zero entries in \(A\) and in the linear system of step (1) of the ADMM iterations that have to be stored. We therefore implement an additional splitting, boxADMM, which aims at lower memory consumption and better vectorisation. The method treats general polytopic and box constraints separately, which results in better performance, especially for dense QPs.
Interfacing Other Solvers
PolyMPC provides a unified interface to some established codes for solving QP problems.
OSQP
OSQP is accessed via the osqp-eigen interface. OSQP is based on the ADMM framework described in Numerical Methods and additionally features infeasibility detection and solution polishing (in case a more accurate solution is required).
Note
The OSQP solver supports computations with sparse matrices only. An attempt to create a dense OSQP solver interface will result in a compilation error. The user is free to convert dense matrices to sparse ones using standard Eigen methods before passing them to the solver.
When vectorisation is enabled, the user can expect better performance from the ADMM implementation in PolyMPC. Furthermore, OSQP only supports allocation-free operation in code-generation mode, which is not available through the PolyMPC interface.
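The dense-to-sparse conversion mentioned in the note can be done with Eigen's sparseView() method (a minimal sketch, assuming Eigen is available and using the example problem data from this page):

```cpp
#include <Eigen/Dense>
#include <Eigen/Sparse>

int main()
{
    Eigen::MatrixXd H(2, 2);
    H << 4, 1,
         1, 2;

    // sparseView() builds a sparse copy of the dense matrix; an optional
    // reference/epsilon argument controls which near-zero entries are pruned
    Eigen::SparseMatrix<double> H_sparse = H.sparseView();
    return 0;
}
```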
QPMAD
QPMAD is a header-only, Eigen-based implementation of the Goldfarb-Idnani dual active-set method [Goldfarb1983].
Note
The QPMAD solver supports only dense matrices. An attempt to create a sparse QPMAD solver interface will result in a compilation error.
Modelling Quadratic Programs
To solve a QP with PolyMPC, you need to specify several details about the problem at compile time. This is necessary for memory management and performance optimisation.
Problem Dimensions
: The user needs to specify the number of optimisation variables N and the number of generic linear constraints (including equality constraints) M. The dimension of the box constraints coincides with the optimisation variable dimension.

Scalar Type [Optional, Default: double]
: single (float) or double precision floating-point types; complex and user-defined types are allowed (if compatible with the Eigen algebraic kernel) but not tested.

Matrix Format [Optional, Default: DENSE]
: possible values: DENSE and SPARSE. Since dense and sparse matrices in Eigen have slightly different interfaces, this option controls compilation of a specific implementation and its optimisations.

Linear System Solver [Optional, Default: Eigen::LDLT (DENSE) | Eigen::SimplicialLDLT (SPARSE)]
: the QP solvers implemented in PolyMPC support the direct and iterative, dense and sparse linear solvers available in Eigen. The user can also provide a custom linear solver, provided it is derived from the Eigen base solver classes and exposes the same interface.
All QP solvers in PolyMPC are derived from the QPBase
class:
template<int N, int M, typename Scalar = double, int MatrixType = DENSE,
template <typename, int, typename... Args> class LinearSolver = linear_solver_traits<DENSE>::default_solver,
int LinearSolver_UpLo = Eigen::Lower, typename ...Args>
class ADMM : public QPBase<ADMM<N, M, Scalar, MatrixType, LinearSolver, LinearSolver_UpLo>, N, M, Scalar, MatrixType, LinearSolver, LinearSolver_UpLo>
{
...
};
This fairly cumbersome construction allows passing (any) additional arguments (through the parameter pack Args) that a linear solver might require. It further allows creating aliases for the interface types:
qp_var_t
: optimisation vector type (static Nx1 vector)

qp_dual_t
: dual variable (Lagrange multipliers) type (static (N+M)x1 vector); access: entries (0..M) are the multipliers associated with the general constraints, entries (M..M+N) are the multipliers associated with the box constraints

qp_hessian_t
: Hessian \(P\) of the cost function; a dense or sparse NxN matrix
Note
For small and moderate-size problems, qp_hessian_t and the other internally created matrices will be static-size matrices; for larger problems they can go on the heap (dynamic-size matrices). This behaviour can be controlled via the compiler definition EIGEN_STACK_ALLOCATION_LIMIT. Normally, this only affects internal behaviour.
qp_constraint_t
: constraint Jacobian \(A\); a dense or sparse MxN matrix

qp_dual_a_t
: dual variable associated with the general constraints (static Mx1 vector)
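The EIGEN_STACK_ALLOCATION_LIMIT definition mentioned in the note above must be set before including any Eigen header; a configuration sketch (the threshold of 20000 bytes is an arbitrary example value):

```cpp
// Must be defined before any Eigen header is included. Static-size matrices
// whose byte size exceeds this limit are allocated on the heap instead of
// the stack; defining it as 0 disables stack allocation entirely.
#define EIGEN_STACK_ALLOCATION_LIMIT 20000
#include <Eigen/Dense>
```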
- status_t QPBase::solve
The class QPBase provides the solve() method, which statically dispatches (via CRTP) to the user-defined solve_impl() implementation:
status_t solve(const Eigen::Ref<const qp_hessian_t>& H, const Eigen::Ref<const qp_var_t>& h,
               const Eigen::Ref<const qp_constraint_t>& A,
               const Eigen::Ref<const qp_dual_a_t>& Alb, const Eigen::Ref<const qp_dual_a_t>& Aub,
               const Eigen::Ref<const qp_var_t>& xlb, const Eigen::Ref<const qp_var_t>& xub) noexcept
{
    return static_cast<Derived*>(this)->solve_impl(H, h, A, Alb, Aub, xlb, xub);
}
This solves the following problem:

\[
\begin{aligned}
\min_{x} \quad & \tfrac{1}{2} x^T H x + h^T x \\
\text{s.t.} \quad & Alb \leq Ax \leq Aub \\
& xlb \leq x \leq xub
\end{aligned}
\]
Examples
Let us now consider several examples that demonstrate the interface of the solvers. Assume we need to solve a simple QP.
Simple QP: basic ADMM solver
To solve this problem with PolyMPC, one might write a simple program:
#include "solvers/admm.hpp"

int main(void)
{
    using Scalar = double;

    Eigen::Matrix<Scalar, 2, 2> H;
    Eigen::Matrix<Scalar, 2, 1> h;
    Eigen::Matrix<Scalar, 1, 2> A;
    Eigen::Matrix<Scalar, 1, 1> al, au;
    Eigen::Matrix<Scalar, 2, 1> xl, xu, solution;

    H << 4, 1,
         1, 2;
    h << 1, 1;
    A << 1, 1;
    al << 1; xl << 0, 0;
    au << 1; xu << 0.7, 0.7;
    solution << 0.3, 0.7;

    const int N = 2; // number of optimisation variables
    const int M = 1; // number of generic constraints

    /** here further template arguments are omitted, and default values are used: dense matrices and LDLT linear solver */
    ADMM<N, M, Scalar> solver;
    solver.solve(H, h, A, al, au, xl, xu);
    Eigen::Vector2d sol = solver.primal_solution();
}
Simple QP: boxADMM
boxADMM
solver can be created in a similar manner:
#include "solvers/box_admm.hpp"
...
const int N = 2; // number of optimisation variables
const int M = 1; // number of generic constraints
/** here further template arguments are omitted, and default values are used: dense matrices and LDLT linear solver */
boxADMM<N, M, Scalar> solver;
/** the user can now solve any positive semi-definite QP that has N variables and M constraints */
solver.solve(H, h, A, al, au, xl, xu);
Eigen::Vector2d sol = solver.primal_solution();
SimpleQP: Iterative Linear Solver
Now, for the sake of example, let's pretend that we need an iterative solver, for instance the Conjugate Gradient method, to solve our problem. Moreover, suppose the problem is nicely scaled (or we do not have enough memory on our chip), so single-precision arithmetic is sufficient for our purposes, and the Hessian is symmetric. Generally, iterative solvers should be used for large sparse problems (preferably well-conditioned) where a direct factorisation is expensive.
#include "solvers/box_admm.hpp"

int main(void)
{
    using Scalar = float;

    Eigen::Matrix<Scalar, 2, 2> H;
    Eigen::Matrix<Scalar, 2, 1> h;
    Eigen::Matrix<Scalar, 1, 2> A;
    Eigen::Matrix<Scalar, 1, 1> al, au;
    Eigen::Matrix<Scalar, 2, 1> xl, xu, solution;

    H << 4, 1,
         1, 2;
    h << 1, 1;
    A << 1, 1;
    al << 1; xl << 0, 0;
    au << 1; xu << 0.7, 0.7;
    solution << 0.3, 0.7;

    const int N = 2; // number of optimisation variables
    const int M = 1; // number of generic constraints

    /** tell boxADMM to use the ConjugateGradient solver available in Eigen (Eigen::ConjugateGradient)
        and tell it the H matrix is symmetric (Eigen::Lower | Eigen::Upper) */
    boxADMM<N, M, Scalar, DENSE, Eigen::ConjugateGradient, Eigen::Lower | Eigen::Upper> solver;
    solver.solve(H, h, A, al, au, xl, xu);
    Eigen::Vector2f sol = solver.primal_solution();
}
Note
In all previous examples, the \(H\) and \(A\) matrices are statically sized. It is possible, however, to provide dynamically allocated matrices, e.g. MatrixXd. The user should make sure that the memory is properly allocated, as PolyMPC does not perform any data consistency checks in RELEASE mode.
SimpleQP: Sparse Matrices
The problem we are considering is dense. Let us again, for the sake of demonstration, pretend that the data in our problem is sparse.
#include "solvers/box_admm.hpp"

int main(void)
{
    using Scalar = double;

    Eigen::SparseMatrix<Scalar> H(2, 2);
    Eigen::SparseMatrix<Scalar> A(1, 2);
    Eigen::Matrix<Scalar, 2, 1> h;
    Eigen::Matrix<Scalar, 1, 1> al, au;
    Eigen::Matrix<Scalar, 2, 1> xl, xu, solution;

    /** reserve memory and fill in the matrices */
    H.reserve(2); H.insert(0,0) = 4; H.insert(0,1) = 1; H.insert(1,0) = 1; H.insert(1,1) = 2;
    A.reserve(1); A.insert(0,0) = 1; A.insert(0,1) = 1;
    h << 1, 1;
    al << 1; xl << 0, 0;
    au << 1; xu << 0.7, 0.7;
    solution << 0.3, 0.7;

    const int N = 2; // number of optimisation variables
    const int M = 1; // number of generic constraints

    /** tell boxADMM to use sparse linear algebra and the (default) SimplicialLDLT method */
    boxADMM<N, M, Scalar, SPARSE, linear_solver_traits<SPARSE>::default_solver> solver;
    solver.solve(H, h, A, al, au, xl, xu);
    Eigen::Vector2d sol = solver.primal_solution();
}
Note
PolyMPC does not assume any particular structure of the sparse matrices. It is the responsibility of the user to fill in the data correctly; the sparsity pattern will be inferred automatically, and therefore the matrices are assumed to be in uncompressed form (the default in Eigen). If the sparsity pattern of the problem does not change between solves (only the values of the entries do), it is possible to set solver.settings().reuse_pattern = true;, which will skip the memory check and allocation step. This feature significantly improves performance. A full set of available options can be found in the Settings section.
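A possible call sequence (a fragment sketching the workflow with the solver type and the reuse_pattern setting shown above; problem data as in the previous example):

```cpp
boxADMM<N, M, Scalar, SPARSE> solver;

/** first solve: the sparsity pattern is analysed and memory is allocated */
solver.solve(H, h, A, al, au, xl, xu);

/** subsequent solves with an unchanged sparsity pattern (only entry values
    differ) can skip the memory check and allocation step */
solver.settings().reuse_pattern = true;
solver.solve(H, h, A, al, au, xl, xu);
```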
SimpleQP: Sparse Iterative Solver
Iterative linear solvers can be used with the sparse QP solvers in the same way (for the problem setup, see the previous example).
#include "solvers/admm.hpp"

...

ADMM<N, M, Scalar, SPARSE, Eigen::ConjugateGradient, Eigen::Lower | Eigen::Upper> solver;
/** or: boxADMM<N, M, Scalar, SPARSE, Eigen::ConjugateGradient, Eigen::Lower | Eigen::Upper> solver; */
solver.solve(H, h, A, al, au, xl, xu);
Eigen::Vector2d sol = solver.primal_solution();
SimpleQP: OSQP Interface
To use the OSQP interface, make sure that OSQP itself and osqp-eigen are installed. You will also need to link your executable to the OsqpEigen::OsqpEigen target.

osqp_test.cpp:
#include "solvers/osqp_interface.hpp"

int main(void)
{
    using Scalar = double;

    Eigen::SparseMatrix<Scalar> H(2, 2);
    Eigen::SparseMatrix<Scalar> A(1, 2);
    Eigen::Matrix<Scalar, 2, 1> h;
    Eigen::Matrix<Scalar, 1, 1> al, au;
    Eigen::Matrix<Scalar, 2, 1> xl, xu, solution;

    /** reserve memory and fill in the matrices */
    H.reserve(2); H.insert(0,0) = 4; H.insert(0,1) = 1; H.insert(1,0) = 1; H.insert(1,1) = 2;
    A.reserve(1); A.insert(0,0) = 1; A.insert(0,1) = 1;
    h << 1, 1;
    al << 1; xl << 0, 0;
    au << 1; xu << 0.7, 0.7;
    solution << 0.3, 0.7;

    const int N = 2; // number of optimisation variables
    const int M = 1; // number of generic constraints

    polympc::OSQP<N, M, Scalar> solver;
    solver.solve(H, h, A, al, au, xl, xu);
    Eigen::Vector2d sol = solver.primal_solution();
}
In your CMakeLists.txt:
find_package(OsqpEigen)
add_executable(osqp_test osqp_test.cpp)
target_link_libraries(osqp_test OsqpEigen::OsqpEigen)
SimpleQP: QPMAD Interface
Make sure that QPMAD is installed.
#include "solvers/qpmad_interface.hpp"

int main(void)
{
    using Scalar = double;

    Eigen::Matrix<Scalar, 2, 2> H;
    Eigen::Matrix<Scalar, 2, 1> h;
    Eigen::Matrix<Scalar, 1, 2> A;
    Eigen::Matrix<Scalar, 1, 1> al, au;
    Eigen::Matrix<Scalar, 2, 1> xl, xu, solution;

    H << 4, 1,
         1, 2;
    h << 1, 1;
    A << 1, 1;
    al << 1; xl << 0, 0;
    au << 1; xu << 0.7, 0.7;
    solution << 0.3, 0.7;

    const int N = 2; // number of optimisation variables
    const int M = 1; // number of generic constraints

    /** QPMAD accepts only the problem dimensions and the scalar type as template parameters */
    polympc::QPMAD<N, M, Scalar> solver;
    solver.solve(H, h, A, al, au, xl, xu);
    Eigen::Vector2d sol = solver.primal_solution();
}
Note
The QPMAD and OSQP interfaces accept only the problem dimensions and the scalar type as template parameters.
Solver Settings
- const QPBase::settings_t& settings() const noexcept
- QPBase::settings_t& settings() noexcept
The settings of all solvers can be accessed for reading and writing using the settings() function, which returns the structure settings_t = qp_solver_settings_t<scalar_t> containing the settings for all available solvers. The settings for each QP solver are summarised in the tables below.
Common Settings
| Setting | Description | Allowed values | Default value |
|---|---|---|---|
| | Maximum number of iterations | 0 < | 1000 |
| | Absolute tolerance | 0 <= | 1e-03 |
| | Relative tolerance | 0 <= | 1e-03 |
| | Verbose output | true/false | false |
| | Warm start solver | true/false | false |
| | Skip sparsity estimation and memory allocation step | true/false | false |
Settings for ADMM-based Solvers
| Setting | Description | Allowed values | Default value |
|---|---|---|---|
| | ADMM rho step | 0 < | 0.1 |
| | Adaptive rho | true/false | true |
| | Adapt rho every N-th iteration | 0 (automatic) or 0 < | 25 |
| | Tolerance for adapting rho | 1 <= | 5 |
| | ADMM sigma step | 0 < | 1e-06 |
| | ADMM overrelaxation parameter | 0 < | 1.0 |
| | Check termination after N-th iteration | 0 (disabled) or 0 < | 25 |
Settings Specific to OSQP
| Setting | Description | Allowed values | Default value |
|---|---|---|---|
| | Primal infeasibility tolerance | 0 <= | 1e-04 |
| | Dual infeasibility tolerance | 0 <= | 1e-04 |
| | Polishing regularization parameter | 0 < | 1e-06 |
| | Perform polishing | true/false | false |
| | Number of scaling iterations | 0 (disabled) or 0 < | 10 |
| | Refinement iterations in polish | 0 < | 3 |
| | Scaled termination conditions | true/false | false |
| | Adaptive rho interval as fraction of setup time (auto mode) | 0 < | 0.4 |
| | Run time limit in seconds | 0 (disabled) or 0 <= | 0 |
| | Linear systems solver type | 0 = LDLT, 1 = Pardiso | LDLT |
Settings Specific to QPMAD
| Setting | Description | Allowed values | Default value |
|---|---|---|---|
| | Hessian structure | 0: undefined, 1: lower_triangular, 2: cholesky_factor (int) | 1 |
Solver Status
The software provides a unified status for all solvers defined in the qp_status_t
enum.
| status_t |
|---|
| SOLVED |
| MAX_ITER_EXCEEDED |
| UNSOLVED |
| UNINITIALIZED |
| INCONSISTENT |
Solver Info
Some statistics of the QP solver, including the number of iterations, the primal and dual residuals, and the solver status, can be accessed with the info() function.