Pseudoinverse solvers for least squares problems (in robotics)
Quadratic Programming
A wide range of optimization problems can be cast as a quadratic program of the following form.
This generic and declarative formulation allows for many types of solvers that are based on policies such as:
Choice of constants:
Some \(\vect{A}\) may be identity matrices
Some \(\vect{b}\) may be zero vectors
Some \(\vect{W}\) may be identity matrices
Weight (soft objective) vs. prioritization (hard constraint)
For example, representing the physical system as an objective (cf. Stack of Tasks) vs. as a constraint (cf. Gauss' principle of least constraint)
Multi-level prioritization hierarchy (via nullspace projections)
Number and type of constraints:
No equality constraints, no inequality constraints \(\rightarrow\) analytic solver (unconstrained optimization)
Some equality constraints, no inequality constraints \(\rightarrow\) analytic solver (via Lagrange multipliers to unconstrained optimization)
Some inequality constraints \(\rightarrow\) iterative/numerical solver
Damping or regularization
Implementation of matrix decomposition:
Choice of solver: LU, Cholesky, QR, SVD, …
Solver-specific choices, e.g. “type of SVD” (Householder reflection vs. Givens rotation), or maximum number of iterations in SVD
Here, we focus on a special family of solvers that afford an analytical solution relying on matrix pseudoinverses.
Variations of pseudoinverse solvers
We classify the solution approaches by the number of equations and variables in the system of equations. The system can be overdetermined (more equations than variables) or underdetermined (fewer equations than variables). The case with the same number of equations as variables is not discussed here. Most of the information below is based on [BenIsrael2003]. A discussion is also available here.
Overdetermined systems
The system has more equations than unknowns
\(\vect{A}\) has full column rank (and hence cannot have full row rank)
There is no (exact) solution
We can find an approximate solution (w.r.t. a \(\vect{W}\)-norm in the “\(f(\vect{x})\)-space”)
The system enters as an objective
Solve via \(\{\vect{W}\}\)-weighted, left pseudoinverse
Robotic example: a 5-DoF manipulator performing a 6-DoF Cartesian task or a Kelo platform with a single wheel unit
The manipulator is always singular for that task
The “\(f(\vect{x})\)-space” is the Cartesian velocity-twist space
\(\vect{W}\) is a weight matrix in Cartesian velocity-twist space
(Ordinary) Least squares
(Ordinary) least squares is provided only for completeness and as a “baseline” to which other problems can be reduced. For many physical systems, for example those mixing linear and angular quantities, it is not a correct choice because it leads to inconsistent physical units. The “correct” solution is to introduce (relative) weights for the linear and angular quantities.
Solve the minimization by (i) equating the derivative to the zero vector; and (ii) solving for \(\vect{x}\):
where \(\vect{A}^\dagger = (\vect{A}^T \vect{A})^{-1} \vect{A}^T\) is the Moore-Penrose pseudoinverse (a left inverse).
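As an illustration, a minimal numpy sketch of the left-pseudoinverse solution \(\vect{x} = \vect{A}^\dagger \vect{b}\) (the matrices and vectors below are placeholders; in practice a decomposition-based solver is preferable, see the numerical-implementation section):

```python
import numpy as np

# Overdetermined system: more equations (rows) than unknowns (columns).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.5])

# Left pseudoinverse A^+ = (A^T A)^{-1} A^T, written via the normal equations
# for clarity only.
x = np.linalg.solve(A.T @ A, A.T @ b)

# Cross-check against numpy's decomposition-based least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```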
Weighted least squares
Here, we assume an arbitrary, positive-definite weight matrix \(\vect{W}\) that acts on the “\(\vect{b}\)-space”. This problem is sometimes also referred to as generalized least squares and the term weighted least squares is reserved for problems with diagonal weight matrices.
Solve similar to previous cases:
where \(\vect{A}^\dagger_{\vect{W}} = (\vect{A}^T \vect{W} \vect{A})^{-1} \vect{A}^T \vect{W}\) is the \(\{\vect{W}\}\)-weighted Moore-Penrose pseudoinverse (a left inverse).
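A minimal sketch of the weighted variant, with a placeholder weight matrix \(\vect{W}\) acting in the “\(\vect{b}\)-space”:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.5])
W = np.diag([1.0, 1.0, 10.0])   # placeholder weights; must be positive definite

# W-weighted left pseudoinverse: x = (A^T W A)^{-1} A^T W b
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```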
Underdetermined systems
The system has fewer equations than unknowns
\(\vect{A}\) has full row rank (and hence cannot have full column rank)
There are (infinitely) many solutions
We can find a minimum-norm solution among all existing solutions (w.r.t. a \(\vect{Q}\)-norm in the “\(\vect{x}\)-space”)
The system enters as a constraint
Solve via \(\{\vect{Q}^{-1}\}\)-weighted, right pseudoinverse
Robotic example: a 7-DoF manipulator performing a 6-DoF Cartesian task or a Kelo platform with more than one wheel unit
The manipulator is sometimes redundant for that task
The “\(\vect{x}\)-space” is the joint-velocity space
\(\vect{Q}\) is a weight matrix in joint-velocity space
Minimum-norm least squares or least norm
Among the set of Eq. \(\eqref{eq:lsq-set}\) we may be interested in a specific solution, that of minimum norm, i.e. the solution where \(\vect{x}\) (denoted \(\hat{\vect{x}}\)) is “shortest”. This can be represented by a cascaded or staged optimization problem:
Todo
Is there a better notation for this?
Equivalently, this is represented by the following problem:
Construct an unconstrained optimization problem using the Lagrange multiplier \(\vect{\lambda}\) to embed the constraint into the augmented objective (also known as Lagrangian):
Solve the minimization by (i) equating the derivative to the zero vector; (ii) substituting the constraint equation; and (iii) solving for the Lagrange multiplier:
Substitute the Lagrange multiplier into Eq. \(\eqref{eq:lsq-min-norm-deriv}\) and solve for \(\vect{x}\):
where \(\vect{A}^\dagger = \vect{A}^T (\vect{A} \vect{A}^T)^{-1}\) is the Moore-Penrose pseudoinverse (a right inverse). This is the (unique) minimum-norm least-squares solution. However, there exists a family/set of solutions that satisfy
where \(\vect{N} = \vect{I} - \vect{A}^{\dagger} \vect{A}\) is an orthogonal nullspace projector and \(\bar{\vect{x}}\) an arbitrary vector of appropriate dimension.
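A short numpy sketch (placeholder values) of the right pseudoinverse, the minimum-norm solution and the nullspace projection that generates the remaining solutions:

```python
import numpy as np

# Underdetermined system: fewer equations than unknowns, A has full row rank.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

# Right pseudoinverse A^+ = A^T (A A^T)^{-1}; minimum-norm solution x = A^+ b.
A_pinv = A.T @ np.linalg.inv(A @ A.T)
x_min = A_pinv @ b

# Nullspace projector N = I - A^+ A: adding N @ x_bar for any x_bar keeps A x = b.
N = np.eye(A.shape[1]) - A_pinv @ A
x_bar = np.array([3.0, -1.0, 0.5])        # arbitrary preference vector
x = x_min + N @ x_bar
assert np.allclose(A @ x, b)
```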
Minimum-norm weighted least squares
The underlying optimization problem is formulated as
In analogy to Eq. \(\eqref{eq:lsq-min-norm}\), the minimum-norm weighted least-squares solution for the underdetermined case is
Just like before, this is the minimum-norm solution but there exists an infinite amount of solutions of the form
where \(\vect{N}_{\vect{Q}^{-1}} = \vect{I} - \vect{A}^\dagger_{\vect{Q}^{-1}} \vect{A}\) is an oblique nullspace projector and \(\bar{\vect{x}}\) is any vector of appropriate dimension.
Minimum-reference weighted least squares
This problem is an extension of the minimum-norm weighted least-squares problem in that it minimizes with respect to a potentially non-zero reference state.
The augmented, unconstrained problem is analogous to before:
So is the solution for the Lagrange multiplier:
Substitute the last equation into Eq. \(\eqref{eq:lsq-ref-deriv}\) (with \(\vect{Q}\) positive definite, hence \(\vect{Q}^T = \vect{Q}\)):
with weighted pseudoinverse (a right inverse) \(\vect{A}^{\dagger}_{\vect{Q}^{-1}} = \vect{Q}^{-1} \vect{A}^T (\vect{A} \vect{Q}^{-1} \vect{A}^T)^{-1}\):
where \(\vect{N}_{\vect{Q}^{-1}} = \vect{I} - \vect{A}^{\dagger}_{\vect{Q}^{-1}} \vect{A}\) is an oblique nullspace projector. Note how this generalized formulation and the derivation of its solution provide better insight into the origin of the family/set of solutions in Eq. \(\eqref{eq:lsq-set}\).
The nullspace projector maps any vector into the nullspace of \(\vect{A}\), i.e. \(\vect{A}\) annihilates the projected vector (using \(\vect{A} \vect{A}^{\dagger}_{\vect{Q}^{-1}} = \vect{I}\)):
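The following sketch (placeholder matrices, hypothetical reference state) illustrates the minimum-reference solution and checks the equivalence of the two forms above:

```python
import numpy as np

# Minimize (x - x_bar)^T Q (x - x_bar) subject to A x = b (full row rank A).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
Q = np.diag([1.0, 2.0, 4.0])              # placeholder, positive definite
x_bar = np.array([0.5, 0.5, 0.5])         # reference state

# Q^{-1}-weighted right pseudoinverse.
Q_inv = np.linalg.inv(Q)
A_pinv_Q = Q_inv @ A.T @ np.linalg.inv(A @ Q_inv @ A.T)

# Equivalent forms: x_bar + A^+ (b - A x_bar)  ==  A^+ b + (I - A^+ A) x_bar
x = x_bar + A_pinv_Q @ (b - A @ x_bar)
N = np.eye(A.shape[1]) - A_pinv_Q @ A
assert np.allclose(x, A_pinv_Q @ b + N @ x_bar)
assert np.allclose(A @ x, b)
```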
Generalization to handle both cases in one formulation
Weighted-minimum norm weighted least squares
Here, we are interested in a setting where (i) the objective is a weighted least squares with the positive-definite matrix \(\vect{W}\) as the norm in the space where vector \(\vect{b}\) lives; (ii) the matrix \(\vect{A}\) is not of full rank; and (iii) the norm in the space where vector \(\vect{x}\) lives is given by a positive-definite matrix \(\vect{Q}\).
This problem can be represented by the following cascaded minimization over, first, the system and, second, the state:
There does not seem to exist a formulation that is equivalent to Eq. \(\eqref{eq:lsq-min-norm-prob}\). Instead, the solution relies on reducing this problem to Eq. \(\eqref{eq:lsq-min-norm}\) via the following transformations according to [BenIsrael2003] (p. 118, Eq. 51):
\(\vect{A}' = \vect{W}^{\frac{1}{2}} \vect{A} \vect{Q}^{-\frac{1}{2}}\)
\(\vect{x}' = \vect{Q}^{\frac{1}{2}} \vect{x}\)
\(\vect{b}' = \vect{W}^{\frac{1}{2}} \vect{b}\)
The following equations show that these transformations do not change the result (i.e. the transformations are “valid”):
Substituting these transformations into the minimum-norm least-squares solution from Eq. \(\eqref{eq:lsq-min-norm}\) yields:
where \(\vect{A}_{(W,Q)}^\dagger = \vect{Q}^{-\frac{1}{2}} (\vect{W}^{\frac{1}{2}} \vect{A} \vect{Q}^{-\frac{1}{2}})^\dagger \vect{W}^{\frac{1}{2}}\) is the \(\{\vect{W},\vect{Q}\}\)-weighted pseudoinverse.
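A sketch of the \(\{\vect{W},\vect{Q}\}\)-weighted pseudoinverse assembled from the transformations above (placeholder matrices; the matrix square root is computed via an eigendecomposition as discussed in the numerical section below):

```python
import numpy as np

def psd_sqrt(M):
    # Square root of a symmetric positive-definite matrix via an eigendecomposition.
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])            # placeholder system matrix
b = np.array([1.0, 2.0])
W = np.diag([1.0, 3.0])                    # norm in the "b-space"
Q = np.diag([1.0, 2.0, 4.0])               # norm in the "x-space"

W_half = psd_sqrt(W)
Q_inv_half = np.linalg.inv(psd_sqrt(Q))

# {W,Q}-weighted pseudoinverse: Q^{-1/2} (W^{1/2} A Q^{-1/2})^+ W^{1/2}
A_wq_pinv = Q_inv_half @ np.linalg.pinv(W_half @ A @ Q_inv_half) @ W_half
x = A_wq_pinv @ b

# For this underdetermined, full-row-rank example the system is satisfied exactly.
assert np.allclose(A @ x, b)
```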
Unified handling of overdetermined and underdetermined systems
The benefit of this formulation is that the same formula can compute minimum-norm solutions for overdetermined and underdetermined systems. In the following we employ the property \((\vect{A} \vect{B} \vect{C})^{-1} = \vect{C}^{-1} \vect{B}^{-1} \vect{A}^{-1}\), which holds for invertible matrices.
For an overdetermined system the solution reduces to the \(\{\vect{W}\}\)-weighted, left pseudoinverse:
For an underdetermined system the solution reduces to the \(\{\vect{Q}^{-1}\}\)-weighted, right pseudoinverse:
Weighted-minimum norm weighted least squares with reference
This problem resembles the previous one but with an added reference configuration \(\bar{\vect{x}}\):
Here we employ the following transformations:
\(\vect{A}' = \vect{W}^{\frac{1}{2}} \vect{A} \vect{Q}^{-\frac{1}{2}}\)
\(\vect{x}' = \vect{Q}^{\frac{1}{2}} (\vect{x} - \bar{\vect{x}})\)
\(\vect{b}' = \vect{W}^{\frac{1}{2}} (\vect{b} - \vect{A} \bar{\vect{x}})\)
These transformations enable the reduction to an (ordinary) least norm problem as follows:
The resulting solution \(\vect{x}\) is:
where \(\vect{N}_{(W,Q)} = \vect{I} - \vect{A}_{(W,Q)}^{\dagger} \vect{A}\) is the nullspace projector associated with the \(\{\vect{W},\vect{Q}\}\)-weighted pseudoinverse.
Regularization
The objective function has contributions from the “system” (\(\vect{A} \vect{x} = \vect{b}\)) and the “state” (\(\vect{x}\)) to handle ill-conditioned problems for overdetermined and underdetermined systems.
Damped least squares or ridge regression
Introduce a scalar weight \(\lambda\) on the “state” vector.
Solve just as before:
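For the standard ridge form \(\vect{x} = (\vect{A}^T \vect{A} + \lambda \vect{I})^{-1} \vect{A}^T \vect{b}\), a minimal numpy sketch (placeholder values, heuristically chosen damping factor):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1e-4],                # nearly rank-deficient direction
              [1.0, 1e-4]])
b = np.array([1.0, 2.0, 2.5])
lam = 1e-2                                 # damping factor, chosen heuristically

# Damped least squares: x = (A^T A + lambda I)^{-1} A^T b
n = A.shape[1]
x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```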
Weighted damped least squares
Here, the weighted damped least squares is given without derivation:
Generalized Tikhonov regularization
The “system” (\(\vect{A} \vect{x} = \vect{b}\)) and the “state” (\(\vect{x}\)) are weighted by the positive-definite matrices \(\vect{W}\) and \(\vect{Q}\), respectively.
Solve just as before:
Generalized Tikhonov regularization with reference
This is a mixture of weighted least squares and minimum-reference weighted least squares. An explanation is also available here.
Solve just as before:
Numerical implementation via matrix decompositions
Refer to [Press1992] and [Golub2013] for more details on the topic. The most relevant matrix decompositions (ordered from highest to lowest numerical robustness, and from lowest to highest computational speed) are:
the singular value decomposition (SVD) \(\vect{A} = \vect{U} \vect{\Sigma} \vect{V}^T\) for an arbitrary matrix \(\vect{A}\)
the QR decomposition \(\vect{A} = \vect{Q} \vect{R}\) for an arbitrary matrix \(\vect{A}\)
the LU decomposition \(\vect{A} = \vect{L} \vect{U}\) for a square matrix \(\vect{A}\)
the Cholesky decomposition \(\vect{A} = \vect{L} \vect{L}^T\) for a symmetric positive-definite matrix \(\vect{A}\)
Only the first two will be discussed in the following.
Note
In a software implementation all matrix inverses \((\cdot)^{-1}\), pseudoinverses \((\cdot)^{\dagger}\) and weighted pseudoinverses \((\cdot)_{\vect{W}}^{\dagger}\) should rely on a matrix decomposition for better numerical robustness. One should never explicitly compute an inverse matrix.
Ordinary least squares
SVD
Given the SVD decomposition of a matrix \(\vect{A}\), the solution to an ordinary least-squares problem is given by (see here):
where \(\vect{\Sigma}^{+}\) is the pseudoinverse of \(\vect{\Sigma}\) where (i) non-zero entries are (scalar-)inverted; and (ii) zero entries are kept zero.
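A numpy sketch of this pseudoinversion of the singular values, including tolerance-based zeroing (placeholder values; the specific tolerance choice is an assumption, not prescribed here):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.5])

# Thin SVD (LAPACK-backed under the hood).
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Pseudoinvert the singular values: invert non-zero entries, keep (numerically)
# zero entries at zero.
tol = max(A.shape) * np.finfo(float).eps * s.max()
s_pinv = np.zeros_like(s)
nonzero = s > tol
s_pinv[nonzero] = 1.0 / s[nonzero]

# x = V Sigma^+ U^T b
x = Vt.T @ (s_pinv * (U.T @ b))
```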
LAPACK provides the routines
DGESVD for computing the SVD of an arbitrary (shape and rank-deficient) matrix
DGELSS and DGELSD for solving ordinary least-squares problems using an SVD of an arbitrary (shape and rank-deficient) matrix
See here, here or here for further LAPACK documentation (in increasing level of detail).
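For reference, SciPy exposes these SVD-based LAPACK drivers directly through scipy.linalg.lstsq (a sketch with placeholder values):

```python
import numpy as np
from scipy.linalg import lstsq

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.5])

# 'gelsd' selects the divide-and-conquer SVD-based driver (DGELSD);
# 'gelss' would select the classic SVD-based driver (DGELSS).
x, residues, rank, sing_vals = lstsq(A, b, lapack_driver='gelsd')
```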
QR decomposition
Given the QR decomposition of a matrix \(\vect{A}\), the solution to an ordinary least-squares problem is (see here and here)
\(\vect{R}\) is upper triangular and non-singular. Hence, this problem is efficiently solvable by back substitution (see [Nocedal2006], p. 432).
LAPACK provides the routines DGEQRF for computing the QR decomposition of an arbitrary matrix, and DGELS for solving least-squares problems with full-rank matrices via a QR (or LQ) decomposition.
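A sketch of the QR-based solution with back substitution (placeholder values; numpy/scipy used in place of direct LAPACK calls):

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.5])

# Thin ("economic") QR decomposition: A = Q R with R square and upper triangular.
Qmat, R = np.linalg.qr(A, mode='reduced')

# Back substitution on the triangular system R x = Q^T b.
x = solve_triangular(R, Qmat.T @ b, lower=False)
```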
Weighted least squares
LAPACK features two routines (see here for further documentation):
DGGGLM supports a weight matrix but lacks support for a reference value \(\bar{\vect{x}}\).
DGGLSE supports a reference value \(\bar{\vect{x}}\) but lacks support for a weight matrix.
Because neither routine supports the full feature set required for the above problems, we review the more fine-grained building blocks. The following approaches are ordered from highest to lowest numerical robustness.
Reduction to ordinary least squares
At first sight, the weighted least-squares problem seems inapplicable to the above decompositions. However, the following transformation allows us to reduce a weighted least-squares problem to an ordinary least-squares problem (see here) \(\|\tilde{\vect{A}} \vect{x} - \tilde{\vect{b}}\|^2\) with
This can then be solved by the SVD or QR decomposition as discussed above.
The square root of a positive-definite matrix \(\vect{W}\) can be computed using an eigendecomposition (see [BenIsrael2003] p. 222 (Ex. 37) or here):
where \(\vect{\Lambda}^{\frac{1}{2}}\) contains the component-wise square roots of the eigenvalues (due to positive-definiteness they are always positive).
LAPACK provides the routines DSYEVR or DSYEV for computing the eigendecomposition of a symmetric matrix. See here for an example and here or here for further LAPACK documentation.
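A sketch of this eigendecomposition-based matrix square root (placeholder matrix; np.linalg.eigh standing in for the LAPACK symmetric eigensolvers):

```python
import numpy as np

W = np.array([[4.0, 1.0],
              [1.0, 3.0]])                 # placeholder, symmetric positive definite

# Eigendecomposition of a symmetric matrix.
eigvals, Z = np.linalg.eigh(W)

# W^{1/2} = Z diag(sqrt(lambda)) Z^T; the eigenvalues are positive by assumption.
W_half = Z @ np.diag(np.sqrt(eigvals)) @ Z.T
assert np.allclose(W_half @ W_half, W)
```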
SVD of weighted matrix
The reference (see here) suggests the following approach:
While computationally more efficient (due to fewer operations), it computes a product of two matrices first, which may reduce numerical robustness.
QR decomposition of weighted matrix
Similar to the SVD decomposition (see here) (but with constraints on the dimension of \(\vect{A}\)):
Regularized least squares
The regularized least squares with a damping parameter \(\lambda\) can be solved efficiently given an SVD as explained here:
with the diagonal values \(D_{i,i}\) of matrix \(\vect{D}\) as
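A sketch of the SVD-filtered solution, assuming the standard damping filter \(D_{i,i} = \sigma_i / (\sigma_i^2 + \lambda)\) (placeholder values):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1e-4],
              [1.0, 1e-4]])
b = np.array([1.0, 2.0, 2.5])
lam = 1e-2                                 # damping factor

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Damped filter factors d_i = sigma_i / (sigma_i^2 + lambda): small singular
# values are damped smoothly, so no explicit thresholding is needed.
d = s / (s**2 + lam)
x = Vt.T @ (d * (U.T @ b))

# Same result as solving (A^T A + lambda I) x = A^T b directly.
assert np.allclose(x, np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b))
```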
Weighted-minimum norm weighted least squares
Use the reduction to an ordinary least-squares problem provided in the associated section above and solve that reduced problem in turn using an SVD or a QR decomposition.
Alternatively, compute the SVD of either weight matrix and adapt the last approach for the weighted least squares.
[BenIsrael2003] (p. 255, Eq. 201):
Algorithms for HDDC platforms
Force distribution
The following routines map forces from the platform frame to the pivot frame.
Singular platform
Solver:
\[\vect{F}_d = (\vect{G}^T \vect{W}_p \vect{G})^{-1} \vect{G}^T \vect{W}_p \vect{F}_p\]
Weighting:
\[\begin{split}\vect{Z}_p, \vect{\Lambda}_p &= \operatorname{dsyev}(\vect{W}_p) \\ \vect{W}_p^{\frac{1}{2}} &= \vect{Z}_p \vect{\Lambda}_p^{\frac{1}{2}} \vect{Z}_p^T \\ \vect{G}' &= \vect{W}_p^{\frac{1}{2}} \vect{G} \\ \vect{F}_p' &= \vect{W}_p^{\frac{1}{2}} \vect{F}_p\end{split}\]
OLS:
\[\begin{split}\vect{U}, \vect{\Sigma}, \vect{V}^T &= \operatorname{dgesvd}(\vect{G}') \\ \vect{F}_d &= \vect{V} \vect{\Sigma}^{+} \vect{U}^T \vect{F}_p'\end{split}\]
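The following sketch implements the two steps above with numpy standing in for the LAPACK routines (the wheel-unit geometry encoded in \(\vect{G}\) and the weights are hypothetical numbers for illustration only):

```python
import numpy as np

# Hypothetical mapping G from the two drive/pivot forces of a single wheel unit
# to the three platform-level force coordinates (F_x, F_y, M_z).
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.2, -0.3]])
F_p = np.array([5.0, 2.0, 0.5])            # desired platform force
W_p = np.diag([1.0, 1.0, 0.5])             # weight in platform-force space

# Weighting step (dsyev in LAPACK; np.linalg.eigh here).
eigvals, Z_p = np.linalg.eigh(W_p)
W_p_half = Z_p @ np.diag(np.sqrt(eigvals)) @ Z_p.T
G_prime = W_p_half @ G
F_p_prime = W_p_half @ F_p

# OLS step (dgesvd in LAPACK; np.linalg.svd here).
U, s, Vt = np.linalg.svd(G_prime, full_matrices=False)
F_d = Vt.T @ ((U.T @ F_p_prime) / s)

# Matches the closed-form weighted left pseudoinverse of the solver above.
assert np.allclose(F_d, np.linalg.solve(G.T @ W_p @ G, G.T @ W_p @ F_p))
```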
Redundant platform with reference
Solver:
\[\vect{F}_d = \bar{\vect{F}}_d + \vect{W}_d^{-1} \vect{G}^T (\vect{G} \vect{W}_d^{-1} \vect{G}^T)^{-1} (\vect{F}_p - \vect{G} \bar{\vect{F}}_d)\]
Reference (initial):
\[\vect{F}_p' = \vect{F}_p - \vect{G} \bar{\vect{F}}_d\]
Weighting (initial):
\[\begin{split}\vect{Z}_d, \vect{\Lambda}_d &= \operatorname{dsyev}(\vect{W}_d) \\ \vect{W}_d^{-\frac{1}{2}} &= \vect{Z}_d \vect{\Lambda}_d^{-\frac{1}{2}} \vect{Z}_d^T \\ \vect{G}' &= \vect{G} \vect{W}_d^{-\frac{1}{2}}\end{split}\]
OLS:
\[\begin{split}\vect{U}, \vect{\Sigma}, \vect{V}^T &= \operatorname{dgesvd}(\vect{G}') \\ \vect{F}_d' &= \vect{V} \vect{\Sigma}^{+} \vect{U}^T \vect{F}_p'\end{split}\]
Weighting (final):
\[\vect{F}_d'' = \vect{W}_d^{-\frac{1}{2}} \vect{F}_d'\]
Reference (final):
\[\vect{F}_d = \bar{\vect{F}}_d + \vect{F}_d''\]
Fused singular and redundant platform with reference
Solver:
\[\vect{F}_d = \bar{\vect{F}}_d + \vect{W}_d^{-\frac{1}{2}} (\vect{W}_p^{\frac{1}{2}} \vect{G} \vect{W}_d^{-\frac{1}{2}})^{\dagger} \vect{W}_p^{\frac{1}{2}} (\vect{F}_p - \vect{G} \bar{\vect{F}}_d)\]
Reference (initial):
\[\vect{F}_p' = \vect{F}_p - \vect{G} \bar{\vect{F}}_d\]
Weighting (initial):
\[\begin{split}\vect{Z}_p, \vect{\Lambda}_p &= \operatorname{dsyev}(\vect{W}_p) \\ \vect{W}_p^{\frac{1}{2}} &= \vect{Z}_p \vect{\Lambda}_p^{\frac{1}{2}} \vect{Z}_p^T \\ \vect{Z}_d, \vect{\Lambda}_d &= \operatorname{dsyev}(\vect{W}_d) \\ \vect{W}_d^{-\frac{1}{2}} &= \vect{Z}_d \vect{\Lambda}_d^{-\frac{1}{2}} \vect{Z}_d^T \\ \vect{G}' &= \vect{W}_p^{\frac{1}{2}} \vect{G} \vect{W}_d^{-\frac{1}{2}} \\ \vect{F}_p'' &= \vect{W}_p^{\frac{1}{2}} \vect{F}_p'\end{split}\]
OLS:
\[\begin{split}\vect{U}, \vect{\Sigma}, \vect{V}^T &= \operatorname{dgesvd}(\vect{G}') \\ \vect{F}_d' &= \vect{V} \vect{\Sigma}^{+} \vect{U}^T \vect{F}_p''\end{split}\]
Weighting (final):
\[\vect{F}_d'' = \vect{W}_d^{-\frac{1}{2}} \vect{F}_d'\]
Reference (final):
\[\vect{F}_d = \bar{\vect{F}}_d + \vect{F}_d''\]
Velocity composition
The following routines map velocities from the pivot frame to the platform frame.
Singular platform
Solver:
\[\dot{\vect{X}}_p = (\vect{G} \vect{W}_d \vect{G}^T)^{-1} \vect{G} \vect{W}_d \dot{\vect{X}}_d\]
Weighting:
\[\begin{split}\vect{Z}_d, \vect{\Lambda}_d &= \operatorname{dsyev}(\vect{W}_d) \\ \vect{W}_d^{\frac{1}{2}} &= \vect{Z}_d \vect{\Lambda}_d^{\frac{1}{2}} \vect{Z}_d^T \\ \vect{G}' &= \vect{G} \vect{W}_d^{\frac{1}{2}} \\ \dot{\vect{X}}_d' &= \vect{W}_d^{\frac{1}{2}} \dot{\vect{X}}_d\end{split}\]
OLS:
\[\begin{split}\vect{U}, \vect{\Sigma}, \vect{V}^T &= \operatorname{dgesvd}(\vect{G}'^T) \\ \dot{\vect{X}}_p &= \vect{V} \vect{\Sigma}^{+} \vect{U}^T \dot{\vect{X}}_d'\end{split}\]
Redundant platform with reference
Solver:
\[\dot{\vect{X}}_p = \bar{\dot{\vect{X}}}_p + \vect{W}_p^{-1} \vect{G} (\vect{G}^T \vect{W}_p^{-1} \vect{G})^{-1} (\dot{\vect{X}}_d - \vect{G}^T \bar{\dot{\vect{X}}}_p)\]
Reference (initial):
\[\dot{\vect{X}}_d' = \dot{\vect{X}}_d - \vect{G}^T \bar{\dot{\vect{X}}}_p\]
Weighting (initial):
\[\begin{split}\vect{Z}_p, \vect{\Lambda}_p &= \operatorname{dsyev}(\vect{W}_p) \\ \vect{W}_p^{-\frac{1}{2}} &= \vect{Z}_p \vect{\Lambda}_p^{-\frac{1}{2}} \vect{Z}_p^T \\ \vect{G}' &= \vect{W}_p^{-\frac{1}{2}} \vect{G}\end{split}\]
OLS:
\[\begin{split}\vect{U}, \vect{\Sigma}, \vect{V}^T &= \operatorname{dgesvd}(\vect{G}'^T) \\ \dot{\vect{X}}_p' &= \vect{V} \vect{\Sigma}^{+} \vect{U}^T \dot{\vect{X}}_d'\end{split}\]
Weighting (final):
\[\dot{\vect{X}}_p'' = \vect{W}_p^{-\frac{1}{2}} \dot{\vect{X}}_p'\]
Reference (final):
\[\dot{\vect{X}}_p = \bar{\dot{\vect{X}}}_p + \dot{\vect{X}}_p''\]
Fused singular and redundant platform with reference
Solver:
\[\dot{\vect{X}}_p = \bar{\dot{\vect{X}}}_p + \vect{W}_p^{-\frac{1}{2}} (\vect{W}_d^{\frac{1}{2}} \vect{G}^T \vect{W}_p^{-\frac{1}{2}})^{\dagger} \vect{W}_d^{\frac{1}{2}} (\dot{\vect{X}}_d - \vect{G}^T \bar{\dot{\vect{X}}}_p)\]
Reference (initial):
\[\dot{\vect{X}}_d' = \dot{\vect{X}}_d - \vect{G}^T \bar{\dot{\vect{X}}}_p\]
Weighting (initial):
\[\begin{split}\vect{Z}_d, \vect{\Lambda}_d &= \operatorname{dsyev}(\vect{W}_d) \\ \vect{W}_d^{\frac{1}{2}} &= \vect{Z}_d \vect{\Lambda}_d^{\frac{1}{2}} \vect{Z}_d^T \\ \vect{Z}_p, \vect{\Lambda}_p &= \operatorname{dsyev}(\vect{W}_p) \\ \vect{W}_p^{-\frac{1}{2}} &= \vect{Z}_p \vect{\Lambda}_p^{-\frac{1}{2}} \vect{Z}_p^T \\ \vect{G}' &= \vect{W}_p^{-\frac{1}{2}} \vect{G} \vect{W}_d^{\frac{1}{2}} \\ \dot{\vect{X}}_d'' &= \vect{W}_d^{\frac{1}{2}} \dot{\vect{X}}_d'\end{split}\]
OLS:
\[\begin{split}\vect{U}, \vect{\Sigma}, \vect{V}^T &= \operatorname{dgesvd}(\vect{G}'^T) \\ \dot{\vect{X}}_p' &= \vect{V} \vect{\Sigma}^{+} \vect{U}^T \dot{\vect{X}}_d''\end{split}\]
Weighting (final):
\[\dot{\vect{X}}_p'' = \vect{W}_p^{-\frac{1}{2}} \dot{\vect{X}}_p'\]
Reference (final):
\[\dot{\vect{X}}_p = \bar{\dot{\vect{X}}}_p + \dot{\vect{X}}_p''\]
Context
The “fused” algorithm version is sometimes used for robotic manipulators where (i) \(\vect{x}\) represents a joint velocity; (ii) \(\vect{b}\) represents a Cartesian-space velocity twist; and (iii) \(\vect{A}\) represents the manipulator’s Jacobian matrix. The WDLS solver of the Kinematics and Dynamics Library (KDL) is an example implementation. The “benefit” is that the same implementation can handle redundant (e.g. a 7-DoF 3-1-3 robot solving a 6-DoF Cartesian task) and singular (e.g. a 5-DoF 2-1-2 robot solving a 6-DoF Cartesian task) kinematic chains. In the former case the solver minimizes over joint-space velocities (while exactly achieving the Cartesian-space velocity twist), whereas in the latter case the solver minimizes over Cartesian-space velocity twists (while using all joints).
This algorithm is not a good choice if the kinematic chain structure is known and free from singularities. In such a situation it is better to use one of the dedicated algorithms for either singular or redundant platforms to avoid superfluous computations.
“Cheating”
In various robotics applications the concrete values in the weight matrices are deemed irrelevant; only their relative values matter. The following properties are then sometimes exploited:
\(\vect{X}\) positive-definite \(\rightarrow\) \(\vect{X}^{-1}\) positive-definite
\(\vect{X}\) positive-definite \(\rightarrow\) \(\vect{X}^{\frac{1}{2}}\) positive-definite
As a result, one can get by with somewhat “arbitrarily-designed” positive-definite matrices \(\vect{W}' \sim \vect{W}^{\frac{1}{2}}\) and \(\vect{Q}' \sim \vect{Q}^{-\frac{1}{2}}\) so that
Note that neither the \(\vect{W}\) nor the \(\vect{Q}\) matrix should assign a “zero weight” to any \(\vect{x}\) coordinate value, as that violates the positive-definiteness assumption. This problem may be “hidden” behind a damped least-squares solution, though.