
Mar 28, 2015

Discreteness and Sparsity


Discrete-valued signals are not necessarily sparse, and sparse signals are not necessarily discrete-valued.

Let us consider the following signal:
   x = [1,1,-1,1,-1,-1,1,1];
This is a discrete signal generated from the alphabet {1,-1}. It is obviously not sparse, but if we transform it into
   x - 1 = [0,0,-2,0,-2,-2,0,0]
we have a sparse signal. Also,
   x + 1 = [2,2,0,2,0,0,2,2]
is sparse. Let's adapt this idea to discrete signal reconstruction.
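This shift trick can be checked in a few lines of code (a minimal sketch in Python, rather than the MATLAB used elsewhere in this post):

```python
# Shifting a {1,-1}-valued signal by -1 or +1 produces sparse vectors:
# zeros appear exactly where the signal equals the subtracted symbol.
x = [1, 1, -1, 1, -1, -1, 1, 1]

x_minus = [v - 1 for v in x]  # zero wherever x == 1
x_plus  = [v + 1 for v in x]  # zero wherever x == -1

print(x_minus)                            # [0, 0, -2, 0, -2, -2, 0, 0]
print(x_plus)                             # [2, 2, 0, 2, 0, 0, 2, 2]
print(x_minus.count(0), x_plus.count(0))  # 5 3
```

The more frequent a symbol is, the sparser the corresponding shifted vector becomes, which is exactly the weighting used below.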

Discrete Signal Reconstruction  
Find the original n-dimensional discrete signal x over a finite alphabet
   X = {r1,r2,...,rL}
with its probability distribution
   P(rj) = pj,   0 < pj < 1,   p1 + p2 + ... + pL = 1
from incomplete linear measurements
   y = Ax
where A is an m-times-n matrix (m < n).

In this problem, each shifted vector x - rj (with rj subtracted from every entry) is sparse, and its ell-1 norm tends to be small when the probability pj is large. This observation leads to optimization with the sum of absolute values (SOAV):

  minimize   p1||x - r1||1 + p2||x - r2||1 +...+ pL||x - rL||1
  subject to   y = Ax
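On a toy instance, the SOAV criterion can be checked by brute force (a minimal sketch, not the convex solver used in the paper; the matrix A and the probabilities below are made-up illustrations). Unequal probabilities are used so that the objective acts as a prior over the alphabet-valued candidates:

```python
import itertools
import random

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def soav(x, alphabet, probs):
    # sum_j p_j * ||x - r_j||_1, with r_j subtracted from every entry
    return sum(p * sum(abs(v - r) for v in x)
               for r, p in zip(alphabet, probs))

n, m = 8, 4
alphabet, probs = [1, -1], [0.6, 0.4]   # assumed prior: +1 more likely
x_true = [1, 1, -1, 1, -1, -1, 1, 1]

rng = random.Random(0)
A = [[rng.choice([-1, 0, 1]) for _ in range(n)] for _ in range(m)]
y = matvec(A, x_true)

# Among all alphabet-valued candidates consistent with y = Ax,
# pick the one minimizing the SOAV objective.
feasible = [list(c) for c in itertools.product(alphabet, repeat=n)
            if matvec(A, list(c)) == y]
x_hat = min(feasible, key=lambda c: soav(c, alphabet, probs))
print(x_hat)
```

The paper solves the same objective as a convex program over real-valued x, which scales far beyond this 2^n enumeration.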

Let's apply this to binary-valued image reconstruction. The following is a 37x37-pixel binary image disturbed by random Gaussian noise.


We then transform the image into frequency-domain data via the discrete Fourier transform (DFT) and randomly down-sample the rows to generate half-size data. Applying the SOAV optimization to these incomplete measurements reconstructs the original binary image. The following is the result.


For comparison, we also use the basis pursuit (BP) that minimizes the ell-1 norm of x instead of the sum of absolute values. The following is the result of BP:


This is quite bad: BP exploits only the sparsity of the original image, while the SOAV approach also takes the discreteness into account.

If you are interested in this work, please see the original paper:
M. Nagahara, Discrete Signal Reconstruction by Sum of Absolute Values, IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1575-1579, Oct. 2015 (to appear).
PDF can be downloaded from here and the MATLAB codes are available here.



Aug 21, 2014

Maximum Hands-Off Control


Sparsity is also important in control systems.

In particular, sparse control, called hands-off control, has been proposed for saving energy and reducing CO2 emissions in control systems. For example, hybrid vehicles such as the Toyota Prius use this type of control: the internal combustion engine is stopped (control = zero; sparse control) when the vehicle is at a stop or its speed is below a preset threshold, and the electric motor is used instead. See also Hands-off Control as Green Control.

Recently, it has been proved that the sparsest (or L0-optimal) control among all admissible controls is L1-optimal (see L0/L1 Equivalence in Optimal Control). Based on this, hands-off control is extended to feedback control in
M. Nagahara, D. E. Quevedo, D. Nesic, Maximum Hands-Off Control: A Paradigm of Control Effort Minimization, IEEE Transactions on Automatic Control, Vol. 61, No. 4, 2016 (to appear) (PDF is here).
The abstract reads:
In this paper, we propose a new paradigm of control, called a maximum hands-off control. A hands-off control is defined as a control that has a short support per unit time. The maximum hands-off control is the minimum support (or sparsest) per unit time among all controls that achieve control objectives. For finite horizon control, we show the equivalence between the maximum hands-off control and L1-optimal control under a uniqueness assumption called normality. This result rationalizes the use of L1 optimality in computing a maximum hands-off control. We also propose an L1/L2-optimal control to obtain a smooth hands-off control. Furthermore, we give a self-triggered feedback control algorithm for linear time-invariant systems, which achieves a given sparsity rate and practical stability in the case of plant disturbances. An example is included to illustrate the effectiveness of the proposed control.
A MATLAB code for the simulation is available here.

Enjoy!


Aug 14, 2014

Textbooks on Compressed Sensing


Today, I'd like to list textbooks on compressed sensing and related topics.
This list is far from complete, but I believe this helps a beginner take the first step.

Y. C. Eldar and G. Kutyniok (Editors),
Compressed Sensing: Theory and Applications,
Cambridge University Press, June 2012.
ISBN-13: 978-1107005587
My comment: The introduction chapter (Chapter 1) is especially nice.

M. Elad (Author),
Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing,
Springer, August 2010.
ISBN-13: 978-1441970107
My comment: A nice introduction to sparse representation in signal and image processing.

S. Mallat (Author),
A Wavelet Tour of Signal Processing: The Sparse Way,
Academic Press, 3rd edition, December 2008.
ISBN-13: 978-0123743701
My comment: Read this book and become a master of wavelets. You can!

S. Foucart and H. Rauhut (Authors),
A Mathematical Introduction to Compressive Sensing,
Birkhäuser, August 2013.
ISBN-13: 978-0817649470
My comment: If you are interested in the theory of compressed sensing, this book will be of help.

H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, H. Wolkowicz (Editors),
Fixed-Point Algorithms for Inverse Problems in Science and Engineering,
Springer, June 2011.
ISBN-13: 978-1441995681
My comment: This book gives you fancy methods to solve sparse optimization problems (e.g., L1/L2 optimization).

J.-L. Starck, F. Murtagh, and J. M. Fadili (Authors),
Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity,
Cambridge University Press, May 2010.
ISBN-13: 978-0521119139
My comment: Sparsity has many applications in image and signal processing.

Enjoy!

A related blog entry is here.

Jul 30, 2014

Sparse Packetized Predictive Control for Networked Control over Erasure Channels


A new paper on sparsity-based control has been published:

M. Nagahara, D. E. Quevedo, and J. Ostergaard,
Sparse Packetized Predictive Control for Networked Control over Erasure Channels,
IEEE Transactions on Automatic Control, vol. 59, no. 7, pp. 1899-1905, July 2014. 

PDF can be downloaded from arxiv.

The abstract reads:
We study feedback control over erasure channels with packet-dropouts. To achieve robustness with respect to packet-dropouts, the controller transmits data packets containing plant input predictions, which minimize a finite horizon cost function. To reduce the data size of packets, we propose to adopt sparsity-promoting optimizations, namely, ell-1/ell-2 and ell-2-constrained ell-0 optimizations, for which efficient algorithms exist. We show how to design the tuning parameters to ensure (practical) stability of the resulting feedback control systems when the number of consecutive packet-dropouts is bounded.
A corresponding blog post is here.

Jul 24, 2014

L1 Control Theoretic Smoothing Splines

The control theoretic spline is a bridge between control and signal processing, as mentioned in a previous blog post. This technique is extended to robust and sparse splines via L1 optimality:
M. Nagahara and C. F. Martin,
L1 Control Theoretic Smoothing Splines,
IEEE Signal Processing Letters, 2014. (accepted)
The robustness is against outliers in the data, while the sparsity allows the fitted curve to be represented with fewer parameters.
The abstract reads:
In this paper, we propose control theoretic smoothing splines with L1 optimality for reducing the number of parameters that describes the fitted curve as well as removing outlier data. A control theoretic spline is a smoothing spline that is generated as an output of a given linear dynamical system. Conventional design requires exactly the same number of base functions as given data, and the result is not robust against outliers. To solve these problems, we propose to use L1 optimality, that is, we use the L1 norm for the regularization term and/or the empirical risk term. The optimization is described by a convex optimization, which can be efficiently solved via a numerical optimization software. A numerical example shows the effectiveness of the proposed method.
The MATLAB codes are available here.
PDF is here.
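The robustness to outliers comes from replacing squared residuals with absolute ones. As a toy illustration of that effect (a plain-Python sketch of an L1 line fit via iteratively reweighted least squares, not the paper's spline formulation; the data are made up):

```python
def l1_line_fit(xs, ys, iters=50, eps=1e-8):
    """Fit y ~ a*x + b minimizing sum |y_i - a*x_i - b| via IRLS."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        # Reweight: small residuals get large weights, outliers small ones.
        w = [1.0 / max(abs(y - a * x - b), eps) for x, y in zip(xs, ys)]
        # Weighted least-squares normal equations (2x2 system).
        sw = sum(w)
        swx = sum(wi * x for wi, x in zip(w, xs))
        swxx = sum(wi * x * x for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = swxx * sw - swx * swx
        a = (swxy * sw - swx * swy) / det
        b = (swxx * swy - swx * swxy) / det
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2 * x + 1 for x in xs]
ys[3] = 30.0                    # one gross outlier at x = 3
a, b = l1_line_fit(xs, ys)
print(round(a, 2), round(b, 2))  # close to the true line y = 2x + 1
```

An ordinary least-squares fit would be dragged toward the outlier; the L1 fit essentially ignores it, which is the robustness exploited by the L1 smoothing splines.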

Feb 6, 2014

L0/L1 Equivalence in Optimal Control


One of the fundamental results of compressed sensing is that L1 optimization gives the sparsest (or L0-optimal) solution of an inverse problem under some assumptions.

Is it also true with L1 optimal control?
Is L1 optimal control the sparsest one among all admissible controls?

The answer is YES (under some assumptions, of course).
Our recent research
M. Nagahara, D. E. Quevedo, and D. Nesic,
Maximum Hands-Off Control and L1 Optimality,
52nd IEEE Conference on Decision and Control (CDC),
pp. 3825-3830, Dec. 2013.
has shown exactly this.

It is interesting that the L0 norm in this article is defined for continuous-time signals, while most compressed sensing methods use the L0 norm of finite-dimensional vectors. We call a control with a small L0 norm a hands-off control.
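For concreteness, the continuous-time L0 "norm" counts the time the control is active; a sketch of the definitions involved (notation assumed, following the description above, for a linear plant on a horizon [0, T]):

```latex
% L0 measure of a control u on the horizon [0, T]:
% the Lebesgue measure of the set where u is nonzero.
\|u\|_0 \;=\; \mu\bigl(\{\, t \in [0, T] : u(t) \neq 0 \,\}\bigr)

% Maximum hands-off control: among all admissible controls that
% drive the state from x(0) = \xi to x(T) = 0, find the sparsest one:
\min_{u} \ \|u\|_0
\quad \text{s.t.} \quad
\dot{x}(t) = A x(t) + b u(t), \quad
x(0) = \xi, \quad x(T) = 0, \quad |u(t)| \le 1.
```

The equivalence result says that, under normality, this problem and the corresponding L1-optimal control problem (with the integral of |u(t)| as the cost) share their solutions.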

The abstract reads:
In this article, we propose a new paradigm of control, called a maximum hands-off control. A hands-off control is defined as a control that has a much shorter support than the horizon length. The maximum hands-off control is the minimum support (or sparsest) control among all admissible controls. We first prove that a solution to an L1-optimal control problem gives a maximum hands-off control, and vice versa. This result rationalizes the use of L1 optimality in computing a maximum hands-off control. The solution has in general the "bang-off-bang" property, and hence the control may be discontinuous. We then propose an L1/L2-optimal control to obtain a continuous hands-off control. Examples are shown to illustrate the effectiveness of the proposed control method.


Dec 25, 2013

Sparsity-Related Research in IEEE CDC 2013


Ciao!

The most important conference on control, the IEEE Conference on Decision and Control (CDC) 2013, was held this month, Dec 10-13, in Florence, Italy. Of course, I was there.

The food was good, the wine excellent, the Duomo gorgeous, and the bazaar fun; I fully enjoyed my stay!

By the way, I found several papers related to sparsity in control, including my own:


Maximum Hands-Off Control and L1 Optimality
M. Nagahara, Kyoto Univ.
D. E. Quevedo, The Univ. of Newcastle
D. Nesic, Univ. of Melbourne
Abstract: In this article, we propose a new paradigm of control, called a maximum hands-off control. A hands-off control is defined as a control that has a much shorter support than the horizon length. The maximum hands-off control is the minimum support (or sparsest) control among all admissible controls. We first prove that a solution to an L1-optimal control problem gives a maximum hands-off control, and vice versa. This result rationalizes the use of L1 optimality in computing a maximum hands-off control. The solution has in general the "bang-off-bang" property, and hence the control may be discontinuous. We then propose an L1/L2-optimal control to obtain a continuous hands-off control. Examples are shown to illustrate the effectiveness of the proposed control method.

Graphical FPGA Design for a Predictive Controller with Application to Spacecraft Rendezvous
E. N. Hartley, Univ. of Cambridge
J. M. Maciejowski, Univ. of Cambridge
Abstract: A reconfigurable field-programmable gate array (FPGA)-based predictive controller based on Nesterov's fast gradient method is designed using Simulink and converted to VHDL using Mathworks' HDL Coder. The implementation is verified by application to a spacecraft rendezvous and capture scenario, with communication between the FPGA and a simulation of the relative dynamics occurring over Ethernet. For a problem with 120 decision variables and 240 constraints, computation times of 0.95 ms are achieved with a clock rate of 50 MHz, corresponding to a speed-up of more than 2000 over running the algorithm directly on a MicroBlaze microprocessor implemented on the same FPGA.

Sparse Control Using Sum-Of-Norms Regularized Model Predictive Control
S. K. Pakazad, Linköping Univ.
H. Ohlsson, Linköping Univ.
L. Ljung, Linköping Univ.
Abstract: Some control applications require the use of piecewise constant or impulse-type control signals, with as few changes as possible. So as to achieve this type of control, we consider the use of regularized model predictive control (MPC), which allows us to impose this structure through the use of regularization. It is then possible to regulate the trade-off between control performance and control signal characteristics by tuning the so-called regularization parameter. However, since the mentioned trade-off is only indirectly affected by this parameter, its tuning is often unintuitive and time-consuming. In this paper, we propose an equivalent reformulation of the regularized MPC, which enables us to configure the desired trade-off in a more intuitive and computationally efficient manner. This reformulation is inspired by the so-called ε-constraint formulation of multi-objective optimization problems and enables us to quantify the trade-off, by explicitly assigning bounds over the control performance.

Near-Ideal Behavior of a Modified Elastic Net Algorithm
M. Vidyasagar, The Univ. of Texas at Dallas
Abstract: In this paper it is shown that a modification of the Elastic Net algorithm (MEN) exhibits near ideal behavior in the following sense: Suppose the input to the algorithm is a vector of known sparsity index but unknown locations for the nonzero components. Then the output error of the algorithm is bounded by a universal constant times the error achieved by an oracle that knows not just the sparsity index but also the locations of the nonzero components. This result generalizes an earlier result of Candes and Plan on the near ideal behavior of the LASSO algorithm.

Sparse Networked Control of Input Constrained Linear Systems
H. Kong, The Univ. of Newcastle
G. C. Goodwin, Univ. of Newcastle
M. Seron, The Univ. of Newcastle
Abstract: In networked control, there is often an incentive to communicate only what is absolutely necessary to achieve the desired performance goals. This is true of both the downlink (between a control base station and actuators) and the uplink (between the sensors and base station). Here we present a possible solution to this problem based on a singular value decomposition (SVD) of the Hessian of the quadratic performance index generally considered in Model Predictive Control (MPC). The singular vectors are employed to generate an orthonormal basis function expansion of the unconstrained solution to the finite horizon optimal control problem. These are pre-loaded into each actuator and each sensor. On the downlink, the actuators are informed in real-time about which basis functions they should use. On the uplink, after a “burn in period”, the sensors need only communicate when their response departs from that pre-calculated for the given basis functions. We show that this strategy facilitates sparse communication in both the downlink and uplink. We also show that the strategy can be modified so that input constraints are satisfied. The proposed results are illustrated by an example.

Soft-Constrained Lasso-MPC for Robust LTI Tracking: Enlarged Feasible Region and an ISS Gain Estimate
M. Gallieri, Univ. of Cambridge
J. M. Maciejowski, Univ. of Cambridge
Abstract: This paper investigates the robustness of a soft constrained LTI MPC for set-point tracking, using an l1- regularised cost. The MPC using this type of cost (informally dubbed lasso-MPC) is suitable, for instance, for redundantly- actuated systems. This is because of its ability to select a set of preferred actuators, leaving the other ones at rest for most of the time. The proposed approach aims to recover from state constraint violation and to track a changing set-point. Nominal stability is guaranteed for all feasible references and robustness to additive uncertainties is formally characterised, under certain assumptions. In particular, sufficient conditions are given for the feasible region to be robustly invariant, this region being larger than the MPC for regulation. The closed- loop system is input-to-state stable (ISS), and a local ISS gain is computed. All results apply to stabilisable LTI systems. Results hold as well for the more common quadratic MPC, a special case of the proposed controller. 

Proximal Newton Methods for Convex Composite Optimization
P. Patrinos, IMT Inst. for Advanced Studies Lucca
A. Bemporad, IMT Inst. for Advanced Studies Lucca
Abstract: This paper proposes two proximal Newton methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a new continuously differentiable exact penalty function, namely the Composite Moreau Envelope. The first algorithm is based on a standard line search strategy, whereas the second one combines the global efficiency estimates of the corresponding first-order methods, while achieving fast asymptotic convergence rates. Furthermore, they are computationally attractive since each Newton iteration requires the solution of a linear system of usually small dimension.

Enjoy! 

Dec 13, 2013

Sparse Packetized Predictive Control


As in signal processing, sparsity is also a useful concept in control systems. 

Our paper on a sparsity-based control method has been accepted to
IEEE Transactions on Automatic Control, entitled
"Sparse Packetized Predictive Control for Networked Control over Erasure Channels"

The paper can be downloaded from arxiv.

The motivation of this study is sparse representation of control data.
In networked control systems, data packets are transmitted through communication channels whose bit rates are limited, so the transmitted data should be represented with as few bits as possible.
Sparsity then gives an attractive solution to this problem.

To reduce the data size of packets, we have proposed to adopt sparsity-promoting optimizations, namely, 
  1. L1-L2 optimization
  2. L2-constrained L0 optimization

for which efficient algorithms exist (e.g., see this article).
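As a generic illustration of the first option (not the paper's exact algorithm; the problem data below are made up), the L1-L2 problem min 0.5*||y - Au||_2^2 + lam*||u||_1 can be solved by the iterative shrinkage-thresholding algorithm (ISTA), one of the standard efficient methods for this class:

```python
def soft_threshold(v, tau):
    """Proximal operator of tau * |.|: shrink v toward zero by tau."""
    if v > tau:
        return v - tau
    if v < -tau:
        return v + tau
    return 0.0

def ista(A, y, lam, iters=200):
    """Minimize 0.5*||y - A u||_2^2 + lam*||u||_1 by ISTA (plain Python)."""
    m, n = len(A), len(A[0])
    # Safe step size: 1/||A||_F^2 <= 1/||A^T A||_2 guarantees monotone descent.
    t = 1.0 / sum(a * a for row in A for a in row)
    u = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * u[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        u = [soft_threshold(u[j] - t * g[j], t * lam) for j in range(n)]
    return u

# Toy problem: a sparse "control packet" observed through a short horizon.
A = [[1.0, 0.5, 0.0, 0.2],
     [0.0, 1.0, 0.5, 0.1],
     [0.3, 0.0, 1.0, 0.4]]
u_true = [0.0, 2.0, 0.0, 0.0]
y = [sum(a * v for a, v in zip(row, u_true)) for row in A]
u_hat = ista(A, y, lam=0.05)
print([round(v, 2) for v in u_hat])
```

The soft-thresholding step is what sets small coordinates exactly to zero, which is why packets produced this way compress well.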

We have shown how to design the tuning parameters to ensure (practical) stability of the resulting feedback control systems when the number of consecutive packet-dropouts is bounded.

Enjoy!