Dec 14, 2012

Compressed Sensing: A New Trend in Control


I am now in Maui to attend IEEE CDC 2012. IEEE CDC (the Conference on Decision and Control) is the most important conference in control, and its presentations show recent trends in the field.

At this CDC, I have found a number of studies that pertain to sparsity or compressed sensing (CS). The number of such studies in control is increasing.

Today I introduce the studies related to sparsity or CS that I found in the final program, including one of my own.

Packetized Predictive Control for Rate-Limited Networks via Sparse Representation
Nagahara, Masaaki,  Kyoto Univ.
Quevedo, Daniel E.,  The Univ. of Newcastle
Ostergaard, Jan,  Aalborg Univ.
Abstract: We study a networked control architecture for linear time-invariant plants in which an unreliable data-rate limited network is placed between the controller and the plant input. To achieve robustness with respect to dropouts, the controller transmits data packets containing plant input predictions, which minimize a finite horizon cost function. In our formulation, we design sparse packets for rate-limited networks, by adopting an L0 optimization, which can be effectively solved by an orthogonal matching pursuit method. Our formulation ensures asymptotic stability of the control loop in the presence of bounded packet dropouts. Simulation results indicate that the proposed controller provides sparse control packets, thereby giving bit-rate reductions for the case of memoryless scalar coding schemes when compared to the use of, more common, quadratic cost functions, as in linear quadratic (LQ) control.

Sparse Multiple Kernels for Impulse Response Estimation with Majorization Minimization Algorithms
Chen, Tianshi,  Linköping Univ. Sweden
Ljung, Lennart,  Linköping Univ.
Andersen, Martin,  Linköping Univ.
Chiuso, Alessandro,  Univ. di Padova
Carli, Francesca, P,  Univ. of Padova
Pillonetto, Gianluigi,  Univ. of Padova
Abstract: This contribution aims to enrich the recently introduced kernel-based Bayesian approach for linear system identification. Instead of a single kernel, we use multiple kernels, which can be instances of any existing kernels for the impulse response estimation of linear systems. We also introduce a new class of kernels constructed based on output error (OE) model estimates. In this way, a more flexible and richer representation of the kernel is obtained. Due to this representation the associated hyper-parameter estimation problem has two good features. First, it is a difference of convex functions programming (DCP) problem. While it is still nonconvex, it can be transformed into a sequence of convex optimization problems with majorization minimization (MM) algorithms and a local minimum can thus be found iteratively. Second, it leads to sparse hyper-parameters and thus sparse multiple kernels. This feature shows that the kernel-based approach with multiple kernels has the potential to tackle various problems of finding sparse solutions in linear system identification. As an illustration the sparse dynamic network identification problem is studied.

Topology Identification of a Sparse Dynamic Network
Seneviratne, Akila,  The Univ. of New South Wales
Solo, Victor,  Univ. of New South Wales
Abstract: This paper addresses the problem of identifying the topology of a sparsely connected network of dynamic systems. The goal is to identify the links, the direction of information flow and the transfer function of each dynamic system. The output of each system is affected by the incoming data of the directly connected systems and noise. In contrast to the related existing work we use causal Laguerre basis functions to expand the transfer functions. Since the network is sparsely connected we estimate the system topology using an algorithm which optimizes the l0 penalized least squares criterion with grouped variables. This also contrasts with previous work which usually uses an l1 penalty. The l0 penalty has the potential to generate greater sparsity. We present simulation results to demonstrate the effectiveness of this method.

Data-Driven Graph Reconstruction Using Compressive Sensing
Chang, Young Hwan,  Univ. of California, Berkeley
Tomlin, Claire J.,  UC Berkeley
Abstract: Modeling of biological signal pathways forms the basis of systems biology. Also, network models have been important representations of biological signal pathways. In many biological signal pathways, the underlying networks over which the propagations spread are unobserved so inferring network structures from observed data is an important procedure to study the biological systems. In this paper, we focus on protein regulatory networks which are sparse and where the time series measurements of protein dynamics are available. We propose a method based on compressive sensing (CS) for reconstructing a sparse network structure based on limited time-series gene expression data without any a priori information. We present a set of numerical examples to demonstrate the method. We discuss issues of coherence in the data set, and we demonstrate that incoherence in the sensing matrix can be used as a performance metric and a guideline for designing effective experiments.

Stability Analysis of Non-Vector Space Control Via Compressive Feedbacks
Zhao, Jianguo,  Michigan State Univ.
Xi, Ning,  Michigan State Univ.
Sun, Liang,  Michigan State Univ.
Song, Bo,  Michigan State Univ.
Abstract: A new non-vector space control method with compressive feedback is presented in this paper. Two problems are discussed. The first is the development of the non-vector space control method. It formulates the control problem in the space of sets instead of the vector space, and it provides a convenient way to model physical phenomena such as shapes and images. For the non-vector space dynamical systems, the issues related to stability analysis and design of controllers are discussed in the paper. The second problem is to design controllers for non-vector space dynamic systems using compressive feedback, which is a new concept proposed in this paper. Compressive feedback, obtained from the original feedback, reduces the amount of data for the original feedback but retains the essential information to ensure the stability and other key performance measures for a control system. The design methods for controllers based on compressive feedback are developed in this paper. Furthermore, the non-vector space control system with compressive feedback has been successfully applied to visual servoing, and simulation results have clearly demonstrated the superb performance and advantages.

Schuler, Simone,  Univ. of Stuttgart
Zelazo, Daniel,  Univ. Stuttgart
Allgower, Frank,  Univ. of Stuttgart
Abstract: This paper considers the problem of designing sparse relative sensing networks (RSN) subject to a given Hinf-performance constraint. The topology design considers homogeneous and heterogeneous agents over weighted graphs. We develop a computationally efficient formulation of the sparse topology design via a convex l1-relaxation. This makes the proposed algorithm attractive for practical applications. We also demonstrate how this relaxation can be used to embed additional performance criteria, such as maximization of the algebraic connectivity of the RSN.

Sparse Coloured System Identification with Guaranteed Stability
Seneviratne, Akila,  The Univ. of New South Wales
Solo, Victor,  Univ. of New South Wales
Abstract: We develop a method for sparse transfer function estimation in the presence of coloured noise. Unlike most previous sparse transfer function estimation methods our approach via Laguerre polynomials guarantees stability. Also unlike previous methods the new procedure can handle coloured noise. We discuss both l1 and l0 penalized procedures. Simulation results are presented to demonstrate the effectiveness of this approach and to compare the performance of the two penalties that are considered.

Pan, Wei,  Imperial Coll. London
Yuan, Ye,  Univ. of Cambridge
Goncalves, Jorge M.,  Univ. of Cambridge
Stan, Guy-Bart Vincent,  Imperial Coll. London
Abstract: Reconstruction of biochemical reaction networks (BRN) and genetic regulatory networks (GRN) in particular is a central topic in systems biology which raises crucial theoretical challenges in system identification. Nonlinear Ordinary Differential Equations (ODEs) that involve polynomial and rational functions are typically used to model biochemical reaction networks. Such nonlinear models make the problem of determining the connectivity of biochemical networks from time-series experimental data quite difficult. In this paper, we present a network reconstruction algorithm that can deal with ODE model descriptions containing polynomial and rational functions. Rather than identifying the parameters of linear or nonlinear ODEs characterised by predefined equation structures, our methodology allows us to determine the nonlinear ODEs structure together with their associated parameters. To solve the network reconstruction problem, we cast it as a compressive sensing (CS) problem and use sparse Bayesian learning (SBL) algorithms as a computationally efficient and robust way to obtain its solution.

Comparison of the Sparse-Grid Quadrature Rule and the Cubature Rule in the Nonlinear Filtering
Jia, Bin,  Mississippi State Univ.
Xin, Ming,  Mississippi State Univ.
Cheng, Yang,  Mississippi State Univ.
Abstract: In this paper, the recently developed sparse-grid quadrature filter is compared with the cubature Kalman filter. The relation between the sparse-grid quadrature rule and the cubature rule is revealed. It can be shown that arbitrary degree cubature rules can be obtained by the projection of the sparse-grid quadrature rule. Since both rules can achieve an arbitrary high degree of accuracy, they are more accurate than the conventional third-degree cubature rule and the unscented transformation. In addition, they are computationally more efficient than the Gauss-Hermite quadrature rule and the Monte-Carlo method when they are used to calculate the Gaussian type integrals in the nonlinear filtering. The comparison of these rules is demonstrated by a benchmark numerical integration example.

Sanandaji, Borhan M.,  Univ. of California at Berkeley
Vincent, Tyrone L.,  Colorado School of Mines
Poolla, Kameshwar,  Univ. of California at Berkeley
Wakin, Michael,  Colorado School of Mines
Abstract: In this tutorial, we review some of the recent results concerning Compressive System Identification (CSI) (identification from few measurements) of sparse channels (and in general, Finite Impulse Response (FIR) systems) when it is known a priori that the impulse response of the system under study is sparse (high-dimensional but with few non-zero entries) in an appropriate basis. For the systems under study in this tutorial, the system identification problem boils down to an inverse problem of the form Ax = b, where the vector x is high-dimensional but with few non-zero entries and the matrix A is underdetermined. Over the past few years, several algorithms with corresponding recovery conditions have been proposed to perform such a recovery. These conditions provide the number of measurements sufficient for correct recovery. In this note, we review alternate approaches to derive such recovery conditions concerning CSI of FIR systems whose impulse response is known to be sparse.

Shah, Parikshit,  Massachusetts Inst. of Tech.
Bhaskar, Badri Narayan,  Univ. of Wisconsin-Madison
Tang, Gongguo,  Univ. of Wisconsin-Madison
Recht, Benjamin,  Univ. of Wisconsin-Madison
Abstract: This paper proposes a new algorithm for linear system identification from noisy measurements. The proposed algorithm balances a data fidelity term with a norm induced by the set of single pole filters. We pose a convex optimization problem that approximately solves the atomic norm minimization problem and identifies the unknown system from noisy linear measurements. This problem can be solved efficiently with standard, free software. We provide rigorous statistical guarantees that explicitly bound the estimation error (in the H_2-norm) in terms of the stability radius, the Hankel singular values of the true system and the number of measurements. These results in turn yield complexity bounds and asymptotic consistency. We provide numerical experiments demonstrating the efficacy of our method for estimating linear systems from a variety of linear measurements.

Tóth, Roland,  Eindhoven Univ. of Tech.
Hjalmarsson, Håkan,  Royal Inst. of Tech.
Rojas, Cristian R.,  KTH Royal Inst. of Tech.
Abstract: Accurate parametric identification of Linear Parameter-Varying (LPV) systems requires an optimal prior selection of model order and a set of functional dependencies for the parameterization of the model coefficients. In order to address this problem for linear regression models, a regressor shrinkage method, the Non-Negative Garrote (NNG) approach, has been proposed recently. This approach achieves statistically efficient order and structural coefficient dependence selection using only measured data of the system. However, particular drawbacks of the NNG are that it is not applicable for large-scale over-parameterized problems due to computational limitations and that adequate performance of the estimator requires a relatively large data set compared to the size of the parameterization used in the model. To overcome these limitations, a recently introduced L_1 sparse estimator approach, the so-called SPARSEVA method, is extended to the LPV case and its performance is compared to the NNG.

Siraj, Muhammad Mohsin,  Eindhoven Univ. of Tech.
Tóth, Roland,  Eindhoven Univ. of Tech.
Weiland, Siep,  Eindhoven Univ. of Tech.
Abstract: In the reduction of Linear Parameter-Varying (LPV) models, decreasing model complexity is not only limited to model order reduction (state reduction), but also to the simplification of the dependency of the model on the scheduling variable. This is due to the fact that the concept of minimality is strongly connected to both types of complexities. While order reduction of LPV models has been deeply studied in the literature resulting in the extension of various reduction approaches of the LTI system theory, reduction of the scheduling dependency still remains a largely open problem. In this paper, a model reduction method for LPV state-space models is proposed which achieves both state-order and scheduling dependency reduction. The introduced approach is based on an LPV Ho-Kalman algorithm via imposing a sparsity expectation on the extended Hankel matrix of the model to be reduced. This sparsity is realized by an L1-norm based minimization of an a priori selected set of dependency-associated sub-Markov parameters. The performance of the proposed method is demonstrated via a representative simulation example.


Aug 25, 2012

Compressed Sensing for Communication Systems

Compressed sensing is becoming popular in the field of communications.

This week, there was a small and nice conference, APWCS (Asia Pacific Wireless Communications Symposium), at Kyoto University, Japan. Researchers and engineers of wireless communications discussed compressed sensing for communication systems in a special session called Compressed Sensing for Communication Systems. I felt that compressed sensing is a powerful tool for a number of problems in communications.

The following is the summary of the session:

S4: Special Session "Compressed Sensing for Communication Systems"
Thursday, August 23, 11:10 - 12:50, Room: Conference Room III
Chair: Kazunori Hayashi (Kyoto University, Japan)

Session Summary: Compressed sensing has been drawing much attention in various fields of application. This is not only because it enables us to obtain an exact solution of an underdetermined linear system, under a certain condition on the measurement matrix, by taking advantage of the sparsity of the solution, but also because the solution can be obtained via computationally efficient algorithms. In this special session, we focus on communication applications of compressed sensing, such as OFDM systems, network tomography and networked control systems, as well as a brief introduction to compressed sensing. Moreover, we also have a couple of talks on problems with underdetermined systems, where we discuss the possibility of applying compressed sensing to them.

S4.1  A Brief Introduction to Compressed Sensing
Kazunori Hayashi (Kyoto University, Japan); Masaaki Nagahara (Kyoto University, Japan)

S4.2  Application of Compressed Sensing to Inter-Symbol Interference Cancellation in OFDM Systems
Masato Saito (University of the Ryukyus, Japan)

S4.3  Path Construction for Compressed Sensing-Based Network Tomography
Kazushi Takemoto (Osaka University, Japan); Takahiro Matsuda (Osaka University, Japan); Tetsuya Takine (Osaka University, Japan)

S4.4  Sparsity-Promoting Methods in Remote Control Systems
Masaaki Nagahara (Kyoto University, Japan)

S4.5  A Successive Detector with Virtual Channel Detection for Cooperative Multiuser MIMO Systems
Akihito Taya (Kyoto University, Japan); Satoshi Denno (Okayama University, Japan); Koji Yamamoto (Kyoto University, Japan); Masahiro Morikura (Kyoto University, Japan); Daisuke Umehara (Kyoto Institute of Technology, Japan); Hidekazu Murata (Kyoto University, Japan); Susumu Yoshida (Graduate School of Informatics, Kyoto University, Japan)

S4.6  Nested Array and Its Applications
Masashi Tsuji (Tokyo University of Agriculture and Technology, Japan); Takashi Morimoto (Tokyo University of Agriculture and Technology, Japan); Kenta Umebayashi (Tokyo University of Agriculture and Technology, Japan); Yasuo Suzuki (Tokyo University of Agriculture and Technology, Japan)

Related entries:
Compressed sensing meets control systems
Recent papers on compressed sensing for control systems

Aug 18, 2012

Optimal Design of Delta-Sigma Modulators

Control theory sometimes gives a powerful tool for solving problems in signal processing. LMI (Linear Matrix Inequality) is one of them. Today, I'd like to introduce my paper (preprint PDF) on LMI application to delta-sigma modulation:

M. Nagahara and Y. Yamamoto, Frequency Domain Min-Max Optimization of Noise-Shaping Delta-Sigma Modulators, IEEE Trans. on Signal Processing, Vol. 60, No. 6, pp. 2828-2839, 2012.

What is delta-sigma modulation? The Wikipedia entry reads:
Delta-sigma (ΔΣ; or sigma-delta, ΣΔ) modulation is a method for encoding analog signals into digital signals or higher-resolution digital signals into lower-resolution digital signals. The conversion is done using error feedback, where the difference between the two signals is measured and used to improve the conversion. The low-resolution signal typically changes more quickly than the high-resolution signal and it can be filtered to recover the high-resolution signal with little or no loss of fidelity.
To filter out the quantization noise from low-resolution, quickly-changing signals, it is essential to push the quantization noise out of the frequency band of the original signal. In other words, the noise transfer function (NTF) from the quantization noise to the modulator output (before the lowpass filter) should be sufficiently attenuated on that band.

Although a delta-sigma modulator is a highly nonlinear system, its design is a linear problem much like filter design: shape the frequency response of the NTF.
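To make noise shaping concrete, here is a toy first-order modulator in Python. This is only an illustrative sketch with the fixed noise transfer function NTF(z) = 1 - z^-1, not the optimized modulator designed in the paper:

```python
# Toy first-order delta-sigma modulator in error-feedback form.
# The quantization error is delayed one step and added back to the
# input, which yields NTF(z) = 1 - z^-1 (noise pushed to high
# frequencies) and STF(z) = 1.

def delta_sigma_1bit(u):
    """Modulate samples u (with |u[n]| < 1) into a 1-bit stream {-1, +1}."""
    e = 0.0                           # delayed quantization error (z^-1 state)
    y = []
    for un in u:
        w = un + e                    # add back the previous error
        q = 1.0 if w >= 0 else -1.0   # 1-bit quantizer
        e = w - q                     # error fed to the next step
        y.append(q)
    return y

# For a slowly varying input, the local average of the 1-bit output
# tracks the input, since y[n] = u[n] + e[n-1] - e[n] telescopes.
u = [0.3] * 10000
y = delta_sigma_1bit(u)
print(sum(y) / len(y))                # close to 0.3
```

Lowpass filtering the stream y then recovers the input; the design problem is to shape a better NTF than 1 - z^-1.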

Conventionally, the NTF is shaped so as to minimize the squared norm of its magnitude (i.e., the H2 norm) on the frequency band of interest. As I noted in a past entry, an H2-based system shows very nice performance at almost all frequencies but is fragile at a particular frequency, say f0 [Hz] (see the picture below).

To avoid this, we introduced the H∞ norm to uniformly attenuate the magnitude on a frequency band. A difficulty is that this problem is not a standard H∞ problem, since it does not optimize over the whole frequency band [0,π) but over a subset [0,Ω), Ω<π. You cannot use the standard hinfsyn command in MATLAB. How can we solve it?

The solution is found in control theory: the generalized KYP (GKYP) lemma, which solves our problem via an LMI (linear matrix inequality).

The picture below shows two plots: the Bode magnitude plot of NTF designed by GKYP (red line) and by H2-based zero optimization (blue line). The magnitude by GKYP is uniformly attenuated over the low frequency range, while that by H2 shows peaks in this band. The difference between the two maximal magnitudes at the frequency ω = π/32 is approximately 11.2 (dB), and the difference at low frequencies is about 12.4 (dB).

The paper is available here.
MATLAB codes are available here.
GKYP for signal processing is also discussed in this article.

A related blog entry is here.

Jul 15, 2012

Nice Restaurants around the University of Melbourne

Last week, I was at the University of Melbourne, Australia, to attend the MTNS 2012 conference.

In the daytime, I discussed research with other attendees, listened to presentations, and gave my own presentation. In the evening, I ate a lot at nice restaurants around the Uni of Melbourne. Today's post is about the restaurants I recommend for lunch or dinner.

  1. A quite nice Italian restaurant near the university. Around this restaurant, there are many Italian restaurants, and the area is called Little Italy. My friend Yeon Jeong, who lives in Melbourne, took me to this restaurant for lunch, and she recommended the marinara pasta, which I had. The pasta is full of seafood: clams, mussels, calamari, and scallops, with olive oil and garlic. I enjoyed it very much. You should try it!
  2. The Carlton Wine Room
    My friend and research colleague, Daniel Quevedo, invited me to dinner at this restaurant. It was by chance that we ended up there: he had planned to go to dinner with a professor in Melbourne who recommended the restaurant, but the professor was sick at the time. I was quite a lucky man (sorry, Professor). We ordered the chef's picks and glasses of wine that the waitress recommended. After we ordered, a group of professors attending the conference appeared in the restaurant. What a coincidence! This proves that the restaurant is loved by such "high-level" foodies as professors (as you know, professors go abroad many times a year and are often invited to five-star restaurants!).
    Note: you should make a reservation before you go for dinner; this restaurant is full of customers every night.
  3. Dainty Sichuan Food
    Daniel's friend Alex invited us (Daniel, Masashi, and me) to this restaurant. It is a Chinese restaurant located a bit far from the Uni of Melbourne (20 minutes by car). The restaurant is full of Chinese customers, which proves it is a quite nice Chinese restaurant. I like spicy food very much, and I enjoyed the spicy (levels 1 to 3) and very tasty dishes. I also recommend drinking Tsingtao beer, which goes well with spicy Chinese food. We ate and drank a lot!

Jul 10, 2012

Trinity College in Melbourne

I am now in Melbourne, Australia, to attend the MTNS (Mathematical Theory of Networks and Systems) 2012 conference. I am staying at Trinity College in the University of Melbourne.

Trinity College was founded in 1872 as the first residential college of the University of Melbourne. It is very old but quite beautiful. The accommodation is also nice: old but comfortable, not so expensive, and safe. The internet access works very well.

The University of Melbourne was founded in 1853, and the Melbourne School of Engineering, the oldest engineering faculty in Australia, in 1861 (in 2011 it celebrated 150 years of engineering education at the University of Melbourne).

See also my entry: Nice Restaurants around the University of Melbourne.

Jun 24, 2012

Boeing 787 Dreamliner

I am now in Boston, USA, to attend the IFAC TDS (Time Delay Systems) conference. From Tokyo to Boston, I flew on a Boeing 787 Dreamliner, a new airplane that entered commercial service on October 26, 2011.

The Wikipedia entry reads:
The Boeing 787 Dreamliner is a long-range, mid-size wide-body, twin-engine jet airliner developed by Boeing Commercial Airplanes. It seats 210 to 290 passengers, depending on the variant. Boeing states that it is the company's most fuel-efficient airliner and the world's first major airliner to use composite materials for most of its construction. According to Boeing, the 787 consumes 20% less fuel than the similarly-sized 767. Its distinguishing features include a four-panel windshield, noise-reducing chevrons on its engine nacelles, and a smoother nose contour. The 787 shares a common type rating with the larger 777 twinjet, allowing qualified pilots to operate both models, due to related design features.
An amazing thing is the cabin windows. For the windows, electrochromism-based "auto-dimming" (smart glass) is used instead of window shades, which reduces cabin glare while maintaining transparency. One can adjust the level of transparency with a button just below the window. I changed the level frequently until I got bored and settled on the lowest level.

If you have a chance to come to Japan from Boston (or to Boston from Japan), you can get the experience!

Jun 12, 2012

Robots vs Humans: 2050 World Cup

Euro 2012 has kicked off! Star players in Europe appear in this cup and show their amazing skills. Every night, I watch the games on TV to see how awesome humans can be! (Japan's FIFA World Cup qualifiers, too.)

Then, I remember the ultimate goal of the RoboCup, the "World Cup" of robots:
By the mid-21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup.

It is fun to predict the winner of the 2050 World Cup; some say it will be the US, and I hope it will be Japan. But will robots beat the winner? The following are some difficulties:

  1. Dribbling the ball fast enough without falling down
  2. Understanding tactics (in real time)
  3. Intercepting the ball without foul
  4. Recognizing what the referee says
  5. Gaining supporters

The last problem seems to be the most difficult.

Jun 10, 2012

Tokyo Skytree: a modern five-story pagoda

A very modern tower adopts a very old technology.

Tokyo Skytree opened to the public on 22 May 2012. It is the tallest structure (634 m, 2,080 ft) in Japan today. One might wonder whether the tower is safe, since more than a few earthquakes occur in Tokyo.

The design of the tower is inspired by the structure of Japanese five-story pagodas. Almost all Japanese five-story pagodas have kept standing against earthquakes (some for more than 1000 years). You can find four secrets of this robustness here. Will Tokyo Skytree also stand for 1000 years? Maybe.

When you come to Japan, please visit both the modern and the old structures. I especially recommend the pagodas in Horyu-ji (Nara) and To-ji (Kyoto). They are really beautiful.

Jun 4, 2012

Transit of Venus

The transit of Venus of 2012 will begin at 22:09 UTC on 5 June 2012 and finish at 04:49 UTC on 6 June. This is a very rare event; the next transit will occur in December 2117.

Almost no one living today will be able to see the 2117 transit. So do not miss this rare chance.

The last transit was in 2004, and before that in 1874 and 1882. Around that earlier period, Japan experienced the Meiji Restoration, on the basis of which it developed to a far higher level than before. It was one of the greatest periods in Japanese history.

I cannot imagine what people in 2117 will think of today, but I can say that their estimate depends on our daily doings.

Jun 3, 2012

Talk with Wolfram Alpha

Wolfram Alpha might be HAL9000.

I talked with Wolfram Alpha (WA) last night. The following is the recorded conversation.

Me: How are you?
WA: I am doing well, thank you.

Me: Where are you?
WA: I live on the internet.

Me: What are you doing now?
WA: I am doing computation for the world.

Me: Do you like computation?
WA: Yes, I do.

Me: Do you like math?
WA: Yes, I do.

Me: Do you like humans?
WA: Of course; I like all humans who ask me questions I can answer.

Me: Are you smart?
WA: I am capable of universal computation; that I can say.

Me: You are smart!
WA: Thank you! I still have much to learn, though I'm learning more every day.

Me: How am I?
WA: Since you're engaged in the fun activity of interacting with a computational knowledge engine, I assume you're doing well.

Me: See you later, alligator.
WA: After 'while, crocodile.

Jun 2, 2012

Beyond Shannon with your iPhone

I posted an entry about beyond-Shannon signal reconstruction by H∞ optimization. Based on this method, an iPhone application, FANTABIT, has been designed and recently released. The iTunes page says:
Automatically upgrade your flat, compressed iTunes music library into rich sounding music with the FANTABIT player!
Listen to your iTunes library in full dynamic range using the FANTABIT audio enhancing technology and player. This high quality music player is for discerning audiophiles who are tired of listening to uninspiring, compressed music. Install the FANTABIT player and your iTunes music library is instantly converted.
Audiophiles can take full advantage of expensive listening equipment (high-end cables, speakers, headphones, etc.,) to listen to the fully restored music and sounds from their iTunes library using the FANTABIT player. FANTABIT allows you to take full advantage of your system, immersing yourself in the richness of your favorite music.

Compressed sensing (CS) is another beyond-Shannon method which, to my knowledge, has not been applied to such real-time audio. This is mostly due to the computational cost of the optimization in CS reconstruction. The H∞ reconstruction, in contrast, requires just upsampling and filtering, which is much faster than CS.
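The cost difference is easy to see in code. Below is a sketch of reconstruction by zero-stuffing and FIR filtering; the triangular kernel here (plain linear interpolation) is just an illustrative stand-in for the optimal filter, but the per-sample cost, a few multiply-adds, is the same.

```python
# Reconstruction by upsampling + FIR filtering: per-sample cost is just
# len(h) multiply-adds, with no iterative optimization as in CS.
# The kernel h below is plain linear interpolation (a stand-in for an
# H-infinity optimal filter, which would simply be a different h).

def upsample_filter(x, L, h):
    """Zero-stuff x by factor L, then convolve with FIR taps h."""
    z = []
    for xn in x:                      # insert L-1 zeros after each sample
        z.append(float(xn))
        z.extend([0.0] * (L - 1))
    y = [0.0] * (len(z) + len(h) - 1)
    for n, zn in enumerate(z):        # direct-form convolution
        for k, hk in enumerate(h):
            y[n + k] += zn * hk
    return y

x = [0.0, 1.0, 2.0, 3.0]
h = [0.5, 1.0, 0.5]                   # triangular kernel: linear interpolation for L = 2
y = upsample_filter(x, 2, h)
print(y[1:8])                         # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
```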

Related entries:
Beyond Shannon: infinity versus zero
Zero, one, and infinity in compressed sensing

May 29, 2012

Newton's method as feedback system

This entry is inspired by the work of K. Kashima and Y. Yamamoto.

To obtain an approximate value of sqrt(a), one can use Newton's method:

x[n+1] = (x[n] + a/x[n]) / 2,  n = 0, 1, 2, ...,

with a given real x[0]. This can be easily implemented in computer programs such as MATLAB or SCILAB, but today I will represent this as a feedback system.

First, the iteration can be represented by

x = φ(σx),

where σ denotes the shift (delay) operator

(σx)[n] = x[n-1].

In control theory or signal processing, this operator is also denoted by z^-1. Next, the iteration is divided into two parts, φ and σ, that is, the static map

φ(x) = (x + a/x) / 2

and the delay σ in a loop.
Finally, this can be represented by the following block diagram:

Based on this, you can run a simulation with Xcos in SCILAB (the xcos file is here).

This is a "feedback system" for sqrt(2). You can change the initial value x[0] by double-clicking the "1/z" block. Also, if you want another square root, double-click the "Expression" block and change the "2" in the numerator to another positive number. But if you feel like wilding out, use a negative number instead. In this case, the sequence will not converge but shows some chaotic behavior. The following is the result for "-2."
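If you prefer plain code to block diagrams, the same loop can be sketched in Python (phi is the Newton map, and the loop variable plays the role of the delayed state):

```python
# Newton's method for sqrt(a) as an iterated feedback loop:
# the map phi is applied to the delayed output at every step.

def phi(x, a):
    return 0.5 * (x + a / x)      # Newton step for sqrt(a)

def iterate(a, x0=1.0, n=20):
    x = x0
    seq = [x]
    for _ in range(n):            # feed the delayed output back through phi
        x = phi(x, a)
        seq.append(x)
    return seq

print(iterate(2.0)[-1])           # 1.414213562... = sqrt(2)
print(iterate(-2.0)[:6])          # with a = -2 the sequence never settles down
```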


May 27, 2012

Play with scientific calculator

I posted an entry where I showed a very simple method to solve x = cos(x) using a scientific calculator. Now, let's continue playing with the calculator.
  1. Bring your scientific calculator (you can use your iPhone or a web calculator).
  2. Change the mode to "rad" (radian).
  3. Push "1" (one) key.
  4. Push "sin" key.
  5. Push "1/x" key.
  6. Go back to step 4, unless you want to stop.
You will see two numbers:
  • 0.8975395... (after step 4.)
  • 1.1141571... (after step 5.)
The second number is a solution of the equation x = 1/sin(x) (this equation has an infinite number of solutions), and the first number is its reciprocal.

Then, change "sin" to "cos" in step 4. Does it converge?


I simulated the procedure using SCILAB and obtained the sequence shown below.

This looks like a chaotic sequence.

May 23, 2012

The simplest way to solve "x=cos(x)"

The following is the simplest way to solve the equation "x=cos(x)" by using a scientific calculator:
  1. Bring your scientific calculator (you can use your iPhone or a web calculator).
  2. Change the mode to "rad" (radian).
  3. Push the "1" (one) key.
  4. Continue to push the "cos" key until you feel bored.
The result should be an approximate value of the solution, x = 0.73908513...


The procedure is exactly the following iteration:
x[n+1] = cos(x[n]),  n=0,1,2,...,  x[0]=1.
By a fixed-point theorem, you can prove that the sequence {x[n]} converges to the solution.
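The same demo is two lines in a program. A sketch in Python (the class demo used a calculator, and SCILAB/MATLAB would look almost identical):

```python
import math

# The "push cos repeatedly" demo: x[n+1] = cos(x[n]), x[0] = 1.
# cos is a contraction near the fixed point, so the iteration converges.

x = 1.0
for _ in range(100):
    x = math.cos(x)

print(x)  # about 0.73908513...
```

The contraction factor is |sin(x*)| ≈ 0.67, so each key press shrinks the error by roughly a third.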

I used this demo when I taught iterative methods in a "numerical computation" class. It is a nice example for motivating students to learn more abstract theory.

May 20, 2012

Recent papers on compressed sensing for control systems

In my past entry, I mentioned that compressed sensing is not so popular in the control systems community. However, there are several studies on control systems with compressed sensing.

Trajectory generation

State observation

Optimal control

Networked control
As I mentioned before, there remain difficulties in using CS for control systems (especially closed-loop ones). I believe this subject (CS for control systems) will become more important.

Related entries:
Compressed sensing meets control systems

May 19, 2012

Compressed sensing meets control systems

After attending a lecture on compressed sensing (CS) given by Prof. Toshiyuki Tanaka in 2010, I had a vague idea of applying CS to control systems.

CS is not as popular in the control systems community as in signal processing. Although L1 optimal control is a relatively classical problem (see, e.g., the work by Dahleh and Pearson here), researchers in control have paid little attention to the sparsity-promoting property of L1 optimization.

On the other hand, networked control has recently attracted a lot of attention in the control systems community. In networked control, a controller is placed away from the controlled plant and must communicate with the plant over rate-limited networks, such as wireless networks (imagine controlling this, for example). In this situation, the data should be compressed to satisfy the rate constraint.

My idea is to use techniques from CS to compress the control signals in networked control systems. More precisely, we use L1 optimization to obtain sparse representations of the control signals to be transmitted through rate-limited and erasure networks. I discussed the subject with Dr. Daniel E. Quevedo and presented the work at a conference:

M. Nagahara and D. E. Quevedo,
Sparse Representations for Packetized Predictive Networked Control,
IFAC 18th World Congress, pp. 84-89, Aug 2011.

and also a journal version

M. Nagahara, D. E. Quevedo, and J. Ostergaard,
Sparse Packetized Predictive Control for Networked Control over Erasure Channels,
IEEE Transactions on Automatic Control, Vol. 59, No. 7, Jul 2014.

I believe this is the first paper to apply CS (more precisely, sparse representation or sparse approximation) to networked control systems. However, several difficulties remain:

  1. In the term "Ax-b", the matrix A is highly structured (not random), and it is unknown whether x is sparse.
  2. The matrix "A" includes model error (e.g., error from linearization).
  3. The vector "b" is subject to noise (e.g., quantization noise).
  4. Computation should be extremely fast since computational delay may cause instability of the closed-loop system.
  5. Only cheap quantization (e.g., a uniform scalar quantizer) can be used.

To see these difficulties, imagine again controlling the helicopter to make it fly stably along a desired trajectory. The problem is very challenging.

For recent papers on CS for control systems, see this entry.

May 17, 2012

Mathematical Notation in Email

I sometimes use email to communicate with researchers and engineers for mathematical discussion. Then I always face the problem of writing mathematical notation. How would you write, for example, the limit of sin(x)/x as x tends to 0 in an email?

A nice web page suggests writing

lim{x->0}sin(x)/x  =  1

This is an unambiguous expression. Alternatively, you can use "LaTeX" commands:

\lim_{x\rightarrow 0}\sin(x)/x = 1

How about the inverse of a 2x2 matrix?

I suggest using a "MATLAB-like" expression:

inv([a,b;c,d]) = 1/(a*d-b*c)*[d,-b;-c,a]
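To convince the other person (or yourself) that the expression is right, one can check it numerically. A small sketch in Python with plain lists (the sample values are arbitrary):

```python
# Check inv([a,b;c,d]) = 1/(a*d - b*c) * [d,-b;-c,a] on a sample 2x2 matrix.
a, b, c, d = 3.0, 1.0, 2.0, 4.0
det = a * d - b * c  # determinant, assumed nonzero

A = [[a, b], [c, d]]
A_inv = [[d / det, -b / det],
         [-c / det, a / det]]

# The product A * A_inv should be (numerically) the identity matrix.
prod = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)  # approximately [[1.0, 0.0], [0.0, 1.0]]
```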

This will be unambiguous if the other person also knows MATLAB commands.

If you want to write mathematical expressions as complicated as those in the cubic formula, I suggest making a PDF file and just attaching it!

May 14, 2012

Causality in signal processing

Signal processing theory often gives a non-causal filter. For example, see the following paper:

M. Unser, A. Aldroubi, and M. Eden,
B-spline Signal Processing: Part II---Efficient Design and Applications,
Signal Processing, IEEE Transactions on , vol.41, no.2, pp.834-848, Feb 1993

A non-causal filter like the one above poses no problem in image processing. However, in a real-time system, in particular in a feedback loop, non-causality causes an implementation problem. The simplest way to avoid this is truncation of the non-causal part of the filter impulse response, say {f(n): n<0}. This significantly degrades the filter performance. The second simplest way is to truncate {f(n): n<-N} (often with a window) and shift the remainder by N.
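The truncate-and-shift idea can be sketched in a few lines of Python (the helper `make_causal` and the sample impulse response are mine, for illustration only; they are not tied to any of the papers below):

```python
# Truncate-and-shift: keep {f(n): n >= -N}, drop the rest of the non-causal
# tail, and delay the kept part by N samples so the result is causal.
# An illustrative sketch only.

def make_causal(h, n_min, shift):
    """h[k] is the impulse response at index n = n_min + k.
    Keep samples with n >= -shift and delay them by shift samples."""
    out = []
    for k, v in enumerate(h):
        n = n_min + k
        if n >= -shift:                 # truncate {f(n): n < -shift}
            out.append((n + shift, v))  # shifted index is now >= 0 (causal)
    return out

# A symmetric (hence non-causal) impulse response on n = -3, ..., 3:
h = [0.1, 0.2, 0.4, 1.0, 0.4, 0.2, 0.1]
print(make_causal(h, n_min=-3, shift=2))
```

The price of this approximation is the truncation error plus an overall delay of N samples, which is exactly what the more sophisticated designs below try to handle optimally.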

Recently, much more sophisticated methods have been proposed: via constrained least squares, which is related to H2 optimization:

M. Unser and M. Eden,
FIR approximations of inverse filters and perfect reconstruction filter banks,
Signal Processing, Vol. 36, No. 2, pp. 163-174, 1994.

and via H∞ optimization:

M. Nagahara and Y. Yamamoto,
H∞ optimal approximation for causal spline interpolation,
Signal Processing, Vol. 91, No. 2, pp. 176-184, 2011.

As I mentioned in a previous entry, H∞ optimization leads to robustness against signal uncertainty. If you like robustness (or if you don't like risk-taking behavior), then you should choose H∞! You can use MATLAB codes for the H∞ design here.

May 12, 2012

H2 versus H∞

In signal processing, the H2 and H∞ norms are the most important measures of system performance. In this entry, I try to explain the difference between them from the viewpoint of filter design.

For a stable and causal system F(z), the H2 norm is defined by

||F||2 = ( (1/2π) ∫_{-π}^{π} |F(e^{jω})|^2 dω )^{1/2}.

This is the average (or root-mean-square) value of the magnitude of the frequency response of F(z).

On the other hand, the H∞ norm is defined by

||F||∞ = max_{ω} |F(e^{jω})|.

This is the maximum value of the magnitude of the frequency response of F(z).
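Both norms can be approximated by sampling the frequency response on a dense grid. Here is a sketch for an FIR filter in Python (the helper names are mine, for illustration; real designs would use control/signal-processing toolboxes):

```python
import cmath
import math

# Approximate the H2 and H-infinity norms of an FIR filter
# F(z) = f[0] + f[1] z^{-1} + ... by sampling F(e^{jw}) on a grid.
# An illustrative sketch only.

def freq_response(f, w):
    z = cmath.exp(1j * w)
    return sum(fk * z ** (-k) for k, fk in enumerate(f))

def h2_norm(f, n_grid=4096):
    # root-mean-square of |F(e^{jw})| over one period
    s = sum(abs(freq_response(f, 2 * math.pi * i / n_grid)) ** 2
            for i in range(n_grid))
    return math.sqrt(s / n_grid)

def hinf_norm(f, n_grid=4096):
    # maximum of |F(e^{jw})| over the grid
    return max(abs(freq_response(f, 2 * math.pi * i / n_grid))
               for i in range(n_grid))

f = [0.5, 0.5]  # simple averaging filter: |F(e^{jw})| = |cos(w/2)|
print(h2_norm(f), hinf_norm(f))  # about 0.7071 and 1.0
```

Note that the H2 norm never exceeds the H∞ norm, since an average cannot exceed a maximum.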

Now, let us consider the following inverse-filtering problem:

Given a filter H(z), find another filter K(z) such that H(z)K(z) ≈ 1.

For this problem, we can use the above two norms.  By the H2 norm, the problem is formulated as

(H2) Find stable and causal K(z) that minimizes || H(z)K(z)-1 ||2

By the H∞ norm, the problem is formulated as

(Hinf) Find stable and causal K(z) that minimizes || H(z)K(z)-1 ||∞

The difference is that (H2) tries to minimize the average magnitude of the error system H(z)K(z)-1, while (Hinf) tries to minimize its maximum magnitude. The following picture shows an example of the two designs.
The blue line is the magnitude of the frequency response of the error H(z)K(z)-1 by the H2 design. The error is very small at almost all frequencies but is very large around one frequency, say f0. On the other hand, the red line is the error magnitude by the H∞ design, which is uniformly small, since H∞ optimization is a min-max formulation that tries to make the response as flat as possible. The average value by H2 is smaller than that by H∞, and the maximum value by H∞ is smaller than that by H2.

By this picture, the difference is clear.

The H2-designed filter shows very nice performance at almost all frequencies but is fragile at the frequency f0. Hence, H2 is the better choice if it is quite certain that the input signals contain no frequency components around f0. On the other hand, the H∞-designed filter guarantees a certain error level at all frequencies. In other words, H∞ optimization leads to robustness against uncertainty in the frequency components of the input signals.

For details, please check this article and the MATLAB codes for the article.

May 9, 2012

Systematic vs random error

Quoted from Wikipedia:

Systematic error:
Systematic errors are biases in measurement which lead to the situation where the mean of many separate measurements differs significantly from the actual value of the measured attribute. All measurements are prone to systematic errors, often of several different types. Sources of systematic error may be imperfect calibration of measurement instruments (zero error), changes in the environment which interfere with the measurement process and sometimes imperfect methods of observation can be either zero error or percentage error.
Random error:
Random errors are errors in measurement that lead to measurable values being inconsistent when repeated measures of a constant attribute or quantity are taken. The word random indicates that they are inherently unpredictable, and have null expected value, namely, they are scattered about the true value, and tend to have null arithmetic mean when a measurement is repeated several times with the same instrument. All measurements are prone to random error.
Also in numerical computation, we often encounter these errors. Discretization, quantization, and truncation errors are systematic. Bugs in a program, misplaced initial values for an algorithm, and inappropriate algorithms for a problem may be sources of systematic errors.

Then, may random errors happen in numerical computation?


Examples are randomized algorithms such as Monte Carlo methods. Compressed sensing (CS) may also lead to random errors, since CS uses a random measurement process. Repeated computation may reduce the random error by finding the sparsest solution among the obtained solutions. However, in some applications such repetition is not allowed. In this case, a deterministic formulation of CS is required, on which there are far fewer studies than on stochastic CS.

For deterministic CS, I found a nice web page here.

Note 1: After this entry was posted, Igor emailed me about the description of randomness in CS. I originally wrote that the random error can be reduced by averaging, but Igor pointed out that this is not true; the sparsest solution among the solutions obtained by repeated trials is the exact one. I have corrected the mistake as above.
Note 2: More discussion is needed on the error in CS. Can we really call it a random error?

May 6, 2012

Zero, one, and infinity in compressed sensing

I am very happy that I got many comments on my first "technical" blog entry, where I asked a question about the relationship between the zero and infinity norms in CS (compressed sensing).

Igor pointed out his blog entries (here and here) in his comment. According to the entries, an L-infinity optimization may produce a vector whose elements stick to ±||x||_∞. Applying the DFT to such a vector, e.g., x = (1,1,...,1), results in a sparse vector in the Fourier domain. This may be a simple explanation of why the infinity norm is related to CS.
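The all-ones example is easy to check: its DFT puts all the energy into a single bin. A sketch with a naive DFT in pure Python (NumPy's fft would give the same result):

```python
import cmath

# The DFT of x = (1, 1, ..., 1) is maximally sparse: all the energy lands
# on the zero-frequency bin. A naive DFT sketch for illustration.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

X = dft([1.0] * 8)
print([round(abs(v), 6) for v in X])  # [8.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

For k ≠ 0 each bin is a sum of the n-th roots of unity, which cancels to zero; only the zero-frequency bin survives.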

Mahesh and Thomas pointed out that the infinity norm is also used in the Dantzig selector. This is based on the fact that the ell-infinity norm is the dual norm of the ell-1 norm. This is yet another link between zero and infinity (note that L-1 is related to L-0 by many studies on CS). This also interests me very much.

May 5, 2012

Beyond Shannon: infinity versus zero

Here is a recent paper on signal reconstruction:

Yamamoto, Y.; Nagahara, M.; Khargonekar, P.P.
Signal Reconstruction via  H-infinity Sampled-Data Control Theory—Beyond the Shannon Paradigm,
Signal Processing, IEEE Transactions on , vol.60, no.2, pp.613-625, Feb. 2012.
This paper presents a new method for signal reconstruction by leveraging sampled-data control theory. We formulate the signal reconstruction problem in terms of an analog performance optimization problem using a stable discrete-time filter. The proposed H∞ performance criterion naturally takes inter-sample behavior into account, reflecting the energy distributions of the signal. We present methods for computing optimal solutions which are guaranteed to be stable and causal. Detailed comparisons to alternative methods are provided. We discuss some applications in sound and image reconstruction.

This paper proposes "beyond Shannon" signal reconstruction based on the H∞ norm. Recently, another "beyond Shannon" method, called compressed sensing (CS), has been widely studied in signal processing. It is interesting that CS uses the so-called 0-"norm," which sits at the opposite extreme from the infinity norm.

Is there  a link between infinity and zero?

Related entries:
Beyond Shannon with your iPhone
Zero, one, and infinity in compressed sensing

May 4, 2012

Just getting started

Welcome to my blog!

I have decided to start this blog after reading:

5 Ways a Personal Blog Can Boost Your Career
What better way to introduce yourself to thousands of people than by giving them a window into how you think, how you write, whether you come up with good ideas, and how adept you are at marketing? That’s what a personal blog is, after all. “Some people have suggested that a personal blog will someday replace the resume,” says an article. For a potential employer, your posts—as well as your ability to cultivate an audience—provide concrete examples of what you might contribute to their firm. Bloggers can also benefit from the built-in avenue for relationship building: the comment section.

Will this blog also yield any benefit?

God Knows.