May 29, 2012

Newton's method as a feedback system

This entry is inspired by the work by K. Kashima and Y. Yamamoto.


To obtain an approximate value of sqrt(a), one can use Newton's method:

x[n+1] = (x[n] + a/x[n])/2,  n = 0, 1, 2, ...

with a given real x[0]. (This is Newton's iteration x[n+1] = x[n] - f(x[n])/f'(x[n]) applied to f(x) = x^2 - a.) This can easily be implemented in software such as MATLAB or SCILAB, but today I will represent it as a feedback system.
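
For example, a direct implementation in Python looks like the following minimal sketch (my own; the function name newton_sqrt is not from the original post):

    def newton_sqrt(a, x0=1.0, n_iter=20):
        # Iterate x[n+1] = (x[n] + a/x[n]) / 2, starting from x0.
        x = x0
        for _ in range(n_iter):
            x = 0.5 * (x + a / x)
        return x

    print(newton_sqrt(2.0))    # 1.414213562... ~= sqrt(2)
    # Try a negative value such as a = -2: the sequence never converges
    # and wanders chaotically, as demonstrated at the end of this post.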

First, the iteration can be represented by the operator equation

x = σ( (x + a/x)/2 ),

where σ denotes the shift (delay) operator

(σy)[n] = y[n-1],  n = 1, 2, ...

In control theory or signal processing, this operator is also denoted by z⁻¹. Next, the iteration is divided into two parts, φ and σ, that is,

y = φ(x) := (x + a/x)/2,   x = σy.

Finally, this can be represented by the following block diagram:

[Block diagram: the map φ in the forward path, closed through the delay block 1/z]

Based on this, you can run a simulation with XCOS in SCILAB (the xcos file is here).

[Screenshot: XCOS model of the feedback system for sqrt(2)]

This is a "feedback system" for sqrt(2). You can change the initial value x[0] by double-clicking the "1/z" block. Also, if you want another square root, double-click the "Expression" block and change the "2" in the numerator to another positive number. But if you feel like wilding out, use a negative number instead. In this case, the sequence will not converge but will show chaotic behavior. The following is the result for "-2."

[Figure: simulation result for a = -2; the sequence does not converge and looks chaotic]


May 27, 2012

Play with a scientific calculator

I posted an entry where I showed a very simple method to solve x = cos(x) with a scientific calculator. Now, let's continue playing with it.
  1. Bring your scientific calculator (you can use your iPhone or a web calculator).
  2. Change the mode to "rad" (radian).
  3. Push the "1" (one) key.
  4. Push the "sin" key.
  5. Push the "1/x" key.
  6. Go back to step 4, unless you want out.
You will see two numbers:
  • 0.8975395... (after step 4)
  • 1.1141571... (after step 5)
The second number is a solution of the equation x = 1/sin(x) (this equation has infinitely many solutions), and the first number is its reciprocal.
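
Here is a minimal Python sketch of the same button-pushing loop (my own re-implementation; the author's simulations below use SCILAB):

    import math

    x = 1.0                  # step 3: push "1"
    for _ in range(50):
        s = math.sin(x)      # step 4: push "sin"
        x = 1.0 / s          # step 5: push "1/x"
    print(s, x)              # 0.8975395...  1.1141571...

The printed pair matches the two numbers above: x is a fixed point of x = 1/sin(x), and s = sin(x) is its reciprocal.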


Then, change "sin" to "cos" in step 4. Does it converge?

No.

I simulated the procedure using SCILAB and obtained the sequence shown below.

[Figure: the obtained sequence, which does not converge]

This looks like a chaotic sequence.
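
You can reproduce this with the same kind of sketch as above, swapping sin for cos (again my own re-implementation):

    import math

    x = 1.0
    seq = []
    for _ in range(50):
        x = 1.0 / math.cos(x)
        seq.append(x)
    print(seq[-5:])   # the values keep jumping around; no settling

This is expected: the fixed points of x = 1/cos(x) are all repelling (the derivative of 1/cos(x) there has magnitude greater than 1), so the iteration cannot converge to any of them.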

May 23, 2012

The simplest way to solve "x=cos(x)"


The following is the simplest way to solve the equation "x=cos(x)" by using a scientific calculator:
  1. Bring your scientific calculator (you can use your iPhone or a web calculator).
  2. Change the mode to "rad" (radian).
  3. Push the "1" (one) key.
  4. Continue to push the "cos" key until you feel bored.
The result should be an approximate value of the solution, x = 0.73908513...

Why?

The procedure is exactly the following iteration:
x[n+1] = cos(x[n]),  n=0,1,2,...,  x[0]=1.
By the contraction mapping theorem (a fixed-point theorem), you can prove that the sequence {x[n]} converges to the solution.
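
A quick numerical check in Python (my own sketch):

    import math

    x = 1.0
    for _ in range(100):
        x = math.cos(x)    # x[n+1] = cos(x[n])
    print(x)               # 0.7390851... (the unique fixed point of cos)

Convergence holds because the iterates stay in an interval on which |cos'(x)| = |sin(x)| < 1, so cos is a contraction there.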

I used this demo when I taught iteration methods in a "numerical computation" class. It is a nice example for motivating students to learn more abstract theory.

May 20, 2012

Recent papers on compressed sensing for control systems

In a past entry, I mentioned that compressed sensing is not so popular in the control systems community. However, there are several works on control systems with compressed sensing.

Trajectory generation

State observation

Optimal control

Networked control
As I mentioned before, there remain difficulties in using CS for control systems (especially closed-loop ones). I believe this subject (CS for control systems) will become more important.

Related entries:
Compressed sensing meets control systems

May 19, 2012

Compressed sensing meets control systems

After attending a lecture on compressed sensing (CS) given by Prof. Toshiyuki Tanaka in 2010, I came up with a vague idea of applying CS to control systems.

CS is not as popular in the control systems community as it is in signal processing. Although L1 optimal control is a relatively classical problem (see, e.g., the work by Dahleh and Pearson here), that line of research has not been concerned with the sparsity-promoting property of L1 optimization.

On the other hand, networked control has recently attracted a lot of attention in the control systems community. In networked control, a controller is placed away from the controlled plant and has to communicate with it over rate-limited networks, such as wireless networks (imagine controlling this helicopter, for example). In this situation, the data should be compressed to satisfy the rate constraint.

My idea is to use CS techniques to compress the signal data in networked control systems. More precisely, we use L1 optimization to obtain sparse representations of the control signals transmitted through rate-limited networks subject to erasures. I discussed the subject with Dr. Daniel E. Quevedo, and we presented the work at a conference:

M. Nagahara and D. E. Quevedo,
Sparse Representations for Packetized Predictive Networked Control,
IFAC 18th World Congress, pp. 84-89, Aug 2011.

and also a journal version

M. Nagahara, D. E. Quevedo, and J. Ostergaard,
Sparse Packetized Predictive Control for Networked Control over Erasure Channels,
IEEE Transactions on Automatic Control, Vol. 59, No. 7, Jul 2014.


I believe this is the first paper to apply CS (more precisely, sparse representation or sparse approximation) to networked control systems. However, a couple of difficulties remain:

  1. In the term "Ax-b", the matrix A is highly structured (not randomized), and it is not known in advance whether x is sparse.
  2. The matrix "A" includes model error (e.g., error from linearization).
  3. The vector "b" is subject to noise (e.g., quantization noise).
  4. Computation should be extremely fast since computational delay may cause instability of the closed-loop system.
  5. Only cheap quantization (e.g., a uniform scalar quantizer) can be used.

To see these difficulties, imagine again controlling the helicopter to make it fly stably along a desired trajectory. The problem is very challenging.
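
To make the optimization concrete, here is a generic basis-pursuit sketch in Python with the cvxpy package (an illustrative toy problem, not the actual algorithm of the papers above; in networked control, A would be structured and b noisy, which is exactly where difficulties 1-3 bite):

    import numpy as np
    import cvxpy as cp

    # Toy instance: recover a k-sparse x from m < n measurements b = A x.
    rng = np.random.default_rng(0)
    n, m, k = 50, 20, 3
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true

    # L1 optimization (basis pursuit): min ||x||_1  s.t.  A x = b
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b]).solve()
    print(np.round(x.value, 4))   # ~= x_true when A is sufficiently "random"

Difficulty 4 also shows up here: a generic convex solver is far too slow for a fast control loop, so dedicated fast algorithms are needed.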

For recent papers on CS for control systems, see this entry.


May 17, 2012

Mathematical Notation in Email

I sometimes use email to discuss mathematics with researchers and engineers, and I always face the problem of writing mathematical notation. How do you write, for example, the following equation in an email?

lim_{x→0} sin(x)/x = 1

A nice web page suggests writing

lim{x->0}sin(x)/x  =  1

This is an unambiguous expression. Alternatively, you can use "LaTeX" commands:

\lim_{x\rightarrow 0}\sin(x)/x = 1

How about the following?

[ a  b ]^(-1)    1    [  d  -b ]
[ c  d ]      = ----- [ -c   a ]
                ad-bc

I suggest using a "MATLAB-like" expression:

inv([a,b;c,d]) = 1/(a*d-b*c)*[d,-b;-c,a]

This will be unambiguous if the other person also knows MATLAB commands.
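
Alternatively, if both sides know LaTeX, the same identity can be written as a LaTeX snippet (my suggestion, in the spirit of the \lim example above):

    \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1}
      = \frac{1}{ad - bc}
        \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}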

If you want to write mathematical expressions as complicated as those in the cubic formula, I suggest making a PDF file and just attaching it!


May 14, 2012

Causality in signal processing

Often, signal processing theorems give a non-causal filter. For example, see the following paper:

M. Unser, A. Aldroubi, and M. Eden,
B-spline Signal Processing: Part II---Efficient Design and Applications,
IEEE Transactions on Signal Processing, vol. 41, no. 2, pp. 834-848, Feb. 1993.

A non-causal filter like this poses no problem in image processing. However, in a real-time system, in particular in a feedback loop, non-causality causes an implementation problem. The simplest way to avoid this is to truncate the non-causal part of the filter impulse response, say {f(n): n < 0}. This severely degrades the filter performance. The second simplest is to truncate {f(n): n < -N} (often with a window) and shift the remainder by N.
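
As a minimal illustration of the truncate-and-shift idea (my own sketch, assuming a two-sided exponential impulse response as a stand-in for a non-causal filter):

    import numpy as np

    # Non-causal impulse response f(n) = 0.5^|n| for n = -10, ..., 10
    n = np.arange(-10, 11)
    f = 0.5 ** np.abs(n)

    # Keep only taps with n >= -N, then treat the result as a causal FIR,
    # i.e., the original filter shifted (delayed) by N samples.
    N = 5
    h = f[n >= -N]      # approximates z^(-N) * F(z)
    print(h)

The resulting filter reproduces the original response up to a delay of N samples; larger N means smaller truncation error but longer delay, and a window can be applied before the shift to reduce truncation artifacts.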

Recently, more sophisticated methods have been proposed. One is via constrained least squares, which is related to H2 optimization:

M. Unser and M. Eden,
FIR approximations of inverse filters and perfect reconstruction filter banks,
Signal Processing, Vol. 36, No. 2, pp. 163-174, 1994.

and another is via H∞ optimization:

M. Nagahara and Y. Yamamoto,
H∞ optimal approximation for causal spline interpolation,
Signal Processing, Vol. 91, No. 2, pp. 176-184, 2011.

As I mentioned in a previous entry, H∞ optimization leads to robustness against signal uncertainty. If you like robustness (or if you don't like risk-taking behavior), then you must choose H∞! You can use the MATLAB codes for the H∞ design here.


May 12, 2012

H2 versus H∞

In signal processing, the H2 and H∞ norms are the most important measures of system performance. In this entry, I try to explain the difference between them from the viewpoint of filter design.


For a stable and causal system F(z), the H2 norm is defined by

||F||_2 = sqrt( (1/(2π)) ∫_{-π}^{π} |F(e^{jω})|^2 dω )

This is the average (or root-mean-square) value of the magnitude of the frequency response of F(z).

On the other hand, the H∞ norm is defined by

||F||_∞ = max_{ω ∈ [-π, π]} |F(e^{jω})|

This is the maximum value of the magnitude of the frequency response of F(z).
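
Both norms are easy to estimate numerically. Here is a small Python sketch (my own illustration with an arbitrary FIR filter, using scipy):

    import numpy as np
    from scipy.signal import freqz

    # An arbitrary stable and causal filter F(z) = 1 + 0.5 z^(-1) + 0.25 z^(-2)
    b = [1.0, 0.5, 0.25]
    w, F = freqz(b, worN=4096)                # frequency response on [0, pi]

    h2   = np.sqrt(np.mean(np.abs(F) ** 2))  # H2 norm: RMS magnitude
    hinf = np.max(np.abs(F))                 # H-infinity norm: peak magnitude
    print(h2, hinf)

For a real-coefficient filter the magnitude response is symmetric in ω, so averaging over [0, π] gives the same result as over [-π, π].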


Now, let us consider the following inverse-filtering problem:

Given a filter H(z), find another filter K(z) such that H(z)K(z) ≈ 1.

For this problem, we can use the above two norms.  By the H2 norm, the problem is formulated as

(H2) Find a stable and causal K(z) that minimizes ||H(z)K(z) - 1||_2

By the H∞ norm, the problem is formulated as

(Hinf) Find a stable and causal K(z) that minimizes ||H(z)K(z) - 1||_∞

The difference is that (H2) tries to minimize the average magnitude of the error system H(z)K(z) - 1, while (Hinf) tries to minimize its maximum magnitude. The following picture shows an example of the two designs.

[Figure: error magnitude |H(e^{jω})K(e^{jω}) - 1| versus frequency for the H2 design (blue) and the H∞ design (red)]

The blue line is the magnitude of the frequency response of the error H(z)K(z) - 1 for the H2 design. The error is very small at almost all frequencies but very large around one frequency, say f0. On the other hand, the red line is the error magnitude for the H∞ design, which is uniformly small, since the H∞ optimization is a min-max design that tries to make the response as flat as possible. The average value of the H2 design is smaller than that of the H∞ design, and the maximum value of the H∞ design is smaller than that of the H2 design.

This picture makes the difference clear.

The H2-designed filter shows very good performance at almost all frequencies but is fragile around the frequency f0. Hence, H2 is the better choice if it is quite certain that the input signals contain no frequency components around f0. On the other hand, the H∞-designed filter guarantees a certain error level at all frequencies. In other words, H∞ optimization leads to robustness against uncertainty in the frequency components of the input signals.

For details, please check this article and the MATLAB codes for it.



May 9, 2012

Systematic vs random error

Quoted from Wikipedia:

Systematic error:
Systematic errors are biases in measurement which lead to the situation where the mean of many separate measurements differs significantly from the actual value of the measured attribute. All measurements are prone to systematic errors, often of several different types. Sources of systematic error may be imperfect calibration of measurement instruments (zero error), changes in the environment which interfere with the measurement process and sometimes imperfect methods of observation can be either zero error or percentage error.
Random error:
Random errors are errors in measurement that lead to measurable values being inconsistent when repeated measures of a constant attribute or quantity are taken. The word random indicates that they are inherently unpredictable, and have null expected value, namely, they are scattered about the true value, and tend to have null arithmetic mean when a measurement is repeated several times with the same instrument. All measurements are prone to random error.
We also often encounter these errors in numerical computation. Discretization, quantization, and truncation errors are systematic. Bugs in a program, badly chosen initial values for an algorithm, and algorithms inappropriate for a problem may also be sources of systematic errors.

Then, can random errors occur in numerical computation?


Yes.

Examples are randomized algorithms such as Monte Carlo methods. Compressed sensing (CS) may also lead to random errors, since CS uses a random measurement process. Such errors may be reduced by repeating the computation and finding the sparsest solution among the obtained solutions (see Note 1 below). However, in some applications such repetition cannot be allowed. In this case, a deterministic formulation of CS is required, on which there are far fewer studies than on stochastic CS.
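
As a tiny illustration of random error in numerical computation, here is a Monte Carlo estimate of π in Python (my own sketch):

    import numpy as np

    rng = np.random.default_rng()
    n = 100_000
    pts = rng.random((n, 2))          # uniform points in the unit square
    inside = (pts ** 2).sum(axis=1) <= 1.0
    print(4 * np.mean(inside))        # close to 3.14159..., but different on every run

Each run gives a slightly different estimate; the error is scattered around zero and shrinks like 1/sqrt(n), which is exactly the "random error" behavior quoted above.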

For deterministic CS, I found a nice web page here.

Note 1: After this entry was posted, Igor emailed me about the description of randomness in CS. I had written that the random error can be reduced by averaging, but Igor pointed out that this is not true; the sparsest solution is the exact solution among the solutions obtained by repeated trials. I have corrected the mistake above.
Note 2: The error in CS needs more discussion. Can we really call it a random error?
(2012/May/13)

May 6, 2012

Zero, one, and infinity in compressed sensing

I am very happy that I got many comments on my first "technical" blog entry, where I asked a question about the relationship between the zero and infinity norms in CS (compressed sensing).

Igor pointed out his blog entries (here and here) in his comment. According to those entries, L-infinity optimization may produce a vector whose elements are stuck at ±||x||_∞. Applying the DFT to such a vector, e.g., x = (1, 1, ..., 1), results in a sparse vector in the Fourier domain. This may be a simple explanation of why the infinity norm is related to CS.
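
You can check this with a couple of lines of NumPy (my own illustration):

    import numpy as np

    x = np.ones(8)          # every element stuck at +1 = ||x||_inf
    print(np.fft.fft(x))    # [8, 0, 0, 0, 0, 0, 0, 0] (up to rounding)

The DFT maps the all-ones vector to a single nonzero coefficient, i.e., the sparsest possible vector.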

Mahesh and Thomas pointed out that the infinity norm is also used in the Dantzig selector. This is based on the fact that L-infinity (ell-infinity) is the dual space of L-1 (ell-1). This is yet another link between zero and infinity (note that L-1 is related to L-0 by the many studies on CS). This also interests me very much.


May 5, 2012

Beyond Shannon: infinity versus zero

Here is a recent paper on signal reconstruction:

Y. Yamamoto, M. Nagahara, and P. P. Khargonekar,
Signal Reconstruction via H-infinity Sampled-Data Control Theory—Beyond the Shannon Paradigm,
IEEE Transactions on Signal Processing, vol. 60, no. 2, pp. 613-625, Feb. 2012.
This paper presents a new method for signal reconstruction by leveraging sampled-data control theory. We formulate the signal reconstruction problem in terms of an analog performance optimization problem using a stable discrete-time filter. The proposed H∞ performance criterion naturally takes inter-sample behavior into account, reflecting the energy distributions of the signal. We present methods for computing optimal solutions which are guaranteed to be stable and causal. Detailed comparisons to alternative methods are provided. We discuss some applications in sound and image reconstruction.

This paper proposes a "beyond Shannon" signal reconstruction method based on the H∞ norm. Recently, another "beyond Shannon" method, called compressed sensing (CS), has been widely studied in signal processing. It is interesting that CS uses the so-called 0-"norm", which lies at the opposite extreme from the infinity norm.

Is there a link between infinity and zero?


Related entries:
Beyond Shannon with your iPhone
Zero, one, and infinity in compressed sensing


May 4, 2012

Just getting started

Welcome to my blog!

I have decided to start this blog after reading:

5 Ways a Personal Blog Can Boost Your Career
What better way to introduce yourself to thousands of people than by giving them a window into how you think, how you write, whether you come up with good ideas, and how adept you are at marketing? That’s what a personal blog is, after all. “Some people have suggested that a personal blog will someday replace the resume,” says a Jobacle.com article. For a potential employer, your posts—as well as your ability to cultivate an audience—provide concrete examples of what you might contribute to their firm. Bloggers can also benefit from the built-in avenue for relationship building: the comment section.

Will this blog also yield any benefit?

God Knows.