Systematic errors are biases in measurement that cause the mean of many separate measurements to differ significantly from the actual value of the measured attribute. All measurements are prone to systematic errors, often of several different types. Sources of systematic error include imperfect calibration of measurement instruments (zero error), changes in the environment that interfere with the measurement process, and imperfect methods of observation; these may appear as either a zero error or a percentage error.

Random error:
Random errors are errors in measurement that lead to measured values being inconsistent when repeated measurements of a constant attribute or quantity are taken. The word "random" indicates that they are inherently unpredictable and have zero expected value: they are scattered about the true value and tend to have zero arithmetic mean when a measurement is repeated several times with the same instrument. All measurements are prone to random error.

We also encounter these errors in numerical computation. Discretization, quantization, and truncation errors are systematic. Bugs in a program, poorly chosen initial values for an algorithm, and algorithms inappropriate for a problem may also be sources of systematic error.
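To make the distinction concrete, here is a small Python sketch (my own illustrative example, not from the original post) of a truncation error: approximating exp(x) by a truncated Taylor series. The error is the same on every run, so averaging repeated evaluations cannot reduce it; that deterministic repeatability is the hallmark of a systematic error.

```python
import math

def exp_taylor(x, n_terms):
    """Truncated Taylor series for exp(x) -- a deliberately truncated sum."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
approx = exp_taylor(x, 5)          # 1 + 1 + 1/2 + 1/6 + 1/24
error = math.exp(x) - approx       # truncation error, identical every run
# Averaging repeated evaluations leaves the error unchanged:
# the bias is systematic, not random.
```

Running the approximation twice gives exactly the same error, unlike a noisy physical measurement.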
Can random errors, then, occur in numerical computation?
Examples are randomized algorithms such as Monte Carlo methods. Compressed sensing (CS) may also lead to random errors, since CS uses a random measurement process. As Wikipedia suggests, repeated computation may reduce random errors by averaging.
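For Monte Carlo methods the averaging picture does hold. The sketch below (a standard quarter-disc estimator of pi, chosen by me as an illustration) shows that independent runs scatter around the true value, and that the scatter shrinks roughly as 1/sqrt(N) when more samples are averaged:

```python
import random

def mc_pi(n_samples, seed):
    """Monte Carlo estimate of pi from the hit rate of random
    points falling inside the unit quarter disc."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random()**2 + rng.random()**2 <= 1.0)
    return 4.0 * hits / n_samples

# Independent runs scatter about pi; the scatter is the random error.
small = [mc_pi(100, s) for s in range(50)]       # few samples per run
large = [mc_pi(10_000, s) for s in range(50)]    # many samples per run

spread_small = max(small) - min(small)
spread_large = max(large) - min(large)
# spread_large is much smaller: averaging more samples
# reduces the random error, roughly as 1/sqrt(N).
```

This is exactly the behavior one expects of a random error: zero mean scatter about the true value that repeated, averaged computation suppresses.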
For deterministic CS, I found a nice web page here.
Note 1: After this entry was posted, Igor emailed me about the description of randomness in CS. I wrote that the random error can be reduced by averaging, but Igor pointed out that this is not true; among the solutions obtained by repeated trials, the sparsest one is the exact solution. I correct that mistake as noted above.
Note 2: The error in CS needs more discussion. Can we really call it a random error?