HW 2, EECS 207A. Fall 2004. UCI.
First problem.
by Nasser Abbasi
Problem
The following table displays the pixel intensity values of a macroblock of a 1-dimensional image:
60, 75, 86, 200, 235, 255, 46, 34
a) Compute the DCT coefficients of this function.
b) Compute the original function from the DCT using the IDCT algorithm.
c) Ignore the last 4 DCT coefficients and recompute the original function. How much error is introduced?
Extra work: I also did an additional analysis on this problem, showing how the error in the data changes as a function of the number of terms dropped from the DCT table.
Part(a)
Define the DCT and IDCT functions
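The original Mathematica input cells were not preserved in this export. As a sketch of what this cell defined, here are the forward and inverse transforms in Python, assuming the orthonormal DCT-II convention and its inverse (DCT-III); the exact normalization used in the notebook is an assumption, since the defining cell is missing.

```python
import math

def dct(x):
    """Orthonormal DCT-II: X[k] = c(k) * sum_n x[n] cos(pi (2n+1) k / 2N),
    with c(0) = sqrt(1/N) and c(k>0) = sqrt(2/N)."""
    N = len(x)
    return [math.sqrt((1.0 if k == 0 else 2.0) / N)
            * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                  for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse transform (DCT-III): recovers x from its DCT-II coefficients."""
    N = len(X)
    return [sum(math.sqrt((1.0 if k == 0 else 2.0) / N)
                * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for k in range(N))
            for n in range(N)]
```

With this normalization the transform is orthogonal, so `idct(dct(x))` returns the original data to within floating-point precision.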
Define the data and create the C table
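The cell that built the data and the C table is missing from the export. A sketch in Python, assuming the C table is the 8x8 orthonormal DCT-II basis matrix, so that row `C[k]` holds the k-th cosine basis vector and the transform is the matrix-vector product of C with the data:

```python
import math

x = [60, 75, 86, 200, 235, 255, 46, 34]   # the macroblock pixel values
N = len(x)

# C[k][n] = c(k) cos(pi (2n+1) k / 2N), with c(0) = sqrt(1/N), c(k>0) = sqrt(2/N)
C = [[math.sqrt((1.0 if k == 0 else 2.0) / N)
      * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
      for n in range(N)]
     for k in range(N)]
```

Because the rows of C are orthonormal, the inverse transform is simply multiplication by the transpose of C.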
Create the DCT values and print them.
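The computation cell is missing; a sketch of the step it performed, again assuming the orthonormal DCT-II convention: the coefficients are the product of the C table with the data vector.

```python
import math

x = [60, 75, 86, 200, 235, 255, 46, 34]
N = len(x)
C = [[math.sqrt((1.0 if k == 0 else 2.0) / N)
      * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
      for n in range(N)]
     for k in range(N)]

# DCT coefficients: X = C . x
X = [sum(C[k][n] * x[n] for n in range(N)) for k in range(N)]
for k, Xk in enumerate(X):
    print(f"X[{k}] = {Xk:10.4f}")
```

Note that with this normalization the DC term X[0] equals the sum of the samples divided by sqrt(8).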
Part(b)
Use the IDCT to recompute the original data, i.e., use all the DCT points.
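The reconstruction cell was not preserved; a self-contained Python sketch of part (b), under the same orthonormal DCT-II assumption: forward transform, then inverse using all 8 coefficients.

```python
import math

x = [60, 75, 86, 200, 235, 255, 46, 34]
N = len(x)
c = lambda k: math.sqrt((1.0 if k == 0 else 2.0) / N)

# forward DCT, then inverse using all 8 coefficients
X = [c(k) * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N)) for k in range(N)]
y = [sum(c(k) * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
         for k in range(N)) for n in range(N)]

print([round(v, 6) for v in y])   # matches the original data to float precision
```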
Part(c)
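The part (c) cell is missing from the export. A sketch of the step it performed: zero the last 4 DCT coefficients, invert, and measure the error against the original pixel values (the choice of error measure here is an assumption, since the original output is gone).

```python
import math

x = [60, 75, 86, 200, 235, 255, 46, 34]
N = len(x)
c = lambda k: math.sqrt((1.0 if k == 0 else 2.0) / N)

X = [c(k) * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N)) for k in range(N)]
Xt = X[:4] + [0.0] * 4              # drop (zero) the last 4 coefficients
y = [sum(c(k) * Xt[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
         for k in range(N)) for n in range(N)]

errors = [abs(a - b) for a, b in zip(x, y)]
print("reconstructed :", [round(v, 2) for v in y])
print("max abs error :", round(max(errors), 2))
print("max pct error :", round(max(e / a * 100 for e, a in zip(errors, x)), 2))
```

Dropping the high-frequency terms leaves only a smooth 4-term cosine sum, which cannot follow the sharp 255-to-46 jump in the data, so a noticeable error appears there.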
Extra work. Generate a table showing how much error (in percent) is introduced in the recomputed data as we drop more terms from the DCT. Try from 1 to 7 terms dropped.
Data recomputed; row r shows the data recomputed after dropping r - 1 terms from the DCT.
For example, the first row shows the data recomputed with ZERO terms dropped from the DCT.
The second row shows the data recomputed with ONE term dropped from the DCT, and so on.
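The cells that generated this table were not preserved. A Python sketch of the same experiment, under the orthonormal DCT-II assumption used above: for each number of dropped trailing terms, reconstruct the data and report the maximum and average percentage error.

```python
import math

x = [60, 75, 86, 200, 235, 255, 46, 34]
N = len(x)
c = lambda k: math.sqrt((1.0 if k == 0 else 2.0) / N)

X = [c(k) * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N)) for k in range(N)]

print("dropped  max % error  avg % error")
for d in range(0, 8):               # d = number of trailing DCT terms zeroed
    Xt = X[:N - d] + [0.0] * d
    y = [sum(c(k) * Xt[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
             for k in range(N)) for n in range(N)]
    pct = [abs(a - b) / a * 100 for a, b in zip(x, y)]
    print(f"{d:7d}  {max(pct):11.2f}  {sum(pct)/N:11.2f}")
```

The d = 0 row serves as a sanity check: with no terms dropped, both error columns are zero to floating-point precision.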
Conclusion
The maximum error in the recomputed data array generally increases as more DCT terms are dropped, but not monotonically. In this example, when we dropped the 6th term from the DCT table the maximum error actually became smaller, while the average error in the data increased every time, as expected. I am not sure why the maximum error does not grow with every additional dropped term; this needs more investigation. I tried this on another data input and saw the same behavior.
Created by Mathematica (October 21, 2004)