
EEL6586: HW#4



The assignment is due Friday, March 19, 2004, in class. Since we will go over the assignment in class on March 22, late homework will not be accepted past the beginning of class that day. Our exam will be Wednesday night, March 24. This assignment includes both Matlab and textbook questions.

PART A: Cepstrum Problems
A1
Compute the complex cepstrum of $H(z)=(1-2z^{-1})/(1+.25z^{-2})$
A2
Compute the real cepstrum of $H(z)=(1-2z^{-1})/(1+.25z^{-2})$
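For A1 and A2, the standard power-series identities (stated here for reference; they are not part of the original handout) are, for $\vert\alpha\vert < 1$ and $\vert\beta\vert < 1$,

\begin{displaymath}\log(1-\alpha z^{-1}) = -\sum_{n=1}^{\infty}\frac{\alpha^n}{n}z^{-n} \qquad \mbox{and} \qquad \log(1-\beta z) = -\sum_{n=1}^{\infty}\frac{\beta^n}{n}z^{n} .\end{displaymath}

Factoring $H(z)$ so that each factor has one of these two forms (handling any leftover gain or delay term separately) is one way to organize the computation.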
A3
Let $x_1(n)$ and $x_2(n)$ denote two sequences and $\hat{x}_1(n)$ and $\hat{x}_2(n)$ their corresponding complex cepstra. If $x_1(n) * x_2(n) = \delta(n)$, determine the relationship between $\hat{x}_1(n)$ and $\hat{x}_2(n)$.
A4
Suppose the complex cepstrum of $y(n)$ is $\hat{y}(n)=\hat{s}(n)+2\delta(n)$. Determine $y(n)$ in terms of $s(n)$.
A5
Euclidean distance in complex cepstral space can be related to an RMS log spectral distance measure. Assuming that

\begin{displaymath}\log S(\omega) = \sum_{n=-\infty}^{n=+\infty}c_n e^{-jn\omega} \end{displaymath}

where $S(\omega)$ is the power spectrum (magnitude-squared Fourier transform), prove the following:

\begin{displaymath}\sum_{n=-\infty}^{n=+\infty} (c_n - c'_n)^2 = \frac{1}{2 \pi} \int \vert\log( S(\omega))-\log(S'( \omega))\vert^2 d \omega\end{displaymath}

where $S(\omega)$ and $S'(\omega)$ are the power spectra for two different signals.
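The A5 identity is easy to illustrate numerically before proving it. The following Matlab sketch (the two all-pole spectra are arbitrary choices for illustration only) compares the two sides on a 512-point frequency grid:

w  = 2*pi*(0:511)/512;                 % frequency grid on [0, 2*pi)
S1 = abs(freqz(1, [1 -0.9], w)).^2;    % power spectrum of one all-pole filter
S2 = abs(freqz(1, [1 -0.5], w)).^2;    % power spectrum of a second filter
c1 = real(ifft(log(S1)));              % cepstral coefficients c_n of log S
c2 = real(ifft(log(S2)));              % cepstral coefficients c'_n of log S'
lhs = sum((c1 - c2).^2);               % Euclidean distance in cepstral space
rhs = mean((log(S1) - log(S2)).^2);    % discrete version of the log spectral integral
fprintf('cepstral: %.6f   log-spectral: %.6f\n', lhs, rhs);

The two printed values match to machine precision on the discrete grid, and both approach the continuous-frequency quantities in A5 as the grid is refined.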
PART B: Linking Cepstrum with LPC

Assuming that

\begin{displaymath}H(z)=\sum_{n=0}^\infty h(n)z^{-n} = \frac{G}{1-\sum_{k=1}^pa(k)z^{-k}}\end{displaymath}

Prove that the complex cepstrum $\hat{h}(n)$ can be derived from the linear prediction coefficients a(k) using the following relation:

\begin{displaymath}
\hat{h}(n)=a(n) + \sum_{k=1}^{n-1}(k/n) \hat{h}(k)a(n-k)
\end{displaymath}

for $n \ge 1$.
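Before attempting the proof, the recursion can be sanity-checked numerically. The Matlab sketch below uses an arbitrary stable second-order all-pole model (the coefficients are illustrative, not taken from the assignment) and compares the recursion against the pole-based expression $\hat{h}(n)=\sum_i p_i^n/n$, which holds for $n \ge 1$ for a minimum-phase all-pole filter:

G = 1;  a = [0.9 -0.4];                % illustrative model: H(z) = G/(1 - 0.9*z^-1 + 0.4*z^-2)
p = length(a);  N = 20;                % model order and number of cepstral terms to check
hhat = zeros(1, N);                    % hhat(n) for n = 1..N; hhat(0) = log(G) is not produced by the recursion
for n = 1:N
    s = 0;
    if n <= p, s = a(n); end           % a(n) is taken as zero beyond the model order
    for k = 1:n-1
        if n-k <= p, s = s + (k/n)*hhat(k)*a(n-k); end
    end
    hhat(n) = s;
end
poles = roots([1 -a]);                 % poles of H(z); all inside the unit circle here
ref = zeros(1, N);
for n = 1:N, ref(n) = real(sum(poles.^n))/n; end
disp(max(abs(hhat - ref)))             % should be near machine precision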

PART C: Phoneme Recognition Experiments

Utterances of 8 vowel phonemes from 38 speakers were extracted from the TIMIT database (about 2300 utterances total). Your goal for this problem is to achieve the highest recognition accuracy for this speech corpus. In the end you are free to do whatever you can to improve recognition accuracy. A demo Matlab program using LPC-10 and 1-NN is provided to demonstrate usage of the database. The following files are provided:

hw4Data.mat: Matlab .mat file (about 9 MB uncompressed) containing the variables vocab, *Utter, and *Speaker, where * is one of the phonemes in vocab. *Utter is a 512xN matrix (N varies with phoneme) where each column is a 512-point utterance extracted from the center (to minimize coarticulation effects) of a labeled phoneme from the TIMIT database. *Speaker is an Nx5 character matrix where row i is the speaker label for column i in *Utter; it is provided to ensure that no speaker appears in both the test and train sets.

hw4Demo.m: Matlab .m file that demonstrates usage of hw4Data.mat. LPC coefficients are extracted in bulk, random test/train speakers are designated for each classifier trial, and the test/train LPC coefficients are used with a 1-Nearest Neighbor classifier. Run this program to make sure you have downloaded the database properly. Tweak the following variables to see their effects on accuracy: percentTest, numTrials, vocab (you can shrink the vocab as a sanity check that your program works properly; a small vocab means high recognition accuracy). Feel free to modify this code when writing your own solution.

hw4Readme.txt: Readme file that describes all files in hw4.zip.

All of these files are conveniently available in hw4.zip, which can be found at:
http://www.cnel.ufl.edu/hybrid/courses/EEL6586/hw4.zip
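Before starting, a quick check (assuming the variable names described above) that the database loaded correctly:

load hw4Data.mat           % should bring in vocab plus the *Utter and *Speaker variables
disp(vocab)                % the 8 vowel phonemes
whos('-regexp', 'Utter$')  % every *Utter matrix should have 512 rows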
C1
Choose a robust feature extraction technique that you think will provide the best results. You may use any feature extraction technique you want (energy, zero-crossing rate, LPC, MFCC, PLP, HFCC, ...) or any combination of these. You are free to look at several different types of feature sets or to invent your own, but do whatever you can to improve the recognition accuracy (without using test data during training). Explain your choice of feature set and why you think it should perform well.
C2
Use a classification algorithm to classify the test data. Again, you are free to use any classifier you like (Nearest Neighbor, Bayes, Neural Network, HMM, ...). Briefly explain why your choice of classifier is a wise one (even if you decide to stay with the nearest-neighbor classifier).
C3
Always include several trials, as in hw4Demo.m, and report the average over all trials; a sketch of such a trial loop appears after C5. For your final version, make sure to include at least 100 trials. What is your final accuracy rate? What is the standard deviation of your accuracy value?
C4
For your final optimized system, which two phonemes are most likely to be confused with one another?
C5
Comment on why it is important that no speaker appear in both the test/train datasets.
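As an aid for C3 and C4, here is a minimal sketch of the per-trial bookkeeping: speaker-disjoint splits, repeated trials, mean and standard deviation of accuracy, and a confusion matrix. The feature extractor myFeatures and the classifier myClassify are hypothetical placeholders for your own C1 and C2 choices (the 1-NN from hw4Demo.m can be dropped in), and the sketch assumes vocab is a character matrix naming the *Utter and *Speaker variables as described above.

load hw4Data.mat
numTrials   = 100;                        % C3 asks for at least 100 trials
percentTest = 0.25;                       % fraction of speakers held out per trial
numPhon = size(vocab, 1);                 % assumes one phoneme name per row of vocab
acc = zeros(numTrials, 1);
confMat = zeros(numPhon);                 % rows = true phoneme, columns = decision
for t = 1:numTrials
    trainX = []; trainY = []; testX = []; testY = [];
    for v = 1:numPhon
        ph    = strtrim(vocab(v, :));     % use vocab{v} instead if vocab is a cell array
        utter = eval([ph 'Utter']);       % 512 x N utterance matrix for this phoneme
        spkr  = cellstr(eval([ph 'Speaker']));        % N speaker labels
        spk   = unique(spkr);             % split by speaker, never by utterance (see C5)
        idx   = randperm(numel(spk));
        testSpk = spk(idx(1:round(percentTest*numel(spk))));
        isTest  = ismember(spkr, testSpk);
        feat    = myFeatures(utter);      % hypothetical feature extractor (C1), one column per utterance
        trainX  = [trainX feat(:, ~isTest)];  trainY = [trainY; v*ones(nnz(~isTest), 1)];
        testX   = [testX  feat(:,  isTest)];  testY  = [testY;  v*ones(nnz(isTest), 1)];
    end
    pred   = myClassify(trainX, trainY, testX);       % hypothetical classifier (C2)
    acc(t) = mean(pred(:) == testY);
    for i = 1:numel(testY)
        confMat(testY(i), pred(i)) = confMat(testY(i), pred(i)) + 1;
    end
end
fprintf('accuracy over %d trials: %.1f%% mean, %.1f%% std\n', numTrials, 100*mean(acc), 100*std(acc));

For C4, the largest off-diagonal entries of confMat identify the phoneme pair most often confused.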
As usual, attach all of your code to the end of the assignment. A total of 5 Bonus points will be awarded to the person(s) with the highest percentage correct classification.
Dr John Harris 2004-04-02