8

Density Estimation — Gaussian Mixtures, EM & Vowel Classification

Summary

Models probability distributions with a Mixture of Gaussians (MoG) trained via the Expectation-Maximization (EM) algorithm. Covers the full EM loop — E-step soft responsibility assignments; M-step parameter updates for the means, covariances, and mixing weights — applied to the Peterson & Barney vowel formant dataset (F1 and F2 frequencies). Builds a maximum-likelihood classifier from two class-conditional GMMs, visualizes its decision boundaries on a meshgrid, confronts the covariance singularity problem that arises with linearly dependent features, and resolves it with regularization. Achieves 95.07% accuracy with K=3 components and 95.72% with K=6.
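The pipeline described above — EM-fitted GMMs per class, a maximum-likelihood decision rule, and regularized covariances — can be sketched as follows. This is a minimal NumPy illustration, not the chapter's actual code: synthetic 2-D blobs stand in for the Peterson & Barney formant data, and the helper names (`fit_gmm`, `gmm_loglik`, the `reg` parameter) are hypothetical. The `reg * I` term added to each covariance is one standard fix for the singularity problem mentioned in the summary.

```python
import numpy as np

def gaussian_pdf(X, mu, Sigma):
    """Multivariate normal density evaluated at each row of X."""
    d = mu.shape[0]
    diff = X - mu
    inv = np.linalg.inv(Sigma)
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(Sigma))
    return norm * np.exp(-0.5 * np.einsum("ni,ij,nj->n", diff, inv, diff))

def fit_gmm(X, K, n_iter=100, reg=1e-6, seed=0):
    """EM for a K-component GMM; reg*I keeps covariances non-singular."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, K, replace=False)]           # init means at data points
    Sigma = np.array([np.cov(X.T) + reg * np.eye(d) for _ in range(K)])
    pi = np.full(K, 1.0 / K)                          # uniform mixing weights
    for _ in range(n_iter):
        # E-step: soft responsibilities r[n, k]
        dens = np.stack([pi[k] * gaussian_pdf(X, mu[k], Sigma[k])
                         for k in range(K)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, covariances
        Nk = r.sum(axis=0)
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (r[:, k, None] * diff).T @ diff / Nk[k] + reg * np.eye(d)
        pi = Nk / n
    return pi, mu, Sigma

def gmm_loglik(X, pi, mu, Sigma):
    """Per-sample log-likelihood under the fitted mixture."""
    dens = np.stack([pi[k] * gaussian_pdf(X, mu[k], Sigma[k])
                     for k in range(len(pi))], axis=1)
    return np.log(dens.sum(axis=1))

# Demo on synthetic 2-D data (stand-in for F1/F2 formants of two vowels).
rng = np.random.default_rng(1)
X0 = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
X1 = rng.normal([3.0, 3.0], 0.5, size=(200, 2))
p0 = fit_gmm(X0, K=2)
p1 = fit_gmm(X1, K=2)

# ML classifier: pick the class whose GMM assigns higher log-likelihood.
Xte = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]
pred = (gmm_loglik(Xte, *p1) > gmm_loglik(Xte, *p0)).astype(float)
acc = (pred == y).mean()
```

On well-separated blobs like these the classifier is near-perfect; the reported 95% range on real formant data reflects genuine overlap between vowel classes.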