Gaussian mixture models (GMMs) have proven to be a powerful statistical method for speaker identification. In the GMM approach, the covariance matrices are usually assumed to be diagonal, which implicitly assumes that the feature components are uncorrelated; this assumption may not hold in practice. This paper focuses on finding an orthogonal, speaker-dependent transformation that reduces the correlation between feature components. The transformation is derived from the eigenvectors of the within-class scatter matrix, which is estimated at each iteration of GMM parameter training; the transformation matrix and the GMM parameters are thus updated jointly until the total log-likelihood converges. The proposed method is evaluated on a 100-person connected-digit database for text-independent speaker identification. Experimental results show a 42% reduction in error rate when 7-digit utterances are used for testing.
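The iterative scheme described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the EM loop uses a simple diagonal-covariance GMM, the within-class scatter is taken as the responsibility-weighted scatter of each frame about its component mean, and all function names and iteration counts are hypothetical.

```python
import numpy as np

def fit_diag_gmm(X, K, n_iter=20, seed=0):
    """EM for a diagonal-covariance GMM; returns weights, means,
    variances, responsibilities, and total log-likelihood."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, K, replace=False)]
    vars_ = np.ones((K, d)) * X.var(axis=0)
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: log joint density of each frame under each component
        log_p = (-0.5 * (((X[:, None, :] - means) ** 2) / vars_
                         + np.log(2 * np.pi * vars_)).sum(-1)
                 + np.log(w))
        log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        resp = np.exp(log_p - log_norm)
        # M-step: responsibility-weighted parameter updates
        nk = resp.sum(0) + 1e-10
        w = nk / n
        means = (resp.T @ X) / nk[:, None]
        vars_ = (resp.T @ (X ** 2)) / nk[:, None] - means ** 2 + 1e-6
    return w, means, vars_, resp, log_norm.sum()

def speaker_transform(X, K=4, n_outer=5):
    """Alternate between (a) fitting a diagonal GMM on rotated features
    and (b) re-estimating an orthogonal rotation from the eigenvectors
    of the within-class scatter matrix of the current partition."""
    d = X.shape[1]
    T = np.eye(d)
    for _ in range(n_outer):
        Z = X @ T
        w, means, vars_, resp, ll = fit_diag_gmm(Z, K)
        # within-class scatter: sum_k sum_i resp_ik (z_i - m_k)(z_i - m_k)^T
        Sw = np.zeros((d, d))
        for k in range(K):
            D = Z - means[k]
            Sw += (resp[:, k, None] * D).T @ D
        # eigenvectors of the symmetric Sw form an orthogonal rotation
        _, V = np.linalg.eigh(Sw)
        T = T @ V  # compose with the previous rotation
    return T, ll
```

Because `Sw` is symmetric, `np.linalg.eigh` returns an orthogonal eigenvector matrix, so the composed transform `T` stays orthogonal across iterations; in the rotated space the within-class scatter is diagonalized, which is what motivates the diagonal-covariance assumption of the GMM.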