
GMM - log-likelihood is not monotonic

Yesterday I implemented a GMM (Gaussian Mixture Model) using the expectation-maximization algorithm. As you may recall, it models an unknown distribution as a mixture of Gaussians, whose means and covariances we need to learn, along with the weight of each Gaussian.

Here is the math behind the code (it is not that complicated): http://mccormickml.com/2014/08/04/gaussian-mixture-models-tutorial-and-matlab-code/
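
For reference, in the tutorial's notation the model being fit is the mixture density

    p(x) = \sum_{j=1}^{K} \pi_j \, \mathcal{N}(x \mid \mu_j, \Sigma_j)

and the quantity EM is guaranteed never to decrease from one iteration to the next is the data log-likelihood

    \log L = \sum_{i=1}^{m} \log \sum_{j=1}^{K} \pi_j \, \mathcal{N}(x_i \mid \mu_j, \Sigma_j)

where the \pi_j are the priors (weights), \mu_j the means, and \Sigma_j the covariances of the K Gaussians.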

Here is my code:

import numpy as np 
from scipy.stats import multivariate_normal 
import matplotlib.pyplot as plt 

#reference for this code is http://mccormickml.com/2014/08/04/gaussian-mixture-models-tutorial-and-matlab-code/ 

def expectation(data, means, covs, priors): #E-step. returns the updated probabilities
    m = data.shape[0]      #gets the data, means, covariances and priors of all clusters
    numOfClusters = priors.shape[0]

    probabilities = np.zeros((m, numOfClusters))
    for i in range(0, m):
        for j in range(0, numOfClusters):
            total = 0
            for l in range(0, numOfClusters):
                total += normalPDF(data[i, :], means[l], covs[l]) * priors[l, 0]
            probabilities[i, j] = normalPDF(data[i, :], means[j], covs[j]) * priors[j, 0]/total

    return probabilities

def maximization(data, probabilities): #M-step. this updates the means, covariances, and priors of all clusters
    m, n = data.shape
    numOfClusters = probabilities.shape[1]

    means = np.zeros((numOfClusters, n))
    covs = np.zeros((numOfClusters, n, n))
    priors = np.zeros((numOfClusters, 1))

    for i in range(0, numOfClusters):
        priors[i, 0] = np.sum(probabilities[:, i])/m #update priors

        for j in range(0, m): #update means
            means[i] += probabilities[j, i] * data[j, :]

            vec = np.reshape(data[j, :] - means[i, :], (n, 1))
            covs[i] += probabilities[j, i] * np.dot(vec, vec.T) #update covs

        means[i] /= np.sum(probabilities[:, i])
        covs[i] /= np.sum(probabilities[:, i])

    return [means, covs, priors]

def normalPDF(x, mean, covariance): #this is simply multivariate normal pdf 
    n = len(x) 

    mean = np.reshape(mean, (n,)) 
    x = np.reshape(x, (n,)) 

    var = multivariate_normal(mean=mean, cov=covariance,) 
    return var.pdf(x) 


def initClusters(numOfClusters, data): #initialize all the gaussian clusters (means, covariances, priors)
    m, n = data.shape

    means = np.zeros((numOfClusters, n))
    covs = np.zeros((numOfClusters, n, n))
    priors = np.zeros((numOfClusters, 1))

    initialCovariance = np.cov(data.T)

    for i in range(0, numOfClusters):
        means[i] = np.random.rand(n) #the initial mean for each gaussian is chosen randomly
        covs[i] = initialCovariance #the initial covariance of each cluster is the covariance of the data
        priors[i, 0] = 1.0/numOfClusters #the initial priors are uniformly distributed

    return [means, covs, priors]

def logLikelihood(data, probabilities): #probabilities[i, j] is the (soft) probability that example i belongs to cluster j
    m = data.shape[0] #num of examples

    examplesByCluster = np.zeros((m, 1))
    for i in range(0, m):
        examplesByCluster[i, 0] = np.argmax(probabilities[i, :])
    examplesByCluster = examplesByCluster.astype(int) #examplesByCluster[i] = j means that example i belongs in cluster j

    result = 0
    for i in range(0, m):
        result += np.log(probabilities[i, examplesByCluster[i, 0]]) #log of example i's largest responsibility

    return result

m = 2000 #num of training examples 
n = 8 #num of features for each example 

data = np.random.rand(m, n) 
numOfClusters = 2 #num of gaussians 
numIter = 30 #num of iterations of EM 
cost = np.zeros((numIter, 1)) 

[means, covs, priors] = initClusters(numOfClusters, data) 

for i in range(0, numIter): 
    probabilities = expectation(data, means, covs, priors) 
    [means, covs, priors] = maximization(data, probabilities) 

    cost[i, 0] = logLikelihood(data, probabilities) 

plt.plot(cost) 
plt.show() 

The problem is that the log-likelihood behaves strangely. I expected it to increase monotonically, but it does not.

For example, with 2000 examples of 8 features each and 3 Gaussian clusters, the log-likelihood looks like this (over 30 iterations):

[plot: log-likelihood over 30 iterations]

So that is clearly bad. But in other tests I ran, for example one with 15 examples, 2 features and 2 clusters, the log-likelihood looked like this:

[plot: log-likelihood for the smaller test]

Better, but still not perfect.

Why does this happen, and how can I fix it?
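
Incidentally, the monotonicity guarantee of EM applies to the full data log-likelihood given above (the log of the weighted mixture density, summed over all examples), not to the sum of each example's largest responsibility that the logLikelihood function computes. A minimal sketch of that quantity, reusing normalPDF from the code above (dataLogLikelihood is a name introduced here for illustration, not part of the original code):

def dataLogLikelihood(data, means, covs, priors): #the quantity EM is guaranteed not to decrease
    m = data.shape[0]
    numOfClusters = priors.shape[0]

    result = 0.0
    for i in range(0, m):
        p = 0.0
        for j in range(0, numOfClusters):
            p += priors[j, 0] * normalPDF(data[i, :], means[j], covs[j]) #weighted mixture density at example i
        result += np.log(p) #log of the full mixture density, not of the max responsibility
    return result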


What data are you trying to model? From the code it looks like you are modeling random points, i.e. there is no structure to be found in the data. If that is the case, your GMM fit may simply fluctuate randomly – etov


In this case it is random, but in the future it could be any kind of data, from temperatures to vehicle sensor readings, anything. I don't think it matters that the data is random. In theory we are guaranteed monotone convergence, even on random data.


Have you tried comparing your results with those of a known implementation? One option is scikit-learn's [GaussianMixture](http://scikit-learn.org/stable/modules/mixture.html).
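
A quick check along those lines might look like this (a sketch, assuming scikit-learn >= 0.18; GaussianMixture.score returns the average log-likelihood per sample):

import numpy as np
from sklearn.mixture import GaussianMixture

data = np.random.rand(2000, 8)
gm = GaussianMixture(n_components=2, max_iter=30).fit(data)
print(gm.score(data) * data.shape[0]) #total log-likelihood, comparable to the cost above
print(gm.converged_) #whether EM converged within max_iter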

Answer


The problem is in the maximization step.

The code uses means to compute covs. However, both updates are accumulated in the same loop, and means is only divided by the sum of the probabilities after the loop finishes. So each covariance term is computed from a partially accumulated, unnormalized mean, and the data[j, :] - means[i, :] residuals are far too large.

This causes the estimated covariances to blow up.

Here is a suggested fix:

def maximization(data, probabilities): #M-step. this updates the means, covariances, and priors of all clusters
    m, n = data.shape
    numOfClusters = probabilities.shape[1]

    means = np.zeros((numOfClusters, n))
    covs = np.zeros((numOfClusters, n, n))
    priors = np.zeros((numOfClusters, 1))

    for i in range(0, numOfClusters):
        priors[i, 0] = np.sum(probabilities[:, i])/m #update priors

        for j in range(0, m): #update means
            means[i] += probabilities[j, i] * data[j, :]

        means[i] /= np.sum(probabilities[:, i])

    for i in range(0, numOfClusters):
        for j in range(0, m): #update covariances, now that the means are final
            vec = np.reshape(data[j, :] - means[i, :], (n, 1))
            covs[i] += probabilities[j, i] * np.multiply(vec, vec.T)

        covs[i] /= np.sum(probabilities[:, i])

    return [means, covs, priors]

And the resulting cost function (200 data points, 4 features): [cost function plot]

EDIT: I was convinced this bug was the only problem with the code, but running some additional examples I still sometimes see non-monotonic behavior (though less erratic than before). So this seems to be only part of the problem.

EDIT2: There was another problem in the covariance computation: the vector multiplication should be element-wise, not a dot product - remember that the result should be a vector. Now the results seem to be monotonically increasing.