
My code implements an active learning algorithm using L-BFGS optimization. I want to optimize four parameters: alpha, beta, W, and gamma, but Python's scipy.optimize.fmin_l_bfgs_b raises an error.

However, when I run the code below, I get the following error:

optimLogitLBFGS = sp.optimize.fmin_l_bfgs_b(func, x0 = x0, args = (X,Y,Z), fprime = func_grad)           
    File "C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py", line 188, in fmin_l_bfgs_b 
    **opts) 
    File "C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py", line 311, in _minimize_lbfgsb 
    isave, dsave) 
    _lbfgsb.error: failed in converting 7th argument `g' of _lbfgsb.setulb to C/Fortran array 
    0-th dimension must be fixed to 22 but got 4 

My code is:

# -*- coding: utf-8 -*- 
import numpy as np 
import scipy as sp 
import scipy.stats as sps 

num_labeler = 3 
num_instance = 5 

X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]]) 
Z = np.array([1,0,1,0,1]) 
Y = np.array([[1,0,1],[0,1,0],[0,0,0],[1,1,1],[1,0,0]]) 

W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]]) 
gamma = np.array([1,1,1,1,1]) 
alpha = np.array([1,1,1,1]) 
beta = 1 
para = np.array([1,1,1,1,1,1,1,1,1,2,2,2,2,3,3,3,3,1,1,1,1,1]) 

def get_params(para): 
    # extract parameters from 1D parameter vector 
    assert len(para) == 22 
    alpha = para[0:4] 
    beta = para[4] 
    W = para[5:17].reshape(3, 4) 
    gamma = para[17:] 
    return alpha, beta, gamma, W 

def log_p_y_xz(yit,zi,sigmati): #log P(y_it|x_i,z_i) 
    return np.log(sps.norm(zi,sigmati).pdf(yit))#tested 

def log_p_z_x(alpha,beta,xi): #log P(z_i=1|x_i) 
    return -np.log(1+np.exp(-np.dot(alpha,xi)-beta))#tested 

def sigma_eta_ti(xi, w_t, gamma_t): # 1+exp(-w_t x_i -gamma_t)^-1 
    return 1/(1+np.exp(-np.dot(xi,w_t)-gamma_t)) #tested 

def df_alpha(X,Y,Z,W,alpha,beta,gamma):#df/dalpha 
    return np.sum((2/(1+np.exp(-np.dot(alpha,X[i])-beta))-1)*np.exp(-np.dot(alpha,X[i])-beta)*X[i]/(1+np.exp(-np.dot(alpha,X[i])-beta))**2 for i in range (num_instance)) 
    #tested 
def df_beta(X,Y,Z,W,alpha,beta,gamma):#df/dbelta 
    return np.sum((2/(1+np.exp(-np.dot(alpha,X[i])-beta))-1)*np.exp(-np.dot(alpha,X[i])-beta)/(1+np.exp(-np.dot(alpha,X[i])-beta))**2 for i in range (num_instance)) 

def df_w(X,Y,Z,W,alpha,beta,gamma):#df/sigma * sigma/dw 
    return np.sum(np.sum((-3)*(Y[i][t]**2-(-np.log(1+np.exp(-np.dot(alpha,X[i])-beta)))*(2*Y[i][t]-1))*(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**4)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))*X[i]+(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**2)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))*X[i]for t in range(num_labeler)) for i in range (num_instance)) 

def df_gamma(X,Y,Z,W,alpha,beta,gamma):#df/sigma * sigma/dgamma 
    return np.sum(np.sum((-3)*(Y[i][t]**2-(-np.log(1+np.exp(-np.dot(alpha,X[i])-beta)))*(2*Y[i][t]-1))*(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**4)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))+(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**2)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))for t in range(num_labeler)) for i in range (num_instance)) 

def func(para, *args): 
    alpha, beta, gamma, W = get_params(para) 
    #args 
    X = args [0] 
    Y = args[1] 
    Z = args[2]   
    return np.sum(np.sum(log_p_y_xz(Y[i][t], Z[i], sigma_eta_ti(X[i],W[t],gamma[t]))+log_p_z_x(alpha, beta, X[i]) for t in range(num_labeler)) for i in range (num_instance)) 
    #tested 

def func_grad(para, *args): 
    alpha, beta, gamma, W = get_params(para) 
    #args 
    X = args [0] 
    Y = args[1] 
    Z = args[2] 
    #gradiants 
    d_f_a = df_alpha(X,Y,Z,W,alpha,beta,gamma) 
    d_f_b = df_beta(X,Y,Z,W,alpha,beta,gamma) 
    d_f_w = df_w(X,Y,Z,W,alpha,beta,gamma) 
    d_f_g = df_gamma(X,Y,Z,W,alpha,beta,gamma) 
    return np.array([d_f_a, d_f_b,d_f_w,d_f_g]) 

x0 = np.concatenate([np.ravel(alpha), np.ravel(beta), np.ravel(W), np.ravel(gamma)]) 

optimLogitLBFGS = sp.optimize.fmin_l_bfgs_b(func, x0 = x0, args = (X,Y,Z), fprime = func_grad) 

I don't know what the problem is. Maybe func_grad is causing it? Could anyone take a look? Thanks.


[Related question](http://stackoverflow.com/questions/33383895/to-optimize-four-parameters-in-python-scipy-optimize-fmin-l-bfgs-b-with-an-erro/) – jakevdp

Answer


Since func_grad should return a single 1D array of the same length as x0 (i.e. 22), it needs to contain the derivative of func with respect to each element of your concatenated array of alpha, beta, W, gamma parameters. Instead, it returns a jumbled mix of two arrays and two scalar floats nested inside an np.object array:

In [1]: func_grad(x0, X, Y, Z) 
Out[1]: 
array([array([ 0.00681272, 0.00681272, 0.00681272, 0.00681272]), 
     0.006684719133999417, 
     array([-0.01351227, -0.01351227, -0.01351227, -0.01351227]), 
     -0.013639910534587798], dtype=object) 

Part of the problem is that np.array([d_f_a, d_f_b, d_f_w, d_f_g]) does not concatenate these objects into a single 1D array, because some of them are numpy arrays and some are Python floats. That part is easy to fix: use np.hstack([d_f_a, d_f_b, d_f_w, d_f_g]) instead.
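
For illustration, with arbitrary stand-in values for the four gradient pieces, np.hstack flattens everything into the single float vector the optimizer expects:

import numpy as np 

# Toy stand-ins for the four gradient pieces (values are arbitrary). 
d_f_a = np.array([0.1, 0.1, 0.1, 0.1])   # (4,) array 
d_f_b = 0.5                              # Python float 
d_f_w = np.array([0.2, 0.2, 0.2, 0.2])   # (4,) array 
d_f_g = 0.3                              # Python float 

g = np.hstack([d_f_a, d_f_b, d_f_w, d_f_g]) 
print(g.shape)   # (10,) -- a flat 1D float vector, not an object array 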

However, the combined size of those objects is still only 10, whereas the output of func_grad needs to be a vector of length 22. You will need to take another look at your df_* functions. In particular, W is a (3, 4) array but df_w returns only a (4,) vector, and gamma is a (5,) vector whereas df_gamma returns only a scalar.
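
Here is a minimal sketch of the shape contract fmin_l_bfgs_b expects from fprime, assuming each df_* is reworked to return a gradient with the same shape as its parameter block in get_params (the zeros below are placeholders, not the real derivatives):

import numpy as np 

def func_grad_shapes(para, *args): 
    # Placeholder gradients, one block per parameter group in get_params. 
    d_f_a = np.zeros(4)        # d(func)/d(alpha), shape (4,) 
    d_f_b = 0.0                # d(func)/d(beta), scalar 
    d_f_w = np.zeros((3, 4))   # d(func)/d(W), shape (3, 4) 
    d_f_g = np.zeros(5)        # d(func)/d(gamma), shape (5,) 
    # Flatten and concatenate into a single (22,) vector matching len(x0). 
    return np.hstack([d_f_a, d_f_b, d_f_w.ravel(), d_f_g]) 

print(func_grad_shapes(np.ones(22)).shape)   # (22,) 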


Thank you very much for your answer. I checked the formulas of the algorithm in the original paper, and 'df_gamma', for example, is a scalar there. So I don't know how the authors of that paper managed to code the algorithm successfully. Another thing worth noting: when I change the last line of my code to 'optimBFGS = sp.optimize.minimize(func, x0 = x0, args = (X,Y,Z))' (without func_grad), I do get a result. That is a bit strange. – flyingmouse


That is to be expected. If you don't pass a gradient function to 'minimize', it will try to approximate the gradient using first-order finite differences. This is usually less efficient and less numerically stable, but it can still get you to an answer. –
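
For example, scipy.optimize.approx_fprime exposes that finite-difference estimate directly, and scipy.optimize.check_grad compares it against an analytic gradient such as func_grad; the toy objective below is only a stand-in for func:

import numpy as np 
from scipy.optimize import approx_fprime, check_grad 

def f(x):                # toy objective 
    return np.sum(x ** 2) 

def grad_f(x):           # its analytic gradient 
    return 2 * x 

x0 = np.ones(4) 
print(approx_fprime(x0, f, 1.5e-8))   # finite-difference gradient, close to [2 2 2 2] 
print(check_grad(f, grad_f, x0))      # near zero when grad_f is consistent with f 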


Either 'gamma' is a '(5,)' vector, or 'df_gamma' is a scalar; they can't both make sense. –