Unexpected standard errors with weighted least squares in Python pandas

Looking at the code for the main OLS class in Python pandas, I'm seeking help to clarify the conventions used for reporting standard errors and t-stats when weighted OLS is performed.
Here is my example data set, with some imports to use pandas and to use scikits.statsmodels WLS directly:
import pandas
import numpy as np
from statsmodels.regression.linear_model import WLS
# Make some random data.
np.random.seed(42)
df = pandas.DataFrame(np.random.randn(10, 3), columns=['a', 'b', 'weights'])
# Add an intercept term for direct use in WLS
df['intercept'] = 1
# Add a number (I picked 10) to stabilize the weight proportions a little.
df['weights'] = df.weights + 10
# Fit the regression models.
pd_wls = pandas.ols(y=df.a, x=df.b, weights=df.weights)
sm_wls = WLS(df.a, df[['intercept','b']], weights=df.weights).fit()
I executed this with %cpaste in IPython, and then printed summaries of both regressions:
In [226]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D.
:import pandas
:import numpy as np
:from statsmodels.regression.linear_model import WLS
:
:# Make some random data.
:np.random.seed(42)
:df = pandas.DataFrame(np.random.randn(10, 3), columns=['a', 'b', 'weights'])
:
:# Add an intercept term for direct use in WLS
:df['intercept'] = 1
:
:# Add a number (I picked 10) to stabilize the weight proportions a little.
:df['weights'] = df.weights + 10
:
:# Fit the regression models.
:pd_wls = pandas.ols(y=df.a, x=df.b, weights=df.weights)
:sm_wls = WLS(df.a, df[['intercept','b']], weights=df.weights).fit()
:--
In [227]: pd_wls
Out[227]:
-------------------------Summary of Regression Analysis-------------------------
Formula: Y ~ <x> + <intercept>
Number of Observations: 10
Number of Degrees of Freedom: 2
R-squared: 0.2685
Adj R-squared: 0.1770
Rmse: 2.4125
F-stat (1, 8): 2.9361, p-value: 0.1250
Degrees of Freedom: model 1, resid 8
-----------------------Summary of Estimated Coefficients------------------------
Variable Coef Std Err t-stat p-value CI 2.5% CI 97.5%
--------------------------------------------------------------------------------
x 0.5768 1.0191 0.57 0.5869 -1.4206 2.5742
intercept 0.5227 0.9079 0.58 0.5806 -1.2567 2.3021
---------------------------------End of Summary---------------------------------
In [228]: sm_wls.summary()
Out[228]:
<class 'statsmodels.iolib.summary.Summary'>
"""
WLS Regression Results
==============================================================================
Dep. Variable: a R-squared: 0.268
Model: WLS Adj. R-squared: 0.177
Method: Least Squares F-statistic: 2.936
Date: Wed, 17 Jul 2013 Prob (F-statistic): 0.125
Time: 15:14:04 Log-Likelihood: -10.560
No. Observations: 10 AIC: 25.12
Df Residuals: 8 BIC: 25.72
Df Model: 1
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
intercept 0.5227 0.295 1.770 0.115 -0.158 1.204
b 0.5768 0.333 1.730 0.122 -0.192 1.346
==============================================================================
Omnibus: 0.967 Durbin-Watson: 1.082
Prob(Omnibus): 0.617 Jarque-Bera (JB): 0.622
Skew: 0.003 Prob(JB): 0.733
Kurtosis: 1.778 Cond. No. 1.90
==============================================================================
"""
Notice the mismatched standard errors: pandas claims the standard errors are [0.9079, 1.0191], while statsmodels says [0.295, 0.333].
Going back to the code I linked at the top of the post, I tried to trace where the mismatch comes from.
First, you can see that the standard errors are reported by this function:
def _std_err_raw(self):
"""Returns the raw standard err values."""
return np.sqrt(np.diag(self._var_beta_raw))
So, looking at self._var_beta_raw, I found:
def _var_beta_raw(self):
"""
Returns the raw covariance of beta.
"""
x = self._x.values
y = self._y.values
xx = np.dot(x.T, x)
if self._nw_lags is None:
return math.inv(xx) * (self._rmse_raw ** 2)
else:
resid = y - np.dot(x, self._beta_raw)
m = (x.T * resid).T
xeps = math.newey_west(m, self._nw_lags, self._nobs, self._df_raw,
self._nw_overlap)
xx_inv = math.inv(xx)
return np.dot(xx_inv, np.dot(xeps, xx_inv))
In my use case, self._nw_lags will always be None, so that's the first confusing part. Since xx is just the standard product of the regressor matrix, x.T.dot(x), I wondered how the weights could affect it. The term self._rmse_raw comes directly from the statsmodels regression fitted in the OLS constructor, so it almost certainly incorporates the weights.
This prompts these questions:
- Why are the weights applied to the standard errors through the RMSE part, but not to the regressor variables?
- Is this standard practice? And if you want the "untransformed" variables, wouldn't you then also want the untransformed RMSE? Is there a way to get pandas to report the fully weighted version of the standard errors?
- Why all the misdirection? The constructor computes a full statsmodels fitted regression. Why wouldn't every summary statistic come directly from there? Why mix and match, so that some come from the statsmodels output and some from pandas's home-rolled calculations?
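To make the two conventions concrete, here is a minimal NumPy-only sketch (reusing the same seed; the variable names are my own) of both computations: the textbook WLS covariance sigma^2 * (X'WX)^-1, which reproduces the statsmodels numbers, and the hybrid weighted-RMSE / unweighted-(X'X)^-1 version, which reproduces the pandas numbers:

```python
import numpy as np

np.random.seed(42)
data = np.random.randn(10, 3)
a, b = data[:, 0], data[:, 1]
w = data[:, 2] + 10

# Design matrix with the intercept column first.
X = np.column_stack([np.ones(10), b])

# WLS coefficients: beta = (X'WX)^-1 X'Wy
xtwx = X.T @ (w[:, None] * X)
beta = np.linalg.solve(xtwx, X.T @ (w * a))

# Weighted residual variance: sigma^2 = sum(w * resid^2) / (n - p)
resid = a - X @ beta
sigma2 = (w * resid ** 2).sum() / (10 - 2)

# Textbook WLS covariance, sigma^2 * (X'WX)^-1 -- matches statsmodels
se_weighted = np.sqrt(np.diag(sigma2 * np.linalg.inv(xtwx)))

# Hybrid: weighted sigma^2 but unweighted (X'X)^-1 -- matches pandas.ols
se_hybrid = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

print(se_weighted)  # approx [0.295, 0.333]
print(se_hybrid)    # approx [0.908, 1.019]
```

Both sets of reported numbers drop out of the same sigma^2; the only difference is whether the weights enter the (X'X) term.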
It looks like I can reconcile the pandas output by doing:
In [238]: xs = df[['intercept', 'b']]
In [239]: trans_xs = xs.values * np.sqrt(df.weights.values[:,None])
In [240]: trans_xs
Out[240]:
array([[ 3.26307961, -0.45116742],
[ 3.12503809, -0.73173821],
[ 3.08715494, 2.36918991],
[ 3.08776136, -1.43092325],
[ 2.87664425, -5.50382662],
[ 3.21158019, -3.25278836],
[ 3.38609639, -4.78219647],
[ 2.92835309, 0.19774643],
[ 2.97472796, 0.32996453],
[ 3.1158155 , -1.87147934]])
In [241]: np.sqrt(np.diag(np.linalg.inv(trans_xs.T.dot(trans_xs)) * (pd_wls._rmse_raw ** 2)))
Out[241]: array([ 0.29525952, 0.33344823])
I'm just very confused by this relationship. Is this what's common among statisticians: involving the weights in the RMSE part, but then choosing whether or not to also weight the variables when computing the coefficients' standard errors? If that's the case, why wouldn't the coefficients themselves also differ between pandas and statsmodels, since those are similarly derived from variables first transformed by statsmodels?
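For what it's worth, the coefficients agree between the two libraries because plain OLS on the sqrt(weight)-transformed variables is algebraically identical to solving the weighted normal equations. A small sketch (same seed as my example; names are my own):

```python
import numpy as np

np.random.seed(42)
data = np.random.randn(10, 3)
a, b = data[:, 0], data[:, 1]
w = data[:, 2] + 10

X = np.column_stack([np.ones(10), b])
sw = np.sqrt(w)

# Plain OLS on the sqrt(weight)-scaled variables...
beta_transformed, *_ = np.linalg.lstsq(X * sw[:, None], a * sw, rcond=None)

# ...equals the solution of the weighted normal equations (X'WX) b = X'Wy.
beta_weighted = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * a))

print(beta_transformed)  # approx [0.5227, 0.5768] either way
```

So the coefficient path is insensitive to which convention is used; only the covariance path differs.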
For reference, here's the full data set used in my toy example (in case np.random.seed isn't sufficient to make it reproducible):
In [242]: df
Out[242]:
a b weights intercept
0 0.496714 -0.138264 10.647689 1
1 1.523030 -0.234153 9.765863 1
2 1.579213 0.767435 9.530526 1
3 0.542560 -0.463418 9.534270 1
4 0.241962 -1.913280 8.275082 1
5 -0.562288 -1.012831 10.314247 1
6 -0.908024 -1.412304 11.465649 1
7 -0.225776 0.067528 8.575252 1
8 -0.544383 0.110923 8.849006 1
9 0.375698 -0.600639 9.708306 1
Did some research; for what it's worth, R agrees with statsmodels' numbers. lm(formula = a ~ 1 + b, data = df, weights = weights) gives 0.2953 and 0.3334 – TomAugspurger