2013-12-16 119 views

Alternative way to compute the mean of a 4D matrix

I need some advice on some AAM (Active Appearance Model) code that I am trying to understand. The run cannot complete because MATLAB reports that it is out of memory:

Error using zeros
Out of memory. Type HELP MEMORY for your options.

The code that causes the error is:

Error in AAM_MakeSearchModel2D (line 6) 
drdp=zeros(size(ShapeAppearanceData.Evectors,2)+4,6,length(TrainingData),length(AppearanceData.g_mean)); 

The actual sizes used for drdp are:

drdp=zeros(13,6,10,468249); 
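
For scale, a quick back-of-the-envelope check (not in the original question) shows why this allocation fails on 32-bit MATLAB, since doubles take 8 bytes each:

```matlab
% Memory needed for the full 4D double array
bytes = 13 * 6 * 10 * 468249 * 8;   % 8 bytes per double element
gib = bytes / 2^30                   % about 2.7 GiB
```

A 32-bit process has at most 2-3 GB of address space in total, and MATLAB additionally needs one contiguous block for the array, so the `zeros` call cannot succeed.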

Because the fourth dimension is so large, it is understandable that the 32-bit MATLAB I am using runs out of memory. The output the code eventually produces is 2-D. Here is the code that later uses drdp:

drdpt=squeeze(mean(mean(drdp,3),2)); 
R=pinv(drdpt)'; 

The question I want to ask is whether it is possible to split the 4D matrix into smaller pieces (e.g. 2D or 3D) and perform ordinary additions and divisions to obtain the mean. If so, how would one do that?
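
It is possible in principle: `mean(mean(drdp,3),2)` is just a sum of 2D slices divided by a count, so the same result can be accumulated slice by slice without ever holding the 4D array. A small toy check of that identity (the sizes here are made up except the first dimension):

```matlab
% mean over dims 3 and 2 of a 4D array == running sum of 2D slices / count
A   = rand(13, 6, 10, 50);              % small stand-in for drdp
ref = squeeze(mean(mean(A, 3), 2));     % the original computation, 13-by-50

acc = zeros(13, 50);                    % only a 2D accumulator is needed
for i = 1:10
    for k = 1:6
        acc = acc + squeeze(A(:, k, i, :));
    end
end
chunked = acc / (6 * 10);               % divide once by the number of slices

max(abs(chunked(:) - ref(:)))           % agrees up to floating-point rounding
```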

Edit 17/12/2013

Since the 4D drdp is used to store all the weighted errors of the model versus the real intensities, and the whole computation after the initialization writes real values into drdp, I cannot use sparse. I have copied the part of the AAM function that computes drdp:

function R=AAM_MakeSearchModel2D(ShapeAppearanceData,ShapeData,AppearanceData,TrainingData,options) 


% Structure which will contain all weighted errors of model versus real 
% intensities, by several offsets of the parameters 
drdp=zeros(size(ShapeAppearanceData.Evectors,2)+4,6,length(TrainingData),length(AppearanceData.g_mean)); 

% We use the training-data images to train the model, because we want
% the background information to be included

% Loop through all training images 
for i=1:length(TrainingData)
    % Loop through all model parameters, both the PCA parameters and the
    % pose parameters
    for j = 1:size(ShapeAppearanceData.Evectors,2)+4 
     if(j<=size(ShapeAppearanceData.Evectors,2)) 
      % Model parameters, offsets 
      de = [-0.5 -0.3 -0.1 0.1 0.3 0.5]; 

      % First we calculate the real ShapeAppearance parameters of the 
      % training data set 
      c = ShapeAppearanceData.Evectors'*(ShapeAppearanceData.b(:,i) -ShapeAppearanceData.b_mean); 

      % Standard deviation from the eigenvalue
      c_std = sqrt(ShapeAppearanceData.Evalues(j)); 
      for k=1:length(de) 
       % Offset the ShapeAppearance parameters with a certain 
       % value times the std of the eigenvector 
       c_offset=c; 
       c_offset(j)=c_offset(j)+c_std *de(k); 

       % Transform back from ShapeAppearance parameters to Shape parameters 
       b_offset = ShapeAppearanceData.b_mean + ShapeAppearanceData.Evectors*c_offset; 
       b1_offset = b_offset(1:(length(ShapeAppearanceData.Ws))); 
       b1_offset= inv(ShapeAppearanceData.Ws)*b1_offset; 
       x = ShapeData.x_mean + ShapeData.Evectors*b1_offset; 
       pos(:,1)=x(1:end/2); 
       pos(:,2)=x(end/2+1:end); 



       % Transform the Shape back to real image coordinates 
       pos=AAM_align_data_inverse2D(pos,TrainingData(i).tform); 

       % Get the intensities in the real image. Use those 
       % intensities to get ShapeAppearance parameters, which 
       % are then used to get model intensities 
       [g, g_offset]=RealAndModel(TrainingData,i,pos, AppearanceData,ShapeAppearanceData,options,ShapeData); 

       % A weighted sum of differences between model and real
       % intensities gives the "intensity/offset" ratio
       w = exp ((-de(k)^2)/(2*c_std^2))/de(k); 
       drdp(j,k,i,:)=(g-g_offset)*w; 
      end 
     else 
      % Pose parameters offsets 
      j2=j-size(ShapeAppearanceData.Evectors,2); 
      switch(j2) 
       case 1 % Translation x 
        de = [-2 -1.2 -0.4 0.4 1.2 2]/2; 
       case 2 % Translation y 
        de = [-2 -1.2 -0.4 0.4 1.2 2]/2; 
       case 3 % Scaling & Rotation Sx 
        de = [-0.2 -.12 -0.04 0.04 0.12 0.2]/2; 
       case 4 % Scaling & Rotation Sy 
        de = [-0.2 -.12 -0.04 0.04 0.12 0.2]/2; 
      end 

      for k=1:length(de) 
       tform=TrainingData(i).tform; 
       switch(j2) 
        case 1 % Translation x 
         tform.offsetv(1)=tform.offsetv(1)+de(k); 
        case 2 % Translation y 
         tform.offsetv(2)=tform.offsetv(2)+de(k); 
        case 3 % Scaling & Rotation Sx 
         tform.offsetsx=tform.offsetsx+de(k); 
        case 4 % Scaling & Rotation Sy 
         tform.offsetsy=tform.offsetsy+de(k); 
       end 

       % From Shape to real image coordinates, with a certain
       % pose offset
       pos=AAM_align_data_inverse2D(TrainingData(i).CVertices, tform); 

       % Get the intensities in the real image. Use those 
       % intensities to get ShapeAppearance parameters, which 
       % are then used to get model intensities 
       [g, g_offset]=RealAndModel(TrainingData,i,pos, AppearanceData,ShapeAppearanceData,options,ShapeData); 

       % A weighted sum of differences between model and real
       % intensities gives the "intensity/offset" ratio
       w =exp ((-de(k)^2)/(2*2^2))/de(k); 
       drdp(j,k,i,:)=(g-g_offset)*w; 
      end 
     end 
    end 
end 

% Combine the data to the intensity/parameter matrix, 
% using a pseudo inverse 
% for i=1:length(TrainingData); 
%  drdpt=squeeze(mean(drdp(:,:,i,:),2)); 
%  R(:,:,i) = (drdpt * drdpt')\drdpt; 
% end 
% % Combine the data intensity/parameter matrix of all training datasets. 
% % 
% % In case of only a few images, it will be better to use a weighted mean 
% % instead of the normal mean, depending on the probability of the trainingset 
% R=mean(R,3);  

drdpt=squeeze(mean(mean(drdp,3),2)); 
R=pinv(drdpt)'; 
%R = (drdpt * drdpt')\drdpt; 

As you can see at the end of the function, the 4D drdp is squeezed and then averaged into a 2D matrix that is stored in R. Because of the 'out of memory' problem, the function cannot even initialize drdp, since it takes so much space (drdp=zeros(13,6,10,468249)). Can I store the data in 2D or 3D form (splitting drdp into parts), and then perform simple additions and divisions to obtain the mean and finally R?
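
One way this could look: since every element of drdp is written exactly once as `(g-g_offset)*w`, the two inner mean calls can be folded into the existing loops. This is a sketch, not the toolbox's code; `drdp_sum`, `nP` and `ng` are names introduced here, and it assumes `g` and `g_offset` are column vectors of length 468249:

```matlab
% Before the loops: a 2D accumulator replaces the 4D array
nP = size(ShapeAppearanceData.Evectors, 2) + 4;   % 13 in this example
ng = length(AppearanceData.g_mean);               % 468249 in this example
drdp_sum = zeros(nP, ng);                         % 13-by-468249 doubles only

% Inside both k-loops, replace  drdp(j,k,i,:)=(g-g_offset)*w;  with:
drdp_sum(j, :) = drdp_sum(j, :) + ((g - g_offset) * w).';

% After the loops: the mean over the 6 offsets and all training images
drdpt = drdp_sum / (6 * length(TrainingData));
R = pinv(drdpt)';
```

This needs only one 13-by-468249 array (roughly 46 MB) instead of the 2.7 GB 4D array, and yields the same drdpt as `squeeze(mean(mean(drdp,3),2))` up to floating-point rounding.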

Thanks, and sorry for the long question.

Answer


I think you want to use some sparse representation, if many elements of drdp remain zero.
MATLAB's sparse command can only create 2D matrices; still, something like this might work?

http://www.mathworks.com/matlabcentral/newsreader/view_thread/167669
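
The trick in that thread is to fold the trailing dimensions of an N-D array into the column index of a 2D sparse matrix; a minimal sketch of the idea (the subscripts and value here are made up):

```matlab
% Emulate a sparse 13-by-6-by-10-by-468249 array with a 2D sparse matrix
S = sparse(13, 6 * 10 * 468249);          % rows = dim 1, columns = dims 2:4 folded

j = 2; k = 3; i = 5; m = 100;             % example 4D subscripts
col = sub2ind([6 10 468249], k, i, m);    % fold (k, i, m) into one column index
S(j, col) = 0.25;                         % read/write as a normal 2D sparse matrix
```

As the 17/12/2013 edit notes, though, drdp here is dense, so the sparse route may save nothing in this particular case.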

Once you get that working, you should not have to worry about computing the mean -
apart from a little bookkeeping, it should be doable.