While analyzing the computational complexity of my algorithm, I have two nested for loops. When the inner loop's size increases 1000-fold, the running time only increases about 100-fold. Why?
short i = 0;
short j = 0;
short ii = 0;
short[] counts = new short[2];
int label_size = MapRedUtils.label_size;
int max_index = 0;
int sample_size = data.getSampleSize();
float max_ig = 0;
float totalVar = 0;
float var = 0;
float cov_ig;
float[] tocopy = new float[label_size];
float[][] sums = new float[2][];
float[] totalSum = new float[label_size];
byte value;
ByteBuffer label = ByteBuffer.allocate(label_size * 4);
for (j = 0; j < 2; j++)
    sums[j] = new float[label_size];
for (ii = 0; ii < 3; ii++) // 3 ways of splitting the current node
{
    long startTime;
    long endTime;
    long totalTime;
    startTime = System.currentTimeMillis();
    counts[0] = 0;
    counts[1] = 0;
    // reset the accumulators to zero before each split
    System.arraycopy(tocopy, 0, totalSum, 0, label_size);
    System.arraycopy(tocopy, 0, sums[0], 0, label_size);
    System.arraycopy(tocopy, 0, sums[1], 0, label_size);
    for (i = 0; i < sample_size; i++)
    {
        OneSample instance = data.get(i);
        value = (byte) instance.getTheGenoytpe(snpid);
        label = instance.getPhenotype();
        if (value == ii)
        {
            counts[0]++;
            for (j = 0; j < label_size; j++)
                sums[0][j] += label.getFloat(j * 4);
        }
        else
        {
            counts[1]++;
            for (j = 0; j < label_size; j++)
                sums[1][j] += label.getFloat(j * 4);
        }
        for (j = 0; j < label_size; j++)
            totalSum[j] += label.getFloat(j * 4);
    }
    totalVar = 0;
    var = 0;
    for (i = 0; i < label_size; i++)
    {
        totalVar += totalSum[i] * totalSum[i];
    }
    totalVar = totalVar / sample_size; // averaged by sample size
    for (j = 0; j < 2; j++)
        //var += sumSquared(sums[j], MapRedUtils.inverse_covariance, counts[j]);
        var += sumSquared(sums[j], counts[j]);
    cov_ig = var - totalVar;
    if (cov_ig > max_ig)
    {
        max_ig = cov_ig;
        max_index = ii;
    }
    endTime = System.currentTimeMillis();
    totalTime = endTime - startTime;
    System.out.println(totalTime);
}
I varied label_size from label_size = 1 to label_size = 1000. I expected the running time to increase 1000-fold, but it actually only increased by a factor of 40-100 across different runs. Why is that?
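One plausible explanation (a sketch under assumed costs, not a measurement of the code above): each pass through the sample loop pays a fixed cost that does not depend on label_size (fetching the sample, decoding the genotype, loop bookkeeping) on top of the per-label work, so total time behaves roughly like t(label_size) ≈ a + b·label_size, and the ratio t(1000)/t(1) stays well below 1000. JIT warm-up inside the timed region can shrink the ratio further. The constants below are arbitrary and purely illustrative:

```java
public class ScalingSketch {
    // Hypothetical per-sample cost model: a fixed part (sample fetch,
    // genotype decode, branching) plus a part proportional to label_size.
    static double modelTime(double fixed, double perLabel, int labelSize) {
        return fixed + perLabel * labelSize;
    }

    public static void main(String[] args) {
        double fixed = 10.0;   // assumed fixed cost per sample (arbitrary units)
        double perLabel = 1.0; // assumed cost per label element (arbitrary units)
        double ratio = modelTime(fixed, perLabel, 1000)
                     / modelTime(fixed, perLabel, 1);
        // (10 + 1000) / (10 + 1), far below the naive 1000x.
        System.out.printf("%.1f%n", ratio);
    }
}
```

With a fixed cost ten times the per-label cost, the model predicts a ratio of about 92, which sits inside the observed 40-100x range; the exact ratio depends entirely on the assumed constants.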
What language is this? (The sample pseudocode has no nested loops in any language I know of.) – Mat
In theory, theory and practice are the same. Can you show some *real* code? – dasblinkenlight
I'm using Java, thanks. I have now posted the real code. – user974270