2016-09-24 94 views

I want to calculate the Euclidean distances between the rows of a data frame with 30,000 observations. A simple way to do this is the dist function (e.g., dist(data)). However, since my data frame is large, this takes too much time. I need to calculate the Euclidean distances in a faster way.

Some rows contain missing values. I do not need the distances between two rows that both contain missing values, nor between two rows that are both complete; I only need the distance between each row with missing values and each complete row. I tried to exclude the combinations I do not need. Unfortunately, my solution takes even more time:

# Some example data 
data <- data.frame(
    x1 = c(1, 22, NA, NA, 15, 7, 10, 8, NA, 5), 
    x2 = c(11, 2, 7, 15, 1, 17, 11, 18, 5, 5), 
    x3 = c(21, 5, 6, NA, 10, 22, 12, 2, 12, 3), 
    x4 = c(13, NA, NA, 20, 12, 5, 1, 8, 7, 14) 
) 


# Measure speed of dist() function 
start_time_dist <- Sys.time() 

# Calculate euclidean distance with dist() function for complete dataset 
dist_results <- dist(data) 

end_time_dist <- Sys.time() 
time_taken_dist <- end_time_dist - start_time_dist 


# Measure speed of my own loop 
start_time_own <- Sys.time() 

# Calculate euclidean distance with my own loop only for specific cases 

# # # 
# The following code should be faster! 
# # # 

data_cc <- data[complete.cases(data), ] 
data_miss <- data[complete.cases(data) == FALSE, ] 

distance_list <- list() 

for(i in 1:nrow(data_miss)) { 

    distances <- numeric() 
    for(j in 1:nrow(data_cc)) { 
        distances <- c(distances, dist(rbind(data_miss[i, ], data_cc[j, ]), method = "euclidean")) 
    } 

    distance_list[[i]] <- distances 
} 

end_time_own <- Sys.time() 
time_taken_own <- end_time_own - start_time_own 


# Compare speed of both calculations 
time_taken_dist # 0.002001047 secs 
time_taken_own # 0.01562881 secs 
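For reference, the inner loop over data_cc can be replaced by matrix algebra, which removes the per-pair rbind() and dist() calls that dominate the loop's run time. This is a sketch, not code from the thread; it repeats the example data so it is self-contained, and it reproduces dist()'s documented NA handling (partial sums are scaled up by ncol / number of columns used):

```r
# Example data and the complete/incomplete split from the question
data <- data.frame(
    x1 = c(1, 22, NA, NA, 15, 7, 10, 8, NA, 5),
    x2 = c(11, 2, 7, 15, 1, 17, 11, 18, 5, 5),
    x3 = c(21, 5, 6, NA, 10, 22, 12, 2, 12, 3),
    x4 = c(13, NA, NA, 20, 12, 5, 1, 8, 7, 14)
)
cc   <- as.matrix(data[complete.cases(data), ])
miss <- as.matrix(data[!complete.cases(data), ])

# One vectorized pass per incomplete row instead of a loop over all
# complete rows: subtract the row from every complete row at once.
distance_list2 <- lapply(seq_len(nrow(miss)), function(i) {
    obs <- !is.na(miss[i, ])                          # observed columns of row i
    d2  <- colSums((t(cc[, obs, drop = FALSE]) - miss[i, obs])^2)
    # dist() scales partial sums up by ncol / number of columns used
    sqrt(d2 * ncol(miss) / sum(obs))
})
```

Each element of distance_list2 should match the corresponding distances vector produced by the loop above.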

Is there a faster way to calculate the Euclidean distances I need? Thank you very much!


dist is implemented in C, so of course it is faster than an R for loop. You should implement your loop in Rcpp. – Roland


Thanks for the tip! I will try to figure out how it works. – JSP
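Following Roland's comment, the pairwise loop could be moved to C++ with Rcpp. The sketch below is an assumption of what such a function might look like (the name cross_dist and the NA rescaling that mirrors dist() are mine, not from the thread):

```r
library(Rcpp)

# cross_dist(): Euclidean distances between every row of `miss` and
# every row of `cc`, skipping NA columns in `miss` and rescaling the
# partial sum by ncol / columns used, as dist() does.
cppFunction('
NumericMatrix cross_dist(NumericMatrix miss, NumericMatrix cc) {
    int n = miss.nrow(), m = cc.nrow(), p = miss.ncol();
    NumericMatrix out(n, m);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < m; ++j) {
            double s = 0.0; int used = 0;
            for (int k = 0; k < p; ++k) {
                if (!R_IsNA(miss(i, k))) {       // skip missing columns
                    double d = miss(i, k) - cc(j, k);
                    s += d * d; ++used;
                }
            }
            out(i, j) = used > 0 ? std::sqrt(s * p / used) : NA_REAL;
        }
    }
    return out;
}')
```

Calling cross_dist(as.matrix(data_miss), as.matrix(data_cc)) would return the full matrix of distances in one call.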

Answer


I suggest you use parallel computing. Put all your code into one function and execute it in parallel.

By default, R does all its calculations in a single thread. You have to add parallel threads manually. Starting a cluster in R takes time, but if you have a big data frame, the performance of the main job will improve by a factor of (your_processors_number - 1).

These links may also help: How-to go parallel in R – basics + tips and A gentle introduction to parallel computing in R.

A good option is to split your job into smaller packets and calculate them separately in each thread. Create the threads only once, since doing so is time-consuming in R.

library(parallel) 
library(foreach) 
library(doParallel) 
# If you are not sure which library a function comes from, 
# try ??your_function to find out. 

# Determine how many processors your computer has; 
# one processor must always stay free for the system. 
no_cores <- detectCores() - 1 

start.t.total <- Sys.time() 
print(start.t.total) 

# Start the parallel cluster (create it only once - it is expensive) 
cl <- makeCluster(no_cores, outfile = "mycalculation_debug.txt") 
registerDoParallel(cl) 

# Results will be collected in out.df (a data frame) 
out.df <- foreach(p = 1:no_cores 
                  , .combine = rbind  # data from different threads will be in one table 
                  , .packages = c()   # all packages that your function uses must be listed here 
                  , .inorder = TRUE) %dopar% {  # don't forget this directive 
    tryCatch({ 
        # 
        # enter your function here and do what you want in parallel 
        # 
        print(Sys.time() - start.t.total) 
        # the last expression of the block is this thread's result, 
        # e.g. a data frame with the distances of packet p 
        packet.df 
    }, error = function(e) paste0("The variable '", p, "'", 
                                  " caused the error: '", e, "'")) 
} 
stopCluster(cl) 
gc()  # force R to free memory from the killed worker processes 
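Applied to the original problem, the work can be parcelled out with the parallel package alone, using one incomplete row per work packet. This is a sketch rather than the answer's exact setup; it repeats the question's example data so it is self-contained:

```r
library(parallel)

# Example data and the complete/incomplete split from the question
data <- data.frame(
    x1 = c(1, 22, NA, NA, 15, 7, 10, 8, NA, 5),
    x2 = c(11, 2, 7, 15, 1, 17, 11, 18, 5, 5),
    x3 = c(21, 5, 6, NA, 10, 22, 12, 2, 12, 3),
    x4 = c(13, NA, NA, 20, 12, 5, 1, 8, 7, 14)
)
data_cc   <- data[complete.cases(data), ]
data_miss <- data[!complete.cases(data), ]

# One worker per spare core; fall back to 1 if detectCores() returns NA
no_cores <- max(1L, detectCores() - 1L, na.rm = TRUE)
cl <- makeCluster(no_cores)
clusterExport(cl, c("data_cc", "data_miss"))

# Each incomplete row is one packet; workers run the question's inner loop
distance_list <- parLapply(cl, seq_len(nrow(data_miss)), function(i) {
    vapply(seq_len(nrow(data_cc)), function(j) {
        as.numeric(dist(rbind(data_miss[i, ], data_cc[j, ])))
    }, numeric(1))
})
stopCluster(cl)
```

With only 10 example rows the cluster start-up dominates, but with 30,000 observations each worker gets a substantial share of the pairs.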

Thank you very much for your answer, it helps me a lot! I did not even know this was possible in R and will try to implement your solution! – JSP


I think the 'amap' package might be helpful here; if you don't want to create your own function, check this [answer](http://stackoverflow.com/a/25767588/6327771) –
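The amap package mentioned in the comment provides Dist(), a parallel counterpart to dist() whose nbproc argument sets the number of processes. A minimal sketch, assuming amap is installed (its NA handling may differ from base dist(), so check results on your data):

```r
library(amap)

# Example data from the question
data <- data.frame(
    x1 = c(1, 22, NA, NA, 15, 7, 10, 8, NA, 5),
    x2 = c(11, 2, 7, 15, 1, 17, 11, 18, 5, 5),
    x3 = c(21, 5, 6, NA, 10, 22, 12, 2, 12, 3),
    x4 = c(13, NA, NA, 20, 12, 5, 1, 8, 7, 14)
)

# Same interface as dist(), but the work is spread over nbproc processes
d <- Dist(data, method = "euclidean", nbproc = 2)
```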