R: Read lines at random from a file using fread or equivalent?

I have a very large multi-gigabyte file that is too costly to load into memory. The rows in the file are not in random order, though. Is there a way to read in a random subset of the rows using something like fread?

Something like this, for example?

data <- fread("data_file", nrows_sample = 90000) 

A GitHub post suggests that one possibility is to do something like this:

fread("shuf -n 5 data_file") 

That doesn't work for me, though. Any ideas?

Answers


Using the tidyverse (as opposed to data.table), you could do something like this:

library(readr) 
library(purrr) 
library(dplyr) 

# generate some random start positions between 1 and the number of rows 
# your file has, assuming you can ballpark that row count 
# 
# generating 900 integers because we'll grab 10 rows for each start, 
# giving us a total of 9000 rows in the final sample 
start_at <- floor(runif(900, min = 1, max = (n_rows_in_your_file - 10))) 

# sort the indices so the file is read sequentially 
start_at <- sort(start_at) 

# read in 10 rows at a time, starting at your random positions; 
# col_names = FALSE because after skipping into the middle of the file 
# there is no header line left to read 
sample_of_rows <- map(start_at, 
        ~read_csv("data_file", n_max = 10, skip = .x, col_names = FALSE)) %>% 
    bind_rows() 
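
The n_rows_in_your_file placeholder is whatever row count you can come up with; as a rough sketch, assuming a Unix shell, you could get it without reading the file into R (R.utils::countLines() is a cross-platform alternative):

# count the lines from the shell rather than loading the file into R 
n_rows_in_your_file <- as.integer(system("wc -l < data_file", intern = TRUE)) 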

If your data file happens to be a plain text file, this solution using the LaF package may be useful:

library(LaF) 

# Prepare dummy data: 1,000,000 rows x 10 columns of random letters 
mat <- matrix(sample(letters, 10 * 1000000, TRUE), nrow = 1000000) 

dim(mat) 
#[1] 1000000  10 

# col.names = FALSE so the file holds exactly 1,000,000 data lines and 
# no header line can turn up in the sample 
write.table(mat, "tmp.csv", 
    row.names = FALSE, 
    col.names = FALSE, 
    sep = ",", 
    quote = FALSE) 

# Read 90,000 random lines 
start <- Sys.time() 
random_mat <- sample_lines(filename = "tmp.csv", 
    n = 90000, 
    nlines = 1000000) 
# sample_lines() returns raw text lines; split them into a character matrix 
random_mat <- do.call("rbind", strsplit(random_mat, ",")) 
Sys.time() - start 
#Time difference of 1.135546 secs  

dim(random_mat) 
#[1] 90000 10
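
Since sample_lines() returns raw text, every column in random_mat comes back as character; a small follow-up sketch, assuming R >= 4.0 for the data-frame method of type.convert(), to re-infer column types:

# rebuild a data frame from the character matrix and let type.convert() 
# guess numeric/integer/logical columns (here everything stays character) 
random_df <- type.convert(as.data.frame(random_mat, stringsAsFactors = FALSE), 
    as.is = TRUE) 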