2017-01-30

Saving a gzip file with pydoop in Python

I am reading and writing files with pydoop from pyspark. I want to write my job output in gzip format. My current code looks like this:

import os

import numpy as np
import pydoop.hdfs as hdfs

def create_data_distributed(workerNum, outputDir, centers, noSamples=10, var=0.1):
    numCenters = centers.shape[0]
    dim = centers.shape[1]
    fptr_out = hdfs.hdfs().open_file(os.path.join(outputDir, "part-%05d" % workerNum) + ".txt", "w")
    for idx in range(noSamples):
        idxCenter = np.random.randint(numCenters)
        sample = centers[idxCenter] + np.random.normal(size=(1, dim))
        # Output the sample as "<center index>, <comma-separated coordinates>".
        fptr_out.write("%d, " % idxCenter)
        for i in range(len(sample[0])):
            fptr_out.write("%f " % sample[0][i])
            if i < (len(sample[0]) - 1):
                fptr_out.write(",")
        fptr_out.write("\n")
    fptr_out.close()
    return

How can I change this code to open and write a gzip file instead of a regular file?

Thanks!

Answer


I expect you can do this by wrapping the file-like object returned by:

fptr_out = hdfs.hdfs().open_file(...) 

with gzip.GzipFile, as in:

hdfs_file = hdfs.hdfs().open_file(...) 
fptr_out = gzip.GzipFile(mode='wb', fileobj=hdfs_file) 

Note that you then have to call close on both:

fptr_out.close() 
hdfs_file.close() 
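The order matters: closing the GzipFile first flushes the gzip trailer into the underlying file before that file is closed. Here is a minimal local sketch of the same wrapping pattern, with an io.BytesIO standing in for the HDFS handle (the sample row is made up):

```python
import gzip
import io

# Stand-in for the file-like object from hdfs.hdfs().open_file(...).
buffer = io.BytesIO()

gz = gzip.GzipFile(mode='wb', fileobj=buffer)
gz.write(b"1, 0.5, 0.25\n")
gz.close()  # flushes the compressed data and gzip trailer into `buffer`
# For a real HDFS handle, hdfs_file.close() would follow here.

# Round trip: decompressing recovers the original bytes.
data = gzip.decompress(buffer.getvalue())
print(data)  # b'1, 0.5, 0.25\n'
```

GzipFile deliberately does not close the fileobj you pass it, which is why both close calls are needed.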

This is cleaner with with statements:

output_filename = os.path.join(outputDir, "part-%05d" % workerNum) + ".txt.gz"
with hdfs.hdfs().open_file(output_filename, "wb") as hdfs_file:
    with gzip.GzipFile(mode='wb', fileobj=hdfs_file) as fptr_out:
        ...

This is all untested. Use at your own risk.
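For what it's worth, the same sampling loop can be exercised locally with gzip.open, which handles compression transparently; this is a runnable sketch against the local filesystem rather than HDFS (the tempfile path and toy centers are invented here, and the row is written with a single join instead of the question's per-coordinate loop):

```python
import gzip
import os
import tempfile

import numpy as np

def create_data_gzip(output_path, centers, noSamples=10, var=0.1):
    """Write noSamples rows of '<center index>, <coords>' to a gzip file."""
    numCenters, dim = centers.shape
    # 'wt' opens the gzip stream in text mode, so plain str writes work.
    with gzip.open(output_path, 'wt') as fptr_out:
        for _ in range(noSamples):
            idxCenter = np.random.randint(numCenters)
            sample = centers[idxCenter] + np.sqrt(var) * np.random.normal(size=dim)
            fptr_out.write("%d, " % idxCenter)
            fptr_out.write(", ".join("%f" % x for x in sample))
            fptr_out.write("\n")

path = os.path.join(tempfile.mkdtemp(), "part-00000.txt.gz")
create_data_gzip(path, centers=np.array([[0.0, 0.0], [5.0, 5.0]]), noSamples=4)

# Read the compressed file back to verify the contents.
with gzip.open(path, 'rt') as f:
    lines = f.readlines()
print(len(lines))  # 4
```

Swapping gzip.open for the GzipFile-over-HDFS wrapping shown above is the only change needed to target HDFS.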