
Our parquet files are stored in an AWS S3 bucket and are compressed with SNAPPY. I am able to use the Python fastparquet module to read the uncompressed version of the parquet file, but not the compressed version. Can the Python fastparquet module read SNAPPY-compressed parquet files?

This is the code I used for the uncompressed file:

import s3fs
from fastparquet import ParquetFile

s3 = s3fs.S3FileSystem(key='XESF', secret='dsfkljsf')
myopen = s3.open
pf = ParquetFile('sample/py_test_snappy/part-r-12423423942834.parquet', open_with=myopen)
df = pf.to_pandas()

That returned no errors, but when I tried to read the SNAPPY-compressed version of the file with this code:

pf = ParquetFile('sample/py_test_snappy/part-r-12423423942834.snappy.parquet', open_with=myopen) 

I get an error from to_pandas():

df=pf.to_pandas() 

Error message:

KeyError                                  Traceback (most recent call last)
 in ()
----> 1 df=pf.to_pandas()

/opt/conda/lib/python3.5/site-packages/fastparquet/api.py in to_pandas(self, columns, categories, filters, index)
    293                      for (name, v) in views.items()}
    294             self.read_row_group(rg, columns, categories, infile=f,
--> 295                                 index=index, assign=parts)
    296             start += rg.num_rows
    297         else:

/opt/conda/lib/python3.5/site-packages/fastparquet/api.py in read_row_group(self, rg, columns, categories, infile, index, assign)
    151         core.read_row_group(
    152                 infile, rg, columns, categories, self.helper, self.cats,
--> 153                 self.selfmade, index=index, assign=assign)
    154         if ret:
    155             return df

/opt/conda/lib/python3.5/site-packages/fastparquet/core.py in read_row_group(file, rg, columns, categories, schema_helper, cats, selfmade, index, assign)
    300         raise RuntimeError('Going with pre-allocation!')
    301     read_row_group_arrays(file, rg, columns, categories, schema_helper,
--> 302                           cats, selfmade, assign=assign)
    303 
    304     for cat in cats:

/opt/conda/lib/python3.5/site-packages/fastparquet/core.py in read_row_group_arrays(file, rg, columns, categories, schema_helper, cats, selfmade, assign)
    289         read_col(column, schema_helper, file, use_cat=use,
    290                  selfmade=selfmade, assign=out[name],
--> 291                  catdef=out[name+'-catdef'] if use else None)
    292 
    293 

/opt/conda/lib/python3.5/site-packages/fastparquet/core.py in read_col(column, schema_helper, infile, use_cat, grab_dict, selfmade, assign, catdef)
    196     dic = None
    197     if ph.type == parquet_thrift.PageType.DICTIONARY_PAGE:
--> 198         dic = np.array(read_dictionary_page(infile, schema_helper, ph, cmd))
    199         ph = read_thrift(infile, parquet_thrift.PageHeader)
    200         dic = convert(dic, se)

/opt/conda/lib/python3.5/site-packages/fastparquet/core.py in read_dictionary_page(file_obj, schema_helper, page_header, column_metadata)
    152     Consumes data using the plain encoding and returns an array of values.
    153     """
--> 154     raw_bytes = _read_page(file_obj, page_header, column_metadata)
    155     if column_metadata.type == parquet_thrift.Type.BYTE_ARRAY:
    156         # no faster way to read variable-length-strings?

/opt/conda/lib/python3.5/site-packages/fastparquet/core.py in _read_page(file_obj, page_header, column_metadata)
     28     """Read the data page from the given file-object and convert it to raw, uncompressed bytes (if necessary)."""
     29     raw_bytes = file_obj.read(page_header.compressed_page_size)
---> 30     raw_bytes = decompress_data(raw_bytes, column_metadata.codec)
     31 
     32     assert len(raw_bytes) == page_header.uncompressed_page_size, \

/opt/conda/lib/python3.5/site-packages/fastparquet/compression.py in decompress_data(data, algorithm)
     48 def decompress_data(data, algorithm='gzip'):
     49     if isinstance(algorithm, int):
---> 50         algorithm = rev_map[algorithm]
     51     if algorithm.upper() not in decompressions:
     52         raise RuntimeError("Decompression '%s' not available. Options: %s" %

KeyError: 1


Can you show us what error you got, and some details about how the files were generated? – mdurant


Yes. Sorry! See above ^ – user2322784

Answer


The error probably indicates that the library for decompressing SNAPPY was not found on your system, although clearly the error message could be better! (Codec 1 in the Parquet metadata is SNAPPY, and the lookup in the traceback fails because no SNAPPY decompressor has been registered.)
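One quick way to confirm this is to check whether a SNAPPY decompressor is registered in fastparquet's decompressions table, the same table the failing lookup in the traceback consults. A minimal sketch, assuming that internal module layout:

from fastparquet import compression

# If python-snappy cannot be imported, fastparquet never registers a
# 'SNAPPY' entry, so codec id 1 from the file metadata cannot be resolved.
print('SNAPPY' in compression.decompressions)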

Depending on your system, one of the following lines may solve the problem for you:

conda install python-snappy
pip install python-snappy

If you are on Windows, the build chain may not work, and perhaps you need to install from here.
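Once python-snappy is installed, re-reading the compressed file should work. Here is a sketch using the placeholder credentials and path from the question; it also peeks at the row-group metadata to show which codec the file declares (assuming fastparquet exposes the thrift row_groups attribute, as the column_metadata.codec reference in the traceback suggests):

import s3fs
from fastparquet import ParquetFile

s3 = s3fs.S3FileSystem(key='XESF', secret='dsfkljsf')
pf = ParquetFile('sample/py_test_snappy/part-r-12423423942834.snappy.parquet',
                 open_with=s3.open)

# codec 1 in the Parquet thrift definition is SNAPPY, matching the KeyError: 1 above
print(pf.row_groups[0].columns[0].meta_data.codec)

df = pf.to_pandas()  # should now decompress the SNAPPY pages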


Edited in https://github.com/dask/fastparquet/pull/84 to improve the error message. – mdurant
