
I am using Elasticsearch from Python. I have data in a pandas DataFrame (3 columns); I added two more columns, _index and _type, and converted the data to JSON with pandas' built-in method. When I bulk load the data into Elasticsearch I get an error.

data = data.to_json(orient='records') 

The data then looks like this:

[{"op_key":99140046678,"employee_key":991400459,"Revenue Results":6625.76480192,"_index":"revenueindex","_type":"revenuetype"},  
{"op_key":99140045489,"employee_key":9914004258,"Revenue Results":6691.05435536,"_index":"revenueindex","_type":"revenuetype"}, 
...... 
}] 

When I index it with helpers.bulk(client, data), I run into this error:

Traceback (most recent call last):
  File "/Users/adaggula/Documents/workspace/ElSearchPython/sample.py", line 59, in <module>
    res = helpers.bulk(client, data)
  File "/Users/adaggula/workspace/python/pve/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 188, in bulk
    for ok, item in streaming_bulk(client, actions, **kwargs):
  File "/Users/adaggula/workspace/python/pve/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 160, in streaming_bulk
    for result in _process_bulk_chunk(client, bulk_actions, raise_on_exception, raise_on_error, **kwargs):
  File "/Users/adaggula/workspace/python/pve/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 89, in _process_bulk_chunk
    raise e
elasticsearch.exceptions.RequestError: TransportError(400, u'action_request_validation_exception', u'Validation Failed: 1: index is missing;2: type is missing;3: index is missing;4: type is missing;5: index is missing;6: ....... type is missing;999: index is missing;1000: type is missing;')

My mapping is:

user_mapping = {
    "settings": {
        "number_of_shards": 3,
        "number_of_replicas": 2
    },
    "mappings": {
        "revenuetype": {
            "properties": {
                "op_key": {"type": "string"},
                "employee_key": {"type": "string"},
                "Revenue Results": {"type": "float", "index": "not_analyzed"}
            }
        }
    }
}
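
For completeness, the index is created with this mapping before any documents are indexed. A minimal sketch of that step, assuming client is the same elasticsearch-py Elasticsearch instance that is later passed to helpers.bulk:

from elasticsearch import Elasticsearch

client = Elasticsearch()  # assumes a node on localhost:9200

# Create the index with the settings and mappings above, if it does not exist yet.
if not client.indices.exists(index='revenueindex'):
    client.indices.create(index='revenueindex', body=user_mapping)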

It looks like _index and _type are reported as missing for every JSON object, even though both are present in the data. How can I get past this?

Answers


Converting the pandas DataFrame to JSON and then parsing it back with json.loads is what solved the problem:

import json

data = data.to_json(orient='records')
data = json.loads(data)
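
Put together, the whole flow looks roughly like this; the DataFrame contents mirror the question and the localhost connection is only illustrative:

import json

import pandas as pd
from elasticsearch import Elasticsearch, helpers

client = Elasticsearch()  # assumes a node on localhost:9200

# A frame shaped like the one in the question.
data = pd.DataFrame({
    'op_key': [99140046678, 99140045489],
    'employee_key': [991400459, 9914004258],
    'Revenue Results': [6625.76480192, 6691.05435536],
})

# Tell the bulk helper which index/type every row goes to.
data['_index'] = 'revenueindex'
data['_type'] = 'revenuetype'

# Round-trip through JSON so each row becomes a plain dict action.
actions = json.loads(data.to_json(orient='records'))

success, errors = helpers.bulk(client, actions)
print(success, errors)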

I was just going to comment that this could probably be shortened to 'data = data.to_dict(orient='records')'. But then I ran a quick test on a DataFrame with 1,000,000 rows and 50 columns and found that your version executes noticeably faster... strangely, 'df.to_dict()' is very slow. – Dirk


I had a similar error and got rid of it by passing include_meta=True, i.e. 'obj.to_dict(include_meta=True)'. – Anupam
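
For context, include_meta=True is the elasticsearch-dsl option on to_dict() that adds the _index/_type metadata the bulk helper needs. A rough sketch under that assumption; the document class and its fields are made up for illustration:

from elasticsearch import Elasticsearch, helpers
from elasticsearch_dsl import DocType, Long, Float

class Revenue(DocType):
    op_key = Long()
    employee_key = Long()
    revenue = Float()

    class Meta:
        index = 'revenueindex'

client = Elasticsearch()
docs = [
    Revenue(op_key=99140046678, employee_key=991400459, revenue=6625.76),
    Revenue(op_key=99140045489, employee_key=9914004258, revenue=6691.05),
]

# include_meta=True makes to_dict() return _index/_type alongside _source,
# which is exactly the action format helpers.bulk expects.
helpers.bulk(client, (doc.to_dict(include_meta=True) for doc in docs))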
