
Is there any tool that can create an Avro schema from a "typical" JSON document?

For example:

{ 
"records":[{"name":"X1","age":2},{"name":"X2","age":4}] 
} 

I found http://jsonschema.net/reboot/#/, which generates a 'JSON schema':

{ 
    "$schema": "http://json-schema.org/draft-04/schema#", 
    "id": "http://jsonschema.net#", 
    "type": "object", 
    "required": false, 
    "properties": { 
        "records": { 
            "id": "#records", 
            "type": "array", 
            "required": false, 
            "items": { 
                "id": "#1", 
                "type": "object", 
                "required": false, 
                "properties": { 
                    "name": { 
                        "id": "#name", 
                        "type": "string", 
                        "required": false 
                    }, 
                    "age": { 
                        "id": "#age", 
                        "type": "integer", 
                        "required": false 
                    } 
                } 
            } 
        } 
    } 
} 

But I want the Avro version.
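
Roughly, the Avro equivalent I am after would look something like this (the record names and exact types here are only guesses, not output from any tool):

{ 
    "type": "record", 
    "name": "TopLevel", 
    "fields": [ 
        { 
            "name": "records", 
            "type": { 
                "type": "array", 
                "items": { 
                    "type": "record", 
                    "name": "Record", 
                    "fields": [ 
                        {"name": "name", "type": "string"}, 
                        {"name": "age", "type": "int"} 
                    ] 
                } 
            } 
        } 
    ] 
} 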


Do you have an answer? If not, did you end up creating the Avro schema from the JSON by hand? :| – Abhishek


I'm curious about this too –


Curious as well –

Answer


You can do this easily with Apache Spark and Python. First download a Spark distribution from http://spark.apache.org/downloads.html, then install the avro package for Python using pip. Then run pyspark with the spark-avro package:

./bin/pyspark --packages com.databricks:spark-avro_2.11:3.1.0 

and use the following code (this assumes input.json contains one or more JSON documents, each on a separate line):

import os, avro.datafile, avro.io 

# Read the JSON documents, coalesce to a single partition and write them out as Avro 
spark.read.json('input.json').coalesce(1).write.format("com.databricks.spark.avro").save("output.avro") 

# Find the Avro part file Spark produced inside the output directory 
avrofilename = next(f for f in os.listdir('output.avro') if f.startswith('part-')) 

# Open the Avro container in binary mode and print the schema it was written with 
with open('output.avro/' + avrofilename, 'rb') as avrofile: 
    reader = avro.datafile.DataFileReader(avrofile, avro.io.DatumReader()) 
    print(reader.datum_reader.writers_schema)  # on newer avro versions the attribute is named writer_schema 
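
If you want to keep the recovered schema around, one possible follow-up (not part of the original answer; it relies on str() of the schema object returning its JSON form) is to dump it to an .avsc file:

import json 

# Hypothetical follow-up: save the recovered schema as a pretty-printed .avsc file 
with open('output.avsc', 'w') as out: 
    json.dump(json.loads(str(reader.datum_reader.writers_schema)), out, indent=2) 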

For example, an input file with the contents:

{"string": "somestring", "number": 3.14, "structure": {"integer": 13}} 
{"string": "somestring2", "structure": {"integer": 14}} 

will make the script output:

{"fields": [{"type": ["double", "null"], "name": "number"}, {"type": ["string", "null"], "name": "string"}, {"type": [{"type": "record", "namespace": "", "name": "structure", "fields": [{"type": ["long", "null"], "name": "integer"}]}, "null"], "name": "structure"}], "type": "record", "name": "topLevelRecord"}