
I am trying to convert data from a website API into a Python DataFrame.

Here is my code for downloading global earthquake data:

url = "http://api.openhazards.com/GetEarthquakeCatalog?t0=1990/01/01 00:00:00&m0=5.5&x0=-125&x1=180&y0=32&y1=42" 

response = requests.get(url) 
response 

<Response [200]>

from bs4 import BeautifulSoup

page = response.content
scraping = BeautifulSoup(page, "lxml")
element = scraping.find_all()
element

Here is an example of the resulting data:

[<html><body><p>1990/01/10 03:11:17.630000 39.706 143.306 36.6 5.7\n1990/01/14 03:03:19.230000 37.819 91.971 12.2 6.1\n1990/01/20 01:27:09.800000 35.832 52.954 24.5 5.5\n1990/02/05 05:16:46.150000 37.047 71.25 9.9 6.1\n1990/02/20 06:53:39.890000 34.706 139.252 14.2 6.1\n1990/02/28 23:43:36.750000 34.144 -117.697 3.29 5.51\n1990/03/05 20:47:00.760000 36.907 73.021 12.2 5.8\n1990/03/05 20:51:13.060000 36.738 73.061 10.0 5.7\n1990/03/25 14:17:18.820000 37.034 72.942 33.0 6.0\n1990/04/11 20:51:12.190000 35.474 135.451 62.3 5.6\n1990/04/17 01:59:33.400000 39.436 74.9 33.0 6.2\n1990/04/26 09:37:10.940000 36.04 100.274 10.0 5.7\n1990/04/26 09:37:15.040000 35.986 100.245 8.1 6.9\n1990/04/26 09:37:45.380000 36.239 100.254 9.6 6.3\n1990/05/11 13:10:20.290000 41.82 130.858 78.5 6.3\n1990/05/15 14:25:20.690000 36.043 70.428 13.1 5.9\n1990/05/15 22:2 

How can I create a DataFrame with each value in its own column?

Here is my trial code and the result. I tried many different approaches, but I could not get it to work.

import pandas as pd

a = pd.DataFrame(element)
a

(screenshot of the resulting DataFrame)

Here is also the request and response format expected by the website's API: (screenshot of the expected request and response)

Answer


The pandas.read_csv method can read a URL directly, so you can do this:

import pandas as pd

df = pd.read_csv("http://api.openhazards.com/GetEarthquakeCatalog?t0=1990/01/01%2000:00:00&m0=5.5&x0=-125&x1=180&y0=32&y1=42",
                 header=None, sep=" ")

df.head() 

(screenshot of df.head() output)
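
Since header=None leaves the columns unnamed, you could also pass explicit names and build the frame from the requests response you already have. This is only a sketch: the column names (date, time, latitude, longitude, depth, magnitude) are an assumption based on the sample output shown in the question, and it assumes the endpoint returns plain whitespace-separated text.

import io

import pandas as pd
import requests

url = "http://api.openhazards.com/GetEarthquakeCatalog?t0=1990/01/01%2000:00:00&m0=5.5&x0=-125&x1=180&y0=32&y1=42"

response = requests.get(url)
response.raise_for_status()  # fail early on a bad HTTP status

# The endpoint appears to return plain whitespace-separated text, so the
# response body can be fed to read_csv directly, without BeautifulSoup.
# The column names below are an assumption based on the sample in the question.
df = pd.read_csv(io.StringIO(response.text),
                 sep=" ",
                 header=None,
                 names=["date", "time", "latitude", "longitude", "depth", "magnitude"])

print(df.head())

If a single timestamp column is needed, pd.to_datetime(df["date"] + " " + df["time"]) should combine the first two columns, assuming the format shown in the sample above.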