2016-06-12 68 views

Aiohttp, asyncio: RuntimeError: Event loop is closed

I have two scripts, scraper.py and db_control.py. In scraper.py I have something like this:

... 
def scrap(category, field, pages, search, use_proxy, proxy_file): 
    ... 
    loop = asyncio.get_event_loop() 

    to_do = [ get_pages(url, params, conngen) for url in urls ] 
    wait_coro = asyncio.wait(to_do) 
    res, _ = loop.run_until_complete(wait_coro) 
    ... 
    loop.close() 

    return [ x.result() for x in res ] 

... 

And in db_control.py:

from scraper import scrap 
... 
while new < 15: 
    data = scrap(category, field, pages, search, use_proxy, proxy_file) 
    ... 
... 

In theory, the scraper should be started an unknown number of times, until enough data has been gathered. But when new is not immediately > 15, this error appears:

File "/usr/lib/python3.4/asyncio/base_events.py", line 293, in run_until_complete 
    self._check_closed() 
File "/usr/lib/python3.4/asyncio/base_events.py", line 265, in _check_closed 
    raise RuntimeError('Event loop is closed') 
RuntimeError: Event loop is closed 

But if I run scrap() only once, the script works just fine. So I am guessing something goes wrong when the loop is recreated with loop = asyncio.get_event_loop(); I have tried this, but nothing changed. How can I fix this? Of course these are just snippets of my code; if you think the problem may be elsewhere, the full code is available here.
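The failure is easy to reproduce in isolation. In the sketch below, fetch and scrap_once are hypothetical stand-ins for my get_pages and scrap; the point is only that the default loop is closed on the first call and then reused on the second:

```python
import asyncio

async def fetch():
    # Hypothetical stand-in for the real get_pages coroutine.
    await asyncio.sleep(0)
    return "page"

def scrap_once():
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(fetch())
    loop.close()  # irreversibly closes the default loop
    return result

# Make sure a fresh default loop exists before the first call.
asyncio.set_event_loop(asyncio.new_event_loop())

print(scrap_once())   # first call: fine
try:
    scrap_once()      # second call reuses the now-closed default loop
except RuntimeError as exc:
    print(exc)        # Event loop is closed
```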

Answer


The methods run_until_complete, run_forever, run_in_executor, create_task and call_at explicitly check the loop and raise an exception if it is closed. From the docs:

Quote from BaseEventLoop.close:

This is idempotent and irreversible


Unless you have some (good) reason, you can simply omit the closing line:

def scrap(category, field, pages, search, use_proxy, proxy_file): 
    #... 
    loop = asyncio.get_event_loop() 

    to_do = [ get_pages(url, params, conngen) for url in urls ] 
    wait_coro = asyncio.wait(to_do) 
    res, _ = loop.run_until_complete(wait_coro) 
    #... 
    # loop.close() 
    return [ x.result() for x in res ] 

If you want to have a brand-new loop each time, you have to create it manually and set it as the default:

def scrap(category, field, pages, search, use_proxy, proxy_file): 
    #... 
    loop = asyncio.new_event_loop() 
    asyncio.set_event_loop(loop)  
    to_do = [ get_pages(url, params, conngen) for url in urls ] 
    wait_coro = asyncio.wait(to_do) 
    res, _ = loop.run_until_complete(wait_coro) 
    #... 
    return [ x.result() for x in res ] 
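If you do want the loop closed after each call, a variant of the second approach is to close the fresh loop in a finally block, so every call gets its own loop and releases it deterministically. get_pages_stub below is a hypothetical stand-in for the real get_pages(url, params, conngen):

```python
import asyncio

async def get_pages_stub(url):
    # Hypothetical stand-in for get_pages(url, params, conngen).
    await asyncio.sleep(0)
    return f"content of {url}"

def scrap(urls):
    # Fresh loop per call, closed deterministically so nothing leaks.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        # asyncio.wait expects tasks (bare coroutines were dropped in 3.11).
        tasks = [loop.create_task(get_pages_stub(u)) for u in urls]
        done, _ = loop.run_until_complete(asyncio.wait(tasks))
        return [t.result() for t in done]
    finally:
        loop.close()

# Safe to call repeatedly, unlike a single shared loop that gets closed.
print(sorted(scrap(["a", "b"])))  # ['content of a', 'content of b']
print(scrap(["c"]))               # ['content of c']
```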

Thanks! Works like a charm now :) –