I am trying to fetch all (5) tables from this URL. Splinter's select(value) returns an AttributeError.

For the page dropdown on the individual pages I can fill in the page number with type(value), but that does not refresh the page. Paging through with the NextPage button also fails, because after the click the previously found element is no longer attached to the DOM (I don't know how to get around that in Splinter).
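My best guess at a workaround is to never reuse an element across a page change and to look the Next button up again on every page. This is only a sketch (n_pages is a placeholder for the real page count; browser, sleep and the btnMRBPageNext class are the same as in the full listing further down):

# workaround idea (untested sketch): re-find the Next button after every
# postback instead of reusing a stale reference; n_pages is a placeholder
for _ in range(n_pages - 1):
    next_btn = [inp for inp in browser.find_by_tag('input')
                if inp.has_class('btnMRBPageNext')][0]
    next_btn.click()
    sleep(2)  # let the postback finish before touching the new DOM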
Filling the dropdown and then calling select() on it returns this error:
Traceback (most recent call last):
File "<stdin>", line 69, in <module>
File "/usr/local/lib/python2.6/dist-packages/splinter/driver/webdriver/__init__.py", line 334, in select
self.find_by_xpath('//select[@name="%s"]/option[@value="%s"]' % (self["name"], value))._element.click()
File "/usr/local/lib/python2.6/dist-packages/splinter/element_list.py", line 73, in __getattr__
self.__class__.__name__, name))
AttributeError: 'ElementList' object has no attribute '_element'
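Line 69 of the traceback is, as far as I can tell, the selects[0].select(selects[0].value) call at the end of the listing below. My guess is that the XPath built by select(), //select[@name="..."]/option[@value="..."], matches nothing, i.e. either the value or the select's name attribute is not what I expect, and the empty ElementList then blows up on ._element. To check, I print the option values that really exist under the select (diagnostic sketch only; sel is the select element in question):

# diagnostic sketch: list the option values that actually exist under this
# <select>, to compare against the value passed to select()
for opt in browser.find_by_xpath('//select[@name="%s"]/option' % sel['name']):
    print repr(opt.value)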
I am using the code below. Any help is much appreciated!
from splinter import Browser
from lxml.html import parse
from StringIO import StringIO
from time import sleep

url = r'http://www.molpower.com//VLCWeb/UIAboutMOL/PortScheduleInfo.aspx?pPort=NLRTMDE&pFromDate=01-Oct-2013&pToDate=10-Oct-2013'

def _unpack(row, kind='td'):
    elts = row.findall('.//%s' % kind)
    return [val.text_content() for val in elts[0:7]]

def parse_schdls_data(table):
    rows = table.findall('.//tr')
    hdrs = _unpack(rows[0], kind='th')
    data = [_unpack(r, kind='td') for ir, r in enumerate(rows[1:-1]) if ir % 3 == 0]
    return (hdrs, data)

with Browser() as browser:
    browser.visit(url)
    print browser.url
    pages = browser.find_by_tag('option')
    pagevals = [p.value for p in pages]
    maxpagev = max(pagevals)
    inputs = browser.find_by_tag('input')
    '''
    for ip, inp in enumerate(inputs):
        if inp.has_class('btnMRBPageNext'):
            #print ip, inp.value, inp.text
            #Need input 35 for the nextPage
            inp.click()
    '''
    selects = browser.find_by_tag('select')
    for ns, sel in enumerate(selects):
        if sel.has_class('inputDropDown'):
            print ns, sel.value, sel.text
            sel.type(sel.value)
    sleep(2)
    moldata = list()
    for page in range(len(pagevals)):
        content = browser.html
        parsed = parse(StringIO(content))
        doc = parsed.getroot()
        tables = doc.findall('.//table')
        schdls = tables[91]
        #Get all rows from that table
        rows = schdls.findall('.//tr')
        hdr, data = parse_schdls_data(schdls)
        #print page, data
        moldata.append(data)
        while browser.is_element_not_present_by_tag('select', wait_time=2):
            pass
        inputs = browser.find_by_tag('input')
        selects = browser.find_by_tag('select')
        #inputs[35].click()
        #selects[0].type(str(page + 1))
        selects[0].select(selects[0].value)
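For completeness, this is roughly the paging loop I would like to end up with: pick the page in the dropdown by the select's name with browser.select(), wait, then re-parse the HTML. It is only a sketch; 'ddlPage' is a placeholder for the real name attribute of the page dropdown (which I have not confirmed), and I am assuming the option values are the page numbers and that selecting the option fires the onchange postback that type() did not:

moldata = list()
for pageval in sorted(pagevals, key=int):
    # 'ddlPage' is a placeholder for the page dropdown's name attribute
    browser.select('ddlPage', pageval)
    sleep(2)  # wait for the postback to load the next page
    doc = parse(StringIO(browser.html)).getroot()
    hdr, data = parse_schdls_data(doc.findall('.//table')[91])
    moldata.append(data)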