mechanize._response.httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt
While using mechanize, I added the code from "Screen scraping: getting around 'HTTP Error 403: request disallowed by robots.txt'" to ignore robots.txt, but now I get this error:
mechanize._response.httperror_seek_wrapper: HTTP Error 403: Forbidden
Is there a way to work around this error?
Current code:
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)  # ignore robots.txt