Practical Use of Scrapy Middleware
Scrapy architecture overview:
I. Downloader Middleware
As shown at steps 4 and 5 of the Scrapy architecture diagram, downloader middleware is a framework of hooks into Scrapy's request/response processing. It is used, for example, to set a proxy IP or headers on a request, or to inspect the HTTP status code of a response.
Scrapy already ships with a set of built-in downloader middlewares:
{
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
    'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
}
These are the downloader middlewares enabled by default; for what each one does, see the official documentation: Scrapy downloader middleware.
Custom downloader middleware
Sometimes we need to write our own downloader middleware, for example to use a proxy pool or to rotate user agents. To use custom downloader middleware, activate your implementation classes in the settings file, like this:
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.Custom_A_DownloaderMiddleware': 543,
    'myproject.middlewares.Custom_B_DownloaderMiddleware': 643,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}
The setting value is a dict: each key is the path to a middleware class, and the number is its order. The smaller the number, the closer the middleware sits to the engine; the larger the number, the closer it sits to the downloader. So middlewares with smaller numbers get process_request() called earlier, and those with larger numbers get process_response() called earlier. To disable a middleware, simply set its value to None.
(PS: if two middlewares have no required ordering between them, the exact numbers do not matter.)
To implement a downloader middleware, we override the following methods:
- process_request(request, spider) to handle outgoing requests;
- process_response(request, response, spider) to handle responses;
- process_exception(request, exception, spider) to handle exceptions.
process_request(request, spider)
process_request() may return one of: None, a Response object, a Request object, or raise IgnoreRequest.
- If it returns None, Scrapy continues processing this request: the remaining middlewares run until the appropriate downloader handler is called, the request is performed, and its response is downloaded.
- If it returns a Response object, Scrapy will not call any other process_request() or process_exception() method, nor the download function; that response is returned directly. The process_response() methods of the installed middlewares are still called for every response.
- If it returns a Request object, Scrapy stops calling process_request() methods and reschedules the returned request. Once the newly returned request has been performed, the middleware chain is invoked on the downloaded response as usual.
- If it raises IgnoreRequest, the process_exception() methods of the installed downloader middlewares are called. If none of them handles the exception, the request's errback (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).
Returning None is by far the most common case; the crawl simply continues.
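As a minimal sketch of that pattern (the class name and header below are illustrative, not from the article): mutate the request in place, then return None so processing continues.

class AddHeaderMiddleware:

    def process_request(self, request, spider):
        # Mutate the request in place...
        request.headers.setdefault('X-Crawled-By', spider.name)
        # ...then return None so the remaining middlewares and the
        # downloader handler continue processing this request.
        return None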
process_response(request, response, spider)
Called once the downloader has completed the HTTP request and is passing the response back to the engine. It returns a Response or a Request, or raises IgnoreRequest.
- If it returns a Response object, that response is handed to the process_response() of the next middleware in the chain.
- If it returns a Request object, the middleware chain stops and the returned request is rescheduled for download.
- If it raises IgnoreRequest, the errback (Request.errback) is called to handle it; if nothing handles it, the request is ignored.
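A hedged sketch of this contract (the class name and the empty-body check are assumptions, not from the article): return the response to pass it along, or return a request to force a re-download.

class EmptyBodyRetryMiddleware:

    def process_response(self, request, response, spider):
        if not response.body:
            # Returning a Request halts the chain and reschedules the download;
            # dont_filter=True stops the dupefilter from dropping the retry.
            return request.replace(dont_filter=True)
        # Returning the Response passes it on to the next process_response().
        return response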
process_exception(request, exception, spider)
Scrapy calls process_exception() when a download handler or a process_request() method raises an exception (including IgnoreRequest). It usually returns None, which lets the exception continue through the remaining handlers.
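For example (a sketch; the class name, the PROXIES setting, and the choice of exceptions are assumptions), process_exception() can swap in a fresh proxy and return a new Request so the download is retried:

import random

from twisted.internet.error import ConnectionRefusedError, TimeoutError


class ProxySwapOnErrorMiddleware:

    def __init__(self, proxies):
        self.proxies = proxies

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.getlist('PROXIES'))

    def process_exception(self, request, exception, spider):
        if isinstance(exception, (TimeoutError, ConnectionRefusedError)):
            # Returning a Request reschedules it through the middleware chain.
            retry_request = request.replace(dont_filter=True)
            retry_request.meta['proxy'] = 'http://' + random.choice(self.proxies)
            return retry_request
        return None  # let other middlewares keep handling the exception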
from_crawler(cls, crawler)
This class method is the usual entry point for accessing settings and signals.
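A short sketch of the pattern (the MY_TAG setting and the class are hypothetical):

from scrapy import signals


class TaggedMiddleware:

    def __init__(self, tag):
        self.tag = tag

    @classmethod
    def from_crawler(cls, crawler):
        # Read settings here and hook any signals the middleware cares about.
        mw = cls(tag=crawler.settings.get('MY_TAG', 'default'))
        crawler.signals.connect(mw.spider_closed, signal=signals.spider_closed)
        return mw

    def spider_closed(self, spider):
        spider.logger.info('middleware %s shutting down', self.tag)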
For example, the two downloader middlewares below rotate the user agent and the proxy IP.
# defined in settings.py
USER_AGENT_LIST = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:41.0) Gecko/20100101 Firefox/41.0",
]

PROXIES = [
    '1.85.220.195:8118',
    '60.255.186.169:8888',
    '118.187.58.34:53281',
    '116.224.191.141:8118',
    '120.27.5.62:9090',
    '119.132.250.156:53281',
    '139.129.166.68:3128',
]
Proxy IP middleware
import random


class Proxy_Middleware:

    def __init__(self, crawler):
        # proxy and user-agent pools defined in settings.py
        self.proxy_list = crawler.settings.getlist('PROXIES')
        self.ua_list = crawler.settings.getlist('USER_AGENT_LIST')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_request(self, request, spider):
        try:
            # pick a random user agent and proxy for this request
            ua = random.choice(self.ua_list)
            request.headers.setdefault('User-Agent', ua)
            proxy_ip_port = random.choice(self.proxy_list)
            request.meta['proxy'] = 'http://' + proxy_ip_port
        except Exception:
            spider.logger.error('some error happened!')
重試中間件
Sometimes a proxy gets rejected by the remote server or times out; when that happens we want to switch to another proxy IP and retry, by subclassing scrapy.downloadermiddlewares.retry.RetryMiddleware:
import random

from scrapy.downloadermiddlewares.retry import RetryMiddleware
from scrapy.utils.response import response_status_message


class My_RetryMiddleware(RetryMiddleware):

    def __init__(self, settings):
        # let the parent class read RETRY_TIMES, RETRY_HTTP_CODES, etc.
        super().__init__(settings)
        self.proxy_list = settings.getlist('PROXIES')
        self.ua_list = settings.getlist('USER_AGENT_LIST')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)

    def process_response(self, request, response, spider):
        if request.meta.get('dont_retry', False):
            return response

        if response.status in self.retry_http_codes:
            reason = response_status_message(response.status)
            try:
                # direct assignment so the retried request actually gets a
                # fresh user agent (setdefault would keep the old one)
                request.headers['User-Agent'] = random.choice(self.ua_list)
                proxy_ip_port = random.choice(self.proxy_list)
                request.meta['proxy'] = 'http://' + proxy_ip_port
            except Exception:
                spider.logger.error('failed to pick a new proxy IP!')
            return self._retry(request, reason, spider) or response
        return response
# using Selenium with Scrapy
from scrapy.http import HtmlResponse
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from gp.configs import *  # project config providing CHROME_PATH and CHROME_DRIVER_PATH


class ChromeDownloaderMiddleware(object):

    def __init__(self):
        options = webdriver.ChromeOptions()
        options.add_argument('--headless')  # run Chrome without a UI
        if CHROME_PATH:
            options.binary_location = CHROME_PATH
        if CHROME_DRIVER_PATH:
            # initialize the Chrome driver with an explicit chromedriver path
            self.driver = webdriver.Chrome(options=options, executable_path=CHROME_DRIVER_PATH)
        else:
            self.driver = webdriver.Chrome(options=options)  # chromedriver found on PATH

    def __del__(self):
        self.driver.quit()  # quit() ends the whole browser session, not just one window

    def process_request(self, request, spider):
        try:
            print('Chrome driver begin...')
            self.driver.get(request.url)  # render the page in the browser
            # wrap the rendered page source in an HtmlResponse for the spider
            return HtmlResponse(url=request.url, body=self.driver.page_source,
                                request=request, encoding='utf-8', status=200)
        except TimeoutException:
            return HtmlResponse(url=request.url, request=request, encoding='utf-8', status=500)
        finally:
            print('Chrome driver end...')
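None of these classes take effect until they are activated in DOWNLOADER_MIDDLEWARES; a sketch, assuming they live in myproject/middlewares.py (the priority numbers are illustrative):

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.Proxy_Middleware': 543,
    'myproject.middlewares.My_RetryMiddleware': 550,
    # disable the built-in retry middleware that My_RetryMiddleware replaces
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
    'myproject.middlewares.ChromeDownloaderMiddleware': 560,
}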
II. Spider Middleware
As shown in the architecture diagram at the top of this article, spider middleware handles responses as well as the items and Requests generated by the spider.
To enable custom spider middleware, you must first activate it in settings:
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.CustomSpiderMiddleware': 543,
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': None,
}
As before, smaller numbers sit closer to the engine and get process_spider_input() called first; larger numbers sit closer to the spider and get process_spider_output() called first. Set a value to None to disable a middleware.
Writing custom spider middleware
process_spider_input(response, spider)
Called for each response passing through the spider middleware on its way into the spider; it should return None (or raise an exception).
process_spider_output(response, result, spider)
Called with the result the spider returns after processing the response; it must return an iterable of Request or Item objects. Typically you iterate over result and return it.
process_spider_exception(response, exception, spider)
Called when the spider, or another middleware's process_spider_output(), raises an exception; it returns either None (to keep propagating the exception) or an iterable of Request, dict, or Item objects (to handle it).
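Putting the three hooks together, here is a minimal sketch of the CustomSpiderMiddleware referenced in the settings above (the logging and pass-through logic are illustrative, not a prescribed template):

class CustomSpiderMiddleware:

    def process_spider_input(self, response, spider):
        # Runs as the response enters the spider; returning None means "carry on".
        if response.status >= 400:
            spider.logger.warning('got %s for %s', response.status, response.url)
        return None

    def process_spider_output(self, response, result, spider):
        # Runs on whatever the spider callback yielded; must return an iterable.
        for request_or_item in result:
            yield request_or_item

    def process_spider_exception(self, response, exception, spider):
        # Returning None propagates the exception to the next middleware;
        # returning an iterable of Request/dict/Item objects would handle it.
        spider.logger.error('spider error on %s: %r', response.url, exception)
        return None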
References:
https://docs.scrapy.org/en/latest/topics/spider-middleware.html
https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
This concludes this article on Scrapy middleware (Middleware). For more related content, search WalkonNet's earlier articles, and we hope you will continue to support WalkonNet!