Redis-Scrapy Distributed Crawler

The Scrapy-Redis distributed strategy:

Scrapy-Redis builds on top of Scrapy and adds more, and more powerful, functionality, specifically:

request deduplication, crawl persistence, and easy distribution across multiple machines.

 

Suppose we have four computers running Windows 10, Mac OS X, Ubuntu 16.04, and CentOS 7.2. Any one of them can act as the Master or as a Slaver, for example:

  • Master (core server): the Windows 10 machine, which hosts a Redis database. It does no crawling itself; it is only responsible for URL fingerprint deduplication, Request allocation, and storing the scraped data.

  • Slavers (crawler execution nodes): the Mac OS X, Ubuntu 16.04, and CentOS 7.2 machines, which run the spider and submit newly generated Requests back to the Master while crawling.

 

  1. A Slaver first takes a task (a Request/URL) from the Master and crawls it; any new Requests generated while crawling are submitted back to the Master for processing.

  2. The Master runs a single Redis database. It deduplicates the incoming Requests, pushes the ones that pass deduplication onto the pending-crawl queue for allocation to Slavers, and also stores the scraped data (see the key listing right after this list for what that looks like in Redis).
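
Concretely, with the defaults used later in this post, the Master's Redis ends up holding keys along these lines (the names follow the scrapy-redis <spider>:... convention, so they will differ if you change the spider name or redis_key):

    dangdang              # the redis_key list that start URLs are pushed into by hand
    dangdang:requests     # the shared queue of pending Requests that every Slaver pulls from
    dangdang:dupefilter   # the set of request fingerprints used for deduplication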

This is exactly the strategy Scrapy-Redis uses by default, and it is very easy to adopt because Scrapy-Redis already takes care of task scheduling and the related plumbing: all we need to do is inherit from RedisSpider and specify a redis_key, as in the minimal sketch below.
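
A minimal sketch of such a spider (the spider name, redis_key, and parse logic here are placeholders, not part of the Dangdang project that follows):

    from scrapy_redis.spiders import RedisSpider

    class DemoSpider(RedisSpider):
        name = "demo"
        # No hard-coded start_urls: start URLs are popped from this Redis list,
        # so every machine running the spider shares the same task source.
        redis_key = "demo:start_urls"

        def parse(self, response):
            yield {"url": response.url,
                   "title": response.xpath("//title/text()").extract_first()}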

The drawback is that what Scrapy-Redis schedules are full Request objects, which carry a fair amount of data (not just the URL, but also the callback name, headers, and so on). The likely consequences are a slower crawl and heavy use of Redis storage, so keeping the setup efficient requires a certain level of hardware.


Case study: scraping book information from dangdang.com (当当网):


 

 

1. Create the Scrapy project

 

Use the global command startproject: create a new folder, cd into it on the command line, and create a Scrapy project named dangdang.

 


    scrapy startproject dangdang

 
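This generates the usual Scrapy project skeleton, roughly as below (exact files depend on your Scrapy version):

    dangdang/
        scrapy.cfg
        dangdang/
            __init__.py
            items.py
            middlewares.py
            pipelines.py
            settings.py
            spiders/
                __init__.py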

 

2. Use the project command genspider to create the spider

 


    scrapy genspider dangdang dangdang.com

 
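genspider writes a spider module from the default template, roughly like the sketch below (template details vary slightly between Scrapy versions); in the next step we replace scrapy.Spider with RedisSpider and swap start_urls for a redis_key:

    # dangdang/spiders/dangdang.py -- as generated (approximate)
    import scrapy

    class DangdangSpider(scrapy.Spider):
        name = 'dangdang'
        allowed_domains = ['dangdang.com']
        start_urls = ['http://dangdang.com/']

        def parse(self, response):
            pass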

 

3. Send requests, receive responses, and extract the data

 

 
    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy_redis.spiders import RedisSpider
    from copy import deepcopy


    class DangdangSpider(RedisSpider):
        name = 'dangdang'
        allowed_domains = ['dangdang.com']
        # start_urls = ['http://book.dangdang.com/']
        redis_key = "dangdang"

        def parse(self, response):
            div_list = response.xpath("//div[@class='con flq_body']/div")
            for div in div_list:  # top-level category
                item = {}
                item["b_cate"] = div.xpath("./dl/dt//text()").extract()
                # middle-level category
                dl_list = div.xpath("./div//dl[@class='inner_dl']")
                for dl in dl_list:
                    item["m_cate"] = dl.xpath("./dt/a/text()").extract_first()
                    # bottom-level category
                    a_list = dl.xpath("./dd/a")
                    for a in a_list:
                        item["s_cate"] = a.xpath("./@title").extract_first()
                        item["s_href"] = a.xpath("./@href").extract_first()
                        if item["s_href"] is not None:
                            # request the book list page for this category
                            yield scrapy.Request(
                                item["s_href"],
                                callback=self.parse_book_list,
                                meta={"item": deepcopy(item)}
                            )

        def parse_book_list(self, response):
            item = response.meta["item"]
            li_list = response.xpath("//ul[@class='bigimg']/li")
            for li in li_list:
                item["book_title"] = li.xpath("./a/@title").extract_first()
                item["book_href"] = li.xpath("./a/@href").extract_first()
                item["book_detail"] = li.xpath("./p[@class='detail']/text()").extract_first()
                item["book_price"] = li.xpath(".//span[@class='search_now_price']/text()").extract_first()
                item["book_author"] = li.xpath("./p[@class='search_book_author']/span[1]/a/@title").extract_first()
                item["book_publish_date"] = li.xpath("./p[@class='search_book_author']/span[2]/text()").extract_first()
                item["book_press"] = li.xpath("./p[@class='search_book_author']/span[3]/a/@title").extract_first()
                print(item)            # quick visual check while developing
                yield deepcopy(item)   # hand the item to the pipeline configured in step 4


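The spider above only walks the first page of each category's book list. Pagination could be added at the end of parse_book_list with something like the sketch below; the XPath for the next-page link is a guess for illustration, not taken from the original project, so verify it against the live page:

    # inside parse_book_list, after the for-loop over li_list:
    next_url = response.xpath("//li[@class='next']/a/@href").extract_first()
    if next_url is not None:
        yield response.follow(
            next_url,
            callback=self.parse_book_list,
            meta={"item": deepcopy(response.meta["item"])}
        )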

 

 

4. Set up a pipeline to clean and save the data:

 

 
    # -*- coding: utf-8 -*-

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


    class BookPipeline(object):
        def process_item(self, item, spider):
            # strip stray whitespace from the fields extracted in parse_book_list
            item["book_title"] = item["book_title"].strip() if item["book_title"] is not None else None
            item["book_publish_date"] = item["book_publish_date"].strip() if item["book_publish_date"] is not None else None
            print(item)
            return item


 
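If you also want the scraped items themselves pushed into Redis (the note in step 5 mentions this option), scrapy-redis ships a ready-made RedisPipeline that serializes each yielded item and appends it to the "<spider>:items" list in Redis; enabling it is just one more ITEM_PIPELINES entry, for example:

    ITEM_PIPELINES = {
        'book.pipelines.BookPipeline': 300,
        # scrapy-redis built-in pipeline: stores items in Redis under "dangdang:items"
        'scrapy_redis.pipelines.RedisPipeline': 400,
    }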

 

5. Configure settings so that scheduling, deduplication, and persistence all go through Redis:


Note: everything in settings can be customized, which means we can override the dedup filter and the scheduler, and decide whether the scraped data should also be stored in Redis (via a pipeline). The file below was generated for a project whose package is named book; if you named your project dangdang in step 1, adjust the module paths (book.spiders, book.pipelines.BookPipeline) accordingly.

 

 

 

 
    # -*- coding: utf-8 -*-

    # Scrapy settings for book project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     http://doc.scrapy.org/en/latest/topics/settings.html
    #     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

    BOT_NAME = 'book'

    SPIDER_MODULES = ['book.spiders']
    NEWSPIDER_MODULE = 'book.spiders'

    # scrapy-redis components: shared dedup filter, shared scheduler, and persistence
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"
    SCHEDULER_PERSIST = True
    REDIS_URL = "redis://127.0.0.1:6379"

    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'

    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False

    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32

    # Configure a delay for requests for the same website (default: 0)
    # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16

    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False

    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False

    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}

    # Enable or disable spider middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'book.middlewares.BookSpiderMiddleware': 543,
    #}

    # Enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'book.middlewares.MyCustomDownloaderMiddleware': 543,
    #}

    # Enable or disable extensions
    # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}

    # Configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
        'book.pipelines.BookPipeline': 300,
    }

    # Enable and configure the AutoThrottle extension (disabled by default)
    # See http://doc.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False

    # Enable and configure HTTP caching (disabled by default)
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'


 
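A couple more scrapy-redis knobs can go in the same file if needed; for instance, the ordering of the shared request queue can be changed with SCHEDULER_QUEUE_CLASS. The class paths below are the ones shipped with recent scrapy-redis releases (older releases used different names such as SpiderQueue/SpiderStack, so check your installed version):

    # Optional: how the shared request queue orders requests (default is the priority queue)
    #SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.PriorityQueue"
    #SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.FifoQueue"   # roughly breadth-first
    #SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.LifoQueue"   # roughly depth-first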

 

 

6. Start crawling: run the project command crawl on each Slaver to launch the spider:

 


    scrapy crawl dangdang
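
Because this is a RedisSpider, the freshly started crawlers sit idle until a start URL appears in the Redis list named by redis_key. Seed it once on the Master's Redis with redis-cli (the URL is the one commented out as start_urls in the spider):

    lpush dangdang http://book.dangdang.com/

The same scrapy crawl dangdang command can then be run on as many Slaver machines as you like; they all share the one request queue and dedup set in the Master's Redis, and with SCHEDULER_PERSIST = True the queue survives restarts, so the crawl can be stopped and resumed.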