A Preliminary Look at URL Deduplication in Web Crawlers (Part 1)
Blog, Day 15
Goal: write our own init_add_request(spider, url: str) helper to pre-seed URL deduplication, i.e. mark a URL as already seen so the scheduler skips it (this post is a test only)
Tools: Python 3.6, PyCharm, Scrapy
Project contents:
1. Setup:
# spider.py
import scrapy
from scrapy.http import Request


class DuanDian(scrapy.Spider):
    name = 'duandian'
    allowed_domains = ['58.com']
    start_urls = ['http://cd.58.com/']

    def parse(self, response):
        # Yield two more city homepages; both pass through the
        # scheduler's dupefilter before being downloaded
        yield Request('http://bj.58.com', callback=self.parse)
        yield Request('http://wh.58.com', callback=self.parse)
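A side note on why the start URL itself is never filtered: Request accepts a dont_filter flag that bypasses the dupefilter entirely, and Scrapy's built-in start_requests sets it for every entry in start_urls. A minimal sketch (the URL here is just an illustration):

    # Inside parse(): a request like this skips the dupefilter check,
    # so it is downloaded even if its fingerprint was already seen.
    yield Request('http://cd.58.com/', callback=self.parse, dont_filter=True)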
# pipelines.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

from .init_utils import init_add_request


class DuandianPipeline(object):
    def process_item(self, item, spider):
        return item

    def open_spider(self, spider):
        # Called once when the spider opens: seed the dupefilter so that
        # http://wh.58.com is treated as already crawled
        init_add_request(spider, 'http://wh.58.com')
# main.py — note: launching the crawl from a script like this makes it easy to debug in PyCharm
from scrapy.cmdline import execute

# Equivalent to running `scrapy crawl duandian` in a terminal
execute('scrapy crawl duandian'.split())
# init_utils.py — note: this helper performs the dedup seeding
from scrapy.http import Request


def init_add_request(spider, url: str):
    """Mark `url` as already seen, so the scheduler will filter it out later."""
    # The running engine exposes the scheduler's dupefilter as `df`
    rf = spider.crawler.engine.slot.scheduler.df
    request = Request(url)
    rf.request_seen(request)  # records this request's fingerprint as seen
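For context, here is a simplified sketch of what Scrapy's default dupefilter (RFPDupeFilter) does when request_seen is called. This approximates scrapy/dupefilters.py from Scrapy 1.x, with the optional file-based persistence omitted:

from scrapy.utils.request import request_fingerprint


class SimplifiedDupeFilter(object):
    """Approximation of scrapy.dupefilters.RFPDupeFilter (persistence omitted)."""

    def __init__(self):
        self.fingerprints = set()  # fingerprints of every request seen so far

    def request_seen(self, request):
        fp = request_fingerprint(request)  # SHA1 over method, canonical URL, body
        if fp in self.fingerprints:
            return True   # duplicate: the scheduler will drop this request
        self.fingerprints.add(fp)
        return False      # first sighting: remember it and let it pass

This is why a single rf.request_seen(request) call in open_spider is enough: the fingerprint lands in the set, and every later request for the same URL is reported as a duplicate.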
2. Testing
# settings.py — note: configures the pipelines; the default state (commented out) is shown below
# ITEM_PIPELINES = {
#     'duandian.pipelines.DuandianPipeline': 300,
# }
With the pipeline disabled, the debug output shows all three addresses (cd, bj, wh) being crawled.
# Now re-enable the pipeline in settings:
ITEM_PIPELINES = {
    'duandian.pipelines.DuandianPipeline': 300,
}
With the pipeline enabled, the debug output shows that the pre-seeded address http://wh.58.com is never visited, while the other two still are.
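Why the seeded URL gets dropped: before queueing anything, the scheduler asks the dupefilter whether the request has been seen. The sketch below is a loose reconstruction of that check from scrapy/core/scheduler.py in Scrapy 1.x; the self.queue attribute is an assumption of this sketch, standing in for Scrapy's actual memory/disk queue handling:

    # Scheduler.enqueue_request, heavily simplified
    def enqueue_request(self, request):
        if not request.dont_filter and self.df.request_seen(request):
            self.df.log(request, self.spider)  # logs "Filtered duplicate request ..."
            return False                       # dropped: never reaches the downloader
        self.queue.push(request)               # assumption: simplified queue handling
        return True

Because open_spider ran before any request was scheduled, http://wh.58.com was already "seen", and every attempt to enqueue it returns False.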