Scrapy spider not crawling

Problem description:

I want to test a Scrapy CrawlSpider, but I can't figure out why it isn't crawling. What it should do is crawl the Wikipedia Mathematics page to a depth of one level and return the title of each crawled page. What am I missing? Any help is much appreciated!

from scrapy.spiders import CrawlSpider, Rule 
from scrapy.linkextractors import LinkExtractor 
from scrapy.selector import Selector 
from Beurs.items import WikiItem 

class WikiSpider(CrawlSpider): 
    name = 'WikiSpider' 
    allowed_domains = ['wikipedia.org'] 
    start_urls = ["http://en.wikipedia.org/wiki/Mathematics"] 

    Rules = (
     Rule(LinkExtractor(restrict_xpaths=('//div[@class="mw-body"]//a/@href'))), 
     Rule(LinkExtractor(allow=("http://en.wikipedia.org/wiki/",)),  callback='parse_item', follow=True),   
     ) 


    def parse_item(self, response):
        sel = Selector(response)
        rows = sel.xpath('//span[@class="innhold"]/table/tr')
        items = []

        for row in rows[1:]:
            item = WikiItem()
            item['agent'] = row.xpath('./td[1]/a/text()|./td[1]/text()').extract()
            item['org'] = row.xpath('./td[2]/text()').extract()
            item['link'] = row.xpath('./td[1]/a/@href').extract()
            item['produkt'] = row.xpath('./td[3]/text()').extract()
        items.append(item)
        return items

Settings:

BOT_NAME = 'Beurs' 

SPIDER_MODULES = ['Beurs.spiders'] 
NEWSPIDER_MODULE = 'Beurs.spiders' 
DOWNLOAD_HANDLERS = { 
    's3': None, 
} 
DEPTH_LIMIT = 1 

And the log:

C:\Users\Jan Willem\Anaconda\Beurs>scrapy crawl BeursSpider 
2015-11-07 15:14:36 [scrapy] INFO: Scrapy 1.0.3 started (bot: Beurs) 
2015-11-07 15:14:36 [scrapy] INFO: Optional features available: ssl, http11, boto 
2015-11-07 15:14:36 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'Beurs.spiders', 'SPIDER_MODULES': ['Beurs.spiders'], 'DEPTH_LIMIT': 1, 'BOT_NAME': 'Beurs'} 
2015-11-07 15:14:36 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState 
2015-11-07 15:14:36 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats 
2015-11-07 15:14:36 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 
2015-11-07 15:14:36 [scrapy] INFO: Enabled item pipelines: 
2015-11-07 15:14:36 [scrapy] INFO: Spider opened 
2015-11-07 15:14:36 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2015-11-07 15:14:36 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023 
2015-11-07 15:14:36 [scrapy] DEBUG: Redirecting (301) to <GET https://en.wikipedia.org/wiki/Mathematics> from <GET http://en.wikipedia.org/wiki/Mathematics> 
2015-11-07 15:14:37 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/Mathematics> (referer: None) 
2015-11-07 15:14:37 [scrapy] INFO: Closing spider (finished) 
2015-11-07 15:14:37 [scrapy] INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 530, 
'downloader/request_count': 2, 
'downloader/request_method_count/GET': 2, 
'downloader/response_bytes': 60393, 
'downloader/response_count': 2, 
'downloader/response_status_count/200': 1, 
'downloader/response_status_count/301': 1, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2015, 11, 7, 14, 14, 37, 274000), 
'log_count/DEBUG': 3, 
'log_count/INFO': 7, 
'response_received_count': 1, 
'scheduler/dequeued': 2, 
'scheduler/dequeued/memory': 2, 
'scheduler/enqueued': 2, 
'scheduler/enqueued/memory': 2, 
'start_time': datetime.datetime(2015, 11, 7, 14, 14, 36, 852000)} 
2015-11-07 15:14:37 [scrapy] INFO: Spider closed (finished) 
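
What the log shows: only two requests were made (the 301 redirect plus the start page itself), nothing further was scheduled, and the spider closed immediately, so the link-extraction stage never produced a single request. One way to test that stage in isolation is scrapy shell. The following is a debugging sketch, not part of the original post; note that restrict_xpaths is documented to select regions (elements) of the page, so an expression ending in /@href selects attribute strings instead and yields no links (or raises, depending on the Scrapy version):

# Run first: scrapy shell https://en.wikipedia.org/wiki/Mathematics
# The shell provides `response` for the fetched page.
from scrapy.linkextractors import LinkExtractor

# The question's extractor: the XPath selects @href attribute values,
# not a page region, so extract_links() finds no <a> elements
# (on some Scrapy versions it raises instead of returning []).
bad = LinkExtractor(restrict_xpaths=('//div[@class="mw-body"]//a/@href',))

# Selecting the element region instead lets the extractor walk the
# <a> tags inside it.
good = LinkExtractor(restrict_xpaths=('//div[@class="mw-body"]',))
print(len(good.extract_links(response)))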

So I changed the parse section of the code following step one of the duplicate (see below), but I still get the same log: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min). Does anyone know what I can do? – Argali

So basically your regex is not quite right and your XPath needs some tweaking. I think the code below does what you want; please give it a try and let me know if you need more help:

def parse_item(self, response):
    sel = Selector(response)
    rows = sel.xpath('//span[@class="innhold"]/table/tr')
    items = []

    for row in rows[1:]:
        item = WikiItem()
        item['agent'] = row.xpath('./td[1]/a/text()|./td[1]/text()').extract()
        item['org'] = row.xpath('./td[2]/text()').extract()
        item['link'] = row.xpath('./td[1]/a/@href').extract()
        item['produkt'] = row.xpath('./td[3]/text()').extract()
        items.append(item)
    return items
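
The "regex" part of that remark refers to the allow= pattern in the question's rules: allow= values are regular expressions matched against each link's absolute URL, and Wikipedia 301-redirects http:// to https:// (visible in the log above), so the pattern "http://en.wikipedia.org/wiki/" never matches the links found on the page. A hedged sketch of that rule with the scheme made optional and the dots escaped:

from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor

# An optional scheme covers the HTTPS URLs Wikipedia redirects to;
# escaped dots match literally instead of acting as regex wildcards.
Rule(LinkExtractor(allow=(r'https?://en\.wikipedia\.org/wiki/',)),
     callback='parse_item', follow=True)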

Thanks for the quick response! I tried your adjustments, but unfortunately it doesn't run. Still the same problem: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min). It looks like the crawling itself simply isn't happening, and I don't understand why. Any suggestions? – Argali
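
For anyone who lands on the same symptom: the crawl stops after the start page because CrawlSpider only compiles link-following rules from a class attribute spelled rules (lowercase); the Rules attribute defined in the question is silently ignored, leaving the spider with no rules at all. Also, when several rules match the same link, only the first one claims it, so a single combined rule is the safer shape here. A minimal sketch of a corrected spider, assuming the question's own region selector and its stated goal of returning each page's title:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class WikiSpider(CrawlSpider):
    name = 'WikiSpider'
    allowed_domains = ['wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/Mathematics']

    # CrawlSpider reads this attribute only when it is named `rules`.
    rules = (
        Rule(
            LinkExtractor(
                allow=(r'https?://en\.wikipedia\.org/wiki/',),
                # A page region (elements), not @href attribute values.
                restrict_xpaths=('//div[@class="mw-body"]',),
            ),
            callback='parse_item',
            follow=True,
        ),
    )

    def parse_item(self, response):
        # The stated goal: return the title of each crawled page.
        return {'title': response.xpath('//title/text()').extract_first()}

With DEPTH_LIMIT = 1 kept in the settings, this follows links one level deep from the start page, which matches the intent described in the question.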