Scrapy: LinkExtractor rules not working
Problem description:
I have tried three different LinkExtractor variants, but the spider still ignores the 'deny' rules in all three and crawls the sub-domains, which I want to exclude from the crawl.
Tried with the 'allow' rule only, to allow only the main domain, i.e. example.edu.uk:
rules = [Rule(LinkExtractor(allow=(r'^example\.edu.uk(\/.*)?$',)))]  # Not working
Tried with the 'deny' rule only, to deny all sub-domains, i.e. sub.example.edu.uk:
rules = [Rule(LinkExtractor(deny=(r'(?<=\.)[a-z0-9-]*\.edu\.uk',)))]  # Not working
Tried with both 'allow' and 'deny' rules:
rules = [Rule(LinkExtractor(allow=(r'^http:\/\/example\.edu\.uk(\/.*)?$'),deny=(r'(?<=\.)[a-z0-9-]*\.edu\.uk',)))]  # Not working
Example:
Follow these links:
- example.edu.uk/fsdfs.htm
- example.edu.uk/nkln.htm
- example.edu.uk/vefr.htm
- example.edu.uk/opji.htm
Discard these sub-domain links:
- sub-domain.example.edu.uk/fsdfs.htm
- sub-domain.example.edu.uk/nkln.htm
- sub-domain.example.edu.uk/vefr.htm
- sub-domain.example.edu.uk/opji.htm
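For reference (this sketch is not part of the original post), one quick way to see how the three patterns above behave is to run them through re.search against absolute forms of these example links, since LinkExtractor applies its allow/deny patterns to each link's absolute URL. The http:// scheme below is an assumption.

import re

# Patterns copied from the three variants above.
patterns = {
    'allow (domain only)': r'^example\.edu.uk(\/.*)?$',
    'deny (sub-domains)': r'(?<=\.)[a-z0-9-]*\.edu\.uk',
    'allow (absolute URL)': r'^http:\/\/example\.edu\.uk(\/.*)?$',
}

# Absolute forms of the example links; the scheme is assumed.
urls = [
    'http://example.edu.uk/fsdfs.htm',
    'http://sub-domain.example.edu.uk/fsdfs.htm',
]

for name, pattern in patterns.items():
    for url in urls:
        print('%-22s %-45s matches: %s' % (name, url, bool(re.search(pattern, url))))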
Below is the full code:
# Imports added for completeness; they were omitted from the original post.
from bs4 import BeautifulSoup
from scrapy import Field, Item, Request, Selector
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class NewsFields(Item):
    pagetype = Field()
    pagetitle = Field()
    pageurl = Field()
    pagedate = Field()
    pagedescription = Field()
    bodytext = Field()

class MySpider(CrawlSpider):
    name = 'profiles'
    start_urls = ['http://www.example.edu.uk/listing']
    allowed_domains = ['example.edu.uk']
    rules = (Rule(LinkExtractor(allow=(r'^https?://example.edu.uk/.*',))),)

    def parse(self, response):
        hxs = Selector(response)
        soup = BeautifulSoup(response.body, 'lxml')
        nf = NewsFields()
        ptype = soup.find_all(attrs={"name": "nkdpagetype"})
        ptitle = soup.find_all(attrs={"name": "nkdpagetitle"})
        pturl = soup.find_all(attrs={"name": "nkdpageurl"})
        ptdate = soup.find_all(attrs={"name": "nkdpagedate"})
        ptdesc = soup.find_all(attrs={"name": "nkdpagedescription"})
        for node in soup.find_all("div", id="main-content__wrapper"):
            ptbody = ''.join(node.find_all(text=True))
            ptbody = ' '.join(ptbody.split())
            nf['pagetype'] = ptype[0]['content'].encode('ascii', 'ignore')
            nf['pagetitle'] = ptitle[0]['content'].encode('ascii', 'ignore')
            nf['pageurl'] = pturl[0]['content'].encode('ascii', 'ignore')
            nf['pagedate'] = ptdate[0]['content'].encode('ascii', 'ignore')
            nf['pagedescription'] = ptdesc[0]['content'].encode('ascii', 'ignore')
            nf['bodytext'] = ptbody.encode('ascii', 'ignore')
            yield nf
        for url in hxs.xpath('//p/a/@href').extract():
            yield Request(response.urljoin(url), callback=self.parse)
Can anyone help? Thanks.
Answer
Your first two rules are wrong:
rules = [Rule(LinkExtractor(allow=(r'^example\.edu.uk(\/.*)?$',)))]  # Not working
rules = [Rule(LinkExtractor(deny=(r'(?<=\.)[a-z0-9-]*\.edu\.uk',)))]  # Not working
allow and deny take absolute URLs, not domain names. The rule below should work for you:
rules = (Rule(LinkExtractor(allow=(r'^https?://example.edu.uk/.*',))),)
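To illustrate the point, here is a minimal sketch (not from the original answer) that builds a response by hand and compares the domain-anchored pattern with the absolute-URL pattern; the sample HTML and URLs are invented.

from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor

# Invented page with one main-domain link and one sub-domain link.
html = b'''
<html><body>
  <a href="http://example.edu.uk/fsdfs.htm">main</a>
  <a href="http://sub-domain.example.edu.uk/fsdfs.htm">sub</a>
</body></html>
'''
response = HtmlResponse(url='http://example.edu.uk/listing', body=html, encoding='utf-8')

# Anchored at the bare domain name: this never matches an absolute URL,
# so nothing is extracted.
domain_only = LinkExtractor(allow=(r'^example\.edu\.uk(\/.*)?$',))
print([link.url for link in domain_only.extract_links(response)])

# Written against the absolute URL: only the main-domain link is kept.
absolute = LinkExtractor(allow=(r'^https?://example\.edu\.uk/.*',))
print([link.url for link in absolute.extract_links(response)])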
Edit-1
First, you should change
allowed_domains = ['example.edu.uk']
to
allowed_domains = ['www.example.edu.uk']
Second, your rule for extracting URLs should be changed to:
rules = (Rule(LinkExtractor(allow=(r'^https?://www.example.edu.uk/.*',))),)
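Putting these two changes together, a minimal sketch of the crawling setup might look like the following. The callback name parse_item and follow=True are illustrative choices, not taken from the original code; Scrapy's documentation also recommends not overriding parse() itself in a CrawlSpider.

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class MySpider(CrawlSpider):
    name = 'profiles'
    start_urls = ['http://www.example.edu.uk/listing']
    # Enforced by the offsite middleware for every request, whether it comes
    # from a rule or from a manual yield.
    allowed_domains = ['www.example.edu.uk']
    # Follow only absolute URLs on the www host.
    rules = (
        Rule(LinkExtractor(allow=(r'^https?://www\.example\.edu\.uk/.*',)),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # The item extraction from the question's parse() would go here.
        pass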
Third, in the following part of your code:
for url in hxs.xpath('//p/a/@href').extract():
yield Request(response.urljoin(url), callback=self.parse)
the rules will not be applied. Your manual yields are not governed by the rules: the rules schedule new requests automatically, but they do not stop you from yielding other links that the rule configuration would not allow. The allowed_domains setting, however, applies both to the rules and to your yields.
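If the manually yielded requests should also obey the same restriction explicitly (on top of allowed_domains), one option is to check each resolved URL's host before yielding. This is only a sketch, not part of the original answer; the method name and the host set are assumptions.

from urllib.parse import urlparse  # Python 3; on Python 2 import urlparse instead

from scrapy import Request

# Hosts we are willing to follow from manual yields; adjust as needed.
ALLOWED_HOSTS = {'www.example.edu.uk', 'example.edu.uk'}

def follow_page_links(self, response):
    """Spider method: yield requests only for links on an allowed host."""
    for url in response.xpath('//p/a/@href').extract():
        absolute = response.urljoin(url)
        if urlparse(absolute).netloc in ALLOWED_HOSTS:
            yield Request(absolute, callback=self.parse)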
Please post some sample links that you want processed and some that you don't want processed. –
Also, when you say it is not working, please describe what actually happens. Post the logs if possible. –
Hi @TarunLalwani, what is it in my question that you don't understand? All links on the main domain must be crawled, and all links under sub-domains must be discarded. Anyway, I have updated the question; see above. – Slyper