Learning Notes on the Scrapy Crawler Framework

The goal of this exercise is to scrape the names of the American TV shows listed on the "recently updated" page of Meijutt (美剧天堂): https://www.meijutt.com/new100.html

1. Environment

  • CentOS 7 x64
  • Python 2 or Python 3 (this exercise uses Python 3)
  • virtualenvwrapper for managing the virtual environment

2. Installing Scrapy

mkvirtualenv learnScrapypython3 --python=python3  # create a Python 3 virtual environment
cd ~/.virtualenvs/learnScrapypython3/
pip install scrapy
pip list 
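
To double-check the install independently of pip, Scrapy's own version command works too (a quick sanity check added here, not part of the original steps):

scrapy version   # prints "Scrapy 1.5.1", matching the pip list output below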

Installing Scrapy automatically pulls in the following modules:

(learnScrapypython3) # pip list
Package            Version
------------------ -------
asn1crypto         0.24.0
attrs              18.2.0
Automat            0.7.0
cffi               1.11.5
constantly         15.1.0
cryptography       2.4.2
cssselect          1.0.3
hyperlink          18.0.0
idna               2.8
incremental        17.5.0
lxml               4.2.5
parsel             1.5.1
pip                18.1
pyasn1             0.4.4
pyasn1-modules     0.2.2
pycparser          2.19
PyDispatcher       2.0.5
PyHamcrest         1.9.0
pyOpenSSL          18.0.0
queuelib           1.5.0
Scrapy             1.5.1
service-identity   18.1.0
setuptools         40.6.3
six                1.12.0
Twisted            18.9.0
w3lib              1.19.0
wheel              0.32.3
zope.interface     4.6.0

3. Creating the Project

scrapy startproject movie           # create a new Scrapy project named "movie"
cd movie
scrapy genspider meiju meijutt.com  # generate a spider "meiju" scoped to meijutt.com

The full project directory tree now looks like this:

/root/.virtualenvs/learnScrapypython3
├── bin
├── include
├── lib
└── movie
    ├── movie
    │   ├── __init__.py
    │   ├── items.py
    │   ├── middlewares.py
    │   ├── pipelines.py
    │   ├── __pycache__
    │   ├── settings.py
    │   └── spiders
    │       ├── __init__.py
    │       ├── meiju.py
    │       └── __pycache__
    └── scrapy.cfg

Having worked with Django before, I find this directory layout very similar to a Django project's.
Description of the files and directories:

Below, ./ stands for the project directory /root/.virtualenvs/learnScrapypython3/movie/, so the paths match the tree above.

  • bin, include and lib under the virtualenv root belong to the virtual environment itself (created by mkvirtualenv, not by scrapy startproject) and can be ignored here
  • ./scrapy.cfg holds the project's configuration, mainly a base config for the Scrapy command-line tool; for now it only records the path to the crawler's settings module and the project name (see the sketch after this list)
  • ./movie is the Python package that holds all the crawler code
  • ./movie/items.py defines the storage template for scraped items, i.e. the structured data, much like Django's models.py
  • ./movie/pipelines.py defines how scraped items are processed; in this exercise, the scraped names are written to a file
  • ./movie/settings.py is the configuration file for things such as crawl depth, concurrency, and download delays
  • ./movie/spiders/ is the spider directory, where spider files containing the crawl rules are created
  • ./movie/middlewares.py holds the middleware hooks that process the requests and responses passed between components (spider, scheduler, downloader, engine)
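
For reference, the generated ./scrapy.cfg on this setup looks roughly like the sketch below (the exact comments vary by Scrapy version; the [deploy] section only matters when deploying, e.g. to Scrapyd):

# Automatically created by: scrapy startproject
[settings]
default = movie.settings

[deploy]
#url = http://localhost:6800/
project = movie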

4. Editing the Crawler Files

  • ./movie/items.py:
# -*- coding: utf-8 -*-
  
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class MovieItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()   # the show's title, the only field scraped here
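
A scrapy.Item behaves much like a dict whose keys are restricted to the declared fields; a quick illustration (the show name below is made up):

from movie.items import MovieItem

item = MovieItem()
item['name'] = 'Westworld'   # OK: 'name' is a declared field
print(item['name'])          # -> Westworld
print(dict(item))            # -> {'name': 'Westworld'}
# item['year'] = 2018        # would raise KeyError: 'year' is not declared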
  • The spider file ./movie/spiders/meiju.py:
# -*- coding: utf-8 -*-
import scrapy
from movie.items import MovieItem


class MeijuSpider(scrapy.Spider):
    name = 'meiju'
    allowed_domains = ['meijutt.com']
    start_urls = ['http://www.meijutt.com/new100.html']

    def parse(self, response):
        # Absolute XPath to the <li> entries of the "new100" list; brittle,
        # since it breaks as soon as the page layout changes
        movies = response.xpath('/html/body/div[2]/div[4]/div[1]/ul/li')
        for each_movie in movies:
            item = MovieItem()
            # The show's name sits in the title attribute of the <a> inside <h5>
            item['name'] = each_movie.xpath('./h5/a/@title').extract()[0]
            yield item
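
Before hard-coding an absolute XPath like the one above, it is worth verifying it against the real page. Here is a minimal offline check with parsel (the selector library Scrapy installs, see the pip list above), assuming the page has been saved locally as new100.html (a hypothetical file name):

from parsel import Selector

# Load a locally saved copy of the target page (hypothetical file name)
with open('new100.html', encoding='utf-8') as fp:
    sel = Selector(text=fp.read())

# Same expression the spider uses, extended down to the title attribute
for title in sel.xpath('/html/body/div[2]/div[4]/div[1]/ul/li/h5/a/@title').extract():
    print(title)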
  • Add the following to the settings file ./movie/settings.py:
ITEM_PIPELINES = {'movie.pipelines.MoviePipeline': 100}  # 0-1000; lower values run earlier
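
The number only matters once several pipelines are registered, because it fixes the order items flow through them; a hypothetical example (this project defines no DedupePipeline):

ITEM_PIPELINES = {
    'movie.pipelines.MoviePipeline': 100,    # runs first (lowest number)
    'movie.pipelines.DedupePipeline': 200,   # hypothetical second stage
}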
  • The item-processing script ./movie/pipelines.py:
# -*- coding: utf-8 -*-
  
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class MoviePipeline(object):
    def process_item(self, item, spider):
        # Append each scraped name to a text file; opening in 'a' (append)
        # mode means repeated crawls keep adding to the same file
        with open("my_meiju.txt", 'a', encoding='utf-8') as fp:
            fp.write(str(item['name']) + '\n')
        return item
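
A common refinement (a sketch using Scrapy's documented open_spider/close_spider hooks, not what this exercise uses) is to open the file once per crawl instead of once per item:

class MoviePipeline(object):
    # Sketch: keep one file handle open for the whole crawl
    def open_spider(self, spider):
        self.fp = open('my_meiju.txt', 'a', encoding='utf-8')

    def process_item(self, item, spider):
        self.fp.write(str(item['name']) + '\n')
        return item

    def close_spider(self, spider):
        self.fp.close()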

5. Running the Spider

Run the following command from the /root/.virtualenvs/learnScrapypython3/movie directory to start the crawler (in fact, it succeeds from any directory inside the ./movie project):

scrapy crawl meiju --nolog   # --nolog suppresses Scrapy's log output

To make errors easier to spot, you can also run it with logging enabled:

scrapy crawl meiju
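
As an aside, Scrapy's built-in feed exports can dump the items without any custom pipeline at all; for example (the output file name is arbitrary):

scrapy crawl meiju -o meiju.json   # writes all yielded items to a JSON file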

If there are no errors, a file named my_meiju.txt is created in the current directory, containing the names of the shows from https://www.meijutt.com/new100.html.

6. Reference Tutorials

  1. Scrapy简单入门及实例讲解 (a simple introduction to Scrapy with worked examples)
  2. Scrapy入门教程 (the Scrapy getting-started tutorial)