Scraping the Latest Memes from the Doutula Website

My girlfriend challenged me to a meme battle, and in the end I out-memed her until she gave up.

Website: https://www.doutula.com/

It is not a difficult scrape. The code is as follows:

# -*- coding: utf-8 -*-

import os
import random

import requests
from bs4 import BeautifulSoup


# Directory where the downloaded images are saved; change this to your own path.
SAVE_DIR = 'C:/python/project/doutufile/'

# Pages to crawl; range(1, 2) fetches page 1 only, raise the upper bound for more.
BASE_URL = 'https://www.doutula.com/photo/list/?page='
URL_LIST = [BASE_URL + str(x) for x in range(1, 2)]

# A small pool of User-Agent strings; one is picked at random for each request.
MY_HEADERS = [
    "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14",
    "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Win64; x64; Trident/6.0)",
]


def get_url(url):
    header = {"User-Agent": random.choice(MY_HEADERS)}
    resp = requests.get(url, headers=header)
    soup = BeautifulSoup(resp.content, "lxml")
    # The memes are lazy-loaded <img> tags; the real image URL sits in data-original.
    img_list = soup.find_all('img', 'img-responsive lazy image_dta')
    for img in img_list:
        imgurl = img['data-original']
        # Name each file after the last segment of its URL so images from
        # different pages do not overwrite each other and keep their extension.
        filename = imgurl.split('/')[-1]
        pic = requests.get(imgurl, headers=header).content
        with open(os.path.join(SAVE_DIR, filename), 'wb') as f:
            f.write(pic)


def main():
    os.makedirs(SAVE_DIR, exist_ok=True)  # create the save folder if it is missing
    for url in URL_LIST:
        get_url(url)


if __name__ == '__main__':
    main()
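A side note on the page loop: range(1, 2) only covers the first page. If you bump the upper bound to grab more pages, it is worth pausing between requests so the site is not hit too hard. A minimal sketch of that idea (the 2-second delay and the PAGE_COUNT name are my own assumptions, not part of the original script):

import time

PAGE_COUNT = 5  # hypothetical: crawl the first five result pages

for x in range(1, PAGE_COUNT + 1):
    get_url(BASE_URL + str(x))
    time.sleep(2)  # brief pause between pages to avoid hammering the site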

Just change the save path (SAVE_DIR) to a folder of your own.
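If you would rather not hard-code the path at all, here is a minimal sketch that reads it from a command-line flag instead (the --out flag name and default are my own choices, not something from the original post):

import argparse
import os

parser = argparse.ArgumentParser(description="Download memes from doutula.com")
parser.add_argument("--out", default="C:/python/project/doutufile/",
                    help="directory where the downloaded images are saved")
args = parser.parse_args()

SAVE_DIR = args.out
os.makedirs(SAVE_DIR, exist_ok=True)  # create the folder if it does not exist

You would then run the script as, for example, python your_script.py --out D:/memes (the script name here is just a placeholder).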

The result looks like this:

[Screenshot: the memes downloaded from Doutula]

That's all for now, I have to go make it up to my girlfriend~~~~