
Python learning notes (11): a crawler that downloads comic images

Notes:

1. A crawler for a certain doujinshi site. So far it only scrapes a single list page. It's already 2 a.m. and I'm hungry and sleepy, so I'm off to bed; I'll write up a proper summary tomorrow!

import urllib.request
import re
import os

# Fetch the comic site's list page HTML
url = "http://www.yaoqmh.net/shaonvmanhua/list_4_1.html"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) Gecko/20100101 Firefox/23.0"}
req = urllib.request.Request(url=url, headers=headers)
response = urllib.request.urlopen(req)  # open the Request, not the bare URL, so the headers are actually sent
html = response.read().decode("utf-8")
# Trim the HTML: keep only the books in the middle column, drop the sidebar and top-bar ones
startNum = html.find("mainleft")
endNum = html.find("mainright")
html = html[startNum:endNum]


# Extract each book's number and name from the HTML. Sample markup:
# <a href="http://www.mamicode.com/shaonvmanhua/8389.html" class="pic show" title="里番H少女漫画之發情關係" target="_blank"><span class="bt">里番H少女漫画之發情關係</span> <span class="bg"></span><img class="scrollLoading" src="http://pic.taov5.com/1/615/183-1.jpg" xsrc="http://pic.taov5.com/1/615/183-1.jpg" alt="里番H少女漫画之發情關係" style="background:url(/static/images/loading.gif) no-repeat center;" width="150" height="185"></a>
#
# <img class="scrollLoading" src="http://pic.taov5.com/1/615/183-1.jpg" xsrc="http://pic.taov5.com/1/615/183-1.jpg" alt="里番H少女漫画之發情關係" style="background:url(/static/images/loading.gif) no-repeat center;" width="150" height="185">
regBookNum = r'href="http://www.mamicode.com/shaonvmanhua/(\d+)\.html"'
regName = r'title="(.+?)"'
bookNums = re.findall(regBookNum, html)
bookNames = re.findall(regName, html)
# print(bookNums)
# print(bookNames)

# Open each book's page to get the total page count and the first image's URL
# <img alt="里番H少女漫画之發情關係" src="http://pic.taov5.com/1/615/143.jpg">
for i in range(len(bookNums)):
    urlBook = "http://www.yaoqmh.net/shaonvmanhua/" + bookNums[i] + ".html"
    reqBook = urllib.request.Request(url=urlBook, headers=headers)
    responseBook = urllib.request.urlopen(reqBook)
    htmlBook = responseBook.read().decode("utf-8")
    regPageNums = r"共(\d+)页:"
    regImgStart1 = r"http://pic\.taov5\.com/1/(\d+)/\d+?\.jpg"
    regImgStart2 = r"http://pic\.taov5\.com/1/\d+?/(\d+?)\.jpg"
    pageNums = re.findall(regPageNums, htmlBook)   # total page count; findall returns a list (the tag appears twice on the page)
    imgStart1 = re.findall(regImgStart1, htmlBook) # first number in the image path (the directory)
    imgStart2 = re.findall(regImgStart2, htmlBook) # second number (the first image's index)
    # Create a folder for each book. Remember to go back up one level after
    # finishing a book, otherwise the folders keep nesting deeper and deeper!
    os.mkdir(bookNames[i])   # create the folder
    os.chdir(bookNames[i])   # switch into it
    # (don't forget to cd back to the parent later!)

    # First and last image indices
    rangeMin = int(imgStart2[0])
    rangeMax = int(imgStart2[0]) + int(pageNums[0])
    pageNums = int(pageNums[0])
    # print(rangeMin)
    # print(rangeMax)
    # print(type(rangeMin))
    # Open each page and save the image into this book's folder
    print("Downloading: " + bookNames[i])  # show which book is being downloaded
    for j in range(pageNums):
        urlImg = "http://pic.taov5.com/1/" + imgStart1[0] + "/" + str(rangeMin + j) + ".jpg"
        reqImg = urllib.request.Request(url=urlImg, headers=headers)
        responseImg = urllib.request.urlopen(reqImg)
        img = open(str(j) + ".jpg", "wb")
        img.write(responseImg.read())
        img.close()
        print("Downloaded page %d of %d" % (j + 1, pageNums))  # progress message; best printed after the write
        # os.system("pause")
    os.chdir(os.path.dirname(os.getcwd()))  # go back up to the parent directory
# TODO: an exit option, downloading a chosen page, stopping the script with a keypress
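As the notes say, only the first list page is scraped so far. Assuming the list pages follow the numbering pattern visible in the one URL above (`list_4_1.html`, `list_4_2.html`, ... — an assumption, not verified against the site), the outer loop could be extended by generating each page's URL with a small helper like this sketch:

```python
def list_page_url(page):
    # Build the URL of the n-th list page.
    # The "list_4_<n>.html" pattern is assumed from page 1's URL.
    return "http://www.yaoqmh.net/shaonvmanhua/list_4_%d.html" % page

# The scraping body above would then run once per list page, e.g.:
# for page in range(1, 6):
#     url = list_page_url(page)
#     ... fetch and parse as before ...
```

The hypothetical `list_page_url` helper just keeps the URL construction in one place; the page range (here 1 to 5) would need to be read from the site's pagination links rather than hard-coded.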

 
