
Python Crawler Basics (1): the urllib module

Purpose: reading data from the web (i.e., from a server).

Basic method: urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)
  • url: the URL to open
  • data: data to submit via POST (must be bytes; if given, the request becomes a POST)
  • timeout: timeout in seconds for the connection
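The data argument must be URL-encoded bytes, not a plain dict or str. A minimal sketch of preparing POST data (the field names here are made up for illustration):

```python
import urllib.parse

# urlencode turns a dict into "key=value&key2=value2" form,
# and encode() converts that str into the bytes urlopen expects.
form = {'q': 'hello', 'lang': 'en'}
data = urllib.parse.urlencode(form).encode('utf-8')
print(data)  # b'q=hello&lang=en'
```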

Example 1: fetch a page

import urllib.request
response = urllib.request.urlopen("http://www.fishc.com")  # returns an HTTP response object
html = response.read()       # read the response body, as bytes
# print(type(html), html)    # prints <class 'bytes'> followed by the raw bytes
html = html.decode('utf-8')  # decode the bytes into a str
print(html)
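Since read() hands back bytes, the decode step above is what makes the content usable as text. The round trip can be shown offline, without hitting the network:

```python
# bytes -> str, mirroring the html.decode('utf-8') step above;
# the sample string stands in for what response.read() would return.
raw = '你好, fishc'.encode('utf-8')
print(type(raw))         # <class 'bytes'>
text = raw.decode('utf-8')
print(type(text), text)  # <class 'str'> 你好, fishc
```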

Example 2: grab a kitten

import urllib.request
response = urllib.request.urlopen("http://placekitten.com/g/400/400")
cat_img = response.read()
with open('cat_400_400.jpg', 'wb') as f:
    f.write(cat_img)


Example 3: a translator
In the browser, right-click the page and choose Inspect (or Inspect Element), open the Network tab, find the POST request by Name, and copy its Request URL.
Under Headers, find Form Data and copy the form fields.
 
import urllib.request
import urllib.parse
import json
import time

while True:
    content = input("Enter the text to translate (type q! to quit): ")
    if content == 'q!':
        break
    # the Request URL copied from the Network tab
    url = "http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=http://www.youdao.com/"
    # the Form Data fields, with useless entries removed
    data = {}
    data['i'] = content
    data['smartresult'] = 'dict'
    data['client'] = 'fanyideskweb'
    data['doctype'] = 'json'
    data['version'] = '2.1'
    data['keyfrom'] = 'fanyi.web'
    data['action'] = 'FY_BY_CLICKBUTTON'
    data['typoResult'] = 'true'
    data = urllib.parse.urlencode(data).encode('utf-8')
    # open the URL and submit the form
    response = urllib.request.urlopen(url, data)
    html = response.read().decode('utf-8')
    target = json.loads(html)
    print("Translation: %s" % target['translateResult'][0][0]['tgt'])
    time.sleep(2)
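The server replies with JSON, and the indexing target['translateResult'][0][0]['tgt'] can be checked offline against a hand-made reply with the same nesting (the sample below is fabricated for illustration, not a real API response):

```python
import json

# A made-up reply shaped like the one the code above parses:
# translateResult is a list of paragraphs, each a list of segments.
sample = '{"type":"ZH_CN2EN","errorCode":0,"translateResult":[[{"src":"\\u4f60\\u597d","tgt":"hello"}]]}'
target = json.loads(sample)
print(target['translateResult'][0][0]['tgt'])  # hello
```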



Hiding your identity, and proxies
Hiding: 1. pass a headers dict via the headers parameter of Request
        2. call Request.add_header() on an existing Request
Proxies: 1. proxy_support = urllib.request.ProxyHandler({})  # the argument is a dict {'type': 'proxy IP:port'}
         2. opener = urllib.request.build_opener(proxy_support)  # build a custom opener
         3. urllib.request.install_opener(opener)  # install the opener globally
            opener.open(url)  # or call the opener directly
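Both hiding methods can be sketched without touching the network, since constructing a Request does not open a connection (the URL and header values below are placeholders):

```python
import urllib.request

# Method 1: pass headers when constructing the Request.
url = 'http://example.com/'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64)'}
req = urllib.request.Request(url, headers=headers)

# Method 2: add (or overwrite) a header on an existing Request.
req.add_header('Referer', 'http://example.com/')

# urllib normalizes header names with str.capitalize(), so query
# them as 'User-agent', not 'User-Agent'.
print(req.get_header('User-agent'))  # Mozilla/5.0 (Windows NT 10.0; WOW64)
```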
Example 5: proxies

import urllib.request
import random

url = 'http://www.whatismyip.com.tw/'
iplist = ['61.191.41.130:80', '115.46.97.122:8123']

# the argument is a dict {'type': 'proxy IP:port'}
proxy_support = urllib.request.ProxyHandler({'http': random.choice(iplist)})
# build a custom opener
opener = urllib.request.build_opener(proxy_support)
# change the User-Agent via addheaders
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36')]
# install the opener
urllib.request.install_opener(opener)
response = urllib.request.urlopen(url)
html = response.read().decode('utf-8')
print(html)


Example 6: a simple Tieba image scraper

import urllib.request
import re

def open_url(url):
    # open the URL with a modified header and read its content
    req = urllib.request.Request(url)
    # change the User-Agent via add_header
    req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36')
    page = urllib.request.urlopen(req)
    html = page.read().decode('utf-8')
    return html

def get_img(html):
    p = r'<img class="BDE_Image" src="([^"]+\.jpg)"'
    imglist = re.findall(p, html)  # find the image links
    for each in imglist:
        filename = each.split("/")[-1]
        urllib.request.urlretrieve(each, filename, None)  # save the image

if __name__ == '__main__':
    url = "https://tieba.baidu.com/p/5090206152"
    get_img(open_url(url))
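The pattern used in get_img only needs re, so it can be sanity-checked offline against a made-up HTML snippet (the tags and URLs below are fabricated for illustration; real Tieba markup may differ):

```python
import re

# A fabricated fragment in the shape Tieba pages use for post images.
html = ('<img class="BDE_Image" src="https://imgsa.example.com/forum/a1b2.jpg" width="560">'
        '<img class="icon" src="https://example.com/logo.png">'
        '<img class="BDE_Image" src="https://imgsa.example.com/forum/c3d4.jpg">')

p = r'<img class="BDE_Image" src="([^"]+\.jpg)"'
imglist = re.findall(p, html)
print(imglist)
# ['https://imgsa.example.com/forum/a1b2.jpg', 'https://imgsa.example.com/forum/c3d4.jpg']
```

Only tags with class="BDE_Image" and a .jpg source match; the logo.png is skipped, and each.split("/")[-1] would then yield filenames like a1b2.jpg.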



