Scrapy Crawler -- 03
For data extraction, Scrapy provides two kinds of selectors: XPath and CSS. XPath is the more widely used of the two, and I'm not that familiar with CSS anyway, so this post focuses on XPath.
XPath is a language designed specifically for locating information in XML documents. A detailed tutorial is available here: http://www.w3school.com.cn/xpath/, though beginners don't need anything that involved; the examples in the official Scrapy tutorial are enough to get started.
Here are some examples:
/html/head/title: selects the <title> element inside the <head> element of an HTML document
/html/head/title/text(): selects the text inside the aforementioned <title> element
//td: selects all the <td> elements
//div[@class="mine"]: selects all div elements which contain an attribute class="mine"
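To experiment with these expressions without running a full crawl, you can build a Selector straight from a string of HTML. A minimal sketch (the HTML snippet here is made up purely for illustration):

from scrapy.selector import Selector

# Made-up HTML that exercises each of the four expressions above.
html = """
<html>
  <head><title>Sample Page</title></head>
  <body>
    <table><tr><td>cell 1</td><td>cell 2</td></tr></table>
    <div class="mine">my div</div>
  </body>
</html>
"""

sel = Selector(text=html)
print(sel.xpath('/html/head/title').extract())         # [u'<title>Sample Page</title>']
print(sel.xpath('/html/head/title/text()').extract())  # [u'Sample Page']
print(sel.xpath('//td').extract())                     # [u'<td>cell 1</td>', u'<td>cell 2</td>']
print(sel.xpath('//div[@class="mine"]').extract())     # [u'<div class="mine">my div</div>']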
To make debugging easier, Scrapy provides an interactive shell for fetching and analyzing a site (note that the quotation marks around the URL are required; without them, special characters in the URL, such as &, would be interpreted by your system shell):
scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
If all goes well, you should see something like the following:
[ ... Scrapy log here ... ]
2014-01-23 17:11:42-0400 [default] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
[s] Available Scrapy objects:
[s]   crawler    <scrapy.crawler.Crawler object at 0x3636b50>
[s]   item       {}
[s]   request    <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   response   <200 http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   settings   <scrapy.settings.Settings object at 0x3fadc50>
[s]   spider     <Spider 'default' at 0x3cebf50>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser

In [1]:
This gives you the six objects listed above (crawler and so on). For now we only care about response, which, as the name suggests, holds the page that came back. Calling view(response) opens the fetched page in your default browser (Firefox in my case). response.body contains the raw HTML source, which you can inspect right in the shell; just be warned that it usually has no line breaks or indentation, so it is painful to read.
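For instance, from inside the shell (commands only; the actual output depends on the page you fetched):

In [1]: view(response)        # opens the fetched page in the default browser
In [2]: response.body[:200]   # peek at the first 200 characters of the raw, unformatted HTML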
Through response you get access to four basic selector methods for extracting data:
xpath(): returns a list of selectors, each of them representing the nodes selected by the xpath expression given as argument.
css(): returns a list of selectors, each of them representing the nodes selected by the CSS expression given as argument.
extract(): returns a unicode string with the selected data.
re(): returns a list of unicode strings extracted by applying the regular expression given as argument.
Here xpath() does the selecting, extract() returns the unicode content between the HTML tags, and re() is the interface to regular expressions (which I hate).
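As a quick sketch of css(), extract(), and re() side by side (the HTML string is invented for illustration; in the shell you would call these on response directly):

from scrapy.selector import Selector

# Invented HTML for illustration; in the shell, use response instead.
html = '<html><body><a href="http://example.com/page-42">Item 42</a></body></html>'
sel = Selector(text=html)

print(sel.css('a::text').extract())        # [u'Item 42']  -- CSS equivalent of xpath('//a/text()')
print(sel.xpath('//a/@href').extract())    # [u'http://example.com/page-42']
print(sel.xpath('//a/text()').re(r'\d+'))  # [u'42']  -- re() runs a regex over the selected text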
An example:
In [1]: response.xpath('//title')
Out[1]: [<Selector xpath='//title' data=u'<title>Open Directory - Computers: Progr'>]

response.xpath() returns a list of selectors, and you can call xpath(), extract(), and the other methods again on each of its elements.
Like this:
>>> response.xpath('//title')[0].xpath('text()').extract()
[u'DMOZ - Computers: Programming: Languages: Python: Books']

All that remains is to store the extracted values in items and then have a pipeline write them out in whatever format you want:
import scrapy
from tutorial.items import DmozItem

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item

A word about the yield keyword: a function that contains yield becomes a generator in Python. Instead of building a complete list and returning it, it turns the loop into an iterator that produces one value at a time. Details here: http://www.ibm.com/developerworks/cn/opensource/os-cn-python-yield/
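A tiny standalone illustration of how a generator behaves (nothing Scrapy-specific here):

def squares(n):
    # A generator: yields one value per iteration instead of building a list.
    for i in range(n):
        yield i * i        # execution pauses here until the next value is requested

gen = squares(3)
print(next(gen))    # 0
print(next(gen))    # 1
print(list(gen))    # [4] -- whatever values remain

# Scrapy consumes parse() the same way: each yielded item (or request) is
# handled as soon as it is produced, so the full result set never has to
# sit in memory at once.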