Scraping proxy IPs from the Xici (xicidaili.com) site with Scrapy
# Scraping proxy IPs from the Xici (xicidaili.com) site with Scrapy
# -*- coding: utf-8 -*-
import scrapy

from xici.items import XiciItem


class XicispiderSpider(scrapy.Spider):
    name = "xicispider"
    # allowed_domains takes bare domain names only, no URL path
    allowed_domains = ["www.xicidaili.com"]
    start_urls = ['http://www.xicidaili.com/nn/']

    def parse(self, response):
        # Each row of the #ip_list table holds one proxy entry
        for each in response.css('#ip_list tr'):
            ip = each.css('td:nth-child(2)::text').extract_first()
            port = each.css('td:nth-child(3)::text').extract_first()
            if ip:  # skip the header row, which has no <td> cells
                item = XiciItem()  # build a fresh item for every row
                item['ip_port'] = ip + ':' + port
                yield item
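The spider imports XiciItem from xici/items.py, which the original post does not show. A minimal sketch of that file, assuming ip_port is the only field the spider needs:

# xici/items.py -- minimal sketch; only the ip_port field is used by the spider above
import scrapy


class XiciItem(scrapy.Item):
    ip_port = scrapy.Field()  # holds the "ip:port" string built in parse()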
import pymongo


class XiciPipeline(object):
    collection_name = 'scrapy_items'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    # "from_crawler" is easy to misspell
    @classmethod
    def from_crawler(cls, crawler):
        # Pull the MongoDB connection settings from settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # insert_one() replaces the deprecated insert()
        self.db[self.collection_name].insert_one(dict(item))
        return item
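The pipeline reads MONGO_URI and MONGO_DB from the project settings and has to be enabled in ITEM_PIPELINES before Scrapy will call it. A sketch of the relevant settings.py entries, assuming a local MongoDB instance and that the pipeline lives in xici/pipelines.py:

# settings.py -- assumed values; adjust the URI and database name to your environment
MONGO_URI = 'mongodb://localhost:27017'
MONGO_DB = 'xici'

ITEM_PIPELINES = {
    'xici.pipelines.XiciPipeline': 300,
}

With these in place, running scrapy crawl xicispider from the project directory writes each scraped proxy into the scrapy_items collection.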