
[Python Crawler] Part 19: Scraping 超级TV Data with Selenium + PhantomJS and pyquery

 

  I. Introduction

    This example uses Selenium + PhantomJS to crawl news articles from 超级TV (http://www.chaojitv.com/news/index.html), keeping only the articles whose titles contain a set of given keywords.

    Given keywords: 数字 (digital); 融合 (convergence); 电视 (television)

    The following fields are extracted (a minimal fetch sketch follows this list):

      1. Article title

      2. Article URL

      3. Article date

      4. Article source
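
    Before looking at the selectors, here is a minimal sketch of the fetch step the example relies on: launch PhantomJS through Selenium, load the list page, and hand the rendered DOM to pyquery. It assumes a phantomjs binary on the PATH and an older Selenium release that still ships the PhantomJS driver (PhantomJS support was removed in Selenium 4).

# coding=utf-8
# Minimal fetch sketch (assumptions: phantomjs on PATH, Selenium < 4).
from selenium import webdriver
from pyquery import PyQuery as pq

driver = webdriver.PhantomJS()
try:
    driver.get('http://www.chaojitv.com/news/index.html')
    # Read the DOM after JavaScript has run, not the raw HTTP response.
    html = driver.execute_script("return document.documentElement.outerHTML")
    doc = pq(html)
    print doc('title').text()  # sanity check: the page title
finally:
    driver.quit()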

 

  II. Website Information

    [Screenshots of the target site's list and article pages omitted]

  III. Data Extraction

    Based on the page structure shown above, the data is extracted as follows (a combined, self-contained sketch appears after these steps):

    1. Extract the article list

      Extraction code: Elements = doc('ul[class="la_list"]').find('li')

    2. Extract the title

      Extraction code: title = element('h4').find('a').text().encode('utf8').strip()

    3. Extract the link

      Extraction code: url = element('h4').find('a').attr('href')

    4. Extract the date

      Extraction code: date = element('div[class="time"]').find('span').text().encode('utf8').strip()

    5. Extract the source (from each article's detail page)

      Extraction code: strSources = dochtml('span[class="wzof"]').text().encode('utf8').strip().split(':')
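
    Taken together, steps 1 through 4 amount to the loop below. To keep the sketch self-contained it parses a hypothetical HTML fragment shaped like the la_list markup the selectors imply; against the live site the same loop runs on the rendered page source. Step 5 applies the same pattern to each article's detail page.

# coding=utf-8
# Self-contained sketch of steps 1-4. The fragment below is a hypothetical
# stand-in for the la_list markup implied by the selectors above.
from pyquery import PyQuery as pq

sample_html = u'''
<ul class="la_list">
  <li>
    <h4><a href="http://www.chaojitv.com/news/example.html">数字电视资讯示例</a></h4>
    <div class="time"><span>2017-06-20 10:22</span></div>
  </li>
</ul>
'''

doc = pq(sample_html)
for element in doc('ul[class="la_list"]').find('li').items():      # step 1
    title = element('h4').find('a').text().encode('utf8').strip()  # step 2
    url = element('h4').find('a').attr('href')                     # step 3
    date = element('div[class="time"]').find('span').text().encode('utf8').strip()  # step 4
    print title, url, date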

   

  IV. Complete Code

# coding=utf-8
# Python 2 script (print statements, .encode('utf8') on extracted text).
import os
import re
import time
from datetime import datetime

from selenium import webdriver
import selenium.webdriver.support.ui as ui
from pyquery import PyQuery as pq

# IniFile, LogFile and mongoDB are the author's own helper modules
# (config reading, logging and MongoDB persistence).
import IniFile
import LogFile
import mongoDB
# from threading import Thread


class chaojitvSpider(object):
    def __init__(self):
        logfile = os.path.join(os.path.dirname(os.getcwd()), time.strftime('%Y-%m-%d') + '.txt')
        self.log = LogFile.LogFile(logfile)

        configfile = os.path.join(os.path.dirname(os.getcwd()), 'setting.conf')
        cf = IniFile.ConfigFile(configfile)
        self.webSearchUrl_list = cf.GetValue("chaojitv", "webSearchUrl").split(';')
        self.keyword_list = cf.GetValue("section", "information_keywords").split(';')
        self.db = mongoDB.mongoDbBase()

        self.start_urls = []
        for url in self.webSearchUrl_list:
            self.start_urls.append(url)

        self.driver = webdriver.PhantomJS()
        self.wait = ui.WebDriverWait(self.driver, 2)
        self.driver.maximize_window()

    def Comapre_to_days(self, leftdate, rightdate):
        '''
        Compare two date strings; return by how many days the left date
        is ahead of the right date.
        :param leftdate: format: 2017-04-15
        :param rightdate: format: 2017-04-15
        :return: number of days
        '''
        l_time = time.mktime(time.strptime(leftdate, '%Y-%m-%d'))
        r_time = time.mktime(time.strptime(rightdate, '%Y-%m-%d'))
        result = int(l_time - r_time) / 86400
        return result

    def date_isValid(self, strDateText):
        '''
        Check whether a date string is acceptable: only articles whose
        date equals the current date are kept.
        :param strDateText: e.g. '2017-06-20 10:22'
        :return: (True, date) if acceptable; (False, '') otherwise
        '''
        currentDate = time.strftime('%Y-%m-%d')
        datePattern = re.compile(r'\d{4}-\d{2}-\d{2}')
        strDate = re.findall(datePattern, strDateText)
        if len(strDate) == 1:
            if self.Comapre_to_days(currentDate, strDate[0]) == 0:
                return True, strDate[0]
        return False, ''

    def log_print(self, msg):
        '''
        Logging helper.
        :param msg: log message
        '''
        print '%s: %s' % (time.strftime('%Y-%m-%d %H-%M-%S'), msg)

    def scrapy_date(self):
        strsplit = '-' * 84
        for link in self.start_urls:
            self.driver.get(link)
            # Take the DOM after JavaScript has run, then parse it with pyquery.
            selenium_html = self.driver.execute_script("return document.documentElement.outerHTML")
            doc = pq(selenium_html)
            infoList = []

            self.log.WriteLog(strsplit)
            self.log_print(strsplit)
            Elements = doc('ul[class="la_list"]').find('li')
            for element in Elements.items():
                date = element('div[class="time"]').find('span').text().encode('utf8').strip()
                flag, strDate = self.date_isValid(date)
                if flag:
                    title = element('h4').find('a').text().encode('utf8').strip()
                    for keyword in self.keyword_list:
                        if title.find(keyword) > -1:
                            url = element('h4').find('a').attr('href')
                            dictM = {'title': title, 'date': strDate,
                                     'url': url, 'keyword': keyword,
                                     'introduction': title, 'source': ''}
                            infoList.append(dictM)
                            break

            if len(infoList) > 0:
                for item in infoList:
                    url = item['url']
                    # Visit each article page to read its source field.
                    self.driver.get(url)
                    htext = self.driver.execute_script("return document.documentElement.outerHTML")
                    dochtml = pq(htext)
                    strSources = dochtml('span[class="wzof"]').text().encode('utf8').strip().split(':')
                    if len(strSources) > 2:
                        item['source'] = strSources[2].replace('编辑', '').replace(' ', '')

                    self.log_print('title:%s' % item['title'])
                    self.log_print('url:%s' % item['url'])
                    self.log_print('date:%s' % item['date'])
                    self.log_print('source:%s' % item['source'])
                    self.log_print('kword:%s' % item['keyword'])
                    self.log_print(strsplit)

                self.db.SaveInformations(infoList)

        self.driver.close()
        self.driver.quit()


obj = chaojitvSpider()
obj.scrapy_date()
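
    The script expects a setting.conf INI file one directory above the working directory, read through the author's IniFile helper. The post does not show that file; the sketch below is a hypothetical reconstruction based only on the GetValue calls in __init__ and the keywords given in the introduction.

; Hypothetical setting.conf, inferred from cf.GetValue("chaojitv", "webSearchUrl")
; and cf.GetValue("section", "information_keywords"); the real file is not shown.
[chaojitv]
webSearchUrl = http://www.chaojitv.com/news/index.html

[section]
information_keywords = 数字;融合;电视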

 
