A fix for the Python 3.x BeautifulSoup([your markup], "lxml") markup_type=markup_type)) warning
import datetime
import random
import re
from urllib.request import urlopen
from bs4 import BeautifulSoup

random.seed(datetime.datetime.now())

def getLinks(articleUrl):
    html = urlopen("http://en.wikipedia.org" + articleUrl)
    bsOdj = BeautifulSoup(html)
    # collect every /wiki/ link in the article body, excluding namespace pages containing ":"
    return bsOdj.find("div", {"id": "bodyContent"}).findAll("a", href=re.compile("^(/wiki/)((?!:).)*$"))

links = getLinks("/wiki/Kevin_Bacon")
while len(links) > 0:
    newArticle = links[random.randint(0, len(links) - 1)].attrs["href"]  # follow a random link
    print(newArticle)
    links = getLinks(newArticle)
This is my source code. Running it produced the following warning:
D:\Anaconda3\lib\site-packages\bs4\__init__.py:181: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("lxml"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.

The code that caused this warning is on line 16 of the file D:/ThronePython/Python3 网络数据爬取/BeautifulSoup 爬虫_开始爬取/BeautifulSoup 维基百科六度分割_构建从一个页面到另一个页面的爬虫.py. To get rid of this warning, change code that looks like this:

 BeautifulSoup([your markup])

to this:

 BeautifulSoup([your markup], "lxml")

  markup_type=markup_type))
After searching on Baidu, I found that the warning appears because no parser was explicitly specified, so BeautifulSoup silently falls back to the best parser available on the current system, which may differ between machines and environments.
Following the hint in the warning, specify the parser explicitly by changing the line that constructs the BeautifulSoup object (bsOdj = BeautifulSoup(html)) to:
bsOdj = BeautifulSoup(html, "lxml")
With that change, the warning no longer appears.
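For reference, here is a minimal sketch of the full script with the parser named explicitly. It assumes the lxml package is installed in the current environment (for example via pip install lxml); that assumption is noted in the comments.

import datetime
import random
import re
from urllib.request import urlopen
from bs4 import BeautifulSoup

random.seed(datetime.datetime.now())

def getLinks(articleUrl):
    html = urlopen("http://en.wikipedia.org" + articleUrl)
    # Naming the parser pins the behaviour across systems and silences the warning.
    # "lxml" assumes the lxml package is installed in this environment.
    bsOdj = BeautifulSoup(html, "lxml")
    return bsOdj.find("div", {"id": "bodyContent"}).findAll("a", href=re.compile("^(/wiki/)((?!:).)*$"))

links = getLinks("/wiki/Kevin_Bacon")
while len(links) > 0:
    newArticle = links[random.randint(0, len(links) - 1)].attrs["href"]
    print(newArticle)
    links = getLinks(newArticle)

If lxml is not available, BeautifulSoup(html, "html.parser") works as well: it uses the parser that ships with the Python standard library, so no extra package is needed, at the cost of lxml's speed.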