Initial Crawler 7
A hands-on project focused on data extraction.
First, a supplementary point for Initial Crawler 6:
When text is parsed with etree.HTML, any missing tags are filled in automatically; etree.tostring then serializes the result, so you can see the repaired HTML structure.
# -*- coding: utf-8 -*-
from lxml import etree

text = '''
<div> <ul> <li class="item-1"><a href="link1.html">first item</a></li> <li class="item-1"><a href="link2.html">second item</a></li> <li class="item-inactive"><a href="link3.html">third item</a></li> <li class="item-1"><a href="link4.html">fourth item</a></li> <li class="item-0"><a href="link5.html">fifth item</a></li> </ul>
</div>
'''

html = etree.HTML(text)
print(etree.tostring(html))
From the output you can see that missing tags such as html and body have been completed automatically.
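Since etree.tostring returns bytes, decoding the result makes the completed structure easier to read (a small optional addition, not part of the original snippet):

print(etree.tostring(html).decode())
# roughly: <html><body><div> <ul> <li class="item-1"><a href="link1.html">first item</a></li> ... </ul> </div></body></html>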
Implementing a crawler for Baidu Tieba:
Note 1:
The site wraps the data we want to scrape inside HTML comments. Two workarounds (a short sketch of both follows the list of User-Agent strings below):
Method 1: pretend to be an old browser (via the User-Agent header), so the server returns the data outside the comments
Method 2: strip the comment markers (<!-- and -->) by replacing them with empty strings ("") before parsing
Here are some old User-Agent strings:
[
"Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0) ",
"Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0; DigExt) ",
"Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0; TUCOWS) ",
"Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0; .NET CLR 1.1.4322) ",
"Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0) ",
"Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; by TSG) ",
"Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; .NET CLR 1.0.3705) ",
"Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; .NET CLR 1.1.4322) ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0) ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; ) ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; T132461) ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 1) ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; en) Opera 8.0 ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; TencentTraveler ) ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; zh-cn) Opera 8.0 ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 1.1.4322) ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 1.1.4322; FDM) ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Maxthon; .NET CLR 1.1.4322) ",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; MathPlayer 2.0; .NET CLR 1.1.4322) "
]
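A minimal sketch of both workarounds (the URL and variable names are only for illustration; the full crawler below uses Method 2):

import random
import requests

old_user_agents = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; T132461)",
    # ... any of the strings listed above
]

url = "https://tieba.baidu.com/f?kw=美食天下"

# Method 1: pretend to be an old browser so the server returns the data outside comments
headers = {"User-Agent": random.choice(old_user_agents)}
response = requests.get(url, headers=headers)

# Method 2: keep any User-Agent and strip the comment markers before parsing
html_text = response.content.decode().replace("<!--", "").replace("-->", "")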
Note 2:
To work out the XPath for the post links, copy the XPath of two different posts in the browser and compare them to see which parts differ:
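For example, the browser's "Copy XPath" on two different posts gives paths like these (the li indexes here are only illustrative):

//*[@id="thread_list"]/li[2]/div/div[2]/div[1]/div[1]/a
//*[@id="thread_list"]/li[5]/div/div[2]/div[1]/div[1]/a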
Only the index on li differs, so dropping it gives an XPath that matches every post: //*[@id="thread_list"]/li/div/div[2]/div[1]/div[1]/a
Note 3:
When handling pagination, the extracted URLs may be missing part of the address, so the missing prefix has to be added back:
temp['link'] = 'https://tieba.baidu.com' + el.xpath('./@href')[0]
next_url = 'https:' + html.xpath('//a[contains(text(),"下一页>")]/@href')[0]
Also, the XPath used for the next-page link needs special care:
For example, the next-page expression originally used was:
//a[@class="next pagination-item"]/@href
After processing the first page this returned None, so switching to matching on the link text made pagination work:
//a[contains(text(),"下一页>")]/@href
# -*- coding: utf-8 -*-
import requests
from lxml import etree

# url
# headers
# send the request and get the response
# extract data from the response
# decide when to stop
class Tieba(object):
    def __init__(self, name):
        self.url = "https://tieba.baidu.com/f?kw={}".format(name)
        print(self.url)
        self.headers = {
            # a modern User-Agent works here because the comment markers are stripped in parse_data
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
            # "User-Agent": "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; T132461)"
        }

    def get_data(self, url):
        # send the request, save a local copy for inspection, and return the raw bytes
        response = requests.get(url, headers=self.headers)
        with open("temp.html", "wb") as f:
            f.write(response.content)
        return response.content

    def parse_data(self, data):
        # strip the comment markers, then create the element object
        data = data.decode().replace("<!--", "").replace("-->", "")
        html = etree.HTML(data)
        el_list = html.xpath('//*[@id="thread_list"]/li/div/div[2]/div[1]/div[1]/a')
        # print(len(el_list))
        data_list = []
        for el in el_list:
            temp = {}
            temp['title'] = el.xpath('./text()')[0]
            temp['link'] = 'https://tieba.baidu.com' + el.xpath('./@href')[0]
            data_list.append(temp)

        # get the next-page url; on the last page the xpath matches nothing
        try:
            next_url = 'https:' + html.xpath('//a[contains(text(),"下一页>")]/@href')[0]
        except:
            next_url = None

        return data_list, next_url

    def save_data(self, data_list):
        for data in data_list:
            print(data)

    def run(self):
        next_url = self.url
        while True:
            # send the request and get the response
            data = self.get_data(next_url)
            # extract the data and the url used for the next page
            data_list, next_url = self.parse_data(data)
            self.save_data(data_list)
            print(next_url)
            # check whether to stop
            if next_url is None:
                break


if __name__ == '__main__':
    tieba = Tieba("美食天下")
    tieba.run()