This article walks through the crawling patterns available to a Scrapy spider in Python. Many readers are not yet familiar with them, so each pattern is shown with a working example; hopefully you will come away with a clear picture of all four.
A Scrapy spider can crawl in several ways:

Crawl a single page
Build links from a given list to crawl multiple pages
Find the "next page" link and follow it
Follow links into detail pages and crawl each one

An example of each is given below.
1. Crawl a single page
# by 寒小陽 (hanxiaoyang.ml@gmail.com)
import scrapy


class JulyeduSpider(scrapy.Spider):
    name = "julyedu"
    start_urls = [
        'https://www.julyedu.com/category/index',
    ]

    def parse(self, response):
        # each course card on the page sits inside a div.course_info_box
        for julyedu_class in response.xpath('//div[@class="course_info_box"]'):
            print(julyedu_class.xpath('a/h5/text()').extract_first())
            print(julyedu_class.xpath('a/p[@class="course-info-tip"][1]/text()').extract_first())
            print(julyedu_class.xpath('a/p[@class="course-info-tip"][2]/text()').extract_first())
            print(response.urljoin(julyedu_class.xpath('a/img[1]/@src').extract_first()))
            print("\n")
            yield {
                'title': julyedu_class.xpath('a/h5/text()').extract_first(),
                'desc': julyedu_class.xpath('a/p[@class="course-info-tip"][1]/text()').extract_first(),
                'time': julyedu_class.xpath('a/p[@class="course-info-tip"][2]/text()').extract_first(),
                'img_url': response.urljoin(julyedu_class.xpath('a/img[1]/@src').extract_first()),
            }
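Any of the spiders in this article can be run as a standalone file with Scrapy's runspider command; the file name and output path below are just placeholders:

scrapy runspider julyedu_spider.py -o classes.json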
2. Build links from a given list to crawl multiple pages
# by 寒小陽 (hanxiaoyang.ml@gmail.com)
import scrapy


class CnBlogSpider(scrapy.Spider):
    name = "cnblogs"
    allowed_domains = ["cnblogs.com"]
    # build the URL for every page up front, from page numbers 1..10
    start_urls = [
        'http://www.cnblogs.com/pick/#p%s' % p for p in range(1, 11)
    ]

    def parse(self, response):
        for article in response.xpath('//div[@class="post_item"]'):
            print(article.xpath('div[@class="post_item_body"]/h4/a/text()').extract_first().strip())
            print(response.urljoin(article.xpath('div[@class="post_item_body"]/h4/a/@href').extract_first()).strip())
            print(article.xpath('div[@class="post_item_body"]/p/text()').extract_first().strip())
            print(article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/a/text()').extract_first().strip())
            print(response.urljoin(article.xpath('div[@class="post_item_body"]/div/a/@href').extract_first()).strip())
            print(article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/span[@class="article_comment"]/a/text()').extract_first().strip())
            print(article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/span[@class="article_view"]/a/text()').extract_first().strip())
            print("")
            yield {
                'title': article.xpath('div[@class="post_item_body"]/h4/a/text()').extract_first().strip(),
                'link': response.urljoin(article.xpath('div[@class="post_item_body"]/h4/a/@href').extract_first()).strip(),
                'summary': article.xpath('div[@class="post_item_body"]/p/text()').extract_first().strip(),
                'author': article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/a/text()').extract_first().strip(),
                'author_link': response.urljoin(article.xpath('div[@class="post_item_body"]/div/a/@href').extract_first()).strip(),
                'comment': article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/span[@class="article_comment"]/a/text()').extract_first().strip(),
                'view': article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/span[@class="article_view"]/a/text()').extract_first().strip(),
            }
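The start_urls list comprehension simply pre-builds one URL per page number before the crawl starts; written out, it is equivalent to:

start_urls = [
    'http://www.cnblogs.com/pick/#p1',
    'http://www.cnblogs.com/pick/#p2',
    # ... and so on, up to ...
    'http://www.cnblogs.com/pick/#p10',
]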
3. Find the "next page" link and follow it
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/tag/humor/',
    ]

    def parse(self, response):
        for quote in response.xpath('//div[@class="quote"]'):
            yield {
                'text': quote.xpath('span[@class="text"]/text()').extract_first(),
                'author': quote.xpath('span/small[@class="author"]/text()').extract_first(),
            }

        # follow the "next page" link until there is none
        next_page = response.xpath('//li[@class="next"]/a/@href').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)
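On Scrapy 1.4 and later, response.follow resolves relative URLs itself, so the urljoin step can be dropped. A minimal sketch of the same pagination tail:

# inside parse(), after yielding the quotes on the current page
next_page = response.xpath('//li[@class="next"]/a/@href').extract_first()
if next_page is not None:
    # response.follow accepts the relative URL directly
    yield response.follow(next_page, callback=self.parse)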
4. Follow links into detail pages and crawl each one
# by 寒小陽 (hanxiaoyang.ml@gmail.com)
import scrapy


class QQNewsSpider(scrapy.Spider):
    name = 'qqnews'
    start_urls = ['http://news.qq.com/society_index.shtml']

    def parse(self, response):
        # collect every article link on the index page and follow it
        for href in response.xpath('//*[@id="news"]/div/div/div/div/em/a/@href'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        print(response.xpath('//div[@class="qq_article"]/div/h2/text()').extract_first())
        print(response.xpath('//span[@class="a_time"]/text()').extract_first())
        print(response.xpath('//span[@class="a_catalog"]/a/text()').extract_first())
        print("\n".join(response.xpath('//div[@id="Cnt-Main-Article-QQ"]/p[@class="text"]/text()').extract()))
        print("")
        yield {
            'title': response.xpath('//div[@class="qq_article"]/div/h2/text()').extract_first(),
            'content': "\n".join(response.xpath('//div[@id="Cnt-Main-Article-QQ"]/p[@class="text"]/text()').extract()),
            'time': response.xpath('//span[@class="a_time"]/text()').extract_first(),
            'cate': response.xpath('//span[@class="a_catalog"]/a/text()').extract_first(),
        }
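When the detail page alone does not contain everything you need, data from the listing page can be carried into the second callback through Request.meta. A minimal sketch, assuming the same two-callback layout as above (the 'list_title' key is just an illustrative name):

def parse(self, response):
    for link in response.xpath('//*[@id="news"]/div/div/div/div/em/a'):
        full_url = response.urljoin(link.xpath('@href').extract_first())
        # stash the anchor text so parse_question can emit it with the page data
        yield scrapy.Request(full_url, callback=self.parse_question,
                             meta={'list_title': link.xpath('text()').extract_first()})

def parse_question(self, response):
    yield {
        'list_title': response.meta.get('list_title'),
        'time': response.xpath('//span[@class="a_time"]/text()').extract_first(),
    }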
That covers the crawling methods available to a Scrapy spider in Python. Thanks for reading! Hopefully the examples above gave you a clear picture of each approach; to learn more, follow the 億速云 industry news channel.