While reading the official Scrapy documentation I found that an Item's fields can be added dynamically, but I don't know how to do it.
I tried an ItemLoader demo, and it runs successfully as the test code below.
I passed a field_name_list to the ItemLoader, as in code one.
The Item class it uses is shown in code two.
When I run it I get an error, even though the args I print out arrive correctly; the output is in code three.
I forgot to include the code that actually runs the loader; it is in code four.
My self.field looks like this:
"field": {
"content": [
{
"expression": [
"//td[@id='article_content']//text()"
],
"method": "xpath"
}
],
"datetime": [
{
"expression": [
"//p[@class='xg1']/text()"
],
"method": "xpath",
"re" : "\\d{2,4}年\\d{1,2}月\\d{1,2}日|\\d{1,2}月\\d{1,2}日|\\d{2,4}[-|/|.]\\d{1,2}[-|/|.]\\d{1,2}"
}
],
# Test Code
class Test(Item):
    field_list = ["title", "urls", "image", "content", "name", "source", "pubdate"]
    fields = {field_name: Field() for field_name in field_list}
# code one
field_list = []
for key, value in field.items():
    field_list.append(key)
loader = ItemLoader(item=Demo(field_list), response=response)
# code two
class Demo(Item):
    def __init__(self, *args, **kwargs):
        print(args, 1111111111111111111111111111111111111111111111111111)
        self._values = {}
        if args or kwargs:  # avoid creating dict for most common case
            for k, v in six.iteritems(dict(*args, **kwargs)):
                self[k] = v
        # super(Demo, self).__init__()
        # fields = {field_name: Field() for field_name in field_list}
# code three
2019-04-02 17:57:13 [scrapy.core.scraper] ERROR: Spider error processing <GET http://news.wmxa.cn/beilin/201904/615036.html via http://192.168.99.100:8050/render.html> (referer: None)
Traceback (most recent call last):
File "D:\python\Scripts\test\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
yield next(it)
File "D:\python\Scripts\test\lib\site-packages\scrapy_splash\middleware.py", line 156, in process_spider_output
for el in result:
File "D:\python\Scripts\test\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
return (_set_referer(r) for r in result or ())
File "D:\python\Scripts\test\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "D:\python\Scripts\test\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "D:\python\Scripts\test\lib\site-packages\scrapy\spiders\crawl.py", line 78, in _parse_response
for requests_or_item in iterate_spider_output(cb_res):
File "F:\Newspider\news_project\news\news\spiders\newspider.py", line 141, in parse_item
loader = ItemLoader(item=Demo(field_list), response=response)
File "F:\Newspider\news_project\news\news\items.py", line 70, in __init__
for k, v in six.iteritems(dict(*args, **kwargs)):
TypeError: dict expected at most 1 arguments, got 8
2019-04-02 17:57:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://news.wmxa.cn/beilin/201904/615065.html via http://192.168.99.100:8050/render.html> (referer: None)
(['title', 'content', 'blei', 'image', 'pay', 'pubdate', 'source', 'url'],) 1111111111111111111111111111111111111111111111111111
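The TypeError itself is plain Python behaviour rather than anything Scrapy-specific: `dict()` accepts at most one positional argument, and the message "got 8" means the eight field names were unpacked into `dict(*args, **kwargs)` as separate positional arguments. A minimal stdlib reproduction (the field names are taken from the printed args above):

```python
# dict() takes at most ONE positional argument, so unpacking a list of
# eight field names into it reproduces the TypeError from the traceback.
field_list = ['title', 'content', 'blei', 'image', 'pay', 'pubdate', 'source', 'url']

try:
    dict(*field_list)          # what dict(*args, **kwargs) receives after unpacking
except TypeError as exc:
    print(exc)                 # dict expected at most 1 argument, got 8

# Passing the list as ONE argument avoids that error, but dict() then
# tries to read each element as a (key, value) pair and fails differently:
try:
    dict(field_list)
except ValueError as exc:
    print(type(exc).__name__)  # ValueError
```

So the constructor in code two only works when it is given a mapping (or key/value pairs), never a bare list of field names.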
# code four
field = self.field
loader = ItemLoader(item=Demo(field_list), response=response)
for key, value in field.items():
    for extractor in value:
        try:
            if extractor.get("method") == "xpath":
                loader.add_xpath(key, *extractor.get("expression"), **{"re": extractor.get("re")})
            if extractor.get("method") == "css":
                loader.add_css(key, *extractor.get("expression"), **{"re": extractor.get("re")})
            if extractor.get("method") == "attr":
                loader.add_value(key, getattr(response, *extractor.get("expression")))
        except Exception:
            continue  # the original except clause was cut off; placeholder only
I want the fields to be generated dynamically. How can I do this?
There is a very simple way to use ItemLoader: while parsing the response, you can add all the data you need from it to the ItemLoader.
from scrapy import Item
from scrapy.loader import ItemLoader

def parse(self, response):
    loader = ItemLoader(item=Item(), response=response)  # first arg can be any scrapy.Item object
    # `extracted` stands for whatever field-name -> value mapping
    # you build while parsing the response
    for field_name, value in extracted.items():
        loader.add_value(field_name, value)  # literal values work too
    # note: load_item() raises KeyError for fields not declared on the Item
    return loader.load_item()