Web scraping with R and the XML package

Duff

I am trying to access the latitude and longitude data from this web page: http://chicago.craigslist.org/search/apa?query=pilsen&zoomToPosting=&srchType=A&minAsk=&maxAsk=&bedrooms=&housing_type=

The page source looks like this:

<p class="row" data-latitude="41.8500405654737" data-longitude="-87.6500521703668"   data-pid="4005695169"> <a href="/chc/apa/4005695169.html" class="i"></a> <span class="pl">    <span class="star"></span> <span class="date">Aug 16</span>  <a href="/chc/apa/4005695169.html">UIC/ Medical Dist /Pilsen</a> </span> <span class="l2">  <span class="price">$850</span> / 1br -  <span class="pnr"> <small> (Pilsen)</small> <span class="px"> <span class="p"> <a href="#" class="maptag" data-pid="4005695169">map</a></span></span> </span>  </span></p>

I'm new to XPath, and I'm wondering whether I can use it together with the XML package to get at the data.

I tried:

require(XML)
url <- "http://chicago.craigslist.org/search/apa?query=pilsen&zoomToPosting=&srchType=A&minAsk=&maxAsk=&bedrooms=&housing_type="

doc <- htmlParse(url)
latitude <- xpathApply(doc, path = "//p[@data-latitude]", fun = xmlValue)

which gives me the text content of the p tags:

[[1]]
[1] "Aug 16  AMAZING Pilsen 2bed SHOWING TODAY @6:30pm $695 / 2br - 725ft² -       

I would like to know how to access the information stored in the paragraph tag's attributes, such as the latitude (e.g. 41.8500405654737). Thanks for your help!

Answer

If you only want a specific attribute, use xmlGetAttr instead of xmlValue, with name = 'data-latitude'. If you want all of the attributes, use xmlAttrs.

latitude <- xpathSApply(doc, path = "//p[@data-latitude]", fun = xmlGetAttr, name = 'data-latitude')
latitude
 [1] "41.963428913124"  "41.9515686867654" "41.8634477778791" "41.8500405654737"
 [5] "41.8500405654737" "41.8517021430795" "41.8548534109526" "41.8551971856296"
 [9] "41.8540512700394" "41.8118242805405" "41.8467747060416" "41.8527907628902"
[13] "41.8615570171552" "41.8500405654737" "41.8514729599615" "41.8514729599615"
[17] "41.8500405654737" "41.8514729599615" "41.9457245172554" "41.9391355026118"
[21] "41.8766258071664" "41.8553117771886" "41.940625192879"  "41.9457245172554"
[25] "41.9391355026118" "41.8807511032911" "41.9457245172554" "41.903440231977" 
[29] "41.8780582016541" "41.8950177523891" "41.8541085658189" "41.9391355026118"
[33] "41.9395365730683" "41.8667136373111" "41.8667136373111" "41.8353155501396"
[37] "41.9181079515316" "41.903440231977"  "41.9208581489481" "41.89490316083"  
[41] "41.903440231977"  "41.9017213585917" "41.903440231977"  "41.8411597196497"
[45] "41.8520459177566" "41.8527907628902" "41.8617289044938" "41.8527907628902"
[49] "41.963428913124"  "41.9457245172554" "41.8118242805405" "41.8297005637477"
[53] "41.903440231977"  "41.8762247367098" "41.897710654026"  "41.8588641155183"
[57] "41.8667136373111" "41.8667136373111" "41.9181079515316" "41.903440231977" 
[61] "41.8500405654737" "41.9181079515316" "41.9181079515316" "41.92062896583"  
[65] "41.903440231977"  "41.9474433906407" "41.903325640418"  "41.8950177523891"
[69] "41.83176321181"   "41.8537074953624" "41.903440231977"  "41.9391355026118"
[73] "41.9457245172554" "41.8569733547944" "41.8402429871775" "41.8950177523891"
[77] "41.8762247367098" "41.8950177523891" "41.8762247367098" "41.8762247367098"
[81] "41.8368052404069"
