I'm trying to build an automated web scraper and have spent hours on here, watching YT videos and reading around the topic. New to programming (started a month ago) and new to this community...

So, using VS Code as my IDE, I followed the format of this scraper (Python and Selenium), which actually works:
from selenium import webdriver
import time
from selenium.webdriver.support.select import Select

with open('job_scraping_multipe_pages.csv', 'w') as file:
    file.write("Job_title, Location, Salary, Contract_type, Job_description \n")

driver = webdriver.Chrome()
driver.get('https://www.jobsite.co.uk/')
driver.maximize_window()
time.sleep(1)
cookie = driver.find_element_by_xpath('//button[@class="accept-button-new"]')
try:
    cookie.click()
except:
    pass
job_title = driver.find_element_by_id('keywords')
job_title.click()
job_title.send_keys('Software Engineer')
time.sleep(1)
location = driver.find_element_by_id('location')
location.click()
location.send_keys('Manchester')
time.sleep(1)
dropdown = driver.find_element_by_id('Radius')
radius = Select(dropdown)
radius.select_by_visible_text('30 miles')
time.sleep(1)
search = driver.find_element_by_xpath('//input[@value="Search"]')
search.click()
time.sleep(2)
for k in range(3):
    titles = driver.find_elements_by_xpath('//div[@class="job-title"]/a/h2')
    location = driver.find_elements_by_xpath('//li[@class="location"]/span')
    salary = driver.find_elements_by_xpath('//li[@title="salary"]')
    contract_type = driver.find_elements_by_xpath('//li[@class="job-type"]/span')
    job_details = driver.find_elements_by_xpath('//div[@title="job details"]/p')
    with open('job_scraping_multipe_pages.csv', 'a') as file:
        for i in range(len(titles)):
            file.write(titles[i].text + "," + location[i].text + "," + salary[i].text + "," + contract_type[i].text + "," +
                       job_details[i].text + "\n")
    next = driver.find_element_by_xpath('//a[@aria-label="Next"]')
    next.click()
file.close()
driver.close()
It works. I then tried to reproduce the result on another site. Instead of clicking a 'next' button, I found a way to increment the number at the end of the URL by 1. But my problem comes from the last part of the code, which gives me AttributeError: 'str' object has no attribute 'text'. Here is my code for the site I'm targeting with Python and Selenium (https://angelmatch.io/pitch_decks/5285):
from selenium import webdriver
import time
from selenium.webdriver.support.select import Select

driver = webdriver.Chrome()
with open('pitchDeckResults2.csv', 'w') as file:
    file.write("Startup_Name, Startup_Description, Link_Deck_URL, Startup_Website, Pitch_Deck_PDF, Industries, Amount_Raised, Funding_Round, Year /n")

for k in range(5285, 5287, 1):
    linkDeck = "https://angelmatch.io/pitch_decks/" + str(k)
    driver.get(linkDeck)
    driver.maximize_window
    time.sleep(2)
    startupName = driver.find_elements_by_xpath('/html/body/div[1]/div[2]/div[2]/div/div/div[1]')
    startupDescription = driver.find_elements_by_xpath('/html/body/div[1]/div[2]/div[2]/div/div/div[3]/p[2]')
    startupWebsite = driver.find_elements_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/p[3]/a')
    pitchDeckPDF = driver.find_elements_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/button/a')
    industries = driver.find_elements_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/a[2]')
    amountRaised = driver.find_elements_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/p[1]/b')
    fundingRound = driver.find_elements_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/a[1]')
    year = driver.find_elements_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/p[2]/b')
    with open('pitchDeckResults2.csv', 'a') as file:
        for i in range(len(startupName)):
            file.write(startupName[i].text + "," + startupDescription[i].text + "," + linkDeck[i].text + "," + startupWebsite[i].text + "," + pitchDeckPDF[i].text + "," + industries[i].text + "," + amountRaised[i].text + "," + fundingRound[i].text + "," + year[i].text + "\n")
    time.sleep(1)
file.close()
driver.close()
Any help would be much appreciated! I'm trying to use this technique to get the data into a CSV!
Honestly, you're doing well. The only reason you're getting the error is that you're trying to read a .text attribute from a value of type str, and Python's str type has no such attribute. On top of that, indexing it with [i] is also heading for a 'list index out of range' exception. What did you actually want in place of linkDeck[i].text — maybe the page title? Or something else?
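A minimal reproduction of that error, with no Selenium involved (the URL is just the one from your loop):

```python
linkDeck = "https://angelmatch.io/pitch_decks/5285"

# Indexing a string yields a one-character string, not a WebElement,
# so asking for .text raises the AttributeError you saw.
try:
    linkDeck[0].text
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'text'
```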
By the way, you shouldn't call close() on a file you opened with a with open() statement. It's a context manager: it closes the file for you when the block exits.
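A small sketch of that context-manager behaviour (the filename here is just an example):

```python
with open('demo.csv', 'w') as f:
    f.write("a,b\n")
    print(f.closed)  # False: still open inside the block

print(f.closed)  # True: the context manager closed it on exit
f.close()        # redundant; close() on an already-closed file is a no-op
```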
Anyway: I fixed the maximize_window() call, removed one of the two open() calls so the file is opened once, and write the link into the row as a plain string:
import time
from selenium import webdriver

driver = webdriver.Chrome()
delimiter = ';'  # semicolons are less likely than commas to appear in the scraped text
with open('pitchDeckResults2.csv', 'w+') as _file:
    header = ['Startup_Name', 'Startup_Description', 'Link_Deck_URL', 'Startup_Website', 'Pitch_Deck_PDF', 'Industries',
              'Amount_Raised', 'Funding_Round', 'Year\n']
    _file.write(delimiter.join(header))
    for k in range(5285, 5287):
        linkDeck = "https://angelmatch.io/pitch_decks/" + str(k)
        driver.get(linkDeck)
        time.sleep(1)
        startupName = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[2]/div/div/div[1]')
        startupDescription = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[2]/div/div/div[3]/p[2]')
        startupWebsite = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/p[3]/a')
        pitchDeckPDF = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/button/a')
        industries = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/a[2]')
        amountRaised = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/p[1]/b')
        fundingRound = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/a[1]')
        year = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[3]/div[1]/div/p[2]/b')
        # linkDeck is already a plain string, so it goes into the row without .text
        all_elements = [startupName.text, startupDescription.text, linkDeck, startupWebsite.text, pitchDeckPDF.text,
                        industries.text, amountRaised.text, fundingRound.text, f"{year.text}\n"]
        _file.write(delimiter.join(all_elements))
driver.close()
I may have missed something — let me know.
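One more suggestion: rather than joining strings on a delimiter yourself, the standard-library csv module quotes fields for you, so a description that happens to contain a comma (or semicolon) can't break a row. A minimal sketch with made-up row data, using the same filename as above:

```python
import csv

header = ['Startup_Name', 'Startup_Description', 'Link_Deck_URL']
row = ['Acme', 'We build rockets, fast', 'https://angelmatch.io/pitch_decks/5285']

with open('pitchDeckResults2.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(header)  # header row
    writer.writerow(row)     # the comma-containing field is quoted automatically

with open('pitchDeckResults2.csv', newline='') as f:
    for parsed in csv.reader(f):
        print(parsed)  # rows round-trip cleanly, commas and all
```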