Channel: Active questions tagged r - Stack Overflow

Web scraping with R and rvest


I have a project where I need to scrape a series of articles from news sites. I am interested in each article's headline and body text. In most cases the site keeps a common base URL, for example:

https://www.americanthinker.com/articles/2019/11/why_rich_people_love_poor_immigrants.html
https://tmp.americanthinker.com/blog/2015/01/california_begins_giving_drivers_licenses_to_illegal_aliens.html

As there are so many articles (more than 1,000) to download, I thought of writing a function to download all the data automatically. A vector provides all the web addresses (one per line):

> article
[1] "https://www.americanthinker.com/articles/2019/11/why_rich_people_love_poor_immigrants.html"
[2] "https://tmp.americanthinker.com/blog/2015/01/california_begins_giving_drivers_licenses_to_illegal_aliens.html"
[3] "https://www.americanthinker.com/articles/2018/11/immigrants_will_not_fund_our_retirement.html"
> str(article)
 chr [1:3] "https://www.americanthinker.com/articles/2019/11/why_rich_people_love_poor_immigrants.html" ...
> summary(article)
   Length     Class      Mode 
        3 character character 

As a result, the script would use the vector as the source of addresses and create a data frame with the title and text of each article. But some errors pop up. Here is the code I wrote, based on a series of Stack Overflow posts:

Packages

library(rvest)
library(purrr)
library(xml2) 
library(dplyr)
library(readr)

Importing CSV and exporting as a vector

base <- read_csv(file.choose(), col_names = FALSE)
article <- pull(base,X1)

First try

articles_final <- map_df(article, function(i){
  pages<-read_html(article)
  title <-
    article %>%  map_chr(. %>% html_node("h1") %>% html_text())
  content <-
    article %>% map_chr(. %>% html_nodes('.article_body span') %>% html_text() %>% paste(., collapse = ""))
  article_table <- data.frame("Title" = title, "Content" = content)
  return(article_table)
})  

Second try

map_df(1:3, function(i){
  page <- read_html(sprintf(article,i))
  data.frame(Title = html_text(html_nodes(page,'.h1')),
             Content= html_text(html_nodes(page,'.article_body span')),
             Site = "American Thinker"
             )
}) -> articles_final

In both cases, I am getting the following error while running these functions:

Error in doc_parse_file(con, encoding = encoding, as_html = as_html, options = options) : 
  Expecting a single string value: [type = character; extent = 3].
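For context, the error message points at the cause: `read_html()` expects a single string, but in the first try the whole three-element `article` vector is passed to it (the function argument `i` is never used), and in the second try `sprintf(article, i)` is likewise vectorised over `article`. A minimal sketch of a per-URL helper, assuming the `h1` and `.article_body span` selectors from the attempts above are correct for the site:

```r
library(rvest)
library(purrr)
library(dplyr)

# Each call to read_html() gets ONE URL (the function argument),
# not the whole `article` vector -- passing the vector is what
# triggers "Expecting a single string value: [type = character; extent = 3]".
scrape_article <- function(url) {
  page <- read_html(url)
  tibble(
    Title   = page %>% html_node("h1") %>% html_text(trim = TRUE),
    Content = page %>% html_nodes(".article_body span") %>%
      html_text() %>% paste(collapse = " ")
  )
}

# Usage (needs network access):
# articles_final <- map_dfr(article, scrape_article)
```

`map_dfr()` then row-binds one single-row tibble per article into the final data frame.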

I need this working so I can download and analyse the articles.

Thank you very much for your help.

Edit

I tried the code suggested below, but it did not work; there is some problem with my code:
> map_dfc(.x = article,
+         .f = function(x){
+           foo <- tibble(Title = read_html(x) %>%
+                           html_nodes("h1") %>% 
+                           html_text() %>%
+                           .[nchar(.) > 0],
+                         Content = read_html(x) %>% 
+                           html_nodes("p") %>% 
+                           html_text(),
+                         Site = "AmericanThinker")%>%
+             filter(nchar(Content) > 0)
+           }
+         ) -> out
Error: Argument 3 must be length 28, not 46

But, as you can see, a new error pops up.
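In case it helps diagnosis: `map_dfc()` column-binds the result of each iteration, so the columns produced for every page must have matching lengths, and here one page yields 28 `<p>` nodes while another yields 46. A hedged sketch of the same idea with row-binding (`map_dfr()`) instead, collapsing the paragraphs to one string per article so the lengths no longer need to line up:

```r
library(rvest)
library(purrr)
library(dplyr)

# Build one single-row tibble per article and row-bind them, so a
# differing paragraph count across pages cannot cause a length clash.
scrape_paragraphs <- function(x) {
  page <- read_html(x)
  tibble(
    Title = page %>% html_nodes("h1") %>% html_text(trim = TRUE) %>%
      .[nchar(.) > 0] %>% paste(collapse = " "),
    Content = page %>% html_nodes("p") %>% html_text(trim = TRUE) %>%
      .[nchar(.) > 0] %>% paste(collapse = " "),
    Site = "AmericanThinker"
  )
}

# Usage (needs network access):
# out <- map_dfr(article, scrape_paragraphs)
```

If the individual paragraphs are needed later, they can be kept as a list-column instead of being pasted together.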

