In previous work with Skip Krueger, we conceptualized bond ratings as a multiple-rater problem and extracted a measure of state-level creditworthiness. I had always had it on my list to do something like this and recently ran across a package called geofacet that makes it simple to do. So here goes. The code is below the post.
library(haven)      # read_dta() for Stata files
library(tidyverse)  # loads dplyr and ggplot2
Pew.Data <- read_dta(url("https://github.com/robertwwalker/academic-mymod/raw/master/data/Pew/modeledforprediction.dta"))
load(url("https://github.com/robertwwalker/academic-mymod/raw/master/data/Pew/Scaled-BR-Pew.RData"))
state.ratings <- data.frame(state.est)  # state.est is a stand-in name for the estimates object the .RData supplies
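With the scaled ratings in hand, geofacet arranges one panel per state in a US-shaped grid. A minimal sketch, assuming state.ratings carries State, year, and rating columns; those names are illustrative, not the actual object structure.
library(geofacet)
# facet_geo() swaps facet_wrap() for a grid laid out like the US map
ggplot(state.ratings, aes(x = year, y = rating)) +
  geom_line() +
  facet_geo(~ State, grid = "us_state_grid2") +
  theme_minimal()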
NB: This was last updated on March 25, 2020.
Building Oregon COVID data
I have a few days of data now. To rebuild the rest, I will have to use the Wayback Machine to locate the files I need and to follow updates to this page from Oregon’s OHA.
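As a sketch of the lookup, the Wayback Machine’s availability API returns the archived snapshot closest to a requested date; the OHA page URL here is an assumption.
library(jsonlite)
# Ask archive.org for the snapshot of the page closest to March 24, 2020
oha.page <- "https://www.oregon.gov/oha/PH/DISEASESCONDITIONS/DISEASESAZ/Pages/emerging-respiratory-infections.aspx"
lookup <- paste0("https://archive.org/wayback/available?url=",
                 URLencode(oha.page, reserved = TRUE), "&timestamp=20200324")
snapshot <- fromJSON(lookup)
snapshot$archived_snapshots$closest$url  # the archived copy to scrape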
A Scraper
Let me explain the logic of the scraper. NB: I had to rewrite it; the original versions of the website had three tables but no data on hospitalizations.
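A minimal sketch of that logic, assuming the counts live in HTML tables on the OHA page; the URL is an assumption and the inspection step stands in for the real parsing.
library(rvest)
# Pull every HTML table from the page; later versions added a
# hospitalization column, so inspect the list before binding rows
oha.page <- "https://www.oregon.gov/oha/PH/DISEASESCONDITIONS/DISEASESAZ/Pages/emerging-respiratory-infections.aspx"
tables <- read_html(oha.page) %>%
  html_nodes("table") %>%
  html_table(fill = TRUE)
str(tables, max.level = 1)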
Oregon COVID data
I now have a few days of data. These data are current as of March 24, 2020. I will present the first version of these visualizations here and then move the auto-update to a different location. A messy first version of the scraping exercise is at the bottom of this post.
paste0("https://github.com/robertwwalker/rww-science/raw/master/content/R/COVID/data/OregonCOVID",Sys.Date(),".RData")
## [1] "https://github.com/robertwwalker/rww-science/raw/master/content/R/COVID/data/OregonCOVID2020-03-24.RData"
load(url(paste0("https://github.com/robertwwalker/rww-science/raw/master/content/R/COVID/data/OregonCOVID",Sys.Date(),".RData")))
A base map
Load the tigris library, then grab the map as an sf object; ggplot2 provides a geom_sf that makes these easy to work with.
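A minimal sketch, assuming Oregon counties as the base map; the cartographic-boundary argument and theme are illustrative choices.
library(tigris)
library(ggplot2)
options(tigris_class = "sf")  # needed on older tigris; newer versions return sf by default
OR.counties <- counties(state = "OR", cb = TRUE)  # generalized cartographic boundaries
ggplot(OR.counties) + geom_sf(fill = "white") + theme_minimal()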
The Office
library(tidyverse)
office_ratings <- readr::read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-03-17/office_ratings.csv')
A First Plot
The number of episodes of The Office by season.
library(janitor)
TableS <- office_ratings %>% tabyl(season)  # episode counts by season
p1 <- TableS %>%
  ggplot(aes(x = as.factor(season), y = n, fill = as.factor(season))) +
  geom_col() +
  labs(x = "Season", y = "Episodes", title = "The Office: Episodes") +
  guides(fill = FALSE)
p1
Ratings
How are the various seasons and episodes rated?
p2 <- office_ratings %>%
  ggplot(aes(x = as.factor(season), y = imdb_rating,
             fill = as.factor(season), color = as.factor(season))) +
  geom_violin(alpha = 0.3) +
  geom_point() +
  guides(fill = FALSE, color = FALSE) +
  labs(x = "Season", y = "IMDB Rating")
p2
Patchwork
Using patchwork, we can combine multiple plots.
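A small sketch, assuming the two plots above; the side-by-side layout is an illustrative choice.
library(patchwork)
# patchwork overloads +, |, and / for ggplot layout; this puts the
# episode counts next to the ratings violins
p1 + p2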
Some Data from FREDr
Downloading the FRED data on national debt as a percentage of GDP. I first want to examine the US data and will then turn to some comparisons. fredr makes it remarkably easy to do! I will use two core tools from fredr. First, fredr_series_search_text allows one to enter search text and retrieve the series responsive to that text. The results can be sorted in particular ways; two such options are shown below.
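A minimal sketch of the search step, assuming an API key has been set with fredr_set_key(); the search text and the two orderings are illustrative.
library(fredr)
fredr_set_key("YOUR_FRED_API_KEY")  # placeholder key
# The same search, sorted two ways: by popularity and by frequency
by.popularity <- fredr_series_search_text(search_text = "federal debt percent of GDP",
                                          order_by = "popularity", sort_order = "desc")
by.frequency <- fredr_series_search_text(search_text = "federal debt percent of GDP",
                                         order_by = "frequency", sort_order = "desc")
head(by.popularity$id)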
XBRL Changed
XBRL has undergone and is undergoing some changes. Some filers have already needed to change their filings and others will have to soon. Here is the excerpt.
XBRL Change
This has broken many of the existing parsers for new filings. It is time to find a way around this. I have seen links for scraping them from Yahoo! Finance but that is not really what I want.
tidyTuesday on the Carbon Footprint of Feeding the Planet
The tidyTuesday for this week relies on data scraped from the Food and Agricultural Organization of the United Nations. The blog post for obtaining the data can be found on r-tastic. The scraping exercise is nice and easy to follow, and it explores a case of cleaning up a very messy data structure. I took this exercise as practice for using pivot_wider and pivot_longer; a small sketch follows.
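As a small illustration of that practice, the pair below round-trips the food data between long and wide shapes; the column names assume the published food_consumption file for 2020-02-18 (which does spell the emissions column co2_emmission).
library(tidyverse)
food <- read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-02-18/food_consumption.csv")
# One column per food category, one row per country ...
wide <- food %>%
  select(country, food_category, co2_emmission) %>%
  pivot_wider(names_from = food_category, values_from = co2_emmission)
# ... and back to the original long shape
long <- wide %>%
  pivot_longer(-country, names_to = "food_category", values_to = "co2_emmission")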
First, I wanted to acquire the distribution of letters and then play with that. I embedded the result here. The second step is to import the tidyTuesday data.
library(tidyverse)
Letter.Freq <- data.frame(
  stringsAsFactors = FALSE,
  Letter = c("E","T","A","O","I","N","S","R","H","D","L","U","C",
             "M","F","Y","W","G","P","B","V","K","X","Q","J","Z"),
  # tail of this vector filled in on the assumption that it follows the
  # commonly cited English letter-frequency table, as the visible values do
  Frequency = c(12.02, 9.10, 8.12, 7.68, 7.31, 6.95, 6.28, 6.02, 5.92, 4.32, 3.98, 2.88,
                2.71, 2.61, 2.30, 2.11, 2.09, 2.03, 1.82, 1.49, 1.11, 0.69, 0.17, 0.11, 0.10, 0.07)
)
tidyTuesday: December 10, 2019
Replicating plots from simplystatistics. One nice twist is the development of a tidytuesdayR package to grab the necessary data in an easy way. You can install the package from GitHub. I will also use fiftystater and ggflags.
devtools::install_github("thebioengineer/tidytuesdayR")
devtools::install_github("ellisp/ggflags")
devtools::install_github("wmurphyrd/fiftystater")
tuesdata <- tidytuesdayR::tt_load(2019, week = 50)
## --- Downloading #TidyTuesday Information for 2019-12-10 ----
## --- Identified 4 files available for download ----
## --- Downloading files ---
## Warning in identify_delim(temp_file): Not able to detect delimiter for the file.