In January 2019, Google updated their terms of service and essentially removed free access to Google Maps in R. This means that you’ll need to purchase the relevant API key (compu-speak for Application Programming Interface) from Google through your Google account to access these features in R.
So do you need it?
If you’re interested in mapping in R, you basically need it. There are some mapping packages you can use to get around relying on any Google products (Leaflet is a great example). But for all the glorious customization and overwhelming ubiquity of ggmap, this API key is essential for reproducible science in ecology-related fields. When I first encountered the problem, troubleshooting was a nightmare – everyone used ggmap, and even those who didn’t still used Google Maps as a source for their base maps. Not. Fun.
Luckily, it’s relatively cheap at $2/month for the first 100,000 static maps each month (dynamic maps, street maps, embed advanced, and dynamic street maps cost more, but we aren’t likely to use these tools in our work). Even luckier, there’s a $200 credit per month for the first year of use!
It’s a bit confusing to navigate the Google Cloud Console if you’re trying to figure it out solo (and scary, considering you’re paying for something), but the actual steps are easy and quick. There are two main steps to the process: 1) get an API key, and 2) show R your API key. There are just a few mini-steps in between.
Select a Project. If you don’t have one, create one. It won’t matter later.
Enter your billing information.
Copy your API key. Consider pasting it into a .txt file on your machine for safe keeping.
Show R your API key:
In your R console, enter this code:
library(ggmap)
register_google(key = "YOUR_API_KEY")
Run this code for every new session you need to map in, and you’re ready to go!
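Once the key is registered, you can check that everything works by pulling down a quick base map. A minimal sketch (the coordinates and zoom level below are just placeholder values, not our study site):

```r
library(ggmap)

# Assumes register_google(key = "YOUR_API_KEY") has already been run this session

# Pull a static Google base map for an example location in the Mojave
base_map <- get_map(location = c(lon = -115.9, lat = 34.1),
                    zoom = 12, maptype = "terrain")

# Plot it; add geom_point() layers on top for your sampled individuals
ggmap(base_map)
```

If the key isn’t registered (or billing isn’t enabled), `get_map()` will return an error instead of a map, which makes this a handy first test.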
a daRk tuRn
R is popular among scientists (especially ecology/conservation scientists) because of its power. But it’s basically essential for scientists because it’s free. In a field where funding is scarce and costs are high, R has been a blessing for open science and has seriously moved the discipline forward. But part of what makes R powerful is that it’s not entirely autonomous; it relies (in large part) on monolithic companies like Google to up the ante. It may not be a very expensive fee, but it is yet another barrier for researchers and open science. Hopefully someday we can return to a free, open access Google Maps. After all, open science benefits scientists, the general public, and corporations – even Google.
For the bird-cactus double mutualism project, we had planned on observing two study species: Cylindropuntia acanthocarpa (Buckhorn Cholla) and Opuntia basilaris var. basilaris (Beavertail Cactus). We also needed 3 size classes (small, medium, and large) in which to bin the cacti. This would impact our sample size and equipment list. That being said, the best laid plans of mice and men (and grad students) often go awry. I’d only briefly visited our study site the summer before I officially started at York, so we knew we would need to revisit Sunset Cove to do some preliminary exploration before getting into the trenches and collecting end-game data. On getting to the site, it was immediately apparent that we would need to examine our plants more closely; there was nearly no beavertail in sight. So we altered the protocol and added Cylindropuntia echinocarpa (Silver Cholla) into the mix. The goal? Determine the location, size, size-variability, and health of the cacti. We want a tall-ish species (so pollinators and frugivores would be interested) with plenty of variability in size, enough individuals to manipulate conditions, and healthy enough that we can expect some flowers and fruit later on. And, for fun, we took a quick look at shrubs to see if they’re associated with cacti in any respect (I don’t go into that here, but the data are available on GitHub).
Where are the cacti?
Let’s make a quick map and take a look at the cactus individuals sampled. For C. acanthocarpa, we were easily able to sample every 5 meters along 5 transects that were spaced 5 meters apart (n=105). C. echinocarpa, however, was more sparsely distributed. So, after running our first two transects 5 meters apart, we realized we needed to increase the distance between transects to 10 meters. We also weren’t able to get a cactus sample at every 5 meters, so we sampled 9 transects in total (n=98). The least common species was Opuntia basilaris, which was so rare that transects were ineffective; instead, we unsystematically searched the entire site, only to find a paltry number of individuals (n=26).
Based on the proposed protocol, we need 150 individuals of each study species to replicate each combination of variables 10 times. Ideally, the individuals manipulated between flowering and fruiting season will not be resampled in the fruiting season, as our manipulation of the flowers in April may impact the number of fruits in August. This means that C. acanthocarpa is a solid study species option. C. echinocarpa is certainly possible, but not as dominant as its cousin, and O. basilaris is out of the question.
How big are the cacti?
We’ve seen the distribution of cacti, but the size of the cacti is what’s really important for this study. We need to know if the sizes are variable enough to split into 3 size classes (small, medium, and large). We also need a general idea of their height to consider whether pollinating and frugivorous birds will engage with the flowers and fruits of the cacti at all. The three species did indeed have significantly different mean heights (Kruskal-Wallis test, p < 0.0001, df = 2, χ² = 151.52), with mean heights of 1.04 m, 0.55 m, and 0.17 m for Cylindropuntia acanthocarpa, Cylindropuntia echinocarpa, and Opuntia basilaris, respectively.
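In R, a Kruskal-Wallis test of height by species is a one-liner in base R. A sketch, assuming a data frame and column names (`cacti`, `height_m`, `species`) that are hypothetical stand-ins for our field data:

```r
# cacti: one row per individual, with a numeric height_m column
# and a species factor with three levels
kruskal.test(height_m ~ species, data = cacti)

# Mean height per species
aggregate(height_m ~ species, data = cacti, FUN = mean)
```

`kruskal.test()` reports the chi-squared statistic, degrees of freedom (number of groups minus one), and p-value directly.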
How should we bin the cacti?
One important variable of our project is size classes within a species: small, medium, and large. Because height is what may influence pollination and frugivory, we will use the “z-axis” that we measured as the factor for size. Each size class must contain enough individuals for replication. We need to decide how to bin the size classes: either we can use natural breaks present in the data, or we can create equally-sized bins for each study species. Let’s examine each species’ size distribution and base our decisions about size class breaks on that.
None of the species have distributions with natural breaks (see density plots), and, especially for our two Cylindropuntia species, we can see that there are even distances between quartiles (see boxplots). For these reasons, I propose an equal-size binning method to determine size class.
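Equal-width binning is straightforward with base R’s `cut()`. A sketch, again using the hypothetical `cacti` data frame and `height_m` column:

```r
# Split heights into three equal-width bins, labeled by size class.
# cut() with breaks = 3 divides the range of the data into three
# intervals of equal width.
cacti$size_class <- cut(cacti$height_m,
                        breaks = 3,
                        labels = c("small", "medium", "large"))

# Count individuals per size class to check replication is feasible
table(cacti$size_class)
```

Running this separately per species gives each species its own equal-width bins, which is what the size classes below reflect.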
Size-classes of cacti
But what exactly are the equal size classes for each species?
Buckhorn Cholla (C. acanthocarpa): 86 cm – 152 cm
Silver Cholla (C. echinocarpa): 46 cm – 72 cm
Beavertail (O. basilaris): 16 cm – 22 cm
We can see that Buckhorn Cholla (C. acanthocarpa) has the largest size classes, followed by Silver Cholla, and then Beavertail. Having large classes may translate more clearly to birds, and therefore be a suitable metric for testing whether bird visitation is influenced by cactus size.
Health of cacti
Another important factor to consider when exploring potential study species is their overall health. After all, are these individuals even capable of flowering and fruiting? To measure health, we created a health index based on the Wind Wolves Bakersfield Cactus Report, which classifies each individual’s health on a discrete scale of 1-5 (1 being the least healthy, and 5 being the healthiest). We considered overall paddle/branch death, as well as scarification and rot.
We can see that the Cylindropuntia species are healthier than their Opuntia counterpart. The question is: will an unhealthy population still flower/fruit as much as a healthy population? Perhaps, but this is not the question of my project.
Who is America’s Next Cactus Superstar?
Considering its abundance, size, and health, Opuntia basilaris is not a realistic contender as a study species. It is likely to be overlooked by birds, may not bloom/fruit due to poor health, and is in small supply. Therefore, I must remove it from the running. Both of the Cylindropuntias are healthy. Silver Cholla, however, is still less dominant than Buckhorn Cholla, is smaller overall, and doesn’t have the width of size classes that Buckhorn Cholla does. While these traits do not mean Silver Cholla could not be a viable study species, I propose that focusing more on Buckhorn Cholla by deepening the methods of observation (i.e., joy sampling: stationary versus mobile count data, and increased hours of focal observation) will be more beneficial to answering my study questions than a comparative study between cactus species would.
How I felt when first trying to work in R Markdown.
Writing can be scary. Writing can be scary for everyone, not just us scientists. But whether or not we enjoy it, or think we’re good at it, it’s probably the best tool for communicating our findings. So removing as much pain as possible from the process is key.
That’s why I’ve started using R Markdown for writing.
If you’re like me, the worst part about writing scientific papers is formatting. I hate it. I hate getting bogged down in font size, citation style, line numbers–all that stuff. Not only does it take me forever to get just right, but it gives me so much room to mess up stuff that isn’t based in content. If I’m spending time fighting with format, that’s time away from thinking about stuff that really matters. And the idea of switching between different journals’ format style makes me want to cry. R Markdown made worrying about that a thing of the past.
But perhaps even better than the formatting convenience R Markdown provides, it makes collaboration so much easier. This is especially true when you pair R Studio with your GitHub account. All changes and referenced files are neatly connected, and any code output included in your paper is generated by code sitting right there in the document.
So, I’ve switched to writing in R Markdown. I’ve always worked in either Word or Google Docs, and I still will if I’m writing something that isn’t going to require a lot of coordinating; but for big projects, I’m moving on up. I’m ready to get productive.
When I first tried this new step in my workflow, I felt less than skilled. I have experience in R Studio and Markdown, but when learning anything new I feel like a cat trying to type. So here are some important tips I’ve collected from my first time through the process to hopefully make it easier.
Define and fill the space R will reference when filling in format details. Three dashes (---) start and end this referential space, so write any parameter you want to fill, followed by a colon and the content you want associated with it (title: Scientific Writing in R Markdown). When you create a new .rmd file, this is already started for you. Some parameters require a few extra characters, like abstracts or authors. You’ll also need to include which output you want (a specific journal, Word doc, HTML, PDF, etc.). If you want to format in a specific journal style, you can look up different CSL (Citation Style Language) codes to reference journals here. You’ll also need to install and run the rticles package, which allows you to reference different journal format styles so your .rmd can knit to that style. After you finish the referential section, begin writing your paper below the closing three dashes.
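A minimal header might look like this (the title, author, output format, and bibliography file below are just placeholders; rticles provides journal templates like the Elsevier one shown):

```yaml
---
title: "Scientific Writing in R Markdown"
author: "Jane Doe"
output: rticles::elsevier_article
bibliography: refs.bib
---
```

Everything between the two sets of three dashes is metadata; the paper itself starts after the closing dashes.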
Know and use your syntax. Writing in R Markdown means you’re writing in plain text as opposed to rich text. Rich text is when you’re writing with all these different formatting options – italics, font, colors – all the formatting options you can see in the GUI. This is what you’re working with when you’re in Word. Plain text, which is just the text characters, is what you’ll use whenever you’re working in R. In order to get things like italics, numbered lists, or bold, you need to use certain syntax. The rich text formatting will appear after you knit. Once you get used to this, it’s a snap (here’s a handy guide to syntax). Plus, it’s one less thing to distract you when you’re trying to focus on content and ideas.
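A few of the most common bits of Markdown syntax, as you would type them in the plain-text .rmd:

```
*italics*   **bold**   `inline code`

# Section heading
## Subsection heading

1. Numbered list item
- Bulleted list item
```

When you knit, these become formatted italics, bold, headings, and lists in the output document.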
Understand citations. Probably my single favorite thing about R Markdown is the ease with which I can include citations. It took me a minute to figure out the steps, but once I did, I never wanted to type out a citation or use a Word plugin again. All you have to do is export whichever papers you could possibly want to cite from your reference manager (I use Mendeley) into a .bib file. Note what your citation key is. Mendeley automatically formats your key as author and year (@Lemon2018). After you create this, make sure the bibliography reference in your .rmd points to your new .bib file. If you know your citation key, all you need to add a parenthetical citation is [@author]. For example, you might type: “A cat likes to be scratched behind its ears [@Lemon2018]”. This will automatically populate the full citation at the end of the document. If you want to include multiple citations in one parenthetical, simply separate the keys with a semi-colon [@Lemon2017;@Lemon2014].
Code! Don’t forget you’re writing in R Studio, so being able to code directly is a huge advantage of working in R Markdown. You can include any figures or tables you would in R Studio; just insert a new chunk. For tables, I recommend the kable function in the knitr package, which creates an attractive table from a dataframe you already have. Just be sure to include “echo=FALSE” at the beginning of your chunk so you only see the outputs of your code (“include=FALSE” would hide the outputs too). Here’s a video that shows side-by-side screens of coding/writing in Markdown and how the code will look after knitting.
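A minimal table chunk might look like this (the data frame `cactus_summary` is a hypothetical stand-in for whatever you’ve already built):

````
```{r summary-table, echo=FALSE}
library(knitr)

# Render an existing data frame as a formatted table in the knit output;
# echo=FALSE hides this code but keeps the table
kable(cactus_summary,
      caption = "Mean height by species",
      digits = 2)
```
````

At knit time, the reader sees only the finished table and its caption, not the code that produced it.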
For me, it was a steep learning curve to make the transition from rich text programs to R Markdown. In this post, I included some introductory tips for switching to R Markdown. There are lots of more advanced options with R Markdown, but for this post I wanted to focus on the challenges that I struggled with while writing my first paper in an .rmd file. This doesn’t include steps that I found intuitive, questions that are associated with learning to code in R, or tricks so advanced that I didn’t run into them. But I found the answers to most of my questions by scouring the web, so even if I didn’t answer something here, the answer is probably out there. Hopefully, the tips I devised can help an intermediate R coder get the most out of their work with R Markdown.
We are currently working on a manuscript exploring the importance of microenvironmental conditions versus seed source for desert annual plants. Plant facilitation is a central tenet of the paper; however, we are more focussed on plant-seed/seedling interactions and less on plant-plant interactions. There are some confirmatory findings, i.e. that positive interactions are likely species-specific and that microenvironmental differences are important, but there are also some novel findings (teaser so you read the paper). An exceptional collaborator did this research as part of her honours thesis project, and it is absolutely publishable and technically correct. This study adopted a similar protocol to a recent contribution from the ecoblender team in Austral Ecology but with different species and a different purpose (in fact, it predates this publication and was the pilot for the protocol). However, it is sometimes a challenge to publish a good idea demonstrated empirically with either mixed results, a single protocol (i.e. controlled conditions and not field), repeated testing of previously published similar research, or limited capacity to explore either the full range of variation or extensive sample sizes. I think this study is great, and it is so tempting to overinterpret because the idea is so attractive and I like it. Nonetheless, it is prudent to select an appropriate framing of the problem and matching journals for submission. In discussing the writing, we are also concurrently considering the outlet.
Here is the workflow we used in selecting the journal.
1. Write a complete first draft of the ms.
2. Edit, repeat, and begin discussions on relationship to larger literature landscape and ideas.
3. Make a list of top journals that fit the scope of study to test hypothesis.
4. Check each journal for contemporary papers on topic to ensure that we are correct in estimate of fit/niche.
5. Check lit cited of current ms to see if certain journals are cited more frequently. Add to list and explore/rule out journals that we may cite frequently for big, specific ideas that are likely beyond our reach.
6. Make a list of journals entitled ‘journal pipeline’ recognizing and reminding ourselves that rejection is part of the process and beneficial. Remind again 🙂
7. Select journal.
8. Check lit cited within manuscript for journal citation matching patterns. Rule of thumb: a good fit should have a few key papers cited from that journal. The rationale is NOT to ingratiate with editors, but to ensure that the current research offering matches previous/related research. Some editors do, however, check the lit cited of submissions, and if there is not a single citation to a previous publication in that journal, may consider rejecting an offering outside her/his primary research expertise.
Disclaimer: I am not a fan of ratcheting from higher-tier journals to lower. This wastes the time of all participants in the peer review process. Sometimes, however, this is a disservice to my junior collaborators, as we end up in lower-tier placements but waste less time. It is an efficiency-impact trade-off, but it is difficult to predict handling times from the perceived impact of a journal. I also strongly advocate for OA journals, and this also sometimes leads to non-ISI placements. I do recognize that we each have different career needs, but I am confident that strong work – regardless of journal – can be found online easily now and will capture interest.
Linking back to the preamble that got me thinking about our collective workflow, which always includes discussion within the team, we generated a short list of three journals to consider.
PLOSONE
Journal of Plant Ecology
Journal of Arid Environments
I have enjoyed many, many papers from all of these journals. A cursory search of the lit cited, online offerings, and discussion indicates that all three are viable with some caveats.
PLOSONE – High impact, great visibility, open access, and reviewed for technically correct designs. However, it is our collective opinion that this could be a stretch. There are many general plant facilitation papers, but we have a narrower scope. Whilst reviewing for technical correctness only and not impact, PLOSONE is nonetheless very reductionistic in their experimental/result/analysis reviews. I have had perfectly appropriate, well-designed experiments rejected – never for impact reasons. There is no perfect experiment, but PLOSONE is handling a very, very high number of experiments and thus seeks substantive experimental designs.
Journal of Plant Ecology – A solid, mid-tier ecology journal. Interesting papers on facilitation. More emphasis on ecology than we necessarily tackle in this particular ms, and we are also focussed on plant-seed interactions. Seeds are the key life-stage in this study.
Journal of Arid Environments – I have read many papers from this journal over the years and always enjoyed them. Sometimes less ecological and lower impact relative to the previous two options.
How to decide – In summary, all three are certainly viable, with acceptance rates and handling times that are difficult to estimate. We decided to examine the following questions explicitly to move forward, and in doing so, found the perfect fit (and a surprise too).
1. In PLOSONE are there a few seed biology/ecology papers or ecotype/reciprocal common garden papers that are comparable in sample size and number of species tested?
2. In JoPE, are there any seed biology/seed ecotype papers or is it more plant focussed?
3. In J of Arid Envts, are there a few plant facilitation papers or seed ones?
We had ranked it last for no other reason than assuming it was less ecological and more broad – yet there were many perfect papers related to our topic and design in the Journal of Arid Environments!
Journal of Arid Environments is a great fit for this paper. Concerns include lowest IF, non-OA journal, and handling times. We will keep you posted, but I thought it would be interesting to share how we approached submission of an interesting, well-executed experiment that is a mix of confirmatory and insightful findings.