You never know what is going to happen in the deserts, so a little greenhouse research is always a good bet!
As we collectively move to platforms that support better reproducibility and open science, a few tiny challenges persist. Reference management is one. LaTeX with BibTeX is great, but at times team members are less interested in reproducibility and more in simply sharing libraries. We recently faced this challenge while collaboratively writing a very long white paper: each of us worked in a different reference-management ecosystem despite using GitHub to handle versioning and collaboration in the writing.
Here are some resources to support a decision, along with some similar anecdotal research.
RefWorks, EasyBib, EndNote, and Mendeley look promising.
A good contrast here, including discussion of Zotero.
A GradHacker review of the offerings here.
Writing collaboratively in Google Docs? Use Paperpile.
Writing in RStudio? Use Zotero.
There are great lists of pros and cons out there. Based on the various lists, my key criteria are (a) cloud storage, (b) easy use within RStudio, and (c) the ability to share a library with collaborators for a given paper.
The three competitors seem to be RefWorks, Mendeley, and Zotero.
Now I need to give them a head-to-head test shortly.
Seven magical steps into a dataframe.
By Nargol Ghazian
This is a summary of the protocol I have been using for the past few months to process all the amazing camera trap photos from the Mojave National Preserve and the Carrizo National Monument. After reading a few papers on camera trap processing and exploring the camtrapR package, I decided the best approach was to create the datasets manually, since no program can automatically detect the animals for you. This method also ensures that you obtain the best dataset for the statistical analysis you wish to perform. This seven-step guide should give you a quick rundown on how to get started with processing and maintaining a good workflow.
| Column | Description |
| --- | --- |
| 1. Year | We are working on the 2017 images |
| 2. Region | MNP is Mojave and CNM is Carrizo |
| 3. Site | Mojave or Carrizo |
| 4. Calendar date | The date the picture was taken, in dd-mm-yr. I like to do the pictures belonging to the same date for each photo rep in order. If the date is wrong, don't worry too much; just record them all as the last date of the particular week you are working on |
| 5. Microsite | Carrizo is shrub or open, 3 weeks for each. Mojave is buckhorn or Larrea, also 3 weeks for each. |
| 6. Day | This goes in a 1, 2, 3, … n order |
| 7. Rep | This refers to the camera trap station. There are 10 stations per microsite. For example, you might have four pictures for the same day in station #2 of open, so you would write 2 four times (2, 2, 2, 2), each corresponding to an image |
| 8. Photo rep | A continuous number starting at 1 and continuing until you've finished processing all your pictures for the particular site |
| 9. Animal | The animal in a hit photo. The most common are rat, rabbit, squirrel, fox, lizard, and sometimes bird. There are times when you might have to guess. If it's really hard, write 'unidentifiable'. If it's a false hit, leave it blank. |
| 10. Animal.capture | Binary: 0 = false hit, 1 = animal present |
| 11. Time block | Look at the timestamp. Was the photo taken at night, noon, in the afternoon, or in the morning? If the timestamp is wrong, guess based on the darkness or lightness. |
| 12. Night.day | Based on whether it's dark or light. |
| 13. Actual time | The actual time written on the photo. Let's hope it's the correct timestamp! |
| 14. Observations | If you see absolutely anything interesting in the photo, note it! Otherwise leave this column blank. I usually write 'x2' or 'x3' if there is more than one animal in the photo. Sometimes I write 'eyes visible' if it's dark and you can only tell the presence of the animal from its shining eyes (usually rats) |
| 15. Temp of positive | This is noted on the picture in Fahrenheit or Celsius. Whatever unit is shown, note it in your meta-data. If you've been working with one unit and a certain photo rep has a different one, just use a converter to match the units you've been using for that particular photo rep. |
| 16. Week | Either 1, 2, or 3, since there are only 3 weeks per microsite. This column is super important because sometimes the datestamps are wrong but at least the week of sampling is correct |
*Note: The only time we actually fill in anything for columns 9 and 11-15 is when we have a “hit” and there is an actual animal.
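Parts of columns 11, 12, and 15 can be double-checked or automated once the timestamps and temperatures are typed in. Here is a minimal sketch in Python; the function names and the exact hour cut-offs are my own illustration, not part of the lab protocol:

```python
def time_block(hour):
    """Column 11: map an hour (0-23) to a coarse time block."""
    if 6 <= hour < 12:
        return "morning"
    if hour == 12:
        return "noon"
    if 12 < hour < 18:
        return "afternoon"
    return "night"

def night_day(hour):
    """Column 12: call it day from roughly 6 am to 6 pm, otherwise night."""
    return "day" if 6 <= hour < 18 else "night"

def f_to_c(temp_f):
    """Column 15: convert Fahrenheit to Celsius when a photo rep mixes units."""
    return (temp_f - 32) * 5 / 9
```

When a timestamp is clearly wrong, these shortcuts do not apply and you fall back to judging light and darkness in the photo, as described in the table.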
Connecting most peripherals to a Mac is typically a snap. However, about two years ago, updates to OSX introduced challenges in connecting Onset Hobo micro data loggers to initialize them and then download stored data. I decided to finally work through these challenges instead of switching machines. This may seem trivial, but it was a bit finicky, so here are the steps, quickly summarized.
Configurations: any version of OSX 10.8 or higher likely needs these steps, particularly if your Onset product uses a serial port for communication. The steps listed below were developed on a late-2012 iMac running OSX 10.13.1 (High Sierra).
Steps to connect
Now, you are ready to explore some microclimate for your sites!
This fall I have been processing the insect and pollen samples that I collected this spring during my fieldwork in the Mojave Desert. The insects were primarily caught using pan traps and were transferred into 90% isopropyl alcohol for preservation. With the help of our lab's two undergraduate practicum students, Shobika and Shima, we are gradually getting them nicely organized into collection boxes.
I pinned many, many bees and wasps when I worked on a pollinator census during my undergrad in West Hamilton. These are the steps I use for processing insect samples:
I have also been mounting pollen samples whenever I can squeeze in the time. I collected stigmas in the field and have been storing them in small ethanol-filled tubes.
For a different experiment that I have not yet processed, I will put the tubes into a centrifuge, spin them down, and pipette out the pellet to save time and labour. Quite a few tubes from the current experiment are extremely small, and I am concerned about their ability to hold up under the force of a centrifuge. I need a less labour-intensive process for making slides for my upcoming field season. I can think of two main options right now: use sturdy tubes that I can centrifuge, or collect into small tubes without adding ethanol and mount each evening while at the research station. The latter would cut out the need to let the alcohol evaporate.
The goal is to have both an evidence folder of positive hits and a dataframe that can then be wrangled to estimate the relative efficacy of sampling, frequencies of different animals, spatiotemporal dynamics, and differences between structured treatments in the implementation of trapping.
Meta-data for manual processing spreadsheet workflow
Attributes are the column headers.
| Attribute | Description |
| --- | --- |
| year | we have many years for Carrizo (evil laugh), so good to list here in a vector |
| region | MNP for Mojave, CNM for Carrizo |
| site | if you have more than one site, put the name of the site |
| microsite | larrea, buckhorn, ephedra, or open, depending on region |
| day | this is census day: 1, 2, 3, up to however many days sampled |
| rep | if more than one rep per day |
| photo rep | just cut and paste up to the total number of photos each cam took on one day; could be 10 to 10,000 |
| animal.capture | binary: 0 = false hit, 1 = animal present |
| animal | list animal as 'none' if false hit, then animal name if one was there |
| timeblock | for animal telemetry work, morning, afternoon, night is usually sufficient: 6 am to noon, noon to 6 pm, then night |
| night.day | back-up if timestamps are incorrect; just do it by night and day using light and darkness in the photos, which is very quick |
Optional: depending on your filing system, copy all positive hits to a separate folder. Somehow, keep track of positive locations and times for subsequent analyses.
Record observations of anything ecological that pops out, such as whether there was another animal in the photo OR whether it was the same animal repeatedly recaptured.
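Once the spreadsheet is filled in, the frequency estimates mentioned above fall out of a quick group-by. A minimal sketch in Python with pandas (column names follow the meta-data table; the rows here are made-up toy data, not real captures):

```python
import pandas as pd

# Toy rows mimicking the manual-processing spreadsheet.
photos = pd.DataFrame({
    "microsite": ["open", "open", "shrub", "shrub", "shrub"],
    "animal.capture": [1, 0, 1, 1, 0],
    "animal": ["rat", "none", "rabbit", "rat", "none"],
})

# Hit rate (proportion of photos with an animal) per microsite.
hit_rate = photos.groupby("microsite")["animal.capture"].mean()

# Frequency of each animal among positive hits only.
animal_freq = (photos[photos["animal.capture"] == 1]
               .groupby("animal")["animal"].count())
```

The same two lines scale from a handful of rows to the full multi-year dataset, and adding `week` or `night.day` to the group-by gives the spatiotemporal breakdowns.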
This year the ecoblender lab attended CSEE 2017. The conference was great, covering four days of talks, workshops, and networking events. I attended a free workshop that taught some basics of mapping spatial data and the different packages to use in R. There was also a wide range of talks, most of which seemed interdisciplinary, including discussions of uncertainty in ecology, estimating the value of natural resources, and developing models of habitat selection. Here are some of the highlights I took away from the conference:
There was discussion of the usage and power of mechanistic vs. phenomenological models. This topic is discussed often in ecology (see some of that discourse here), but the terms can be defined here as:
mechanistic: includes a process (physical, biological, chemical, etc.) that can be predicted and described.
phenomenological: a correlative model that describes trends in associated data but not the mechanism linking them.
The discussion mostly framed the relationship between phenomenological and mechanistic models not as a binary but rather as a gradient of models that describe varying amounts of a particular system. However, it did touch on models such as GARP and MaxEnt that are often used for habitat selection or SDMs but neglect the mechanism driving species occurrence. Two techniques I would like to learn more about are Line Search MCMC and HMSC, a newly developed method for fitting joint species distribution models.
There was also a morning session describing the benefits of, and tools for, camera traps. These sessions are always great, as they give a chance to see some wildlife without disturbance. Topics focused on deer overabundance harming caribou populations, how wildlife bridges do not increase predation via the prey-trap hypothesis, and techniques for using wildlife cameras or drones. One talk that was particularly interesting used call-back messages, played when the camera was triggered, to see how animals respond to noises such as humans talking or a mating call.
One of the more useful things I took from the session is how to estimate animal abundance and movement when the animals in your camera traps are unmarked. One technique uses Bayesian modelling and was found to be equivalent to genetic surveys of animal fur for estimating animal abundance. This is in contrast to the more common spatial capture-recapture (SCR) methods, which either mark individuals or supplement camera trap data with other surveys. I also discovered the eMammal project at the Smithsonian, an open-access project for the management and storage of camera trap data.
Ecology and climate change:
Climate change, as always, is a big topic at these conferences. There was a good meta-analysis out of the Vellend lab showing that artificial warming of plant communities does not result in significant species loss. However, there was evidence that changes in precipitation do significantly impact plant communities. The results are very preliminary, but I look forward to seeing more in the future. I also liked a talk, now a paper in Nature, that models networks in the context of climate change. The punchline is that species composition in communities depends on dispersal, and high dispersal rates can maintain network structure even though members of the community may change.
I presented results from our upcoming paper modelling positive interactions in desert ecosystems:
Overall, I learned a lot from the CSEE 2017 conference and thought it was a healthy balance of size and events. Victoria was also a great city and made hosting the conference very easy. Next year it will be in the GTA, and I plan on connecting with the organizing committee to potentially host an R workshop at the beginning of the conference. Until then!
Full details are provided here.
The purpose of this workshop is to give a new or novice analyst the tools to analyse their data in R more effectively and efficiently. This hands-on workshop will introduce the basic concepts of R and the use of generalized linear models in R to describe patterns. Participants will be encouraged to help one another and to apply what they have learned to their own problems.
Who: The course is aimed at R beginners and novice to intermediate analysts. You do not need to have any previous knowledge of the tools that will be presented at the workshop.
Where: 88 Pond Road, York University. Room 2114 DB (TEL). Google maps
Requirements: Participants should bring a laptop with a Mac, Linux, or Windows operating system (not a tablet, Chromebook, etc.) on which they have administrative privileges. If you want to work along during the tutorial, you must have RStudio installed on your own computer. If not, you are still welcome to attend, because all examples will be presented via a projector in the classroom. Coffee and cookies will be provided for free.
It is best to deploy loggers with appropriate sensors to capture an environmental signal within a set of study sites. Nonetheless, when actively sampling plant-plant interaction dynamics, an estimate of soil moisture at that precise point in time and space is useful (at least as a covariate). We use the Delta-T SM-150 handheld unit to complement our long-term logging arrays.
Here is a brief summary of the settings/methodology we use.
Comments: Ranges you can expect, at least in the arid and semi-arid systems we have tested within California, are between 1 and 40%, but most frequently < 10%. The unit is durable, and the control unit is 'water resistant'. However, when the controller gets wet in the rain, it stops working until it dries out again (typically at least a day later). The cable is not that robust, so to be safe we insert/push the sensors into the ground using the ceramic casing.
Mini-reviews are shorter and more focused than traditional literature reviews. Their specific format varies between journals; however, they all have a few things in common: they are topical, concise, and specialized, rather than exhaustive. They quickly bring the reader up to speed on current research in a field, particularly when there has been a major change in thinking. This is in contrast to major reviews, which provide a comprehensive overview of a subfield.
Mini-reviews often synthesize recent research, offering insight and new direction in an important emerging research area. They ideally propose new ideas and hypotheses that arise from the synthesis. Challenging current views in ecology and embracing a bit of controversy is welcome. Despite being called minor, these reviews may garner higher readership and impact than major reviews, due to their conciseness, readability and relevance. I think they are particularly suited to interdisciplinary synthesis, as they do not require writing an exhaustive background from each field, making it easier to communicate the interesting or important aspects of the crossover to a wider audience.
Only a handful of ecology journals explicitly provide guidelines for a mini-review, but quite a few impose a shorter word limit (3,000–5,000 words) and limit references to around 40, essentially requiring a mini-review. Other keywords I have noted are 'topical', 'specialized', 'research reviews', 'briefings', and 'question-based'.
The following ecology-related journals either publish mini-reviews by name, have previously published mini-reviews, or have submission guidelines strongly suggesting that they welcome the format: