Photo courtesy of Jacob Lucero
Code, data, and exploration on GitHub.
Do native species density trials at two scales – micro and mesoscale – using pots and large plastic buckets. A 50:50 mix of potting soil and sand works best.
Consider intra- and interspecific competition series – with replacement, i.e. keep net densities per pot consistent. Finally, perhaps consider competition with an exotic such as red brome.
For three natives a, b, and c, that means the pairwise combinations a*b, a*c, and b*c.
Or run each species solo, then against red brome.
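The replacement-series design above can be sketched programmatically. A minimal example, assuming three hypothetical species codes (a, b, c) and an arbitrary net density of 8 plants per pot (both placeholders, not values from the plan):

```python
from itertools import combinations

# Hypothetical species codes; net density per pot held constant (replacement design)
natives = ["a", "b", "c"]
exotic = "red_brome"
net_density = 8  # assumed total plants per pot

treatments = []
# Intraspecific (solo) series
for sp in natives:
    treatments.append({sp: net_density})
# Interspecific pairs with replacement: densities still sum to the same net total
for sp1, sp2 in combinations(natives, 2):
    treatments.append({sp1: net_density // 2, sp2: net_density // 2})
# Each native against the exotic
for sp in natives:
    treatments.append({sp: net_density // 2, exotic: net_density // 2})

for t in treatments:
    assert sum(t.values()) == net_density  # replacement: net density consistent
print(len(treatments))  # 3 solo + 3 pairs + 3 vs exotic = 9
```

Generating the treatment list this way makes the replacement constraint explicit and easy to check before any pots are filled.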
We had a more ambitious set of goals this season.
- Habitat use frequency estimates. Tools: a. telemetry of blunt-nosed leopard lizards with at least 1200 total relocations split between AM/PM, with an estimate of shrub-open use and behavior. b. cam traps at shrub-open on still mode.
- Behavior estimates. Tools: a. cam traps on video mode with a total of 100 hours recording time. b. direct observation (with recording too) by humans of lizards and grasshoppers for a total of 100 hrs.
- Shrub-plant-animal interaction estimates. Tools: exclosures at two sites to exclude different taxa in shrub-open mesohabitats. a. cages. b. cams c. vacuums. d. sweeps.
- Temperature profile estimates. Tools: a. pulse of collars on lizards b. loggers at microhabitat scales.
- Census grasshoppers. Tools: stick, sweep, and vacuum. Also do direct observation to assess whether they are significant consumers.
1a. Mario and Steph. Goals 1,2,3,5.
1b. Malory and Nargol. Goals 1,2,5.
2. Emily and Kat. Goals 1 & 4.
Deploy one set of cam traps on still mode in a totally shrub-free zone.
Get a solid handle on behavior by verts and inverts in the context of paired interactions with plants (at the micro-scale) and shrubs (at the mesoscale).
Need an assay of insect diversity.
Predictions to test
Do relocations map onto where scat is found too?
Is there scat fidelity from day to day?
Do telemetry relocations and, conversely, scat presence within a likelihood MCP (a minimum convex polygon estimating a 95% chance the animal is present within the area) correlate with one another?
i.e. imagine a telemetry polygon and a scat polygon too – never been done!! – and then we statistically overlay them.
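One way the overlay could be quantified: build a polygon from the telemetry relocations and ask what proportion of scat finds fall inside it. A self-contained sketch using a convex hull as a stand-in for the MCP (all coordinates below are randomly generated placeholders, not real data):

```python
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def point_in_polygon(pt, poly):
    """Standard ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            xint = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xint:
                inside = not inside
    return inside

# Hypothetical relocation and scat coordinates (e.g., metres on a local grid)
random.seed(1)
telemetry = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
scat = [(random.uniform(20, 120), random.uniform(20, 120)) for _ in range(30)]

mcp = convex_hull(telemetry)  # 100% MCP here; a 95% MCP would trim outlying fixes
overlap = sum(point_in_polygon(s, mcp) for s in scat) / len(scat)
print(f"Proportion of scat points inside telemetry MCP: {overlap:.2f}")
```

In practice a GIS or the R packages typically used for home-range estimation would do this with proper 95% MCPs; the sketch just shows the logic of the overlay.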
How to test
1. Do relocations this Saturday (day #1) and enter them into the usual relocation datasheet attached. Do morning and afternoon.
Enter data that evening and do a baton handoff to the scat team.
2. On Sunday (day #2), scat people ‘check’ all locations where there was certainly a lizard. For each day-1 relocation, is there scat the next day?! So, imagine on day 1 there were 100 spots where lizards were spotted. On day 2, what proportion of these have scat deposited?
3. On Sunday (day #2), team telemetry repeats the process and finds another 100 spots, or whatever they can, where they see lizards. Same process – enter and hand off to team scat.
4. On Monday (day #3), same process – team scat checks day-2 telemetry relocations for scat – so we hold scat scent cones etc. as best we can to 1 day old, and team telemetry repeats and finds new spots for the next sampling.
5. On Tuesday (day #4), team scat checks team telemetry relocations from the day before (day #3).
A total of 4 days of sampling with 3 statistical days to test for scat detections at a 1-day lag where lizards were spotted. SO POWERFUL.
NOW – as you can imagine – there are also a few bonus opportunities here… 🙂
A. If the dogs have time, check back at more than a 1-day lag – i.e. on day #4, dogs can check ALL previous days (3, 2, 1) – this gets at the second main prediction, i.e. site fidelity. It would be amazing to know this.
B. If team telemetry has the people and the receivers, it can also go the other way – team telemetry looks for lizards the following day wherever scat was detected – so we pass the baton back and forth.
C. Team scat, if they have time daily, checks other sites to fill in the region more – i.e. the polygon idea – to see how well regional, NOT just point, sampling works.
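The core 1-day-lag calculation from the steps above can be sketched in a few lines. A toy example assuming a simple record format of relocation spots per day and scat finds per day (all spot IDs and counts below are made up for illustration):

```python
# Telemetry relocations: day -> set of spot IDs where a lizard was seen
relocations = {
    1: {"A", "B", "C", "D"},
    2: {"B", "E", "F"},
    3: {"G", "H"},
}
# Scat checks: day -> set of spot IDs where team scat found scat that day
scat_checks = {
    2: {"A", "C"},  # checking day-1 relocations
    3: {"B", "F"},  # checking day-2 relocations
    4: {"G"},       # checking day-3 relocations
}

# Proportion of each day's relocations with scat detected the next day
props = {}
for day, spots in sorted(relocations.items()):
    found = scat_checks.get(day + 1, set()) & spots
    props[day] = len(found) / len(spots)
    print(f"Day {day}: {len(found)}/{len(spots)} relocations had scat next day")
```

The same structure extends directly to bonus A (site fidelity) by intersecting a day's relocations with scat checks at lags of 2 or 3 days instead of 1.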
Ephedra regional gradient
My biggest project examines positive interactions along a regional gradient of continentality. The immediate question, though, is what is continentality? What abiotic and biotic variables change along this gradient in addition to plant-plant interactions? When we initially constructed this gradient, the two main considerations were aridity and cold stress. For plants in the deserts of California these are two very important considerations. After two years of conducting this experiment, I had observed very different climate profiles between the seasons. The most striking difference was in my plant phytometers between the two seasons. In the 2015-2016 growing season, the majority of my plants were present in the San Joaquin Desert. This desert is generally colder and wetter than the more continental Mojave Desert to the east. However, in 2016-2017 the San Joaquin Desert sites had few plants of my chosen phytometer relative to the abundant Mojave Desert sites. All my plants were present at all my sites at some point, suggesting that this gradient shifts with inter-annual variability. Let’s take a look at what some of that looks like:
San Joaquin Desert year
The 2015-2016 season, shown in black, had similar temperatures on average relative to the 2016-2017 growing season (in grey). The precipitation patterns, though, were different between years. Precipitation at these sites forms a parabola with distance from the ocean: sites closest to the ocean and those furthest inland have the highest precipitation, while sites in the middle have the least. Overall, the 2016-2017 season saw significantly more rainfall. Sites in the 2015-2016 season were extremely arid. For instance, Barstow and my site along Hwy40 saw as little as 30 mm of rainfall. The low abundance of my phytometer in the Mojave sites for that season is therefore likely because of low rainfall. However, the San Joaquin sites had similar rainfall between years, so why so few plants in 2016-2017? I believe this has to do with the cold stress factor:
Precipitation in mm (black) and temperature in °C (red) during the 2015-2016 growing season for the San Joaquin Desert (top) and Mojave Desert (bottom).
Precipitation in mm (black) and temperature in °C (red) during the 2016-2017 growing season for the San Joaquin Desert (top) and Mojave Desert (bottom).
Mojave Desert year
Both of these seasons had similar precipitation and temperature patterns. The patterns were also similar between the two deserts, but the noticeable difference that I believe contributed to low plant abundance in the San Joaquin in 2016-2017 is temperature. The year before had warmer temperatures from January onward, which is a key period for plant development. In January 2017, following the majority of the rainfall, there was a long freeze of approximately 5 days, followed by another cold period with freezing temperatures at the end of February. This period was much warmer in 2016, which is why I believe cold stress negatively affected plants in the San Joaquin Desert in 2017. The Mojave, on the other hand, saw significantly more precipitation and cooler temperatures, which together contributed to greater plant abundance.
Slicing through this climate data was interesting and challenging because of all the different ways to summarize variables. Using season means collapses a significant amount of the information and can make conclusions more difficult to derive. I am primed and excited now to dig into the plant responses!
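A toy illustration of why season means can mask biologically important structure like the January freeze (all numbers below are invented, not my site data):

```python
from statistics import mean

# Hypothetical daily temperatures (°C) for a growing season, keyed by month
daily = {
    "Nov": [12, 11, 13], "Dec": [8, 7, 9], "Jan": [-2, -3, 0],
    "Feb": [4, -1, 3], "Mar": [14, 15, 16],
}

# A single season mean looks benign...
season_mean = mean(t for month in daily.values() for t in month)
# ...while finer summaries expose the freeze events that kill seedlings
monthly_means = {m: mean(ts) for m, ts in daily.items()}
freeze_days = sum(t <= 0 for ts in daily.values() for t in ts)

print(f"Season mean: {season_mean:.1f} °C")  # hides the January cold snap
print(f"Monthly means: {monthly_means}")
print(f"Days at or below 0 °C: {freeze_days}")
```

Counting threshold events (freeze days, days below a rainfall cutoff) alongside means is one simple way to keep that information without drowning in raw daily data.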
- Read some cam trap papers.
- Check the camtrapR package and see what it does to decide if it suits your specific needs.
- Open the folder for each site, each day, each rep. Do a folder ‘Get Info’ to count the total # of pics. These are your ‘reps’ within reps, i.e. literally the total number of snapshots (or use the command line to get dir info for all your photo data).
- I would honestly just paste 0’s all the way down because many will be ‘false hits’.
- Then, open them all up and scroll through.
- Every time there is a positive hit, overwrite the 0 in the ‘animal.capture’ vector and record what it is in the ‘animal’ vector.
- Also, copy all positive-hit photos into a separate folder for additional analyses. Use a folder structure or ID system that keeps track of the place and time each photo was from. For instance, have a folder entitled positive-hits for each site, day, location, and rep, or aggregate into a single positive-hits folder but use a mechanism to ensure we know where/when each photo was taken. Do not cut and paste; copy. This is a backup mechanism for additional analyses and sharing data.
- We also want to know when animals are most active, or not; hence, check timestamps and paste them down in that column too. The ideal is actual time, but morning, afternoon, night is absolutely adequate and more rapid if we cannot automate the scraping using an R package.
- If timestamp is incorrect, do a light-dark assessment to code as night or day – this is a very rapid process.
- Record observations if there is more than one animal or if the same animal was recaptured from a previous instance. Record anything of ecological note to calibrate the quantitative data and link photo-capture processing to data mapping/translation. The goal is to accurately map photos onto numbers that represent the dynamics of the system under study.
The goal is to have both an evidence folder of positive hits and a dataframe that can then be wrangled to estimate relative efficacy of sampling, frequencies of different animals, spatiotemporal dynamics, and differences between structured treatments in the implementation of trapping.
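If the photos live in a predictable directory tree, the spreadsheet of 0s can be scaffolded automatically rather than pasted by hand. A sketch assuming a hypothetical photos/&lt;site&gt;/&lt;day&gt;/&lt;rep&gt;/ layout (the demo tree and all names are invented for illustration):

```python
import csv
import os
import tempfile

# Build a tiny hypothetical tree: photos/<site>/<day>/<rep>/IMG_nnnn.jpg
root = os.path.join(tempfile.mkdtemp(), "photos")
for site, day, rep, n in [("siteA", "day1", "rep1", 3), ("siteA", "day1", "rep2", 2)]:
    d = os.path.join(root, site, day, rep)
    os.makedirs(d)
    for i in range(n):
        open(os.path.join(d, f"IMG_{i:04d}.jpg"), "w").close()

# Walk the tree and emit one row per photo, defaulted to a false hit
rows = []
for dirpath, _, filenames in os.walk(root):
    pics = sorted(f for f in filenames if f.lower().endswith((".jpg", ".jpeg")))
    if not pics:
        continue
    site, day, rep = os.path.relpath(dirpath, root).split(os.sep)
    for i, _ in enumerate(pics, start=1):
        rows.append([site, day, rep, i, 0, "none"])  # overwrite during scoring

out = os.path.join(os.path.dirname(root), "camtrap_scaffold.csv")
with open(out, "w", newline="") as fh:
    w = csv.writer(fh)
    w.writerow(["site", "day", "rep", "photo.rep", "animal.capture", "animal"])
    w.writerows(rows)
print(len(rows))  # 5 photo rows scaffolded
```

The camtrapR package mentioned above offers similar (and richer) functionality in R, including EXIF timestamp extraction; this sketch just shows the skeleton of the manual workflow.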
Meta-data for manual processing spreadsheet workflow
Attributes are the column headers.

| attribute | description |
|---|---|
| year | we have many years for Carrizo (evil laugh), so good to list here in a vector |
| region | MNP for Mojave, CNM for Carrizo |
| site | if you have more than one site, put the name of the site |
| microsite | larrea, buchhorn, ephedra, or open, depending on region |
| day | this is census day: 1, 2, 3, up to however many days sampled |
| rep | if more than one rep per day |
| photo rep | just paste down to the total number of photos each cam took in one day; could be 10 to 10,000 |
| animal.capture | binary: 0 = false hit, 1 = animal present |
| animal | list animal as ‘none’ if false hit, then the animal name if one was there |
| timeblock | with animal telemetry work, morning, afternoon, night is usually sufficient: 6 am to noon, noon to 6 pm, then night |
| night.day | backup if timestamps are incorrect – just code by night and day using light and darkness in photos; very quick |
Optional: depending on your filing system, copy all positive hits to a separate folder. Somehow, keep track of positive locations and times for subsequent analyses.
Record observations of anything ecological that pops, such as whether there was another animal in the photo OR whether it was the same animal repeatedly recaptured.
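Once scored rows come back from multiple people, the attribute definitions above double as validation rules. A sketch of a row checker (the example row and its site/animal names are hypothetical; allowed values are taken from the table):

```python
# Allowed values lifted from the metadata table above
MICROSITES = {"larrea", "buchhorn", "ephedra", "open"}
TIMEBLOCKS = {"morning", "afternoon", "night"}

def validate(row):
    """Return a list of problems with a scored row; empty list means OK."""
    errors = []
    if row["region"] not in {"MNP", "CNM"}:
        errors.append("region must be MNP or CNM")
    if row["microsite"] not in MICROSITES:
        errors.append("unknown microsite")
    if row["animal.capture"] not in (0, 1):
        errors.append("animal.capture must be 0 or 1")
    if row["animal.capture"] == 0 and row["animal"] != "none":
        errors.append("false hits must list animal as 'none'")
    if row["timeblock"] not in TIMEBLOCKS:
        errors.append("unknown timeblock")
    return errors

row = {"year": 2017, "region": "CNM", "site": "siteA", "microsite": "ephedra",
       "day": 1, "rep": 1, "photo rep": 42, "animal.capture": 1,
       "animal": "kangaroo rat", "timeblock": "morning", "night.day": "day"}
print(validate(row))  # [] -> row passes all checks
```

Running every row through a check like this before analysis catches the common slip of a positive hit left coded as ‘none’ (or vice versa).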
This year the ecoblender lab attended CSEE 2017. The conference was great and covered four days of talks, workshops, and networking events. I attended a free workshop that taught some basics of mapping spatial data and the different packages to use in R. There was also a wide range of talks, most of which seemed interdisciplinary. These included discussions of uncertainty in ecology, estimating the value of natural resources, and developing models of habitat selection. Here are some of the highlights I took away from the conference:
There was discussion over the usage and power of mechanistic vs. phenomenological models. This is a topic discussed often in ecology (see some of that discourse here), but the two can be defined here as:
mechanistic: includes a process (physical, biological, chemical, etc.) that can be predicted and described.
phenomenological: a correlative model that describes trends in associated data but not the mechanism linking them.
The discussion mostly described the relationship between phenomenological and mechanistic models as not binary, but rather a gradient of different models that describe varying amounts of a particular system. However, it did touch upon models such as GARP and MaxEnt that are often used for habitat selection or species distribution modelling (SDM) but neglect the mechanism driving species occurrence. Two techniques I would like to learn more about are Line Search MCMC and HMSC, a newly developed method for conducting joint species distribution models.
There was also a morning session that described the benefits of and tools for using camera traps. These sessions are always great, as they give a chance to see some wildlife without disturbance. Topics focused on deer overabundance harming caribou populations, how wildlife bridges do not increase predation through the prey-trap hypothesis, and techniques for using wildlife cameras or drones. One talk that was particularly interesting used call-back messages when a camera was triggered to see how animals respond to noises such as humans talking or a mating call.
One of the more useful things I took out of the session is how to estimate animal abundance and movement when the animals in your camera traps are unmarked. One technique uses Bayesian modelling and was found to be equivalent to genetic surveys of animal fur for estimating animal abundance. This is in contrast to the more common spatial capture-recapture (SCR) methods that either mark individuals or supplement camera trap data with other surveys. I also discovered the eMammal project at the Smithsonian, an open-access project for the management and storage of camera trap data.
Ecology and climate change:
Climate change, as always, is a big topic at these conferences. There was a good meta-analysis out of the Vellend lab showing that artificial warming of plant communities does not result in significant species loss. However, there was evidence that changes in precipitation do significantly impact plant communities. The results are very preliminary, but I look forward to seeing more about them in the future. I also liked a talk, now a paper in Nature, that models networks in the context of climate change. The punchline of the results is that species composition in communities depends on dispersal, and high dispersal rates can maintain network structure even though members of the community may change.
I presented results from our upcoming paper modelling positive interactions in desert ecosystems:
Overall I learned a lot from the CSEE 2017 conference and thought it was a healthy balance of size and events. Victoria was also a great city and made hosting the conference very easy. Next year it will be in the GTA, and I plan on connecting with the organizing committee to potentially host an R workshop at the beginning of the conference. Until then!