Friday, December 16, 2016

The World Without Evolution



Nine years ago, Alan Weisman posed the scenario “The World Without Us.” The premise was that, all of a sudden, people disappear entirely from the world. “What happens next?” The rest of the book described the slow decay of buildings, roads, bridges, and other infrastructure, and the gradual encroachment of wildlife on formerly human-dominated landscapes. The same scenario has been depicted in various movies, including Twelve Monkeys, where humans dwelling underground send out hazmat-suited convicts to collect biological samples from the surface in hopes of a cure for the devastating disease that destroyed most of humanity. The images of lions on buildings and bears in streets can seem as jarring – ok, maybe not quite as jarring – as the Nazi symbols on American icons in the Amazon.com adaptation of Philip K. Dick’s The Man in the High Castle.

Twelve Monkeys
The premise of this blog post is related, but even more dramatic: what if evolution stopped – RIGHT NOW? What would happen? The context for this question is rooted in my recent uncertainty, described in a paper and my book, about how eco-evolutionary dynamics might be – mostly – cryptic. That is, whereas most biologists seek to study eco-evolutionary dynamics by asking how evolutionary CHANGE drives ecological CHANGE (or vice versa), contemporary evolution might mostly counteract change. A classic example is encapsulated by so-called Red Queen Dynamics, where it takes all the running one can do just to stay in the same place. More specifically, everything is evolving all around you (as a species) and so, if you don’t evolve too, you will become maladapted relative to the other players in the environment, which will cause you to go extinct. The same idea is embodied – at least in the broad sense – in the concept of evolutionary rescue, whereby populations would go extinct were it not for their continual evolution rescuing them from environmental change.

From Kinnison et al. (2015)

So how does one study cryptic eco-evolutionary dynamics? The current gold standard is to have treatments where a species can evolve and other treatments where it cannot, with ecological dynamics contrasted between the two cases. The classic example of this approach is that implemented by Hairston, Ellner, Fussmann, Yoshida, Jones, Becks, and others, who used chemostats to compare predator-prey dynamics between treatments where the prey (phytoplankton) could evolve and treatments where they could not. This evolution versus no-evolution contrast was achieved by the former having clonal variation present (so selection could drive changes in clone frequencies) and the latter having only a single clone (so selection could not drive changes – unless new mutations occurred). These experiments revealed dramatic effects of evolution on predator-prey cycles, and a number of conceptually similar studies by other investigators have yielded similar results (the figure below is from my book).


One limitation of these experiments is that the evolution versus no-evolution treatments are confounded with variation versus no-variation treatments. That is, ecological differences between the treatments could partly reflect the effects of evolution and partly the effects of variation independent of its evolution. An alternative approach is a replacement study, where the same variation is present in both treatments and, although both might initially respond to selection, genotypes in the no-evolution treatment are continually removed (perhaps each generation) by the experimenter and replaced with the original variation. In this case, you still have an evolution versus no-evolution treatment, but both have variation manifest as multiple genotypes – at least at the outset.
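To make the two designs concrete, here is a toy sketch (my own illustration with made-up fitness values, not a model of any particular experiment) of haploid selection on two clones. Setting p0 = 1 mimics the single-clone chemostat treatment (no variation, so nothing for selection to change), while the replace flag mimics the replacement design, in which the experimenter restores the original clone frequencies every generation:

```r
# A toy illustration (hypothetical fitnesses, not data from any experiment):
# two clones under haploid selection, with an optional "replacement"
# treatment that resets clone frequencies to their starting value each
# generation.
evolve <- function(p0 = 0.5, w = c(1.0, 1.2), gens = 20, replace = FALSE) {
  p <- numeric(gens)
  p[1] <- p0
  for (t in 2:gens) {
    pt   <- if (replace) p0 else p[t - 1]  # replacement restores the original mix
    p[t] <- pt * w[2] / (pt * w[2] + (1 - pt) * w[1])  # selection favours clone 2
  }
  round(p, 3)
}

evolve()                 # evolution treatment: clone 2 sweeps toward fixation
evolve(replace = TRUE)   # replacement treatment: frequencies barely move
evolve(p0 = 1)           # single-clone treatment: no variation, no change
```

In the replacement treatment, selection still operates within each generation, but its cumulative signature – evolution – is erased, which is exactly the contrast the design needs.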

All of these studies – and others like them – impose treatments on a single focal species, and so the question is “what effect does the evolution of ONE species have on populations, communities, and ecosystems?” Estimates of the effect of evolution of one species on ecological variables in nature, regardless of the method, are then compared to non-evolutionary effects of abiotic drivers, with a common driver being variation in rainfall. These comparisons of “ecology” to “evolution” (pioneered by Hairston Jr. et al. 2005) generally find that the evolution of one species can have as large an effect on community and ecosystem parameters as can an important abiotic driver, which is remarkable given how important those abiotic drivers (temperature, rain, nutrients, etc.) are known to be for ecological dynamics (the figure below is from my book).
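In rough form (my paraphrase, not the authors’ exact notation), the partitioning works like this: if an ecological variable $X$ depends on a heritable trait $z$ and on an abiotic driver $k$, then its rate of change can be decomposed as

$$\frac{dX}{dt} = \underbrace{\frac{\partial X}{\partial z}\,\frac{dz}{dt}}_{\text{evolution}} + \underbrace{\frac{\partial X}{\partial k}\,\frac{dk}{dt}}_{\text{ecology}},$$

so the trait-driven (evolutionary) term can be weighed directly against the driver-driven (ecological) term.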


A more beguiling question is “how important is ALL evolution in a community?” Imagine an experiment could be designed to quantify the total effect of evolution of all species in a community on community and ecosystem parameters. How big would this effect be? Would it explain 1% of the ecological variation? 10%? 90%? Presumably, the evolutionary effects of the whole community won’t be a simple summation of the evolutionary effects of each of the component species. I say this mainly because studies conducted thus far show that single species – albeit often “keystone” or “foundation” species – can have very large effects on ecological variables. A simple summation of these effects across multiple species would very soon leave no variation to explain. Hence, the evolution of one species is presumably offset to some extent by the evolution of other species when it comes to emergent properties of the community and ecosystem.

It is presumably impossible to have a real experiment with evolution and no-evolution treatments at the entire community level in natural(ish) systems. We must therefore address the question (What would happen if all evolution ceased RIGHT NOW?) as a thought experiment. 

I submit that the outcome of a world-without-evolution experiment would be:
  1. Within hours to days, the microbial community at every place in the world will shift dramatically. The vast majority of species will go extinct locally and a few will become incredibly abundant – at least in the short term.
  2. Within days to weeks, many plants and animals that interact with microbes (and what organisms don’t?) will show reductions in growth and reproduction. Of course, some benefits will also initially accrue as – all of a sudden – chemotherapy, antibiotics, pesticides, and herbicides become more effective. The main point is that the performance of many plants and animals will begin to shift within a week.
  3. Within months, the relative abundance and biomass of plants and animals will shift dramatically as a result of these changing microbial communities and their influence on plant and animal performance.
  4. Within years, many animals and plants will go extinct. Most of these will go extinct because the shorter-lived organisms on which they depend will have non-evolved themselves into extinction.
  5. Within decades, the cascading effects of species extinctions will mean that most animals and plants will go extinct, as will the microbes that depend on them. The few species that linger will be those that are very long lived and that have resting eggs or stages.
  6. Within centuries, all life will be gone. Except tardigrades, presumably.

The above sequence, which I think is inevitable, suggests several important points.

1. Microbial diversity – and its evolution – is probably the fundamentally irreducible underpinning of all ecological systems.

2. Investigators need to find a way to study eco-evolutionary STABILITY, as opposed to just eco-evolutionary DYNAMICS.

3. Evolution is by far the most important force shaping the resistance, resilience, stability, diversity, and services of our communities and ecosystems.


Fortunately, evolution is here to stay!

Friday, December 2, 2016

Wrong a lot?

[ This post is by Dan Bolnick; I'm just putting it up.  – B. ]

In college, my roommates and I once saw an advertisement on television that we thought was hilarious. A young guy was talking to a young woman. I don’t quite recall the lead-up, but somehow the guy made an error, and admitted it. Near the end of the ad she said “I like a guy who can admit that he’s wrong”. The clearly-infatuated guy responded a bit over-enthusiastically, saying “Well actually, I’m wrong a LOT!” This became a good-natured joke/mantra in our co-op: when someone failed to do their dishes, or cooked a less-than-edible meal for the group, everyone would chime in “I’m wrong a lot!”

Twenty years later, I find myself admitting I was wrong – but hopefully not a lot.

A bunch of evolutionary ecology theory makes a very reasonable assumption: phenotypically similar individuals, within a population, are likely to have more similar diets and compete more strongly than phenotypically divergent individuals within that same population. This assumption underlies models of sympatric speciation (1) as well as the maintenance of phenotypic variance within populations (2, 3). But it isn’t often tested directly. In 2009, a former undergraduate and I published a paper that lent support to this common assumption (4). The idea was simple: we measured morphology and diet on a large number of individual stickleback from a single lake on Vancouver Island, then tested whether the difference in phenotype between all pairs of individuals was correlated with their dissimilarity in diet (measured by stomach contents, or stable isotopes). The prediction was that these should be positively correlated. And that’s what we reported in our paper, with the caveat (in the title!) that the association was weak.


An excerpt from Bolnick and Paull 2009 that still holds, showing the theoretical expectation motivating the work.

Turns out, it was really, really weak. Because we were using pairwise comparisons among individuals, we used a Mantel test to obtain P-values for the correlation between phenotypic distance and dietary overlap (stomach contents) or dietary difference (isotopes). I cannot now reconstruct how this happened, but I clearly thought that the Mantel test function in R, which I was just beginning to learn how to use, reported the cumulative probability rather than the extreme tail probability. So, I took the P reported by the test, subtracted it from 1 to get what I thought was the correct number, and found I had a significant trend. It didn’t look significant to my eye, but it was a dense cloud with many points, so I trusted the statistics and inserted the caveat “weak” into the title. I should have trusted my ‘eye test’: the statistics, as I had transformed them, were wrong.
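To make the mistake concrete, here is a minimal sketch with simulated data (hypothetical numbers, not our stickleback measurements), using the mantel() function from the vegan package. Whether or not this is the exact function I used in 2008, the logic – and the error – are the same:

```r
# A minimal sketch with simulated (hypothetical) data -- not the original
# 2009 analysis. Requires the vegan package.
library(vegan)

set.seed(1)
n <- 50
morphology <- rnorm(n)   # a hypothetical trait axis
diet       <- rnorm(n)   # a hypothetical diet axis, unrelated to the trait

pheno.dist <- dist(morphology)   # all pairwise phenotypic distances
diet.dist  <- dist(diet)         # all pairwise dietary dissimilarities

fit <- mantel(pheno.dist, diet.dist, permutations = 999)

fit$statistic    # the Mantel correlation (near zero for these random data)
fit$signif       # the tail probability: this IS the P-value, as reported
1 - fit$signif   # my 2008 "correction" -- wrong, and capable of turning a
                 # clearly non-significant P (say, 0.97) into "P = 0.03"
```

The last line is the entire mistake: one subtraction that does not belong.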

Recently, Dr. Tony Wilson from CUNY Brooklyn tried to recreate my analysis, so that he could figure out how it worked and apply it to his own data. I had published my raw data from the 2009 study in an R package (5), so he had the data. But he couldn’t quite recreate some of my core results. I dug up my original R code, sent it to him, and after a couple of back-and-forth emails we found my error (the 1 − P in the Mantel test analysis). I immediately sent a retraction email to the journal (Evolutionary Ecology Research), and the retraction will be appearing soon in the print version. So let me say this clearly: I was wrong. Hopefully, just this once.

The third and fourth figures in Bolnick and Paull 2009 are wrong. The trend is not significant, and should be considered a negative result.

I want to comment, briefly, on a couple of personal lessons learned from this.

 First of all, this was an honest mistake made by an R-neophyte (me, 8 years ago). Bolnick and Paull was the first paper that I wrote using R for the analyses. Mistakes happen. It is crucial to our collective scientific endeavor that we own up to our individual mistakes, and retract as necessary. It certainly hurt my pride to send that retraction in (Fig. 3), as it stings to write this essay, which I consider a form of penance. Public self-flagellation by blogging isn’t fun, but it is important when justified. We must own up to our failures. Something, by the way, that certain (all?) politicians could learn.

Drowning my R-sorrows in a glass of Hendry Zinfandel.

Second, I suspect that I am not the only biologist out there to make a small mistake in R code that has a big impact. One single solitary line of code, a “1 –” that does not belong, and you have a positive result where there should be a negative result. Errors may arise from a naïve misunderstanding of the code (as was my problem in 2008), or from a simple typographic error. I recently caught a collaborator (who will go unnamed) in a tiny R mistake that accidentally dropped half our data, rendering some cool results non-significant (until we figured out the error while writing the manuscript). So: how many results, negative or positive, entering the published literature are tainted by a coding mistake like mine? We just don’t know. Which raises an important question: why don’t we review R code (or other custom software) as part of the peer-review process? The answer, of course, is that this is tedious, code may be slow to run, it requires a match between the authors’ and reviewers’ programming knowledge, and so on. Yet proof-reading, checking, and reviewing statistical code is at least as essential to ensuring scientific quality as proof-reading the prose in the introduction or discussion of a paper. I now habitually double- and triple-check my own, and my collaborators’, R code.
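One lightweight habit that helps (a toy illustration with made-up data, not a prescription, and certainly not the actual code from the collaboration mentioned above): assert what you believe about your data at each step, so that silent mistakes fail loudly instead of quietly poisoning everything downstream.

```r
# Toy data: 100 fish, but isotope measurements for only 90 of them.
fish     <- data.frame(id = 1:100, length.mm = rnorm(100, 50, 5))
isotopes <- data.frame(id = 1:90,  d15N      = rnorm(90))

# merge() performs an inner join by default, silently dropping the 10
# unmatched fish -- exactly the kind of quiet data loss described above.
merged <- merge(fish, isotopes, by = "id")

# An explicit assertion turns the silent loss into an immediate error.
stopifnot(nrow(merged) == nrow(fish))   # stops here: 90 != 100
```

Had a check like that last line been in our script, the dropped-data bug would have surfaced the day it was written, not while drafting the manuscript.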

Third, R is a double-edged sword. Statistical programming in R or other languages has taken evolution and ecology by storm in the past decade. This is mostly for the best. It is free, and extremely powerful and flexible. I love writing R code. One can do subtle analyses and beautiful graphics, with a bit of work learning the syntax and style. But with great power comes great responsibility. There is a lot of scope for error in lengthy R scripts, and that worries me. On the plus side, the ability to save R scripts is a great thing. I did my PhD using SYSTAT, doing convoluted analyses with a series of drag-and-drop menus in a snazzy GUI program. It was easy, intuitive, and left no permanent trail of what I did. So, I made sure I could recreate a result a few times before I trusted it wholly. But I simply don’t have the ability to just dust off and instantly redo all the analyses from my PhD.  Saving (and annotating!!!!!) one’s R code provides a long-term record of all the steps, decisions, and analyses tried. This archive is essential to double-checking results, as I had to do 8 years after analyzing data for the Bolnick and Paull paper.

Fourth, I found myself wondering about the balance between retraction and correction. The paper was testing an interesting and relevant idea. The fact that the result is now a negative result, rather than a positive one, does not negate the value of the question, nor does it negate some of the other results presented in the paper about among-individual diet variation. I wavered on whether to retract, or to publish a correction. In the end, I opted for a retraction because the core message of the paper should be converted to a negative result. This would entail a fundamental rewriting of more than half the results and most of the discussion. That’s a bigger change than a correction allows. Was that the right approach?

To conclude, I’ve recently learned through painful personal experience how risky it can be to use custom code to analyze data. My confidence in our collective research results will be improved if we can find a way to better monitor such custom code, preferably before publication. As Ronald Reagan once said, “Trust, but verify”. And when something isn’t verified, step forward and say so. I hereby retract my paper:
Daniel I. Bolnick and Jeffrey S. Paull. 2009. Morphological and dietary differences between individuals are weakly but positively correlated within a population of threespine stickleback. Evol. Ecol. Res. 11, 1217–1233.
I still think the paper poses an interesting question, and might be worth reading for that reason. But if you do read (or, God forbid, cite) that paper, keep in mind that the better title would have been: “Morphological and dietary differences between individuals are NOT positively correlated within a population of threespine stickleback”, and know that the trends shown in Figures 3 and 4 of the paper are not at all significant. Consider it a negative-result paper now.
The good news is that now we are in greater need of new tests of the prediction illustrated in the first picture, above.

 A more appropriate version of the first page of the newly retracted paper.

1. U. Dieckmann, M. Doebeli, On the origin of species by sympatric speciation. Nature 400, 354-357 (1999).
2. M. Doebeli, Quantitative genetics and population dynamics. Evolution 50, 532-546 (1996).
3. M. Doebeli, An explicit genetic model for ecological character displacement. Ecology 77, 510-520 (1996).
4. D. I. Bolnick, J. Paull, Diet similarity declines with morphological distance between conspecific individuals. Evolutionary Ecology Research 11, 1217-1233 (2009).
5. N. Zaccarelli, D. I. Bolnick, G. Mancinelli, RInSp: an R package for the analysis of intra-specific variation in resource use. Methods in Ecology and Evolution, DOI: 10.1111/2041-210X.12079 (2013).
