[Coral-List] testing effects of ocean acidification

Grottoli, Andrea grottoli.1 at osu.edu
Thu Jul 9 12:47:08 EDT 2015

Dear Peter and All:

We agree that, as OA researchers, we need to do a much better job on the experimental design side of things. It is very frustrating to review work with experimental design so poor that you cannot tell treatment effects from tank or genotype effects. That said, there are typically logistical, financial, and practical limitations that restrict us from doing ideal experiments, and instead we do practical ones. We often are limited to "B-designs" because there is no other practical way around it.

We think that recognizing when one has an A or B design is important and should be acknowledged. Whether or not one is able to configure an experiment to meet the A-design standards is a whole other issue that is, in some cases, going to be prohibitively expensive and therefore not possible. At the very least, researchers should be very up-front about their experimental design and the limits of that design. And hopefully, a paper like this helps folks to better plan their experiments with the resources they have. Hurlbert turned the field of ecology on its head with his 1984 paper, and the response to it was swift and dramatic -- but not necessarily always for the better. As a consequence, ecological studies are much more rigorous now. The downside is that some take the Hurlbert advice so literally that it sometimes takes away from obvious messages and unnecessarily restricts good science.

It would behoove the OA community to take this paper to heart by being completely transparent about their experimental designs, up front about the limitations their designs place on the interpretation of their data, and improving their experimental designs to the extent possible. OA research is incredibly time- and money-intensive, and some of the A-design suggestions will be completely impossible for some types of research. We should be cautious not to be so over-zealous and rigid about experimental design and statistics that it clouds our creativity and blocks our ability to make sound research progress.

Andrea Grottoli
Mark Warner  


Message: 1
Date: Wed, 8 Jul 2015 11:42:57 -0400
From: Peter Sale <sale at uwindsor.ca>
Subject: [Coral-List] testing effects of ocean acidification
To: coral-list at coral.aoml.noaa.gov
Message-ID: <OFF991F419.9917F189-ON85257E7C.005573B4-85257E7C.005656FB at uwindsor.ca>

Content-Type: text/plain; charset="US-ASCII"

Hi all,
I just came across this paper, and have not yet read beyond the abstract,
but it may be important to anyone doing research on effects of ocean
acidification on coral reef organisms.  With the caveat that some
statisticians are a bit over-the-top purists, but knowing that many
ecologists (including me, sometimes) are statistically over-casual, I think
the abstract contains some telling comments.  Admittedly, this article is
based on studies from all over, not just reefs, and we who work on reefs
are far better than average scientists.  Still, it would be nice to ensure we
avoid sloppy science as we accumulate information on OA effects on our
favorite system.

The article, by C. Cornwall and C. Hurd, out of UWA, just came out in ICES:

The part of the abstract that caught my eye says, with reference to
aquarium or tank experiments, "To assess the use of appropriate
experimental design in ocean acidification research, 465 studies published
between 1993 and 2014 were surveyed, focusing on the methods used to
replicate experimental units. The proportion of studies that had
interdependent or non-randomly interspersed treatment replicates, or did
not report sufficient methodological details was 95%. Furthermore, 21% of
studies did not provide any details of experimental design, 17% of studies
otherwise segregated all the replicates for one treatment in one space,
15% of studies replicated CO2 treatments in a way that made replicates
more interdependent within treatments than between treatments, and 13% of
studies did not report if replicates of all treatments were randomly
interspersed. As a consequence, the number of experimental units used per
treatment in studies was low (mean = 2.0)."
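(Not part of the original message, but as a minimal sketch of the "randomly interspersed" criterion the abstract describes: rather than segregating all replicates of one treatment in one bench or water bath, the assignment of treatments to tank positions can be randomized. The function name and layout below are hypothetical, purely for illustration.)

```python
import random

def intersperse_treatments(treatments, reps_per_treatment, seed=None):
    """Randomly assign treatment replicates to tank positions.

    Returns a list where index i gives the treatment for tank position i,
    so replicates of one treatment are interspersed among the others
    instead of being grouped together in one space.
    """
    rng = random.Random(seed)
    # One entry per true replicate tank, then shuffle positions.
    layout = [t for t in treatments for _ in range(reps_per_treatment)]
    rng.shuffle(layout)
    return layout

# Example: two CO2 treatments, four replicate tanks each.
layout = intersperse_treatments(["ambient", "high_CO2"], 4, seed=1)
print(layout)
```

A fixed seed makes the randomization reproducible and reportable in the methods section, which speaks directly to the "did not report if replicates were randomly interspersed" problem the survey flags.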

Peter Sale

