[Coral-List] Identifying bleaching resistant reef locations: Model calibration issues

Scott Wooldridge swooldri23 at gmail.com
Tue Apr 12 02:49:02 EDT 2016


Hi Jim (Hendee) and Ruben (van Hooidonk),


Sorry, it wasn’t my intention to dismiss Ruben's comments, only to try and
keep the discussion to the science of the issue, rather than secondary
issues associated with the statistical calibration of predictive models.



Indeed, my summary of the BleachRisk model and its predictive capabilities
deliberately ignores the details underpinning the extensive calibration of
the model, which, might I say, doesn’t actually fit so well with Ruben's
summary of the situation.



The reason being, BleachRisk is based on a Bayesian Belief Network (BBN)
model, and as such is not based on a binary ‘yes’ or ‘no’ calibration
process. BBNs utilise a probabilistic, risk-based framework that predicts
p(bleaching = yes) and p(bleaching = no), where the probability can lie
anywhere across the continuum 0–1. In essence, the model is set up to
maximise the posterior distribution of the model variables; for the
BleachRisk model, this is principally the ‘learnt’ bleaching resistance
node. To aid this process, I utilise the Netica (www.norsys.com) software.
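To give a concrete (and purely illustrative) sense of what a probabilistic
prediction looks like, here is a minimal Python sketch of a Bayes update for
a single bleaching node given one piece of evidence. The numbers and the
"high SST" evidence variable are invented for the example; this is not the
BleachRisk model or the Netica API, just the underlying arithmetic:

```python
# Minimal illustration (NOT the BleachRisk model) of how a BBN returns a
# bleaching probability rather than a binary yes/no answer.
# All numbers below are invented for the example.

prior = {"yes": 0.3, "no": 0.7}              # p(bleaching) before evidence

# Hypothetical conditional probabilities p(high SST observed | bleaching)
likelihood_high_sst = {"yes": 0.8, "no": 0.2}

def posterior(prior, likelihood):
    """Bayes update: p(bleaching | evidence), lying on the 0-1 continuum."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())                 # normalising constant
    return {s: v / z for s, v in unnorm.items()}

post = posterior(prior, likelihood_high_sst)
print(post)  # p(bleaching = yes) ≈ 0.63, p(bleaching = no) ≈ 0.37
```

A full BBN simply chains many such updates across the network's conditional
probability tables, which is what Netica's belief updating automates.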



In essence, for each reef location (data point), Netica reads in the input
values for all the variables/nodes, except for any finding for the
unobserved nodes (i.e. bleaching status). It then does belief updating to
generate beliefs for each of the unobserved nodes. It goes back and checks
the actual values for those nodes, compares them with the beliefs
generated, and accumulates the comparisons into summary statistics. When
Netica is done, it produces a report for each of the unobserved nodes.
This report includes a confusion matrix, error rate, calibration table,
quadratic (Brier) score, logarithmic loss score, spherical payoff score,
surprise indexes, and node/variable sensitivity. For binary nodes (such as
the bleaching node) the report also includes test sensitivity and
specificity, and the positive and negative predictive values, enabling
receiver operating characteristic (ROC) plots to be developed and analysed.
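The binary-node statistics in that report follow standard definitions,
which can be sketched in a few lines of Python. The data below are
invented, and this is not Netica's output format, just the textbook
formulas for the Brier score, logarithmic loss, and confusion-matrix
metrics:

```python
import math

# Toy predicted p(bleaching = yes) and observed outcomes (1 = bleached).
# Invented data; standard definitions of the scores Netica reports.
probs = [0.9, 0.8, 0.3, 0.2, 0.4, 0.1]
obs   = [1,   1,   0,   0,   1,   0]
n = len(obs)

# Probability-based scores (lower is better for both)
brier = sum((p - o) ** 2 for p, o in zip(probs, obs)) / n
logloss = -sum(o * math.log(p) + (1 - o) * math.log(1 - p)
               for p, o in zip(probs, obs)) / n

# Confusion matrix at a 0.5 decision threshold
pred = [1 if p >= 0.5 else 0 for p in probs]
tp = sum(1 for y, o in zip(pred, obs) if y == 1 and o == 1)
tn = sum(1 for y, o in zip(pred, obs) if y == 0 and o == 0)
fp = sum(1 for y, o in zip(pred, obs) if y == 1 and o == 0)
fn = sum(1 for y, o in zip(pred, obs) if y == 0 and o == 1)

sensitivity = tp / (tp + fn)   # true-positive rate (= 2/3 for this toy data)
specificity = tn / (tn + fp)   # true-negative rate
ppv = tp / (tp + fp)           # predictive value positive
npv = tn / (tn + fn)           # predictive value negative
print(brier, logloss, sensitivity, specificity, ppv, npv)
```

Sweeping the 0.5 threshold across 0–1 and plotting sensitivity against
(1 − specificity) gives the ROC curve mentioned above.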



The nature of this calibration process is actually more focused on
learning the structural (conditional) behaviour of the modelled system
than on maximising predictive accuracy. Indeed, if my sole interest were
in predicting a binary ‘yes’ or ‘no’, I would choose another approach
(probably a neural network).



If people are interested in the Netica software and the calibration process
I have utilised, can I suggest reading another bleaching prediction paper
of mine, which considers these issues in more detail:



https://www.researchgate.net/publication/227295197_Learning_to_predict_large-scale_coral_bleaching_from_past_events_A_Bayesian_approach_using_remotely_sensed_data_in-situ_data_and_environmental_proxies?ev=prf_pub



I strongly recommend the Netica software, which for all intents and
purposes is a free product.


Can I also state again that it is not my intention in this work to be
critical of the NOAA predictive products (e.g., DHW, HOTSPOT, Light Stress
Damage (LSD)). I think they are excellent. I will say that again. I think
they are excellent. Where I differ from most is that I think the NOAA
products are the ‘starting’ point of the story, rather than its ‘ending’.
But then again, my main interest is in trying to explain the interacting
determinants responsible for reef-scale variability in thermal bleaching
response, and where possible, to draw attention to local adaptation actions
that may help to mitigate the risk.



Scott Wooldridge

Catchment to Reef Management Solutions, Newcastle, Australia (2280)


https://www.researchgate.net/profile/Scott_Wooldridge

