Hey, I made it halfway through the year writing my newsletter every week. I'm proud of that!
I'm still spending half my time in Santa Cruz, half my time in Berkeley. Still in the throes of deciding where to move and what to do when I graduate in December. Here's a good song. Now for the thing.
~
I was hired for the summer to work on a specific project, studying the erodibility of the bed of San Francisco Bay. Great precedent work by many researchers makes us confident that the Bay is eroding. (The Bay naturally loses sediment to the ocean, but damming and construction in its watershed have reduced the sediment influx, so its sediment budget isn't balancing.) This is bad for the integrity of the Bay's wetland habitats, which matter for flood protection, bird populations, and water quality. The specific goal of my project is to nail down some parameters to inform bay-scale models using Delft 3D.
Coastal erosion, like many things, is tricky to model because it is made up of processes acting across many spatial and temporal scales. The coast erodes a little bit every time a wave hits it and every time the tides move in and out. River flows erode their banks and can affect coasts around estuaries. And in a Mediterranean climate like California's especially, the strong contrast between summer and winter (the latter brings storms, which bring rain and strong winds) drives annual-scale erosion dynamics. Cliffy coastlines are also inclined to failure (in the sense of landslides, slumping, and cleaving). Wave, wind, tidal, and flow-driven erosion are slow-moving and relatively easy to forecast; cliff failure is sudden and hard to predict.
This complicates metrics that might be used in management scenarios. For example, if you are building somewhere along the California coast, you want to set an appropriate setback to ensure that your building is still safe after so many years of coastal erosion and sea level rise. How quickly is the coast moving inland? We can probably find a value (for a specific location) using satellite imagery or long-term data sets. But these numbers average over both the continuous erosion processes and the sudden cliff failures; e.g., one might get a combined value of 3 cm per year from continuous erosion of 1 cm per year plus one sudden event that dropped 1 m of coast within a 50-year observation window. So is 3 cm, in this imaginary scenario, accurate? Or should we separate the different processes? It quickly complicates both management strategies and numerical models.
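To make that arithmetic concrete, here's a toy sketch in Python. All the numbers (the rates, the 50-year window, the 30-year planning horizon) are the made-up values from the scenario above, not real data:

```python
# Toy version of the example above: a 50-year record mixing slow,
# continuous retreat with one sudden cliff failure. Numbers are invented.
continuous_rate = 1.0   # cm/yr of steady wave- and tide-driven retreat
episodic_drop = 100.0   # one cliff failure of 1 m within the record
record_years = 50.0

# The "averaged" rate folds both processes into a single number.
combined_rate = (continuous_rate * record_years + episodic_drop) / record_years
print(f"combined rate: {combined_rate:.1f} cm/yr")  # 3.0 cm/yr

# A 30-year setback looks very different depending on which number you trust.
horizon = 30.0
print(f"setback from combined rate:   {combined_rate * horizon:.0f} cm")
print(f"setback from continuous rate: {continuous_rate * horizon:.0f} cm"
      " (plus whatever buffer you reserve for a possible failure event)")
```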
I had an internship back in 2012 where my biggest project was to research how best to build a single metric representing how vulnerable a particular geographic area was to natural disasters, any and all natural disasters. (We very much acknowledged that a single metric for this is super reductive, but there was still value in having one.)
The OECD refers to these as "Composite Indices," where multiple types of processes and values get rolled together into a single number through averaging, normalization, and creative combination. Heterogeneous metrics are really hard to combine effectively, but the people who work in (social) risk management seem to be at the forefront, as far as I can tell.
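As a rough illustration of the mechanics only (this is not the OECD's methodology; the indicators, weights, and min-max normalization below are all invented for the example), a composite index often boils down to something like:

```python
import numpy as np

# Hypothetical raw indicators for three regions, on wildly different scales:
# storm frequency (events/yr), population density (people/km^2), median income ($).
raw = np.array([
    [12.0,  950.0, 61000.0],
    [ 3.0,  120.0, 48000.0],
    [ 7.0, 2400.0, 35000.0],
])

# Min-max normalize each column to [0, 1] so the indicators become comparable.
lo, hi = raw.min(axis=0), raw.max(axis=0)
norm = (raw - lo) / (hi - lo)

# Higher income should lower vulnerability, so flip that column's direction.
norm[:, 2] = 1.0 - norm[:, 2]

# Weighted average into one vulnerability score per region.
# The weights are a judgment call, which is where the "creative" part lives.
weights = np.array([0.5, 0.3, 0.2])
scores = norm @ weights
print(scores)  # one number per region, hiding all the structure above
```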
This territory touches on the statistics of rare events, too ("extreme value theory"). Again, we run into the risk of, for example, making year-by-year management decisions based on numbers that actually represent the kind of events that happen only once or twice a century. For data sets like this, it's useful to remind oneself that "averaging" doesn't mean just one thing. Finding a descriptive "average" metric must be a conscious decision about how you expect the data to behave and what you need the number to represent. Folding together heterogeneous processes, metrics, and methods into single numbers can be useful, but it turns descriptive complexity into simple values and thus carries a risk of its own.
Slowly, then quickly,
Lukas