California Failing: Why it is important to embrace failure in your research

Hello from sunny La Jolla, California. I am out here working on my dissertation research from my committee member's lab, NOAA's Antarctic Ecosystem Research Division, part of the Antarctic Marine Living Resources program. Dr. de Mutsert asked me to write a blog while I was out here, and I honestly had no idea what to write. I come out here not for the ocean views (which are awesome), not for the data (also world class), but for the community of failing and moving forward. Yes, I come to California to fail, and that is a bit awkward to write about, but here goes.

Fail big and fix it


It’s tempting to hide from failure

The de Mutsert Fish Ecology lab is a great group of people. We try to look out for each other, but we (at least the students) put on fronts of success for each other. Ask any one of us how our research is going and we will automatically say "good, making progress." We show each other pretty lines of model predictions and recite high numbers of data collected. I struggle A LOT in my research. I am switching fields from behavioral ecology to ecosystem modeling. Half the time I can't tell if my model is doing better or worse than it was last week, and I am terrified of breaking it beyond repair. My labmates, while wonderful people who want to help, do not work in my ecosystem. If I totally break my model, they don't have great ecosystem-relevant advice to get it working again. Since I am (more than a little) desperate to graduate in a timely fashion, I have a fear of totally breaking my model. This is really counterproductive. Let me repeat that: the paralyzing fear of breaking your model, or more generally of messing up your research, is really counterproductive.

Sometimes you have to smash your model to smithereens to understand what it is telling you. Sometimes you need to make a bad assumption or learn a data handling lesson the hard way to truly understand the problem you are working on. Sometimes the fear of failing can hold you back and do more damage to your research than actually failing.

Failure is always an option

I come to California to my committee member's lab to fail. We have weekly lab meetings here, delightfully called "Science Friday," and they involve lots of tasty baked goods. Most importantly, every week someone from the lab gets up and lays their science bare. They talk about what didn't work. Even if they haven't fixed it yet, or don't have a clue how to fix it, they openly talk about what failed. Failure is OK and expected here. There is an understanding that failure is part of the process, and that if you aren't failing, you aren't trying. Everyone is encouraged to well and truly mess up some of their data analyses, to produce a model that truly stinks for a time, and generally to spend their time trying things that might fail.

For the past couple of lab meetings, the krill modeler (an actual PhD scientist who gets paid to do this) has talked about the numerous failings of his model. He showed where the predictions are off. He showed a number of things he tried to bring those predictions in line that didn't work. He talked about how a recent outside expert review of his model found flaws, and then he gave some strategies for exploring those flaws. He did not sugarcoat his model's failings. He has been doing this for years, yes, years. After years of failing, and publicly exposing his failure, his model is better. He has a job, and people think well of his research skills. This should not be revelatory, but for me it is. Failing, and failing publicly, is an important part of the science process.

Face your failure head on

When I got to California this trip, I had a model that was recreating historical trends really well for three of my modeled species. I was ready to give up on three other species because, no matter what I did, I could not get a better fit for them without destroying the fit for the others. In other words, I had modeling paralysis because I wanted to show something "successful" for a quickly approaching conference. I was also horribly embarrassed to show off my broken model. But I was encouraged by how open everyone here is about their failures, even showing off and laughing at their R code that failed for no good reason.

So I laid my model bare before my committee member. I explained what I thought I understood about the predictions it was making, and he pointed me to new data to fix one of my modeled species. He was right about the new data, although the fit initially got worse when I incorporated them.

I knew there were problems with the way I was handling fish in my model, so I went and spoke to the fish expert here. He laughed and told me that my handling of the fish in my model was horrible (yes, he used that word), but he pointed me to unpublished data and encouraged me to keep trying. The model now successfully recreates historic trends for two of the species I was going to give up on. Yup, a scientist laughed in my face about how horrible my model was, but he helped me fix it. There are worse things.

The other day my committee member came into my office and found me enthusiastically talking (or perhaps muttering words not suitable for public consumption) to my model output. My model had once again broken. He laughed and declared me to be turning into a true modeler. We had a nice science conversation about the ecological hypotheses I was testing each time I broke my model. I confessed that sometimes I felt like less of a scientist testing hypotheses and more like a kid playing Jenga: stacking up all the groups in my model and then watching them come crashing down when I change one value. Here's the thing: each time I broke my model, I learned how the software was handling my input, or, even better, I actually was testing an ecological hypothesis. I am learning how to be more targeted with my changes to the model. I am seeing some ecological patterns emerge that I knew to be important from the literature but had no idea how to deal with in my model. My new motto when approaching my model is "Stack 'em up and let 'em fall!"

Go forth and fail!

Just being in a community of scientists where failure is an option has helped me to succeed. I have gained confidence in my model and my skills. My model is still a work in progress, but it is so much better than it was when I got here. Without anyone making a point of it, I have been encouraged to put my science out there and expose its flaws for all to see. I am working on an abstract for a meeting of experts that I never would have considered presenting to, but here I am. So go forth and fail! Rebuild your research from its own ashes!

-Adria Dahood, PhD candidate in the Fish Ecology Lab
