“The importance of stupidity in scientific research” by Martin A. Schwartz

I recently saw an old friend for the first time in many years. We had been Ph.D. students at the same time, both studying science, although in different areas. She later dropped out of graduate school, went to Harvard Law School and is now a senior lawyer for a major environmental organization. At some point, the conversation turned to why she had left graduate school. To my utter astonishment, she said it was because it made her feel stupid. After a couple of years of feeling stupid every day, she was ready to do something else.

I had thought of her as one of the brightest people I knew and her subsequent career supports that view. What she said bothered me. I kept thinking about it; sometime the next day, it hit me. Science makes me feel stupid too. It’s just that I’ve gotten used to it. So used to it, in fact, that I actively seek out new opportunities to feel stupid. I wouldn’t know what to do without that feeling. I even think it’s supposed to be this way. Let me explain.

For almost all of us, one of the reasons that we liked science in high school and college is that we were good at it. That can’t be the only reason – fascination with understanding the physical world and an emotional need to discover new things has to enter into it too. But high-school and college science means taking courses, and doing well in courses means getting the right answers on tests. If you know those answers, you do well and get to feel smart.

A Ph.D., in which you have to do a research project, is a whole different thing. For me, it was a daunting task. How could I possibly frame the questions that would lead to significant discoveries; design and interpret an experiment so that the conclusions were absolutely convincing; foresee difficulties and see ways around them, or, failing that, solve them when they occurred? My Ph.D. project was somewhat interdisciplinary and, for a while, whenever I ran into a problem, I pestered the faculty in my department who were experts in the various disciplines that I needed. I remember the day when Henry Taube (who won the Nobel Prize two years later) told me he didn’t know how to solve the problem I was having in his area. I was a third-year graduate student and I figured that Taube knew about 1000 times more than I did (conservative estimate). If he didn’t have the answer, nobody did.

That’s when it hit me: nobody did. That’s why it was a research problem. And being my research problem, it was up to me to solve. Once I faced that fact, I solved the problem in a couple of days. (It wasn’t really very hard; I just had to try a few things.) The crucial lesson was that the scope of things I didn’t know wasn’t merely vast; it was, for all practical purposes, infinite. That realization, instead of being discouraging, was liberating. If our ignorance is infinite, the only possible course of action is to muddle through as best we can.

I’d like to suggest that our Ph.D. programs often do students a disservice in two ways. First, I don’t think students are made to understand how hard it is to do research. And how very, very hard it is to do important research. It’s a lot harder than taking even very demanding courses. What makes it difficult is that research is immersion in the unknown. We just don’t know what we’re doing. We can’t be sure whether we’re asking the right question or doing the right experiment until we get the answer or the result. Admittedly, science is made harder by competition for grants and space in top journals. But apart from all of that, doing significant research is intrinsically hard and changing departmental, institutional or national policies will not succeed in lessening its intrinsic difficulty.

Second, we don’t do a good enough job of teaching our students how to be productively stupid – that is, if we don’t feel stupid it means we’re not really trying. I’m not talking about ‘relative stupidity’, in which the other students in the class actually read the material, think about it and ace the exam, whereas you don’t. I’m also not talking about bright people who might be working in areas that don’t match their talents. Science involves confronting our ‘absolute stupidity’. That kind of stupidity is an existential fact, inherent in our efforts to push our way into the unknown. Preliminary and thesis exams have the right idea when the faculty committee pushes until the student starts getting the answers wrong or gives up and says, ‘I don’t know’. The point of the exam isn’t to see if the student gets all the answers right. If they do, it’s the faculty who failed the exam. The point is to identify the student’s weaknesses, partly to see where they need to invest some effort and partly to see whether the student’s knowledge fails at a sufficiently high level that they are ready to take on a research project.

Productive stupidity means being ignorant by choice. Focusing on important questions puts us in the awkward position of being ignorant. One of the beautiful things about science is that it allows us to bumble along, getting it wrong time after time, and feel perfectly fine as long as we learn something each time. No doubt, this can be difficult for students who are accustomed to getting the answers right. No doubt, reasonable levels of confidence and emotional resilience help, but I think scientific education might do more to ease what is a very big transition: from learning what other people once discovered to making your own discoveries. The more comfortable we become with being stupid, the deeper we will wade into the unknown and the more likely we are to make big discoveries.

Sociology of risk: lessons learned from the “Vajont Dam Disaster”

I just watched a National Geographic documentary on the Vajont dam disaster. According to the film, locals knew before the disaster that the mountain had suffered landslides in the past; indeed, the mountain was known as “the walking mountain”. Nevertheless, the dam was built, in a postwar context in which energy was needed to fuel the economic growth the country experienced between 1950 and 1960.

The episode offers a few lessons about risk management and communication, non-knowledge, and the role of social science in engineering projects (consider the role of those who specialize in collecting and understanding narratives and local stories). First, a socially robust construction process, i.e. one that involved locals, would have gained relevant information that was unknown to scientists but present in the collective memory of the town; locals could thus have taken part in the production of knowledge. Secondly, the engineers involved in the construction knew that the land had been sliding gradually throughout the two months before the disaster, and even knew that a landslide would take place sooner or later; they simply did not anticipate such a huge wave, owing to a wrong estimate. Even so, they did not inform locals of this fact. Finally, the wave overtopped the dam by 200 meters and destroyed the whole valley.

This event reminds us that science faces, on many occasions, gaps in knowledge, or so-called non-knowledge. Yet, as on this occasion, policymakers and scientists frequently communicate that decisions have to be based on reliable scientific knowledge and that the acknowledgement of uncertainties would “undermine the public’s confidence in scientific results” (Gross, 2010, p. 2).


Video: “Vajont Dam Disaster” from Framepool (https://vimeo.com/framepool) on Vimeo: https://vimeo.com/76140299