I am a logical positivist (and in charge of the sheep dip--laugh if you get the reference), an empirical social scientist who studies international relations. One of my fondest responsibilities at GCC is teaching research methods in the Political Science Department. Really, I mean it: I like teaching research methods because it allows me to stress the logic and purpose of research to our students and to equip them with skills and techniques that enhance their own research work at GCC and beyond.
One of the first class discussions we have is on what Political Science means and how knowing what Political Science means leads us to engage in empirical research. Part of this discussion focuses on the characteristics of scientific knowledge. One of those characteristics is that scientific knowledge is transmissible--we can clearly and concisely communicate our research to others. Part of this process is crafting tests of hypotheses (answers to the research question) that other scholars can replicate if they choose.
The question of replication recently came to the fore in a study examined in an article in Inside Higher Ed. The study focused specifically on replication problems in Psychology research. The author asks whether this problem stretches beyond Psychology and what can be done to place a premium on replication.
For my two cents, scholars should give more consideration to replication efforts. The truth is that we scholars are at least minimally arrogant, prima donna types who like to believe our work is very important, and we get really touchy about perceived unfair critiques of it. The best of us have learned, or were always predisposed, to set aside our feelings and insecurities and allow our colleagues to offer constructive critique of our work--it simply makes the final product better. Replication is one way we can offer that critique to our colleagues. I engage in such efforts minimally when I review potential journal publications. I engage more fully in replicating the models used to test data when I consider including pieces of research in the course literature I require my students to engage with each year. Occasionally replication is impossible due to a lack of data availability (authors, the onus is yours to make your data available to the academic community). Most often, I find my failure to replicate results stems from a lack of clarity in communicating how particular data are organized and tested. Sometimes this lack of clarity is on my shoulders for not being familiar with the modeling techniques employed; sometimes it is the fault of the authors. Regardless, the academic community needs to pay more attention to the issue of replication, or we need to rethink our understanding of scientific knowledge.
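For readers who wonder what this looks like in practice, below is a minimal sketch of a replication attempt, assuming the authors have posted their dataset. The file name, variable names, and model are hypothetical stand-ins for illustration, not drawn from any actual study.

```python
# A bare-bones replication attempt, sketched in Python. Everything here is
# hypothetical: the data file, the variable names, and the model are
# invented for illustration, not taken from any particular article.
import pandas as pd
import statsmodels.formula.api as smf

# Load the replication dataset the authors (ideally) made available.
df = pd.read_csv("replication_data.csv")  # hypothetical file

# Re-estimate the model as described in the article's methods section;
# here, a logit of conflict onset on trade openness and GDP per capita.
model = smf.logit("conflict_onset ~ trade_openness + gdp_per_capita", data=df)
result = model.fit()

# Compare the re-estimated coefficients and standard errors against the
# published table. Divergence usually means the methods section left
# something out (case selection, recoding, estimator options).
print(result.summary())
```

Even a quick pass like this reveals whether a methods section communicates enough to reproduce the published results--which is exactly the transmissibility I ask my students to look for.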