This is the second blog post that I’ve written in which I seem to be defending measurement.
Everything else being equal, I’d argue that a decision based on data is better than a decision based on a belief (beliefs that often come in the form of urban myths, folklore, or statements that start with “We all know…” or “Trust me…”).
Increased measurement, data and sound analysis of that data should help us challenge perceived “truths,” force us to rethink how things work and enable us to challenge the way things are. The goal ought to be to replace ‘We all know’s with ‘What if’s.
So I had to stop and do some thinking when I came across the David Graeber quote, “Violence and quantification are intimately linked,” in the blog post Scholarly Debt and Deficit. My first thought was, “What data do you have to support that claim?” :)
And my second thought was, “What if the measuring of things really leads to violence?”
Measuring leads to seeing. I’m willing to bet that there was lots of localized violence, including domestic violence, in the Dark Ages. Just because we don’t have the stats doesn’t mean bad things didn’t happen and people did not get hurt in all sorts of awful ways. In contrast, I can pull stats that tell me that half of Canadian women have experienced at least one instance of abuse, that it accounts for 12% of all violent crime, etc. Measuring a behaviour or attitude means acknowledging that something exists. The first step is admitting you have a problem. The second step is identifying the size of your problem.
There are lots of academics out there taking their scholarly duties very seriously (I know because I’ve met a bunch) as well as a few who are less involved (or so I’ve heard). Without a way to quantify, there is no way of knowing the split between these groups or what a typical/reasonable/average contribution is.
Measuring lets you know if you are moving in the right direction. Back to domestic violence: after dropping for a decade, rates of violence are now stagnant. Based on that information, we know that some changes have worked; we also know we have more work to do. Once there is a baseline measurement in place, efforts to make things better can be quantified. When things are proven to move the numbers, sometimes it is as easy as doing more of the same. Other times, the numbers stop moving in the right direction, and that points to a time to take a new approach. (This does not mean that by measuring anyone’s clicks you can reduce the rate of violence, because that would be ridiculous, right?)
If we want to use the concept of “scholarly debt,” but choose not to quantify that debt, how can individuals and institutions gauge their “debt repayment”? If I am expected to repay an unquantifiable debt, doesn’t it at some point become a form of indentured servitude from which one can never escape? (Should we be talking about scholarly responsibility instead of scholarly debt?)
Measuring can enable choice. By developing a standard method for the quantification and division of family resources, it becomes more possible for a woman to leave because she has a right to half the family assets as well as child support/alimony. Quantify and reduce the short-term challenges, and it is again more possible for more women to leave. Further quantify the long-term benefit to kids when their moms leave, and again more women see leaving as a viable choice.
In contrast, what are the reasons women choose to stay? While complex, their reasons often align with unquantifiable ones, including a sense of indebtedness to their spouse, family and/or other social institutions (the vow is ‘til death do us part, right?).
In the case of academics, how many choose to work with for-profit publishers? Is a sense of an unquantifiable scholarly debt to blame? Do they see providing and reviewing papers for proprietary publication as the only method of debt repayment?
While I have seen quantification of the overall value of the publishing industry, how much of that is due to scholarly efforts, the work done by those “not gainfully employed” in the publishing industry? (Lots, I would guess.) If we put a number on that unpaid/underpaid work, how would that change the perception of value? If scholars thought about what it would look like to take a proportion (half, maybe?) of the quantifiable value with them in a split from the publishers, how would that change the equation? Would academics who see “no choice” but to work with publishers start to see another option? If we began to quantify the number of academics who have been successful without working with publishers, would that information be the beginning of making the impossible seem possible?
Measuring provides an alternative to social influence as currency. Like other types of measurement, social influence as a currency can be both a good and a bad thing, and it depends on who is wielding it, how, and for what purpose.
Domestic violence is an example of a really bad outcome, but any time a power imbalance is created and we feel a sense of indebtedness to others, there is a risk of some type of exploitation. All kinds of unhealthy and unhelpful individuals and groups, including many corporations, have led lots of folks astray with words like “trust us” and “we want to help you,” followed by words like “if it weren’t for me you would be…” and “all you need to do in return is…”
And back to academia… Currently the system is largely based on reputation and social influence, and in many instances it must work (or it would not have lasted this long, right?). There are almost certainly also examples of scholarly systems where that power is used to shut out dissenters and to embrace only the ideas that align with what they already “know” to be true.
My favorite conference rejection reads, “…while the authors used activity data to inform instructional design changes, there weren’t any new/innovative transformations of the data. From my reading, it seemed as if the authors simply used existing course/content/SBL usage data to identify areas of improvement. I like the content of the presentation, I just question its applicability to [Learning Analytics and Knowledge].”
In this example, the field of Learning Analytics appears to have become extremely narrow including only “new/innovative transformations of the data.” This interpretation is in contrast to other more robust definitions including Slade & Prinsloo’s 2013 definition: “…the collection, analysis, use, and appropriate dissemination of student-generated, actionable data with the purpose of creating appropriate cognitive, administrative, and effective support for learners.” Alas, that is likely a conversation that will need to wait for another year or another conference or lively discussions in my own head.
What are the choices when dealing with these types of power imbalances? If you are not dependent on the system, you can complain in a blog post, talk to yourself, and muse about starting a new field of inquiry (data-supported learning design?). I suspect, however, that if you are seeking tenure or were seriously considering a PhD, you might instead take a more conservative approach and choose to adopt the definition provided in your feedback, set your research agenda accordingly and play by the established rules.
There is a lot of middle ground between quantification as the root of all evil and the blind pursuit of efficacy and regional accreditation via algorithmic data transformation. I’m looking for others looking to play in that space.
Note: I don’t in any way take the complexities of domestic violence, or the choices related to it, lightly. I only use the examples here because I believe domestic violence is an all-too-common example of the bad outcomes associated with power structures that are based on loyalty, family connections and a sense of belonging.