Data vs. blind loyalty? I choose data

This is the second blog post I’ve written in which I seem to be defending measurement.

Everything else being equal, I’d argue that a decision based on data is better than a decision based on a belief (beliefs that often come in the form of urban myths, folklore, or statements that start with “We all know…” or “Trust me…”).

Increased measurement, more data, and sound analysis of that data should help us challenge perceived “truths,” force us to rethink how things work, and enable us to challenge the way things are. The goal ought to be to replace ‘We all know’s with ‘What if’s.

So I had to stop and do some thinking when I came across the David Graeber quote, “Violence and quantification are intimately linked,” in the blog post Scholarly Debt and Deficit. My first thought was, “What data do you have to support that claim?” 🙂

And my second thought was, “What if the measuring of things really leads to violence?”

Measuring leads to seeing. I’m willing to bet that there was lots of localized violence, including domestic violence, in the Dark Ages. Just because we don’t have the stats doesn’t mean bad things didn’t happen and people didn’t get hurt in all sorts of awful ways. In contrast, I can pull stats that tell me that half of Canadian women have experienced at least one instance of abuse, that it accounts for 12% of all violent crime, etc. Measuring a behaviour or attitude means acknowledging something exists. The first step is admitting you have a problem. The second step is identifying the size of your problem.

There are lots of academics out there taking their scholarly duties very seriously (I know because I’ve met a bunch), as well as a few who are less involved (or so I’ve heard). Without a way to quantify, there is no way of knowing the split between these groups or what a typical/reasonable/average contribution is.

Measuring lets you know if you are moving in the right direction. Back to domestic violence: after dropping for a decade, rates of violence are now stagnant. Based on that information, we know that some changes have worked; we also know we have more work to do. Once there is a baseline measurement in place, efforts to make things better can be quantified. When things are proven to move the numbers, sometimes it is as easy as doing more of the same. Other times, the numbers stop moving in the right direction, and that points to a time to take a new approach. (This does not mean that by measuring anyone’s clicks you can reduce the rate of violence, because that would be ridiculous, right?)

If we want to use the concept of “scholarly debt,” but choose not to quantify that debt, how can individuals and institutions gauge their “debt repayment”? If I am expected to repay an unquantifiable debt, doesn’t it at some point become a form of indentured servitude from which one can never escape? (Should we be talking about scholarly responsibility instead of scholarly debt?)

Measuring can enable choice. By developing a standard method for the quantification and division of family resources, it becomes more possible for a woman to leave because she has a right to half the family assets as well as child support/alimony. Quantify and reduce the short-term challenges, and it is again more possible for more women to leave. Further quantify the long-term benefit to kids when their moms leave, and again more women see leaving as a viable choice.

In contrast, what are the reasons women choose to stay? While complex, their reasons often align with the unquantifiable, including a sense of indebtedness to their spouse, family, and/or other social institutions (the vow is ‘til death do us part, right?).

In the case of academics, how many choose to work with for-profit publishers? Is a sense of an unquantifiable scholarly debt to blame? Do they see providing and reviewing papers for proprietary publication as the only method of debt repayment?

While I have seen quantification of the overall value of the publishing industry, how much of that value is due to scholarly efforts, the work done by those “not gainfully employed” in the publishing industry? (Lots, I would guess.) If we put a number on that unpaid/underpaid work, how would that change the perception of value? If scholars thought about what it would look like to take a proportion (half, maybe?) of the quantifiable value with them in a split from the publishers, how would that change the equation? Would academics who see “no choice” but to work with publishers start to see another option? If we began to quantify the number of academics who have been successful without working with publishers, would that information be the beginning of making the impossible seem possible?

Measuring provides an alternative to social influence as currency. Like other types of measurement, social influence as a currency can be both a good and a bad thing, depending on who is wielding it, how, and for what purpose.

Domestic violence is an example of a really bad outcome, but any time a power imbalance is created and we feel a sense of indebtedness to others, there is a risk of some type of exploitation. All kinds of unhealthy and unhelpful individuals and groups, including many corporations, have led lots of folks astray with words like “trust us” and “we want to help you,” followed by words like “if it weren’t for me you would be…” and “all you need to do in return is…”

And back to academia… Currently the system is largely based on reputation and social influence, and in many instances it must work (or it would not have lasted this long, right?). There are almost certainly also examples of scholarly systems where that power is used to shut out dissenters and to embrace only the ideas that align with what they already “know” to be true.

My favorite conference rejection reads, “…while the authors used activity data to inform instructional design changes, there weren’t any new/innovative transformations of the data.  From my reading, it seemed as if the authors simply used existing course/content/SBL usage data to identify areas of improvement.  I like the content of the presentation, I just question its applicability to [Learning Analytics and Knowledge].”

In this example, the field of Learning Analytics appears to have become extremely narrow, including only “new/innovative transformations of the data.” This interpretation is in contrast to other, more robust definitions, including Slade & Prinsloo’s 2013 definition: “…the collection, analysis, use, and appropriate dissemination of student-generated, actionable data with the purpose of creating appropriate cognitive, administrative, and effective support for learners.” Alas, that is likely a conversation that will need to wait for another year, another conference, or lively discussions in my own head.

What are the choices when dealing with these types of power imbalances? If you are not dependent on the system, you can complain in a blog post, talk to yourself, and muse about starting a new field of inquiry (data-supported learning design?). I suspect however if you are seeking tenure or were seriously considering a PhD, you might instead take a more conservative approach and choose to adopt the definition provided in your feedback, set your research agenda accordingly and play by the established rules.

There is a lot of middle ground between quantification as the root of all evil and the blind pursuit of efficacy and regional accreditation via algorithmic data transformation. I’m looking for others who want to play in that space.

Note: I don’t in any way take the complexities of domestic violence, or the choices related to it, lightly. I use the example here only because I believe it is an all-too-common example of the bad outcomes associated with power structures that are based on loyalty, family connections, and a sense of belonging.


4 thoughts to “Data vs. blind loyalty? I choose data”

  1. Hey – nice post, and nice blog you got here…

    I feel I have to start with an apology and a clarification – the apology is that both times you have felt the need to leap in to defend measuring things, it’s been my fault! So sorry about that 🙂

    The clarification is that I absolutely don’t have a problem with measuring things! As long as everyone involved consents, I’ve not got an issue. (bizarrely, after I spoke at OpenEd13, someone accused me of hating numbers. As in actual numbers.)

    The violence referred to is very similar to the practice of “gaslighting”, wherein an aggressor defines and controls the reality of the oppressed (perhaps with force as an unspoken back-up). It’s a thoroughly nasty deal, and the worst of it is that it leaves one unable to describe reality in terms outside those set by the oppressor.

    The “deficit” is a big deal in politics in the UK right now, with one party castigating another for profligacy during their term of office and implying that this is the reason spending must be cut in order for debt to be paid down. It’s a simple-to-understand argument, perhaps too much so as it “shuts off” a number of equally valid descriptions of reality (for instance that running a deficit is the right thing to do during a recession, and growth is more important than spending cuts).

    As soon as numbers are assigned within an argument it changes the nature of the argument. If you happen to take one or other position in any given situation, it is easy to assign numbers in order to back up your argument and do down the arguments of others. To argue after this point turns a dispute of principle into a dispute of statistical analysis.

    Daft example: I say we’ve enough beer for tonight’s party; Brian says we need another case. I then say that we have 48 cans of beer and that’s plenty. Now the only move Brian can make is to argue that 48 cans is not enough beer for the party. To do this he has to dispute my reasoning (“people will drink more than three cans each!”). But there are other arguments I have suppressed by bringing in a metric. (“some of those beers are Molson Canadian, and no-one drinks that!”, “I want to get some of that IPA that Tanya likes”, “I’d like some darker beers!”)
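
    (For fun, here is the daft example as an even dafter bit of Python. The guest count and cans-per-guest figure are made-up numbers, of course:)

        # A daft sketch of the beer "metric". Every number here is invented.
        cans_on_hand = 48
        cans_per_guest = 3   # my assumption, and now the only thing Brian can dispute
        guests = 16          # hypothetical guest list

        if cans_on_hand >= guests * cans_per_guest:
            print("Plenty of beer. Argument over.")
        else:
            print("Brian wins: buy another case.")

    Notice what the sketch has no variable for: which beers anyone actually wants to drink.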

    So if you bring in a metric without the consent of those involved in a discussion, it is a violent act as it redefines their reality.

    As you say:

    “I suspect however if you are seeking tenure or were seriously considering a PhD, you might instead take a more conservative approach and choose to adopt the definition provided in your feedback, set your research agenda accordingly and play by the established rules.”

    Are those the right rules?

    1. Thanks, I appreciate the input. I will not pretend to understand the rules of success as an academic. (I’m an administrator, remember?) I’ve always found it easier to play with a set of rules, even if they are set to favour the other side, than in situations where the only rule is that the opinion of the other party is always right.

      If you know that what you have to prove is that three beers is not enough, it points to a possible path of proving that Molson Canadian really does not meet the criteria to be called a beer. Although there are thousands of better uses of everyone’s time (and data), it should be preferable to the completely unwinnable scenario in which Brian is right because he says he is right and Tanya winds up with nothing to drink but Molson Canadian. 🙁

      1. Changing the angle a little, it occurred to me that the “End of Theory” conversations were maybe a little before you started working with Brian. This is a good intro if you’ve not yet come across the debate: http://simon.buckinghamshum.net/2013/12/learning-analytics-theory-free-zone/

        It was focused around learning analytics – perhaps a more insidious example of the same pressures to “give good metric”, but without even the veneer of informed consent. Simplifying massively: the existence of massive datasets containing every conceivable datapoint is argued by some to mean that we no longer need to come up with theories and then test them; we can just read what the numbers tell us and act accordingly.

        The trouble with *that* is that coming up with theories/predictions, testing, and then modifying as needed is a pretty neat encapsulation of what makes a human being a human being – as well as exactly describing academic practice since the middle ages.

        (I dug this stuff up today because I read this about Pearson and their “education theory gap” [https://codeactsineducation.wordpress.com/2016/01/19/who-owns-educational-theory], which is scary for obvious reasons, and also wrong, as we have loads of education theory, just not the kind that can be tested with large, poorly defined datasets…)

        Going back to our party beers issue – the only reason either of us know anything about what Molson Canadian is like is because we have tested it. At some point we both drank some assuming it was beer, and then modified our assumptions. It says “beer” on the can, but we went beyond the data and into qualitative research practice.

        Arguing about the terms of the argument is a wonderful, horrible, academic habit that both infuriates and delights me. Taking an argument at face value without querying underlying assumptions is generally a bad plan – and is the difference between wanting to win an argument and wanting to get something *right*.

        Of course, without data how do we know we are right?

        1. Thanks again, for forcing me to think some more. Your comments and a few other interesting conversations have led to my most recent post. Now you are responsible for 3 blog posts about the measuring of things, not that anyone is counting.
