Multilitteratus Incognitus
Pondering what to learn next 🤔
RhizoResearch - some thoughts brought on by Sunlight and Shade.
10-03-2015, 04:30 #fslt12, #oldsmooc, #rhizo14, cMOOC, critique, dissertation, EDDE802, MOOC, PhD, reliability, research, rMOOC, validity
It is a bit of an odd thing to admit, but ever since I started formal schooling again to pursue a doctorate, the amount of pleasure reading I do has gone down. This is to be expected; time needs to be allocated differently to meet the rigorous demands of a doctoral program. That said, my pleasure reading was research articles anyway, so it's kind of hard to put down your candy (research articles about MOOCs and online learning) in order to have your balanced meal of research in fields you aren't necessarily familiar with. This is a good thing, but the research on MOOCs keeps piling up in my dissertation drawer at work. Summer project!
Anyway, I digress! I saw that Frances Bell and Jenny Mackness had a recent article in Open Praxis about Rhizo14. I did with it what I do with all MOOC articles these days: download the PDF, archive it, print it out, and add it to my "to read" pile. Normally that would have been the end of it, but two things happened. First, this article generated a lot of chatter in the Rhizo14 community on Facebook, which is still going strong despite the course being over for close to a year now. Second, all this EDDE 802 work is making me think even more deeply about the research articles I read, as though thinking about them "deeply" was not deep enough (I guess we're going from scuba range to submarine range...). Then, of course, there was a comment from Frances on her blog that she would not engage directly on the Rhizo14 Facebook page about this work (see here), which really raised an eyebrow. So I picked up the article during my morning commutes, and over several days I finally read it (it wasn't long; I just had other things on my plate).
From reading this article I have some reactions. In the past I've been part of MOOC communities that have been studied. I think that for FSLT12 I was surveyed by the people who offered the MOOC so that they could produce the final report, and the same was true for OLDSMOOC, if I remember correctly. However, neither of those MOOCs, despite the connectedness felt at the time, felt as connected as Rhizo. With the exception of Rebecca, I did not know the other participants all that well prior to Rhizo14. So Rhizo was a little different, and this probably colors how I interpret the research findings, but I really tried to put my EDDE 802 cap on and look at the findings strictly from a researcher's point of view.
The article frames Rhizo14 as an experimental open course, and suggests that there are "light" and "dark" sides to participating in an experimental MOOC. The article seems to be written in an "on the one hand, on the other hand" manner. For instance, examine the following quote:
"There were plenty of learning moments and evidence of joy and creativity, but we also experienced and observed some tensions, clashes and painful interactions, where participants seemed to expect different things from the course and were sometimes disappointed by the actions and behaviours of other participants"

The way this is written sets up a dichotomy, a good side and a bad side, but is this really what happened? Can it only be interpreted as one or the other? Were there some tensions in Rhizo14? Well, I can think of at least one. But painful interactions? Painful to whom, and in what way? Is "painful" used in the sense of awkward, and thus in a more literary style? Or is it used in a more concrete sense, as in causing harm to someone? This being a research article, I took "painful" to mean causing harm, and that seemed a bit of an exaggeration to me, given that I have seen most Rhizo discussions over the past year.
Other issues that came up, beyond the tone, are methodological. The literature review is, for me, incomplete, or at the very least not entirely accurate if you consider all of the cMOOC literature. For instance, Rhizo14 is described as:
"Rhizo14 also differed from prior cMOOCs in that it was “home-grown.” Dave Cormier ran the MOOC in his own time, often convening the weekly Hangouts in the evening from his own home. Despite this, his intention was that there would be no centre to the course; he would be one of the participants"

If we look at the history of MOOCs, I would say that everything prior to Coursera was home-grown. CCK was home-grown despite its affiliation with the University of Manitoba, PLENK was home-grown, the various MobiMOOC incarnations were home-grown, and so on. I honestly didn't think that Rhizo14 differed much in its setup from the previous cMOOCs I had been part of over the years. The execution was certainly different, but the setup didn't seem different to me. A review of the MOOC literature to date could have painted a broader picture, but I am willing to accept that there were space constraints, and things needed to be cut in order to make it to print (I did get a call for papers for Open Praxis, and I recall seeing a 5,000-word limit, which is crazy for a qualitative paper!).
Another issue that came up is the play-time that different views got. For instance, in the article the authors write that:
"The distributed nature of the spaces, the mix of public / private, and the number of survey respondents (47) combine to remind us that we must be missing some important perspectives. What does encourage us is that despite this partial view, our decision to allow for confidential and electively anonymous responses to our surveys, has enabled a light to be cast on what people are thinking, and not saying, in public and semi-public forums. This research will make a contribution to the hidden MOOC experience"

By my count, as of today, there are 432 Rhizo14 participants on P2PU and 321 in the Facebook group. It is hard to tell how many people were actually in this MOOC; I only know the visible participants who were active on P2PU or Facebook. Assuming the P2PU number is the canonical one, 47 respondents represent only around 10% of the people who signed up for the MOOC. Furthermore, the researchers do not discuss how, if at all, the interview and free-form survey data were coded to determine the overall themes and the positive and negative feelings toward the course, the conveners, and fellow participants. Equal air-time is given to those with positive things to say and those with negative things to say; however, we do not know quantitatively how many people were in each camp. Were there more in the negative/dissatisfied camp? More in the positive? Or were they equally distributed across this self-selected sample? Other things should have been explored more, such as people feeling isolated despite their "experienced MOOCer" status. For instance, who deems these individuals experienced? Experienced in cMOOCs? xMOOCs? Both? And how is that measured?
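To put the sample size in context, here is a quick back-of-the-envelope calculation using the visible participant counts above (the true enrollment is unknown, so these rates are rough bounds, not exact figures):

```python
# Rough response rates for the Rhizo14 survey, using the visible
# participant counts cited above. True enrollment is unknown.
respondents = 47
visible_counts = {
    "P2PU sign-ups": 432,
    "Facebook group members": 321,
}

for label, n in visible_counts.items():
    rate = respondents / n * 100
    print(f"{label}: {respondents}/{n} = {rate:.1f}%")
# P2PU sign-ups: 47/432 = 10.9%
# Facebook group members: 47/321 = 14.6%
```

Either way you slice it, the respondents are a small, self-selected fraction of the visible participants, which is exactly why the unreported positive/negative split matters.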
I don't know if this was the intent of the authors, or if it was an unfortunate side effect of cutting and selective editing to meet a word limit, but the article has the tone of a piece written with "moral panic" as the intended outcome. The selected quotes that included profanity, and the language of experimentation and of participants as lab rats, have the effect of evoking negative feelings toward this MOOC, the people who convened it, and to some extent the participants who were active in the course.
Finally, since this post is getting long, there are two areas that I think need addressing. In an article like this, how does one tackle the issue of validity? One way I've seen validity addressed in qualitative research, in 802, is to have the people interviewed and surveyed read the findings and then discuss whether those findings resonate with what they experienced and reported. I don't know if those 47 respondents got a chance to vet this interpretation of the survey results. I may be one of those who took the survey (I don't remember, but chances are high that I did), but I have seen no indication that I was asked to weigh in on the interpretation of the results. From the reactions I've seen from people in Rhizo, this paper does not seem indicative of their experiences, or of how they observed interactions in the course, so to some extent the paper seems to lack validity. The other odd thing, which raises a bit of a red flag for me, is Frances's disengagement from the Rhizo community on this matter. It seems that if you study a community, you have an ethical obligation to discuss and debate your findings with them on their turf, so to speak, and not only on your own.
The other way to address validity is to have other researchers review the anonymized survey data in order to cross-validate the findings. I know that at least a couple of people asked to see the data (one, I thought, in jest, but at least one seemed serious), only to be turned down due to privacy concerns. If the data is anonymized, there ought not be privacy concerns, and this, in turn, makes things seem a bit suspicious. I am not sure what the ethical implications are. If I run a survey and conduct a set of interviews for my dissertation (or any research project, for that matter), and other researchers want to see the data in order to validate my findings, should I not oblige? (Especially if those asking are on my review/exam committee!) As a researcher I should anonymize the data, and while I wouldn't provide the cipher that links the data back to individuals (so as to de-anonymize them), I think that providing anonymized data for analysis is something that falls within ethical guidelines.
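As a rough illustration of what I mean by sharing data while withholding the cipher, here is a minimal sketch (the field names, records, and key are all made up for the example): each respondent gets an opaque pseudonym via a keyed hash, so the shareable dataset carries no direct identifiers, and only the researcher's secret key could ever link pseudonyms back to people.

```python
import hashlib
import hmac

# The researcher's secret key. Withholding this is the "cipher":
# without it, nobody can recompute which pseudonym belongs to whom.
SECRET_KEY = b"known-only-to-the-researcher"

def pseudonymize(identifier: str) -> str:
    """Map a respondent identifier to a stable, opaque pseudonym."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return "R-" + digest.hexdigest()[:8]

# Hypothetical raw survey records (names and answers are invented).
raw_responses = [
    {"email": "participant1@example.com", "answer": "The course felt chaotic."},
    {"email": "participant2@example.com", "answer": "I enjoyed the community."},
]

# The version that could be shared for cross-validation: pseudonyms
# plus responses, with the direct identifier stripped out.
shareable = [
    {"id": pseudonymize(r["email"]), "answer": r["answer"]}
    for r in raw_responses
]
```

One caveat worth noting: free-text answers can themselves be identifying (a distinctive turn of phrase, a named event), so a real data release would also need redaction of the text. The point here is only that pseudonymization and key custody are separable, so "privacy" alone doesn't obviously preclude sharing.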
At the end of the day, this article has proven to be an interesting case study for my research methods course.
Thoughts? Views? Opinions?