Club-Admiralty

v6.2.3 - moving along, a point increase at a time

What am I training for again?

From PhD Comics

It's been a while since I've had the bandwidth to think about something other than my dissertation proposal.  When I started this process four years ago (with matriculation in March 2014) I thought I'd be the first or second person in my cohort to be done (ha!), but like most marathoners I guess I am part of the pack looking at the fast folks ahead of me 😏.  Being part of the pack does have its benefits, such as getting an idea of how long the process takes (having friends in other cohorts also helps with this).  I thought, initially, that when someone submitted their draft (be it proposal or final dissertation) they would get feedback and signs of life from their various committees soonish, but seeing Lisa's journey (currently at 5 weeks and counting) gave me a reality check. Waiting isn't bad per se (we wait for a ton of things in life), but I think it is the expectation of things to come that makes this type of waiting much more anxious for us doctoral students. Questions pop into your mind, such as: Will they like what I submitted?  How much editing do they need me to do? Will they ask me to go back to the drawing board? How long will that take?  And if I have to defend this thing next week...well, do I have time to prepare? Do I remember everything I read in my review of the literature? Eeek!

That said, I think I should rewind a bit.  What have I been up to?  Well, lots, and lots, and lots of reading, and then funneling that into some sort of literature review. The past 4 (or 5?) weekends have been about process (and grit?); they have been about sitting down for hours and crafting what I learned into a coherent literature review. They have been about concentration (and probably some weight gain due to all the sitting...maybe some bad posture as well).  And, at last, this past weekend I finished the 139-page monster, put it all into one Word file, and emailed my advisor (hopefully she won't hate me because of the length 😜 ).  Without counting references, front matter, and tables of contents, here is how the word count breaks down:
  • Chapter 1: Introduction ≈ 3,800 words
  • Chapter 2: Literature Review ≈ 16,600 words
  • Chapter 3: Methods ≈ 6,700 words
Assuming that the average academic article is around 8,000 words (with references), I've written about 4.5 academic articles' worth, and this isn't even the full dissertation!

Now that the draft is submitted I have some free time (maybe 3-4 weeks, if other cohort-mates' reports are any indication of the average wait) to work on a collaborative research project that's been on the back burner. In this project the operative word is cut: we need to get down to the appropriate word length. This is a little challenging because when it comes to cutting there aren't really that many options.  Do you cut your methods?  Then reviewers will call you out on the incompleteness of your methods (and you might actually get penalized for it!).  Do you cut your findings?  Well, for a qualitative research paper without some qualitative data (which takes up space), you could be told that there isn't enough data (or they could say that you are making things up).  Do you cut the literature review?  Well, this seems like the most likely place to make cuts, but then how is your reading audience assured that you did your due diligence? Hmmmm... dilemma...dilemma...dilemma.

This pondering led me down another path: a recent (recentish?) tweet by Maha Bali, a critique of doctoral programs.  The gist of it was that PhD programs don't really prepare you for a lot of things that are expected in academia. The traditional pillars of faculty life in academia are research and publishing (usually of the academic article variety), service, and teaching; however, the critique was that doctoral programs don't really prepare you for these things. I think this is a much larger discussion, one which first needs an analysis of what faculty actually do and what they are asked to do. Maybe this is an opportunity to examine what faculty do and their relation to other roles at the institution, but for now I want to focus on one part of it: the research and publishing.

I consider myself lucky to have had opportunities to research and publish prior to pursuing my EdD, and to do this both alone and in collaboration with others (as an aside, I find collaboration more satisfying as it satisfies both the work and social aspects of life). Working on the doctoral degree affords me the opportunity for some directed study to fill in potential areas that I was missing, and to see things from different frames of reference; for instance, I have a finer understanding of learning in other fields such as the military and health care (just to pick a couple) because of my cohort-mates.

However the dissertation process, and the reason for this process, seems quite arcane to me. I understand, from a cognitive perspective, that the dissertation is meant to showcase your skills as a researcher; and those with more romantic dispositions among us might also say that it contributes to the overall level of knowledge in our field. But if you are one of those romantics let me ask you this: when was the last time you cited a dissertation in your research?  And, just in case you are a smarty-pants and you have cited one dissertation, how often do you check out dissertation abstracts for your literature reviews? I digress though... Back on point...

It seems to me that as an academic (well, if I choose to go the tenure-track route once I earn my EdD) I need to contribute to the field by writing research articles, field notes, book chapters, reports, and maybe even a whole book; and I also need to provide peer reviews to fellow authors.  With the exception of book writing (which not every academic does), the vast majority of this writing is between 3,000 and 9,000 words.  A dissertation is considerably longer. This makes me wonder (again) whether the purpose of the dissertation is one of endurance (i.e., if you can do this, you can do anything!) or of holding us up to romantic, inappropriate, or irrational standards, as in "once you graduate you are expected to write books". As an aside, this may have been the case when there were fewer scholars around, but these days there aren't enough open positions in the traditional tenure-track faculty profession, and the alt-ac path isn't even addressed or acknowledged...but again, I digress.

The instructional designer in me has pondered the purpose of the dissertation (even before I applied to doctoral programs). If we've already replaced the once prevalent Masters Thesis with other means of assessment (or at least made the MA Thesis one of a few options), why can't we do the same with the Doctoral Dissertation, which - if we're honest - is just another form of assessment?  I should say that my own point of reference here is what are called 'taught' PhDs, where there is required coursework before you are allowed to become a doctoral candidate, and not the kind you might find in Europe, where you are apprenticed into the discipline (you basically apply as an apprentice) and just work on your dissertation upon completion of a masters program.

So my three questions out there for you:

  1. Do the traditional pillars of academia still hold up, or should they be re-conceptualized? If so, what might they be, and how would they work collaboratively with other parts of the academy?
  2. Based on these current pillars, where does doctoral education fall short? (name your field, as fields will most likely vary)
  3. Keeping the dissertation in mind: what would you replace it with? What are the underlying assumptions of your model?


Discussion welcomed (if you blog, feel free to post a link)










2017 year in review - school edition

From Wikipedia: 1779 illustration of a Catholic Armenian monk of the Order of St Gregory the Illuminator
Happy New Year! Yeah... it's the fourth of January, but I figure I can get away with it since we're still in the first week of 2018, and this is my first post for the year 😉

Things have been a little quiet here on the blog as of late. Not a lot of MOOCing, not a lot of virtual connecting, not a lot of collaborative or cooperative learning, as was the case in previous years.  There has been a lot of reading, mostly in monastic form - you know, lock yourself in a room and read until your inner teenager starts screaming at you "are we done yeeeeeet????" - I guess I am really in the thick of dissertation prep "stuff" (reading and sorting mostly), which I hope I'll get through in 2018 (for the most part, anyway).

I thought I would take a break from the monastic lifestyle to put together a few things that really struck me in 2017, at least as far as my own learning, and learning journey, go.

I guess the first thing of interest is that 2017 was my last year of courses. In spring 2017 (winter 2017 in Canadian terms) I completed EDDE 806, which was the last structured and graded seminar for my EdD.  Coming into the seminar I really wanted to be done with classes, so even though I had until fall 2017 to complete the requirements for the seminar, I made sure I was there each session and doing what needed to be done. 2017 didn't start off energetically though...

I think the major realization I came to in 2017 was that I had over-exerted myself in 2016, and as a consequence I really felt burned out; not starting to feel burned out, but actually burned out. I felt a little guilty because I still wanted to participate in extracurricular academic stuff such as Virtually Connecting, MOOCs, and working more on research papers with friends and colleagues, as these activities served both an academic purpose and a social purpose. However, in the quest to get moving with the dissertation proposal I needed to sacrifice most of the more fun things in academic life and work on the more utilitarian parts to just get done. The hope is that people will still be there once I am done 😜

Even though I had bits and pieces of my proposal already forming in 2016, I put all of those mostly on the back burner for the spring semester because I wanted to finish off the requirements for the final graded seminar of my studies (EDDE 806). I think what I learned here is that I am mostly a sequential person when it comes to doing stuff. Many people think I can multitask like a madman, but I guess it's all relative depending on where you are standing.  I think I can do two or three ongoing projects at a time (of varying cognitive intensity), but class + proposal wasn't working out for me.

So, once EDDE 806 was done, I took a mental breather, and a month later I began working on my proposal again.  The introductory chapter (chapter 1) and the methods chapter (chapter 3) seemed to be on a better path, so I focused on those.  I think the core of my proposal was pretty well done in August, when I completed writing chapters 1 and 3. The who, what, where, and how were pretty much answered.  The big question still looming was the "why".  It was hinted at in the introduction, but it needed more exposition, via a literature review chapter. So, from September to December I read...I read a lot...and then I read some more (all the while taking notes)...until my inner knowledge glutton decided it was enough. That was after around 500 PDFs, which breaks down to about 20 books/conference proceeding compilations and around 480 articles (overkill?).

This past month I've been going back through the readings and I realized a few things: First, I had downloaded some articles twice or thrice (since I had been collecting articles on MOOCs since 2014 in anticipation of this moment), but there weren't that many duplicates (less than 10% of the total, definitely).  In reading the same article a few times, over a period of a few months, different things jumped out at me each time; some things were the same, but my increased knowledge from the readings was definitely coloring how I interacted with the texts.

The second thing that jumped out at me is that no matter how detailed my lit-review search is, there is still stuff that will be missing from journal database searches. By looking at the references of the articles I read I saw that there were things that could be of interest (maybe), but they were in specific disciplinary journals that didn't deal with education (travel, medicine, geology, etc.), or in proceedings of conferences from various professional associations that my library doesn't have the most current access to, or even in a different language.  I could decipher things in French and Spanish mostly without a problem, and some Portuguese, but reading academic discourse in languages you are not used to reading academic discourse in is definitely slower and more taxing. This is not a revelation - I know this from a theoretical perspective as a linguist, but this is one of the few times I've felt it personally. Even so, there is probably good stuff out there in languages that are not accessible to me, and things that are not even on my radar, so 100% completeness on a specific topic is a fool's errand.

The third thing, which really surprised me, was how much misinformation there was about the history of MOOCs.  In the grand scheme of things it was a small number of articles, and it was limited to the introductory sections where authors were introducing the topic, but having such erroneous info printed in academic journals was a bit jarring. An example: a complete lack of discussion around cMOOCs, and xMOOCs being described as evolving from OER. Now, while a nuanced discussion on the topic could reveal that certain xMOOC providers do have dotted lines in their histories to OER, I don't think this is a broad generalization that can be made, and in an introductory section to a journal article it seems quite misleading, especially to someone who might know nothing of MOOCs.  This got me thinking about literature reviews in general, and how little work goes into them sometimes; the "write something, cite something, get it done" approach, rather than really thinking about it.  I know that I may have gone overboard with mine, but I think that the minimalist approach can also be troublesome because it can (and sometimes does) devolve into a "find the reference that supports your POV" exercise.

This leads me to AK's Theorem of the Funnel of Usage for the literature review (you read it here, so you better cite me! 😝). Basically, what my theorem says is that for a literature review you may read hundreds or thousands of pages of stuff, you may comment on or find useful things that amount to several hundred pages, but ultimately only a small amount of what you comment on and find useful will make it into your literature review.  I am looking at a lit review of 40 pages (double spaced) maximum. Assuming 15 pages per article on average (or around 300 per conference proceeding and book), I'd estimate 13,000 single-spaced pages read.  I have around 200 pages of excerpts and notes, which will go down to 20 pages. All that reading doesn't go to waste - it informs my views and stances (and also impacted some questions I'd like to ask in my interviews, in the methods section) - but there is definitely a funneling effect here.

Speaking of overkill, perhaps I didn't have to read as much as I did (we'll never know). I think the fear is that my exam committee will decide to ask me about an article that I actually have NOT read, that they think is important, and then I'll be standing there like an idiot (and possibly fail 😖).  There is a Greek expression that has stuck with me, πιασμένος αδιάβαστος, which basically translates to "caught unread" (unprepared). It goes back to the days when children read at home and went to school to be quizzed on what they had read. If you got caught unprepared, it was a bit of a mark of shame. I sort of felt like this when I was preparing for the comprehensive exams for my MA in applied linguistics, and I passed those with flying colors, so I guess I shouldn't be as worried about these things...but I guess students reap what teachers have sown...even if it was more than 25 years ago...

Another thing that really piqued my interest was thinking about how many web 2.0 technologies have died between 2008 and now.  In reading articles about the original CCK MOOC, and MOOCs or other collaborations that have happened since, I see quite a few technologies that have closed down (either recently or a while back).  This makes me ponder a bit about what can be done to digitally preserve some of the artifacts created, or the ways in which things worked, or something else.  While describing activities with PageFlakes (for example) is interesting, what happens 20 years down the road when no one knows how PageFlakes worked?

Finally, I realized that this entire process took about five times (or more) as long as I expected it to (there are quite a few "LOL :-)" notes next to self-imposed deadlines I've missed). I've also noticed that in academic articles I've read, others have forgotten to include something in the reference list that they cited in the body of the text, so it makes me feel less bad when I've done my due diligence but something still goes missing or is erroneously omitted.

I kind of feel like this quote from François Truffaut is applicable: "You start a film and you want to make the greatest one ever made. Halfway through, you just want to finish the damned thing." - just substitute film with dissertation :)

Anyway, back to the monastic lifestyle to get some more things done.  How was 2017 for you, academically speaking?
~~~~~~~~

PS Hey! Committee members! I don't know who you are yet, but if you googled me and came across this post, go easy on me 😜




One more thing!

... No seriously! I swear! This will be the last thing I read and then I will start to write my literature review ;-)

I am back up for air.  When I originally made my plans last May, to have the fall semester be the semester I focused on the literature review part of my dissertation proposal, I envisioned a lot of reading.  Reading on the train. Reading on the weekends.  Reading while walking (through text-to-speech), reading while driving (also through TTS).  My goal was to put pen to paper (figuratively speaking) on November 30th.  Well, that date has come and gone and I still haven't put pen to paper.  And I am still reading.

A couple of times I've actually come close to being done reading - having my "to read" folder on Dropbox empty and all things read, skimmed, or otherwise evaluated for usefulness for my proposal.   But whenever I've come down to 10 items, somehow the folder magically populates again.  Well...it's not magic - I add things to the folder.  Three weeks ago I remembered that I should look at the Horizon Reports to see if there were any prognosticated trends that related to my proposal. A couple of weeks ago I remembered that I hadn't looked at the Educause Review for related items. And this past weekend I got notice that the OLJ (OLC's peer-reviewed journal) had just released a new issue, and a couple of articles seemed relevant.  D'oh!

The encouraging news is that I am basically done. There are two or three relevant(ish?) articles on the OLJ, and I have the MOOC Invasion to read (or at least skim).  After that I am truly, 100%, no regrets, calling the literature review reading done and I will start to collect my notes to write the chapter up.

I suspect that I am not the only doctoral student who has suffered from the "one more article! one more book!" syndrome.  When doing research (alone or with colleagues) for an article or chapter-length piece, there is a tacit understanding that you just can't fit everything in, and that stuff gets left on the cutting room floor.  No one will give you an oral exam for an article you submit to a journal for review - so if you haven't read something...well, no one's the wiser.  For a dissertation, I feel like what you don't put in (and what you don't read) could come back to bite you in the oral defense, hence it's better to be prepared.  But...can you take preparedness to an unnecessary extreme?

Any thoughts from current doctoral students and recent grads?

Letters of recommendation - what's up with that?

It's been a while since I've blogged, or at least it really feels like it.  I've had my nose stuck in (virtual) books trying to get through my literature review - but more on that in some other blog post. I came across an article on InsideHigherEd this past week asking whether or not letters of recommendation are really necessary. My most immediate context is admissions, given that that's part of my work at the university, but the people who gave their two cents also mentioned something I had not considered: academic jobs. I won't rehash the opinions of the people who wrote for the article, but I will add my own thoughts, mostly from a graduate admissions perspective. I don't have a fully formed opinion on letters of recommendation for employment purposes, but I'll add my two cents as a prospective hire (in a few years, when I might be done with my EdD :p)

For admission to a graduate course of study, be it a masters program, a PhD program, or even a certificate program, I personally don't see much value in letters of recommendation any longer.   My point of view is framed from the perspective of a student, an instructor, and a program administrator.   When I was applying for my first master's degree I bought into the rationale given to me for letters of recommendation: former professors can provide the admissions committee qualitative information about you as a learner that a transcript cannot.  This is all fine and dandy, and for me it worked out: I was working on campus as an undergraduate student, and I had some professors whom I had for more than one course and who were able to write letters of recommendation.  This was a privilege that other students may not have had.  For my second masters I was applying to the same college, and to a new program of the college that was looking for students, so getting recommendations wasn't that big of a deal.  Once I finished my second masters, I really didn't want to deal with more solicitations for letters of recommendation - I started to feel odd, since I kept going back to the same well of people for recommendations.

So, I applied to two programs concurrently so that I could write one statement and the letters of recommendation could pull double duty.  After I finished my last two masters degrees I took some time off from regular, "regimented" school and programs and focused on MOOCs.  Going back to earn an EdD posed some issues as far as recommendations go.  I had previously applied to a PhD program at my university (at the college in which I earned two masters! - never heard a final decision on my application, by the way), and by the time I wanted to apply to Athabasca I felt that the well had run dry for recommendations.  Former professors still gave me recommendations, but I kind of feel I was taking advantage of their kindness by asking for a recommendation for yet another degree program I wanted to pursue (don't judge, at least I completed my degree programs haha 😜).  Not that I am thinking a ton past my completion of the EdD, but should I want to pursue a regimented course of study in the future (degree or certificate program), recommendations will be an issue; not because I can't get them, but because I feel bad about asking for them - after all, I am asking someone to volunteer their time to give me a recommendation when my academic record should suffice. This is how I feel about the GRE and other entrance tests, by the way.  If you've completed undergraduate studies then the GRE is pointless - you can do academic work.  If you are unsure of the academic capabilities of applicants, accept them provisionally.  Just my two cents.

Another lens I view this through is the administrative one.  Asking for letters of recommendation, and subsequently receiving them (or not), requires time.  It requires time from the student (especially in tracking down referees if they don't submit things on time), it requires processing time from admissions departments, and it requires reading time on the part of the committees who review applications. When a system requires that much time and effort, you have to ask what the benefit, or net positive gain, is.  Going back to the story I was told - the qualitative complement to the transcript, basically - it does make sense in theory, but in practice... not so much.

While I don't make decisions on applications that come to my department for review, I sneak a peek at materials that come in because I need to process them.  What I've noticed is that by and large (1) recommendations are uneven, and (2) they tend to be the same, more or less, just with different names.  The unevenness is partially cultural in nature.  If you get a recommendation from someone employed at a western institution, you tend to get (more or less) what you seek.  However, non-western colleagues don't use the recommendation system, so for them a recommendation is just an affirmation that the student was indeed in their class, in a specific semester, and that from what they remember the student performed well.  The "basically the same" aspect of recommendations runs into the same problem as non-western recommendations; that is, recommendations basically boil down to: student was in class, they performed well, so accept them.  It just turns out that western colleagues are more verbose in their recommendations, so they happen to add in some anecdotes of your awesomeness as a candidate, but even those anecdotes tend to run along the same wavelength most of the time: asked interesting questions in class, was the first to post in forums, engaged fellow classmates, submitted assignments early, etc.  From an administrative perspective there is (so far as I know) no background check on the folks providing recommendations, so we are taking what they write in good faith.

Finally, as an instructor, I am lucky, in a sense, that I haven't had to write a ton of recommendations.  I've done so a couple of times, but after a few original recommendations I've basically gone back to the "awesome student, accept them, here are a couple of anecdotes" formula, because that's life - we're not living in Lake Wobegon. I'd gladly give a recommendation to former students who did well in my classes, but it's hard not to feel like I am writing a generic letter sometimes. So why spend time writing something that feels like a template letter if I am not providing much value to the system?

In short, recommendations for admission add no value while taking time and resources away from other areas.

In terms of letters of recommendation for academic employment, on a purely theoretical basis I'd say that they are pointless too - both for reasons articulated in the IHE commentary piece, and for a reason similar to one from graduate admissions: the genericness.  I think having some references is fine, but a quick conversation (or heck, a survey-style questionnaire) would be preferable to a letter. The reason I think letters aren't that useful in hiring decisions is the same reason no one gives recommendations anymore (for us regular plebes seeking work): people sue if they get wind that they got a bad recommendation. Generally speaking, no one will agree to give you a letter of recommendation (or a reference) if they can't give you positive reviews, and HR departments just confirm dates of employment these days.  Nothing more, nothing less; otherwise they risk a lawsuit. So, if you're not getting much information about the candidate, and if the information is skewed toward the positive (because that's how the system works), then is the information you're getting valuable?  I'd say no.

So, what are your thoughts?

Academic precarity and other-blaming

I think I am going to commission a saint painting (Byzantine style, of course) of Paul Prinsloo (I just need to find a clever saint epithet for him).  Here is another thought process sparked by something he shared recently on his Facebook.  Paul shared this blog post without comment (I swear, sometimes I feel like this is an online class he's conducting and we're all participating in a massive discussion ;-) ) and it got me thinking...

I do recognize the adjunctification (and probably de-professionalization) of the professoriate, and I see it as a trend that's not new.  If I really think back to my undergraduate days, almost 20 years ago now, I could probably see it back then as well. There is, however, plenty of blame to go around. Academia is (slowly or quickly, depending on your standpoint) becoming a capitalist monster operating on a greedy algorithm. My own university, a state university, seems to be in competition with other state universities in the same state.  Instead of looking at complementary and cross-institutional programs to help one another out (heck, we all get money from the same source!), we compete with each other in (what seems to be) a Hunger Games-like environment for academia. So we must have a program X, a program Y, and a program Z, because our sister schools (20-60 miles away) have similar programs.

Against this background, we also have internal fiefdoms shaping up. While the few (lucky?) ones on the tenure track hunker down to protect their ever-diminishing ranks and privileges, they leave others on the outside to fend for themselves and be picked off by the (metaphorical) wolves, getting adjunct jobs with no job security, high operating costs (go ahead and travel between several job sites so you can string together enough work to pay the bills; see how much that costs monetarily, physically, and emotionally), no benefits, no retirement, and low wages.  Let's not even get started with the (tenured) faculty-know-best mentality that exists, where two of the by-products are bastardizations of the notions of self-governance and academic freedom.

There are plenty of problems with academia to go around. That said, anyone pursuing any sort of degree - doctoral, masters, or even bachelors - needs to take a hard look at the path that they are setting for themselves. You need to pursue something that is smart and that helps you get a job to pay for yourself (think of Maslow's lower levels if you will). I sort of fell for the glamour of specific jobs.  I loved technology and went for a computer science degree as an undergraduate. I liked it, but that sort of thing wasn't exactly what I was passionate about.  But back then you could easily get a six-figure job, with a bonus, right out of college if you had a CS degree.  And then the market crashed and jobs were sent offshore. Luckily I didn't have school debt.  This maybe was easy to predict, but I certainly didn't see it.  I've been more cautious since then.  Faculty jobs (perhaps with the benefit of hindsight) are easy to see as diminishing in number.

The line that really made me roll my eyes in the blog post was this:
Do you retrain to do HR or Admin or tax preparation and forfeit the research you have done, or do you follow the conventional wisdom that if you are tough enough to hang in there, and brilliant enough to shine through, you'll be the one who gets the job and gets to be the professor? 
The answer is "yes" - you retrain and get other jobs to sustain yourself. If her LinkedIn page is any indication, I'm slightly younger than her, and I've had to adapt a few times in my professional life to keep a roof over my head. It's what the regular person does if they want to survive.  You can do something that's fulfilling in life, but sometimes the work that pays the bills and sustains us does not coincide with what fulfills us deep in our soul.

The fact of the matter is that academia (at least in the US; I'm not sure globally) has problems. Systemic and systematic problems. Both systems and individuals need to be examined to fix them, but when people are just looking out for their own good...well, eventually we all lose. I should point out that I am in a doctoral program as well. I do it because I like learning and it stretches my mind. But I don't go into debt for it, and I know that a tenure-track job isn't on the horizon for me, for all the obvious reasons. I can still research and publish. I actually do that now, and I've worked with some pretty fabulous people over the years.  I count myself lucky to have been at the right place, at the right time, in the right mindset to capitalize on those acquaintances, make good friends, and expand my learning in the process. The bottom line is that just because you've earned a doctorate doesn't mean you'll be getting a tenure-track job. That's not how the system works. The system is broken, and you can't play by its "rules" if you want to change the system.

Just my two cents.  Your thoughts?  (now back to my lit review)

speedwalking the lit review


The lit review (lit review 2.0, as I dub it) has been going from a crawl, to a walk, to (hopefully) a speedwalking pace.  Lit review 1.0 was last fall; it was a little too broad to be fit for purpose, and it really explored a lot of themes that might be worth keeping in mind as things to discuss in the discussion portion of the dissertation - you know, after I pass the proposal defense, and collect and analyze data - so it's not all that useful now.

Because I am working on collaboration as a topic, and more specifically collaboration borne out of participation in a specific set of MOOCs, I am looking at some literature on MOOCs and some literature on collaboration.  After I finished reading a handful of books on collaboration, I've made my way to academic articles on MOOCs (before I go back to collaboration as discussed in academic articles).  It's been a couple of years since I've sat down to make a concerted effort to read articles on MOOCs (given that most of my spare time was spent on class stuff).  As I read these newer articles on MOOCs (2014 and beyond), the obligatory 'historical' introductions (you know, where MOOCs came from) seem to be all over the place.  Some describe them in ways that closely tie them to the OCW movement.  Others skip everything and start with Thrun and Koller.  Others point to Siemens and other Canadian colleagues with MOOCs like CCK.  Yet others find imaginative ways to combine some of these†.

Despite these (minor?) issues in their introduction or background sections, these articles made it through the gauntlet of peer review and got published, so they are now part of the research record.  It's not that I am hugely bothered by what I view as historical inaccuracies in these articles. After all, the advice given to me by my mentors is to go to the original citation and look up the fact and its underlying reasons there, instead of citing someone who cited the info.  It's a good point, and it's good to double-check your "facts", but it really adds to the workload if you can't trust what you're reading in an article.  Does the level of care in a literature review reflect the level of care taken to craft the methods section, data analysis, and conclusions?  Maybe it doesn't; maybe the introduction is just a short afterthought after all else is done. I don't know.

As I go through this pile of academic articles I am struck by the two warring sides in my mind.  One side (the completionist daemon) wants me to read every single word and analyze every single sentence of an article.  OK, maybe not painstaking analysis, but really give each article a good portion of my mindshare in order to make sure that I am correctly getting out of it what the authors intended.  On the other side of things, I am looking at the large (digital) pile of papers to read, and a more pragmatic daemon is pointing me toward being more efficient‡. The efficiency my pragmatic daemon advocates is skimming introductory and background sections and really just focusing on data analysis and conclusions - basically assuming that the journal editors and peer reviewers have done a good enough job that I can be reasonably assured that what I am reading is worthwhile♠.  The problem with the pragmatic daemon's approach is that in the haste to be more efficient (just the findings, ma'am) I might make the same errors as those folks whose (minor?) issues in their introductory and background sections make me roll my eyes (errors I don't want to make).  I am sure there is a good middle ground, which I am intent on finding before I am done with this proposal...

How is your dissertation process going?  If you are done with your doctorate, what were your daemons?



DIGITAL MARGINALIA
† maybe it's my own bias as a MOOC follower since 2010(ish), but the only correct version of MOOC history seems to be CCK08 as the start. Yes, the open movement probably influenced it a lot, but I wouldn't go as far as to call it a descendant of OCW.
‡ imagine air quotes around this word.
♠ not counting predatory journals here.

It's the end of the MOOC as we know it, and I feel...

...ambivalent?  I am not sure if ambivalence is the word I am going for because I am getting hints of nostalgia too.  Perhaps though I should take a step back, and start from the beginning.

This past weekend two things happened:

The first thing is that I've finished reading full books as part of my literature review for my dissertation, and I have moved on to academic articles - articles I've been collecting on MOOCs and on collaboration in general. While MOOCs aren't really the main focus of my dissertation study, they do form the basis, or rather the campgrounds on which the collaborative activities occurred, and it's those collaborative activities I want to examine. This review of MOOC articles (while still in the relatively early stages) made me reflect back on my own MOOC experiences since 2011.

The second thing is that I received a message from FutureLearn which was a little jarring and made me ponder.  Here is a screenshot:



My usual process, when it comes to MOOCs these days, is to go through the course listings of the usual suspects (coursera, edx, futurelearn) and sign up for courses that seem interesting.  Then, as time permits, I go through these courses.  I usually carve out an hour every other Friday to do some MOOCing these days, since most of my "free" time is spent on dissertation-related pursuits.  It would not be an overstatement to say that I have quite a few courses that are not completed yet (even though I registered for them about six months ago).  What can I say? I find a ton of things interesting.

If you're new to MOOCs you might say "well, it was a free course, and now it's going back into paid land - you should have done it while it was available". Perhaps you're right, perhaps not.  For a MOOC old-timer like me (ha!), this type of message is really disheartening, and it speaks quite well to the co-opting and transmogrification of the MOOC term (and concept) into something that is not really recognizable when compared to the original MOOCs of 2008-2012. Perhaps it's even a bit like an erasure - erasing it from the past - but luckily at least articles exist to prove that it existed, and cMOOC is still recognized as a concept.

I am convinced that platforms like coursera and futurelearn can no longer be considered MOOC platforms, and should be referred to either as learning management systems (which they are) or as online learning platforms. Over the past few years, things that seemed like a given for an open learning platform have started to disappear.  First, the 5Rs stopped being fully applicable.  You couldn't always revise or remix materials that you found on these platforms...but you could download copies of the materials so that you could retain your own copy, which meant you could potentially reuse and redistribute.  Redistribution was the next freedom to go, and after that was reuse.  You could still download materials, though (at least on coursera and edx).  Then a coursera redesign made video download not an option (still an option in edx; not sure if it was an option in futurelearn), and now courses are becoming time-gated... argh.

The certificate of completion was an interesting concept - a nice gift from the people who offered the course if you jumped through their hoops to do the course as they intended, but it was really only valuable when it was free of cost. This freebie has also been lost (not a great loss since it doesn't really mean much - at least not yet).

All of this closing off of designs and materials (closing in a variety of ways) makes me long for days gone by - days not long ago, with MOOCs only about 10 years in the past.  Although, I suppose in EdTech terms 10 years might as well be centuries.

I do wonder when might be a good time to reclaim the name and offer up connectivist courses again - or perhaps it's time to kill the term (wonder what Dave thinks of this ;-) ) and create something that doesn't have such commercial interests infused into it right now.

Thoughts?


Instructional Designers, and Research

Yet another post that started as a comment on something that Paul Prinsloo posted on facebook (I guess I should be blaming facebook and Paul for getting me to think of things other than my dissertation :p hahaha).

Anyway, Paul posted an IHE story about a research study which indicates that instructional designers (IDers) think they would benefit from conducting research in their field (teaching and learning), but don't necessarily have the tools to do so.  This got me thinking, and made me ponder the demographics of the IDers in this research: they were in higher education.  I do wonder whether IDers in corporate settings value research as much.

When I was a student studying for my MEd in instructional design (about 10 years ago), I was interested in the research aspects and the Whys of the theories I was learning. I guess this is why further education in the field of teaching and learning was appealing to me, and why I am ultimately pursuing a doctorate. I digress, though - my attitude (inquisitiveness?) stood in contrast with fellow classmates who were ambivalent, or even annoyed, that we spent so much time on 'theory'.  They felt they should be graduating with more 'practical skills' in the whizbang tools of the day.  We had experience using some of these tools - like Captivate, Articulate, Presenter, various LMSs, and so on - but obviously not the 10,000 hours required to master them†. Even though I loved some classmates (and for those of you reading this, it's not a criticism of you! :-) ), I couldn't help but roll my eyes at them when such sentiments came up during out-of-class meetups where we were imbibing our favorite (hot or cold) beverages.  Even back then I tried to make them see the light.  Tools are fine, but you don't go to graduate school to learn tools - you go to learn methods that can be applied broadly, and to be apprenticed into a critical practice.  As someone who came from IT before adding ID to my knowledge, I knew that tools come and go, and that a degree focused mostly on tools is a waste of money (and does students no good....hmmmm...educational fast food!). I know my classmates weren't alone in their thinking, having responded to a similar story posted on LinkedIn this past summer.

My program had NO research courses (what I learned about research was on my own, and through the mentorship of professors in my other masters programs). Things are changing in my former program, but there are programs out there, such as Athabasca University's MEd, which work better for those who want a research option.

Anyway, I occasionally teach Introduction to Instructional Design for graduate students, and I see both theory-averse students (like some former classmates) and people who are keen to know more and go deeper. I think as a profession we (those of us who teach, or run programs in, ID) need to do a better job of helping our students become professionals who continually expand their own (and their peers') knowledge through conscious attempts at learning, and research skills are part of that.  There should be opportunities to learn tools, for the more immediate need of getting a job in the field, but the long-term goal should be setting up lifelong learners and researchers in the field.  Even if you are a researcher with a little r, you should have the tools and skills to do this to improve your practice.

As an aside, I think that professional preparation programs are just one side of the equation.  The other side is employment and employers, and the expectations those organizations have of instructional design.  This is equally important in helping IDers help the organization. My conception of working with faculty members as an IDer was that we'd have a partnership: we'd jointly work out what was best based on what we had (technology, expertise, faculty time) so that we could come up with course designs that would be good for their students. The reality is that an IDer's job, when I did this on a daily basis, was much more tool-focused (argh!).  Faculty would come to us with specific ideas of what they wanted to do, looking for tool recommendations and implementation help - but we never really had those fundamental discussions about whether the approach was worth pursuing in the first place. We were the technology implementers and troubleshooters - and on occasion we'd be able to "reach" someone and develop the kind of relationship that allowed us to engage in those deeper discussions. When the organization sees the IDer role as yet another IT role, it's hard to make a bigger impact.

On the corporate side, a few of my past students who work(ed) in corporate environments have told me that theory is fine, but in academia "we just don't know what it's like in corporate", and they would have liked less theory and more hands-on preparation for dealing with corporate circumstances. It's clear to me that even in corporate settings, the organizational beliefs about what your job as an IDer is impact what you are allowed to do (and hence how much YOU impact your company). Over drinks, one of my friends (who works in corporate ID, but was formerly in higher education) recently quipped that the difference between a credentialed (MEd) IDer and one who is not credentialed (someone who just fell into the role) is that the credentialed IDer sees what's happening (shovelware) and is saddened by it, while the non-credentialed person thinks it's the best thing since sliced bread‡. Perhaps this is an over-generalization, but it was definitely food for thought.

At the end of the day, I'd like to see IDers more engaged in education research. I really see it as part of being a professional who wants to grow and be better at what they do, but the educational programs that prepare IDers need to help enable this, and the organizations that employ them need to see them as an asset - similar to librarians, for whom research is expected as part of the job.

Your thoughts?


MARGINALIA:
† This is obviously a reference to Gladwell's work, and the 10,000 hours of deliberate practice.  It's one of those myths (or perhaps something that needs a more nuanced understanding). It's not a magic bullet, but I used it here for effect.
‡ Grossly paraphrasing, of course

Ponderings on predatory journals


I originally posted this as a response in a post that Paul Prinsloo wrote on facebook (in response to this Chronicle Article on Beall's list and why it died), but it seemed lengthy enough to cross-post as a blog post :-)
--------------

So many issues to dissect and analyze in such a (relatively) brief article. It is important to see and analyze predatory journals (and academic publishing in general) systemically, alongside other trends in academia. This includes the fetishization of publish-or-perish, and the increased research requirements to even get a job in academia (see the recent article on Daily Nous as an example).

One thing that bugged me was this line -- "Good journals are not going to come to you and beg you for your articles. That should be your first clue." There are legitimate journals out there that are new, and hence don't have any current readership because they are new, so they can't necessarily rely on word of mouth to get submissions for review. I am helping a colleague get submissions for upcoming issues (shameless plug: http://scholarworks.umb.edu/ciee/ ), and we certainly solicit submissions from within our social network (and the extended social network). We don't spam people (perhaps that's the difference), but the social network is used for such purposes.

I also don't like the idea of categorizing journals as 'high quality' and 'low quality'. Anecdotally, I'd say that what passes as high quality tends to (at least) correlate with how long a journal has been in the market, the readership it has amassed over the years, and the exclusivity it has developed because of this (many submissions, few spots for publication). Exclusivity doesn't necessarily mean high quality, and a high-quality journal doesn't necessarily mean that a specific article is high quality (but we tend to view it under that halo effect).

At the end of the day, to me it seems that academics are equally susceptible to corporate interests as other professionals. True freedom to say what you need to say sometimes requires a pseudonym - sort of like the Annoyed Librarian...

Validity...or Trustworthiness?

It's been a crazy few days!  If it weren't for my brother coming down to hang out for a while, I probably would have more in common with Nosferatu than with a regular human being 😹 (having been stuck indoors for most of the weekend).  When I started off this summer I gave myself a deadline to be done with my methods chapter (chapter 3 of my proposal) by August 30th.  After reading...and reading...and reading...and re-reading (select articles from EDDE 802), I reached a point of saturation when it comes to methods.  I really wanted to read all of Lincoln & Guba's 1985 book, Naturalistic Inquiry, during this round, but it seems I will just need to focus on specific aspects of the book.

So, in this whirlwind of activity, I went through the preamble to my methods section, my target participant descriptions, my data collection, my data analysis techniques, and any limitations.  I added to these sections and went more in-depth in each one; I corrected issues that were brought up by Debra in EDDE 805, some outstanding issues and comments from the feedback on the parts I had worked on in MDDE 702, and some of the initial comments I got back from my dissertation chair. **phew** That was hard work!  The only parts I still have left to complete in order to be "done" with my methods section are (1) the ethics section; (2) the validity/reliability/bias section; (3) a conclusion section for the chapter bringing it all together; (4) an appendix with a sample survey; and (5) an appendix with the participant consent form.  I am considering adding (6) the REB application as an appendix as well before I call this section "done".  I am not sure I will be done by August 30th (as was the original plan), but I think I will be damn close.

That said, there is one thing that is tripping me up, and that has to do with the validity/reliability/bias section.  Bias is actually not that hard.  I think I can write up procedures and things to be on the lookout for in order to avoid bias in both data collection and analysis.  The thing that is much more concerning is philosophical:  Do I go with Trustworthiness, Credibility, Dependability, and Transferability as what I talk about in this section (coming from the Lincoln and Guba tradition of qualitative research)?  Or do I choose the more traditional Validity and Reliability and discuss my methods in that frame of reference?  Of course, for qualitative studies the Lincoln & Guba approach makes sense (at least it does to me, and it's referenced in a variety of other texts I've read on qualitative approaches to research), but at the same time quite a few of the texts I've read (both on case studies and on qualitative studies in general) still use reliability and validity as terms in qualitative research. So, do I "translate" validity and reliability (from the texts) into Lincoln & Guba terms?  Just discuss in the framework of Lincoln & Guba? Or try to smash both together?  Perhaps start with Validity & Reliability and transition to L&G terms since they make sense?  I need to re-read Chapter 11 of Naturalistic Inquiry this week to help make up my mind (any thoughts are more than welcome in the comments).

As of this point I am at 23 pages (with 1 paragraph of lorem ipsum text, and quite a few scraps of thought patterns for items 1 and 2 above), so I am thinking I should wrap this up soon and not succumb to logorrhea.