Club Admiralty

v7.2 - moving along, a point increase at a time

Multilitteratus Incognitus

Pondering what to learn next 🤔

DALMOOC Episode 9: the one before 10

Hello to fellow #dalmooc participants, and to anyone interested in my explorations of #dalmooc and learning analytics in general.  It's been a crazy week at work, with many things landing all at once: finishing advising, keeping an eye on student course registrations and new student matriculations, making sure that our December graduates are ready to take the comprehensive exam... and many, many more things. This past week I really needed a clone of myself to keep up ;-)  As such, I am a week behind on dalmooc (so for those keeping score at home, these are my musings for Week 7).

In week 7 we are tackling text mining, a combination of my two previous disciplines: computer science and linguistics (yay!). This module brought back some fond memories of the corpus linguistics explorations I did a while back, while I was doing my MA in applied linguistics. This is something I want to get back to at some point - perhaps when I am done with my doctorate and I have some free time ;-).  In any case, to start off, I'd like to quote Carolyn Rose when she says that machine learning isn't magic ;-) Machine learning won't do the job for you, but it can be used as a tool to identify meaningful patterns. Before you start the machine learning process, you need to think about which features you are pulling from the data; otherwise you end up with output that doesn't make a ton of sense. The old computer science adage "garbage in, garbage out" is still quite true in this case.
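To make "pulling features from data" concrete, here is a minimal stdlib-Python sketch (names and stopword list invented for illustration, not from the course materials) of the simplest kind of text feature: bag-of-words counts. Note that even the tiny stopword list is a feature-design decision - choose it badly and "garbage in, garbage out" applies.

```python
from collections import Counter
import re

def bag_of_words(text, stopwords=frozenset({"the", "a", "is", "to", "and"})):
    """Lowercase, tokenize on letter runs, and drop stopwords.

    The stopword set here is a toy example, not a real linguistic
    resource -- deciding what to keep or drop is exactly the kind of
    feature-design choice that shapes what the model can learn.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t not in stopwords)

features = bag_of_words(
    "The tutor asked a question, and the student answered the question."
)
# 'question' counts twice; 'the', 'a', 'and' are filtered out.
```

Each resulting Counter is one row of a feature table; a classifier then learns from those counts rather than from raw text.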

In examining some features of language, we were introduced to a study of low-level features of conversation in tutorial dialogue: turn length, conversation length, number of student questions, student initiative, and student-to-tutor word ratios. The final analysis was that this is not where the action is. What needs to be examined in discourse situations in learning are the cognitive factors and underlying cognitive processes that are happening while we are learning. This reminds me of a situation this year when a colleague asked me if I knew of research indicating whether response length in online discussion forums could be used, in a learning analytics environment, to predict learner success.  I sort of looked at my colleague as if they had two heads because, even though I didn't yet have the vocabulary to explain that these were low-level features, I was already thinking that they weren't as useful as looking at other factors.  So, to bring this back to dalmooc: shallow approaches to the analysis of discussion are limited in their ability to generalize. What we should be looking at are theory-driven approaches, which have been demonstrated to generalize better.
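To show just how "low-level" these features are, here is a rough sketch (the transcript and function names are invented, not taken from the study) of how turn counts, student questions, and the student-to-tutor word ratio could be computed from a dialogue transcript:

```python
# Hypothetical tutorial dialogue: a list of (speaker, utterance) pairs.
transcript = [
    ("tutor", "What do you think happens when we double the input?"),
    ("student", "I guess the output doubles too?"),
    ("tutor", "Good. Why would that be the case?"),
    ("student", "Because the function is linear, so scaling the input scales the output."),
]

def shallow_features(turns):
    """Compute surface-level dialogue features: no cognition involved."""
    words = {"tutor": 0, "student": 0}
    student_questions = 0
    for speaker, utterance in turns:
        words[speaker] += len(utterance.split())
        # Crude heuristic: a student turn ending in '?' counts as a question.
        if speaker == "student" and utterance.rstrip().endswith("?"):
            student_questions += 1
    return {
        "turn_count": len(turns),
        "avg_turn_length": sum(words.values()) / len(turns),
        "student_tutor_word_ratio": words["student"] / words["tutor"],
        "student_questions": student_questions,
    }

features = shallow_features(transcript)
```

Everything here is countable without understanding a single word of the conversation, which is precisely why such features say so little about the underlying learning.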

In the theoretical framework we look at a few things (borrowing from sociolinguistics, of course):  (1) power and social distance explain social processes in interactions; (2) social processes are reflected through patterns in language variation; (3) so our hope is that models that embody these structures will be able to predict social processes from interaction data.

One of the things mentioned this week was transactivity (Berkowitz & Gibbs, 1983): a contribution that builds on an idea expressed earlier in a conversation, using a reasoning statement.  This work is based on the ideas of Piaget (1963) and cognitive conflict.  Kruger and Tomasello (1986) added power balance to the equation of transactivity.  In 1993, Azmitia & Montgomery looked at friendship, transactivity, and learning: in friend pairs there is higher transactivity and higher learning (not surprising, since the power level between the two people is about the same).



Finally, this week I messed around with LightSIDE, without reading the manual ;-).  According to Carolyn the manual is a must-read (d'oh ;-)  I hate reading manuals).  I did go through the mechanical steps that were provided on edX to get familiar with LightSIDE, but I was left with a "so what" feeling afterward.  The screenshots are from the work that I did.  I fed LightSIDE some data, pulled some virtual levers, pushed some virtual buttons, turned some virtual knobs, and got some numbers back.  I think this falls in line with the simple text mining process of having raw data, then extracting some features, then modeling, and finally classifying. Perhaps this is much more exciting for friends of mine who are more stats- and math-oriented, but I didn't get the satisfaction I was expecting - I was more satisfied with the previous tools we used. Maybe next week there will be more fun to be had with LightSIDE :-)
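LightSIDE itself is a GUI tool, but the raw data → feature extraction → modeling → classification pipeline it wraps can be sketched in a few lines of plain Python. This is a toy illustration with invented names and data - not what LightSIDE actually does under the hood - just the shape of the process:

```python
from collections import Counter
import re

def tokenize(text):
    """Step 1->2: turn raw text into tokens (the simplest feature)."""
    return re.findall(r"[a-z']+", text.lower())

def train(labeled_texts):
    """Step 3: build one aggregate word-count profile per label
    (a crude centroid-style model)."""
    model = {}
    for label, text in labeled_texts:
        model.setdefault(label, Counter()).update(tokenize(text))
    return model

def classify(model, text):
    """Step 4: score a new text by how often each label's profile
    has seen its tokens, and pick the best-scoring label."""
    tokens = tokenize(text)
    return max(model, key=lambda label: sum(model[label][t] for t in tokens))

# Toy training data, invented for illustration.
model = train([
    ("question", "why does this happen how do we know"),
    ("statement", "the result follows from the definition we saw"),
])
```

Pulling the virtual levers in LightSIDE amounts to swapping in better feature extractors and better models at the corresponding steps.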

So, how did you fare with Week 7?  Any big take-aways?





