Multilitteratus Incognitus
Discord as a discussion forum - initial thoughts from last fall
10 January 2026 | Tags: courseDesign, discord, forums, instructionalDesign, InstructionalTechnology, participation, teaching

Last fall, I got to design and teach a course that I've been wanting to teach for a very long time: Language Learning and Technology (or, in other words, Computer-Assisted Language Learning, if you are in the language education field). It was a lot of fun to design, and a good experience to teach. I really enjoy design work (even though I don't get to do it often), and it's been ages since I taught a regular graduate class; all of my grad courses since 2021 have been Capstone courses, which I've treated mostly like a studio space with peer review. There isn't a lot of "discussion" happening in that kind of course.
Anyway, the last time I taught a course with regular weekly discussions, we used Blackboard "Classic." I've been using discussion forums on Canvas for a few years now through my OLC facilitation, and before that, I had experience with a variety of LMSs and their associated forum functionality. They are all pretty similar, and honestly, I felt a bit bored by the format, having used it both as a learner and as an instructor since 2005. Over the years, what I've seen most students do is jump in, say something by some initial deadline, leave, come back 4 days later, post responses to one or two peers, and disappear again until the next week. That's not really a discussion; there's a lack of idea exchange. I've described this behavior as drive-by shouting: you're in a slow-moving car, you shout something out the window, and you speed away. Meanwhile, the people on the side of the road who are having a conversation are left asking, "what was that about?" When I was new to teaching, I used a discussion board rubric based on the Middle Tennessee State University discussion forum grading rubric. It worked fine, and students got their grade, but over the years, it started to seem like a lot of work for a discussion forum activity that's worth 10% of the overall participation grade.
What did I want the discussion to be like? I wanted it to be more "of the moment." If students were reading something in the weekly readings, I wanted them to be able to come online and post something that intrigued them, even if they had not read ALL the readings just yet. I wanted this facility to be in their pocket, so if they had their phone with them while reading (even if they didn't have access to a laptop for Canvas), they could just share multimodal content with others. I also wanted them to share what they found with others in a more drip-feed fashion. Found a relevant podcast episode, article, or resource? Why not share it with others? Finally, I wanted folks to feel like the discussion forum was more of a watercooler than a space where they felt compelled to show up twice per week. Given all this, I decided to give Discord a try. Yes, I know that Canvas has a mobile app, but while its UI was innovative in 2011 when I first encountered it (and also in comparison to other LMS providers of the time), it feels a bit stale today and not conducive to what I wanted to do.
Here were some parameters for discussion forums each week:
- Students needed to "log in" five days per week and post something.
- This means that they needed at least five posts per week (see the sketch after this list for one way such a tally could be automated).
- The posts could be anything related to the course; I'd also accept one post per week that was off-topic (e.g., share a funny photo of your pet, a recommendation for a TV show, or whatever builds community).
- Students would need to participate in the weekly topic, but if there was a week that just didn't resonate with them, they could lurk in that channel, read what others wrote, and post in other channels.
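For the curious, here's a rough sketch of how a "distinct active days" tally could be automated with the discord.py library. This is just an illustration, not something I actually set up; the bot token is a placeholder, and resetting the tally each week is left out for brevity.

```python
# Rough sketch, illustration only: tally the distinct days on which each
# student posts so a "five days per week" expectation can be spot-checked.
# The bot token is a placeholder; resetting the tally each week is omitted.
from collections import defaultdict

import discord

client = discord.Client(intents=discord.Intents.default())

# student id -> set of calendar dates (UTC) on which they posted
active_days = defaultdict(set)

@client.event
async def on_message(message):
    if message.author.bot:
        return  # ignore bots, including this one
    active_days[message.author.id].add(message.created_at.date())

def met_weekly_minimum(student_id: int, min_days: int = 5) -> bool:
    """True if this student has posted on at least min_days distinct days."""
    return len(active_days[student_id]) >= min_days

client.run("YOUR_BOT_TOKEN")  # placeholder
```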
Have you used an alternative to the discussion forum (that isn't VoiceThread)? What do you think?
Rolleyes... LLM edition
3 January 2026 | Tags: AI, AIEd, LLM, microsoft, training

It's the winter break, and now that I have some actual downtime, I decided to do some Microsoft training. I think the last time I had the mental space to do any of this on Microsoft Education was sometime in 2021 (if the last badge earned is any indication). Anyway, I went through the course offerings to see what's on tap at Microsoft, and I came upon a whole load of AI-related things. Cool. While I've been paying attention to this whole AI thing, I haven't really paid that much attention to what corporate training is saying about their products (and how they might be used).
I've seen some colleagues post their badges on LinkedIn, so I thought I'd also follow the AI for Educators learning path on Microsoft Education to get conversant with what others on my campus are experiencing through these trainings.
Now, AI has been touted as a time saver on a variety of fronts, a claim that I think has yet to pan out. As I was going through the AI for Educators training, the following use case scenario was presented:
An educator in a secondary class needs to create a rubric for an upcoming writing unit in Greek mythology, then write an exemplar response for learners to follow. Facing what could be hours of work to complete, the educator turns to an AI tool and starts to enter in the prompts. The educator starts with the rubric. They turn to Microsoft 365 Copilot Chat and paste in the state standards and description of the upcoming Greek mythology writing unit. Then, they ask Copilot Chat to create a 20-point rubric including all the information they pasted. It’s completed, but it’s not quite what they were expecting. After interacting with Copilot Chat with a few more clarifying prompts like “Make the wording better for a 13-year-old,” the educator has the rubric. Finally, the educator asks the AI to write an exemplar response based on the rubric it created.
It should be pointed out that the examples Microsoft's training gives typically assume a K-12 learner population, and the example above is no exception. This type of scenario isn't unique in its over-the-top-ness; I've seen similar use cases given as examples elsewhere. The thing that all of these scenarios have in common is the feel of a late-90s or early-2000s infomercial, where the exasperated user tries to do something "simple" (like drain pasta in a colander) only to have the task explode in their face (or have the pasta all over the sink and going down the drain, as was the case with the Pasta Pro Cooker, or whatever is happening with the following Tupperware containers 😂).
Anyway, I would expect that if someone is preparing lessons and activities based on subject matter that they know, they shouldn't need an LLM to create this stuff for them. Furthermore, why in God's name would you outsource the rubric creation? Don't you know what you want out of the activity? And why would you want the LLM to create an exemplar response? Why should novice learners emulate what an LLM produces? This all just seems highly sus to me 😒.
ChatGPUghs...and LLNos...
30 December 2025 | Tags: AI, AI-Slop, AIEd, GenAI, LLM, teaching

[Image: Academic AI-Slop (ChatGPT Produced)]
Happy end of the semester, and almost the end of the calendar year!
Alright, I'll own up to it. The title of this post probably doesn't hit the mark ;-)
One more calendar year is in the can, and for me, it was a year of (potential) endings, a year of (potentially) new beginnings, and a year where things changed much more than I expected in my teaching practice. I won't dwell too much on the endings and beginnings in this post because those things are best viewed in retrospect, so they might take a few years to distill down to a post. I did want to reflect a bit on my teaching practice, though. As I look back at the year, I think it marked an inflection point: the year I started seeing AI-slop as student submissions. Interestingly enough, it wasn't my graduate students but my faculty learners in some of the various workshops I facilitate who submitted such work as part of their coursework.
Can I prove it? No. No, I can't.
Do I have a strong Spidey Sense about people using ChatGPT? Absolutely!
Was it technically prohibited? Again, no. No, it wasn't.
How do I know? I've experimented with ChatGPT and Claude enough to be able to spot something that looks like AI-slop... The telltale sign in this case was a lot of bullet-point lists, loaded with emojis, and quite generic in nature - these things could be applied to a Philosophy class, or an English Literature class, or a History class. Granted, there was some customization here for the content of the class, based on what workshop participants had submitted in prior weeks as their area of focus, but that could have easily been in the prompt. I really had no way of discerning whether folks had manually edited the slop that came out, or whether they just copied/pasted it into the assignment submission. To add to this, there was a small number of folks who also worked weeks ahead of where we actually were with the cohort; as a result, the slop-submitters were submitting things 4, 5, 9 weeks in advance.
Now, don't get me wrong, when I am in the student's seat, I also work ahead. Life can get unpredictable, so I read ahead. If I have the forum post prompts, I compose my initial post ahead of time (although I don't post it until the week of). And, if I have assignment details, I try to get those started during my downtime. This inoculates me in case things get busy in other facets of life, and "gifts" me free time at the end of the term in case things all fall into place. Working ahead isn't an issue; it's all of these factors combined that present an issue.
As a teacher/instructor/facilitator, I give feedback on the work that folks submit in my various classes and workshops. It's through feedback that we grow. I found myself feeling like I wasted my time with some of the submissions I received this year because I was using my "free" time (outside of my day job hours) to give at least some feedback to everyone who submitted something. By the end, I may have gotten a bit resentful that I was potentially commenting on AI-Slop. Now, I am compensated for this kind of professional activity, and in my 20 years of teaching professional development workshops, I've always had people phoning it in (for one reason or another), but there was an understanding: you phone something in, I'll give you some feedback. You put in some half-assed work, and maybe my feedback will spark an "AHA!!!" moment. At least the phoning in involved your own work. Now, phoning it in basically means 10 minutes of prompting and copy/pasting into a submission box. No thought. Just bypassing all (or most) cognitive processes in course design. Meanwhile, my feedback easily takes double that time, which means that a large portion of my weekends (and some afternoons) was spent reading, pondering, processing, and ultimately responding to AI-Slop. I could have been playing Xbox or going to the gym...
I acknowledge that I am a bit burned out from all the things this year and I might be more crabby than usual, but even when I took courses because they were a requirement for something (e.g., Gen Ed, Concentration Requirements, Professional Requirement, whatever) that I wasn't particularly keen on, I at least tried my best to put deliverables together that resembled something passable. I do not understand why faculty who want to "AI-proof" their curricula use LLMs to submit slop for their own professional development.
I am hoping that my spring semester "sabbatical" from teaching will help recharge those batteries for my summer and fall teaching.
Blackboard Ughs...
26 December 2025 | Tags: Blackboard, LMS, teaching

A great number of years ago, I was part of an edtech team tasked with evaluating learning management systems to move to after WebCT was essentially EoL'd by Blackboard. Long story short, despite our recommendations, management went with Blackboard, which I guess is now classed as "Classic," and it too has been EoL'd. I wasn't a big fan of Classic, but it worked fine for what we needed it to do. When the time came for the next LMS, my institution went to Canvas (finally...). I've seen Ultra, the next incarnation of Blackboard, and all I could say was "me'h." I was just happy I didn't have to use it ;-)
Fast forward a year, and I am teaching for an institution that uses Bb Ultra. On the plus side, the course was already designed for me, so I didn't really have to figure out a ton of the nuts and bolts of Ultra ahead of the semester, and it seems pretty easy to pick up. There are some big "ughs" when it comes to using this system, however. My three pet peeves here are read notifications that don't go away, a UI that just isn't great, and the kindergarten way of building content.
While building content wasn't at the top of my list (the course was built already, after all), trying to put something in an announcement, in a way that didn't treat me as a child, was frustrating. While the blocks of content that one can drop into a page (or page type) are interesting, there is very little wiggle room for customization. I can either have formatted text or HTML, for example; I can't just go into the HTML of a page I've formatted with Bb and then drop in a snippet of HTML embed code. This makes the page layout a bit long and scroll-y, IMO.
I got used to the UI, for the most part, over the course of the term, but the number of times I hit that stupid "X" button and expected to go back to my previous page/setting/location, only to be dropped out of my course entirely, was too high. I have yet to get used to the horizontal menu items at the top of the page, as if this were some kind of application on my computer. Granted, at least Blackboard is playing a bit with the UI, so I respect that to some extent, but I wish there were a different way to navigate that was more "classic." The top-to-bottom accordion style for content is also not that great...
Finally, let's talk about the disturbing lack of ability to mark discussions as read/unread with ease. If you hover over a discussion contribution long enough, it will be marked as read, but if you read quickly or skim, that won't happen. Also, if you have unread posts (or Bb thinks you do...), you'll get the number of unread posts showing on your course homescreen next to Discussions (cool, I think); however, when you click on that Discussions component from the menu, you may end up seeing NO discussions marked as unread. My best guess is that some response to a response to a response in a discussion forum is marked unread while the main thread shows as all caught up, so the unread-posts marker sticks around 🤦‍♂️ (a speculative sketch of how the two counts could diverge follows below). When you are trying to determine if there are posts that need attention, this idiotic marker doesn't make things easy.
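To make my guess concrete, here's a purely speculative sketch in Python (I have no idea what Bb actually does under the hood) of how a badge that counts unread posts anywhere in a reply tree could disagree with a list view that only checks top-level threads:

```python
# Purely speculative reconstruction of the mismatch, not Bb's actual code:
# the course-home badge appears to count unread posts at any depth, while
# the discussion list appears to check only top-level threads.
from dataclasses import dataclass, field

@dataclass
class Post:
    read: bool
    replies: list["Post"] = field(default_factory=list)

def unread_in_tree(post: Post) -> int:
    """What the badge seems to do: count unread posts anywhere in the tree."""
    return (0 if post.read else 1) + sum(unread_in_tree(r) for r in post.replies)

def thread_looks_unread(thread: Post) -> bool:
    """What the list view seems to do: check only the top-level post."""
    return not thread.read

# A seemingly caught-up thread with one deeply nested unread reply:
thread = Post(read=True, replies=[Post(read=True, replies=[Post(read=False)])])

print(unread_in_tree(thread))       # 1     -> badge shows "1 unread"
print(thread_looks_unread(thread))  # False -> list shows nothing unread
```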
Overall, this isn't the worst LMS I've used in my 15 years (or more?) of teaching online, but I would have expected some better UX from such a big company with decades of experience supporting LMSs...
[Image: Screenshot of Blackboard Ultra (not mine)]
Turn it in...facepalm
15 October 2025 | Tags: AI, AIEd, assessment, grading, plagiarism, teaching

This semester, I am helping co-facilitate a course somewhere new. More details on that in the future (maybe), but for the time being, I wanted to reflect a bit on technology use. Since this is a new institution for me, I have to click through the technology acceptance pop-ups, see policy dialog boxes that I no longer see at my own institution, and so on. One of the assignments I am grading this term requires the use of TurnItIn, and while perusing the course to familiarize myself with it, I clicked on TII. Since I am new to this instance of TII, I was greeted with a disclaimer about AI detection (bolding my own):
Our AI writing assessment is designed to help educators identify text that might be prepared by a generative AI tool. Our AI writing assessment may not always be accurate (i.e., our AI models may produce either false positive results or false negative results), so it should not be used as the sole basis for adverse actions against a student. It takes further scrutiny and human judgment in conjunction with an organization's application of its specific academic policies to determine whether any academic misconduct has occurred
Alright, fine. It's a fine disclaimer; there should be human judgment determining whether something breaks your academic code of conduct. However, I was wondering what this thing flags. So, since I have access to a previous semester's course shell, I popped in quickly to see what last year's student submissions were for this assignment to get a sense of what TII flags. Pretty much all of the assignments were flagged with about 20% suspected AI use... Ooooohhhh kay.... I decided to have a closer look to see what was marked as AI. Having used TII for stuff at my own institution, I know that their 'regular' plagiarism detector usually "detects" things that aren't really plagiarism... Looking at the highlighted passages that are suspected to be generated by AI, I have absolutely no idea how any human would go through and determine if these passages were AI-generated. How does one determine if something is truly created by a synthetic text extruding machine? To accuse a student of academic dishonesty would require solid proof, and all TII does is sow the seeds of anxiety and doubt amongst faculty that students have cheated, with no way of being able to verify such claims.
So...I guess my takeaway is: Tell me you're marketing something that doesn't work without telling me you're marketing something that doesn't work...
All AI Use Case Claims Need Verification - Period.
29 September 2025 | Tags: AI, ChatGPT, experiment

A few weeks ago, I saw this posted on a friend's social media feed, and the responses were all repeating the "oh wow! I would have never thought of doing this! This is so nifty!" refrain.
Well... my first thought, knowing how ChatGPT works, is that this is a complete bunch of bullshit, and I immediately jettisoned this idiocy from my brain. Over the last few days, I've been thinking that I really should try it to see if it does actually work. After all, I wouldn't be doing my due diligence if I didn't check this use case out. If it does, it could be useful for accessibility purposes.
However, I didn't want to snap a photo of my own bookshelf, so I'd been looking for a sample photo to upload to ChatGPT to test this out. Last night, I was browsing Reddit, saw a post on the hot sauce subreddit, and found my use case. So, I downloaded the picture, submitted it to ChatGPT (yeah... sorry about that), and asked it some questions. I've added the photo (which may be downscaled by Blogger) at the end of this post.
Then, I got to asking questions. First, a curveball: ask for something that doesn't exist.
Prompt: Help me find the tabasco sauce. Where is it?
Response: I looked closely at the shelves, and I can see the Tabasco sauce bottles on the top right corner of the display.
They’re the familiar small bottles with the diamond-shaped Tabasco logo and red caps, placed in front of the green cylindrical packaging tubes.
Verdict: ❌
Total failure, but OK. Let's ask it for something that DOES exist...
Prompt: Where is Melinda's hot sauce?
Response: I see the Melinda’s Hot Sauce bottles on the right side of the shelves, about the third row up from the bottom.
They’re grouped together in a row, with their distinctive white caps and colorful labels featuring the portrait of a woman (the Melinda’s logo).
Verdict: ❌
Prompt: Where's the fat cat hot sauce?
Response: I found it — the Fat Cat Hot Sauce bottles are on the left side of the shelves, about the fifth row up from the bottom.
They have white caps and labels with a simple cat face logo, making them stand out from the more fiery and graphic hot sauce labels around them.
Verdict: ❌
Prompt: Where is the spicy shark hot sauce?
Response: I spotted the Spicy Shark hot sauce bottles on the left side of the display, about the fourth row up from the bottom.
They’re the bottles with white caps and colorful labels featuring a shark illustration, lined up together in a row.
Verdict: ❌
Prompt: Where is the orange krush hot sauce?
Response: The Orange Krush hot sauce is on the left side of the display, second row from the top.
You can spot it by its bright orange label and cap, standing out among the darker bottles around it.
Verdict: ❌
While this is not extensive testing, given that it could not find ANYTHING and that it hallucinated a location for my fake hot sauce, it's clear to me that this is not a use case for ChatGPT and LLM-based "AI." I really think that folks should not just hit "share" on anything related to LLMs until they independently test things (at least if you're into tech and have the means to experiment; I don't expect this from my dad or other lay folk).
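If you want to run this kind of spot-check yourself without futzing with the web UI, a script along these lines would do it. A minimal sketch, assuming the openai Python package, an OPENAI_API_KEY in your environment, and a placeholder image file name:

```python
# Minimal sketch: ask a vision-capable model where a product sits in a photo.
# Assumes the openai Python package, OPENAI_API_KEY set in the environment,
# and a local image file (the file name here is a placeholder).
import base64

from openai import OpenAI

client = OpenAI()

with open("hot_sauce_shelf.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Help me find the tabasco sauce. Where is it?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)
```

The model will happily answer either way; the verification step (comparing its answer to the actual photo) is still on you.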
Nose to the AI grinder and course dev ponderings
12 July 2025 | Tags: AI, AIEd, appliedLinguistics, CALL, courseDesign, EdTech, teaching

Well, it's been a hot minute since I last jotted down some thoughts. Don't worry, blog, it's not you, it's me 😂. I also have a daily meditation/reflection journal that I used to jot things down in, even though sometimes it was "Doogie Howser style," and it only gets an entry twice or thrice a week these days. Gotta take a step back and do some more reflecting. On the plus side, the weather is finally nice enough to get out and do a daily walk, something I used to do most summers until work (in all its multiple facets) got to be a bit too much.
In any case, you didn't come here to read about my (lack of?) exercise and my busy schedule, did you? I thought I'd jot down some thoughts since it is July and summer is one-third gone (sigh). One of the things that I am not making as much progress on as I would have liked is course design. In all honesty, I thought that by now I would have chosen all my readings for the fall, started to put them into Canvas, and had August free to just chill for the rest of the summer. That's not happening... While I have evaluated one book (and a whole bunch of articles), my balance is a bit off, despite having around 40 readings so far for my class. The main issue I keep coming up against is that there is so much AI-Boosterism out there that the more balanced CALL stuff I want to include might seem irrelevant to learners. The "we gotta teach AI because learners will need it in the future; it's inevitable" discourse is one you can't avoid. So yeah... trying to navigate the fresh (actually stale) hype of it all.
On the plus side, there are a few things that have happened! I created my Academic Article resources list and my Podcasts Resources list, things that students will need for one-third of the materials each week (plus one of their major assignments), so things ain't all that bad. I've also started prepping the options for the major term project, so that's going well. I just need to develop some directions about what to do with each option.
The other notch on the "plus side" of things is that I've gotten my CALL groove back. Back in 2012(ish), when I was examining my options for a doctoral program, something in CALL was an area I was seriously considering. As part of the work in that area, I spent a lot of time reading CALICO and ReCALL articles, and through my evaluation of resources so far, it's really great to see (or in many instances hear) a lot of familiar names of researchers I recognize from back then. I think it's really lit a fire that I've been kindling for a while. The other cool thing is that I've seen so much overlap between applied linguistics/CALL and instructional design in what I've read so far. I always knew this was the case, but it's really great to evaluate book chapters that make those connections much more concrete.
On a slightly different note, as I've been pondering this course's final project over the past month (and reading materials for inclusion in the course), one of the things that stood out to me is that a survey course on EdTech (with a focus on language teaching and CALL) is not just geared toward teachers. Sure, teachers may be the majority of folks enrolled (given the type of program I work for), but there are other personae who might find the course valuable. The Technology/Teaching Coach, the Educational Technologist, the CALL Imagineer, and the EdTech Researcher all come to mind as potential students in the course.
The famous saying "T∞ knąw thgselϝ is the begin Ϸominutius" - Yup
6 May 2025 | Tags: AI, AIEd, ArtificialLanguage, ChatGPT, experiment

[Image: ChatGPT Patch of the Wise Owl]
Recently, I've been playing around with image generation in ChatGPT, not so much to create output that I plan on using seriously for something (although some outputs do end up on this blog as post images), but more to see how easy (or hard) it is to get something from my mind's eye into some kind of machine output. I am also curious to see how the LLM interprets what I input (that element of surprise).
I only really have the free credits that OpenAI gives to its free users, so my experimentation is basically 10-15 minutes of futzing around while watching TV in the evening. As I was playing around the other day, this scene from Star Trek: The Next Generation came to mind. In Schisms, the crew had been abducted by an alien race but had no memory of it (think Close Encounters of the Third Kind). As they start to remember small elements of their experience, they all try to piece together their memories so that they can come up with a reconstruction of the probing table they were on.
I've included an excerpt from the show in this post. Back in 1992, this was a bit of a "whooooaaah!!!" moment. Look at what the holodeck and the Enterprise computer can do!!! Playing with ChatGPT to make something like this happen... not so much! [Although, maybe I should take the directives that the Enterprise crew gives to the holodeck and see what ChatGPT comes up with haha 😅]
My experiments focused on creating something small and concrete, like embroidered patches. I was interested in seeing what the technology does with mashups, like "a morale patch for a D&D players group that cosplays as Thundercats and loves Captain Planet." Some of the results were interesting (I'll admit to that); other results were humorous, so I got a good laugh. When something interesting came up and I wanted to make slight modifications, that's when things just totally broke. Text generation for images is still problematic. Even when you instruct the machine to render something specific like "To know thyself is the beginning of wisdom," it ends up correct on the first attempt (sometimes), but then, as you ask the machine to update the graphic and keep everything else the same, somehow the quote becomes "To know thyself is the beginnnnn dom." Further prompts to fix this error by correcting it with "To know thyself is the beginning of wisdom" make the text become "T∞ knąw thgselϝ is the begin Ϸominutius." The same is true for adding visual elements to your creation. If you ask the machine to add a small element to a specific part of the image, the entire image gets reworked, and elements that you liked are now changed and modified 🙄.
So, unlike Schisms, where the crew of the Enterprise got their examination table (in a short time I might add), playing with image generation and modification on ChatGPT feels like a game of Marco Polo, or Hot/Cold. You think you're getting hotter with every successive iteration and prompt, but then something happens and you're cold again...much much colder...
Fun parlor trick (that consumes A LOT of energy), but useful? Nah...
It's all really tiring, isn't it?
18 April 2025 | Tags: academia, academics, learning, politics, writing

Happy Friday to all!
So, what's changed? To cut to the chase: Trump, and Musk, and their demented clown posse running the federal government are making things just really tiring. We've got both Musk and Trump running their own little vengeance campaigns on the public's dime, enriching their rich benefactors, and throwing democracy down the toilet. As much as I try to avoid the news and focus on the day-to-day, the discourse on social media makes it hard not to pay attention, even with app-limiting timers to avoid doomscrolling. I have my own social media feeds, and I manage my department's social media presence. On the professional side, I always hated the stale social media presence of academic departments, the "Hey, we've got an event, so come join us..." once every 3 months. I never really had the courage (or even perhaps talent?) to go "Full Wendy's" (😂), so I stuck with sharing language and linguistics-related news. In return, I followed some awesome linguistics accounts and learned a new thing here or there. Those feeds are now fully in fighting-fascism mode (as one would expect). In my own feeds, what used to be distance education and EdTech as far as the eye could see has now become more... fighting fascism. Fucking hell, man 🙄. LinkedIn (LinkedIn of all places!!!) has become the place where people share their academic thoughts (at least as far as EdTech and Distance Education go), but who's got time for that? 🤷‍♂️
All of this is making me ponder what some kind of renewal might look like. How do Weekend Academics keep up their energy? What do they focus on (if anything)? For me, it seems like I've gone from Dissertation, to COVID, to AI, to Donlon Trusk in five years without much of an opportunity for a breather in between 😅, a kind of dystopian chat roulette, if you will. Even if I had a breather, I am pondering the ethical implications of "splurging" on the academic bit when I don't plan on pursuing an academic career (assuming nothing just falls in my lap...) and when there are so many other things out there that seem more pressing. Thoughts? 🤔
Course Design Should Cost Zero...or not.
8 March 2025 | Tags: AI, AIEd, instructionalDesign, learning, OELM, teaching
A bit of a kerfuffle happened a few weeks ago, and it's just indicative of how the rest of life is going that I've had this post in draft form for almost a month while I've plugged away at it...
Annnyyywhoooo🙄
The kerfuffle was kicked off by Wiley's initial post on Open Educational Language Models, which describes OELMs as bringing together a collection of openly licensed components that allow an openly licensed language model to be used easily and effectively in support of teaching and learning. In his follow-up post, where he is openly pondering/brainstorming about OELMs, Wiley discusses a separation of form from content, similar to how text on the web is separated from the CSS formatting layer.
Wiley's original posts are interesting and do provide some points to ponder. I don't necessarily agree in whole with what he proposes, but I can see a grain of something interesting there, certainly worth pondering and discussing. Maybe I've gotten a bit more "get off my lawn" these days, having seen at least five major hype-cycles since 2010 🧑‍🏫, and I think GenAI is not sustainable; it seems to be reaching a plateau, and it just doesn't really produce useful things. At least MOOCs opened up education a bit. But I digress. Moving on...
Where it gets interesting is with a follow-up by Siemens, who, in his somewhat-weekly roundup of AI news, linked to Wiley's post and left the following zinger: "I’m thinking course design should cost about zero." Downes then followed up with "My thought too" in his own sharing and brief analysis of things in the EduSphere. This brings us to Crosslin's rebuttal of the point that "course design should cost about zero." I don't want to summarize Wiley's and Crosslin's posts; I think it's worthwhile reading them (and subscribing via RSS).
Siemens used to blog (before he nuked the site, and he barely does anything there these days), so you could once get a bit more than a soundbite. The whole "I’m thinking course design should cost about zero" feels more like a Musk-esque provocation than anything else. I don't know if Siemens meant to drop a stinkbomb and run, or what, but it's a shit take. It doesn't matter if you are an instructional designer, learning engineer (🤮), course developer, or faculty member creating courses: the course creation process is part curation, part creation, and part experimentation. Based on the defined outcomes of a course, and the expected student demographic, you re-use what you can, you create what you don't have available, and you experiment with new ways of engaging, assessing, and communicating with the learners in the classroom. That creative process isn't something that a machine can do (at least not yet?) because everything that is extruded from an LLM comes from something that already exists. Mindlessly replicating the teaching practices we've experienced is one of the reasons people critique things like Chalk & Talk approaches; so why would we accept the equivalent from any sort of automated technology? Sure, you can create things faster, but is faster always better? You can massify things, but is massification always better?
In the end, no. Course creation should not cost zero. If you want someone to take care and effort in creating something useful, pay for it. Digitally extruded materials seem to be a new reality (e.g., the image I asked Dall-E to make for me for this post, which is dumb, but you're not reading this post because of the image, are you?).
As an aside, along with "Better Practices," which I already use, I think I will adopt "Current Best Thinking" into my vocabulary.
Your thoughts?
PS: maybe in a subsequent post I can talk about how the HTML/CSS analogy doesn't work for learning content, but maybe at this pace you'll read about it in July 😂