
February 23, 2020

Teaching Online and Student Surveillance

One of my goals this year is to post something every day at this blog as a bit of brainstorming and/or resource-gathering for the JHU book project, so I wanted to say a few words today about teaching online and student surveillance. This topic has taken on a special urgency in the past year, as I've watched Instructure begin to pursue the use of LMS data to develop predictive algorithms and other AI/ML projects that they hope will lead to highly profitable new LMS features and products. I've been documenting that in detail here at the blog since March 2019; a good starting point is this post: LMS, Privacy, and Purpose Limitation.

As a teacher, I do gather data about my students within the context of the course, and I've written about my data process here: The Value of SMALL Data and Microassignments. I'm not really able to use any of the Canvas data analytics for my purposes because Instructure has zero understanding of how my courses work and what the data mean. The meaninglessness of most of the data that Instructure collects is one of my many concerns about the viability of the whole data analytics project, but that's a topic for a separate post.

What I want to write about here is the use of student data beyond the limits of a course. My impression is that students do not think about how an LMS company could be using their data for commercial purposes (not necessarily selling it, but using the data to create new commercial products), and my impression is also that students are not aware of how their own schools are using the data collected in the LMS. At least at my school, no information has been provided to faculty members about what happens to our Canvas course data, so I'm assuming that no information has been provided to students either. I know that final-grade data is used for institutional reporting purposes, and that grade data is available in the SIS (we use Banner). But what about all the other data that Canvas records? Is my school using that for its own data analytics projects? I have no idea, and that is very concerning; we need to know.

My hope is that the sudden eruption of data analytics onto the higher ed scene is going to lead to lots of conversations, and I feel that one of my duties as an online instructor is to find out what I can about how our course data is being used and relay that back to the students.

In addition, I see it as my duty to find out what the students think about that and relay that information out to the discussions that are starting to take place about this. For example, Cristina Colquhoun is leading discussions with Instructure right now in order to review Instructure's privacy and data use policies and practices. She has done a fantastic job of soliciting feedback from faculty Canvas users via Twitter and other social media. That is a great way for her to reach faculty and administrators, but student voices need to be part of that discussion also. So, in order to create a space where students can make their voices heard, I set up an anonymous poll which I shared with my students via the class announcements, and today I wrote up interim results of the poll for Cristina to see before the next Instructure data meeting, which is later this week. You can see the interim results of the poll here: Interim Canvas Poll Results.

I don't want to generalize about the numbers because it is just a small, self-selecting group of students who have filled out the poll. But what I do want to call attention to are the extremely thoughtful comments that the students made in the open-ended questions (as a general rule, it is always the open-ended non-numeric data that carries the most meaning for me personally). One of my teaching mantras is ASK THE STUDENTS. You can't know if you don't ask... and if you do ask, you may learn things you never expected.

To share what I'm hearing from students, I'm going to paste in just a few selections from their comments so far. You can see from the depth and detail of what they wrote that the idea of predictive analytics is one that they are very concerned about, and with good reason (see all comments here). I have separated out the longer comments into separate statements, so some of these comments below come from the same student:

I don't like being under any kind of surveillance. I feel like it's invasive. 
Past performance does not equal future performance. This will cause students to fulfill their own prophesies.   
If the students have a low prediction, they might feel defeated or unsuccessful in the class before it has even begun. Even if a student has high predictions, they might not try as hard than they would if they weren't given a good prediction.  
If people see predictions they treat them as facts and that will bias how they view the students.  
personally i have already had alot of issues with teachers and advisors judging my academic decisions, and i feel that allowing them to see my grade predictions would only exacerbate issues.  
The most important thing is that the student is given a choice and an easy way to opt out. 
My advice would be to NOT use the data, just delete it.  
Some people might have gotten bad grades because of external circumstances like working jobs, having children, or suffering from major illnesses. Any data prediction model might not be smart enough or have the right information to account for external factors. 
I would NOT want professors or advisors to see this information. I think it would create a bias for professors when grading.  
Depending on the student, this could motivate them or depress them. I know for me, if something is telling me I am doing bad, I am going to try to prove it wrong. 
People could become discouraged at the predictive analysis and either drop out of a class before trying or feel defeated before it really has begun.  
This could bias professors or other educators or advisers as they could form opinions about students before knowing them, or could change their opinions about a former student. Even if we try not to be biased, this information can unconsciously affect our decisions.

If this feature is to be used at all it should include explicit warnings about what the predictions are based on, what limitations the predictions have and what the predictions are to be used for.  
If canvas is predicting that you are not going to do too well in a class, then maybe that tells you that you may need to put a little more time an effort into that class that others.  
I think the good side of using data in this way is that we would be able to choose classes based on the predictions. But I also think it could be bad because when it comes to classes we have to take, if Canvas predicts we are going to make a bad grade it could get into our heads and cause us to end up doing badly. 
I don't want to be put in a box based on past performance. 

And yes, I made a cat at cheezburger for that last one. :-)



Don't put me in a box
based on past performance.


Interim Canvas Poll Results

Two weeks ago, on February 9, I shared a poll with the students in my classes, and I also made that available for others to use as well; details here: Canvas Student Survey on Data and Predictive Algorithms.

I'm going to keep the poll open until March 9, which corresponds with the middle of my semester, but I wanted to post some results here now to share with Cristina Colquhoun as a contribution to her ongoing work with Instructure on data privacy and related questions. I don't know what efforts Instructure has made to gather student voices to bring to the discussion, but this is my contribution to that effort.

The poll is anonymous and voluntary; my goal is to give my students an opportunity to share their opinions so that I can pass that along to Instructure as part of the ongoing discussion. There is also an option on the poll for students to indicate whether or not they are willing to contribute their comments to this public reporting, so the comments you will find below are only those that students volunteered to share.

A total of 15 students completed the poll, 13 of them students at my institution (University of Oklahoma) and 2 at other institutions. Below you will find the answers to the no-yes questions (each posed on a scale of 1 to 6 to gauge strength of response). Then, at the bottom, I have copied and pasted in the comments made in response to the open-ended questions. I have randomized the order of presentation in the comments.

When I close the poll on March 9, I will write a new post. Here's what I have so far, as of February 23:



Should Instructure ask your permission before using your data to build its predictive algorithms? (strong-no on left, strong-yes on right: 73% strong-yes)




Would you give Instructure permission to use your data, or would you opt out? (opt-out on left, opt-in on right: 60% opt-out, 40% opt-in)




Would you want to see Canvas's predictions about the grade you will get in a class? (strong-no on left, strong-yes on right: 53% no, 40% strong-no)




Would you want YOUR ADVISOR to see Canvas's predictions about the grade you will get in a class? (strong-no on left, strong-yes on right: 87% no, 40% strong-no)




Would you want YOUR PROFESSORS to see Canvas's predictions about the grade you will get in their class? (strong-no on left, strong-yes on right: 93% no, 67% strong-no)




Your Thoughts about Canvas Predictions:

While I do feel like there are some benefits to using this data, I feel like they should at least allow students the possibility to opt in or out. That way it becomes a choice. I personally wouldn't want to use this feature, I don't like being under any kind of surveillance. I feel like it's invasive, though I do understand it's usage is harmless. I've already seen some of the new data analytics for myself on myself and I don't understand what it means and I don't like that others have access to it while I don't know its usage or purpose. Overall, I believe that it's the choice of the student to choose whether they want to use this feature or not. I also believe that predictions are not set in stone and I think that it's irresponsible to say with certainty that they can predict a grade of a certain student based on past performance. 

Past performance does not equal future performance. This will cause students to fulfill their own prophesies. Can you provide one example of data and algorithms ever improving your life in a new, novel, and meaningful way?

I think canvas predictions would have a negative affect on students. If the students have a low prediction, they might feel defeated or unsuccessful in the class before it has even begun. Even if a student has high predictions, they might not try as hard than they would if they weren't given a good prediction. 

i need more real reasons why predicting my grades is useful for me to support it. to me if people see predictions they treat them as facts and that will bias how they view the students. personally i have already had alot of issues with teachers and advisors judging my academic decisions, and i feel that allowing them to see my grade predictions would only exacerbate issues. The most important thing is that the student is given a choice and an easy way to opt out

I don't want to be put in a box based on past performance. I feel like whoever saw the predictions based off my stats would be biased in their opinion of me just based on the prediction (either a positive bias or negative bias). Neither bias is helpful. My advice would be to NOT use the data, just delete it. Some people might have gotten bad grades because of external circumstances like working jobs, having children, or suffering from major illnesses. Any data prediction model might not be smart enough or have the right information to account for external factors.

I am unsure of what the point is. Is it to give students an opportunity to drop a class they may not be successful in based on previous data? I can see how this can be good, but I can also see how it can be bad. I would NOT want professors or advisors to see this information. I think it would create a bias for professors when grading. 

I think this allows a student to kinda know how things are going. Depending on the student, this could motivate them or depress them. I know for me, if something is telling me I am doing bad, I am going to try to prove it wrong. I wonder if there could be an option to turn it off and on. Canvas predictions can change maybe? 

The bad sides to this tool jump out at me first. Predictive tools like this are not a guarantee, but not everyone understands that. People could become discouraged at the predictive analysis and either drop out of a class before trying or feel defeated before it really has begun. For self-esteem, I think it is a negative. Also, this could bias professors or other educators or advisers as they could form opinions about students before knowing them, or could change their opinions about a former student. Even if we try not to be biased, this information can unconsciously affect our decisions. Some positives would be knowing where you could possibly stand in a class, maybe change your schedule to accommodate for harder classes later on in your career? Another positive could be it helps people choose a major. They could see where their skills are best tailored for classes they may take. I'm very hesitant about this tool however. 

I think that allowing Canvas to use data/predict future performance is strange and a bit unnecessary. I think it would to lead professors to make judgement about their students before they start the class. 

I think using data to predict future performances may give some students confidence in their abilities and help them decide what classes may be right for them. Despite this, I feel future performance predictions may lead some students to feel they don't have to work as hard to get the grade they want in a class and bias in the way professors grade if they have access to predictions of who SHOULD get an A in a class. If this feature is to be used at all it should include explicit warnings about what the predictions are based on, what limitations the predictions have and what the predictions are to be used for. Ideally this feature should be easy to avoid for students who rather not know their predicted grades.

I think this would be beneficial to students and it might help them gage themselves. If canvas is predicting that you are not going to do too well in a class, then maybe that tells you that you may need to put a little more time an effort into that class that others. 

I think this is a very interesting idea. I think the good side of using data in this way is that we would be able to choose classes based on the predictions. But I also think it could be bad because when it comes to classes we have to take, if Canvas predicts we are going to make a bad grade it could get into our heads and cause us to end up doing badly.

I think it would be a good way to for a student to track their progress in the class. However, some professors don't utilize Canvas that much or the grade is inaccurate. 



General Thoughts:

I think that Canvas has enough features already (and they don't work that well). They don't need to collect data on students and share it with others!

I like Canvas; I think it functions well. I do not think there is a need to use student data for prediction purposes.

This is the one safe place I thought was free of the possibility of advertising and data mining. Now I won't be as open and honest in my class participation.

canvas low key sucks

I think Canvas predictions would be a good idea since a lot of students already use the What If function. 

I think Canvas is really great right now, much better than other platforms I have used such as Blackboard at my community college.

I like Canvas, but a lot of it really depends on how the professor uses it if they do even use it. I have had professors who don't even use it. 

Not really. Canvas is fine otherwise. In terms of grading platforms its more user friendly than others that I've used in the past. 


February 9, 2020

Canvas Student Survey on Data and Predictive Algorithms

On February 9, I released an opinion poll about Canvas Data and Predictive Algorithms to the students in my three online classes at the University of Oklahoma, and I also designed it so that the form could be used by other students at other schools. Here is the survey:

I will keep that online until Monday, March 9 (which is the middle of our semester). I'll write up the results and share them with Instructure then. There's a form here if you want to sign up to receive the results:

As you can see, this opinion poll reflects my personal concerns (about which I have written extensively at this blog), but I hope it can be useful to others. So, please feel free to share this poll with your students, and also feel free to adapt it to reflect your own context and questions about Canvas data. Melissa Loble at Instructure has said she would like to hear student voices on these topics, so I am glad to have a chance to gather up comments from my students, along with any other students who want to use this as their platform from which to speak.

I am curious to see what my students will say, and I know I am going to learn a lot from their perspectives! Maybe Instructure will find a way to gather feedback from the millions of student users of Canvas; for now, I am just hoping to understand better the perspectives of my own students, and also to share this opinion poll for others who want to hear what students have to say.

This is just a blog post, not a tweet, but I'll add one of my favorite Twitter hashtags/mantras: #AskStudents.



Canvas Data. Be Heard. 
Are you a student? Please share your thoughts.
The survey will be available until March 9 2020.
For more information contact Laura Gibbs:






February 8, 2020

LMS, Privacy, and Purpose Limitation: A response to Melissa Loble

On February 5, Melissa Loble, Instructure's new Chief Customer Experience Officer, published a blog post: Data Privacy: Our Current and Future Commitment. Unfortunately, her post does not even mention the possibility of a data opt-out. So, when I got to the end of the post, I had the same reaction as Bram Vantasner:


For those of you who have not followed this long-running discussion about Canvas data, here are some references:
Responding now to Loble's post, I first want to point to the gap between the message she is sending here and the messages coming from Instructure in the context of the Thoma Bravo acquisition. For example, compare what Loble says in her post — "we at Instructure want to do what is right for education" — with this January 19 letter from Lloyd "Buzz" Waterhouse, Lead Independent Director of the Board of Directors of Instructure: "The Board of Directors has one priority: maximizing value for our stockholders."

For some thoughtful comments on this inevitable tension between educational vision and business imperatives, I would urge everyone to read this new post from Brian Whitmer: Instructure's Acquisition is Not What I'm Worried About (February 7 2020).

I personally think it's better to acknowledge the tension rather than to pretend that it does not exist, and there is some serious tension here, which I will explore in more detail below.

Canvas in the Cloud

When Brian Whitmer and Devlin Daley created Canvas as a cloud-based LMS, I'm sure there were many factors that went into their decision to build in the cloud. At the same time, I'm also pretty sure that one of those factors was NOT the collection of massive quantities of data with which to engage in AI research and machine-learning in order to develop and then market predictive algorithms.

What has happened, though, is that because Canvas has always been cloud-based, it has accumulated data in a way that the other major LMS companies have not. I will confess that I was very naive about this: until CEO Dan Goldsmith announced that Instructure had developed predictive algorithm products, it never even crossed my mind that an LMS company would do that. (Yes, I really was totally naive.)

For me, as an instructor, Canvas was just another LMS that I accessed with my web browser in the same way that I accessed Blackboard and WebCT and Desire2Learn with my web browser, all of which were self-hosted at my school. Over two decades of using those LMSes, it never occurred to me to ask where my data ended up. I just assumed somebody somewhere was archiving my course data for legal reasons in the same way that email and other university data are archived for legal reasons. After all, once a course was over, what use could the data even be? (Yes, like I said: I was completely naive.)

But as it turns out, Canvas data are different. Canvas data go into a database at Instructure, where data from all the courses and all the schools are sitting there together in the cloud, just waiting to be turned into marketable new data products. And that is exactly what Dan Goldsmith announced to investors in March 2019, with these hyperbolic claims about DIG (Instructure's data analytics initiative):


I am guessing that many of Goldsmith's claims during that investor meeting were overblown, but it's been almost a year, and so far there has been no substantial clarification from Instructure about how many of Goldsmith's claims about DIG are accurate, and how much was just misguided marcomm.

Canvas and its Purpose(s)

Melissa Loble mentioned the proverbial "elephant in the room" in her blog post, but I would like to invoke a different elephant: the elephant being examined by the blind men. It's an ancient South Asian fable that has now spread around the world, with a Wikipedia article of its own: Blind men and an elephant.

When Dan Goldsmith feels the LMS elephant, it's all big-data and AI and ML.

When I feel the LMS elephant, it's just a clunky tool I use for communicating with students.

And I am sure others have their own take on the LMS elephant, and that means we also have our own take on what LMS data privacy should mean, and that's because we see the LMS as having different purposes. Canvas is both one thing, and also different things, based on what we experience from our own perspectives (instructor, student, Instructure shareholder, etc. etc...).

Speaking for myself as an instructor using Canvas, I think the purpose of data in an LMS should be limited to the courses in which students are enrolled, and any use of data beyond that purpose should be protected by a data privacy policy, requiring permission for reuse beyond that original purpose. (For more about purpose limitation, here are some observations from ICO.org with regard to the GDPR.)

So, no wonder there is conflict here, and it's a crucial conflict for the entire future of the LMS enterprise. I see the LMS merely as an online service to students and instructors for the purpose of conducting a course, while Instructure sees the LMS as a data-gathering mechanism they can use to develop new data products.

And how do colleges and universities view the LMS? That is an urgent discussion that needs to happen on every campus. If and when such a discussion takes place on my campus, I will be fighting hard to limit the purpose of LMS data to the needs of instructors and students within the confines of a course, rejecting the type of predictive data analytics that employ the LMS as a surveillance tool.

I'll stop there because my rule for these blog posts is not to exceed the length of the blog post to which I am replying. Loble's post afforded me the luxury of 1100 words, so I am glad to have had the chance to write more in depth this time, and I will continue to hope, even now, for a data reuse opt-out.

June 2020 Update: The new CEO of Instructure, which is now owned by the private equity firm Thoma Bravo, announced the end of DIG; details in Phil Hill's write-up: Canvas cancels Insights (née DIG). Of course, that is just the end of DIG, not the end of Instructure collecting data. We must continue to insist on real protections and a real privacy policy that will limit Instructure from using our data without permission for purposes to which we have not consented.




Blind men feeling the elephant:
It's a fan! It's a spear! It's a snake! It's a wall! It's a rope! It's a tree!

January 21, 2020

Thoughts on Dan Goldsmith's Underwhelming Blog Post of January 20

CEO Dan Goldsmith published a post at the Instructure blog on January 20 (online here), but that post fails to address the concerns resulting from his own claims about Instructure AI projects last March (online here). In the post, Goldsmith repeated an announcement made last week that Melissa Loble is now "Chief Customer Experience Officer" who will be the "executive sponsor" for "data usage and privacy." I think we can safely assume that this appointment is in reaction to customer complaints about Instructure's data policies, but you would not know that from reading Goldsmith's post. He refuses to acknowledge our complaints, much less engage in a dialogue about our concerns.

What are we to make of the fact that Goldsmith has stopped talking about DIG publicly after his hyperbolic claims last year? As I see it, Goldsmith's silence (and the ongoing silence of others at Instructure about DIG) is making things worse, not better. I suspect (just guessing) that Goldsmith realized speaking publicly about these plans would make research and development more difficult, as users were objecting to Instructure's unilateral appropriation of our data for their own commercial purposes. He said as much in his letter about the Thoma Bravo acquisition (online here): "Reflecting this year on our goals and path forward revealed that operating in the public spotlight wasn’t fueling innovation and was starting to get in the way of customer success."  I have no idea what it means to claim that sharing information about data usage was getting in the way of "customer success," but then the rhetoric of "customer success" is really just marcomm-speak which is hindering, not helping, an honest dialogue about what Instructure is doing with our data.

When Jared Stein published a blog post about DIG last summer (online here), I replied here at my blog (online here), hoping for a dialogue. We never heard from Jared again with any further information about DIG, which is also disappointing. There is so much that needs to be discussed in order for all parties to have a clear understanding of each other's goals, constraints, concerns, etc. 

Everyone knows I'm prone to long blog posts, and I made sure then to write a post that was no longer than Jared's post; I'm doing the same again here. I've got 295 words left to equal Goldsmith's post, so here's a quick recap of my three biggest concerns:

1. Data Opt-Out. In addition to privacy issues in play, there are serious ethical concerns about the development of AI products and predictive algorithms in education; for a good discussion, see Michael Feldstein on A/B testing and product development (online here). Those of us who do not want our data used for AI research and development need an opt-out.

2. FERPA. I still do not understand how it is not a violation of FERPA for Instructure to use student grade data and enrollment status (i.e. profiling across courses) without the students' express permission. As an instructor, I am not allowed to see my students' grades in other classes, nor their GPA. I suspect (?) such data is crucial to Goldsmith's predictive algorithm for student performance: "We can predict, to a pretty high accuracy, what a likely outcome for a student in a course is, even before they set foot in the classroom" (online here).


If they are predicting performance before students even begin work for a class, grades in other courses must surely (?) be a big part of that predictive algorithm. I suspect many students would object to having their performance predicted in this way based on Instructure's unilateral appropriation of their grade data.
Update. If not the letter of FERPA, then the spirit; see Twitter convo

3. Dialogue. If CEO Goldsmith can't/won't engage in dialogue with users, then I hope we will be hearing soon from others at Instructure who are willing and able to do that.

If you have questions and concerns, I would urge you to add them to the GoogleDoc that Cristina Colquhoun (@call_hoon) has created here: Questions for Instructure. I'm very grateful to Cristina for the excellent job she has done of organizing this latest effort to get Instructure to respond to our concerns. 




Edu-Cat has concerns.
P.S. No, it's not just FUD.

August 22, 2019

Canvas and the Botched Gradebook Labels: Why haven't they fixed this yet?

I'm taking a Zuboff break this week because I need to document my ongoing battle with Canvas Gradebook labels. The new semester has started, and as of midnight Tuesday, I've been fighting with Canvas to get control of my Gradebook. I think the Gradebook is my space, but Canvas insists on intruding. Yes, it's the labels. If you don't know what I mean, they look like this:


Yep, that would be red ink all over the Gradebook. Here's the story:

Unlike other LMSes I have used, Canvas does not respect the Gradebook space as belonging to teachers and students. Instead, Canvas thinks it knows better than teachers and students what's going on in a class. "MISSING" says the Gradebook in angry red letters (even when the assignment was optional), and "LATE" says the Gradebook (even when the student turned the work in before the deadline). By means of these labels, Canvas is sending negative and incorrect messages to my students.

So, if anyone is curious why it is that I have zero trust in Instructure's use of student data for machine-learning and AI, this is why: Canvas is intruding into the Gradebook with wrong messages for my students... and sending wrong messages to students about grades is just about the worst thing that can happen in a class. It's hard work to turn the Gradebook into a positive, rather than a negative space (my approach: Grading.MythFolklore.net), and Canvas then pulls the rug right out from under me. I tell the students they are in control... but Canvas then tells them the opposite: MISSING shouts Canvas (even when the work is not missing) and LATE shouts Canvas (even when the work is not late).  

Does Canvas have any positive messages to send my students? Nope. Nothing but red ink. MISSING. LATE. Over and over again. And I cannot stop it. 

It's like waking up in the morning to find that someone has thrown garbage on your front lawn.


Luckily, James Jones (the API and scripting guru of the Canvas Community) has written a script that will go pick up the garbage; you can see how he did that here: Removing Missing and Late Labels. Because Canvas built the Gradebook without any course-level control over the labels, the script has to check every single assignment item for every single student, adjusting the label data item by item, student by student. Because I use a microassignments approach, the script has to check 18,450 records each time, and it does so quickly. Yay for James! Yay for scripts!
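James's script runs as JavaScript in the browser console, but to give a sense of the logic, here is a rough Python sketch of the same idea using the documented Canvas REST API: list the submissions for a course, and for any submission Canvas has flagged as missing or late, reset its late_policy_status to "none" (which clears the red label). The base URL, token, and course ID are placeholders, this sketch only fetches one page of results for simplicity, and I have not run it against a live Canvas instance, so treat it as an illustration rather than a drop-in replacement for James's script:

```python
import json
import urllib.parse
import urllib.request

BASE = "https://canvas.example.edu/api/v1"  # placeholder: your Canvas instance
TOKEN = "YOUR_API_TOKEN"                    # placeholder: a personal access token

def needs_reset(submission):
    """True if Canvas has flagged this submission MISSING or LATE;
    resetting late_policy_status to 'none' clears the red label."""
    return bool(submission.get("missing") or submission.get("late"))

def _call(url, method="GET", payload=None):
    # Minimal helper for authenticated JSON requests to the Canvas API.
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(url, data=data, method=method, headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def clear_labels(course_id):
    # Fetch one page of submissions for all students in the course;
    # a real script would follow the Link header through every page.
    query = urllib.parse.urlencode({"student_ids[]": "all", "per_page": 100})
    subs = _call(f"{BASE}/courses/{course_id}/students/submissions?{query}")
    for sub in subs:
        if needs_reset(sub):
            _call(
                f"{BASE}/courses/{course_id}/assignments/"
                f"{sub['assignment_id']}/submissions/{sub['user_id']}",
                method="PUT",
                payload={"submission": {"late_policy_status": "none"}},
            )
```

The heart of it is that PUT call: one request per flagged submission, which is why a course full of microassignments means thousands of API hits every time the labels need clearing.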

But here's the thing: James's script cannot stop the LATE labels from appearing; the LATE labels show up no matter what, and I cannot stop the students from seeing those labels. So I apologize to the students for the incorrect LATE labels and ask them to just ignore them; then I run the script once a week to clear them out.

Picking up the trash off our lawn.

The trash that Canvas put there.

It's not like Instructure doesn't know about this problem. When they first rolled out the red labels in the Beta version of the new Gradebook in September 2017, I documented the problem in great detail at the Canvas Community; that link goes to my blog posts tagged "red ink" and the first one is called "Gradebook Dismay," dated September 9, 2017. I was not the only one who was upset to find Canvas putting labels on my students, and Instructure rolled back that Beta feature from the Gradebook. I was sure they would fix it when we were all forced to go to the new Gradebook, which at my school happened in Spring 2019.

But I was wrong. 

When Spring 2019 began, there were the labels in the Gradebook, just like before. I contacted Canvas support and found out there was nothing I could do about it; I could not disable the labels at the Gradebook level. I could not disable the labels at the Assignment level. I could not change the wording of the labels or the color or alter the algorithm that assigned the labels. All I could do was click on the 18,450 items in my Gradebooks one by one.

So, as I said, James Jones came to my rescue and wrote a script.

But is that really a solution? My guess is that most Canvas users are not going to want to copy a script from GitHub, configure the variables manually in the script, and then run that script in the Javascript Console of their browser separately for each class. And to do that week after week. Yes, it's amazingly cool how it works, and I personally love to watch the network performance monitor go blip-blip-blip as it checks on thousands of records at lightning speed. But I'm a nerd, and you shouldn't have to be a nerd to stop Canvas from putting labels on your students. Especially when those labels are completely inaccurate.

And now, let's talk about why the labels are inaccurate, because that reveals a lot about how the people at Instructure view student learning: Instructure is applying an old-fashioned, deficit-driven approach to education, an approach that is exactly the opposite of what we need in the year 2019 IMO.

What is LATE? Before the new Gradebook, Canvas had a great approach to the late problem: they let you have a soft deadline and a hard deadline. This used to be one of my favorite features of Canvas. The soft deadline is what I tell my students to aim for; such-and-such is due on Tuesday (and I set the soft deadline to Tuesday midnight). But does it really matter if students are finishing up something at midnight as opposed to 2AM? No, that's silly. My students are not Cinderellas riding in pumpkin carriages; midnight is totally arbitrary. So I set up the soft deadline, and then I give everybody a 12-hour extension for every assignment, no questions asked; that is the hard deadline, and it is set for noon the next day (so, noon on Wednesday for an assignment due Tuesday). I call it the grace period. If students make the hard deadline, that is GREAT. That is the whole point; they got the assignment turned in by the deadline: yay! But Canvas does not think so: nope, Canvas thinks the work is late, putting that punitive red label on every assignment turned in during that grace period. I call it a no-questions-asked extension, but with those red LATE labels, Canvas undermines my message. Your teacher may tell you it's okay to use the grace period, but we at Canvas know better: a good student should not need the grace period, and you are not a good student; your work was LATE. I gave the students an extension on purpose, and I want them to use the extension if that helps them to get the work done. But Canvas doesn't care about what I want or what my students need. Canvas is just going to apply its algorithm, using the mind of a machine and trampling our humanity. LATE. LATE. LATE. LATE. As if the students who struggle with time don't already beat themselves up enough as it is, Canvas is going to beat them up some more. I say: students should be praised for getting the work turned in, not shamed.
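To make the grace-period arithmetic concrete, here is a small hypothetical sketch (my own illustration, not a Canvas feature): the stated due date is the soft deadline, and the hard deadline is a flat 12 hours later, which is what I would set as the assignment's "until"/lock date.

```python
# Sketch of the soft/hard deadline setup: soft deadline as stated,
# hard deadline = soft deadline + a 12-hour no-questions-asked grace period.
from datetime import datetime, timedelta

GRACE = timedelta(hours=12)

def grace_window(soft_deadline):
    """Return (due_at, lock_at): the soft deadline and the hard deadline
    that closes out the 12-hour grace period."""
    return soft_deadline, soft_deadline + GRACE

# An assignment "due Tuesday" (Canvas midnight is 11:59 PM):
due_at, lock_at = grace_window(datetime(2019, 1, 15, 23, 59))
# lock_at lands at 11:59 AM Wednesday, i.e. noon the next day
```

In Canvas terms, the soft deadline maps to the assignment's due date and the hard deadline to its lock date; anything submitted inside that window made the deadline as far as I'm concerned, red label or no red label.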

What is MISSING? So, those Late labels are pretty bad, but brace yourselves: the MISSING labels are even worse. The way Canvas applies the MISSING label means that you cannot let students choose what assignments to do. I repeat: Canvas will not let you make assignments optional. So, if you think that student choice is important (I do!), and if you want to design your course so that students choose what assignments to complete (I do!), then you better learn how to run James's script because Canvas is going to label every assignment that your students choose not to complete as MISSING. And it is going to freak your students out, understandably. That is how I first found out about the labels back in September 2017; one afternoon I started getting panicked emails from students. "You told us that we could choose what assignments to do, but now Canvas is telling me I have to do them all!" I was baffled; how could Canvas tell my students what to do or what not to do? I didn't even understand what the students were talking about because I had no idea Canvas had started putting labels in my Gradebook. But it's true: Canvas really was telling my students that they had missed assignments. Even though the assignments were not required. Of course my students were upset. And here we are, almost two years later, and the Canvas Gradebook still wants to put MISSING labels on all those student assignments. The only thing that saves me is James's magic script.

What is the point of labels anyway? Even if these labels were correct (and they are not correct in my classes; every single label Canvas applies in my Gradebook is incorrect), these labels are still not going to help students. So, this is not just about Laura-and-her-weird-classes. This is instead about a wrong approach to feedback at Instructure. Students need encouraging, actionable feedback to motivate them to improve their performance. They need to know what they got right, and they also need to know what they can work on in order to do better for next time.

The Late label fails because it disregards the fact that the student DID turn in the work, which is actually good! But instead of praising the student for getting the work turned in, the red label conveys the message "no, you did bad." Negative messages like that are not how you encourage students to do better the next time.

And the Missing label is worse: it sends a negative message, and it is not even clear what the student is supposed to do next. Are they supposed to complete the missing work and turn it in anyway? Or not? Different teachers have different approaches to missing work (if the work really is missing), but Canvas doesn't know that. And Canvas doesn't care. If Canvas cared about that, they would let us configure the labels in our own way, based on our own algorithms, and conveying our own messages to our students. 

About uplift. I'll add one last observation here, and that is about what it means to be "uplifting." I used to be an active member of the Canvas Community, and my last blog post at the Community was about the Gradebook labels, along with my criticisms of Instructure's claims about AI and predictive algorithms. If they can't get the Gradebook right, why should I trust them to get anything else right about student data? At the time, the Community Managers told me I could no longer write posts like that; all contributions to the Community must be uplifting in nature, so say the Community Guidelines. Fair enough: it's their space; they make the rules, and they don't want me complaining in their space. Because I was not willing to self-censor my posts in order to be uplifting all the time, I started blogging again here; that was back in March of this year.

So, what about the Gradebook? Who makes the rules there? Just like Canvas wants an uplifting Community, I want an uplifting Gradebook. Those punitive red labels are NOT uplifting to my students, and I want them out of my Gradebook; those are my Gradebook Guidelines, and Canvas should respect that. The Gradebook belongs to me and my students. It is our space, and we should be able to tell Canvas to get its negative messages out of our space.

And so, in 2000 words (tl;dr I know), that is why I have zero faith in Instructure's ability to do anything useful with data analytics. The devil is in the details, and the details about the Canvas Gradebook are not pretty. 

That's it for this week, but I'll be back with more Zuboff again next time. And I'm glad to say that, aside from the Gradebook labels, my classes are going great! The blog network is up and running; I'll be writing about our adventures at Twitter: @OnlineCrsLady. Happy New Semester, everybody!



August 3, 2019

Data Analytics... no, I don't dig it

This week Jared Stein wrote a blog post about Canvas data, Power to the People with Canvas Data and Analytics (Can You Dig It?). I'm glad that a conversation is happening, and I have a lot to say in response, especially about an opt-out for those of us who don't want to be part of the AI/machine-learning project, and a shut-off so that we can stop intrusive profiling, labeling, and nudging in our Canvas classes. It's not clear from Jared's post just what kind of opt-out and shut-off control we will have, and I hope we will hear more about that in future posts. Also, since Jared does not detail any specific Dig projects, I am relying on Phil Hill's reporting from InstructureCon which describes one such project: profiling a student across courses, including past courses, and using that comprehensive course data to predict and manage their behavior in a current course. (This use of grade data across courses without the student's express consent sure looks like a violation of FERPA to me, but I'll leave that to the lawyers.)


And now, some thoughts:

1. Not everyone digs it. I understand that some people see value in Dig predictive analytics, and maybe they are even passionate about it as Jared says in his post, but there are also people whose passions run in different directions. As I explain below, my passion is for data that emerges in actual dialogue with students, so it is imperative that I be able to stop intrusive, impersonal auto-nudges of the sort that Dig will apparently be generating. The punitive red labels in the Canvas Gradebook are already a big problem for me (my students' work is NOT missing, and it is NOT late, despite all the labels to the contrary). Based on the failure of the Gradebook algorithms in my classes, I do not want even more algorithms undermining the work I do to establish good communication and mutual trust. So, I really hope Instructure will learn a lesson from those Gradebook labels: instructors need to be able to turn off features that are unwelcome and inappropriate for their classes. Ideally, Instructure would give that power directly to the students, or let teachers choose to do so; that's what I would choose. My students voted by a large majority to turn off the labels (which I now do manually, week by week, using a JavaScript snippet), although a few students would have wanted to keep the labels. I say: let the students decide. And for crying out loud, let them choose the color too; the labels don't need to be red, do they?

2. We need to target school deficits, not student deficits. I believe that Instructure's focus on at-risk students comes from good intentions, but that cannot be our only focus. Instead, we need data to help us focus on our own failures, the deficits in our own courses: deficits in design, content, activities, feedback, assessment, etc., along with data about obstacles that students face beyond the classroom. This is a huge and incredibly important topic, way too big for this blog post, so I hope everybody might take the time to read some more about the perils of deficit-driven thinking. A few places to start:
For a great example of what happens when you invite students to talk about the obstacles they face, see this item by Peg Grafwallner: How I Helped My Students Assess Their Own Writing. Applying that approach to Canvas: instead of labeling students with red ink in the Gradebook ("you messed up!") and then auto-nudging them based on those labels ("don't mess up again!"), the labels could be more like a "what happened?" button, prompting a dialogue where the student could let the instructor know the reason(s) why they missed an assignment or did poorly, etc., and the instructor could then work with the student to find a positive step forward, based on what the student has told them. That is the way I would like to see data-gathering happen: student-initiated, in context and in dialogue.

3. Dig is not just about privacy; it is about Instructure's unilateral appropriation of our data. Jared emphasizes in his post that Instructure is not selling or sharing our data, but there is more at stake here than just data privacy and data sharing. Instructure is using our data to engage in AI experiments, and they have not obtained our permission to do that; I have not consented, and would not give my consent if asked. Dan Goldsmith has stated that users "own" their data, and one of the data tenets announced at InstructureCon was "Empower People, Don't Define Them" (discussed here). Speaking for myself, as someone who does not want to be defined by Instructure's profiling and predictive algorithms, I need to be able to just opt out. In his post, Jared writes about Instructure being a "good partner" in education, "ensuring our client institutions are empowered to use data appropriately." Well, here's the thing about partnerships: they need to work both ways. It's not just about what Instructure empowers institutions to do; it is also about what we, both as institutions and as individuals, empower Instructure to do. By unilaterally appropriating our data for their own experiments, Instructure is not being a good partner to individual students and teachers. If the people at Instructure "see no value or benefit in technology that replaces teachers or sacrifices students' agency" as Jared says in his post, then I hope they will give us the agency we need to opt out of Dig data-mining and remove our courses from the Dig laboratory.

Okay, everybody knows my blog posts are always too long, so I'll stop here to keep this about the same length as Jared's post (and of course I've written a lot about these topics here at the blog already). I hope this conversation can continue, and I also hope that Instructure will explore the options for both data opt-out and feature shut-off as they proceed with the Dig rollout for 2020. Thanks for reading!




July 14, 2019

After InstructureCon: Yes, I'm still hoping for that data opt-out!

Last week, I did a round-up post focused on InstructureCon, summarizing my many concerns about Instructure's new AI experiments. Back in March, CEO Dan Goldsmith announced a big shift for Instructure: instead of just giving teachers and schools access to data for traditional statistics as in the past, Instructure itself would be analyzing our students, profiling them in order to create predictive algorithms for future business growth, while doubling their TAM as Goldsmith claimed:


InstructureCon updates on DIG

So, after InstructureCon we know a lot more about this AI project, called DIG. For example, Goldsmith now claims: We can predict, to a pretty high accuracy, what a likely outcome for a student in a course is, even before they set foot in the classroom. 

Personally, I find this claim hard to believe, given that the only data Instructure has to work with is the isolated, low-level data they gather from Canvas activity: log-ins, page views, quizzes, gradebook, etc. Unizin schools add demographics to that Canvas data (which I find even more alarming), but it sounds like Goldsmith is making the claim about Canvas data itself.

In any case, speaking for myself, I do not want Instructure to tell me how to do my job ("we can make recommendations..."), prejudicing my views of students before I have even met them. My school currently does not share a student's GPA with me, and for good reason; as I see it, Instructure's labeling of students in this way is no different than sharing their GPA. In fact, I would suspect that past grade data is a very significant component in Instructure's prediction engine, perhaps even the most significant component. But hey, it's their proprietary AI; I'm just guessing how it might work, which is all we can do with corporate AI/ML experiments.

Notice also the slipperiness of the word "outcome" in Goldsmith's claims about predictive accuracy. When teachers think about outcomes, we are thinking about what students learn, i.e. the learning they can take away with them from the class (what comes out of the class), especially the learning that will be useful to them in their later lives. And that's very complex; there is a whole range of things that each student might learn, directly and indirectly, from a class, and at the time of the class there's no telling what direction their lives might take afterwards and what might turn out to be useful learning along that life path. But the LMS has no record of those real learning outcomes. In fact, the LMS has no real measures of learning at all; there are only measures of performance: performance on a quiz, performance on a test, attendance, etc. So when Goldsmith talks about predicting the "likely outcome" for a student, what I suspect he means is that Instructure is able to predict the likely final grade that the student will receive at the end of a class (which is why I suspect GPA would be a big component in that prediction). But the grade is not the learning, and it is not the only outcome of a class. In fact, I would argue that we should not be using grades at all, but that is a topic for a separate discussion.

What about a data opt-out?

So, now that we know more about the goals of DIG, what about opting out? There was no announcement about an opt-out, and no mention even of the possibility of an opt-out. Goldsmith even claimed in an interview that there hasn't been any request for an opt-out: "We haven’t had that request, honestly." 


Well, that claim doesn't make sense as I myself had a long phone conversation with two VPs at Instructure about my opt-out request. What Goldsmith must mean, I suppose, is that they have not had a request at the institutional level for any campus-wide opt-outs, which is not surprising at all. While it would be great if we had some institutional support for our preferences as individual users, I would be very surprised if whole institutions decide to opt out. Predictive analytics serve the needs of institutions far more than they do the needs of individual teachers or students, and I can imagine that institutions might be eager to see how they can use predictive analytics to look at school-wide patterns that are otherwise hard to discern. Teachers can grok what is going on in their individual classrooms far more easily than provosts and deans can grok what is going on across hundreds and thousands of classrooms. 

Yet... there is hope!

Yet I still have some hope for an opt-out, because I learned from that same Goldsmith interview that individuals OWN their data: One of our first and primary tenets is that the student, the individual and the institution own the data—that’s their asset. 


And he says the same in this video interview: we own our data.


This concession about data ownership really caught me by surprise, in a good way, and renewed my hope for an opt-out. If individuals own their data, then we should be able to take our data out of the Instructure cloud when a course is over if we choose to do so. In other words: a data opt-out, perhaps with the same procedure that Instructure already uses to sunset data from schools that terminate their Instructure contract.

In fact, in the context of ownership, it really sounds more like an opt-in is required. If Instructure wants to use my data — data about me, my behavior, my work, my OWN data — then they should ask me for my permission. They should ask for permission regarding specific timeframes (a year, or two years, or in perpetuity, etc.), and they should ask for permission regarding specific uses. For example, while I strongly object to AI/ML experiments, there might be other research to which I would not object, such as a study of the impact that OER has on student course completion. Not all data uses are the same, so different permissions would be required.

Of course, as I've said before, I am not optimistic that Instructure is going to implement an opt-in procedure — even though they should — but I am also not giving up hope for a data opt-out, especially given the newly announced Canvas data tenets.

Canvas Data Tenets

In addition to this surprising concession about data ownership, we learned about these new Canvas data tenets at InstructureCon. In the video interview cited above, Goldsmith promised a post about data tenets coming soon at the Instructure blog, and there was already this slide in circulation at InstructureCon, which I assume are the data tenets Goldsmith is referring to in the interview (strangely, even the Instructure staff keynotes were not livestreamed this year, so I am just relying on Twitter for this information). As you can see, one of those tenets is: Empower People, don't Define Them.


Now, the language here sounds more like marcomm-speak rather than the legal or technical language I would expect, but even so, I am going to take heart from this statement. If Instructure promises to empower me, then surely they will provide a data opt-out, right? It would not be empowering if Instructure were to take my Canvas data and use it for an experiment to which I do not consent, as is currently the case.

My Canvas Data Doubts

Meanwhile, that tension between empowering people, not defining them, is what I want to focus on in the final part of this blog post. I saw really mixed messages from InstructureCon this year, as the big keynotes from Malcolm Gladwell, Dan Heath, and Bettina Love were all about community, peak moments, love and creativity... with a corporate counterpoint of big data and a billion Canvas quizzes as I learned via Twitter:


See also the contradiction between Goldsmith's claim in an interview that Instructure is all about "understanding the individuals, their paths, their passions, and what their interests are" and what we see in the data dashboards: there are no passions and interests on those dashboards (but I do know those red "missing" labels all too well):




Impersonal personalization

There's a single word that I think expresses this dangerous ambivalence in ed-tech generally, and at Instructure in particular; that word is personalization. On the one hand, personalization looks like it would be about persons (personal agency, personal interactions, personal passions), but personalization has also become a codeword for the automation of education. Both in terms of philosophy and pedagogy, automation sounds really bad... but personalization: ah, that sounds better, doesn't it?

So, for example, listen to what Dan Goldsmith says in this interview (video here); it's technology inevitabilism, literally: "So when you think about adaptive and personalized learning I think it's inevitable that we as an educational community need to figure out ways of driving more personalized learning and personalized growth experiences."


I'm not going to rehash here all the problems with the rhetoric of personalization; Audrey Watters has done that for us, as in this keynote (among others): Pigeons and Personalization: The Histories of Personalized Learning. (A good all-purpose rule for thinking about ed tech: READ AUDREY.)

Instead, I will just focus here on the impersonality of Canvas data, listing five big reasons why I mistrust that data and Instructure's claims about it:

1. Canvas data measure behavior, not learning. Canvas is an environment that monitors student behavior: log on, log off; click here, click there; take this quiz, take that quiz; write this many words, download this many files, etc. If your educational philosophy is based on behaviorism, then you might find that data useful (but not necessarily; see the next item in this list). If, however, your educational philosophy is instead founded on other principles, then this behavioral data is not going to be very useful. And consider the keynote speakers at InstructureCon: none of them was advocating behaviorism; just the opposite. Here's Bettina Love, for example, on liberation, not behaviorism (more on her great work below):


2. Canvas fails to gather data about the why. Even for purposes of behavior modification, that superficial Canvas data will not be enough; you need to know the "why" behind that behavior. If a student doesn't log on to Canvas for a week, you need to know why. If a student clicks on a page but spends very little time there, you need to know why. If a student does poorly on a quiz, you need to know why. For example, if a student got a poor score on a quiz because of a lack of sleep that is very different from getting a poor score because they did not understand the content, which is in turn very different from being bored, or being distracted by problems at home, etc. Just because students completed a billion quizzes in Canvas does not mean Instructure has all the data it needs for accurately profiling those students, much less for making predictions about them.

3. Canvas data are not human presence. The keynote speakers consistently emphasized the importance of people, presence, relationships, and community in learning, but numbers are not presence. Does this look like a person to you? This is how Canvas represents a student to me right now; the coming data dashboard (see above) uses the same numbers repackaged, because that is all that Canvas has to offer me: numbers turned into different kinds of visualizations.


Goldsmith claims that Instructure is different from other learning companies because they are all about people's passions and interests, but that claim does not fit with the views I get of my students in the Canvas Dashboard and the Canvas Gradebook: no passions, no interests; just numbers. I don't need percentage grades, much less the faux-precision of two decimal points. Instead, I need to know about students' passions and interests; that is exactly the information that would help me do my job well, but Canvas cannot provide it.

4. Canvas data does not reflect student agency. The basic pedagogical design of Canvas is top-down and teacher-directed. Student choice is not a driving principle; in fact, it is really a struggle to build courses based on student choice (I will spare you the gory detail of my own struggles in that regard). Students cannot even ask questions in the form of search; yes, that's right: students cannot search the course content. The only access to the course content is through the click-here-click-there navigation paths pre-determined by the instructor. And, sad to say, there is apparently no fix in sight for this lack of search; as far as I could determine, there was no announcement regarding the deferred search project from Project Khaki back in 2017 (details here). Think about that lack of search for just a minute. It's no accident that Google started out as a search engine; the questions that people brought to Google, and people's choices in response to those answers, generated the behavioral surplus juggernaut that now powers Google AI. Netflix succeeds as a prediction engine precisely because it is driven by user choice: lots of options, lots of choices, and lots of data about those choices with which to build the prediction engine. The way that Canvas forestalls student choice, including the simple ability to initiate a search, is why I believe their AI project is going to fail. (Meanwhile, if I am wrong and there was an announcement about Canvas search at InstructureCon, let me know!)

And this last item is actually the most important:

5. Canvas data cannot measure obstacles to student learning. By focusing data collection on the students, Instructure runs the risk of neglecting the social, political, and economic contexts in which student learning takes place. Whether students succeed or fail in school is not simply the result of their own efforts; instead, there are opportunities and obstacles, not evenly distributed, which are crucially important. Does Canvas data record when students are hungry or homeless or without health insurance? Does Canvas data record that a course is taught by a poorly paid adjunct with no job security? As Dave Paunesku wrote in Ed Week this week, "When data reveal students' shortcomings without revealing the shortcomings of the systems intended to serve them, it becomes easier to treat students as deficient and harder to recognize how those systems must be changed to create more equitable opportunities." I hope everybody will take a few minutes to read the whole article: The Deficit Lens of the 'Achievement Gap' Needs to Be Flipped. Here's How. (Short answer: another billion quizzes is not how you flip the deficit lens.)



Of course, this is all a topic for a book, not a blog post, so I'll stop for now... but I'll be back next week to start a new approach to these datamongering round-ups: a commentary on Shoshana Zuboff's Surveillance Capitalism. Of all the concepts in play here, the one that is most important to me is what Zuboff calls our "right to the future tense." So, I will work through her book chapter by chapter in the coming weeks, and hopefully that will make it more clear just why I object so strongly to Instructure's predictive analytics experiment.

~ ~ ~

I want to close here with Bettina Love's TED talk; take a look/listen and see what you think: I think she is brilliant! More also at her website.


Speaking for myself, I'll take dance and lyrics over data analytics any day. So, keep on dancing, people! And I'll be back next week with Shoshana Zuboff's book and our right to the future tense. :-)