Cognition at Marquette, Spring 2017: A Postmortem of the Blog Project

The Spring 2017 semester is complete (by a couple of weeks, but I’ve been busy relaxing…) at Marquette University. I had the opportunity to adjunct there, teaching two classes, while looking for a tenure-track position. In Cognition, specifically, I ran a blog project based on one at the Learning Scientists. It wasn’t the first blog project I’ve done, but it was quite different in approach and content. In this post, I want to share my thoughts on the project as a trial run, some observations I made while grading, and where I plan to take the project moving forward. I’m open to suggestions and comments–just let me know in a comment below or on Twitter!

Though I had done minimal planning, I knew I had a subdomain on my own website, so I thought I’d set up a blog after seeing how successful a friend’s blogging project was. I came across the aforementioned blog assignment, which instructed students to use cognitive psychology principles to describe effective ways to study and learn, targeted at an audience of incoming college freshmen. I thought it was fantastic: this is a topic I constantly discuss in my classes, and I happen to know some fantastic educational/cognitive psychologists!

First and foremost, I thought the students did a fantastic job across the board. Using a rubric from the same blog post, all blogging pairs did well as far as their grades were concerned. My curiosity was concerned not with how my TA and I would grade the blog posts, but with how the students would score each other’s posts. So from the rubric, I created a student rater sheet (using a Likert scale of 1-5, based on the way the rubric was set up, with 1 being the poorest and 5 being the best). On one of the last days of class, I had each pair (students worked on their posts in pairs because I had 52 students in this class) go around to computer stations, each set up with a blog post. They then rated the blog posts on the categories of the rubric. I’d like to share some observations, which give me a certain sense of satisfaction about my grading (as well as my TA’s), because the students tended to see the posts in much the same way.

These were the categories the students rated:
1. Content/Originality
2. Writing Quality
3. Writing Style
4. Empirical Evidence
5. Visual Appeal

OVERALL RELIABILITY

I first wanted to see whether the average of my grade and my TA’s grade would correlate with a composite, averaged score from the students. So I averaged each individual rating per post, and then collapsed the five categories above into a single score. I showed this to my class as a way to express that the score they ultimately received from the two “instructors” in the course reflects how their peers also viewed their blog post. I got nods and understanding faces, so I assumed they bought the argument.
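For the curious, the aggregation steps above can be sketched in a few lines of Python. The post names, ratings, and grades below are entirely made up for illustration; the real data were hand-scored rater sheets, but the logic is the same: average each rater’s five category scores into one number, average across raters for a composite per post, then correlate the composites with the instructor averages.

```python
from statistics import mean

# Hypothetical data (NOT the real class data): each inner list is one
# student rater's five category scores (1-5 Likert) for that post.
student_ratings = {
    "post_a": [[5, 4, 4, 5, 5], [4, 4, 5, 5, 4]],
    "post_b": [[3, 3, 4, 3, 2], [2, 3, 3, 4, 3]],
    "post_c": [[4, 5, 4, 4, 5], [5, 4, 4, 5, 4]],
}
# Hypothetical averaged instructor + TA grades, rescaled to the same 1-5 range.
instructor_avg = {"post_a": 4.6, "post_b": 3.1, "post_c": 4.4}

def composite(ratings):
    """Collapse one post's ratings: average each rater's five categories,
    then average across raters."""
    return mean(mean(r) for r in ratings)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

posts = sorted(student_ratings)
student_scores = [composite(student_ratings[p]) for p in posts]
grades = [instructor_avg[p] for p in posts]
r = pearson(student_scores, grades)
```

With real rater sheets you would swap in the actual scores; nothing here depends on the number of raters or posts.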

INDIVIDUAL CATEGORY CORRELATIONS

I also wanted to see how the individual correlations fared when compared to an overall score. I know, I should probably look at the individual rubric scores for each of these categories, but that takes a lot of work and I don’t have an undergraduate I can rope into inputting hand-scored rubrics into an Excel file and running individual tests. So below is a single graph that contains the 5 student-rated categories with their “grades”. As expected, the higher-rated blog posts tended to yield the higher scores overall. Nothing shocking here. What I did find interesting is that visual appeal had the largest effect–a testament to good blog-making. It’s not just a report of your empirical findings, but something worth reading on an Internet full of things! The post that had the highest-rated visual appeal included videos that were great AND pertained to the topic.

However, I noticed that the smallest effect was on Writing Style, which includes things like “is the post written appropriately for the audience”? Is there jargon that makes it unapproachable, or on the other end of the spectrum, is it too colloquial and not informative? I suppose this might need a more nuanced eye and more time to spend with each post than the students had when they were making their ratings.
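The per-category comparison can be sketched the same way: correlate each category’s student-rated averages with the overall grades, one category at a time. Again, every number below is invented for illustration (the real values came from the rater sheets), but the pattern it produces mirrors the observation above: a category like Visual Appeal can track the overall grade closely while Writing Style tracks it more loosely.

```python
from statistics import mean

CATEGORIES = ["Content/Originality", "Writing Quality", "Writing Style",
              "Empirical Evidence", "Visual Appeal"]

# Hypothetical per-post student averages for the five categories (1-5 Likert).
category_scores = {
    "post_a": [4.5, 4.0, 3.8, 4.2, 4.8],
    "post_b": [3.0, 3.2, 3.5, 2.8, 2.5],
    "post_c": [4.2, 4.4, 3.9, 4.5, 4.6],
    "post_d": [3.6, 3.5, 3.7, 3.4, 3.2],
}
# Hypothetical overall grades on the same scale.
overall_grade = {"post_a": 4.6, "post_b": 3.0, "post_c": 4.5, "post_d": 3.5}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

posts = sorted(category_scores)
grades = [overall_grade[p] for p in posts]
for i, cat in enumerate(CATEGORIES):
    xs = [category_scores[p][i] for p in posts]
    print(f"{cat}: r = {pearson(xs, grades):.2f}")
```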

CONTINUING THE EFFORT

The success of this trial run showed me that this is a fun and educational endeavor worth continuing. I want to keep generating content for this site, but in a uniquely educational sense. The majority of the content needs to be generated by students. I make no claims that this site should be the authority on any of these psychology topics, but it can be a repository for students connecting with other students, real or imagined!

Next semester, I will be teaching Cognition again, at a different school with a much smaller class. Students will likely be working on their own individual posts. Some of the changes I am planning to make:

1. I didn’t make comments mandatory this time around; I gave extra credit instead. I plan to make commenting on others’ posts part of the grade, and to require multiple comments–with the author responding as well. I look forward to this type of dialogue.
2. Piggybacking off of that, I plan to have authors seek outside comments through whatever publication/sharing means they want (each post is automatically shared on my Twitter account, but they can use the link to share on FB/Twitter, email, etc.).
3. I plan to use rolling due dates that coincide with course content, rather than having everything due at the end of the semester.
4. I want to make slight content changes, with the same general assignment goals/aims. I haven’t really zeroed in on what these tweaks might entail, but I am leaning toward some film component. I am most open to suggestions regarding this last change!

I’m excited to do this again next semester in Cognition, and I have some ideas on how to incorporate other classes into this mode (perhaps research methods/lab courses, anyone?). Again, I’m open to comments and suggestions for the content continuing into the future!