20 April 2012

Week 14: Project Ends

I made it! Doing my capstone was the one thing about the iSchool program that interested and terrified me at the same time. I am so glad there is the "professional project" option, but since I work full-time, I was afraid there would be no good way to fulfill this requirement. Thankfully, I was able to get permission to do my capstone at my place of work, for a different department whose work I have a real interest in. I still can't believe another 14 weeks have gone by, and all that I have accomplished in the 125 hours of the project. What I have ended up with is not at all what I envisioned going in; it really is so much more: a far more well-rounded and complete project than what I scoped out initially.

This week was spent tying up loose ends so that I could turn the project in. I finished getting the last bits of data (mostly participant quotes) into my presentation slides and then added the little touches, like animated annotations on my designs and a nifty "play" button overlay for the three video clips I included. I also learned how to export my speaker notes so I can see them during the presentation, though I don't exactly like the way the export works. It prints one page per slide with the notes below, whether or not there are notes for that slide. Well, I have 45 slides and many don't have notes, so I will find a way to remove the blank pages at least!

I tried to keep my slides clean, interesting, and light on words, the exception being the slides with participant quotes, because I do plan to read those word for word and want the audience to be able to follow along. I decided at the last minute to move the quantitative scorecard from the end of the Results section to the beginning, my reasoning being that seeing the snapshot overview first and then going into detail on each of the key metrics would hold the audience's attention better. I know if I just saw chart after chart of data without knowing what it was leading up to, I would get bored. At least this way they know the final scores first and then are shown a breakdown of the data.

That being said, here's a PDF version (sans videos) of my final project deliverable: WEM v.8.2 Pickers

There's a neat feature when you export to PDF from PowerPoint: it can also export the speaker notes into the PDF as a layer that can be toggled on and off, so my speaker notes are included as well.

Well, this certainly isn't the "end" quite yet; there is still the matter of my capstone poster, which I will have ready next week, and then the poster presentation itself in two weeks. It all feels a little surreal right now. I've been having sleep issues this week even though I am not worried about getting all my work done or anything; I think it's just that change is impending. I'll no longer be a student in a few weeks and I'll have all this free time again. The first thing I'll be doing after graduating is going back to Alaska for another road trip, but beyond that is a great unknown. Maybe I'm afraid nothing will actually change.

Me at the Arctic Circle in Alaska, May 2012

13 April 2012

Week 13: Presentation Work

I'd like to start out by saying I got an actual shiver of excitement this morning when I realized that three weeks from today, I will be DONE with school. It has been nearly four years since I first started getting stuff together to apply to grad school and studying for the GRE. At times, it felt like this was never going to happen, that I was never going to finish. I still don't know what I'm going to do next but at least it will be a new chapter.

So this week I have been preparing my final deliverable: a PowerPoint presentation on my findings. I have had to remind myself that unlike what I write about here and what my capstone poster will contain, this presentation is not so much about my project as it is about the product I was trying to improve. While there will be some overlap between the contents of my presentation and my poster, they are for different audiences and have different purposes. I have been trying to work on both somewhat in parallel, at least as far as gathering the information I want to use, but I have to be very aware that I am telling two stories.

The presentation is going to be more focused on the usability study: the tasks run, comments and video clips from the sessions, and the results of testing. I think OT is more interested in whether the design suggestions I came up with are worth pursuing in the next version of the product. My poster will be more concerned with the overall project and will likely focus more on the design decisions I made, with metrics from the study presented only as a snapshot of the usability portion. I started out by creating an outline of what I wanted to put into my presentation:
  1. Introduction
  2. Usability Concerns
    1. Picker models are inconsistent
    2. Selecting multiple objects is confusing
    3. Icon for removing selections is unclear
    4. Channel selection model is counter intuitive
  3. Designs
    1. Single Container Picker
    2. Multiple Containers Picker
    3. Single Content Item Picker
    4. Multiple Content Items Picker
  4. Evaluations
    1. Participants
    2. Tools
    3. Testing
      1. Comments/Videos for each task (9)
  5. Results
    1. Satisfaction (SUS)
    2. Effectiveness
    3. SEQ
    4. Efficiency
    5. Time on Task
    6. Appearance
    7. Scorecard
  6. Recommendations
I have tried to compile all this information into my document, filling out the outline as it were, so that next week I can focus on making sure things are in the right order, adding in any animations I might need, and so forth. It has been more work than I thought it would be. That is, I think creating a polished, interesting presentation is harder than writing up a formal report. Part of what has been time consuming is going back through the recordings of my testing sessions to pull quotes and to create short video clips (I'm only including three). I think the clips will be especially helpful for demonstrating possible problem areas.

I am also still grappling with some of the terminology and meanings around the statistical data. I don't think I need to understand the ins and outs of every calculation, but I would at least like to be able to speak to the major figures I am presenting. I read up on error bars and geometric means, and Tanya loaned me a book, Measuring the User Experience, that I found helpful for a couple of reasons. I was struggling to understand what "efficiency" measures and how it is calculated, since it takes into account only successes, time on task, and benchmark times set by an expert; the book gave me this explanation: the core measure of efficiency is the ratio of the task completion rate to the mean time per task. Ah! I can at least tell other people that.
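
To convince myself I actually understood it, I scratched out a quick Python example. These are made-up numbers, not anything from my real data, and I threw in the geometric mean the book recommends for task times while I was at it:

    from math import exp, log

    # Made-up numbers for one task, just to check my understanding.
    successes = 5                            # participants who completed the task
    attempts = 6                             # participants who attempted it
    times = [32.0, 41.0, 55.0, 38.0, 47.0]   # time on task in seconds, successes only

    completion_rate = successes / float(attempts)     # ~0.83
    mean_time_minutes = sum(times) / len(times) / 60.0
    efficiency = completion_rate / mean_time_minutes  # completions per minute

    # The geometric mean is less thrown off by one slow, exploratory
    # participant than the arithmetic mean is, hence the recommendation.
    geometric_mean_time = exp(sum(log(t) for t in times) / len(times))

    print(round(efficiency, 2), round(geometric_mean_time, 1))

With these numbers, efficiency works out to about 1.17 completed tasks per minute, which finally makes the "ratio" wording click for me.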

About 3am Wednesday morning, I woke up with an idea for how to lay out my poster.

My half-asleep sketch of a possible Capstone poster layout

There is just so much information I could include that I need to be careful not to overload the space. My biggest concern is being able to adequately demonstrate my design ideas since I have four permutations.  I think I might end up showing just the two multiple picker designs and indicating that the checkboxes would not be used for the single picker designs. We'll see.

06 April 2012

Week 12: Reviewing the Data

This week, I've been moving data from the spreadsheet used to capture notes during testing into another that compiles further statistics based on these data.  It assesses four measures:
  1. results of the system usability scale
  2. effectiveness of tasks
  3. efficiency of tasks
  4. results of the appearance scale
It then takes these four measures and averages them for an overall benchmark score.  It also calculates the mean of responses to the single-ease question (SEQ) asked after every task: Overall, on a scale from 1 to 7 where 1 is Very Difficult and 7 is Very Easy, this task was… I looked up the significance of the SEQ because I wasn't really sure what it was supposed to measure, and I found this explanation:
Was a task difficult or easy to complete? Performance metrics are important to collect when improving usability but perception matters just as much. Asking a user to respond to a questionnaire immediately after attempting a task provides a simple and reliable way of measuring task-performance satisfaction. Questionnaires administered at the end of a test, such as the SUS, measure perception satisfaction. [1]
I'll admit, going into testing I was really only concerned with whether participants succeeded or failed at the tasks I created.  And I was feeling pretty good that there were only two failures among the 84 possibilities (14 separate measures for each of six participants). That works out to a 98% effectiveness rating, but alas, it tells only part of the story.  The measure that is most suspect, in my opinion, is the efficiency rating. I spoke with Tanya about this, and really there is no hard and fast rule for it. What the UX team typically does is time themselves doing each task at a moderate pace and use those times as "expert" benchmarks, comparing them against each participant's time on task, counting only the tasks that ended in success.  Pretty much, if one participant struggles a bit or is exploring the interface and trying to figure out what to do, it can really throw off the overall times. I was dismayed to see the mean efficiency of these tasks was only 74%, which falls short of the standard of 80 that we strive for on any metric.
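
For my own notes, here's roughly how I understand those calculations, sketched in Python with made-up task times; the ratio approach for efficiency is my reading of what Tanya described, not the actual spreadsheet formulas:

    # Hypothetical SUS and appearance results (both on a 0-100 scale).
    sus_score = 80.0
    appearance_score = 82.0

    # Effectiveness: share of successes across all measured attempts.
    failures = 2
    attempts = 84                      # 14 measures for each of six participants
    effectiveness = 100.0 * (attempts - failures) / attempts   # ~98%

    # Efficiency: for successful tasks only, each participant's time on task
    # is compared against the "expert" benchmark time. One participant who
    # stops to explore drags the whole mean down.
    benchmark_time = 30.0              # expert time in seconds (made up)
    times = [28.0, 45.0, 60.0, 33.0]   # successful participants' times (made up)
    efficiency = 100.0 * sum(benchmark_time / t for t in times) / len(times)

    # Overall benchmark score: a plain average of the four measures.
    overall = (sus_score + effectiveness + efficiency + appearance_score) / 4
    print(round(overall, 1))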

But the overall benchmark score for my study squeaked in at 81, which I guess means my designs were an improvement.  One of the big issues we were trying to address with this project was making it easier/clearer for users to add channels to a content item. In WEM 8.0 summative testing, only 40% of customers and 44% of non-current customers were able to do this.  In my study, 100% of customers and 67% of non-customers were able to add channels without any prompting, for an overall score of 83% (five of six participants).  The one non-customer who failed this task said it was likely an issue with the prototype, because she just didn't notice that the "square" icons were supposed to be checkboxes.  So here is the overall scorecard for this round of testing:

WEM v8.2 Pickers scorecard after one round of usability testing

The scorecard is a PowerPoint slide, and I got to learn how to link data from an Excel spreadsheet into PowerPoint.  The only issue was the error message from PowerPoint saying it could not update the data because it couldn't find the linked file; the message was very confusing, and I had to do a Google search to find where to update the file path.  I never would have found it on my own.

From the Home button, go to Prepare and then Edit Links to Files

Next week I'll start interpreting these data and weaving them into an interesting story for a PowerPoint presentation that I'll eventually give to internal stakeholders.  In addition to the statistics, I plan to use some video clips from the tests.

Footnotes

[1] Sauro, J. If you could only ask one question, use this one.