30 March 2012

Week 11: Usability Testing Ends!

Hurray! I have survived. I finished up my usability testing this week with one current OT customer and with a former OT employee who used an older version of the system. Unfortunately, I never heard back from the other customers I reached out to, but I think getting someone who had used the system in the past was just as good. These two tests were very different in feedback and flow. The current customer is a developer, not so much an end user of the GUI, so she had a lot to say about aspects of the product beyond what I was testing. I had to be diligent about capturing that extra feedback for the team, since it extended beyond being able to choose objects in the system. She lamented all the clicking, even though she was able to perform each task, and really wanted to be able to drag things around, like dragging a channel over to the "My Selections" area instead of clicking checkboxes. I saved an excerpt of the recording, which I will use in my presentation, where she performs task six and complains about the clicking, saying I was torturing her by making her do it (all in good fun, though).
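
Her drag-and-drop request is easy to picture in HTML terms. Here is a minimal sketch of that interaction; this is purely my own illustration, not code from the prototype or the product, and the element IDs, class names, and handler are all invented.

```typescript
// Hypothetical sketch: drag a channel row onto "My Selections" instead
// of clicking its checkbox. All names here are invented for illustration.
const selections = document.getElementById("my-selections")!;

function addChannelToSelections(channelId: string): void {
  const item = document.createElement("li");
  item.textContent = channelId; // stand-in for the real selection logic
  selections.appendChild(item);
}

document.querySelectorAll<HTMLElement>(".channel-row").forEach((row) => {
  row.draggable = true;
  row.addEventListener("dragstart", (e) => {
    e.dataTransfer?.setData("text/plain", row.dataset.channelId ?? "");
  });
});

selections.addEventListener("dragover", (e) => e.preventDefault()); // allow drop
selections.addEventListener("drop", (e) => {
  e.preventDefault();
  const channelId = e.dataTransfer?.getData("text/plain");
  if (channelId) addChannelToSelections(channelId);
});
```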

The former employee did not have much to say in comparison. She mostly said things seemed simple enough. What I found interesting, though, is that she had a real problem with task eight, where I ask users to try to navigate using just the folder grid with the tree hidden. She got very frustrated that she could not see where to go or what to do, and it was SO HARD for me to sit there and watch her struggle. I was worried she was going to get mad at me for not stepping in. In reality, it only lasted about two minutes, but it felt much longer. She did eventually discover the "up" button and was able to complete the task, though I marked this as a failure simply because she was ready to give up long before that point; I think she would have in a normal use situation, or at least would have switched to another method. In fact, she tried going back and looking at the tree, and it was on that screen where she first noticed the "up" button for folder navigation, but the button was not active for that screen. I was most surprised that she struggled with this task since, in the version of the system she used, that is how navigation worked. There was no folder tree to show hierarchy, but there was a breadcrumb. It has been interesting to see just how much having a semi-working prototype influences the testing. I'd be interested in running a test with just printed pieces of paper, having the users explain to me what they would like to have happen.
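
To make her confusion concrete, here is a minimal sketch of the navigation state as I understand it; this is my own reconstruction with invented names, not the prototype's actual code. The "up" button is enabled only in the grid-only view and only below the root, which is exactly the combination she kept missing.

```typescript
// Sketch of the "up" button's enablement rules (my reconstruction).
interface NavState {
  path: string[];      // e.g. ["root", "Products", "2012"]
  treeVisible: boolean;
}

function upButtonEnabled(state: NavState): boolean {
  // Disabled whenever the tree is showing, even deep in the hierarchy,
  // which is where she first spotted the (inactive) button.
  return !state.treeVisible && state.path.length > 1;
}

function goUp(state: NavState): NavState {
  if (!upButtonEnabled(state)) return state;
  return { ...state, path: state.path.slice(0, -1) }; // drop the last folder
}
```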

Overall, I think the testing portion of my project went very well. It was a new experience to do everything myself: coming up with the design, writing the test plan, administering the tests, and acting as both facilitator and note taker at the same time. The spreadsheet really helped, allowing me to capture time on task, task success or failure, difficulty ratings, and general comments very easily. There are only three weeks left in my project, and I will use the remaining time to write up a "formal" report of my findings, basically translating all the boring data into a story with my designs and recommendations. If this small sample tells me anything, it is that I was able to come up with some real improvements over what is in the current system. I'm ready for all the questions I'm sure to get from the business stakeholders about the results.
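
For a sense of what that roll-up looks like, here is a small sketch of a per-task summary; the field names are invented, but the measures (time on task, success or failure, difficulty rating) are the ones I captured.

```typescript
// Sketch of a per-task summary over usability test results (invented names).
interface TaskResult {
  task: number;
  seconds: number;    // time on task
  success: boolean;   // completed without intervention
  difficulty: number; // participant's 1-5 rating
}

function summarize(results: TaskResult[]): void {
  const byTask = new Map<number, TaskResult[]>();
  for (const r of results) {
    const bucket = byTask.get(r.task) ?? [];
    bucket.push(r);
    byTask.set(r.task, bucket);
  }
  for (const [task, rs] of byTask) {
    const meanTime = rs.reduce((sum, r) => sum + r.seconds, 0) / rs.length;
    const successRate = rs.filter((r) => r.success).length / rs.length;
    const meanDifficulty =
      rs.reduce((sum, r) => sum + r.difficulty, 0) / rs.length;
    console.log(
      `task ${task}: ${meanTime.toFixed(0)}s mean, ` +
        `${(successRate * 100).toFixed(0)}% success, ` +
        `difficulty ${meanDifficulty.toFixed(1)}/5`,
    );
  }
}
```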

I want to leave you, the reader, with something fun or interesting each time so here's a short clip of a friend of mine climbing a lamppost to replace a geocache called Climb #6. This cache has a terrain rating of 4.5/5. I have only personally done a 4 so far.

23 March 2012

Week 10: Milestone 2

Let me just start with a usability rant.  I had nearly completed this post when I accidentally clicked the browser back button with my mouse and lost the entire thing. Unlike other Google products, for some reason Blogger didn't auto save correctly. #fail

Today technically marks milestone two of my project, but there was no set deliverable. I had originally planned to be done with my user testing, but decided instead to build in a one-week buffer between testing with students and employees and testing with customers, so that I would have a chance to tweak the prototype for any major issues discovered. I met with Robin and Tanya on Monday to debrief them on the user tests I have completed so far and to get clarification on the final deliverable. I will be creating a usability study report using a standard template, and also a PowerPoint deck that I will present to the User Experience team and the WEM product managers showing my findings and suggestions. While the presentation is not strictly part of the project requirements, and I might end up giving it after the actual project end date, I think it will be good experience to talk through my project before the Capstone poster session.

I also asked Tanya about the strange, complicated formulas I noticed at the bottom of each spreadsheet tab in the usability test template, and she told me they were calculations of the confidence interval for each task. She said she doesn't bother to calculate it every time, and I told her I would look into it. I spent some time researching this and quickly got frustrated, as statistics and I do not get along (I was really worried about taking the Intro to Research Methods course). What I could glean is that the confidence interval is essentially the sample mean plus or minus a margin of error, and that with sample sizes as small as mine the interval comes out so wide it isn't very informative. I just couldn't figure out how to calculate it even if I wanted to. At least Excel has a formula for standard deviation!
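
For anyone who wants to attempt it anyway, here is a minimal sketch of what I believe those spreadsheet formulas compute: a two-sided 95% confidence interval for mean time on task, using the t-distribution since the samples are tiny. The critical values are standard t-table numbers, and the sample times are made up.

```typescript
// Sketch of a 95% confidence interval for mean time on task.
// Two-sided t critical values by degrees of freedom (n - 1); standard
// table values, hardcoded to avoid pulling in a stats library.
const T_95: Record<number, number> = {
  1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571, 6: 2.447, 7: 2.365,
};

function confidenceInterval95(samples: number[]): [number, number] {
  const n = samples.length;
  const t = T_95[n - 1];
  if (t === undefined) throw new Error("extend the t table for larger n");
  const mean = samples.reduce((sum, x) => sum + x, 0) / n;
  // Sample standard deviation, i.e. what Excel's STDEV returns.
  const sd = Math.sqrt(
    samples.reduce((sum, x) => sum + (x - mean) ** 2, 0) / (n - 1),
  );
  const margin = t * (sd / Math.sqrt(n)); // the margin of error
  return [mean - margin, mean + margin];
}

// Made-up times on task (seconds) from four participants:
console.log(confidenceInterval95([92, 140, 77, 115]));
```

With only four or five participants per task, the critical value is large and the interval comes out very wide, so I can see why it often goes uncalculated.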

I learned a valuable lesson this week about scheduling usability test participants: always schedule more than you think you need or have alternates on standby.  Tuesday, I had one of my customer participants cancel on me for next week.  I have since emailed two more customers but haven't received responses.  The good news is that it is not necessary that I use customers for my tests, but it would have been nice.  If I don't get responses by Monday, I will just ask someone else to participate.  I know several former employees who have had experience with older versions of the product.  Also, the two current employees I already used for testing are actually considered customers, just internal customers, since they use OT products on a daily basis.

I spent much of this week adding functionality to the prototype in anticipation of testing with customers. The first change was to make the "add folder" and "add channel" buttons appear active all the time, not just when a user has a parent channel highlighted. While I think this is how these actions should work, it might complicate the test, since the prototype only allows the user to actually add a new folder (task 2) when the correct parent folder is highlighted; there are just too many possible variations of this task to code it correctly. The other change was to give users the ability to "uncheck" the boxes for channel selections they had made during task 3 when I ask them to clear all the channel associations in task 5. Before, they had to find and use the "remove all" button, but I noticed some participants really wanted to undo the checkboxes instead.
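
In state terms the second change is tiny. Here is a minimal sketch, with invented names, of the two paths now converging on the same result:

```typescript
// Sketch: channel associations can now be cleared one checkbox at a
// time or all at once; both paths end in the same empty state.
const selectedChannels = new Set<string>();

function toggleChannel(channelId: string, checked: boolean): void {
  if (checked) selectedChannels.add(channelId);
  else selectedChannels.delete(channelId); // the new "uncheck" path
}

function removeAll(): void {
  selectedChannels.clear(); // the original single-button path
}
```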

This week, I leave you with another video, this time of a pseudo-usability test of Windows 8 that was linked in the latest issue of the user experience newsletter I've been following for years, Good Experience. I haven't actually seen Windows 8 for myself, but this video doesn't give me a lot of confidence.


16 March 2012

Week 9: More Usability Testing

I decided to work through spring break so that I could finish this project sooner and have more time to work on my poster. This week, I conducted two more usability tests, this time with internal OT employees who have a lot of experience with another content management system but have only taken the training for WEM (our team is actually in the process of setting up a new instance of WEM so that they can start using it for part of the corporate website). These are people I have worked with a little, remotely, and only met once in December, so the sessions were still comfortable; they weren't total strangers like my next round of participants will be. During the first test, my WebEx account wasn't working, so I got to experience troubleshooting technical difficulties on my feet. Thankfully, I had a backup plan and we used the free screen-sharing tool join.me.

One of the more interesting observations from my first participant this week was his confusion about "where he was in the system" when he added a new content item, and then again when I asked him to add a new folder. It was almost as if the OS metaphor went too far: he thought he was somewhere within the folder structure when actually he was outside of it, like when you're saving a new Word document and have to choose where to save it. I wonder if, in the CMS the Marketing team currently uses, content can only be created from inside a folder. You certainly have that option in WEM too, but it's not required. He also really wanted the add folder (and add channel) options to be available from contextual right-click menus; he really did not like the button placement for these options. Based on this feedback, I do think the add folder and add channel buttons should always be active (right now the buttons become enabled only when the user has highlighted an existing folder or channel, so the new container is created as a child). Since there is an option to choose the placement of the new container from within the creation screen, I see no reason why the add option should not always be available; this would also let users create new containers at the root level. A sketch of what I mean follows.
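
This is a minimal sketch of the suggested behavior, with invented names; the point is that the highlighted container merely pre-fills the parent field in the creation screen rather than gating the button.

```typescript
// Sketch: "add folder" stays enabled regardless of selection; the
// current highlight only supplies a default parent, editable in the
// creation screen (so root-level creation is always possible).
interface CreateFolderDialog {
  defaultParent: string; // pre-filled, but the user can change it
}

function openAddFolder(highlighted: string | null): CreateFolderDialog {
  // No highlight no longer means a disabled button; it just means
  // the dialog defaults to the root level.
  return { defaultParent: highlighted ?? "root" };
}
```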

Nothing terribly exciting happened this week and I don't have any new designs or demonstrations, so instead I leave you with a song. I first heard it three years ago during my first semester of grad school, and I listened to it over and over on the Friday of spring break as I tried to write a paper for the Understanding and Serving Users course. I hope you enjoy Okkervil River, "Unless It's Kicks."


09 March 2012

Week 8: Usability Testing Begins

It's amazing, really, how much work goes into usability testing. Each test takes about an hour, but many more hours go into finding participants, prepping materials, testing technologies, reviewing the test afterward, and evaluating findings, just to name a few things. This week, I spent some time further refining the test plan and prototype. I realized that when a participant clicks through the folder grid, the folder tree should update at the same time; it's funny the things you overlook when you have been super involved in a project. I think this is part of the reason why developers don't normally do the testing too: they are too close to the project. I am definitely missing things and probably biasing my test results simply because I also created the prototype.
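
The tree/grid mismatch I caught is the classic two-views-one-model problem. A minimal sketch, with invented names, of how keeping both views subscribed to a single selection model prevents them from disagreeing:

```typescript
// Sketch: grid and tree both read from (and write to) one selection
// model, so navigating in either view updates the other.
type Listener = (path: string[]) => void;

class FolderSelection {
  private path: string[] = ["root"];
  private listeners: Listener[] = [];

  subscribe(fn: Listener): void {
    this.listeners.push(fn);
  }

  open(folder: string): void { // called by the grid or the tree
    this.path = [...this.path, folder];
    this.listeners.forEach((fn) => fn(this.path)); // both views refresh
  }
}

const selection = new FolderSelection();
selection.subscribe((p) => console.log("tree highlights:", p.join("/")));
selection.subscribe((p) => console.log("grid shows:", p.join("/")));
selection.open("Products"); // a grid click; the tree updates too
```

The prototype fakes this with linked screens rather than a real shared model, of course, but the principle is what I had overlooked.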

The UX team has a very nice spreadsheet it uses when conducting usability studies; I spent quite a bit of time adjusting it for my needs and filling it in with my scenario and tasks. It has a macro that lets the facilitator easily record time on task, which was helpful. It was also easier than I expected to take notes in a spreadsheet while administering a test. To that end, I conducted my first two tests this week; they both went well and took the right amount of time. There was only one failure of one task, and I think it was due more to issues with the prototype than with the system design: the user just didn't notice that the squares I was using to indicate checkboxes were checkboxes. Once I pointed this out, she had no problem completing the task. So I'm not sure if that really counts as a task failure or just a prototype failure.

These are supposed to be checkboxes, but they are a little too large!
On the plus side, both testers used the folder grid instead of the folder tree to drill down and select a content item; this is one of the major changes I'm suggesting to the interface so it was great to see people default to using it.

It was nice to start my testing with iSchool students and to be able to do in-person testing to work out more kinks, practice running through the tasks, and get better at reading everything aloud and then asking questions. (It made me wonder after the fact if I was supposed to get IRB approval, but since these tests are for a company and not really for UT, I think I'll be okay.) The rest of my tests will be administered remotely, which has all kinds of potential for problems. I ran through a technology test to try out WebEx and see what kind of lag I was getting. I think it worked out okay. I've included a short video of how the test will appear to me while the remote user has control of my screen. The audio quality is pretty terrible, so I will avoid using my cellphone for these tests.

Next week will be more testing!

02 March 2012

Week 7: Usability Test Pilot

This week I have spent my time preparing to begin usability testing. I created several more screens for the prototype earlier in the week and polished the testing task list after meeting with Robin and Tanya for feedback. Wednesday, I conducted a pilot test in person, but also shared it over Live Meeting to another laptop so that I could gauge quality and response times. The unfortunate thing is that since the video streams live over the Internet, there is some lag, so even if I am allowed to record my remote testing sessions, the recordings won't pick up everything. I have some concerns about being both facilitator and note taker during these tests, but it will be good experience. Below is an excerpt from the pilot test demonstrating what I thought was the most flawed part of my prototype:


Overall, the pilot went very, very well. It took just under 50 minutes, and the participant was able to complete all tasks. I got some great feedback about changing some of the wording and breaking one of the tasks into two. I also found some broken hotspots within the prototype. Most importantly, I discovered an entire interaction missing from the prototype, as demonstrated by the video. I went back and created a way for participants to use either the folder tree or the folder grid when attempting to select a single content item. Since I am proposing to let users navigate folders and channels using the folder grid, that needs to be functional from the beginning screen of this task; it will be valuable to see how many users choose the tree versus the grid when first presented with this picker layout. I can't believe I almost didn't include that! (Participants could hide the tree in order to use the grid, as I explain in the video, but that would not give me the testing data I need in the most accurate way.)
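
Since I specifically want to know which affordance participants reach for first, one way to capture that would be to record only the first navigation action per task. A small sketch of the idea, with invented names; in practice I will simply note it as the facilitator:

```typescript
// Sketch: log only the first navigation method a participant uses
// on the picker screen for each task.
type Method = "tree" | "grid";

const firstChoices: Method[] = [];
let recordedThisTask = false;

function recordNavigation(method: Method): void {
  if (!recordedThisTask) {
    firstChoices.push(method); // only the first action counts
    recordedThisTask = true;
  }
}

function startTask(): void {
  recordedThisTask = false; // reset between tasks
}
```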

My prototype is up to 116 screens :/ I just didn't appreciate the difference between doing true interaction design for a piece of software and wireframing for a website; I'm learning that these are two distinct tasks with their own separate needs, so I am glad for the experience. I might still make another tweak to my overall design based on the pilot: I haven't decided whether a single click on a folder in the tree should display the contents of that folder in the grid, or whether that should remain a double-click.
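
The single-click versus double-click question is mostly about whether a single click should commit to loading the grid. One common pattern, sketched below with an assumed delay value and invented names, defers the single-click action briefly so a second click can upgrade it to the double-click behavior:

```typescript
// Sketch: distinguish single click (highlight only) from double click
// (open in grid) by briefly deferring the single-click action.
const DOUBLE_CLICK_WINDOW_MS = 250; // assumption; tune to taste
let pending: ReturnType<typeof setTimeout> | null = null;

function onFolderClick(folder: string): void {
  if (pending !== null) {
    clearTimeout(pending); // second click arrived in time
    pending = null;
    openFolderInGrid(folder); // double-click behavior
  } else {
    pending = setTimeout(() => {
      pending = null;
      highlightFolder(folder); // single-click behavior
    }, DOUBLE_CLICK_WINDOW_MS);
  }
}

function highlightFolder(folder: string): void {
  console.log("highlight", folder);
}
function openFolderInGrid(folder: string): void {
  console.log("open in grid", folder);
}
```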

I've scheduled most of my test participants over the coming weeks, starting with two iSchool students next week followed by two OpenText employees the week after. Following that initial round of testing, I will make adjustments to the test plan and prototype before conducting tests with real OpenText customers the last week of March.