04 May 2012

Week 15: Grad School Ends

And that's it. I turned in my last paper the other day and just finished with our capstone poster session an hour and a half ago. No more schoolwork and I get my "life" back—except for that having-to-work part. I gave a presentation about my project to a great group of people at work and it sounds like my findings are at least getting people to talk about some of the issues with the interface again, even if they don't end up implementing my suggestions exactly as I laid them out. It's nice to know my work will go to some use. I really can't believe this is the end. Part of me is really glad to have it behind me and the other part doesn't know what I'll do now. I'm just looking forward to going on vacation in three weeks to Alaska and Canada.

Standing around for four hours waiting for people to come talk to you about your project while dressed up in "professional" clothes (and shoes) frankly sucks, but I did get five business cards and two solid job leads, so that was pretty cool. I had even made up my own business cards with just my information to hand out. Those will be good for other general networking events too. Really, my last, last day is 19 May, since that is graduation day with three separate events, but still, it feels great knowing I don't have to get up early and study anymore. So long, iSchool.

Capstone Poster [PDF]

Rachele presenting her capstone poster at the iSchool

20 April 2012

Week 14: Project Ends

I made it! Doing my capstone was the one thing about the iSchool program that interested and terrified me at the same time. I am so glad there is the "professional project" option, but since I work full-time, I was afraid there would be no good way to accomplish this requirement. Thankfully, I was able to get permission to do my capstone at my place of work and for a different department where I have a real interest in what they are doing. I still can't believe another 14 weeks have gone by and all that I have accomplished in the 125 hours of the project. What I have ended up with is not at all what I envisioned going in; it really is so much more: a more well-rounded and complete project than what I scoped out initially.

This week was spent tying up loose ends so that I could turn the project in. I finished getting the last bits of data (mostly participant quotes) into my presentation slides and then added the little touches like animated annotations to my designs and a nifty "play" button overlay for the three video clips I included. I also learned how to export my speaker notes so I can see them during the presentation, though I don't exactly like the way they work. It makes you print out one page per slide with notes below, whether there are notes for that slide or not. Well, I have 45 slides and many don't have notes, so I will find a way to remove the blank pages at least!

I tried to keep my slides clean, interesting, and light on words, the exception being the slides with participant quotes, because I do plan to read those word for word and want the audience to be able to follow along. I decided at the last minute to move the quantitative scorecard from the end of the Results section to the beginning, my reasoning being that seeing the snapshot overview first and then going into detail on each of the key metrics would probably hold their attention better. I know if I just saw chart after chart of data without knowing what it was leading up to, I would get bored. At least this way they know the final scores first and then are shown a breakdown of the data.

That being said, here's a PDF version (sans videos) of my final project deliverable: WEM v.8.2 Pickers

There's a neat feature when you export to PDF from PowerPoint where it can also export the speaker notes into the PDF as a layer that can be toggled on and off, so my speaker notes are also included.

Well, this certainly isn't the "end" quite yet; there is still the matter of my capstone poster, which I will have ready next week, as well as presenting that poster in two weeks. It all feels a little surreal right now. I've been having sleep issues this week even though I am not worried about getting all my work done or anything; I think it's just that change is impending. I'll no longer be a student in a few weeks and I'll have all this free time again. The first thing I'll be doing after graduating is going back to Alaska for another road trip, but beyond that is a great unknown. Maybe I'm afraid nothing will actually change.

Me at the Arctic Circle in Alaska, May 2012

13 April 2012

Week 13: Presentation Work

I'd like to start out by saying I got an actual shiver of excitement this morning when I realized that three weeks from today, I will be DONE with school. It has been nearly four years since I first started getting stuff together to apply to grad school and studying for the GRE. At times, it felt like this was never going to happen, that I was never going to finish. I still don't know what I'm going to do next but at least it will be a new chapter.

So this week I have been preparing my final deliverable: a PowerPoint presentation on my findings. I have had to remind myself that unlike what I write about here and what my Capstone poster will contain, this presentation is not so much about my project as it is about the product I was trying to improve. While there will be some overlap between the contents of my presentation and my poster, they are for different audiences and have different purposes. I have been trying to work on both somewhat in parallel—as far as getting the information together that I want to use—but I have to be very aware that I am telling two stories.

The presentation is going to be more focused on the usability study: the tasks run, comments and video clips from the tasks, and the results of testing. I think OT is more interested in whether the design suggestions I came up with are worth pursuing in the next version of the product. My poster will be more concerned with the overall project and will likely focus more on the design decisions I made, with metrics from the study being presented only as a snapshot of the usability portion. I started out by creating an outline of what I wanted to put into my presentation:
  1. Introduction
  2. Usability Concerns
    1. Picker models are inconsistent
    2. Selecting multiple objects is confusing
    3. Icon for removing selections is unclear
    4. Channel selection model is counterintuitive
  3. Designs
    1. Single Container Picker
    2. Multiple Containers Picker
    3. Single Content Item Picker
    4. Multiple Content Items Picker
  4. Evaluations
    1. Participants
    2. Tools
    3. Testing
      1. Comments/Videos for each task (9)
  5. Results
    1. Satisfaction (SUS)
    2. Effectiveness
    3. SEQ
    4. Efficiency
    5. Time on Task
    6. Appearance
    7. Scorecard
  6. Recommendations
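
For reference, the SUS score listed under Results comes from the standard 10-item questionnaire, and its scoring is well documented. A quick sketch (the responses below are invented for illustration):

```python
# Standard SUS scoring: odd-numbered items contribute (response - 1),
# even-numbered items contribute (5 - response); the sum is scaled by
# 2.5 to give a 0-100 score. The responses below are invented.
responses = [4, 2, 5, 1, 4, 2, 5, 1, 4, 2]  # items 1-10, each rated 1-5

score = 2.5 * sum(
    (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
    for i, r in enumerate(responses)
)
print(score)  # 85.0
```
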
I have tried to compile all this information into my document, filling out the outline as it were, so that next week I can focus on making sure things are in the right order, adding in any animations I might need, and so forth. It has been more work than I thought it would be. That is, I think creating a polished, interesting presentation is harder than writing up a formal report. Part of what has been time-consuming is going back through the recordings of my testing sessions to pull quotes and create short video clips (I'm only including three). I think the clips will be especially helpful for demonstrating possible problem areas.

I am also still grappling with some of the terminology and meanings around the statistical data. I don't think I need to understand the ins and outs of every calculation, but I would at least like to be able to speak to the major figures I am presenting. I read up on error bars and geometric means, and Tanya loaned me a book, Measuring the User Experience, that I found helpful for a couple of reasons. I was struggling to understand what "efficiency" is measuring and how it is calculated, since it takes into account only successes, time on task, and benchmark times set by an expert; the book gave me this explanation: the core measure of efficiency is the ratio of the task completion rate to the mean time per task. Ah! I can at least tell other people that.
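
That ratio is simple enough to sketch with made-up numbers (the success flags, times, and the choice to average only successful attempts are all my assumptions here, not values from my study):

```python
# Sketch of the "efficiency" measure described above: the ratio of the
# task completion rate to the mean time on task. All numbers invented.
results = [True, True, True, False, True, True]  # success per participant
times = [42.0, 55.0, 38.0, 90.0, 47.0, 51.0]     # seconds on task

completion_rate = sum(results) / len(results)

# Mean time is taken over successful attempts only, since failed
# attempts don't represent completed work (one reasonable choice).
success_times = [t for t, ok in zip(times, results) if ok]
mean_time = sum(success_times) / len(success_times)

efficiency = completion_rate / mean_time  # completions per second
print(f"{completion_rate:.0%} completion, mean {mean_time:.1f}s on task")
```
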

About 3am Wednesday morning, I woke up with an idea for how to lay out my poster.

My half-asleep sketch of a possible Capstone poster layout

There is just so much information I could include that I need to be careful not to overload the space. My biggest concern is being able to adequately demonstrate my design ideas since I have four permutations.  I think I might end up showing just the two multiple picker designs and indicating that the checkboxes would not be used for the single picker designs. We'll see.

06 April 2012

Week 12: Reviewing the Data

This week, I've been moving data from the spreadsheet used to capture notes during testing into another that compiles further statistics based on these data.  It assesses four measures:
  1. results of the system usability scale
  2. effectiveness of tasks
  3. efficiency of tasks
  4. results of the appearance scale
It then takes these four measures and averages them for an overall benchmark score.  It also calculates the mean for responses to the single-ease question (SEQ) asked after every task: Overall, on a scale from 1 to 7 where 1 is Very Difficult and 7 is Very Easy, this task was… I looked up the significance of the SEQ because I wasn't really sure what it was supposed to measure and I found this explanation:
Was a task difficult or easy to complete? Performance metrics are important to collect when improving usability but perception matters just as much. Asking a user to respond to a questionnaire immediately after attempting a task provides a simple and reliable way of measuring task-performance satisfaction. Questionnaires administered at the end of a test such as SUS, measure perception satisfaction. [1]
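
As I understand the spreadsheet, the summary arithmetic itself is straightforward: average the four percentage measures into the benchmark, and average the SEQ responses per task. A sketch with invented numbers:

```python
# Hypothetical illustration of the summary arithmetic: four measures
# (each a percentage) averaged into one overall benchmark score, plus
# the mean of the per-task SEQ ratings. All numbers invented.
sus_score = 81.0      # System Usability Scale result
effectiveness = 98.0  # task success rate
efficiency = 74.0     # completion rate vs. expert benchmark times
appearance = 76.0     # appearance scale result

benchmark = (sus_score + effectiveness + efficiency + appearance) / 4

seq_responses = [6, 7, 5, 6, 7, 6]  # 1 = Very Difficult, 7 = Very Easy
seq_mean = sum(seq_responses) / len(seq_responses)

print(f"benchmark = {benchmark:.2f}, SEQ mean = {seq_mean:.2f}")
```
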
I'll admit, going into testing I was really only concerned with whether participants succeeded or failed at the tasks I created.  And I was feeling pretty good that there were only two instances of failure among the 84 possibilities (14 separate measures for each of six participants). That resulted in a 98% effectiveness rating, but alas, that tells only part of the story.  The measure that is most suspect in my opinion is the efficiency rating. I spoke with Tanya about this and really there is no hard and fast rule for this one. What the UX team typically does is time themselves doing each task at a moderate pace and use those times as "expert" benchmarks, comparing them against each participant's time on task, counting only the tasks that ended in success.  Pretty much, if one participant struggles a bit or is exploring the interface and trying to figure out what to do, it can really throw off the overall times. I was dismayed to see the mean efficiency of these tasks was only 74%, which falls short of the standard 80% that we strive for on any metric.

But, the overall benchmark score for my study squeaked in at 81, which I guess means my designs were an improvement.  One of the big issues we were trying to address with this project was making it easier/clearer for users to add channels to a content item. In WEM 8.0 summative testing, only 40% of customers and 44% of non-current customers were able to do this.  In my study, 100% of customers and 67% of non-customers were able to add channels without any prompting, for an overall score of 83%.  The one non-customer who failed this task said it was likely an issue with the prototype because she just didn't notice that the "square" icons were supposed to be checkboxes.  So here is the overall scorecard for this round of testing:

WEM v8.2 Pickers scorecard after one round of usability testing
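
One sanity check on the 83% figure: it's just the pooled success rate across both groups. The group sizes below (three customers, three non-customers) are my assumption for illustration; only the percentages come from the study.

```python
# Pooled success rate for the channel task. Group sizes are assumed
# (three customers, three non-customers); the percentages match the
# reported 100% / 67% split.
customer_successes, customers = 3, 3        # 100% of customers
noncustomer_successes, noncustomers = 2, 3  # 67% of non-customers

overall = (customer_successes + noncustomer_successes) / (customers + noncustomers)
print(f"{overall:.0%}")  # 83%
```
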

The scorecard is a PowerPoint slide and I got to learn how to link data from an Excel spreadsheet into PowerPoint.  The only issue was a very confusing error message from PowerPoint saying it could not update the data because it couldn't find the linked file, and I had to do a Google search to find where I needed to update this file path.  I never would have found this on my own.

From the Home button, go to Prepare and then Edit Links to Files
Next week I'll start interpreting these data and trying to weave an interesting story into a PowerPoint presentation that I'll eventually give to internal stakeholders.  I plan to use some video clips from the tests in addition to just throwing out a bunch of statistics.

Footnotes

[1] Sauro, J. If you could only ask one question, use this one.

30 March 2012

Week 11: Usability Testing Ends!

Hurray! I have survived. I finished up my usability testing this week with one current OT customer and with a former OT employee who used to use an older version of the system. Unfortunately, I never heard back from the other customers I reached out to, but I think getting someone who had used the system in the past was still just as good. These two tests were very different in feedback and flow. The current customer is a developer, not so much an end-user of the GUI, so she had a lot to say about other aspects of the product beyond just what I was testing. I had to be diligent about capturing her other feedback for the team that extended beyond being able to choose objects in the system. She lamented all the clicking, even though she was able to perform each task, and really wanted to be able to drag stuff around, like dragging a channel over to the "My Selections" area instead of clicking checkboxes. I saved an excerpt of the recording, which I will use in my presentation, where she is performing task six and complains about the clicking, saying I was torturing her by making her do that (all in good fun, though).

The former employee did not have much to say in comparison. She mostly said things seemed simple enough. What I found interesting, though, is that she had a real problem with task eight, where I ask users to try to navigate using just the folder grid with the tree hidden. She got very frustrated that she could not see where to go or what to do and it was SO HARD for me to sit there and watch her struggle. I was worried she was going to get mad at me for not stepping in. In reality, it only lasted about two minutes, but it felt much longer. She did eventually discover the "up" button and was able to complete the task, though I marked this as a failure simply because she was ready to give up long before and I think would have in a normal use situation—at least she would have used another method. In fact, she tried going back and looking at the tree, and it was on that screen where she first noticed the "up" button for folder navigation, but the button was not active for that screen. I was most surprised that she struggled with this task since, in the version of the system she used, that is how navigation worked. There was no folder tree to show hierarchy, but there was a breadcrumb. It has been interesting to see just how much having a semi-working prototype influences the testing. I'd be interested in running a test with just printed-out pieces of paper, having the users explain to me what they would like to have happen.

I think overall that the testing portion of my project went very well. It was a new experience to do everything myself: coming up with the design, writing the test plan, administering the tests, and acting as both facilitator and note taker at the same time. The spreadsheet really helps and allowed me to capture time on task, success or failure of tasks, difficulty ratings, and general comments very easily. There are only three weeks left in my project and I will use the remaining time to write up a "formal" report of my findings, basically translating all the boring data into a story with my designs and recommendations. If this small sample tells me anything, it is that I was able to come up with some real improvements over what is in the current system. I'm ready for all the questions I'm sure to get from the business stakeholders about the results.

I want to leave you, the reader, with something fun or interesting each time so here's a short clip of a friend of mine climbing a lamppost to replace a geocache called Climb #6. This cache has a terrain rating of 4.5/5. I have only personally done a 4 so far.

23 March 2012

Week 10: Milestone 2

Let me just start with a usability rant.  I had nearly completed this post when I accidentally clicked the browser back button with my mouse and lost the entire thing. Unlike other Google products, for some reason Blogger didn't auto save correctly. #fail

Today technically marks milestone two of my project, but there was no set deliverable.  I had originally planned to be done with my user testing but decided instead to build in a week's buffer between testing with students and employees and testing with customers so that I would have a chance to tweak the prototype for any major issues discovered.  I met with Robin and Tanya on Monday to debrief them about the user tests I have completed so far and to get clarification on what the final deliverable will be.  I will be creating a usability study report using a standard template and then also creating a PowerPoint deck that I will present to the User Experience team and also the WEM product managers, showing them my findings and suggestions.  While the presentation is not strictly part of the project requirements and I might end up doing it after the actual project end date, I think it will be good experience to talk through my project before the Capstone poster session.

I also asked Tanya about these strange, complicated formulas I noticed at the bottom of each spreadsheet tab in the usability test template and she told me they were calculations of the confidence interval for each task.  She said she doesn't bother to calculate it every time and I told her I would look into it.  I spent some time researching this and quickly got frustrated as statistics and I do not get along (I was really worried about taking the Intro to Research Methods course).  What I could glean was that the confidence interval is the same as the margin of error for a sample and also that it doesn't matter as much for small sample sizes.  I just couldn't figure out how to calculate it even if I wanted to.  At least Excel has a formula for standard deviation!
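
For what it's worth, I later found one approach that seems tractable: for the small samples typical of usability tests, an adjusted Wald (Agresti-Coull) interval around a task's success rate is a commonly recommended method. This is my reading of that method, not the formula from the team's template, and the counts are invented:

```python
import math

# Sketch of an adjusted Wald (Agresti-Coull) confidence interval for a
# task success rate, often recommended for small usability samples.
# The counts here are invented.
successes, n = 5, 6
z = 1.96  # ~95% confidence

# Adjust by adding z^2/2 successes and z^2 trials, then use the normal
# approximation around the adjusted proportion.
p_adj = (successes + z**2 / 2) / (n + z**2)
margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z**2))

low, high = max(0.0, p_adj - margin), min(1.0, p_adj + margin)
print(f"95% CI: {low:.0%} to {high:.0%}")
```

The interval comes out wide (roughly 42% to 99% for 5 of 6 successes), which is the honest picture for a six-person sample.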

I learned a valuable lesson this week about scheduling usability test participants: always schedule more than you think you need, or have alternates on standby.  Tuesday, I had one of my customer participants cancel on me for next week.  I have since emailed two more customers but haven't received responses.  The good news is that it is not necessary that I use customers for my tests, but it would have been nice.  If I don't get responses by Monday, I will just ask someone else to participate.  I know several former employees who have had experience with older versions of the product.  Also, the two current employees I already used for testing are actually considered customers, just internal customers, since they use OT products on a daily basis.

I spent a lot of this week adding functionality to the prototype in anticipation of testing with customers.  The first thing I did was to make the "add folder" and "add channel" buttons appear active all the time, not just when a user has a parent channel highlighted.  While I think this is the way these actions should work, I think it might complicate the test since the prototype only allows the user to actually add a new folder (task 2) when the correct parent folder is highlighted—there are just too many possible variations of this task to code it correctly.  The other change I made was to allow users to "uncheck" the boxes for channel selections they made during task 3 when I ask them to clear all the channel associations in task 5.  Before, they had to find and use the "remove all" button, but I noticed some participants really wanted to undo the checkboxes instead.

This week, I leave you with another video, this time of a pseudo-usability test of Windows 8 that was linked in the latest version of the user experience newsletter I've been following for years, Good Experience.  I haven't actually seen Windows 8 for myself, but this video doesn't give me a lot of confidence.


16 March 2012

Week 9: More Usability Testing

I decided to work through spring break so that I could finish up this project sooner and have more time to work on my poster.  This week, I conducted two more usability tests, this time with internal OT employees who have a lot of experience with another content management system but have only taken the training for WEM (our team is actually in the process of setting up a new instance of WEM so that they can start using it for part of the corporate website).  These are people I have worked with a little, remotely, and only met once in December, so it was still comfortable because they weren't total strangers like my next round of participants will be. During the first test, my WebEx account wasn't working, so I got to experience troubleshooting technical difficulties on my feet. Thankfully, I had a backup plan and we used the free screen-sharing tool join.me.

One of the more interesting observations from my first participant this week was his confusion about "where he was in the system" when he added a new content item and then again when I asked him to add a new folder. It was almost like the OS metaphor went too far because he thought he was somewhere within the folder structure when actually he was outside of it, like when you're saving a new Word document and you have to choose where to save it. I wonder if, in the CMS the Marketing team is currently using, they can only create content while inside a folder. You certainly have that option in WEM too, but it's not required.  He also really wanted the add folder (and add channel) options to be available from contextual right-click menus. He really did not like the button placement for these options. Based on this feedback, I do think the add folder and add channel buttons should always be active (right now the buttons become enabled only if the user has highlighted an existing folder or channel, and so the new container would be created as a child). Since there is an option to choose the placement of the new container from within the creation screen, I see no reason why the add option should not always be available; this would also allow users to create new containers at the root level.

Nothing terribly exciting happened this week and I don't have any new designs or demonstrations, so instead I leave you with a song.  I first heard this three years ago during my first semester of grad school and I listened to it over and over on the Friday during spring break as I tried to write a paper for the Understanding and Serving Users course. I hope you enjoy Okkervil River, "Unless It's Kicks."


09 March 2012

Week 8: Usability Testing Begins

It's amazing, really, how much work goes into usability testing. Each test takes about an hour but there are many more hours that go into finding participants, prepping materials, testing technologies, reviewing the test afterward and evaluating findings just to name a few things. This week, I spent some time further refining the test plan and prototype. I realized that when a participant clicks through the folder grid, the folder tree should update at the same time; it's funny the things you overlook when you have been super involved in a project. I think this is part of the reason why developers don't normally do the testing too—they are too close to the project. I think I am definitely missing things and probably biasing my test results just due to the fact that I also created the prototype.

The UX team has a very nice spreadsheet it uses when conducting usability studies that I had to spend quite a bit of time adjusting for my needs, filling it out with my scenario and tasks. It has a macro that allows the facilitator to easily record time on task, which was helpful. It was also easier to take notes in a spreadsheet while administering a test than I thought it would be. To that end, I conducted my first two tests this week and they both went well and took the right amount of time. There was only one failure of one task, and I think it was due more to issues with the prototype than with the system design; the user just didn't notice that the squares I was using to indicate checkboxes were checkboxes. Once I pointed this out, she had no problem completing the task.  So I'm not sure if that really counts as a task failure or just a prototype failure.

These are supposed to be checkboxes, but they are a little too large!
On the plus side, both testers used the folder grid instead of the folder tree to drill down and select a content item; this is one of the major changes I'm suggesting to the interface so it was great to see people default to using it.

It was nice to start my testing with iSchool students and to be able to do in-person testing to work out more kinks, practice running through the tasks, and get better at reading everything aloud and then asking questions. (It made me wonder after the fact if I was supposed to get IRB approval, but since these tests are for a company and not really UT, I think I'll be okay.)  The rest of my tests will be administered remotely, which has all kinds of potential for problems.  I ran through a technology test to try out WebEx and see what kind of lag I was getting.  I think it worked out okay. I've included a short video of how the test will appear to me while the remote user has control of my screen. The audio quality is pretty terrible, so I will avoid using my cellphone for these tests.




Next week will be more testing!

02 March 2012

Week 7: Usability Test Pilot

This week I have spent my time preparing to begin usability testing.  I created several more screens for the prototype earlier in the week and polished the testing task list after meeting with Robin and Tanya to get some feedback.  Wednesday, I conducted a pilot test in person but also shared it over Live Meeting to another laptop so that I could gauge quality and response times.  The unfortunate thing is that since it's video streaming live over the Internet, there is some lag, so even if I am allowed to record my remote testing sessions, they won't pick up everything.  I have some concerns about being both facilitator and note taker during these tests but it will be good experience.  Below is an excerpt from the pilot test demonstrating what I thought was the most flawed part of my prototype:


Overall the pilot went very, very well. It took just under 50 minutes and the participant was able to complete all tasks. I got some great feedback about changing some of the wording and breaking up one of the tasks into two tasks. I found some broken hotspots within the prototype as well. Most importantly, I discovered an entire interaction missing from the prototype, as demonstrated by the video. I went back and created a way for participants to be able to use either the folder tree or the folder grid when attempting to select a single content item. Since I proposed to allow users to navigate folders and channels using the folder grid, that needs to be functional from the beginning screen of this task—it will be valuable to see how many users choose to use the tree versus the grid upon first being presented with this picker layout. I can't believe I almost didn't include that! (Participants could hide the tree in order to use the grid as I explain in the video, but that will not give me the testing data I need in the most accurate way.)

My prototype is up to 116 screens :/ I just didn't realize the differences in doing true interaction design for a piece of software versus wireframing for a website—I'm learning that these are two unique tasks with their own separate needs, so I am glad for the experience. I still might make another tweak to my overall design based on the pilot. I haven't decided whether a single click on a folder in the tree should display the contents of that folder in the grid, or if that should remain a double-click.

I've scheduled most of my test participants over the coming weeks, starting with two iSchool students next week followed by two OpenText employees the week after. Following that initial round of testing, I will make adjustments to the test plan and prototype before conducting tests with real OpenText customers the last week of March.

24 February 2012

Week 6: Writing a Usability Test Plan

On Monday, Robin and I met with Tanya, who does most of the usability testing for the User Experience team.  I quickly showed her the four picker models I've been working on and got a little feedback about getting these user tested.  She said they look fine and wanted to know if we'd like to use customers as test participants.  We agreed that having two customers would be good so that we could get some "real users" to look at these designs, but I'll also be using two people from the iSchool and two internal employees—one who has been using another content management product for years but will be switching over to WEM and another who has used older versions of WEM with a totally different interface.

So this week I have been focused on creating a usability test plan based on my initial designs.  I've been attempting to break them up into measurable tasks and constructing a scenario as a common thread through the test.  Tanya sent me the template they usually use and I have been creating my own based on that in combination with some of the info from the usability testing book I've been reading.  Additionally, I've been figuring out the technology needed for both recording screen captures of the tests and broadcasting them to remote observers.  Internally, we use Camtasia for recording and Microsoft Live Meeting for online sharing.  I was able to get a Camtasia license and a Live Meeting account, so I have been playing around and trying to learn how they work.

To demonstrate the screen capture and to show an example of how my prototype functions, I created this short video:


The most irritating issue I'm dealing with at the moment is that Balsamiq isn't a prototyping tool, as I mentioned last week.  It is really intended for rapid wireframing of new concepts, not for creating elaborate, clickable mock-ups to demonstrate complex functionality.  I had only used Balsamiq previously for wireframing, not interaction design.  To create prototypes for websites, I would use at least some HTML and run them in a web browser. But, I think that would actually be more difficult for demonstrating a software interface so I am making my way carefully with Balsamiq.

I'm up to 50 different screens for my designs, as this sort of prototyping is like painfully slow, single-frame animation.  I have to make sure all my links between frames are correct and provide the necessary functionality to replicate something of a working interface.  There is another product called Flairbuilder that can import data files created with Balsamiq (using the .bmml file extension) that I would look into if I need to do more of this kind of prototyping in the future.  For now, I still have a lot of rework to do on the designs to get them in line with my usability test tasks.

17 February 2012

Week 5: Milestone 1

Wow, a third of the way through the project already and up to my first milestone.  This week I was completely heads down with interaction design.  I kept pretty thorough notes based on each use case with a checklist of modifications I wanted to make to each picker type.  I spent some time at the end of last week and the beginning of this week re-familiarizing myself with Balsamiq and how everything works.  What I really like about that tool is how easy it is to move and place your symbols on the canvas.  Since you can directly manipulate x, y coordinates and width/height, it is super simple to get everything to line up from screen to screen.

Now, as it turns out, Balsamiq isn't a great prototyping tool—it is meant for rapid storyboarding and wireframing.  Even though it does have options for easily linking mock-ups together in a flow, it is not great at organizing dozens of screens per project and breaks down quickly for complicated interactions.  For my purposes, though, it works well enough.  It doesn't have a way to show hover states, tooltips or single versus double-click, but that isn't necessary for what I'm trying to do here.  Also, I came up with a naming convention for my files to keep the four use cases separate within a single project which at the moment is 31 different screens.
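The naming convention itself isn't the interesting part; what matters is that any consistent use-case prefix lets you sort and group the screens mechanically.  A small sketch of the idea (the prefix scheme shown is hypothetical, not my exact convention):

```python
from collections import defaultdict
from pathlib import Path

def group_by_use_case(filenames):
    """Group mockup filenames by their use-case prefix, e.g. files
    named 'uc1-folder-step2.bmml' all land under the key 'uc1'."""
    groups = defaultdict(list)
    for name in filenames:
        # Everything before the first hyphen in the stem is the prefix.
        prefix = Path(name).stem.split("-", 1)[0]
        groups[prefix].append(name)
    return dict(groups)
```

Keeping 31 screens straight inside one project is a lot easier when the file list naturally clusters by use case.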

My favorite new feature that I've implemented is for the single and multi content item pickers.  I had decided early on that users should be able to "drill down" through Folders and Channels to select content from within the grid view; in the current system, they must use the tree to navigate the hierarchy.  In addition to adding the drill down option, I decided to also allow the user to hide the tree altogether and widen the grid view, relying solely on drill down.

screen shot of multiple content item picker with tree and grid navigation
Multiple content item picker with tree and grid

screen shot of multiple content item picker with tree hidden, expanded grid size
Multiple content item picker with tree hidden (grid only)

Robin and I met yesterday to review my first round of designs.  I got some good feedback and need to do a little cleanup to polish the prototypes for testing.  We're going to meet with Tanya on Monday to discuss user testing.  I'll be coming up with user tasks and probably writing the test script so I started reading Steve Krug's Rocket Surgery Made Easy yesterday; it is a how-to for discount usability testing.  I read his first book, Don't Make Me Think, a couple of years ago and have wanted to read this one too.  I think it will be a helpful refresher.

10 February 2012

Week 4: The Users Speak and Design Commences

This week, I had the chance to speak with a group of users and observe them using the 8.0 version of WEM. This is a company that has been using the Vignette Content Management product for many years and is still experiencing some growing pains while transitioning to the updated user interface of WEM. The person doing most of the "driving" during the screen share session was a guy who had not used the new interface much and not at all for some of the tasks. He still uses the "console" interface, which admins can still access, instead of the new "workspaces" model, aimed mostly at end users.

It was an interesting and terrifying experience to speak with customers live like that. I wasn't alone either; in the room with me were three other members of the UX team, including the team lead. The customers were very nice and very vocal, quick to voice their opinions about all aspects of the UI, not just the picker tasks I was asking them to perform. We got a lot of good feedback on other issues, like the tree nodes indicating that there are child nodes when there aren't; direction on how they like to seek, search and browse for content in the system; and concerns with changes that were made to the preview site menus and controls.

Most of what they said about the pickers covered known issues, but it was still valuable for me to see people struggling with some of the UI inconsistencies.  Still, after a little practice, the user seemed to perform repetitive tasks pretty easily, like remembering that you have to stop one node higher in the tree than the channel or folder you'd like to select.  One UI suggestion he made had to do with adding content items to the "Your Selections" pane; he said that instead of using the "Add to Selections" button, he thought having an arrow pointing from the grid to the selections pane that the user would need to click to "move" the selected items might be better.  I will keep this in mind in case my design idea doesn't test well.

I finished up my assessment of the four picker types this week and moved into my initial design phase. The first thing Robin asked me to do, though, was to write up scenarios for what I intended to design in order to make it clear what problem I was trying to solve. I wrote all my use cases based on the "Kristen" persona that OpenText uses:
Kristen, a content contributor, for whom web content management is not the focus of her job.  She enters content infrequently and needs the process to be easy and intuitive. She is not very computer savvy and is an expert in an area unrelated to the website like a nurse, HR administrator or legal assistant.
These are the four use cases I will be designing for:
  1. Selecting a Folder for a New Content Item
  2. Selecting Channels for a New Content Item
  3. Adding a Content Item to a Content Component
  4. Adding Related Links to a Content Item
I started with use case one:

Kristen needs to enter a new article into the content management system. After entering her content, she wants an easy and consistent way to select a Folder for her article. She doesn’t want to see every Folder in the system, only those she has access to where she might want to save her content items. She thinks it might be useful if she could search for Folder names instead of having to drill down to select them. She thinks it would be a neat feature if she could create a new Folder if she can’t find one that is appropriate for her article. She needs a clear indication of which Folder she has selected and clear direction for how to save her selection.

Below are screen shots of the current design and then my first low fidelity mock-up:

WEM 8.1 folder picker

My initial folder picker design
Some of my design changes include:
  • Moving the "selection" pane from the bottom to the right to be consistent with content item selectors
  • Removing the "browse" pane and expanding the "tree" pane to accommodate deep hierarchies with minimal sideways scrolling
  • Removing the "All Folders" root element to lessen visual clutter and to eliminate the possibility of collapsing the entire tree
  • Removing the "plus" icons from tree nodes that do not contain child nodes
I purposely use "sketchy" design widgets (courtesy of Balsamiq Mockups) and minimal color so that people viewing these designs do not get distracted by look and feel elements and can concentrate on functionality. For the final set of mock-ups, I will probably create high fidelity versions that use the WEM interface design.

My first milestone is next Friday at which point I should have mock-ups created for each of the four use cases above.

03 February 2012

Week 3: Deep Dive

This week has been busy as I try to go deeper into the problem while narrowing my focus. At our weekly meeting, I showed Robin the use cases I was working on as well as the design examples of other pickers that I collected. She was quite interested in the examples and said that research will be good to keep. She asked that I look at creating four different design scenarios with a UI approach that makes each of the pickers nice to use but also relates them to one another:
  1. Single content item picker
  2. Single container picker
  3. Multiple content items picker
  4. Multiple containers picker
She also asked that I look at making the process of picking flow from left to right in all instances, ending with the user clicking the "OK" button to commit the selections. I started trying to think about what these have in common so that I can abstract one process, one flow:
  • User needs to select something
  • User needs to know what s/he has selected
  • This needs to be easy and similar regardless of what is being selected and regardless of how many items are being selected
  • What are the various states? What does it look like when something has been selected?  What explicit notifications should the system give the user?
I started with exploring how the existing multiple content item picker works in detail. I think it is important to utilize and maintain existing controls and concepts so that the user doesn't have to think too much about what s/he is doing. I kept selecting items, one, then many, over and over, seeing what happens if I click on a row versus a checkbox, multiple checkboxes then a row, etc. It was easy to see how this paradigm of selecting items becomes transparent when implemented well. Where the current picker system seems to break down most is that the act of selecting an item doesn't really select it; the user must then click the "Add to Selections" button, then click "OK" to commit the selections.

The current multi-item content picker doesn't have a clear flow.
I think if the act of clicking a row or checkbox automatically adds the item to the list of selections and if confirmation of that action is made obvious through system notifications, this will become much easier to use. I came up with over a dozen changes I would make to this process flow alone, though many should abstract to a new general picker model. Over the next few days, I will perform the same analysis on the other three picker types.
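To make that idea concrete, here is a toy model of the flow I'm proposing (plain Python with invented names for illustration; this is not the WEM code, just the behavior): clicking a row toggles the selection immediately and posts a notification, and "OK" is the only explicit commit step.

```python
class Picker:
    """Minimal sketch of a picker where a click selects immediately,
    with no separate 'Add to Selections' button."""

    def __init__(self, multi=True):
        self.multi = multi          # single- vs multi-select mode
        self.selections = []
        self.notifications = []     # stand-in for on-screen confirmations

    def click(self, item):
        if item in self.selections:
            # Clicking a selected row deselects it.
            self.selections.remove(item)
            self.notifications.append(f"Removed {item}")
        else:
            if not self.multi:
                # Single pickers replace the previous choice.
                self.selections.clear()
            self.selections.append(item)
            self.notifications.append(f"Added {item} to Your Selections")

    def ok(self):
        """Commit and return the selections -- the one explicit step."""
        return list(self.selections)
```

The same model covers both single and multi pickers, which is exactly the consistency the current interface lacks.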

This week, I also had a chance to hear what real, external customers think about the 8.0 UI. I was able to sit in on a customer gripe session with the WEM product manager and Tanya Payne, Senior User Experience Designer, who led the session (and who coincidentally attended the same PhD program at the University of New Mexico as my dad in the late '80s). She's hoping to set up a more technical session next week where we'll have a chance to discuss the pickers and their issues with these same customers in great detail. I am really looking forward to hearing some additional perspectives on this problem and, I hope, will get to see them actually using the pickers.

On a personal note, I am finding this project really puts me outside my comfort zone. I am anxious a lot and find myself waking up thinking about pickers! This type of project is hard for me for a couple of reasons, the first being that it is really undefined in a sense. I have to be creative and try to solve a problem that doesn't exactly have a right answer and that is very uncomfortable for me because I am used to doing very precise, specific tasks. The second reason is that the project is so spread out and it is hard to focus on it for only a few hours at a time amidst my other class and the three major work projects I'm engaged with too. The creative process is just different from most of the work I'm used to and it probably doesn't help that I have a low tolerance for ambiguity. I think as the project becomes more refined over the next couple of weeks that I will start to feel more comfortable. My first milestone is at the end of week five when I should have a preliminary design ready for review.  By the second milestone at week 10, Robin is hoping to get my design user tested so that by the end of the project, I have had a chance to refine it and make changes based on outside user input.

27 January 2012

Week 2: Use Cases and More Research

This week, I started looking around the product interface and getting familiar with how this version differs from the older versions I've used.  I'm in somewhat of a unique position to be working on this product because I am also a "customer"—that is, I have been using this particular content management system for several years but never a production instance with the 8.0 UI.

I did two types of research this week. First, within the product, I tried to find the places where the picker framework was clearly in use and then write up some use cases for myself which I will discuss with Robin in our weekly meeting on Monday to make sure I'm on the right track.  I divided up these use cases into two categories, single item pickers and multi-item pickers, and found six areas within the console where these are in use.

Single:
  • User wants to create a new content item and has to choose a destination folder for that content item
  • User wants to move the location of a content item and is asked to choose a destination folder
  • User wants to create a new content category and must select a parent category to place it into
  • User wants to create a new "Quick Action" and must select a content type
Multi:
  • User wants to assign one or more existing content items to one or more channels
  • User wants to share one or more existing content items to one or more sites
As I found each of these, I took screen captures of the existing pickers and started playing around with modifying them.  What I realized is that I am not convinced there needs to be a singular picker metaphor for all pickers, but I think the way the pickers function should be consistent between all single and all multi-select pickers.  It looks like in many instances, the pickers are trying to emulate the tree and grid patterns used in other parts of the interface meant for browsing and that is likely causing visual clutter and confusion.

Content type picker with some issues identified

My initial redesign of the content type picker

Second, I extended my research this week by looking closely at many of the applications I use on a daily basis and noting how each treats the action of picking items.  I looked at MS Outlook 2010, Google Documents, the old version of WEM (VCM 7.3.1), Content Server 10 (another CMS used/sold by OpenText) and Changepoint.  Of all of these, I really liked the simplicity, speed and minimal design of Google Documents.  Of course, it is not a CMS nor meant to categorize and publish large amounts of content, but I did find some useful features and repeatable patterns.  For example, I think WEM should look into supporting drag and drop to move content items and folders around as well as shift-click to select multiples.  I am also a fan of having checkboxes within a tree view for when the user needs to be able to both browse and select multiple folders (the old version of WEM used this pattern for channel selection).
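The shift-click behavior is simple to specify: select the contiguous run of rows between the last clicked row (the anchor) and the shift-clicked row.  A quick sketch, assuming rows are identified by their position in the visible list:

```python
def shift_click_range(rows, anchor, clicked):
    """Return the contiguous run of rows between the anchor row and
    the shift-clicked row, inclusive, in display order."""
    i, j = rows.index(anchor), rows.index(clicked)
    lo, hi = (i, j) if i <= j else (j, i)
    return rows[lo:hi + 1]
```

Note the range is the same whether the user shift-clicks above or below the anchor, which is the convention the desktop applications I surveyed all follow.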

Google Docs "Organize" tree

20 January 2012

Week 1: Background Research

This week has been all about getting up to speed and homing in on exactly what I'll be doing for this project.  My field supervisor, Robin Silberling, provided me a tailored overview late last week of what the UE team would like me to explore over the next 15 weeks.  I'll be working on a redesign of a very sticky and inconsistent piece of the Web Experience Management (WEM) interface called the "picker." Pickers are one of the defined frameworks used in the WEM product to allow users to move items around and make selections.

Example of a multi-select picker in version 8.1

I reviewed the results of summative testing done on the WEM product, and problems with the picker interface were identified as two of the top 10 items needing to be addressed in the next version.  Only 40% of customers and 44% of non-customers were able to add a channel to a content item on the first try.  The team aims for an 80% success rate without help to give a task a passing score.  Some of the issues include:
  • The single picker and multi-picker interfaces are different
  • Confusion about the red "x" icon used for the remove action, concern this meant there was an error
  • Users expected to be able to select channels from the tree
  • Users often didn't notice that an item had been added to the "Your Selections" area
  • Confusion about the need to click the "Add Selection" button to confirm choices
I'm excited to have the chance to come up with some design improvements for these pain points.  I don't know yet if my proposal will actually be tested with users, but it would be really great to see how it scores.  I have several suggestions from users and the UE team to get me started and I have started a list of my own.  I think the biggest challenge will be coming up with a unified design to satisfy the many instances in the interface where pickers are used.  My first task next week will be to sit down with a development version of WEM and use it myself to get a feel for using this product.