This week was a little hectic, as I was trying to get everything done before my next baby is born (luckily he hasn’t come yet). This week we worked more on eye tracking, producing a final report on the videos that covered the Kent State University Library site. It was interesting to see where participants’ eyes went with the eye tracking software, but hard to watch them struggle to complete the task.
The reading we had was about how to make use of eye tracking software and its possible pitfalls. When testing with this technology, it’s important that the tasks given to participants play to its strengths without overshadowing other aspects of usability testing. The final report we turned in had to reflect that, proposing alternative tasks for the videos we had as examples. Overall, I felt I learned a lot about the advantages and disadvantages of eye tracking in conjunction with usability testing.
This week was interesting because we explored something that I had never gone into depth with before, which was eye tracking technology and the results of that, including gaze plots and heat maps. Knowing where your users are looking is useful in knowing what jumps out at them, what interests them, and where they can get lost.
It seems like it has improved a lot, technologically speaking, since its inception, but it still feels a little hard to set up. One thing the readings emphasized was that it doesn’t replace actually testing users; it is only a tool that adds to the data and gives a bit more dimension to what’s going on, and one that can easily be misinterpreted. That is why it wasn’t recommended for most companies, but if there is a big budget for usability testing, it can complement the other things being measured. I don’t know how much I would recommend using eye tracking, but I can see its benefits.
Like last week, we looked at how to do mobile usability testing. It’s not as easy as it looks, trying to find a way to record the screen, the participant’s face, and their gestures. Finding something that would get all of that, or an approximation, while still simulating a natural environment, was the challenge.
The setup I decided worked best was to have a MacBook’s webcam record the participant’s face while the participant adjusted a few settings so that their mobile device’s screen was mirrored onto the MacBook and recorded, with a dot appearing wherever their fingers touched the screen. It seemed to be the best solution I could find, but even that was imperfect. I just didn’t like the idea of a camera mounted on the device itself; it seemed like it would be unnatural.
This week we didn’t have anything due; we just had some assigned reading and were supposed to get started on the project for next week. As I went through the assigned reading and videos, I realized how complicated it is to set up a usability test with mobile, and how none of the solutions I have seen feel very natural, with the mobile device in some sort of cradle or under a fixed camera. When I’m using my phone, it doesn’t stay in one spot; I’ll bring it closer to my face if I want to concentrate on something.
I’ve been thinking about how I would set up a mobile usability test, trying to keep it as flexible as possible, but it’s not easy. I still have to figure things out better for next week’s assignment. I think it’s a good assignment in that it makes me think. It should be interesting figuring out how to solve it.
This week we had to make a presentation of the findings from the remote usability tests we ran. Going through the videos and the data took a while, and then it was a bit difficult figuring out what data to present, what was pertinent and what wasn’t. But eventually I was able to come up with something. The gap between everything I knew about what happened and the simplified version I presented was interesting.
I think the presentation I created went well. I tried not to make it very long, but I don’t know if I put in too much information. Knowing how much information is too much and how much is too little isn’t easy. In a presentation, you don’t want to bore the stakeholders, but you do want to empower them. Hopefully I did it well enough.
This week we had to choose a website to test and then build three tasks for that test. I think one of the hardest things about this week was actually choosing the website. The demo video we were shown on how to use Validately was a little misleading, in that it mentioned having a final page the user had to reach in order to complete the task. But once I got into Validately, I was able to create the tasks pretty easily.
I used hotels.com as my testing site and created three tasks for unmoderated remote testing. The first was to see if the participant could find a hotel in Chicago with a pool. The second was to see if they could find a package deal to Miami with both flight and hotel. The third was to find how to sign up for secret deals. After going through other people’s tests, I realized I could have included some different follow-up questions, but for the most part I think it turned out well. Now to collect the data and make a presentation.
This week, as the first week in the new class, things were fairly easy. We talked about some methods of remote testing, including unmoderated and moderated, and how to recruit people for the studies. We had to get ready for the main assignment by signing up for a service that facilitates remote testing.
The reading focused on the benefits of remote testing, while still making use of any in-house testing. Sometimes a quick study is needed, and remote testing offers ways to make that happen while avoiding the costs and recruiting issues of live testing. Even when live testing is preferred, remote testing can have useful benefits. I had never even considered any other way before, but this week’s assignments opened my eyes.
In these final two weeks of the course, we reviewed each other’s videos and had to make a report from them, treating the video makers as the participants in our usability study. We had to review four videos, go through and find where they had problems, pull quotes, and gather other usability measures to add to the final report.
It wasn’t as hard as I expected it to be, but it was difficult to know what to put into the report and what to leave out. The assignment also felt a little too short for two weeks, but perhaps too long for one. I think it has been good experience, though. It’s made me realize just how much stuff I’d have to sort through in a real study.
This week we talked about moderating usability tests. Then we actually went through and moderated one that was already planned out; we just needed to find someone to do it with and use our script with them.
Looking back on my performance, I’d say it was probably average for a first-timer. Some parts could have gone smoother, and I could have asked more follow-up questions, but I didn’t venture out as much as I could have. For the most part, though, I was happy with it: I didn’t freeze up like I thought I might, and I feel that I didn’t lead the participant on.
The part where I struggled was ending a task and transitioning to a new one. I didn’t know how many follow-up questions to ask or how to end a task, nor did I know how to end the whole test, other than thanking him for his time.
I think I did let him speak and remained unobtrusive, which was pretty important. There were times when I wanted to say something, give a hint, but I held myself back. Overall it was a good learning experience and I think I will get better the more I do it.
This week we discussed emotions and expressions in usability testing and what they can tell testers. Although hard to quantify, they can still be useful to observe, and some software is beginning to record emotions as well. Just to bring it into the discussion, I talked about reasons not to record faces, including privacy concerns and biases. Personally, I think it is good to see faces, but the article I shared gave some food for thought.
The assignment was about choosing a quantitative measurement and focusing on that for testing Chipotle.com. I focused on error rate, arguing that identifying the errors users make would inform improvements elsewhere and raise satisfaction. Of course, some errors are purely user-generated, but others can be reduced with good interface design. The trick is knowing which category each falls into.