Usability Testing for User-Centered Design
Dr. Carol Barnum (2002) identifies the following characteristics of usability testing:
- The goal is to improve the usability of a product
- The participants represent real users
- The participants do real tasks
- The researchers observe actions and record what the participants say
- The researchers analyze the findings, diagnose problems, and recommend changes
The important things to notice here are the inclusion of paid participants, or users, who are representative of the target audience, and a researched protocol that the testing follows. There are a number of testing models, including lab testing, testing without a lab, and field testing.
In the usability lab (the most expensive and time-consuming option), a number of users come into a controlled environment and are given a task to complete within a specific time frame. Observers may watch from behind a two-way mirror, recording what they see and hear, or use a television monitor to observe and listen to the participants. Typically, a lab requires dedicated space and a good deal of equipment, including video and audio recording devices.

Testing without a lab requires only a space, such as an office or conference room, where the participant and observer will not be disturbed. The observer may sit next to the participant and record, by hand or with a recording device, what the participant does, or have the participant “think aloud” during the process. Modern technology, like computers and phones with built-in cameras and microphones, makes this form of testing widely available and economically feasible, but, according to Jakob Nielsen (2012), a notepad and pen are the only equipment you really need.

Field testing means that the observer goes to the user and “tests” in the actual environment in which the document or device will be used; as an added bonus, the observer can watch users in their natural environment, with all of its supports and distractions.
What if I Just Skip This Process Altogether?
Yes, usability testing can be expensive and time-consuming, but in most cases it will be worth the investment. The costs of not testing a product or program show up in the amount of additional training needed to support users, the competitive advantage of the product or program, the image and reputation of the organization, and the efficient use of employee and client time (Barnum, p. 23).
Start by Making a Plan
If you are going to conduct a usability test, you have to start with a plan. That’s how you will document what you’re going to do, how you’re going to do it, how many participants you need to recruit, and what you will have them do. In this case, you will be the usability specialist.
For your plan, you need to identify the scope and purpose of the testing; decide when and where you will do the testing; identify the equipment you will need; and determine how many sessions you will conduct, how long each will be, and how many participants you think you will need. You must also determine what tasks you will be testing and develop the metrics for evaluation. For example, subjective metrics include the questions you’ll ask the participants about ease and pleasure, while quantitative metrics specify what data you will collect about errors, completion rate, or time to complete a task. Finally, you may need to identify your staff and decide what role other members of the team will play (usability.gov, “Planning a Usability Test”).
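
To make the planning step concrete, here is a minimal sketch, in Python, of one way a plan's sessions, tasks, and metrics could be recorded. Everything in it, including the task names, time limits, and question wording, is a hypothetical example rather than a standard template.

```python
# A minimal sketch of one way to record a usability test plan.
# All task names, time limits, and question wording are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str                # what the participant is asked to do
    success_criteria: str    # how the observer decides the task is complete
    time_limit_minutes: int  # cutoff for the time-on-task metric


@dataclass
class TestPlan:
    purpose: str
    location: str
    session_length_minutes: int
    participant_count: int
    tasks: list = field(default_factory=list)
    # Subjective metrics: questions asked after each task or session.
    subjective_questions: list = field(default_factory=list)
    # Quantitative metrics: data the observer records for every task.
    quantitative_metrics: tuple = ("errors", "completion", "time_on_task")


plan = TestPlan(
    purpose="Evaluate the draft user guide for the department intranet",
    location="Conference room B (no lab)",
    session_length_minutes=45,
    participant_count=5,
    tasks=[Task("Find the leave-request form", "Form opened", 5)],
    subjective_questions=[
        "How easy was it to find the form?",
        "What, if anything, was frustrating?",
    ],
)
```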
Recruit the Participants
Once you have a plan, you will recruit your participants. Try to find people who are as close to your target audience as possible; you may have multiple user groups. It’s okay to use your own colleagues for testing during the piloting stage, but not during actual testing. If you are seeking insights, Jakob Nielsen states that five users will give you as much information as you need. For quantitative data collection, where you are seeking statistics, you will need at least 20 users. If you are going to conduct iterative testing over the course of developing a document or site, you should have a different group of participants for each test. Lastly, since participants are usually compensated, you will need to decide how you will pay them. Keep in mind that you cannot pay federal employees.
Run the Test!
A typical usability test might look like this:
The facilitator welcomes the participant, explains the test session, and asks any demographic questions. The facilitator then explains what the participant will do and walks through the task scenario. The participant begins working on the scenario and may think aloud during the process while the observer or facilitator takes notes on what the participant says and does. The session ends when the tasks are complete or the allotted time is up; the facilitator then asks any end-of-session subjective questions, thanks the participant, offers the compensation, and escorts the participant from the testing area.
Jen Bergstrom (2013) observes that choosing the best moderation technique for the session depends on the goals of the session. A concurrent think aloud (CTA) is useful for understanding participants’ thoughts as they work through the task. A retrospective think aloud (RTA) has the participant retrace their steps when the session is complete. Concurrent probing (CP) requires that the facilitator ask follow-up questions whenever the participant makes a comment or does something out of the ordinary. Retrospective probing (RP) waits until the end of the session and then asks questions about the participant’s thoughts and actions as a follow-up. Each method has its pros and cons, and none of them contributes to collecting quantitative metrics data.
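
For note-taking purposes, the four techniques can be boiled down to a small lookup table. The sketch below simply restates the descriptions above as data so an observer's form can record which technique a session used; the wording is a paraphrase of this section, not taken from the cited source.

```python
# The four moderation techniques described above, restated as data so a
# note-taking form can record which one a session used. The summaries
# paraphrase this chapter's descriptions.
MODERATION_TECHNIQUES = {
    "CTA": ("concurrent think aloud",
            "participant voices their thoughts while working through the task"),
    "RTA": ("retrospective think aloud",
            "participant retraces their steps once the session is complete"),
    "CP":  ("concurrent probing",
            "facilitator asks follow-up questions whenever the participant "
            "comments or does something out of the ordinary"),
    "RP":  ("retrospective probing",
            "facilitator waits until the end of the session, then asks about "
            "the participant's thoughts and actions"),
}


def describe(code: str) -> str:
    """Return a one-line description of a moderation technique by abbreviation."""
    name, summary = MODERATION_TECHNIQUES[code]
    return f"{code} ({name}): {summary}"


print(describe("RTA"))
```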
Interpret and Record the Data
After you finish conducting your tests, you will need to turn all that data into information you can use to improve the document or site. Essentially, you will sort the quantitative data, like performance measures, from the subjective data, like attitude, and analyze both carefully, looking for problems. Lastly, you will present your research in a report; see, for example, the usability report for a study conducted on the Purdue OWL website.
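
As an illustration of that sorting step, here is a small Python sketch that separates performance measures (completion, time on task, errors) from an attitude rating. The session records are invented placeholders, not data from any real study.

```python
# A minimal sketch of sorting quantitative (performance) results from
# subjective (attitude) results after a round of testing.
# The session records below are invented placeholders, not real data.
from statistics import mean

sessions = [
    {"participant": "P1", "completed": True,  "time_s": 210, "errors": 1, "ease_rating": 4},
    {"participant": "P2", "completed": True,  "time_s": 185, "errors": 0, "ease_rating": 5},
    {"participant": "P3", "completed": False, "time_s": 300, "errors": 3, "ease_rating": 2},
]

# Quantitative (performance) measures.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_time = mean(s["time_s"] for s in sessions if s["completed"])  # completed tasks only
total_errors = sum(s["errors"] for s in sessions)

# Subjective (attitude) measure.
avg_ease = mean(s["ease_rating"] for s in sessions)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Average time on task (completed sessions): {avg_time:.0f} s")
print(f"Total errors observed: {total_errors}")
print(f"Average ease-of-use rating (1-5): {avg_ease:.1f}")
```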
Don’t Forget Accessibility!
Typically, usability testing does not consider the user with a disability. As a technical communicator, you have a responsibility, both legally and ethically, to produce documents and sites that comply with Section 508 of the Rehabilitation Act and with the Americans with Disabilities Act (ADA). An accessible site presents information through multiple channels so that users with disabilities can access the same information as users without disabilities. Check out the Americans with Disabilities Act website for more information.
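
One small, automatable piece of accessibility work is confirming that every image offers a textual channel. The sketch below uses Python's standard html.parser to flag img tags that lack alt text; it is only a quick screen under that one criterion and is not a substitute for a full Section 508 or WCAG review.

```python
# A quick, partial accessibility check: flag <img> tags with no alt text so
# that every image also presents its information through a textual channel.
# This is only a spot check, not a full Section 508 / WCAG review.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Flags both missing and empty alt attributes; an empty alt can be
            # intentional for purely decorative images, so review the results.
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(no src)"))


sample_page = (
    '<p>Lab setup</p>'
    '<img src="lab.jpg">'
    '<img src="monitor.jpg" alt="Observer watching the test on a monitor">'
)
checker = MissingAltChecker()
checker.feed(sample_page)
print("Images missing alt text:", checker.missing)
```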