
Remote Research - Tony Tulathimutte


if you’re doing a scientific test and looking across very large sample sets, these minute effects are going to play a bigger role.

      It also depends on what you are testing. There are some obvious cultural differences in the way people use the Web, but there are also universal habits: registering for a service, noticing positioning, etc. The Brazilian community in Brighton is likely to pick up on the same things the community in São Paulo would.

      On the Use of Technology in UX Research

      People often try and find technological answers to human problems. A lot of the drive for remote testing is an attempt to find shortcuts. “Oh, it’s difficult to find test subjects, so let’s get technology to help us out.” Today I was reading an article where someone was saying, “How can we do remote ethnographic research? Can we get live cameras streaming?” And I thought, “Do a diary study.” Diary studies are probably the ultimate in remote ethnographic research. You don’t need a webcam streaming back to Mission HQ. People love tinkering with technology because it makes them feel like superheroes; it’s something to show off. I believe in human solutions; I think technology is often used as a crutch.

      I think remote testing is still in its infancy. It’s based on the technology that’s available. It’s preferable to have lightweight technology that you can send to a novice user, who can just double-click it and have it open and install. But if you’re looking at an average computer desktop, which is now above 1024×768, and you also want to capture the person’s reactions, you want to send audio and video down that pipe as well—that’s a complicated problem. You need good bandwidth to do this really well. So then you create artificial problems, because you’re limited to people who have pretty good tech and bandwidth. And so that would probably prevent us from doing remote testing with somebody in a cybercafé in Brazil.

      What I am kind of interested in is unmoderated [i.e., automated] remote testing, because it’s a hybrid between usability testing and statistical analysis or analytics. The benefit is that you can test on a much wider sample set. It complements in-person usability testing.

      On the Purpose of User Testing

      The point is to develop a core empathetic understanding of what your users’ needs and requirements are, to get inside the heads of your users. And I think the only way you can do that is through qualitative, observational usability testing. There are lots of quantitative tools out there, stuff that can tell you what’s happening, but they can’t necessarily tell you why it’s happening. We’ve all done usability tests where you watch people struggle with something, and you can see they’re having a real problem. Then they’ll go, “Oh, yes. It was easy.” We did a test where a user thought he’d purchased a ticket, and he hadn’t, but he’d left thinking he had. If he’d told you, “Yes, I’ve succeeded,” you would have been mistaken. Watching and observing what users do is very enlightening. Frankly, it’s easier for people to learn from direct experience than through analyzing statistics. There’s nothing like actually watching people and being in the same room.

      So, you’ve decided it’s worth a shot to try a remote research study. Feels good, doesn’t it? The first thing to know is that remote research can be roughly divided into two very different categories: moderated and automated research.

      Figure 1.3 Moderated research: a researcher (“moderator”) observes and speaks to a participant in another location. Outside observers can watch the session from yet a third location and communicate with the moderator as the session is ongoing. (Image: http://www.flickr.com/photos/rosenfeldmedia/4218824495/)

      In moderated research, a moderator (aka “facilitator”) speaks directly to the research participants (see Figure 1.3). One-on-one interviews, ethnographies, and group discussions (including the infamous focus group) are all forms of moderated research. All the parties involved in the study—researchers, participants, and observers—are in attendance at the same time, which is why moderated research is also sometimes known as “synchronous” research. Moderated research allows you to gather in-depth qualitative feedback: behavior, tone of voice, task and time context, and so on. Moderators can probe into new subjects as they arise over the course of a session, which makes the scope of the research more flexible and enables the researcher to explore behaviors that were unforeseen during the planning phases of the study. Researchers should pay close attention to these “emerging topics,” since they often point to issues that were overlooked during planning.

      Automated (or “unmoderated”) research is the flip side of moderated research: the researcher has no direct contact or communication with the participant and instead uses some kind of tool or service to gather feedback or record user behaviors automatically (see Figure 1.4). Typically, automated research is used to gather quantitative feedback from a large sample, often a hundred participants or more. There’s all sorts of feedback you can get this way: users’ subjective opinions and responses to your site, users’ clicking behavior, task completion rates, how users categorize elements on your site, and even your users’ behavior on competitors’ Web sites. In contrast to moderated research, automated research is usually done asynchronously: first, the researcher designs and initiates the study; then the participants perform the tasks; then, once all the participants have completed the tasks, the researcher gathers and analyzes the data.

      Figure 1.4 Automated research: a Web tool or service automatically prompts participants to perform tasks. The outcome is recorded and analyzed later. (Image: http://www.flickr.com/photos/rosenfeldmedia/4218825627/)

      There’s plenty of overlap between automated and moderated methods, but Table 1.5 shows how it generally breaks down.

      Table 1.5 Moderated vs. Automated Research (Image: http://www.flickr.com/photos/rosenfeldmedia/4286398073/)

      Moderated research is qualitative; it allows you to observe directly how people use interfaces. You’ll want a moderated approach when testing an interface with many functions (Photoshop, most homepages) or a process with no rigid flow of tasks (browsing on Amazon, searching on Google) over a small pool of users. Since they provide lots of context and insight into exactly what users are doing and why, moderated methods are good for “formative research,” when you’re looking for new ideas to come from behavioral observation. Moderated research can also be used to find usability flaws in an interface. We cover the nitty-gritty of remote moderated research in Chapter 5.

      Automated research is nearly always quantitative and is good at addressing more specific questions (“What percentage of users can successfully log in?” “How long does it take for users to find the product they’re looking for?”), or at measuring how users perform on a few simple tasks over a large sample. If all you need is raw performance data, and not why users behave the way they do, then automated testing is for you. (Suppose you just want to determine what color your text links should be: testing every different shade on a large sample to see which performs best makes more sense than closely watching eight users use three different shades.) Some automated tools can also be used to gather opinion-based market research data, so if you’re looking for both opinion-based and behavioral data, you can often gather both in a single study. And certain conceptual UX tasks, like card

