SPEAKER: 86% of app ideas are born from a developer's personal pain. These ideas form apps nobody needs. Developers believe research with users is a waste of time. They perceive their app as a coding exercise. To validate their idea, they ask their sister if she likes it. She says yes. TOMER SHARON: Meet Will and Dana. Will and Dana are the co-founders of Noteo, a note-taking app nobody needs. Dana is 26 years old. She's an MIT Computer Science graduate, she's a Trekkie, and she has been coding since she was eight years old. Will is 25 years old. He's a Stanford Computer Science graduate, and he loves Star Wars and LEGOs. They met over a weekend hackathon here in San Francisco about a year ago, and they liked each other's way of thinking. So about six months ago, they started Noteo, and they've been working on it since. Will and Dana are failing. They're crashing, they're burning, and they're sinking $200,000 of seed money that they got, without even knowing what went wrong. I'm here to help you avoid their mistakes. I'm here to help you execute the right plan. Hi, my name is Tomer Sharon. I'm a Google Search User Experience Researcher, and I have been studying tens of thousands of users, of people, learning about what they need, what they want, and how they use apps and other products. I've been helping Google Search and startup teams come up with products that meet human needs. Going back to Will and Dana. They have six big, big problems. Their number one problem is that they did not fall in love with a problem. They rushed into launching a landing page for a product they weren't sure of. They had personal pains related to note-taking, and they were sure that this is something people needed. Just to be very sure, they launched this landing page. And I did put an arrow here, but I'm going to use my lightsaber. They launched this landing page and collected people's email addresses, trying to figure out if people were interested and, therefore, if the app was needed. But the only question they were able to answer is the one you see down there: are people interested enough to give them their email addresses? They were never gathering information about what users need. Their number two problem is that they learned from friends. They interviewed seven friends and family members. Now, family members and friends are always happy to give you feedback. The problem is that they're biased. How can they hurt your feelings? They're your friends. They're your family. Of course they like your idea. Sure, they'll use it. No doubt they'll pay for it. A lot. Third problem. They listened to users. Now, I know coming from a User Experience Researcher it sounds kind of weird, but bear with me here. The first rule of research is: don't listen to users. Instead, observe their behavior. When Will and Dana asked their friends and family, would you use this app, would you pay for it, how much would you pay for it, they got really good answers. People liked it. But they forgot-- or didn't know, or ignored-- what social psychologists have known for almost 100 years now: we humans are very, very bad at predicting our own behavior. Here are two studies that were done during that time. This is a study from 1937. The researchers went into classrooms and asked students-- they passed along a questionnaire-- and they asked students, would you cheat in an exam? And the students answered. 
A few weeks after that, they came back to these classes, to these students, and together with the teachers, they gave the students an opportunity to cheat without knowing they were being watched. And surprise, surprise, there was close to zero correlation between what people said about their behavior and what they actually did. 1937. Now, Will is kind of sarcastic about studies from 77 years ago, so here's a study from 2012. This was done in the UK with tens of thousands of people. The researchers went into public bathrooms in gas stations. And they asked people coming out of the bathrooms, did you wash your hands after you finished your business? 99% of people said of course, yes. But the researchers had installed electronic recording devices on the faucets in the bathrooms. And they actually knew exactly how many people did wash their hands. So 32% of men and 64% of women actually did wash their hands. There's a very, very big difference-- very big difference-- between what we say we do and what we actually do. There are many reasons for that. Some people would say they're just liars. They're not. We simply have trouble predicting our own behavior. When Will and Dana asked their friends and family, or anyone else, would you use our app, they were asking them to predict the future. Again, we humans are very bad at it. Their fourth problem is that they didn't test the riskiest assumption. Every product or idea comes with a set of assumptions or beliefs. The riskiest assumption is the one that is core to the idea. And it's also unknown. Or, put differently, if the riskiest assumption is not true, the whole idea falls apart. What they could have done, Will and Dana, is-- and this is just an example-- they could have assumed that this is risky: smartphone owners are aware of their ineffective note-taking habits. If this is not true, they have no reason to develop Noteo. They didn't do anything to validate or invalidate this riskiest assumption. Their fifth problem is what I call a Bob the Builder mentality. They rushed into developing a product. This is what they know best. This is what they know how to do. They rushed into developing a product, and they launched a minimum viable product without even knowing what for. They kept asking themselves, can we build this note-taking app, instead of asking if they should. In their mind, this was a coding exercise. And a coding exercise doesn't require any insights from users. And their last problem is that they were perfectly executing the wrong plan. They developed a beautiful app nobody needs. They could have validated or invalidated three things. The problem. Is there a problem-- a note-taking problem-- in this world that people care about? They could have validated the market. Are there enough people who have this problem and care about this problem? And later on, they could have validated their product. Is our product solving this problem for this market? And I give credit to Laura Klein, who's sitting right here. Thanks for that. They are doing a few things very well. I want to mention two of these things. So in the past year, I interviewed 150 app developers and startup founders. I wanted to know what questions they ask themselves about their users, or potential users. And the good thing that I found was that they ask the right questions. These are some of the results. I'm just going to go over a couple. 97% of them ask, who are my customers? 
95% ask, do people need my product? Probably the most important question to ask. 89% ask if the product is usable. These are very good and important questions to ask. The second thing that is going well for them is that they understand priorities. They understand what the most important questions they need to ask are. And they know when to ask these questions. I'm completely ignoring the invalid, unreliable way they answered those questions. But just having the right questions and knowing when to ask them is a very, very positive thing. So up until now, I talked about their six problems and a couple of good things for them. What I want to do next is suggest a solution. Suggest a way to execute the right plan. So say hello to lean user research. Lean user research is a discipline that provides insights about product users, their perspectives, and abilities, to the right people at the right time. Excellent lean user research is of high quality. It's not crappy research. It's impactful, meaning it's not just interesting, but you actually have something to do with it. And it's fast, because nobody wants to wait for research. Next, I'm going to introduce you to three lean user research techniques. The first one is called experience sampling, the second is observation, and the third is fake doors. Let's start with experience sampling. During an experience sampling study, research participants are interrupted several times a day to note their experience in real time. This is a unique way of mining their reality. Experience sampling comes from a research technique called pager studies. This was developed in the 1950s. Back then, researchers handed pagers to their research subjects and asked them a question several times a day. For example, how do you feel? Where are you? What do you do? And things like that. And they collected these responses and understood the lives of their users, or research subjects. The key in experience sampling is asking the same question over and over again. So for example, say you ask people, what annoyed you in the last couple of hours? Imagine you asked that question five times a day, for five days, and you have 100 research participants. Quick math: you collect 2,500 data points. This is a huge, huge, insightful, useful body of knowledge. I want you to try it out. If you sit near the screen, you can use the QR code. If not, access this URL right now. Yes, do it right now. And even if you're watching at home, you can do that. And you have an experience sampling question there. A sample question related to Noteo. It works! Answer the question, and we'll go over your answers in a minute. And I'm going to play with my lightsaber. Don't futz with it too much. The URL is working too. All right, I'm moving on. So imagine you are asked that question five times a day, for five days. You're not always going to have an answer, but you will in many cases. Let's go over sample responses. So here we have 31 responses to this question. If you look at it-- just eyeball what you see here-- very, very quickly you can understand that there are several groups of things you can learn here. Let me color them for you. Some people are writing down lists. Others are writing down ideas. And others are just sketching stuff. That's from a few seconds of looking at it. Imagine 2,000 of these and what you can learn from that. 
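A minimal sketch, not from the talk, of how responses like these might be tallied once they are collected. It assumes free-text answers and a few keyword buckets; the sample answers, the bucket names, and the keywords are made up for illustration.

```python
# Sketch: grouping experience sampling answers (5 prompts/day * 5 days * 100 people = 2,500 answers).
from collections import Counter

responses = [  # hypothetical free-text answers to the sampling question
    "grocery list for the weekend",
    "an idea for the landing page",
    "sketched the new app icon",
    "to-do list for tomorrow",
]

# Hypothetical keyword buckets, mirroring the lists / ideas / sketches grouping described above.
buckets = {
    "lists": ["list", "to-do", "todo"],
    "ideas": ["idea", "thought"],
    "sketches": ["sketch", "drew", "drawing"],
}

counts = Counter()
for text in responses:
    lowered = text.lower()
    for bucket, keywords in buckets.items():
        if any(keyword in lowered for keyword in keywords):
            counts[bucket] += 1  # count this answer once per matching bucket

for bucket, count in counts.most_common():
    print(f"{bucket}: {count}")
```

With a few thousand answers, even a crude grouping like this surfaces the clusters colored on screen, though in practice you would still read the responses themselves rather than rely on keyword counts.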
One of the most important things in experience sampling is the question that you ask, phrasing it right. A great experience sampling question asks about repeated behavior. Something that happens a lot during a day, or during the study period. I'm going to give examples in a minute. A bad experience sampling question asks yes/no questions. Yes/no would give you nothing about user needs. It would give you 2,500 responses of yes or no. You're not going to learn much from that. A bad question can also ask about numbers. Again, same thing. You're going to get a lot of numbers, you're going to average them, and it's not going to tell you much about what people need. And a bad question also asks about opinions. If you ask me eight times a day about my opinion about something, it's not going to change much. All right? Ask about repeated behaviors. Two examples. A good example: what was the reason you recently updated your website? Repeated behavior, no numerical data, not yes or no, and it's not about an opinion. It's about behavior. A bad example: how many emails did you receive in the past hour? Some people would say 0, 3, 17, 52, 200. Again, there's not a lot you can learn from that about user needs. So to sum it up, experience sampling helps you identify needs. It identifies features that might be helpful for your audience. And it tells you a lot about current pain points and delights. So this was experience sampling. I'm moving on to observation. This is the second lean user research technique I want to introduce you to. In observation, you are gathering data in the user's, or the person's, environment. They don't have to be users yet. This is the science of contextualization. There are four pillars to observation. The first one, pretty obvious, is observing. Watching people as they go along with their daily lives, at home, work, the street, the bus station, the train, wherever is relevant, based on what you want to learn. The second pillar is listening. Listening to the jargon people use, their language, and witnessing conversations they have with one another. Third, noticing. There are several behaviors, or things that happen, that you can observe and identify that would tell you a lot about what people need. And gathering. You collect artifacts. Artifacts are things people either use or create to complete their tasks. So for example, if you're observing a person at work, and a work-related task comes in, and the person notes that task in a spreadsheet they created, that's a spreadsheet you need to gather. I talked about noticing behaviors. I want to go over several such behaviors. Paying attention to these behaviors is really, really hard. I don't know if you're aware of it, but when you observe people, there are tons of things you can learn. It's critical to know what to look for. So here are just five behaviors that are important to pay attention to. The first one is routines, regular actions people are doing. The task example I just gave, with the spreadsheet, is a routine. Interruptions. When something happens that stops the person you're watching from completing a task-- breaking its continuity in some way, either because of the person himself or herself, or because of someone else-- that can teach you a lot. Life is never free of interruptions. A phone call comes in while the person is in a meeting; they look at the phone, they take the call. As an observer, your intuition would be to ignore that, because you wanted to learn about meetings, not phone calls. But what happens during that conversation is extremely important to learn from. 
It has a direct relationship to what you want to learn about the meeting. Annoyances. Annoyances are just obstacles to complete a task. They're not going to prevent people from completing a task, but they would make them angry. They would make them overwhelmed, frustrated, and things like that. Delights. A lot of people think that research is there to identify problems, challenges, pain points. That's partially true. Learning about what people love, learning about what people enjoy, is extremely helpful in understanding what they need. And the last behavior to pay attention to is transitions. This is what happens when the thing you're observing is done, or on the way to something else. I'll give an example, because it better explains it. Let's say, again, you're interested in meetings and you're observing a person during a work day. And they have a meeting. The meeting has ended, and now they're walking 200 feet to the next meeting. Your intuition would be to take a break, a mental break. Drink water, check your email, not pay attention to what's happening. But that person might open their laptop and walk to that meeting doing something. This is a very, very good signal for a need. So pay attention to these transitions. Instead of just talking about observation, let's watch 90 seconds of an observation session. What I want you to do is this. Imagine that Stop & Shop, a grocery shopping retailer, came to you and asked you to answer this question. How can we improve the in-store grocery shopping experience with technology? Try and list, or think about, three things that she really, really cares about. Let's watch. WOMAN SPEAKER: And another thing that I do is, if something is really a good price-- if something's on sale and it's for a really good deal-- and it's not perishable, or that it has an expiration date that's farther along, I'll buy a bunch. Like, I literally-- if I know my kids love it-- and I know that it's an amazing price-- I buy like 10 of something. For example, the other day I bought Haagen-Dazs ice cream. I see that it's on sale now. I'm biased. Haagen-Dazs is just really good quality ice cream, and there's a lot of flavors, and my kids love ice cream. So to me, sometimes quality is important, even though you're paying a little bit more. So I bought the Haagen-Dazs ice cream at a good price because then you save-- two for $7.00-- you save $0.99 each. So you're actually saving $2.00 when you're paying $7.00 for two. So I loaded up my cart. You know, Haagen-Dazs doesn't do a bigger size, so of course you're paying more. MALE SPEAKER: How do you know this is a good price compared to other places? WOMAN SPEAKER: Good question. Just because, in my experience, I've seen Haagen-Dazs being sold, and it's usually the full price that it says that it is. It's at least $4.50. For example, King's, the high end market down the street, everything's more expensive in here, baseline. So one time, my husband liked the Greek yogurt, and I wanted to buy him the Greek yogurt when I was at King's, and they had a little sale. And I was so happy, and I bought a bunch, and I came home. And when I came here to Stop & Shop and I looked at the Greek yogurt, it was cheaper baseline than it was at-- TOMER SHARON: At King's. All right. So, in 90 seconds, these are three insights you could have learned. She cares about saving money, she cares about the quality of the food she's buying, and she cares about her kids' health. Imagine what you could have learned during a longer session with several people. 
All right? Observation is really good for identifying features your users or potential users need, for validating or invalidating assumptions about your users, and for learning more about their problems, workflows, and goals. So this was observation. The third, and last, lean user research technique I want to go over is called fake doors, or as I like to call it, fake it till you make it. With fake doors-- and I'm using, again, Will and Dana's landing page-- you are pretending you have something. They did that on their other landing page, where they tried to collect email addresses. The difference here is this button. Instead of collecting email addresses, they are trying to understand commitment. They're trying to prove that there's a commitment to using the product. They're asking people to buy a product that doesn't exist yet. If people do that, that's a very good signal that there's a need here. A version of that is called the button to nowhere. The button to nowhere is exactly what it sounds like. I'm going back to this page. Imagine that Will and Dana had an idea: let's develop a TV app for note-taking. Now, without writing-- well, maybe not zero lines of code, because adding this button right there would take a line of code-- but without writing a lot of code, without developing the app, they can learn a lot about needs. What they need to do is measure the ratio between how many people clicked the button and how many people were exposed to this page. If they decide in advance on a threshold that, if crossed, would make them develop the app, they have a very powerful decision-making tool at hand. And they don't need to develop anything other than this. They don't need to develop the product. This technique is also called 404 testing, for obvious reasons. And it's extremely helpful in learning about needs without risking time, money, and effort you can never get back if you develop the wrong product. 
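A minimal sketch, not from the talk, of what that measurement might look like: count the people exposed to the page, count the people who clicked the button to nowhere, and compare the ratio to a threshold decided before the test. The event log format, the user IDs, and the 5% threshold are assumptions made up for illustration.

```python
# Sketch: deciding on a "button to nowhere" result from a simple analytics event log.

events = [  # hypothetical logged events
    {"user": "u1", "event": "page_view"},
    {"user": "u1", "event": "button_click"},
    {"user": "u2", "event": "page_view"},
    {"user": "u3", "event": "page_view"},
]

THRESHOLD = 0.05  # assumed: agreed on in advance, before seeing any data

# Unique people exposed to the page vs. unique people who clicked the fake button.
viewers = {e["user"] for e in events if e["event"] == "page_view"}
clickers = {e["user"] for e in events if e["event"] == "button_click"}
ratio = len(clickers) / len(viewers) if viewers else 0.0

print(f"{len(clickers)}/{len(viewers)} visitors clicked ({ratio:.1%})")
print("Build it" if ratio >= THRESHOLD else "Don't build it yet")
```

The important part, as the talk stresses, is deciding the threshold before the test runs, so the ratio becomes a decision rather than a number to rationalize afterward.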
So, lean user research techniques. I talked about experience sampling, observation, and fake doors. And lean user research is a discipline that provides insights about product users, their perspectives, and abilities, to the right people at the right time. [LAUGHTER] Funny, but when you talk about apps, it's not funny anymore. Ignore lean user research, and your app's destiny will be very similar to the destiny of helicopters in movies. Many developers think that they don't have time to waste on learning from users. If there's one thing, only one thing, I want you to take away from this talk, it's that you don't have time to develop the wrong app. Don't perfectly execute the wrong plan. Execute the right plan. Thank you. [APPLAUSE] A couple of things. Feedback on this talk would be extremely helpful. And I want to thank all these good people for serving as the photographer, logo designer, models, and so on and so forth. If you have any questions, please come to the mics here, and I'll be happy to answer. We have enough time. Or you can come to me. Yeah, go ahead. AUDIENCE MEMBER 1: I've got two. So, suppose our team's getting bigger. Same customer base. Could be like we're working on exposure touch point A, and the other team's working on touch point B. Are there strategies or approaches for sharing findings and results of user research with other team members? And then, probably the second one, that is, do findings and research results expire? TOMER SHARON: Sharing is key to success. I mean, I don't see why you don't share your findings with other teams. Are you competing? Are the teams competing? AUDIENCE MEMBER 1: No, no, no. The question is how. TOMER SHARON: How to share? AUDIENCE MEMBER 1: Yeah. Well, I mean-- So I actually have to hand someone raw data. That's part of the [? parser, ?] where I can make [INAUDIBLE] for them to gain meaningful insights [INAUDIBLE]. TOMER SHARON: OK, so the question is asking about what to share. All right. I'm against reports, so I'm not for writing reports. But do the analysis and share what you learned from it. You don't need to drop raw data on them. What was the other question? AUDIENCE MEMBER 1: Do findings ever expire? TOMER SHARON: Do findings ever expire? Sometimes, if you change the design, if you change the product, some findings might expire. Findings about human behavior don't tend to expire. Findings about user needs don't really expire. Because human behavior-- and I know it's a funny thing-- doesn't change much. We behave very similarly to how humans behaved 10,000 years ago without technology. Yes. AUDIENCE MEMBER 2: So what is the right sample size for any kind of research like this? So for a small application, what would the sample size be? For a bigger application with more risk, what would the sample size be? TOMER SHARON: It depends on your question. Remember the questions I introduced? For observation, you don't need a lot of people. And not a lot is, I don't know, eight, or around that-- more than enough to learn from. In experience sampling, when I'm doing that, we use thousands now. We used to use hundreds; now we're using thousands. It depends on what you want to know and what research method you're picking. But it's changing. If you have a specific question in mind that you want to learn about, I'm happy to talk to you later about the sample size. Yes. AUDIENCE MEMBER 3: Hi, I thought your observation example was very interesting. It looked a lot like the listening-to-your-users thing that you told us never to do. TOMER SHARON: When I say, don't listen to users, observe their behavior, there are two ways to do that. I didn't go into that because I didn't have time, but I'm happy to do that now. What you watched is what I call an out-loud session. What I asked her to do is describe what she's doing. I didn't ask her for-- you know, I didn't ask her, would you use the app I'm thinking about? I didn't ask her for feature recommendations. I asked her to describe what she's doing and why. OK? Another way of doing that is what I call silent mode. Ask them to shut up. People usually don't talk to themselves when they shop, so you can just watch. It's a trade-off. There are pros and cons to each one. Because when you watch someone-- I have a video of someone I asked not to talk during grocery shopping-- you don't really understand what's going on. It seems very unhelpful. So sometimes, in some situations, I do ask them to describe what they're doing. Sometimes I'll do some kind of combination. I ask them to be silent, and only if I don't understand something, I'll ask them a question about it. I'll ask them to explain. Don't listen to users when you ask questions about the future. OK? If you have no choice, ask them about the recent past, not about the future. 
AUDIENCE MEMBER 3: On a closely related note, the first thing I noticed in that example was that the user couldn't get through 90 seconds of shopping without being interrupted by somebody yelling on a loudspeaker about something she cared nothing about. TOMER SHARON: Yeah, it's like flying. Yes. AUDIENCE MEMBER 4: So with 404 testing, what are some of the pitfalls? It seems like if I put a button in my app and pushed that out to everyone, they're going to be upset. TOMER SHARON: There's one very, very big pitfall. I attended one of Eric Ries's talks, and he gives an example. Or, somebody asked him, what do you mean, post an ad that leads to nowhere? To a 404 page? People would chase me. People would call me, and they would want to do bad things to me. And he responded and said, then you have a business. They really care. So the con is that people feel tricked. So I'm not recommending that you do that all the time. But sometimes this is extremely helpful in understanding needs. You can be more gentle than serving 404 pages and come up with something like I showed, coming soon, or something like that. Tell them, thank you for your feedback, you helped us. If you have money, pay them. Things like that. Give them a gift or something like that to make it better for them. Yes. AUDIENCE MEMBER 5: I was wondering if you have any recommendations for conducting the experience sampling? Just in terms of implementation. Do I just put dialogs in my app after the users execute some repeated action, or? TOMER SHARON: I do have a lot to say about that. What I'm going to say is that tomorrow at 9 o'clock, I'm giving a talk just about experience sampling. So instead of three minutes, I have 30 minutes, so I'm going to talk more about that. And I'm going to talk about an example from Google. AUDIENCE MEMBER 5: OK, thank you. TOMER SHARON: Sure. Yes. AUDIENCE MEMBER 6: How do you pick your riskiest assumption? TOMER SHARON: Again, it's an assumption. It's not easy. First, you need a list of assumptions. A lot of people don't even do that. List your assumptions. And then look at each one and try to think-- if you have a team, do that as a team-- try to identify the one assumption that, if it turns out it's not true, everything falls apart. There's no reason for this company, there's no reason for this product. It might be more than one, but normally it's about one. All right? Yep. AUDIENCE MEMBER 7: Hi. I was wondering, what is your advice for a redesign of a product that exists already, but you want to improve? And maybe you kind of executed it in not the best way? TOMER SHARON: That's a pet peeve of mine. Usually, redesigns, from my observations, are done because three years passed. And it's not necessarily a good reason. There needs to be a good reason for a redesign. If it's from 15 years ago and it looks old, that's a good reason. Three years ago, I'm not sure it's going to look old. A good reason is, we identified different user needs. OK? And identified user needs does not mean we asked people what they need and they told us. Or feature requests. Or things like that. User needs bubble up from the things I just described. AUDIENCE MEMBER 7: OK, thank you. AUDIENCE MEMBER 8: You can let them go. They've been-- OK. TOMER SHARON: No, no, I give priority to women because we want them to come back. AUDIENCE MEMBER 8: So, really, we're talking apps here. But a product is a product. Hardware, software, it doesn't matter what it is. One's a little more permanent than another. 
But how do you end up finding the right problem, executing on it, but not doing it the correct way? So, this is something-- I do a lot of stuff with startups and I see this happen all the time-- and I just go, this product is useless to me because it solves the problem, but it doesn't solve it in a way that's going to be continuously useful. Or it's too expensive. How do you find that out early on in the stages of development and ideation in order to prevent that kind of outcome? TOMER SHARON: One thing that I mentioned during the talk is something that people in many cases ignore. It's not only important to identify a problem, because there are many problems in this world. It's important to identify, and invest time in identifying, a problem people really care about. Because there are many problems that nobody would deny are problems, but people don't care enough about them to solve them. We do a lot of bypasses all day long. We don't really care. Yes, there's a problem. No big deal. There are hundreds of thousands of apps that solve problems that nobody cares about. I would say this is key. You can solve the right problem, you can solve a problem people care about, and you can design a beautiful app, but then the workflow is terrible, and people would never use it. There are problems, from the company's or the startup's perspective, that can arise at every stage. In my opinion, the hardest to fix is solving the wrong problem. Or not solving any problem. There are many solutions without problems. OK? It's hard. It's not something I can answer in a minute. We can talk later. Yes. AUDIENCE MEMBER 9: Does this change if you're building enterprise software instead of a consumer application? For example, there are two big differences that I see. One is there are multiple stakeholders. And number two, the access to top management is limited, or perhaps the intentions of managers are opaque sometimes. TOMER SHARON: I always like to talk about research in terms of questions. If you care about what people need, what people want, and whether they can use the thing, it's not really different, whatever product, app, or organization you're targeting. Or whatever type of audience you're targeting. Of course, there are differences in the details. Differences in stakeholders, definitely. In organizations, there is the person or department that pays for the product, and then there are users. That's different. They have very different considerations. So yes, there are differences. And when you have multiple stakeholders, you need to focus on each one of them. Because user experience is not just what happens to the user during the usage of the product. But very basically, there's not a lot of difference. In the basic things you need to do, there's not a lot of difference, in my opinion, at least. Yep. AUDIENCE MEMBER 10: So what do you do when your users have a need that happens much less frequently than taking notes or going shopping, like buying a flight? TOMER SHARON: Good question. So I mentioned, what was the reason you recently updated your website? People don't necessarily update their website five or eight times a day. So the study would just take longer. So let's say your audience is updating their website twice a week. All right? So you're not going to ask that five times a day. I would recommend that you ask that once a week and run this study for 10 weeks. All right, so a participant would answer the question 10 times over 10 weeks. 
Do that with 50, 100 people, and you still have a lot of data points. It's just going to take longer. Yep. AUDIENCE MEMBER 11: Hi. How do you deal with insincere answers to your questions? How do you-- TOMER SHARON: I ask more questions. In my interviews with founders and app developers, I wanted to know if they read certain books. And I thought in advance that they would tend not to tell the truth because they want to look smart. So after asking about the books-- say, three questions after that-- I asked them, what does the term blah, blah, blah say to you? And that's a term taken from an important book in that list. If they hadn't read the book, they wouldn't know what I was talking about. And some of them didn't. So there are ways to understand if people are not very sincere. But there's an even bigger problem when you ask people rather than observe their behavior. When you ask people, they do what's called rationalization. They tell you things they think you want to hear because they want to look smart. They want to be perceived as good people. So they change reality a little bit. And the bad news is that we have no way of knowing if that's true or not. So it's hard. It's hard. That's why I heartily recommend not asking questions and just watching them. AUDIENCE MEMBER 11: Thank you. TOMER SHARON: Last question. We don't have time. AUDIENCE MEMBER 12: It's regarding the 404 testing. Most of the time, you are just pitching something new. It's hard to get data from your competitors. How do you generalize the data that you have collected, since you don't have anything that is similar to what you're trying to do? TOMER SHARON: Let me see if I understand. You are saying that you don't have anything to compare to? AUDIENCE MEMBER 12: Probably. TOMER SHARON: OK, so you can decide in advance, what is a number that would make us feel confident? Or you can compare yourself to yourself over time. So you can run the test today, and then again a month from now, and in two months, and see what has changed, if at all. OK? All right, one more. I have a timer here, and something will explode if-- AUDIENCE MEMBER 13: All right, I'll try to keep it tight. The lady in the video seemed like an extrovert. And it seems like these would be the kind of people who would be more inclined to participate in an observational study. Do you have any methods for recruiting introverts, or making sure that there isn't a bias there? TOMER SHARON: Yes. When you find these people, don't just communicate with them over email. Talk to them. All right? And many people would not volunteer even if you're paying. They would not volunteer themselves to do that. But if you talk to them, you can create rapport and make them feel comfortable, and some would come. You don't have to have-- that woman is what we researchers call a good participant. Because she's really, you know, expressing everything in a very clear way. It's very easy for us to learn from these kinds of people. But we also need the introverts. So we talk to them in advance, to make sure that they're comfortable with it. And not all of them will be. Sometimes we do blind observations. We observe people without them knowing. We just walk-- let's say the grocery shopping-- we just walk into a store. I was in a sports bar a month ago watching people watch games. And they didn't even know I was doing research. AUDIENCE MEMBER 13: One more. 
Do you find that there's a certain period in the session after which you really start, in your mind, recording interactions, because you've warmed up the participant, so to speak? TOMER SHARON: What do you mean, recording? AUDIENCE MEMBER 13: Or like, you might need to warm them up before you can kind of get the behaviors you want, I guess. TOMER SHARON: Yes. They need to feel comfortable. So pretty much in the first five, ten, fifteen minutes, sometimes, I don't really collect any data. I just want to make them feel comfortable sharing stuff with me, or just getting used to the odd situation of doing grocery shopping with somebody with a camera following you. And there are ways to make them feel comfortable. All right. So, for those of you who are here, you don't have time to work on the wrong app. Please, please, please perfectly execute the right plan. Thanks, guys.