GTAC 2015: Mobile Game Test Automation Using Real Devices

>>Yvette Nameth: Next up, we have Jouko from Bitbar to talk about using image recognition for mobile app and game testing.

>>Jouko Kaasila: Hello.

Okay. So my name is Jouko Kaasila. I'm the co-founder and COO of Bitbar, the company behind a pretty awesome online service called Testdroid Cloud, which is a device cloud of Android and iOS devices from every single market in the world, including Japan, China, Korea, and India. We also host private device clouds for people who don't want to share their devices, and we license our technology to companies who build large-scale on-premise device labs.

In the next ten minutes, I'm going to tell you how to get rid of 60 manual testers and replace them with one smart one. This is actually something that happened with one of our customer organizations when they moved from 100% manual testing on real devices to 100% automated testing on real devices. And at the end, I'm going to have a gift for all of you in the spirit of the holiday season. So let's start.

If you look at the charts of the top-grossing apps on Android, on Google Play today, in the top 50 there is only one app that is not a game, and in the top 100 there are only three apps that are not games. So it's pretty obvious who is making the money on the app stores.

Games are usually free to download and monetize with in-app purchases, so it takes a little bit of time, and you have to engage the user long enough, to make any real money. Currently the average customer acquisition cost, the cost per download, on Android and iOS is around $5. That means you have to keep the customer for as many releases as possible in order to recover that $5 investment and make some profit on top of it. So there is very high motivation for testing in general and for making sure that your game works on every single device out there.

So given the commercial incentive, why hasn't all this been automated a long time ago? These companies do have a lot of money; that's not the problem. The problem is that mobile games are not easy to automate. They are very difficult to automate.

The main reason is that games use direct screen access, in the form of OpenGL or ActiveX, and effectively bypass all the operating system's UI controls and services. That means none of the native mobile test automation frameworks we know can access any of the internal data of the game. You have to resort to X and Y clicks and reading the screen buffer; you basically have to test everything from outside the game. And in terms of automation, that's pretty tricky.

The second thing is that games are very performance-driven. They consume a lot of resources: game binaries are two to three gigabytes today, they use a lot of network traffic, memory, CPU, and GPU, and they utilize the sensors. So it's pretty clear that these companies don't test anything on emulators or simulators; it just doesn't make any sense.

And there's another interesting incentive. One of our customers told us that if they can add one more very popular Chinese Android device to the list of their supported devices, it can bring up to 5 million in revenue over the lifetime of the game. So you can invest quite a lot in optimizing your game for that model.

All of these difficulties have led to the current situation: these companies have the resources, games are difficult to automate, and so all the large gaming companies have very large manual device farms in their favorite low-cost locations. That, of course, doesn't scale very well, and it leads to another problem: the QA process causes delays in the actual release of the game, which increases the time to market. That is the real cost of manual testing: things get delayed, and you don't stick to the schedule.

And of course, even with games, there is still a lot of room for manual testing. But that space is more for qualitative testing, like the fluidity of the gameplay and those sorts of aspects, not for going through every single menu in every single language on every single device. That's the job for automation.

The most typical assignment we get is: we only need to automate the basic functionality of the game, on as many devices as possible, and it has to run as fast as possible, because they want to run it on every single build. With these customers, the challenge is never access to the devices; for manual testing they already have hundreds of devices. The problem is how to automate these monotonous routines, and especially how to do it in a way that produces actionable results, such as performance data, logs, screenshots, and videos, at scale. With manual testing you can get those one by one, but you can't get them at scale.

Another curve ball is that these teams don't have programming skills or a scripting background, which severely limits the tools and frameworks that can be used. These teams are also usually quite separated from the actual development team, so any kind of white-box testing or instrumenting the game itself is usually not feasible. They get the APK thrown over the fence, and it's a pure, 100% black-box testing scenario.

With these two requirements in mind, the need for very simple scripting without any computer science skills and the pure 100% black-box approach, we started looking at the open source space to see what sort of building blocks we could find to cobble together a solution that actually solves this kind of problem.

As the foundation of the solution, we selected Appium. The fact that Appium is a cross-platform test automation framework works really well here, because usually the game is exactly the same on Android and iOS. So in the ideal case, you can use the same script to run your automation on both Android and iOS, as in the sketch that follows.
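As an illustration, here is a minimal, hypothetical sketch of that cross-platform idea using the Appium Python client in its older desired-capabilities style (newer client versions use an options object instead). The server URL, app paths, and device names are placeholders, not Bitbar's actual configuration.

```python
# Minimal, hypothetical sketch: the same test body driven by two capability
# sets, one per platform. All values below are placeholders.
from appium import webdriver

APPIUM_SERVER = "http://localhost:4723/wd/hub"  # placeholder endpoint

ANDROID_CAPS = {
    "platformName": "Android",
    "deviceName": "Android Device",
    "app": "/path/to/game.apk",        # placeholder build
}

IOS_CAPS = {
    "platformName": "iOS",
    "deviceName": "iPhone Device",
    "app": "/path/to/game.ipa",        # placeholder build
}

def run_smoke_test(caps):
    """Platform-agnostic test body: only screenshots and x/y taps,
    no element inspection, so it works identically on both platforms."""
    driver = webdriver.Remote(APPIUM_SERVER, caps)
    try:
        driver.get_screenshot_as_png()   # raw screen image for matching later
        driver.tap([(540, 960)])         # plain coordinate tap
    finally:
        driver.quit()

run_smoke_test(ANDROID_CAPS)   # same script...
run_smoke_test(IOS_CAPS)       # ...different capabilities
```

The point of the sketch is that the platform difference is confined to the capability dictionaries; the test logic itself never touches platform-specific UI elements.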

Appium also provides a pretty nice abstracted API for the next level of our automation to run on. In this scenario we cannot use any of the advanced features of Appium, like inspecting objects and those sorts of things; we can only use X and Y clicks and drags.

For that next level, we had to figure out three things. First, how can we know where in the game flow we are, and how do we get that information? Second, once we have that information, where should we click next; what is the next X and Y coordinate to click? And third, after we have clicked something, did we achieve what we wanted; did the game go to the next stage as we expected?

To drive the execution 100% from outside the game, we selected the OpenCV image recognition library. Basically, we feed the library screenshots and reference images, and it does a pixel comparison to decide whether the pixels match or not. When it finds the matching location, it feeds the X and Y coordinates to the Appium script, which then executes the click on the real devices. OpenCV is very good for this because it's customizable and resolution-agnostic, which really matters in a real-device testing context, and it can even recognize images that are stretched or at an angle, so you can even test 3D games.
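The talk does not spell out which OpenCV technique provides that tolerance to scaling and perspective; feature-based matching is one common way to get it. Below is a minimal sketch using ORB keypoints and a homography; the function name `locate`, the `min_matches` threshold, and the ORB parameters are illustrative assumptions, not Bitbar's actual pipeline.

```python
# Sketch of scale/rotation-tolerant matching with OpenCV feature detection.
# One possible way to get the "resolution-agnostic" behavior described above.
import cv2
import numpy as np

def locate(reference_path, screenshot_path, min_matches=10):
    """Return the (x, y) centre of the reference image inside the screenshot,
    or None if it cannot be found with enough confidence."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    scr = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    kp_scr, des_scr = orb.detectAndCompute(scr, None)
    if des_ref is None or des_scr is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_scr), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    # Estimate a homography so the reference can be found even if it is
    # scaled, stretched, or viewed at an angle (e.g. inside a 3D scene).
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_scr[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    h, w = ref.shape
    centre = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)
    return tuple(centre[0][0])  # x, y in screenshot coordinates
```

Plain template matching is cheaper when the artwork always appears at a fixed scale; feature matching earns its cost when resolutions and aspect ratios vary across devices.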

So the outcome is that there are only two simple tasks the test automation engineers need to do: they cut the reference images, and they change only one line of code to define what kind of click needs to be done when the reference matches and the coordinates come back. A rough sketch of that loop follows.
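Here is a minimal sketch of the screenshot, match, and tap loop described above, using plain OpenCV template matching. The helper names `find_on_screen` and `tap_when_visible`, the 0.8 threshold, and the reference image path are hypothetical placeholders, not Bitbar's actual code or assets.

```python
# Sketch of the screenshot -> image match -> x/y tap loop. Assumes an
# already-created Appium `driver` (see the capability sketch earlier).
import time

import cv2
import numpy as np

def find_on_screen(driver, reference_path, threshold=0.8):
    """Grab the current screen via Appium and return the centre of the best
    template match, or None if the match score is below the threshold."""
    png = driver.get_screenshot_as_png()
    screen = cv2.imdecode(np.frombuffer(png, np.uint8), cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)   # best match location
    if score < threshold:
        return None
    h, w = template.shape
    return top_left[0] + w // 2, top_left[1] + h // 2

def tap_when_visible(driver, reference_path, timeout=30):
    """Poll the screen until the reference image appears, then tap it.
    This is roughly the 'one line per step' the engineer edits: which
    reference image to wait for and what kind of tap to perform."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        point = find_on_screen(driver, reference_path)
        if point:
            driver.tap([point])        # plain x/y tap, black-box style
            return True
        time.sleep(1)
    return False

# Example step definition: tap_when_visible(driver, "reference/play_button.png")
```

Note that on some devices (for example iOS retina screens) screenshot pixels and tap coordinates differ by a scale factor; a real harness would have to account for that.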

Parallelizing this was quite interesting. In a remote device cloud context, the Appium client sits on a remote machine, typically a developer workstation or a continuous integration server, on the other side of the Internet from where the devices are. Appium creates a WebDriver session from the Appium client to the Appium server, which is in the cloud, and all the data goes over that WebDriver session, including the logs, the screenshots, and all the control commands. The way you scale this is that you just spin up a lot of WebDriver sessions on the same remote machine and keep running them, along the lines of the sketch below.
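For illustration, here is a sketch of that naive client-side scaling: one machine opening many concurrent WebDriver sessions toward a cloud-hosted Appium endpoint, one per device. The endpoint URL, device names, and the `run_image_recognition_test` body are hypothetical placeholders. As the next paragraph explains, this is exactly the setup that turns the client machine into the bottleneck.

```python
# Sketch of client-side parallelization: many sessions from one machine.
# All names and URLs below are placeholders.
from concurrent.futures import ThreadPoolExecutor

from appium import webdriver

CLOUD_APPIUM = "https://appium.example-cloud.com/wd/hub"        # placeholder
DEVICES = ["Nexus 5", "Galaxy S6", "Xperia Z3", "Moto G"]       # placeholders

def run_on_device(device_name):
    caps = {
        "platformName": "Android",
        "deviceName": device_name,
        "app": "/path/to/game.apk",    # placeholder build
    }
    driver = webdriver.Remote(CLOUD_APPIUM, caps)
    try:
        # All screenshots, logs, and commands flow back over this single
        # WebDriver session, which is why the client machine saturates first.
        run_image_recognition_test(driver)   # hypothetical test body
    finally:
        driver.quit()

with ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
    pool.map(run_on_device, DEVICES)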

But that effectively makes the remote machine the bottleneck. Especially in our scenario, where we run the OpenCV image recognition stack on top of the Appium client and we are transferring very large screenshots all the time for all the devices, we hit a performance wall at around five to seven devices per remote machine. It didn't scale, so we had to look for something else.

Our solution was to move all the processing, the whole test execution, to the server side. We created virtual machine images that include the whole stack, including OpenCV, the Appium client, and the Appium server, and each virtual machine automatically connects to one individual Android or iOS device at a time. On one physical server we can run as many of these as we want, and the server has enough processing power to do all the pixel comparison, so we don't have to transfer the screenshots over the Internet. That has actually made this very robust and very fast.
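To show what changes for the test script itself, here is a short hypothetical sketch of that server-side variant: the same client code, but talking to an Appium server on localhost inside the VM and pinned to a single device by UDID. The UDID and app path are placeholders; port 4723 is simply Appium's default.

```python
# Sketch of a per-device, server-side session: client, OpenCV, and Appium
# server all live in the same VM next to the device. Values are placeholders.
from appium import webdriver

caps = {
    "platformName": "Android",
    "deviceName": "Android Device",
    "udid": "0123456789ABCDEF",        # placeholder: the one device this VM owns
    "app": "/opt/builds/game.apk",     # placeholder path inside the VM
}

# Local Appium server in the same VM: screenshots and logs never cross the Internet.
driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps)
try:
    run_image_recognition_test(driver)   # hypothetical test body, as above
finally:
    driver.quit()
```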

So in a way, it's like Appium on steroids, because it runs about three times faster than normal Appium, and it's very scalable. The end user doesn't have to worry about how to spin this up or how to scale it. From the end user's point of view, it's like running one session: they just upload a script, and the online system scales all that execution for them.

So we prepared a demo of how all this works in practice. Can we start the video?

[ Video ]

>>Jouko Kaasila: Okay. So that's the demo. And as promised, the gift: all the sample code, all the instructions, everything. You can get access to it at testdroid.com/gtac15. Questions?

>>Yvette Nameth: Sadly, we actually don't have time for questions.

>>Jouko Kaasila: Good.

>>Yvette Nameth: Good? Yeah, well, you are standing between these people and food.

[ Laughter ]

>>Jouko Kaasila: Yes. You can reach me today and tomorrow; I'm going to be here the whole day. So for any questions on automating game testing or scaling Appium on the server side, just reach me.

[ Applause ]
