Test-Driven Development on Android with the Android Testing Support Library (Google I/O '17)

  • [MUSIC PLAYING]

  • JONATHAN GERRISH: Hello, everyone.

  • Welcome to this morning session on test-driven development

  • for the Android platform.

  • My name is Jonathan Gerrish and I'm part of the Mobile Ninjas.

  • We're a small team within Google who

  • are passionate about software testing.

  • Can I get a quick show of hands in the audience?

  • How many of you are actually writing tests

  • as part of your normal software development practice?

  • That's fantastic.

  • OK.

  • So if you've written tests for Android before,

  • you've probably used some of our tools.

  • We developed the Android testing support library,

  • which includes the JUnit4 test runner

  • and rules, the Espresso UI testing framework,

  • and we're also active contributors

  • to Robolectric, the open source Android unit testing framework.

  • So everyone is telling you to write tests,

  • but why should you really do it?

  • It's true that tests take time to write.

  • They're adding code to your code base.

  • And perhaps you've been in this situation

  • before, where your manager or client has been telling you

  • that they're slowing you down.

  • But there's so many compelling reasons to write tests.

  • Tests give you rapid feedback on failures.

  • So failures that are spotted earlier on in the development

  • cycle are far easier to fix than ones that have gone live.

  • Secondly, tests give you a safety net.

  • With a good suite of tests, you're

  • free to refactor, clean up, and optimize

  • your code, safe in the knowledge that you're not going

  • to break existing behavior.

  • Tests are really the backbone of sustainable software

  • development.

  • You'll be able to maintain a stable velocity

  • throughout the lifetime of your project,

  • and you're going to avoid the boom-bust cycles of feature crunch

  • time and the accumulation of technical debt.

  • So in software testing, there exists the concept

  • of the testing pyramid.

  • And this is made up of a number of layers.

  • And each layer brings with it its own trade-offs that you're

  • going to have to weigh.

  • At the lowest layer is the small tests, or the unit tests.

  • And these need to be very fast and highly focused.

  • That's why we recommend you run these kinds of tests

  • as what are known as local unit tests.

  • And these are going to run on your local desktop machine.

  • The trade-off you're making with these kinds of tests

  • is in fidelity, because you're not running

  • in a realistic environment and you're probably substituting

  • in a bunch of mocks and fakes.

  • As we move up the pyramid, we're now

  • into the realms of integration testing and end-to-end testing.

  • And the key with these kinds of tests is to bring in fidelity.

  • That's why we recommend that you run

  • these kinds of tests on a real device or an emulator.

  • These are the kinds of tests that

  • are going to tell you that your software actually works.

  • However, they are less focused, so a failure

  • in one of these kinds of tests might take a little longer

  • to track down than it would in a unit test.

  • And one of the big trade-offs you're making

  • is in test execution speed.

  • Because you're assembling multiple components,

  • they all have to be built and then packaged,

  • shipped to a device where the tests are run,

  • and the results are collected back.

  • That's going to take extra time.

  • There's no single layer in this testing pyramid

  • that can suffice, so what you need to do

  • is to blend in tests at each different tier,

  • leveraging the strengths of one category

  • to offset the trade-offs of another.

  • There's no real hard and fast rule here,

  • but Google's own internal testing experts

  • recommend the 70-20-10 rule of thumb

  • as the ratio between small, medium, and large tests.

  • Let's take a look at our workflow.

  • So with test-driven development, the idea

  • is that you start by writing your tests,

  • then you implement the code to make those tests pass.

  • And then when your tests are green, you can submit.

  • Again, a quick show of hands.

  • Who out there has test-driven their code,

  • tried test-driven development in the past?

  • OK.

  • Cool.

  • We like test-driven development because it

  • makes you think about the design of your application up front.

  • It gives due consideration to APIs

  • and the structure of your code.

  • With test-driven development, you're

  • also going to be writing less code because you only

  • write the code necessary to satisfy your tests.

  • This will enable you to release early and often.

  • As you're constantly green, you'll

  • be able to deploy a working application at a moment's

  • notice.

  • If you're following the test pyramid,

  • the workflow is going to look something like this.

  • First of all, we have a larger outer iteration

  • that's concerned with feature development.

  • Here, it's driven by a UI test, and the mantra

  • with test-driven development is Red, Green, Refactor.

  • We start off with a failing test,

  • we implement the code to make that test pass,

  • and then we refactor.

  • Inside the larger iteration are a series

  • of smaller iterations and these are

  • concerned with the unit tests.

  • Here, you're building the units required

  • to make the feature pass.

  • And again, you use the same mantra here.

  • Red, Green, Refactor.

  • Red, Green, Refactor.

  • Let's take a look at an example application.

  • The feature we're going to implement today

  • is the Add Notes flow to a sample note-taking application.

  • If we take a look at our mock-ups,

  • we can see that we start on a notes list screen

  • full of some existing notes.

  • There's a floating action button down at the bottom.

  • And the user will click this, taking them

  • onto the new add notes screen.

  • Here, they can enter a title and a description for their note

  • before clicking Save.

  • The note will be persisted and then

  • they'll return back to their notes list screen,

  • where they can see their newly added note,

  • along with any other notes that previously existed.

  • Coming back to our workflow for a moment,

  • remember that we start with a failing UI test.

  • So let's take a look at how this test would look using Espresso,

  • the UI testing framework.

  • The first step is to click on the Add Note button.

  • Then we enter the title and description and click Save

  • before returning to the notes list screen.

  • And here, we're going to verify that the note that we just

  • added actually shows up.
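
A minimal sketch of the flow just described (the slide code isn't in the transcript; the activity class, view IDs, and strings are assumptions, using the 2017-era android.support.test APIs):

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.closeSoftKeyboard;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class AddNoteFlowTest {

    // Launch the notes list screen before each test.
    @Rule
    public ActivityTestRule<NotesActivity> activityRule =
            new ActivityTestRule<>(NotesActivity.class);

    @Test
    public void addNote_showsUpInNotesList() {
        // Step 1: click on the Add Note button.
        onView(withId(R.id.fab_add_note)).perform(click());

        // Step 2: enter a title and description, then click Save.
        onView(withId(R.id.note_title)).perform(typeText("Title"));
        onView(withId(R.id.note_description))
                .perform(typeText("Description"), closeSoftKeyboard());
        onView(withId(R.id.save_note)).perform(click());

        // Step 3: back on the notes list, verify the new note shows up.
        onView(withText("Title")).check(matches(isDisplayed()));
    }
}
```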

  • Now remember, with test-driven development,

  • we've not implemented the code just yet.

  • All we have to do is implement enough

  • of the application to satisfy the specification of our tests.

  • So an empty activity, and just the resources that we need,

  • will suffice.

  • Once we have that, we can run our test

  • and we'll see it'll fail.

  • Now we have to implement this feature.

  • So applications are built up of many small units.

  • These are small, highly focused, specialized components

  • that do one thing and they do it well.

  • Collections of these small units are then

  • assembled together so that their collaborations will

  • satisfy our feature.

  • Let's take a moment to summarize the key characteristics that

  • make up a good unit test.

  • As well as the normal conditions,

  • you want to test your failure conditions, invalid

  • inputs, and boundary conditions.

  • You're going to end up writing a lot of unit tests.

  • Unit tests must always give you the same result every time.

  • So avoid depending on things that might change--

  • For example, an external server or the current time of day--

  • because this is going to bring flakiness into your unit tests.

  • Unit tests should exercise one specific aspect of your code

  • at a time.

  • You want to see that a failure in a unit test

  • will lead you, very quickly, to the actual bug in your code.

  • And when you write unit tests, avoid

  • making too many assumptions on the actual implementation

  • of your code.

  • You want your unit test to test behavior.

  • That way, you avoid rewriting your test

  • when your implementation is changing.

  • And one of the most important aspects of unit tests

  • is they've got to be fast, especially because you're

  • writing so many of them and, during the TDD workflow, running

  • them rapidly.

  • It would be terrible if you were discouraged

  • from writing tests or refactoring your code because

  • of the pain in the execution time of those tests.

  • And finally, unit tests are an excellent source

  • of documentation, one that's constantly

  • evolving with the code as it changes,

  • unlike static documents that will stagnate over time.

  • Let's try a unit test for our Add Notes activity.

  • This activity is going to take in user input

  • and then we're going to persist it

  • to local storage on the device.

  • OK.

  • So we're going to create the Add Note activity class,

  • and this will extend Activity, which is an Android framework

  • class.

  • It has a view which is going to be inflated with a layout.

  • The user will enter their data here.

  • And then we're going to persist that note into Android

  • SharedPreferences mechanism.

  • It's conceivable that, as our application evolves,

  • so do our requirements.

  • And perhaps our storage requirements

  • evolve to persist the notes onto cloud storage

  • and we have to build some kind of a synchronization mechanism

  • for local storage for the offline use case.

  • And in these cases, we see opportunities for abstraction.

  • We might, in this example, see that we

  • can extract a notes repository.

  • However, one of the key aspects of test-driven development

  • is that we only start by writing the simplest case first,

  • and then we iterate.

  • So we're going to resist the temptation to do this early.

  • Let's take a look at a sample of what an idealized unit

  • test would look like.

  • They're generally built up into three stages.

  • The first stage is you're setting

  • the conditions for the test, and this

  • includes preparing the environment,

  • setting up your dependencies with their required state,

  • and preparing any input data.

  • Next, we'll exercise the code under test, before finally,

  • making assertions on the results or the state.

  • I like to clearly separate each of these three

  • stages of the test and bring the pertinent aspects of each test

  • front and center to make for a really readable test.
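
As a shape, those three stages might look like this minimal sketch (NotesRepository, InMemoryNotesRepository, and Note are hypothetical names, and the assertion uses Google's Truth library):

```java
import static com.google.common.truth.Truth.assertThat;

import org.junit.Test;

public class NotesRepositoryTest {

    @Test
    public void addNote_persistsNote() {
        // 1. Set the conditions: environment, dependencies, input data.
        NotesRepository repository = new InMemoryNotesRepository();
        Note note = new Note("Title", "Description");

        // 2. Exercise the code under test.
        repository.add(note);

        // 3. Make assertions on the result or resulting state.
        assertThat(repository.getNotes()).containsExactly(note);
    }
}
```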

  • Up until now with the Android platform,

  • you've been writing your unit tests using

  • the mockable JAR in conjunction with a mocking

  • library, such as Mockito.

  • And let's take a look at an example

  • of a test written with Mockito.

  • OK.

  • Wow.

  • That's a lot of code.

  • OK.

  • So because we have so many interactions with the Android

  • framework, we're going to need to provide

  • stubbing behavior for all of them in order just to make--

  • just to satisfy the execution paths of our test.

  • And furthermore, because Android uses a lot of static methods,

  • we're forced to introduce a second mocking

  • library, PowerMock, that will handle

  • this special case for us.

  • And there are also some pretty bad code smells here.

  • Let's take a look.

  • You see, we're forced to spy on the activity under test,

  • and we need to do this to modify its behavior,

  • stubbing it out and providing some no-ops.

  • So we're moving out of the realms of black box testing

  • here.

  • And finally, at the end, we're making

  • assertions about the implementation details.

  • And if these change, our test will need to change, too.
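
The slide itself isn't in the transcript, but a test in the style being criticized would look something like this hedged reconstruction (AddNoteActivity, NotesRepository, the view IDs, and the injection hooks are all hypothetical):

```java
import static org.mockito.Matchers.anyInt;
import static org.mockito.Matchers.anyString;
import static org.mockito.Mockito.RETURNS_DEEP_STUBS;
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import android.text.TextUtils;
import android.widget.EditText;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// PowerMock is needed only because Android uses static methods.
@RunWith(PowerMockRunner.class)
@PrepareForTest(TextUtils.class)
public class AddNoteActivityMockitoTest {

    @Test
    public void clickingSave_storesNote() {
        // Stub the static framework call on our execution path.
        PowerMockito.mockStatic(TextUtils.class);
        when(TextUtils.isEmpty(anyString())).thenReturn(false);

        // Spy on the activity under test and no-op the framework
        // behavior that can't run on a plain JVM -- we've already
        // left black-box territory.
        AddNoteActivity activity = spy(new AddNoteActivity());
        doNothing().when(activity).setContentView(anyInt());

        // Stub every framework interaction the test will touch.
        EditText title = mock(EditText.class, RETURNS_DEEP_STUBS);
        EditText description = mock(EditText.class, RETURNS_DEEP_STUBS);
        when(title.getText().toString()).thenReturn("Title");
        when(description.getText().toString()).thenReturn("Description");
        doReturn(title).when(activity).findViewById(R.id.note_title);
        doReturn(description).when(activity).findViewById(R.id.note_description);

        NotesRepository repository = mock(NotesRepository.class);
        activity.setRepository(repository); // hypothetical injection hook

        activity.onSaveClicked(); // hypothetical click handler

        // Asserting on implementation details: if they change,
        // this test has to change, too.
        verify(repository).add(new Note("Title", "Description"));
    }
}
```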

  • So remembering the characteristics of a good unit

  • test, let's take a moment to score this particular test.

  • While it is very focused, we're just

  • testing the happy path of our Add Notes flow,

  • and it's certainly fast because it's running on the local JVM.

  • However, we were making rather a lot

  • of assumptions about the implementation in that test.

  • And with this, if any of our implementation changes,

  • it's likely we'll need to rewrite

  • that test substantially.

  • And finally, all that excess boilerplate stubbing

  • is really distracting.

  • It's distracting away from the key aspects

  • of the test, the conditions of the test

  • that you're trying to document.

  • Well luckily, there's a tool that helps

  • address some of these issues.

  • So introducing Robolectric.

  • Robolectric is an open source Android unit testing

  • tool that we are actively contributing to.

  • And to tell you more about how you

  • can write great tests with Robolectric,

  • I'm going to hand you over to Christian Williams,

  • the original author of Robolectric.

  • [APPLAUSE]

  • CHRISTIAN WILLIAMS: Thanks, Jonathan.

  • It's awesome to see so many people who are

  • into Android testing and TDD.

  • Yeah, Robolectric is this scrappy little open source

  • project that I started hacking on back

  • in the early days of Android Testing

  • because I was just super annoyed at how long it took to deploy

  • and run tests on an emulator.

  • And it's kind of been a side project

  • of a bunch of different people until last year, when

  • I had the privilege of joining my friend Jonathan at Google,

  • where he was already working on improving Robolectric

  • for Google's own internal test suites.

  • And since then, we've been really beefing up Robolectric

  • and contributing back to the open source project.

  • Today, Robolectric isn't an officially supported part

  • of the Android testing platform, but we

  • found that, when it's used correctly,

  • it can be a really useful part of your testing strategy.

  • And I'm going to show you a little bit about how

  • you can do that, too.

  • Let's go back to our notes unit test

  • and see how we might approach it with Robolectric.

  • Since Robolectric runs as a local unit test,

  • it'll still be running on your workstation,

  • not on an emulator.

  • But Robolectric provides a little Android sandbox

  • next to your test, where the actual SDK code is running.

  • You'll have access to your activities, your layouts,

  • and views, and resources.

  • And you can generally just call most Android methods

  • and they'll kind of work like you'd expect.

  • There are parts of the Android framework

  • that rely on native code, talk to hardware, or interact

  • with external system services.

  • So for that, Robolectric provides a sort of test double

  • that we call Shadows.

  • And those provide alternative implementations

  • of that code that are appropriate for unit testing.

  • Remember that test that we just saw that had 20 lines

  • of mock set-up code?

  • Let's see how that looks in Robolectric.

  • That's a lot less.

  • We've gotten rid of all the boilerplate.

  • The test is about half the size and much more concise.

  • We're not forced to think about the implementation details

  • as we're writing the test, which is quite nice.

  • Robolectric is going to set up the application

  • according to your manifest.

  • And when we ask it to set up our activity,

  • it runs it through the appropriate lifecycle

  • to get it into the right state.

  • Inflates views, all that stuff that we expect on a device.

  • So we can just interact with it as if we're on a device.

  • So we add some text to some fields, click on it,

  • and assert that it adds a note to the repository.
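
A hedged sketch of that Robolectric version, against the Robolectric 3.x API (same hypothetical AddNoteActivity, view IDs, and repository accessor as above):

```java
import static com.google.common.truth.Truth.assertThat;

import android.widget.EditText;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.Robolectric;
import org.robolectric.RobolectricTestRunner;

@RunWith(RobolectricTestRunner.class)
public class AddNoteActivityTest {

    @Test
    public void clickingSave_storesNote() {
        // Robolectric inflates the real layout and resources and drives
        // the activity through its lifecycle into the resumed state.
        AddNoteActivity activity =
                Robolectric.setupActivity(AddNoteActivity.class);

        // Interact with the real views, as if on a device.
        ((EditText) activity.findViewById(R.id.note_title)).setText("Title");
        ((EditText) activity.findViewById(R.id.note_description)).setText("Description");
        activity.findViewById(R.id.save_note).performClick();

        // Assert on behavior: the note was added to the repository.
        assertThat(activity.getRepository().getNotes()).hasSize(1);
    }
}
```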

  • Now, notice that we're not actually

  • going as far as the UI test that we wrote at the very beginning.

  • We're not asserting that the new note

  • appears on the view screen.

  • That would be the job of another unit test.

  • Now, I mentioned Robolectric's shadows.

  • They actually give extended testing

  • APIs to some Android classes that

  • let us query internal state and sometimes change

  • their behavior.

  • In this example, we were asking the application

  • if any of our activities requested that an intent be

  • launched during the test.

  • We could use that to assert that, after saving

  • the note to the repository, we're

  • going to go to the View Notes activity.

  • Similar testing APIs exist for simulating hardware responses

  • or external services, things like that.
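
The shadow API being described might be used like this sketch (ViewNotesActivity is a hypothetical target activity; shadowOf and getNextStartedActivity are Robolectric 3.x APIs):

```java
import static com.google.common.truth.Truth.assertThat;

import android.content.Intent;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.Robolectric;
import org.robolectric.RobolectricTestRunner;
import org.robolectric.RuntimeEnvironment;
import org.robolectric.Shadows;

@RunWith(RobolectricTestRunner.class)
public class AddNoteNavigationTest {

    @Test
    public void save_navigatesToViewNotes() {
        AddNoteActivity activity =
                Robolectric.setupActivity(AddNoteActivity.class);

        activity.findViewById(R.id.save_note).performClick();

        // The application's shadow records any start-activity
        // requests made during the test.
        Intent next = Shadows.shadowOf(RuntimeEnvironment.application)
                .getNextStartedActivity();
        assertThat(next.getComponent().getClassName())
                .isEqualTo(ViewNotesActivity.class.getName());
    }
}
```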

  • At this point, we have a failing unit test.

  • And now we get to--

  • we're ready for the easy part, writing the production code.

  • In the spirit of TDD, we're only going

  • to write exactly as much as is needed to make the test pass.

  • No more, no speculative coding.

  • So we inflate a layout, attach a click handler,

  • and when the click happens, we create a note

  • and add it to the repository.
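
A minimal sketch of that production code, writing only what the failing test demands (the layout, view IDs, and repository singleton are assumptions):

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.EditText;

public class AddNoteActivity extends Activity {

    private final NotesRepository repository = NotesRepository.getInstance();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Inflate the layout.
        setContentView(R.layout.activity_add_note);

        // Attach a click handler: on click, create the note
        // and add it to the repository. No more, no less.
        findViewById(R.id.save_note).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                EditText title = (EditText) findViewById(R.id.note_title);
                EditText description = (EditText) findViewById(R.id.note_description);
                repository.add(new Note(
                        title.getText().toString(),
                        description.getText().toString()));
            }
        });
    }
}
```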

  • Now we can run the test, see it pass.

  • If there's some improvement we can make to the code,

  • we'll go back and refactor, and then we repeat.

  • This is where you get the thoroughness.

  • And Robolectric is super handy for this

  • because it gives you nice, fast test runs.

  • You can get into a comfy cycle.

  • We want to not just test the happy path here.

  • We're going to test all the different cases that our code

  • is likely to encounter.

  • So for example, input validation and external conditions

  • like the network being down and stuff like that.

  • Robolectric can also help with simulating device conditions

  • that you'll encounter.

  • For example, you can specify qualifiers

  • that the test should run with.

  • Here, we're saying a certain screen size and orientation,

  • which might change the layout a bit.

  • You can ask Robolectric to run your test under a specific SDK.

  • So we'll say Jelly Bean here.

  • And it actually uses the SDK code from that version.

  • And you can also tell Robolectric,

  • I want to run this test under every SDK

  • that you support, or some range of them

  • that you're interested in.

  • And we support Jelly Bean through O right now.
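
Those knobs are Robolectric 3.x's @Config annotation; roughly like this sketch (the qualifier string and test names are illustrative):

```java
import android.os.Build;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.RobolectricTestRunner;
import org.robolectric.annotation.Config;

@RunWith(RobolectricTestRunner.class)
public class AddNoteConfigTest {

    // A certain screen size and orientation, via resource qualifiers.
    @Test
    @Config(qualifiers = "xlarge-land")
    public void layout_adaptsToLargeLandscape() { /* ... */ }

    // Run against the actual Jelly Bean (API 16) SDK code.
    @Test
    @Config(sdk = Build.VERSION_CODES.JELLY_BEAN)
    public void note_savesOnJellyBean() { /* ... */ }

    // Or run under every supported SDK in a range.
    @Test
    @Config(minSdk = Build.VERSION_CODES.JELLY_BEAN,
            maxSdk = Build.VERSION_CODES.O)
    public void note_savesOnAllSupportedSdks() { /* ... */ }
}
```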

  • At Google, we rely really heavily on Robolectric

  • and we're investing in making it better.

  • We've got dozens of apps, including

  • these, that have hundreds of thousands

  • of unit tests running internally.

  • So it's well battle-tested.

  • And we've also recently started running the Android CTS, which

  • is the official Android test suite, against Robolectric.

  • And we're about 70% passing right now, getting better

  • with every release.

  • So if you've used Robolectric in the past

  • and found that it's come up short,

  • or if you're stuck on an old version,

  • I definitely recommend that you get up to the latest

  • because it's come a long way.

  • We've been working on reducing friction

  • in integrating Robolectric with the Android toolchain.

  • It works now very well with Android Studio, with Gradle.

  • And we've got support for Bazel, Google's own open source

  • build system, coming soon.

  • Robolectric isn't a one-size-fits-all testing tool.

  • It's fast, but it's not 100% identical to Android

  • in every way, so you want to use it judiciously.

  • As I said before, avoid writing unit tests that link

  • multiple activities together.

  • That's not so much a unit test.

  • That's much better for Espresso.

  • If you find yourself dealing with multiple threads,

  • synchronization issues, stuff like

  • that, you're also probably not writing a unit test,

  • so it's not a good fit for Robolectric.

  • And particularly, avoid using Robolectric

  • to test your integration with Android APIs and things

  • like Google Play services.

  • You really need to have higher-level tests to give you

  • confidence that that's working.

  • So now that we've got some passing unit tests,

  • I'm going to hand you over to my colleague, Stephan,

  • to talk about higher-level testing.

  • [APPLAUSE]

  • STEPHAN: Thank you, Christian.

  • Let's go back to our developer workflow diagram.

  • At this point, we hopefully have a ton of unit tests

  • and they thoroughly test all our business logic.

  • But let's switch gears and try to see how we can actually

  • write some integration tests to see how these units integrate,

  • and how they actually integrate with Android

  • and how they run in a real environment.

  • On Android, these tests are usually referred to

  • as instrumentation tests.

  • And I'm pretty sure most of you have written an instrumentation

  • test before.

  • And even though they look super simple on the surface,

  • there's actually a lot going on under the hood,

  • if you think about it.

  • You have to compile the code, you

  • have to process your resources, you

  • have to bring up a full system image and then run your test.

  • And there's a lot of things that go on on various levels

  • of the Android stack.

  • So these tests give you high fidelity,

  • but as John was mentioning, they come

  • at a cost, which is they are slower

  • and sometimes, they're more flaky than unit tests.

  • So let's actually see how this works

  • in your day-to-day development flow.

  • Let's say you're in Android Studio.

  • You've just written your new Espresso test

  • and you hit the Run button to run the test.

  • So the first thing that Android Studio is going to do

  • is it's going to install two APKs for you, the test

  • APK and the app under test.

  • Now, the test APK contains Android JUnit Runner,

  • it contains the test cases, and your test manifest.

  • And then, in order to run the test,

  • Android Studio calls, under the hood, adb shell am instrument.

  • And then Android JUnit Runner will use instrumentation

  • to control your app under test.

  • What is instrumentation?

  • I think you guys may have noticed this.

  • It's a top-level tag in your manifest, and why is that?

  • Instrumentation is actually something

  • that's used deeply inside the Android framework,

  • and it's used to control the lifecycle of your activities,

  • for instance.

  • So if you think about it, it's a perfect interception point

  • that we can use to inject the test runner.

  • And that's why Android JUnit Runner is nothing more or less

  • than instrumentation.

  • Let's go a little bit deeper and see

  • what happens when Android Studio actually runs your test.

  • It runs adb shell am instrument, which

  • will end up calling out to Activity Manager.

  • Activity Manager will then call, at one point,

  • onCreate on your instrumentation.

  • Now that we know that Android JUnit Runner is

  • our instrumentation, at this point,

  • it will call onCreate on the runner.

  • And then the runner is going to do a few things for you.

  • It's going to collect all your tests.

  • Then it's going to run all these tests sequentially

  • and then it's reporting back the results.

  • One thing to note here is that Android JUnit Runner--

  • and you may have noticed this--

  • runs in the same process as your application.

  • And more importantly, if you usually

  • use Android JUnit Runner, it runs all the tests

  • in one single instrumentation invocation.

  • Android JUnit Runner is heavily used inside Google.

  • We run billions of tests each month

  • using Android JUnit Runner.

  • And while doing so, we saw some challenges that we faced

  • and that we had to solve.

  • One thing that we see a lot is shared state.

  • And I'm not talking about the kind of shared state

  • that you control and that you code in your app.

  • I'm talking about the shared state that builds up in memory,

  • builds up on disk, and makes your tests fail

  • for no reason or unpredictable conditions.

  • And this, among other things, will, at one point,

  • lead to crashes.

  • But in the model that I just showed you,

  • if one of your tests crashes your instrumentation,

  • it will take the whole app process with it

  • and all the subsequent tests will not run anymore.

  • And this is obviously a problem for large test suites.

  • Similarly, if you think about debugging,

  • if you run a couple of thousand tests in one invocation,

  • just think about what your logcat

  • will look like when you have to go through it for debugging.

  • So that's why inside of Google, we have

  • taken a different approach.

  • Inside of Google, every test method

  • runs in its own instrumentation invocation.

  • Now, you can do this today, right?

  • You can make multiple ADB calls.

  • You can use a runner arg and maintain your custom script.

  • But the problem is it might not really

  • integrate well with your development environment.

  • That's why, today, I'm happy to announce the Android Test

  • Orchestrator.

  • And the Android Test Orchestrator

  • is a way that allows you to run tests like we do in Google.

  • It's a service APK that runs in the background

  • and runs each test in its own instrumentation invocation.

  • And this, obviously, has benefits, right?

  • There's no shared state anymore.

  • And in fact, the Android Test Orchestrator

  • runs pm clear before it runs each test.

  • More so, crashes are now completely isolated

  • because we have single instrumentation invocations.

  • If a crash happens, all the subsequent tests

  • will still run.

  • And similarly, for debugging, all the debugging information

  • that you collect and pull off the device

  • is now scoped to a particular test.

  • This is great and we benefit a lot from it inside of Google.

  • Let's see how it actually works.

  • On top of installing the test APK and the app under test,

  • what we do now is we install a third APK on our device,

  • and it's a service APK running in the background containing

  • the orchestrator.

  • And then, instead of running multiple ADB commands,

  • we run a single ADB command.

  • But we don't instrument the app under test.

  • We instrument the orchestrator directly.

  • And then the orchestrator is going

  • to do all its work on the device.

  • So it's going to use Android JUnit Runner to collect

  • your tests, but then it's going to run

  • each of those tests in its own invocation.

  • And it's amazing and I'm pretty sure you will like this a lot.

  • And it will be available in the next Android Testing Support

  • library release.

  • And more importantly, we will have integration

  • with Android Studio.

  • It will be available in Gradle and we will also

  • have integration with Firebase Test

  • Lab coming later this year.

  • Now that we know how to run our test,

  • let's actually look at how we can write these integration

  • tests.

  • And usually, if you write a UI test on Android,

  • you're using the Espresso testing framework.

  • And as you can see, Espresso has this nice and simple API.

  • And it actually works quite simply.

  • What it does is you give us a view matcher

  • and we find a view in the hierarchy that

  • matches that matcher.

  • And then, we either perform a view action

  • or check a view assertion.
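
In code, that matcher/action/assertion split is the whole surface of the API; a schematic sketch (IDs and text are assumptions):

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import org.junit.Test;

public class EspressoApiShapeTest {

    @Test
    public void matcherThenActionOrAssertion() {
        // Find a view with a ViewMatcher, then perform a ViewAction on it...
        onView(withId(R.id.save_note)).perform(click());

        // ...or find a view and check a ViewAssertion against it.
        onView(withText("Title")).check(matches(isDisplayed()));
    }
}
```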

  • And because this API is so simple,

  • it's the perfect tool, too, for fast TDD prototyping

  • of UI tests.

  • But in order to provide you such a simple API,

  • there's a lot of things that need to go on under the hood.

  • So let's actually look at how Espresso works.

  • So when you call onView and give us your matcher,

  • the first thing that we're going to do

  • is we're going to create a view interaction for you.

  • And then the next thing is we make sure

  • that your app is in an idling, sane state

  • before we are ready to interact with it.

  • And you can think of this as the core of Espresso.

  • And Espresso is well-known for its synchronization guarantees.

  • And the way we do it is we loop the message

  • queue until there are no messages

  • for a reasonable amount of time.

  • We look at all your idling resources

  • and make sure they're idle.

  • And we also look at Async Tasks to make sure there's

  • no background work running.

  • And only if we know that your app is

  • in a sane and stable state and we're ready to interact,

  • we're going to move on.

  • And then we're going to traverse the view hierarchy

  • and find the view that matches your matcher.

  • And once we have the view, we're then

  • going to perform a view action or a view assertion.

  • And this is great.

  • So now let's circle back to the test

  • that we showed you in the beginning

  • and have a closer look now that we know how Espresso works.

  • So in the first line, as you may remember,

  • we tried to click on the Add Note button.

  • And here, we're just going to use a withId matcher, which

  • is a simple matcher that is matching a view in the view

  • hierarchy according to its ID.

  • The next thing we want to do is we want to click on the View

  • and we use a Click View action for this.

  • Now, where it gets interesting is the next line.

  • Because on this line, we want to type the title and description.

  • And we want to use a type text action for that.

  • But here, all the Espresso synchronization guarantees

  • will kick in and only if we know that we

  • are ready to interact with your application,

  • we're going to invoke the type text action.

  • And this is great because it frees you

  • from adding additional boilerplate

  • code and additional sleep code to your test.

  • So similarly, we're going to save the note

  • and then we're going to verify that it's displayed on screen.

  • And this is great.

  • Now we know how Espresso works and we

  • know how it's a great tool to do test-driven development.

  • And now I'm going to hand it over to Nick

  • to talk a little bit more on how you can improve your UI tests

  • and how to improve your large and medium testing strategy.

  • [APPLAUSE]

  • NICK KOROSTELEV: Thank you, Stephan.

  • So one good attribute of a UI test

  • is a test that never sleeps.

  • So let's go back to our example to illustrate

  • this point a little bit further.

  • In our example, as you remember, we

  • have a note that we save into memory,

  • which is pretty fast and pretty reliable.

  • However, in reality, as your app grows,

  • you probably want to extend this functionality

  • and save your note to the cloud or Google Drive, for example.

  • So when running our large end-to-end test,

  • we want to use a real environment

  • where we hit the real server.

  • And depending on your network connection,

  • this may take a long time, so you probably

  • want to do this in the background.

  • Now the problem is that Espresso synchronization is not

  • aware of any of your long-running tasks.

  • This is somewhere where developers would probably

  • do something as ugly as putting a thread sleep in their code.

  • But with Espresso, it is not actually required

  • because you can write an Idling Resource, where

  • an idling resource is a simple interface for you

  • as a developer to implement to teach Espresso

  • synchronization of any of your custom, long-running tasks

  • of your app.

  • So with this Idling Resource, we made our large end-to-end test

  • more reliable.
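
For example, a hypothetical NotesUploader doing background work could be wrapped in an idling resource along these lines (a sketch against the Espresso 2.x IdlingResource interface):

```java
import android.support.test.espresso.IdlingResource;

/** Hypothetical long-running collaborator. */
interface NotesUploader {
    boolean isUploading();
}

public class UploadIdlingResource implements IdlingResource {

    private final NotesUploader uploader;
    private volatile ResourceCallback callback;

    public UploadIdlingResource(NotesUploader uploader) {
        this.uploader = uploader;
    }

    @Override
    public String getName() {
        return "NotesUploader";
    }

    @Override
    public boolean isIdleNow() {
        boolean idle = !uploader.isUploading();
        if (idle && callback != null) {
            // Tell Espresso it may resume interacting with the app.
            callback.onTransitionToIdle();
        }
        return idle;
    }

    @Override
    public void registerIdleTransitionCallback(ResourceCallback callback) {
        this.callback = callback;
    }
}

// Registered once in test setup, e.g.:
// Espresso.registerIdlingResources(new UploadIdlingResource(uploader));
```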

  • So let's see how we can add some more medium-sized tests

  • to your test suite.

  • So for a medium-sized test, we want

  • to keep them small and focused on a single UI component, where

  • a single UI component may be a specific view, fragment,

  • or an activity.

  • So let's go back to our example to see

  • how we can isolate our large end-to-end test into more

  • isolated components.

  • Here in this example, again, you may have noticed

  • that there are two activities.

  • The List activity on the left and the Add

  • Note activity on the right.

  • So until now, we wrote a large end-to-end test

  • that gives us a lot of confidence

  • because it touches upon a lot of your code

  • in your app, which is great for large end-to-end tests,

  • but it's not so great for an iterative test-driven

  • development cycle.

  • So let's see how we can isolate these

  • and test each activity in isolation.

  • To isolate the left-hand side, the List activity,

  • we can use Espresso Intents, where Espresso Intents is

  • a simple API that allows you to intercept

  • any of your outgoing intents, verify their content,

  • and provide back a mock activity result.

  • Great.

  • Let's see what that API actually looks like.

  • So as you can see, it's very straightforward.

  • You have an intent matcher that will match your outgoing intent,

  • and you can provide a stub version of your activity result

  • back to the caller.

  • OK.

  • Let's use this API to write our first isolated test.

  • In this test, you can see, on the first line,

  • we do exactly that.

  • We intercept our intent and we provide

  • a stub version of our activity result.

  • Now, on the second line, when we perform

  • Click, instead of starting a new activity,

  • Espresso will intercept this intent

  • and provide a stub activity result, which we can then

  • use on the last line to verify that our UI was updated

  • accordingly.

  • Now we have an isolated test.
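
Put together, that isolated List-activity test might look like this sketch (run with an IntentsTestRule so interception is active; the activity classes, IDs, and extra key are hypothetical):

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.intent.Intents.intending;
import static android.support.test.espresso.intent.matcher.IntentMatchers.hasComponent;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.app.Activity;
import android.app.Instrumentation.ActivityResult;
import android.content.Intent;
import android.support.test.espresso.intent.rule.IntentsTestRule;
import org.junit.Rule;
import org.junit.Test;

public class NotesListIsolationTest {

    @Rule
    public IntentsTestRule<NotesActivity> intentsRule =
            new IntentsTestRule<>(NotesActivity.class);

    @Test
    public void addNoteResult_updatesList() {
        // Intercept the outgoing intent and answer with a stubbed
        // result instead of launching the real AddNoteActivity.
        Intent data = new Intent();
        data.putExtra("note_title", "Title"); // hypothetical payload
        intending(hasComponent(AddNoteActivity.class.getName()))
                .respondWith(new ActivityResult(Activity.RESULT_OK, data));

        // The click now returns the stubbed result immediately.
        onView(withId(R.id.fab_add_note)).perform(click());

        // Verify the list UI was updated from the stubbed result.
        onView(withText("Title")).check(matches(isDisplayed()));
    }
}
```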

  • OK.

  • So let's go back to our example and see

  • how we can isolate the second part, right?

  • So when you usually write tests, you

  • end up in a position where you may

  • have some external dependencies in play that

  • are outside of your control.

  • In our example, as I showed before,

  • we have a note that we save and it hits the real server.

  • Now, even though we have an idling resource now

  • that makes it more reliable, your test

  • can still fail because your server

  • may crash for some reason.

  • So your test will fail.

  • So wouldn't it be better if we completely isolate ourselves

  • from these conditions and run our tests

  • in a hermetic environment?

  • This will not only make your test run much faster,

  • but it will also eliminate any flakiness.

  • And beyond this specific example,

  • you further want to isolate yourself

  • from any external dependencies.

  • So for example, you don't want to test any Android system

  • UI or any other UI components that you

  • don't own, because they're probably already tested

  • and they can also change without your knowing,

  • so your test will actually fail.

  • Let's see how our second isolated

  • test will look in code.

  • So the main point here is that we no longer use

  • the real server.

  • Instead, we set up a hermetic repository.

  • Now, there are many different ways for you to do this,

  • and this is just one way.

  • So then you can use this hermetic repository

  • in order to verify that your note is actually

  • saved without ever leaving the context of your app

  • or hitting the network.
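
One hedged sketch of that setup: swap the server-backed repository for an in-memory fake before the activity launches (FakeNotesRepository and the provider hook are hypothetical):

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static com.google.common.truth.Truth.assertThat;

import android.support.test.rule.ActivityTestRule;
import org.junit.Rule;
import org.junit.Test;

public class AddNoteHermeticTest {

    // launchActivity = false: launch manually, after injecting the fake.
    @Rule
    public ActivityTestRule<AddNoteActivity> activityRule =
            new ActivityTestRule<>(AddNoteActivity.class,
                    /* initialTouchMode= */ false, /* launchActivity= */ false);

    @Test
    public void saveNote_persistsToRepository() {
        // In-memory fake: no server, no network, no flakiness.
        FakeNotesRepository fakeRepository = new FakeNotesRepository();
        NotesRepositoryProvider.setInstance(fakeRepository);
        activityRule.launchActivity(null);

        onView(withId(R.id.note_title)).perform(typeText("Title"));
        onView(withId(R.id.save_note)).perform(click());

        // Verify against the fake, never leaving the app's context.
        assertThat(fakeRepository.getNotes()).hasSize(1);
    }
}
```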

  • So at this point, if you think about it,

  • you have two smaller tests that are way more reliable

  • and run much faster.

  • But at the same time, you maintain the same amount

  • of test coverage as your large end-to-end test.

  • And this is why we want to have more of these smaller

  • isolated tests compared to the large end-to-end tests we

  • showed before.

  • OK.

  • So at this point, we iterated through our developer cycle

  • a few times and we should see all of our tests

  • start turning green and we should be confident to release

  • our feature.

  • However, before we conclude, let's

  • jump into the future for a second.

  • As your app grows and your team grows,

  • you continue adding more and more features to your app.

  • And you may find yourself in a position

  • where you may have UI running in multiple processes, which

  • is exactly what happened at Google.

  • So if you go to our Add Notes example,

  • this may look something like this.

  • You have a first activity that runs in your main process

  • on the left-hand side.

  • And now the second activity will run in a private process.

  • And in this case, we're going to call it Add Notes.

  • So how do we test that?

  • Well, before Android O, it wasn't possible to test.

  • But with Android O, there is a new instrumentation

  • attribute that you can use in order to define which

  • process you want to instrument.

  • While instrumenting and running tests against

  • each process in isolation is a great idea

  • and you should do it, you may find yourself in a position

  • where you want to cross process boundaries within one test.

  • So you would probably want to write an Espresso

  • test that looks like this.

  • Not only was this impossible at the framework level

  • before Android O, it was also impossible at the Espresso level.

  • Because in this specific example,

  • Espresso is not even aware of your secondary process,

  • nor can it maintain any of the synchronization guarantees

  • we all know and love.

  • Today, I'm happy to announce Multiprocess Espresso support.

  • Without changing any of your test code or your app code,

  • this will allow you to seamlessly interact with UI

  • across processes, while maintaining all of the

  • Espresso synchronization guarantees.

  • And it will be available in the next version of Android Test

  • Support Library release.

  • So let's have a quick overview of how it actually works.

  • Traditionally, as you know, in our example,

  • we start in one process, where we

  • have an instance of Android JUnit Runner and Espresso,

  • in this case.

  • Now, if you remember from our example,

  • when we click the Add Note button,

  • there will be a new activity and now we have a new process.

  • So the problem now is that we have

  • two processes with two different instances of Android JUnit

  • Runner and Espresso, and they're not aware of each other.

  • So the first thing that we want to do

  • is we want to establish communication between the two

  • Android JUnit Runners.

  • And now that we have this communication,

  • we can use it to establish the communication

  • to Espresso instances.

  • And the way we do that is by having

  • an ability in the Android JUnit Runner to register any testing

  • frameworks, like Espresso, with Android JUnit Runner.

  • And the runner will then facilitate all the handshaking

  • required in order to establish communication between the two

  • Espresso instances.

  • Now that the two Espresso instances

  • can talk to each other, they can then

  • use this connection to enable cross-process testing

  • and maintain all the synchronization guarantees

  • that we had before.

  • OK.

  • With that, we're reaching the end of our developer workflow

  • and we've shown you all the tools that you

  • can use across each step of the way

  • in order to make TDD happen on Android.

  • And with that said, even if you don't follow this flow exactly,

  • hopefully, you know how to use every single tool

  • and how to write good tests in order to bring your app

  • quality to the next level.

  • So if you like to write tests and you want to write and run

  • tests like we do at Google, here are some resources

  • to get you started.

  • I want to thank you and I think we

  • have some time for questions.

  • And if not, we have office hours at 3:30 today.

  • So hopefully, we'll see you there.

  • Thank you.

  • [APPLAUSE]

  • [MUSIC PLAYING]
