Patrol‑Driven UI Test Architecture for Flutter
UI testing has become very important in recent years, as new AI tools have sped up application development. This article will help you build a UI testing structure for an enterprise application. Let's get started!
I also presented this topic during an online session at the Flutter Global Summit; you can watch the full recording of my talk here.
Agenda
In this article, we’ll build a complete UI testing architecture for a Flutter application from scratch. Here’s a look at what we’ll cover:
- ⚙️ The “What” and “Why” of UI Testing in Flutter
- 📱 Introducing Patrol: Our Framework of Choice
- 🏗️ Case Study: Building Tests for the “Hatayı Yaşat” Project
- 📐 Designing a Scalable Test Architecture
- 🎥 Live Demo: Seeing It All in Action
- 🚀 CI/CD Deployment: Automating Tests with Firebase Test Lab & GitHub Actions
⚙️ The “What” and “Why” of UI Testing in Flutter
With the recent advancements in AI, the pace of application development is increasing every day. Many tools help developers create new apps, but the testing phase is often overlooked. UI testing helps us find problems before they affect real users, clients, or customers, and that is exactly why it matters.
The mobile industry has a couple of ways to test applications:
- Component Test: Testing individual widgets in isolation (e.g., a single button or text field).
- Integration Test: Testing a complete feature or a large part of the app to see how different components work together.
My aim in this article is to focus on integration tests and show how to write them. This type of testing helps us find problems before a new release or before changing the core of a big feature.
Before using integration tests, your team is probably performing manual tests. Manual test cases are important, but it becomes harder to test every feature and change for each release. A one-time manual test before publishing an app is good, but repeatedly testing everything manually is not efficient. Manual testing has several problems: it is time-consuming, it’s hard to catch UI regressions, and it has scalability issues.
That means:
“More screens = more test cases = more effort without automation.”
The Flutter framework supports integration tests out of the box, without any extra requirements, so they can be used in any Flutter project. The testing framework can find elements on the screen and perform interactions such as taps and scrolls.
Here is an example of Flutter’s UI testing code:
testWidgets('Onboarding screen shows logo and page indicator', (WidgetTester tester) async {
await tester.pumpWidget(MyApp());
await tester.pumpAndSettle();
expect(find.byType(Image), findsOneWidget);
expect(find.text("HATAY'I YAŞAT"), findsOneWidget);
expect(find.byType(Icon), findsNWidgets(3));
});
You can find more details on the official website:
While Flutter’s standard testing framework is powerful for interacting with UI elements, it has limitations. Sometimes, our tests need to go beyond the app’s UI and interact with the device’s native functionality. This includes actions like:
- Going to the home screen
- Taking a screenshot
- Opening the device’s settings
- Handling native permission dialogs
These are tasks that Flutter’s default tools don’t support out-of-the-box. This is where Patrol comes in. So, let’s dive in and implement Patrol in our project!
⚙️ Patrol
Patrol is a UI testing framework that helps Flutter developers test their apps more effectively. It helps us with three main things: finding elements easily, controlling native device behavior, and providing powerful CLI support.
By the way, here's how the LeanCode team describes Patrol:
Patrol is a powerful, open-source UI testing framework designed specifically for Flutter apps and released in September 2022. Developed and maintained by LeanCode, one of the world’s leading Flutter development consultancies, Patrol builds upon Flutter’s core testing tools to enable developers to do things that were previously impossible.
There are three key advantages to selecting Patrol for your UI testing:
- Full Native Support: Go beyond the app’s UI to control the whole device.
- Powerful CLI Tool: Easily handle building, running, and publishing your tests.
- Hot Restart & Debugging: Enjoy a fast and efficient development workflow.
These features will help you rapidly test your Flutter projects. To get you started, here is a cheat sheet that clarifies the basics of how to implement and use Patrol commands in your project.
For more details about the Patrol framework, check out the official website.
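To give a feel for the syntax, here is a small sketch of a Patrol test; MyApp, the key name, and the texts are placeholders I made up for illustration, while the $ finder, tap, enterText, and native calls are standard Patrol APIs:
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:patrol/patrol.dart';

void main() {
  patrolTest('basic Patrol interactions', (PatrolIntegrationTester $) async {
    // Pump the app under test; MyApp stands in for your real root widget.
    await $.pumpWidgetAndSettle(const MyApp());

    // Find widgets by text or by key and interact with them.
    await $('Continue').tap();
    await $(const Key('email_field')).enterText('test@example.com');

    // Verify that a widget is on screen.
    expect($('Welcome').exists, isTrue);

    // Go beyond the app: interact with the device itself.
    await $.native.pressHome();
    await $.native.openApp();
  });
}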
Before moving to the next section, make sure to check the native installation guide on their website. Now, let's start structuring our project and build a demo.
📱 Case Study: “Hatayı Yaşat” Project
This is a project I’m very proud of. My team and I started this project two years ago after two major earthquakes occurred in Turkey. The first was a 7.8 magnitude, and the second was a 7.6. It was a very bad time for us. After the earthquakes, everyone tried to help the affected cities. During this time, we voluntarily created this project to help Hatay and the other affected regions.
Our project supports newly opened businesses by listing them with detailed information. It also lets my teammates share activities, job advertisements, and more. I want to thank my team again for their work on this project!
Today, I’ll show the testing solution I implemented for this project. The tests will cover:
- The splash screen flow, including checking the Lottie animation.
- The onboarding screen flow, verifying that it is shown only on the first launch.
- The home screen, to validate categories, list items, and more.
🏗️ Test Architecture Design
Now, let’s dive into the core of this article: the test architecture. My approach is inspired by the “Chain of Responsibility” design pattern, which allows us to create a seamless, sequential flow for our feature tests.
Think about a standard user journey in your application. It often starts with a splash screen, moves to a login page, and finally lands on the home page. We can visualize this journey as a simple chain:
Splash -> Login -> Home
The architecture I designed allows us to build exactly this kind of chain. With a single command, we can trigger the first test in the sequence, and the rest will execute automatically, one after another.
Architecture: Keys
The adventure begins with Keys. While the Patrol framework supports a couple of ways to find elements (like by class name or object type), I prefer to find them using Key variables.
import 'package:flutter/widgets.dart';

part 'items/general_keys.dart';
part 'items/home_keys.dart';
part 'items/onboard_keys.dart';
part 'items/splash_keys.dart';
final class ApplicationKeys {
const ApplicationKeys._();
static final splashKeys = _SplashKeys._();
static final onboardKeys = _OnboardKeys._();
static final homeKeys = _HomeKeys._();
static final generalKeys = _GeneralKeys._();
}
To manage our keys effectively, we use a single ApplicationKeys class. This class acts as a central hub for all test keys in the application. However, instead of putting every key into one giant file, we organize them by feature using Dart's part and part of directives.
Here’s how it works:
- Each feature has its own _keys.dart file (e.g., _login_keys.dart).
- The main application_keys.dart file uses the part directive to include these smaller files.
This structure gives us the best of both worlds: a single, easy-to-use ApplicationKeys class to access any key, while keeping our code clean and organized by feature.
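For reference, here is a minimal sketch of what one of those feature key files could look like; the relative path in the part of directive and the key string are assumptions based on the structure described above.
// items/splash_keys.dart - a feature key file included via the part directive.
// Imports live in the library file (application_keys.dart), not here.
part of '../application_keys.dart';

final class _SplashKeys {
  _SplashKeys._();

  // The key string is an assumption; only the view member is used later on.
  final Key view = const Key('splash_view');
}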
typedef K = ApplicationKeys;
/// Usage: K.splashKeys.view
This is another useful way to organize your Keys class for easy use.
Architecture: View
Now, let’s add our keys to the view components. Every Flutter widget has a key parameter, so we will assign our predefined keys to their corresponding widgets. Let's look at the Onboarding View as an example of this usage.
Here are the keys for our view:
final class _OnboardKeys {
_OnboardKeys._();
final Key fullImage = const Key('full_image');
final Key skipButton = const Key('skip_button');
final Key view = const Key('onboard_view');
}
Now, let's implement these keys in our view:
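Here is a minimal sketch of how the onboarding view might assign these keys; the widget tree, asset path, and import path are illustrative, and only the key parameters reflect the approach described here.
import 'package:flutter/material.dart';
import 'package:your_app/test_keys/application_keys.dart'; // hypothetical path; adjust to your project

class OnboardView extends StatelessWidget {
  const OnboardView({super.key});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      key: K.onboardKeys.view, // 'onboard_view'
      body: Stack(
        fit: StackFit.expand,
        children: [
          // Full-screen onboarding image.
          Image.asset(
            'assets/images/onboard.png', // hypothetical asset path
            key: K.onboardKeys.fullImage, // 'full_image'
            fit: BoxFit.cover,
          ),
          // Skip/close button that moves the user past onboarding.
          Align(
            alignment: Alignment.topRight,
            child: IconButton(
              key: K.onboardKeys.skipButton, // 'skip_button'
              icon: const Icon(Icons.close),
              onPressed: () {
                // Mark onboarding as completed and navigate onwards.
              },
            ),
          ),
        ],
      ),
    );
  }
}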
As you can see, this usage is very easy. I just assign the key to the widget’s key parameter. That's it! The widget can now be found directly with this approach.
Architecture: Flow
Our application is almost ready to test. The keys are defined, and the widgets are ready. Now, we just need to create the “flow” for testing our application. Before we start, let’s implement our chain design.
We will use a base class that all of our test scenarios build on. This class is responsible for creating and continuing the test flow; a sketch of it follows the list below. Let's walk through its members:
- PatrolIntegrationTester: This is the test driver (we'll call it $) that helps us interact with UI components.
- nextScenario: A property that holds the next test flow to run after the current one is complete.
- run(): The main logic for the current test flow is executed here.
- waitAndCheckValid(): Before a flow starts, this method checks if it can run. (For example, the onboarding flow only runs on a first-time app installation.)
- startFlow(): This method starts and logs the current flow.
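Here is a minimal sketch of what such a base class could look like, based on the members described above and the way the scenarios below use it; the nextScenario property is named next here to match the constructors used below, and the logging is illustrative.
import 'package:flutter/foundation.dart';
import 'package:patrol/patrol.dart';

abstract class BaseTestScenario {
  BaseTestScenario(this.$, {required this.next});

  // The Patrol test driver used to interact with UI components.
  final PatrolIntegrationTester $;

  // The next scenario in the chain, or null when this is the last one.
  final BaseTestScenario? next;

  // The main test logic for the current flow.
  Future<bool> run();

  // Decides whether this flow should run at all.
  Future<bool> waitAndCheckValid();

  // Runs the current flow (if it is valid) and then hands over to the next one.
  Future<void> startFlow() async {
    if (await waitAndCheckValid()) {
      debugPrint('Starting flow: $runtimeType');
      await run();
    } else {
      debugPrint('Skipping flow: $runtimeType');
    }
    await next?.startFlow();
  }
}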
Every test flow must extend this base class and implement its methods. Now, let’s implement the Onboarding View test with this approach.
Onboarding Test: A Real-World Example
Now, it’s time to use our Flow architecture in a real example. We will start with the Onboarding page test, which is the first entry point of the application.
The main message of this page is: “Welcome and Do not forget your history.” The UI is simple: it has an image and a close button.
Here is the test flow that helps us test this page:
final class OnboardTest extends BaseTestScenario {
OnboardTest(super.$, {required super.next});
@override
Future<bool> run() async {
await $.pumpAndSettle();
final isFullImageVisible = $(K.onboardKeys.fullImage).exists;
expect(isFullImageVisible, isTrue, reason: 'full image is not visible');
final isSkipButtonVisible = $(K.onboardKeys.skipButton).exists;
expect(isSkipButtonVisible, isTrue, reason: 'skip button is not visible');
await $(K.onboardKeys.skipButton).tap(
settleTimeout: const Duration(milliseconds: 100),
settlePolicy: SettlePolicy.noSettle,
visibleTimeout: const Duration(milliseconds: 100),
);
return true;
}
@override
Future<bool> waitAndCheckValid() async {
if (AppHelper.isOnboardCompleted()) {
return false;
}
await $(K.onboardKeys.view).waitUntilVisible();
return $(K.onboardKeys.view).exists;
}
}
Explaining the Flow Logic
The flow starts after the waitAndCheckValid() method runs. This method decides whether the flow should continue. If it returns false, the flow does not need to run, and the test runner can move on to the next flow or finish.
In our case, the Onboarding page is only shown once in the application, so its test flow should also run only once. The waitAndCheckValid() method checks this condition and makes the decision.
If waitAndCheckValid() returns true, the main flow starts with the run() method. My validation checklist for this method is:
- Check if the full-screen image is visible.
- Check if the close button is visible.
- Tap the “skip” button to continue to the next step.
Linking the Flows Together
Now it’s time to link the flows together. I have two more scenarios ready: a HomeScenario and a SplashScenario. As we talked about at the top of this article, we will create a chain.
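Here is a minimal sketch of how the chain could be wired up in the test entry point (integration_test/start_test.dart); MyApp is a placeholder for the real root widget, and the SplashScenario and HomeScenario class names are assumptions based on the flow described below.
import 'package:patrol/patrol.dart';
// Imports for MyApp and the scenario classes are project-specific and omitted.

void main() {
  patrolTest('complete user journey', (PatrolIntegrationTester $) async {
    // Boot the app under test.
    await $.pumpWidgetAndSettle(const MyApp());

    // Build the chain back to front: Onboarding -> Splash -> Home.
    final home = HomeScenario($, next: null);
    final splash = SplashScenario($, next: home);
    final onboarding = OnboardTest($, next: splash);

    // Kick off the first scenario; the rest run automatically.
    await onboarding.startFlow();
  });
}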
The chain is now ready. The test will begin when we call the startFlow() method on the first scenario in our chain. Here is how the logic will work:
1. It starts with the OnboardingScenario. The test runner checks: Is the Onboarding flow valid?
- Yes: It runs the Onboarding test. When finished, it moves to the next flow.
- No: It skips the Onboarding test and immediately moves to the next flow.
2. Next is the SplashScenario. The runner checks: Is the Splash flow valid? (e.g., is the Lottie animation visible?). A sketch of this scenario follows this list.
- Yes: It runs the Splash test.
- No: It skips and moves on.
3. Finally, the HomeScenario. The runner checks: Is the Home flow valid?
- Yes: It runs the Home test.
- No: The test finishes.
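For comparison, here is a hedged sketch of how the SplashScenario could look under the same architecture; K.splashKeys.lottieAnimation is a hypothetical key, since only splashKeys.view appears in the keys shown earlier.
import 'package:flutter_test/flutter_test.dart';
// Project-specific imports (K, BaseTestScenario) are omitted here.

final class SplashScenario extends BaseTestScenario {
  SplashScenario(super.$, {required super.next});

  @override
  Future<bool> waitAndCheckValid() async {
    // The splash flow is valid only while the splash view is on screen.
    await $(K.splashKeys.view).waitUntilVisible();
    return $(K.splashKeys.view).exists;
  }

  @override
  Future<bool> run() async {
    await $.pumpAndSettle();

    // Check that the Lottie animation is visible.
    // K.splashKeys.lottieAnimation is a hypothetical key name.
    final isLottieVisible = $(K.splashKeys.lottieAnimation).exists;
    expect(isLottieVisible, isTrue, reason: 'lottie animation is not visible');

    return true;
  }
}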
After the entire flow starts and runs until it’s complete, here are my results:
Extra Scripts for Easier Local Testing
When you run the patrol test command, it can sometimes be confusing if you have multiple devices connected (e.g., a real device, an emulator, an iPad).
I often want to run tests on a single running emulator or simulator. The scripts below find the running device and execute the tests on it directly.
Script for the iOS Simulator:
#!/bin/bash
# Get device ID from iPhone line in patrol devices output
booted_device_id=$(patrol devices | grep -i 'iPhone' | grep -oE '[A-F0-9]{8}-[A-F0-9]{4}-[A-F0-9]{4}-[A-F0-9]{4}-[A-F0-9]{12}')
# If no device is found, exit with error
if [ -z "$booted_device_id" ]; then
echo "iPhone emulator bulunamadı. Lütfen bir iOS emulator başlatın."
exit 1
fi
echo "iPhone emulator bulundu: $booted_device_id"
# Run patrol command
patrol develop --target integration_test/start_test.dart --device "$booted_device_id" --flavor development
Script for the Android Emulator:
#!/bin/bash
# Get device ID from emulator line in patrol devices output
booted_device_id=$(patrol devices | grep -i 'emulator' | awk '{print $NF}' | tr -d '()')
# If no device is found, exit with error
if [ -z "$booted_device_id" ]; then
echo "Android emulator bulunamadı. Lütfen bir Android emulator başlatın."
exit 1
fi
echo "Android emulator bulundu: $booted_device_id"
# Run patrol command
patrol develop --target integration_test/start_test.dart --device "$booted_device_id" --flavor development
Let's Test a Native Feature
Our application shows a native notification permission dialog on the home page. Our test needs to tap the “Approve” or “Deny” buttons on this dialog. Patrol can handle this easily.
final isPermissionDialogVisible =
await tester.native.isPermissionDialogVisible();
if (isPermissionDialogVisible) {
await tester.native.denyPermission();
await tester.pumpAndSettle(duration: const Duration(seconds: 1));
}
This feature is very useful for my application, and it lets me handle many native interactions. You can find other native features in the Patrol documentation.
/// https://patrol.leancode.co/~2372/native/overview
void main() {
patrolTest('demo', (PatrolIntegrationTester $) async {
await $.pumpWidgetAndSettle(AwesomeApp());
// prepare network conditions
await $.native.enableCellular();
await $.native.disableWifi();
// toggle system theme
await $.native.enableDarkMode();
// handle native location permission request dialog
await $.native.selectFineLocation();
await $.native.grantPermissionWhenInUse();
// tap on the first notification
await $.native.openNotifications();
await $.native.tapOnNotificationByIndex(0);
});
}
That's all for the coding part. The flow we created will work directly in your local environment. But what about running tests in a cloud environment? That can also be done easily with Patrol commands.
Cloud Testing with Firebase Test Lab & GitHub Actions
Our test flows need to be tested on many different devices. Firebase Test Lab offers a free tier for testing on several devices directly. I prefer to implement this workflow using the command-line interface (CLI).
For more details, you can check the official documentation.
Before using Test Lab with our Patrol test flow, we need to call the build command.
patrol build android --target integration_test/start_test.dart --flavor development
This creates the necessary APKs for uploading to the server. The following script then sends them to Test Lab and runs the tests.
#!/bin/bash
gcloud firebase test android run \
--type instrumentation \
--use-orchestrator \
--app build/app/outputs/apk/development/debug/app-development-debug.apk \
--test build/app/outputs/apk/androidTest/development/debug/app-development-debug-androidTest.apk \
--timeout 1m \
--device model=MediumPhone.arm,version=34,locale=en,orientation=portrait \
--record-video \
--environment-variables=clearPackageData=true,IS_TEST_LAB=true \
--project YOUR-PROJECT-NAME \
--results-dir=test_results \
--results-history-name=instrumentation_tests
Here's what it does step-by-step:
- gcloud firebase test android run: Tells Google to run your Android tests on Firebase.
- --app ... & --test ...: Uploads your application's APK and the test APK that contains your test code.
- --type instrumentation: Specifies that you are running standard UI tests (like Espresso).
- --use-orchestrator: Runs each test in isolation to prevent them from affecting each other, ensuring clean results.
- --device ...: Defines the exact virtual device for the test: a medium-sized phone with Android 14, set to English.
- --timeout 1m: Sets a 1-minute time limit for the entire test run. If it takes longer, it will be canceled.
- --record-video: Records a video of the screen during the test, which is very helpful for debugging failed tests.
- --project ... & --results-dir ...: Saves all the test results (like logs and the video) into a specific folder (test_results) in your Google Cloud project.
- --results-history-name ...: Groups the results under the name instrumentation_tests in the Firebase console, making it easy to see the test history over time.
The result looks like this from the website:
Here is the complete workflow file:
Final Thoughts
Phew, that was quite a journey! This architecture, Patrol, and of course, UI testing in general, have become very important in recent years. I hope you can implement these techniques in your own projects and relax a little more before every release :)
I also have a similar video in Turkish on my YouTube channel:
Thank you for reading!
See you in the next article!
