Set Phasers to Experiment: Using Feature Flags to Drive SQLite Performance Tests

If you've been keeping an eye on recent trends, you've probably noticed that SQLite is having a moment. Traditionally seen as a lightweight, local storage solution—often used in development or test environments—SQLite is now showing its value in production-scale deployments. Even the Laravel PHP framework has recently made SQLite its default database type.

As SQLite’s use expands, we’re also seeing new cloud services emerge that offer SQLite-based solutions. This might seem surprising, given that SQLite’s main strength is its ability to run locally alongside your application. However, these cloud and edge computing platforms, such as Turso and SQLiteCloud, open up new possibilities, particularly for globally distributed applications.

So, which option performs best in a global context? Should you stick with a traditional, local SQLite setup, or is it time to explore these new, edge-focused options that promise improved global performance?

To answer that, performance testing across multiple solutions is essential. And this is where feature flags—combined with an experimentation-focused platform like DevCycle—become a game changer.

In this blog post, we'll explore how feature flags are essential for effective experimentation and introduce a practical example of performance testing across different SQLite options to showcase DevCycle's experimentation capabilities.


Why Feature Flags Matter in Experimentation

Experimentation is a continuous process that goes beyond controlled environments. In a live application, conditions vary dramatically due to factors like user behavior, traffic fluctuations, and geographic distribution, all of which can impact performance. Testing in isolated environments doesn’t provide the full picture, which is why live experimentation is essential.

Feature flags play a key role in enabling real-time experimentation in production. They allow you to segment your users into different groups and assign them varying experiences or configurations. This way, you can track and compare how each group experiences the application in real-world conditions. Running these experiments with actual users gives you accurate, data-driven insights.
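To make that concrete, here's roughly what a flag evaluation looks like with the DevCycle Node.js server SDK. This is a minimal sketch: the variable key `database-type`, the default value, and the user fields are illustrative placeholders, not values from our actual setup.

```javascript
// Minimal sketch: evaluating a DevCycle variable for a user.
// The variable key and default value below are placeholders.
const { initializeDevCycle } = require('@devcycle/nodejs-server-sdk')

const devcycleClient = initializeDevCycle(process.env.DEVCYCLE_SERVER_SDK_KEY)

async function getDatabaseTypeForUser(userId) {
  await devcycleClient.onClientInitialized()
  const user = { user_id: userId }
  // Returns the variation's value for this user, or the default
  // if the user isn't targeted (or the SDK isn't ready yet).
  return devcycleClient.variableValue(user, 'database-type', 'sqlite-local')
}
```

Because assignments are derived from the user ID, the same user keeps landing in the same group, giving each user a consistent experience for the duration of the experiment.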


The Experiment: To Boldly Test SQLite, Turso & SQLiteCloud

For this experiment, we developed the SQLite Trek Series Searcher, a Star Trek-themed Node.js application. Built with Express.js & tinyhttp, the app lets users search for Star Trek series information, retrieving results in JSON format from a SQLite database.

To determine which database configuration offers the best performance, we're using DevCycle's feature flags to assign users to one of three database setups—SQLite (Local), Turso, or SQLiteCloud. Then, we're leveraging DevCycle Custom Metrics to track and analyze performance data, helping us identify which database consistently delivers the fastest and most reliable query results.
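In the request path, that flag value simply selects which database client serves the query. Here's a simplified sketch of the idea, shown with Express and reusing the `devcycleClient` from the sketch above; the variable key and the three query helpers are hypothetical names standing in for the app's real code:

```javascript
// Sketch: serve each search from the database the feature flag
// assigned to this user. Helper names here are hypothetical.
const express = require('express')
const app = express()

// Stubs standing in for real drivers: better-sqlite3 (local),
// @libsql/client (Turso), and SQLite Cloud's Node driver.
async function queryLocalSqlite(term) { return [] /* local file query */ }
async function queryTurso(term) { return [] /* remote libSQL query */ }
async function querySqliteCloud(term) { return [] /* managed cloud query */ }

app.get('/search', async (req, res) => {
  const user = { user_id: req.query.userId }
  const dbType = devcycleClient.variableValue(user, 'database-type', 'sqlite-local')

  let results
  if (dbType === 'turso') {
    results = await queryTurso(req.query.series)
  } else if (dbType === 'sqlitecloud') {
    results = await querySqliteCloud(req.query.series)
  } else {
    results = await queryLocalSqlite(req.query.series) // control
  }
  res.json(results)
})

app.listen(3000)
```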

What We're Testing and Measuring

Our objective is straightforward: find the SQLite database setup that offers the best query response time. We're comparing three different options:

  1. SQLite (Local): The traditional local database setup serving as our control group.
  2. Turso: A remote solution optimized for edge computing.
  3. SQLiteCloud: A fully managed, cloud-hosted database service.

The primary metric we're focusing on is query response time—how quickly each database processes a search and returns results to the user.
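In practice, that means timing the wall-clock duration of each database call. Node's built-in perf_hooks module is enough for a sketch like this (the helper name is ours, not the app's):

```javascript
const { performance } = require('node:perf_hooks')

// Wrap any of the query helpers and measure how long the call takes.
async function timedQuery(queryFn, term) {
  const start = performance.now()
  const results = await queryFn(term)
  const elapsedMs = performance.now() - start // query response time, in ms
  return { results, elapsedMs }
}
```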

Methodology and Data Tracking

To ensure a fair and unbiased comparison, we're utilizing DevCycle’s feature flags to automatically assign users to one of the three database solutions:

  • Control: SQLite (Local)
  • Variation A: Turso
  • Variation B: SQLiteCloud

Each database handles a roughly equal share of users (about one third each), ensuring that the load is evenly distributed. Here's how the process works:

  • User Assignment: When a user interacts with our application, they're seamlessly routed to one of the three database setups without any disruption to their experience.
  • Data Collection: Each time a user submits a query, we record the response time and send it back to DevCycle for analysis (see the sketch after this list).
  • Performance Analysis: Individual response times are rolled up into an average response time per variation, letting us compare overall performance and identify which database consistently delivers the fastest, most reliable results across all users.
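Putting the Data Collection step into code, a sketch building on the helpers above (devcycleClient, timedQuery, and the query stubs) might pair the timing wrapper with DevCycle's track call. The event type 'query-response-time' is a placeholder; what matters is that it matches the custom event type your metric is configured to aggregate:

```javascript
app.get('/search', async (req, res) => {
  const user = { user_id: req.query.userId }
  const dbType = devcycleClient.variableValue(user, 'database-type', 'sqlite-local')
  const queryFn = dbType === 'turso' ? queryTurso
    : dbType === 'sqlitecloud' ? querySqliteCloud
    : queryLocalSqlite

  const { results, elapsedMs } = await timedQuery(queryFn, req.query.series)

  // Report the measurement back to DevCycle as a custom event so the
  // custom metric can average it per variation.
  devcycleClient.track(user, {
    type: 'query-response-time', // must match the metric's event type
    value: elapsedMs,
  })

  res.json(results)
})
```

DevCycle attributes each event to the variation the user was bucketed into, so the dashboard can compare average response times across Control, Variation A, and Variation B without extra bookkeeping on our side.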

Why DevCycle for This Experiment


DevCycle streamlines our experimentation process with a powerful set of features that help us manage and optimize our tests efficiently. These include:

Preset Variables and Variations: When creating a new experimentation feature, DevCycle automatically sets up Control, Variation A, and Variation B. We simply update the variable strings for our different database types, making setup quick and effortless.

Predefined Targeting Rules: DevCycle simplifies experiment setup with built-in targeting rule presets for the development environment:

  • Audience Targeting: All users are automatically included in the "Testing group."
  • Equal Distribution: Users are split with 34% assigned to Control and 33% each to Variation A and Variation B, ensuring a balanced comparison between the three database solutions.
  • Consistent Randomization: Assignments are based on User ID, maintaining fair and consistent group allocation.

Custom Metrics with Useful Presets: DevCycle offers predefined metrics that cover many standard use cases, allowing us to track:

  • Count per Unique User
  • Averages per User
  • Count per Variable Evaluation
  • Total Average: This shows the overall average query times across all users, which is especially useful for assessing performance.

Optimized Dashboards: DevCycle’s intuitive dashboards provide real-time insights, making it easy to monitor performance across each variation and adjust as needed for optimal results.


Beam Me Up, SQLite: Running Real-Time Performance Experiments with DevCycle

In this post, we’ve explored how feature flags are essential for running dynamic, real-world experiments, and how they can be used to compare database solutions like SQLite, Turso, and SQLiteCloud. However, the real power lies in the setup.

In our sister blog post, we dive deeper into the technical implementation of this experiment within DevCycle. We guide you through creating feature flags, configuring targeting groups, and setting up metrics to track query response times—all inside the DevCycle platform.

Beam Me Up, SQLite: Running Real-Time Performance Experiments with DevCycle
This post guides you through setting up an engineering-led experiment to compare the performance of SQLite (local), Turso, and SQLiteCloud using the SQLite Trek app and DevCycle.

Want to skip the blog post and jump straight into the code? Here's the source code for the SQLite Trek app so you can try this experiment yourself right now!

GitHub - DevCycleHQ-Sandbox/sqlite-trek-experimentation: SQLite Trek: Feature Flag-Based Experimentation with Express and tinyhttp