Beam Me Up, SQLite: Running Real-Time Performance Experiments with DevCycle

This post guides you through setting up an engineering-led experiment to compare the performance of SQLite (local), Turso, and SQLiteCloud using the SQLite Trek app and DevCycle.


In our previous post, we explored the power of feature flags and DevCycle for real-time experimentation. Now, let's dive into running your own experiment with the SQLite Trek app and DevCycle. This post will guide you through setting up the features, configuring targeting rules, and creating the Query Response Time metric to compare the performance of SQLite (local), Turso, and SQLiteCloud.


Prerequisites:

Before you go any further, complete the following steps:

  1. Create accounts on DevCycle, Turso, and SQLiteCloud.
  2. Get familiar with DevCycle by watching the Quickstart tutorial.
  3. Populate the Databases by creating the required table structure.

Note: You should also have a basic understanding of (or at least a passing familiarity with) the Express and tinyhttp web frameworks.


Step 1: Set Up the Database

To ensure consistency across all platforms, you’ll need to create a star_trek_series table in each SQLite database (Local, Turso, and SQLiteCloud).

Table Structure:

  • id: The unique ID for each series.
  • series_name: The name of the series.
  • premiere_year: The year the series premiered.
  • seasons: Number of seasons.
  • captain: The name of the captain in the series.
  • crew: A comma-separated list of crew members (stored as text).
  • description: A brief description of the series.

Sample SQL Query:

CREATE TABLE star_trek_series (
    id INTEGER PRIMARY KEY,
    series_name TEXT NOT NULL,
    premiere_year INTEGER NOT NULL,
    seasons INTEGER NOT NULL,
    captain TEXT NOT NULL,
    crew TEXT NOT NULL,  -- We will store crew as a comma-separated string
    description TEXT NOT NULL
);

-- Insert the data for each series
INSERT INTO star_trek_series (id, series_name, premiere_year, seasons, captain, crew, description) VALUES
(1, 'Star Trek: The Original Series', 1966, 3, 'James T. Kirk', 'Spock, Leonard McCoy, Montgomery Scott, Uhura, Hikaru Sulu', 'The original adventures of Captain Kirk and the crew of the starship USS Enterprise as they explore space in the 23rd century.'),
(2, 'Star Trek: The Next Generation', 1987, 7, 'Jean-Luc Picard', 'William Riker, Data, Deanna Troi, Worf, Geordi La Forge', 'Set in the 24th century, Captain Picard leads the next generation of Starfleet officers aboard the USS Enterprise-D.'),
(3, 'Star Trek: Deep Space Nine', 1993, 7, 'Benjamin Sisko', 'Kira Nerys, Odo, Jadzia Dax, Worf, Julian Bashir', 'Captain Sisko and his crew defend the strategic space station Deep Space Nine, located near a wormhole leading to the Gamma Quadrant.'),
(4, 'Star Trek: Voyager', 1995, 7, 'Kathryn Janeway', 'Chakotay, Tuvok, Tom Paris, Seven of Nine, The Doctor', 'Captain Janeway leads the USS Voyager and her crew on a journey through the Delta Quadrant after being stranded 70,000 light-years from Earth.'),
(5, 'Star Trek: Enterprise', 2001, 4, 'Jonathan Archer', 'T''Pol, Trip Tucker, Malcolm Reed, Hoshi Sato, Travis Mayweather', 'Set before the formation of the United Federation of Planets, Captain Archer and the crew of the first Enterprise explore the galaxy.'),
(6, 'Star Trek: Discovery', 2017, 4, 'Michael Burnham', 'Saru, Paul Stamets, Sylvia Tilly, Hugh Culber, Philippa Georgiou', 'The adventures of Captain Burnham and her crew as they navigate an ever-evolving universe, leading Starfleet''s exploration and defense efforts.'),
(7, 'Star Trek: Picard', 2020, 3, 'Jean-Luc Picard', 'Raffi Musiker, Cristóbal Rios, Soji Asha, Elnor, Agnes Jurati', 'Retired Admiral Picard embarks on a new adventure years after the events of Star Trek: The Next Generation.');

How to Use:

  • Local SQLite: The test.sqlite file in the GitHub repository already contains the required data.
  • Turso/SQLiteCloud: Run the provided SQL query on both your Turso and SQLiteCloud databases to ensure the data structure is consistent across all platforms.

This will ensure that your databases are properly structured and ready for querying during your performance tests.
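
To sanity-check the data before you start querying it from the app, you can run a quick script against the local test.sqlite file. This is only a sketch and assumes the better-sqlite3 package; the repository may use a different SQLite driver for its local queries.

// verify-data.js: optional sanity check, not part of the repository.
// Assumes: npm install better-sqlite3
const Database = require("better-sqlite3");

// Open the local database file (or whatever LOCAL_SQLITE points to)
const db = new Database(process.env.LOCAL_SQLITE || "test.sqlite", { readonly: true });

// Confirm all seven series were inserted
const rows = db
    .prepare("SELECT id, series_name, captain FROM star_trek_series ORDER BY id")
    .all();
console.table(rows);

db.close();

The same SELECT can be run in the Turso and SQLiteCloud consoles to confirm those copies match.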


Step 2: Set Up the Feature in DevCycle

With the databases set up, the next step is creating the feature flag in DevCycle that will dynamically assign users to one of the three database solutions.

Create the Feature:

In DevCycle, create a new Experiment feature named "SQLite Trek Experiment" with the key sqlite-trek-experiment. Then set the feature's variable type to String to support a variation for each database solution.

Define Variations:

  • Control: local
  • Variation A: turso
  • Variation B: cloud

These variations represent the three database options you’re testing.

Set Up User Targeting:

By default, users are evenly distributed across the three variations:

  • 34% of users assigned to the Control, SQLite (local).
  • 33% to Variation A, Turso.
  • 33% to Variation B, SQLiteCloud.

This ensures each solution receives enough traffic for a valid comparison. However, you can adjust this distribution to suit your specific needs.

Create the Metric:

  1. In the Experiment Results section, create a new metric named "SQLite Query Response".
  2. Set the Event Type to response_time and Optimize For to Decrease, as we want to highlight the fastest response time possible.
  3. Set the Type to Total Average (numerical) to track the total average of query response times (the event this metric aggregates is sketched below).
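
For reference, this metric aggregates the custom events the app sends through the DevCycle SDK's track call (shown in Step 3). Each event uses the response_time type and carries the measured duration as its numeric value:

// The "SQLite Query Response" metric aggregates events shaped like this,
// sent once per search request (see the app.js snippet in Step 3):
devcycleClient.track(user, {
    type: "response_time", // must match the metric's Event Type
    value: responseTime,   // query duration in milliseconds
});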

Step 3: Set Up the SQLite Trek App

Once the feature flags are set up in DevCycle, it’s time to clone the SQLite Trek repository to get ready to start the experiment.

GitHub repository: DevCycleHQ-Sandbox/sqlite-trek-experimentation (SQLite Trek: Feature Flag-Based Experimentation with Express and tinyhttp)

Follow the README for Setup

Make sure you have followed the instructions in the README, most importantly that you have set up your environment variables correctly (see the startup check sketch after this list):

  • LOCAL_SQLITE: Path to your local SQLite database.
  • TURSO_URL: The URL for your Turso instance.
  • TURSO_AUTH_TOKEN: The authentication token for accessing Turso.
  • SQLITE_CLOUD_CONNECTION: The connection string for SQLiteCloud.
  • DEVCYCLE_SERVER_SDK_KEY: The Server SDK key for your DevCycle Feature.
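
As an optional guard (not part of the repository), you could fail fast at startup if any of these variables are missing:

// startup-check.js: optional sketch, not included in the repo.
// Throws immediately if a required environment variable is missing.
const required = [
    "LOCAL_SQLITE",
    "TURSO_URL",
    "TURSO_AUTH_TOKEN",
    "SQLITE_CLOUD_CONNECTION",
    "DEVCYCLE_SERVER_SDK_KEY",
];

const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
}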

How It Works

While there are many moving parts, below is a basic overview of how the application works:

  1. User Submission
    • A user submits a query string to the /search endpoint via the input box.
  2. Start Timer
    • A startTime variable is initialized to begin timing the request.
  3. Feature Flag Evaluation
    • The DevCycle feature flag with the key sqlite-trek-experiment determines which database to query by assigning the user to one of the following options:
      • local
      • turso
      • cloud
  4. Database Query
    • The selected database is queried, and the retrieved data is stored in a result variable.
  5. Calculate Response Time
    • A responseTime variable is calculated using Date.now() - startTime.
  6. Render Response
    • The result data, response time, and database type are sent to the index page for rendering.
  7. Track and Visualize Performance
    • Once the response finishes, the time taken to process the search query is sent back to DevCycle via the SDK's track call for processing and visualization.

Core Implementation

Here's the core of this implementation, located in app.js:

// Define the DevCycle User Object
app.use((req, res, next) => {
    req.user = {
        user_id: generateUserId(),
    };
    next();
});


// Handle the Search Functionality
app.post("/search", async (req, res) => {
    const user = req.user;
    const searchTerm = req.body.query;
    const startTime = Date.now();

    try {
        // Use DevCycle feature flag to determine which database to query
        const flag = devcycleClient.variable(user, "sqlite-trek-experiment", "local");

        // Map each flag value to its query function and a label for display
        const dbQueries = {
            local: { query: queryLocalSQLite, dbType: "local" },
            turso: { query: queryTurso, dbType: "turso" },
            cloud: { query: querySQLiteCloud, dbType: "cloud" },
        };

        // Select the appropriate query function based on the feature flag value
        const { query, dbType } = dbQueries[flag.value] || dbQueries.local;
        const result = await query(searchTerm);
        const responseTime = Date.now() - startTime;

        // Render the result, query time, and database type used
        res.render("index", {
            query_time: `${responseTime}ms`,
            db_type: dbType,
            data: result || `No results found for '${searchTerm}'`,
            host: `http://${hostname}:5000`,
        });

        // Send the response time metric back to DevCycle
        res.on("finish", () =>
            devcycleClient.track(user, {
                type: "response_time",
                value: responseTime
            })
        );


    } catch (error) {
        res.status(500).render("index", {
            query_time: null,
            db_type: null,
            data: "Internal Server Error",
            host: `http://${hostname}:5000`
        });
    }
});

For a more detailed breakdown of the full implementation and additional code, check out the repository.
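
The three query helpers (queryLocalSQLite, queryTurso, and querySQLiteCloud) are imported into app.js rather than shown above. As a rough sketch of what the first two might look like, assuming better-sqlite3 for the local file and @libsql/client for Turso (the repository may use different drivers, and the SQLiteCloud helper would follow the same pattern with its own client library):

const Database = require("better-sqlite3");
const { createClient } = require("@libsql/client");

// Turso client built from the environment variables described in the README
const turso = createClient({
    url: process.env.TURSO_URL,
    authToken: process.env.TURSO_AUTH_TOKEN,
});

// Local SQLite: file-based query via better-sqlite3 (assumed driver)
async function queryLocalSQLite(searchTerm) {
    const db = new Database(process.env.LOCAL_SQLITE || "test.sqlite", { readonly: true });
    const rows = db
        .prepare("SELECT * FROM star_trek_series WHERE series_name LIKE ?")
        .all(`%${searchTerm}%`);
    db.close();
    return rows;
}

// Turso: remote libSQL query via @libsql/client (assumed driver)
async function queryTurso(searchTerm) {
    const result = await turso.execute({
        sql: "SELECT * FROM star_trek_series WHERE series_name LIKE ?",
        args: [`%${searchTerm}%`],
    });
    return result.rows;
}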


Step 4: Run the Experiment

This part is easy! Head over to your application and run npm start. If you've followed the README instructions properly, this command will spin up two versions of the application:

  • localhost:3000 – A tinyhttp implementation
  • localhost:5000 – An Express.js implementation

Now, simply try some searches. The applications automatically generate a random user ID on every page refresh, so you should see a fairly even distribution between the different database options, simulating a real-world experiment.
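
The per-request IDs come from the generateUserId helper referenced in the middleware at the top of app.js. A minimal sketch of such a helper, using Node's built-in crypto module (the repository's implementation may differ):

const { randomUUID } = require("crypto");

// Generate a fresh, anonymous user ID for every request so that DevCycle's
// percentage-based targeting spreads searches across all three variations.
function generateUserId() {
    return `user-${randomUUID()}`;
}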


Step 5: Review the Results

Once you've completed a few searches, you can start analyzing the data in DevCycle. To view the results, head back to your Feature Dashboard and navigate to the Experiment Results page where you set up your metric. Select a date range, the environment (i.e. All or Development), and then click Calculate.

Sample Results

Here are some sample results from running the experiment locally:

Important Note:

These results come from running the experiment in a single location, which limits the comparison, especially for edge solutions like Turso that are optimized for global deployments. To fully understand the advantages of these solutions, running the experiment in a globally distributed environment would provide more comprehensive insights.


Step 6: Next Steps for Global Experimentation

Now that you’ve successfully run a localized experiment, you can expand the scope to test globally distributed performance. Here’s what you can do next:

  • Expand to Global Regions: Spin up this project on a service like Netlify, share the link with some friends, and run the same experiment across different geographic regions to see how each solution performs under various network conditions.
  • Fine-tune the Experiment: Adjust the percentage rollouts for each variation or add new database solutions to the mix.
  • Track Additional Metrics: Beyond query response time, consider tracking error rates, user engagement, or other key performance indicators that matter to your application (see the sketch below).
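
For example, an error-rate metric could reuse the same tracking mechanism. The sketch below sends a hypothetical query_error event (the event type is purely illustrative and not defined in the repository); it could be called from the existing catch block in app.js before the error page is rendered:

// Hypothetical sketch: report a failed query so an error-rate metric can be
// built in DevCycle alongside the response-time metric.
function trackQueryError(user) {
    devcycleClient.track(user, {
        type: "query_error", // illustrative event type, not part of the repo
        value: 1,
    });
}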

Step 7: Continue Exploring Experimentation with DevCycle

Now that you've successfully run your experiment locally, it's time to dive deeper into experimentation.

With DevCycle’s feature flags, you can easily start implementing your own experiments—whether you’re optimizing performance, testing new features, or exploring different user segments. The flexibility of feature flags allows you to gather valuable insights and refine your approach, all while maintaining a seamless user experience.

To learn more about how to set up and manage experiments, check out the experimentation docs.
