CI Integration
Timing tests require more care in CI than typical unit tests. This guide covers how to run tacet reliably in CI environments.
Basic test setup
A basic test with formatted output:
Rust:

```rust
use tacet::{TimingOracle, AttackerModel, helpers::InputPair};
use std::time::Duration;

#[test]
fn test_crypto_constant_time() {
    let inputs = InputPair::new(
        || [0u8; 32],
        || rand::random::<[u8; 32]>(),
    );

    let outcome = TimingOracle::for_attacker(AttackerModel::AdjacentNetwork)
        .time_budget(Duration::from_secs(30))
        .test(inputs, |data| {
            my_crypto_function(&data);
        });

    // Display prints formatted output with colors
    println!("{outcome}");

    // Assert the test passed
    assert!(outcome.passed(), "Timing leak detected");
}
```

TypeScript:

```typescript
import { TimingOracle, AttackerModel } from 'tacet';

test('crypto constant time', async () => {
  const outcome = await new TimingOracle(AttackerModel.AdjacentNetwork)
    .timeBudget(30_000) // milliseconds
    .test({
      baseline: () => new Uint8Array(32),
      sample: () => crypto.getRandomValues(new Uint8Array(32)),
      measure: (data) => myCryptoFunction(data)
    });

  // Display formatted output
  console.log(outcome.toString());

  // Assert the test passed
  expect(outcome.passed()).toBe(true);
});
```

C:

```c
#include <tacet.h>
#include <assert.h>

void test_crypto_constant_time(void) {
    // Create oracle
    tacet_oracle_t* oracle = tacet_oracle_for_attacker(
        TACET_ATTACKER_ADJACENT_NETWORK
    );
    tacet_oracle_set_time_budget_secs(oracle, 30);

    // Run test
    tacet_outcome_t outcome = tacet_test(
        oracle,
        generate_baseline,  // Returns all zeros
        generate_sample,    // Returns random data
        measure_crypto,     // The function to test
        NULL                // Optional context pointer
    );

    // Display formatted output
    tacet_outcome_print(outcome);

    // Assert the test passed
    assert(tacet_outcome_passed(outcome));

    // Cleanup
    tacet_outcome_free(outcome);
    tacet_oracle_free(oracle);
}
```

C++:

```cpp
#include <tacet.hpp>
#include <catch2/catch_test_macros.hpp>
#include <iostream>

TEST_CASE("crypto constant time") {
    auto outcome = tacet::TimingOracle::forAttacker(
        tacet::AttackerModel::AdjacentNetwork
    )
    .timeBudget(std::chrono::seconds(30))
    .test(
        []() { return std::array<uint8_t, 32>{}; },       // baseline
        []() { return randomArray<uint8_t, 32>(); },      // sample
        [](const auto& data) { myCryptoFunction(data); }  // measure
    );

    // Display formatted output
    std::cout << outcome << std::endl;

    // Assert the test passed
    REQUIRE(outcome.passed());
}
```

Go:

```go
package crypto_test

import (
    "crypto/rand"
    "testing"
    "time"

    "github.com/yourusername/tacet-go"
)

func TestCryptoConstantTime(t *testing.T) {
    outcome := tacet.ForAttacker(tacet.AdjacentNetwork).
        TimeBudget(30 * time.Second).
        Test(tacet.TestConfig{
            Baseline: func() []byte { return make([]byte, 32) },
            Sample: func() []byte {
                data := make([]byte, 32)
                rand.Read(data)
                return data
            },
            Measure: func(data []byte) { myCryptoFunction(data) },
        })

    // Display formatted output
    t.Log(outcome)

    // Assert the test passed
    if !outcome.Passed() {
        t.Fatalf("Timing leak detected")
    }
}
```

Running tests
The critical flag: single-threaded execution
Timing tests should run single-threaded to avoid interference:
Rust:

```shell
cargo test --test timing_tests -- --test-threads=1 --nocapture
```

JavaScript:

```shell
# Bun runs tests single-threaded by default
bun test

# npm/yarn with Jest: use --runInBand
npm test -- --runInBand
```

C / C++:

```shell
# CMake with CTest
ctest --test-dir build --serial

# Or run the test binary directly
./build/timing_tests
```

Go:

```shell
# Go runs tests single-threaded by default
go test

# Explicitly set parallelism to 1
go test -p 1 -parallel 1
```

Why single-threaded?
- Measurement isolation: Parallel tests compete for CPU resources, adding noise to timing measurements.
- Cycle-accurate timers: On macOS, the kperf cycle counter can only be accessed by one thread at a time. Parallel tests cause a silent fallback to the coarse timer.
- Reproducibility: Single-threaded execution produces more consistent results across runs.
Enabling cycle-accurate timers
For higher precision on ARM64, enable cycle-accurate timers. Use TimerSpec::CyclePrecision in your test code to explicitly request high precision:
macOS:

```shell
# Requires both sudo AND single-threaded
sudo -E cargo test --test timing_tests -- --test-threads=1
```

The -E preserves your environment variables (like PATH and CARGO_HOME).

Linux:

```shell
# Option 1: Run as root
sudo cargo test --test timing_tests -- --test-threads=1

# Option 2: Grant capability (for CI runners)
sudo setcap cap_perfmon+ep ./target/debug/deps/timing_tests-*
cargo test --test timing_tests -- --test-threads=1
```

Time budget recommendations
| Context | Time budget | Rationale |
|---|---|---|
| Development | 10s | Fast iteration |
| PR checks | 30s | Balance speed and accuracy |
| Nightly/security audit | 60s+ | Higher confidence |
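One way to apply this table without editing each test is to read the budget from the environment and default to the CI value. This is a sketch, not a tacet feature: the `TACET_BUDGET_SECS` variable name and the `budget_from_env` helper are illustrative.

```rust
use std::env;
use std::time::Duration;

/// Pick a time budget from the environment so one test suits every context.
/// `TACET_BUDGET_SECS` is an illustrative variable name, not part of tacet.
fn budget_from_env() -> Duration {
    let secs = env::var("TACET_BUDGET_SECS")
        .ok()
        .and_then(|s| s.parse::<u64>().ok())
        .unwrap_or(30); // default to the CI value from the table
    Duration::from_secs(secs)
}

fn main() {
    // Pass the result to `.time_budget(...)` when building the oracle.
    println!("time budget: {:?}", budget_from_env());
}
```

A nightly job can then export `TACET_BUDGET_SECS=60` without touching the test source.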
Rust:

```rust
use tacet::{TimingOracle, AttackerModel};
use std::time::Duration;

// Development: quick feedback
TimingOracle::for_attacker(AttackerModel::AdjacentNetwork)
    .time_budget(Duration::from_secs(10))

// CI: standard checks
TimingOracle::for_attacker(AttackerModel::AdjacentNetwork)
    .time_budget(Duration::from_secs(30))

// Security audit: thorough
TimingOracle::for_attacker(AttackerModel::AdjacentNetwork)
    .time_budget(Duration::from_secs(60))
    .max_samples(100_000)
```

TypeScript:

```typescript
import { TimingOracle, AttackerModel } from 'tacet';

// Development: quick feedback
new TimingOracle(AttackerModel.AdjacentNetwork)
  .timeBudget(10_000) // milliseconds

// CI: standard checks
new TimingOracle(AttackerModel.AdjacentNetwork)
  .timeBudget(30_000)

// Security audit: thorough
new TimingOracle(AttackerModel.AdjacentNetwork)
  .timeBudget(60_000)
  .maxSamples(100_000)
```

C:

```c
#include <tacet.h>

// Development: quick feedback
tacet_oracle_t* oracle = tacet_oracle_for_attacker(
    TACET_ATTACKER_ADJACENT_NETWORK
);
tacet_oracle_set_time_budget_secs(oracle, 10);

// CI: standard checks
tacet_oracle_set_time_budget_secs(oracle, 30);

// Security audit: thorough
tacet_oracle_set_time_budget_secs(oracle, 60);
tacet_oracle_set_max_samples(oracle, 100000);
```

C++:

```cpp
#include <tacet.hpp>
#include <chrono>

using namespace std::chrono_literals;

// Development: quick feedback
auto oracle = tacet::TimingOracle::forAttacker(
    tacet::AttackerModel::AdjacentNetwork
).timeBudget(10s);

// CI: standard checks
oracle.timeBudget(30s);

// Security audit: thorough
oracle.timeBudget(60s).maxSamples(100'000);
```

Go:

```go
import (
    "time"

    "github.com/yourusername/tacet-go"
)

// Development: quick feedback
oracle := tacet.ForAttacker(tacet.AdjacentNetwork).
    TimeBudget(10 * time.Second)

// CI: standard checks
oracle.TimeBudget(30 * time.Second)

// Security audit: thorough
oracle.TimeBudget(60 * time.Second).
    MaxSamples(100_000)
```

Handling noisy environments
CI runners are often noisier than local development machines. The library gives you tools for handling this:
Skip unreliable tests (fail-open)
Skip tests that can’t produce reliable results:
Rust:

```rust
use tacet::skip_if_unreliable;

#[test]
fn test_crypto() {
    let inputs = InputPair::new(|| [0u8; 32], || rand::random());
    let outcome = TimingOracle::for_attacker(AttackerModel::AdjacentNetwork)
        .test(inputs, |data| my_function(&data));

    // Skips if Inconclusive or Unmeasurable
    let outcome = skip_if_unreliable!(outcome, "test_crypto");

    assert!(outcome.passed(), "Timing leak detected");
}
```

TypeScript:

```typescript
test('crypto timing', async () => {
  const outcome = await new TimingOracle(AttackerModel.AdjacentNetwork)
    .test({
      baseline: () => new Uint8Array(32),
      sample: () => crypto.getRandomValues(new Uint8Array(32)),
      measure: (data) => myFunction(data)
    });

  // Skip if inconclusive or unmeasurable
  if (!outcome.isConclusive() || !outcome.isMeasurable()) {
    console.warn('Test skipped: environment too noisy');
    return;
  }

  expect(outcome.passed()).toBe(true);
});
```

C:

```c
#include <tacet.h>
#include <assert.h>
#include <stdio.h>

void test_crypto(void) {
    tacet_oracle_t* oracle = tacet_oracle_for_attacker(
        TACET_ATTACKER_ADJACENT_NETWORK
    );
    tacet_outcome_t outcome = tacet_test(oracle, gen_baseline, gen_sample, measure, NULL);

    // Skip if inconclusive or unmeasurable
    if (!tacet_outcome_is_conclusive(outcome) ||
        !tacet_outcome_is_measurable(outcome)) {
        printf("Test skipped: environment too noisy\n");
        tacet_outcome_free(outcome);
        tacet_oracle_free(oracle);
        return;
    }

    assert(tacet_outcome_passed(outcome));
    tacet_outcome_free(outcome);
    tacet_oracle_free(oracle);
}
```

C++:

```cpp
TEST_CASE("crypto timing") {
    auto outcome = tacet::TimingOracle::forAttacker(
        tacet::AttackerModel::AdjacentNetwork
    ).test(
        []() { return std::array<uint8_t, 32>{}; },
        []() { return randomArray<uint8_t, 32>(); },
        [](const auto& data) { myFunction(data); }
    );

    // Skip if inconclusive or unmeasurable
    if (!outcome.isConclusive() || !outcome.isMeasurable()) {
        SKIP("Environment too noisy");
    }

    REQUIRE(outcome.passed());
}
```

Go:

```go
func TestCryptoTiming(t *testing.T) {
    outcome := tacet.ForAttacker(tacet.AdjacentNetwork).
        Test(tacet.TestConfig{
            Baseline: func() []byte { return make([]byte, 32) },
            Sample: func() []byte {
                data := make([]byte, 32)
                rand.Read(data)
                return data
            },
            Measure: func(data []byte) { myFunction(data) },
        })

    // Skip if inconclusive or unmeasurable
    if !outcome.IsConclusive() || !outcome.IsMeasurable() {
        t.Skip("Environment too noisy")
    }

    if !outcome.Passed() {
        t.Fatal("Timing leak detected")
    }
}
```

Require reliable results (fail-closed)
Require reliable results, failing on uncertainty:
Rust:

```rust
use tacet::require_reliable;

#[test]
fn test_security_critical() {
    let inputs = InputPair::new(|| [0u8; 32], || rand::random());
    let outcome = TimingOracle::for_attacker(AttackerModel::AdjacentNetwork)
        .test(inputs, |data| critical_function(&data));

    // Fails if Inconclusive (environment too noisy)
    let outcome = require_reliable!(outcome, "test_security_critical");

    assert!(outcome.passed());
}
```

TypeScript:

```typescript
test('security critical timing', async () => {
  const outcome = await new TimingOracle(AttackerModel.AdjacentNetwork)
    .test({
      baseline: () => new Uint8Array(32),
      sample: () => crypto.getRandomValues(new Uint8Array(32)),
      measure: (data) => criticalFunction(data)
    });

  // Fail if inconclusive or unmeasurable
  if (!outcome.isConclusive()) {
    throw new Error('Test inconclusive: environment too noisy');
  }
  if (!outcome.isMeasurable()) {
    throw new Error('Test unmeasurable: operation too fast');
  }

  expect(outcome.passed()).toBe(true);
});
```

C:

```c
#include <tacet.h>
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

void test_security_critical(void) {
    tacet_oracle_t* oracle = tacet_oracle_for_attacker(
        TACET_ATTACKER_ADJACENT_NETWORK
    );
    tacet_outcome_t outcome = tacet_test(oracle, gen_baseline, gen_sample, measure_critical, NULL);

    // Fail if inconclusive or unmeasurable
    if (!tacet_outcome_is_conclusive(outcome)) {
        fprintf(stderr, "Test inconclusive: environment too noisy\n");
        exit(1);
    }
    if (!tacet_outcome_is_measurable(outcome)) {
        fprintf(stderr, "Test unmeasurable: operation too fast\n");
        exit(1);
    }

    assert(tacet_outcome_passed(outcome));
    tacet_outcome_free(outcome);
    tacet_oracle_free(oracle);
}
```

C++:

```cpp
TEST_CASE("security critical timing") {
    auto outcome = tacet::TimingOracle::forAttacker(
        tacet::AttackerModel::AdjacentNetwork
    ).test(
        []() { return std::array<uint8_t, 32>{}; },
        []() { return randomArray<uint8_t, 32>(); },
        [](const auto& data) { criticalFunction(data); }
    );

    // Fail if inconclusive or unmeasurable
    if (!outcome.isConclusive()) {
        FAIL("Test inconclusive: environment too noisy");
    }
    if (!outcome.isMeasurable()) {
        FAIL("Test unmeasurable: operation too fast");
    }

    REQUIRE(outcome.passed());
}
```

Go:

```go
func TestSecurityCriticalTiming(t *testing.T) {
    outcome := tacet.ForAttacker(tacet.AdjacentNetwork).
        Test(tacet.TestConfig{
            Baseline: func() []byte { return make([]byte, 32) },
            Sample: func() []byte {
                data := make([]byte, 32)
                rand.Read(data)
                return data
            },
            Measure: func(data []byte) { criticalFunction(data) },
        })

    // Fail if inconclusive or unmeasurable
    if !outcome.IsConclusive() {
        t.Fatal("Test inconclusive: environment too noisy")
    }
    if !outcome.IsMeasurable() {
        t.Fatal("Test unmeasurable: operation too fast")
    }

    if !outcome.Passed() {
        t.Fatal("Timing leak detected")
    }
}
```

Environment variable override
Force a policy for all tests:
```shell
# Fail-closed: all tests must produce reliable results
TIMING_ORACLE_UNRELIABLE_POLICY=fail_closed cargo test

# Fail-open: skip unreliable tests (default)
TIMING_ORACLE_UNRELIABLE_POLICY=skip cargo test
```

The same variable works in every binding; substitute `bun test`, `./build/timing_tests`, or `go test` for `cargo test` as appropriate.

GitHub Actions example
```yaml
name: Timing Tests

on:
  push:
    branches: [main]
  pull_request:

jobs:
  timing-tests:
    runs-on: ubuntu-latest # or macos-latest

    steps:
      - uses: actions/checkout@v4

      - name: Install Rust
        uses: dtolnay/rust-toolchain@stable

      - name: Run timing tests
        run: |
          cargo test --test timing_tests -- \
            --test-threads=1 \
            --nocapture
        env:
          # Optional: stricter handling for security-critical code
          # TIMING_ORACLE_UNRELIABLE_POLICY: fail_closed
          RUST_BACKTRACE: 1

      - name: Run timing tests (with cycle-accurate timer, Linux only)
        if: runner.os == 'Linux'
        run: |
          # Grant perf_event access to the test binary
          sudo setcap cap_perfmon+ep ./target/debug/deps/timing_tests-*
          cargo test --test timing_tests -- \
            --test-threads=1 \
            --nocapture
```

Platform considerations
Cloud VMs
Cloud VMs have:
- Higher noise from noisy neighbors and hypervisor overhead
- Variable CPU frequency unless you pin to specific instance types
- No cycle-accurate timer access in most configurations
For CI on cloud VMs:
- Use AdjacentNetwork (100ns), not SharedHardware
- Increase time budgets (60s for thorough testing)
- Use skip_if_unreliable! for graceful degradation
Bare metal
Bare metal runners give you better measurement quality:
- Lower noise
- Stable CPU frequency (if configured)
- Cycle-accurate timers available
If you have bare metal CI (GitHub large runners, self-hosted), use it for security-critical timing tests.
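On GitHub Actions, a job can be routed to such machines with runner labels. A minimal sketch (the `bare-metal` label is an assumed name for your own runner group, not a built-in):

```yaml
jobs:
  timing-tests:
    # "self-hosted" plus a custom label selects your bare-metal pool
    runs-on: [self-hosted, linux, bare-metal]
    steps:
      - uses: actions/checkout@v4
      - name: Run timing tests
        run: cargo test --test timing_tests -- --test-threads=1 --nocapture
```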
Self-hosted runners
For self-hosted runners:
1. Disable CPU frequency scaling:

   ```shell
   sudo cpupower frequency-set -g performance
   ```

2. Disable turbo boost:

   ```shell
   echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
   ```

3. Minimize background processes during timing test runs

4. Pin tests to specific cores if possible
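A pre-flight check along these lines can verify step 1 before a run starts. This is a sketch; the sysfs path is Linux-specific and can differ by cpufreq driver.

```shell
# Warn if cpu0's frequency governor is not "performance"
gov_file=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
governor=$(cat "$gov_file" 2>/dev/null || echo unknown)
if [ "$governor" != "performance" ]; then
  echo "warning: governor is '$governor'; expect extra timing noise" >&2
fi
```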
Organizing tests
Consider separating timing tests from your regular test suite:
Rust:

```
tests/
├── unit/           # Fast unit tests
├── integration/    # Integration tests
└── timing/         # Timing side channel tests
    ├── aes.rs
    ├── ecdh.rs
    └── rsa.rs
```

Run them separately:

```shell
# Regular tests (fast, parallel)
cargo test --test unit --test integration

# Timing tests (slower, single-threaded)
cargo test --test timing -- --test-threads=1
```

JavaScript: same layout with `aes.test.js`, `ecdh.test.js`, `rsa.test.js`. Run them separately:

```shell
# Regular tests (fast, parallel)
bun test tests/unit tests/integration

# Timing tests (slower, single-threaded)
bun test tests/timing
```

C: same layout with `test_aes.c`, `test_ecdh.c`, `test_rsa.c`. Configure in CMakeLists.txt:

```cmake
# Regular tests (fast, can run in parallel)
add_test(NAME unit_tests COMMAND unit_tests)
add_test(NAME integration_tests COMMAND integration_tests)

# Timing tests (slower, single-threaded)
add_test(NAME timing_aes COMMAND timing_tests aes)
add_test(NAME timing_ecdh COMMAND timing_tests ecdh)
add_test(NAME timing_rsa COMMAND timing_tests rsa)

# Mark timing tests to run serially
set_tests_properties(timing_aes timing_ecdh timing_rsa
  PROPERTIES RUN_SERIAL TRUE)
```

Run them:

```shell
# Regular tests
ctest --test-dir build -R "unit|integration"

# Timing tests
ctest --test-dir build -R "timing" --serial
```

C++: same layout with `test_aes.cpp`, `test_ecdh.cpp`, `test_rsa.cpp`; the only difference from the C setup is that the test binary takes Catch2 tags in each `add_test` line (e.g. `COMMAND timing_tests [aes]`). Run with the same ctest commands.

Go: same layout with `aes_test.go`, `ecdh_test.go`, `rsa_test.go`. Run them separately:

```shell
# Regular tests (fast, parallel)
go test ./tests/unit/... ./tests/integration/...

# Timing tests (slower, single-threaded)
go test ./tests/timing/... -p 1 -parallel 1
```

Test configuration
Set default single-threaded execution for timing tests:
Rust:

```toml
# In .cargo/config.toml
[env]
# Only applies when running tests
RUST_TEST_THREADS = "1"
```

Or per-test-file:

```rust
// At the top of your timing test file
#![cfg(test)]

// Force single-threaded for this test module
fn main() {
    // This file's tests will respect --test-threads=1
}
```

JavaScript, in package.json:

```json
{
  "scripts": {
    "test": "bun test",
    "test:timing": "bun test tests/timing"
  }
}
```

Or with Jest:

```javascript
// In jest.config.js
module.exports = {
  testMatch: ['**/tests/timing/**/*.test.js'],
  maxWorkers: 1, // Single-threaded
  bail: true
};
```

C / C++:

```cmake
# In CMakeLists.txt
# Mark timing tests to run serially
set_tests_properties(
  timing_aes timing_ecdh timing_rsa
  PROPERTIES RUN_SERIAL TRUE
)

# Or set a custom property for all timing tests
set_tests_properties(
  timing_aes timing_ecdh timing_rsa
  PROPERTIES LABELS "timing"
)

# Run timing tests with: ctest -L timing --serial
```

Go:

```shell
# Create a timing_test.sh script
#!/bin/bash
go test ./tests/timing/... -p 1 -parallel 1 "$@"
```

Or use a Makefile:

```makefile
.PHONY: test test-timing

test:
	go test ./tests/unit/... ./tests/integration/...

test-timing:
	go test ./tests/timing/... -p 1 -parallel 1 -v
```

Summary
- Always run timing tests single-threaded to avoid measurement interference
- Use 30s time budget for CI, longer for security audits
- Use cycle-accurate timers (with sudo/capabilities) for higher precision on ARM64
- Handle noisy environments with skip/fail policies based on your requirements
- Separate timing tests from regular tests in your CI workflow
- Cloud VMs are noisier: use relaxed thresholds or longer budgets