Power Side-Channel Analysis

This guide is intended for security researchers and hardware evaluators with experience in power analysis or electromagnetic (EM) side-channel attacks. It assumes familiarity with concepts like TVLA, DPA, CPA, and the two-class (fixed-vs-random) testing methodology.

tacet’s power module applies Bayesian inference to detect data-dependent power consumption in pre-collected trace datasets. Unlike traditional t-test or Welch’s test approaches (TVLA, DudeCT), it:

  • Quantifies uncertainty via posterior probabilities rather than p-values
  • Handles temporal dependence via an effective sample size based on the integrated autocorrelation time (IACT)
  • Provides effect localization with per-partition posterior probabilities
  • Adapts to high dimension (d up to 32 by default) with regularized covariance estimation

The statistical engine is shared with timing analysis (see Power Module Specification), extended to handle multidimensional power features extracted from trace partitioning.

Enable the power feature:

[dependencies]
tacet = { version = "0.2", features = ["power"] }

Basic usage:

use tacet::power::{analyze, Config, Dataset, Trace, Class, PowerUnits};

// Create dataset from your traces
let dataset = Dataset {
    traces: vec![
        Trace {
            class: Class::Fixed,
            samples: vec![/* power measurements as f32 */],
            markers: None,
            id: 0,
        },
        // ... more traces (interleaved Fixed and Random)
    ],
    units: PowerUnits::Volts,
    sample_rate_hz: Some(1e9), // 1 GS/s
    meta: Default::default(),
};

// Analyze with default configuration
let report = analyze(&Config::default(), &dataset);

// Check results
println!("Leak probability: {:.1}%", report.leak_probability * 100.0);
println!("Max effect: {:.2} {} (CI: [{:.2}, {:.2}])",
    report.max_effect,
    report.units.as_str(),
    report.max_effect_ci95.0,
    report.max_effect_ci95.1
);

Power analysis uses the DudeCT two-class pattern to detect data-dependent leakage:

Fixed class: Device operates on constant data (typically all zeros: 0x00...00)

  • Same plaintext/key/operand for all Fixed traces
  • If implementation is constant-time, traces should cluster (low variance)

Random class: Device operates on random data

  • Different plaintext/key/operand for each Random trace
  • If implementation is constant-time, should have similar variance to Fixed
  • If data-dependent, variance will differ from Fixed

Example for AES:

// Fixed: encrypt plaintext=0x00000000000000000000000000000000 with key=0x000...000
// Random: encrypt plaintext=rand() with key=0x000...000
// (or vice versa: fixed plaintext, random key)

The Bayesian engine detects differences in feature distributions between classes. If power consumption is data-dependent, the two classes will show systematically different patterns.

Typical measurement setup:

┌─────────────────┐
│  Target Device  │ ← Trigger signal
│      (DUT)      │
└────────┬────────┘
         │ Power/EM probe
┌────────┴────────┐
│  Oscilloscope/  │ ← Acquisition controller
│    Digitizer    │   (Python/MATLAB/ChipWhisperer)
└────────┬────────┘
         │ Trace files (.npy, .bin, .mat)
┌────────┴────────┐
│  tacet analysis │
└─────────────────┘

Acquisition workflow:

  1. Configure trigger: Align traces to operation start (e.g., AES first round begins)
  2. Interleave classes: Alternate Fixed and Random (FRFRFR…); this is critical for validity (see the sketch after this list)
  3. Label each trace: Tag with Class::Fixed or Class::Random during acquisition
  4. Record metadata: Units, sample rate, timestamps
  5. Optional stage markers: Record boundaries if analyzing multi-stage operations
  6. Export: Save as binary arrays for tacet
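
As a sketch of steps 2 and 3, an acquisition loop that alternates classes and labels each trace as it is captured. Here capture_trace and random_block are hypothetical stand-ins for your scope/DUT glue, and the id type is an assumption:

use tacet::power::{Class, Trace};

// Hypothetical acquisition glue: replace with your scope/DUT interface.
fn capture_trace(_input: &[u8]) -> Vec<f32> {
    unimplemented!("arm trigger, run the operation, read back samples")
}
fn random_block() -> [u8; 16] {
    unimplemented!("fresh random operand per Random trace")
}

fn acquire_interleaved(n: u64) -> Vec<Trace> {
    let fixed_input = [0u8; 16]; // constant operand for the Fixed class
    let mut traces = Vec::with_capacity(n as usize);
    for i in 0..n {
        // Strict FRFRFR... interleaving: even indices Fixed, odd indices Random.
        let (class, input) = if i % 2 == 0 {
            (Class::Fixed, fixed_input)
        } else {
            (Class::Random, random_block())
        };
        traces.push(Trace {
            class,
            samples: capture_trace(&input),
            markers: None,
            id: i, // acquisition order is preserved through the trace ID
        });
    }
    traces
}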

Supported devices:

  • Oscilloscopes: Keysight, Tektronix, Rigol (CSV/binary export)
  • ChipWhisperer (.npy)
  • Riscure Inspector (.bin, .mat)
  • PicoScope (Python API)
  • Custom ADCs (any source producing Vec<f32>)

Trace structure:

use tacet::power::{Trace, Class};

let trace = Trace {
    class: Class::Fixed,  // or Class::Random
    samples: vec![/* f32 values */],
    markers: None,        // or Some(vec![...]) for multi-stage
    id: 0,                // unique trace ID (preserves acquisition order)
};

Requirements:

  • Samples: f32 values (ADC counts, volts, millivolts—units tracked separately)
  • Equal length: All traces same sample count (unless using stage markers)
  • Minimum count: 1,000+ traces recommended (500 per class minimum)
  • Balanced classes preferred but not required

The library processes traces in acquisition order, preserving temporal structure for IACT estimation.
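
A pre-flight check along these lines (not part of the tacet API; it assumes the Dataset and Trace shapes shown above) can catch length mismatches and undersized classes before analysis:

use tacet::power::{Class, Dataset};

// Sanity-check a dataset against the requirements listed above.
fn preflight(dataset: &Dataset) -> Result<(), String> {
    let n_fixed = dataset.traces.iter()
        .filter(|t| matches!(t.class, Class::Fixed))
        .count();
    let n_random = dataset.traces.len() - n_fixed;
    if n_fixed < 500 || n_random < 500 {
        return Err(format!(
            "too few traces per class: {} fixed, {} random (500 each minimum)",
            n_fixed, n_random
        ));
    }
    // Without stage markers, every trace must have the same sample count.
    let uses_markers = dataset.traces.iter().any(|t| t.markers.is_some());
    if !uses_markers {
        let len0 = dataset.traces.first().map_or(0, |t| t.samples.len());
        if dataset.traces.iter().any(|t| t.samples.len() != len0) {
            return Err("unequal trace lengths without stage markers".into());
        }
    }
    Ok(())
}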

For multi-stage operations (AES rounds, RSA modexp steps), markers segment traces for independent analysis:

use tacet::power::{Marker, StageId};

let markers = vec![
    Marker { stage: StageId("key_schedule".into()), start: 0, end: 800 },
    Marker { stage: StageId("round_00".into()), start: 800, end: 1600 },
    Marker { stage: StageId("round_01".into()), start: 1600, end: 2400 },
    // ... more rounds ...
];

let trace = Trace {
    class: Class::Fixed,
    samples: vec![/* 10000 samples */],
    markers: Some(markers),
    id: 0,
};

Marker extraction methods:

  1. Manual annotation (inspect averaged trace)
  2. Hardware triggers at stage boundaries
  3. Algorithmic detection (peak finding, zero-crossing)
  4. GPIO toggles from instrumented firmware
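
For fixed-width stages, markers can also be generated programmatically instead of annotated by hand. A sketch assuming the Marker and StageId types from the example above, with start/end as sample indices and equal 800-sample rounds:

use tacet::power::{Marker, StageId};

// Key schedule in the first 800 samples, then ten equal-width AES rounds.
let mut markers = vec![
    Marker { stage: StageId("key_schedule".into()), start: 0, end: 800 },
];
for round in 0..10 {
    let start = 800 + round * 800;
    markers.push(Marker {
        stage: StageId(format!("round_{:02}", round)),
        start,
        end: start + 800,
    });
}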

Benefits:

  • Isolate leakage sources (“Round 3 leaks, others safe”)
  • Stage-specific threshold estimates (varying SNR across stages)
  • Handle variable-length stages (automatic resampling to median length)

Loading pre-recorded traces from NumPy arrays via a hypothetical Python binding:

import numpy as np
from tacet_python import Trace, Class, Dataset, PowerUnits  # hypothetical binding

traces_fixed = np.load("fixed_traces.npy")    # Shape: (n_traces, n_samples)
traces_random = np.load("random_traces.npy")

# NOTE: ids should reflect acquisition order. If classes were interleaved
# during capture, use the original acquisition indices rather than file order.
dataset_traces = []
for i, samples in enumerate(traces_fixed):
    dataset_traces.append(Trace(
        class_=Class.Fixed,
        samples=samples.tolist(),
        markers=None,
        id=i
    ))
for i, samples in enumerate(traces_random):
    dataset_traces.append(Trace(
        class_=Class.Random,
        samples=samples.tolist(),
        markers=None,
        id=len(traces_fixed) + i
    ))

dataset = Dataset(
    traces=dataset_traces,
    units=PowerUnits.Arbitrary("ChipWhisperer ADC"),
    sample_rate_hz=29.5e6,  # ChipWhisperer-Lite: 29.5 MS/s
    meta={}
)

Three feature families extract different statistics from partitioned traces:

Mean: Average power per partition. Fast and effective for first-order leakage.

use tacet::power::{Config, FeatureFamily};

let config = Config {
    features: FeatureFamily::Mean,
    ..Default::default()
};

Dimension: d = partitions × stages

Use when:

  • First-order leakage expected (power ∝ Hamming weight)
  • Large datasets (n > 1000)
  • Quick exploratory analysis
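
The other two families referenced in this guide, Robust3 and CenteredSquare, are selected the same way. A sketch, assuming the variant names used elsewhere in this guide:

use tacet::power::{Config, FeatureFamily};

// Robust3: 3× the dimension of Mean (see Dimension diagnostics).
// Falls back to Mean when n_eff < 150 (Robust3Fallback warning).
let robust = Config {
    features: FeatureFamily::Robust3,
    ..Default::default()
};

// CenteredSquare: second-order features for evaluating masked implementations.
let second_order = Config {
    features: FeatureFamily::CenteredSquare,
    ..Default::default()
};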

Control spatial resolution (how traces are divided into bins):

use tacet::power::{Config, PartitionConfig};
use std::collections::HashMap;

// Global: same partition count for all stages
let config = Config {
    partitions: PartitionConfig::Global { count: 32 }, // Default
    ..Default::default()
};

// Per-stage: custom partition count per stage
let mut per_stage = HashMap::new();
per_stage.insert("key_schedule".into(), 16);
per_stage.insert("round_0".into(), 64);
let config = Config {
    partitions: PartitionConfig::PerStage(per_stage),
    ..Default::default()
};

Trade-offs:

  • Fewer partitions (8-16): Coarse spatial resolution, lower dimension, less data needed
  • More partitions (64-128): Fine spatial resolution, higher dimension, more data needed

Default: 32 (balances resolution vs data requirements)

If total dimension exceeds d_max (default 32), adjacent partitions merge automatically.

let config = Config {
    d_max: 64, // Allow higher dimension (default: 32)
    ..Default::default()
};

Higher dimension requires more data to avoid the Overstressed regime (r = d/n_eff > 0.15). For example, raising d from 32 to 64 with n_eff = 400 moves r from 0.08 (Stressed) to 0.16 (Overstressed).

When to increase d_max:

  • Many stages needing fine partitioning
  • Large dataset (n > 5000)
  • Willing to accept Stressed regime warnings

When to keep default:

  • Limited data (n < 2000)
  • Want WellConditioned regime (r ≤ 0.05)

Preprocessing options:

use tacet::power::{Config, Preprocessing};

let config = Config {
    preprocessing: Preprocessing {
        dc_removal: true,            // Subtract per-trace mean (recommended)
        scale_normalization: false,  // Divide by per-trace std (NOT recommended)
        winsor_percentile: 0.9999,   // Outlier clipping (normative)
    },
    ..Default::default()
};

DC removal (safe): Removes baseline drift between traces
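
Conceptually, DC removal is per-trace mean subtraction. A standalone illustration of the idea (not tacet's internal implementation):

// Per-trace mean subtraction: removes baseline offset, keeps the shape.
fn remove_dc(samples: &mut [f32]) {
    if samples.is_empty() {
        return;
    }
    let mean = samples.iter().sum::<f32>() / samples.len() as f32;
    for s in samples.iter_mut() {
        *s -= mean;
    }
}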

Scale normalization (use with caution): Masks amplitude-based leakage. Only enable if amplitude variations are purely environmental noise.

Winsorization (always enabled): Clips extreme outliers at the 0.01st and 99.99th percentiles. The winsorization rate is reported in the diagnostics.

Threshold control:

let config = Config {
    floor_multiplier: 2.0, // More conservative (default: 1.0)
    ..Default::default()
};

Effective threshold: θ_eff = floor_multiplier × θ_floor

  • 1.0 (default): Detect anything above measurement resolution
  • 2.0-3.0: More conservative, reduce noise sensitivity
  • < 1.0: Not recommended (below measurement floor)
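
A worked example of the formula above, with an illustrative 0.05 mV floor and the conservative multiplier from the snippet:

// θ_eff = floor_multiplier × θ_floor
let theta_floor = 0.05;      // illustrative: 0.05 mV measurement floor
let floor_multiplier = 2.0;  // the conservative setting shown above
let theta_eff = floor_multiplier * theta_floor;
// Only effects above 0.1 mV now count toward P(leak > θ_eff).
assert_eq!(theta_eff, 0.1);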

Reading the report:

let report = analyze(&config, &dataset);

// Effect estimates
println!("Max effect: {:.2} {}", report.max_effect, report.units.as_str());
println!("95% CI: [{:.2}, {:.2}]",
    report.max_effect_ci95.0,
    report.max_effect_ci95.1
);

// Leak probability
println!("P(leak > θ_eff): {:.1}%", report.leak_probability * 100.0);

// Threshold context (prominent in output)
println!("Measurement floor (θ_floor): {:.2} {}", report.theta_floor, report.units.as_str());
println!("Effective threshold (θ_eff): {:.2} {}", report.theta_eff, report.units.as_str());

// Dimension diagnostics
println!("Dimension: {} (regime: {:?})", report.dimension.d, report.dimension.regime);
println!("Effective samples: {:.0} (r = {:.4})",
    report.dimension.n_eff,
    report.dimension.r
);

Top features identify where leakage occurs:

for hotspot in report.top_features.iter().take(5) {
    println!("Stage: {:?}, Partition: {}, Type: {:?}",
        hotspot.stage,
        hotspot.partition,
        hotspot.feature_type
    );
    println!("  Effect: {:.2} {} ({:.1}% exceed θ_eff)",
        hotspot.effect_mean,
        report.units.as_str(),
        hotspot.exceed_prob * 100.0
    );
}

Use to:

  • Identify leaking stages (“Round 3 problematic, others OK”)
  • Pinpoint spatial location (partition index in trace)
  • Understand feature type (Mean/Median/Tail/CenteredSquare)

If markers present:

if let Some(stages) = &report.stages {
    for stage in stages {
        println!("\n{}: P(leak)={:.1}%, effect={:.2} {}",
            stage.stage.0,
            stage.leak_probability * 100.0,
            stage.max_effect,
            report.units.as_str()
        );
    }
}

Regime indicates covariance estimation quality:

Regime             r = d/n_eff    Interpretation
WellConditioned    ≤ 0.05         Excellent: covariance well-estimated
Stressed           0.05–0.15      Acceptable: automatic regularization applied
Overstressed       > 0.15         Poor: reduce d or collect more data
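
The thresholds can be checked by hand from the report's dimension diagnostics. A standalone sketch of the classification in the table above (report.dimension.regime gives this directly):

// Classify r = d / n_eff using the thresholds from the table above.
fn regime_label(d: usize, n_eff: f64) -> &'static str {
    let r = d as f64 / n_eff;
    if r <= 0.05 {
        "WellConditioned"
    } else if r <= 0.15 {
        "Stressed"
    } else {
        "Overstressed"
    }
}

assert_eq!(regime_label(32, 1000.0), "WellConditioned"); // r = 0.032
assert_eq!(regime_label(32, 400.0), "Stressed");         // r = 0.08
assert_eq!(regime_label(64, 400.0), "Overstressed");     // r = 0.16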

If Overstressed:

  1. Reduce d_max (forces partition merging)
  2. Use Mean instead of Robust3 (3× dimension reduction)
  3. Collect more traces (increase n_eff)
  4. Use coarser partitions

Problem: All Fixed traces, then all Random (or vice versa).

Consequence: Environmental drift causes false positives—high leak probability even for constant-time code.

Detection: BlockedAcquisitionDetected warning (if classes fully separated and span >1 hour).

Solution:

  • Re-acquire with interleaving (FRFRFR… or small blocks)
  • If impossible, treat results as exploratory only
  • Monitor temperature/voltage logs to assess drift magnitude
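
A quick heuristic for spotting fully blocked acquisition before analysis (a sketch, not the library's BlockedAcquisitionDetected logic):

use tacet::power::{Class, Dataset};

fn is_fixed(class: &Class) -> bool {
    matches!(class, Class::Fixed)
}

// Count class transitions in acquisition order. A strict FRFR... design
// has about n - 1 transitions; a fully blocked acquisition has exactly 1.
fn class_transitions(dataset: &Dataset) -> usize {
    dataset.traces.windows(2)
        .filter(|pair| is_fixed(&pair[0].class) != is_fixed(&pair[1].class))
        .count()
}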

Problem: Computing alignment template from only one class.

Consequence: Leaks class information into preprocessing, creates artificial differences.

Correct:

use tacet::power::{Config, Alignment};

let config = Config {
    alignment: Alignment::TemplateXCorr {
        max_shift: 100,
        // Template computed from BOTH classes (pooled calibration)
    },
    ..Default::default()
};

Problem: Using Robust3 with n_eff < 150.

Symptoms: Robust3Fallback warning, automatic fallback to Mean.

Solution:

  • Collect more traces
  • Use Mean feature family
  • Set force_robust3: true (not recommended unless justified)

Problem: Proceeding with Overstressed regime (r > 0.15).

Consequence: Unreliable covariance, inflated leak probabilities.

Solution: See Dimension diagnostics.

Problem: Enabling scale_normalization when amplitude carries information.

Example: Hamming weight leakage manifests as amplitude differences—per-trace std normalization removes this signal.

When safe:

  • Amplitude variations are purely environmental
  • Only timing-based features matter (peak arrival time)

Default: OFF (safe choice)
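
A small numeric illustration of the problem: two traces whose only difference is a Hamming-weight-dependent amplitude become indistinguishable after per-trace standardization (standalone sketch, not tacet internals):

// Same shape, different amplitude (think Hamming weight 1 vs 8).
let low_hw: Vec<f32> = vec![0.0, 1.0, 0.0, -1.0];
let high_hw: Vec<f32> = vec![0.0, 2.0, 0.0, -2.0];

// Per-trace standardization: subtract the mean, divide by the std.
fn normalize(trace: &[f32]) -> Vec<f32> {
    let n = trace.len() as f32;
    let mean = trace.iter().sum::<f32>() / n;
    let std = (trace.iter().map(|x| (x - mean).powi(2)).sum::<f32>() / n).sqrt();
    trace.iter().map(|x| (x - mean) / std).collect()
}

// Both print as [0.0, 1.414, 0.0, -1.414]: the amplitude difference,
// i.e. the leakage, is gone.
println!("{:?}", normalize(&low_hw));
println!("{:?}", normalize(&high_hw));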

For traces with trigger jitter:

use tacet::power::{Config, Alignment};

let config = Config {
    alignment: Alignment::None, // Default if markers present
    ..Default::default()
};

Use when markers handle segmentation or jitter is minimal.

Calibration budget:

let config = Config {
    calibration_traces: 1000, // Default: 500
    ..Default::default()
};

Calibration traces are used to estimate V_cal, the IACT, θ_floor, and the Bayesian prior.

Trade-off:

  • More calibration → better estimates, fewer inference traces
  • Fewer calibration → more inference traces, noisier estimates

Default (500) usually adequate.

let config = Config {
    bootstrap_iterations: 5000, // Default: 2000
    ..Default::default()
};

For covariance estimation. Increase if dimension is high (d > 20) or covariance is complex.

let config = Config {
    seed: Some(42), // Fixed RNG seed
    ..Default::default()
};

For reproducible bootstrap and Gibbs sampling in publications.

Example: basic leakage check:

let dataset = load_dataset("traces.bin")?;
let report = analyze(&Config::default(), &dataset);

if report.leak_probability > 0.95 {
    println!("⚠️ Strong leakage: {:.2} {}", report.max_effect, report.units.as_str());
    for hotspot in report.top_features.iter().take(3) {
        println!("  Partition {}: {:.2} {}",
            hotspot.partition,
            hotspot.effect_mean,
            report.units.as_str()
        );
    }
} else {
    println!("✓ No significant leakage (P={:.1}%)", report.leak_probability * 100.0);
}

Example: per-stage analysis with markers:

let dataset = load_dataset_with_markers("aes_traces.bin")?;
let config = Config {
    partitions: PartitionConfig::Global { count: 64 },
    ..Default::default()
};
let report = analyze(&config, &dataset);

if let Some(stages) = &report.stages {
    let mut leaky: Vec<_> = stages.iter()
        .filter(|s| s.leak_probability > 0.5)
        .collect();
    leaky.sort_by(|a, b| b.leak_probability.partial_cmp(&a.leak_probability).unwrap());
    for s in leaky {
        println!("{}: P={:.1}%, effect={:.2} {}",
            s.stage.0,
            s.leak_probability * 100.0,
            s.max_effect,
            report.units.as_str()
        );
    }
}

Example: masking evaluation (first- and second-order):

// First: check first-order leakage with Mean features
let config_1st = Config {
    features: FeatureFamily::Mean,
    ..Default::default()
};
let report_1st = analyze(&config_1st, &dataset);
if report_1st.leak_probability < 0.5 {
    println!("✓ First-order masking effective (P={:.1}%)",
        report_1st.leak_probability * 100.0);
} else {
    println!("⚠️ First-order leakage!");
}

// Second: check second-order leakage with CenteredSquare features
let config_2nd = Config {
    features: FeatureFamily::CenteredSquare,
    ..Default::default()
};
let report_2nd = analyze(&config_2nd, &dataset);
if report_2nd.leak_probability > 0.95 {
    println!("⚠️ Second-order leakage: {:.2} {}²",
        report_2nd.max_effect,
        report_2nd.units.as_str()
    );
} else {
    println!("✓ No second-order leakage (P={:.1}%)",
        report_2nd.leak_probability * 100.0);
}

Example: partition count sweep:

for count in [8, 16, 32, 64, 128] {
    let config = Config {
        partitions: PartitionConfig::Global { count },
        ..Default::default()
    };
    let report = analyze(&config, &dataset);
    println!("Partitions: {}", count);
    println!("  d={} ({:?}), P(leak)={:.1}%, θ_floor={:.2} {}",
        report.dimension.d,
        report.dimension.regime,
        report.leak_probability * 100.0,
        report.theta_floor,
        report.units.as_str()
    );
}

Inspecting diagnostics:

let diag = &report.diagnostics;

// IACT
println!("IACT: {:.2} (cal: {:.2})", diag.iact_combined, diag.iact_cal);

// Gibbs sampler health
println!("Gibbs ESS: {:.0}", diag.gibbs_ess);
println!("Lambda mixing: {}", diag.lambda_mixing_ok);
println!("Kappa mixing: {}", diag.kappa_mixing_ok);

// Quality issues
for issue in &diag.issues {
    println!("⚠️ {:?}: {}", issue.code, issue.message);
}

Key issues:

  • OverstressedRegime → reduce d or collect more data
  • StressedRegime → consider reducing d
  • HighWinsorRate → outliers present, check acquisition
  • LowEffectiveSamples → n_eff < 100, need more data
  • BlockedAcquisitionDetected → drift likely
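
For CI-style gating, these codes can be matched to decide whether a run's verdict is trustworthy. A sketch assuming the codes above are variants of an IssueCode enum exported by the power module (path and shape are assumptions):

use tacet::power::IssueCode; // hypothetical path

// Treat Overstressed or blocked acquisition as "do not trust this verdict".
let fatal = report.diagnostics.issues.iter().any(|issue| matches!(
    issue.code,
    IssueCode::OverstressedRegime | IssueCode::BlockedAcquisitionDetected
));
if fatal {
    eprintln!("diagnostics invalidate this run; re-acquire or reduce d");
    std::process::exit(2);
}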

Performance tips:

  1. Use release mode: cargo build --release --features power
  2. Parallelism is enabled by default via the parallel feature (4-8× speedup)
  3. Reduce dimension for large datasets:
    • Start with Mean (not Robust3)
    • Coarser partitions (16-32, not 64-128)
    • Lower d_max

Need help? Report issues at github.com/agucova/tacet/issues.