ME-rPPG Estimator

NOTE

Web SDKs Only: Realtime estimation is currently available only in the web-based SDKs (JavaScript, React, Vue). It is not available in the mobile SDKs (iOS, Android, Flutter, React Native).

Overview

The ME-rPPG (Memory-Efficient remote Photoplethysmography) Estimator is an AI-powered heart rate measurement system that uses deep learning to analyze facial video. It provides state-of-the-art accuracy with excellent tolerance to motion and lighting variations.

NOTE

Open-Source Technology: ME-rPPG is open-source software, freely available for research, development, and commercial use.

Key Features

  • AI-Powered: Neural network learns optimal signal extraction
  • Fast Results: First estimate in ~3 seconds
  • Motion Tolerant: Works even with slight movement
  • Lighting Robust: Handles varying lighting conditions
  • Memory Efficient: Only 3.6 MB runtime memory
  • Open Source: Free to use and modify

Basic Usage

typescript
import { createVitalSignCamera, RealtimeEstimatorType } from 'ts-vital-sign-camera';

const camera = createVitalSignCamera({
  realtimeEstimationConfig: {
    estimatorType: RealtimeEstimatorType.MeRppg,
    earlyEstimation: true,
    minDuration: 3,
    minConfidence: 0.3
  }
});

camera.onVideoFrameProcessed = (event) => {
  const estimation = event.realtimeEstimation;
  if (estimation) {
    console.log(`HR: ${estimation.heartRate} BPM`);
    console.log(`Confidence: ${estimation.confidence}`);
  }
};

Configuration

Basic Configuration

typescript
{
  estimatorType: RealtimeEstimatorType.MeRppg,
  earlyEstimation: true,    // Show results early
  minDuration: 3,           // Minimum 3 seconds
  minConfidence: 0.3,       // Lower threshold for early results
  debug: false              // Enable for troubleshooting
}

Advanced Configuration

typescript
{
  estimatorType: RealtimeEstimatorType.MeRppg,
  earlyEstimation: true,
  minDuration: 3,
  minConfidence: 0.3,
  
  // Custom model paths (optional)
  modelPath: '/models/me-rppg/model.onnx',
  statePath: '/models/me-rppg/state.json',
  welchPath: '/models/me-rppg/welch_psd.onnx',
  hrPath: '/models/me-rppg/get_hr.onnx',
  
  // Lambda parameter for temporal normalization
  lambda: 1.0  // Default: 1.0 second half-life
}

Lambda Parameter

Controls how quickly the estimator adapts to changes:

| Value | Behavior        | Use Case                     |
|-------|-----------------|------------------------------|
| 0.33  | Fast adaptation | Quick response to HR changes |
| 1.0   | Default         | Balanced performance         |
| 3.0   | Slow adaptation | Maximum stability            |
typescript
// For fitness apps (quick response)
{ lambda: 0.33 }

// For medical apps (maximum stability)
{ lambda: 3.0 }
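The "half-life" framing can be made concrete: with exponential decay, a frame's weight halves every `lambda` seconds. The sketch below (the `decayFactor` helper is illustrative, not part of the SDK) converts lambda into a per-frame decay coefficient at a given frame rate:

```typescript
// Per-frame decay coefficient for an exponential half-life of
// `lambdaSeconds`, sampled at `fps` frames per second. After
// lambdaSeconds worth of frames, a sample's weight has fallen to 0.5.
function decayFactor(lambdaSeconds: number, fps: number): number {
  return Math.pow(0.5, 1 / (lambdaSeconds * fps));
}

// At 30 FPS:
decayFactor(0.33, 30); // fast adaptation: old frames fade quickly
decayFactor(1.0, 30);  // default: balanced
decayFactor(3.0, 30);  // slow adaptation: long memory, maximum stability
```

A smaller lambda yields a smaller coefficient, so older frames are forgotten faster; a larger lambda keeps more history, which is why it stabilizes readings.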

Model Setup

1. Model Files

Ensure the following files are available:

/public/models/me-rppg/
├── model.onnx        (2.5 MB)  - Main ME-rPPG model
├── state.json        (7 MB)    - Initial temporal state
├── welch_psd.onnx    (93 KB)   - Welch PSD model
└── get_hr.onnx       (1.5 KB)  - HR extraction model
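Before starting a scan, you can verify that all four files are actually reachable from the browser. A minimal preflight sketch, assuming the default base directory above (the helper names are ours, not SDK API):

```typescript
// The four ME-rPPG model files, relative to a base directory.
const MODEL_FILES = ['model.onnx', 'state.json', 'welch_psd.onnx', 'get_hr.onnx'];

function modelUrls(base: string): string[] {
  return MODEL_FILES.map((f) => `${base.replace(/\/$/, '')}/${f}`);
}

// HEAD-request each file; throw before scanning starts if any is missing.
async function preflightModels(base = '/models/me-rppg'): Promise<void> {
  const results = await Promise.all(
    modelUrls(base).map(async (url) => ({
      url,
      ok: (await fetch(url, { method: 'HEAD' })).ok,
    }))
  );
  const missing = results.filter((r) => !r.ok).map((r) => r.url);
  if (missing.length > 0) {
    throw new Error(`Missing model files: ${missing.join(', ')}`);
  }
}
```

Run `preflightModels()` once at app startup; a failure here points at a hosting problem rather than an SDK problem.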

2. Server Configuration

Ensure your web server serves ONNX files correctly:

nginx
# Nginx example
location /models/ {
    types {
        application/octet-stream onnx;
    }
    add_header Access-Control-Allow-Origin *;
}
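If you serve the files from Node instead of Nginx, the same two requirements apply: a byte-stream Content-Type for `.onnx` and a CORS header. A minimal sketch using only Node's built-in `http` module (port and directory are placeholders; no query-string or path-traversal hardening):

```typescript
import http from 'node:http';
import { createReadStream, existsSync } from 'node:fs';
import path from 'node:path';

// Map extensions to MIME types; .onnx gets application/octet-stream,
// matching the Nginx configuration.
function contentTypeFor(file: string): string {
  const types: Record<string, string> = {
    '.onnx': 'application/octet-stream',
    '.json': 'application/json',
  };
  return types[path.extname(file)] ?? 'application/octet-stream';
}

const server = http.createServer((req, res) => {
  const file = path.join('public', path.normalize(req.url ?? '/'));
  if (!existsSync(file)) {
    res.statusCode = 404;
    res.end();
    return;
  }
  res.writeHead(200, {
    'Content-Type': contentTypeFor(file),
    'Access-Control-Allow-Origin': '*', // allow cross-origin model loads
  });
  createReadStream(file).pipe(res);
});

// server.listen(8080);
```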

3. Verify Loading

Enable debug mode to check model loading:

typescript
const camera = createVitalSignCamera({
  realtimeEstimationConfig: {
    estimatorType: RealtimeEstimatorType.MeRppg,
    debug: true  // Check console for loading messages
  }
});

SDK Behavior

Initialization

typescript
// Models load asynchronously (1-2 seconds)
const camera = createVitalSignCamera({
  realtimeEstimationConfig: {
    estimatorType: RealtimeEstimatorType.MeRppg
  }
});

// Wait for ready event
camera.onInitialized = () => {
  console.log('ME-rPPG models loaded');
  enableStartButton();
};

Processing Timeline

0s ────► 1-2s ────► 3s ────► 10s ────► 30s
│        │          │        │         │
│        │          │        │         └─ Scan complete
│        │          │        └─ Optimal accuracy
│        │          └─ First estimate
│        └─ Models loaded
└─ Scan starts

Estimation Updates

typescript
camera.onVideoFrameProcessed = (event) => {
  const estimation = event.realtimeEstimation;
  if (!estimation) return;
  
  const elapsed = getCurrentScanTime();
  
  if (elapsed < 5) {
    // Early estimate (lower confidence)
    showHeartRate(estimation.heartRate, 'Analyzing...');
  } else if (elapsed < 10) {
    // Improving estimate
    showHeartRate(estimation.heartRate, 'Refining...');
  } else {
    // Stable estimate
    showHeartRate(estimation.heartRate, 'Stable ✓');
  }
};

Performance Characteristics

| Metric           | Value              |
|------------------|--------------------|
| First Result     | ~3 seconds         |
| Optimal Accuracy | ~10 seconds        |
| Processing Speed | 10-30 ms per frame |
| Memory Usage     | ~4 MB runtime      |
| Model Size       | ~10 MB total       |
| Typical Error    | ±2-3 BPM           |
| Heart Rate Range | 40-180 BPM         |
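Per-frame timings vary by device, so it is worth verifying the 10-30 ms figure on your own targets. A sketch with a small rolling-mean helper (the `RollingMean` class is ours; wire the commented lines into your own frame handler):

```typescript
// Fixed-size rolling mean over the last `size` samples.
class RollingMean {
  private samples: number[] = [];
  constructor(private size = 30) {}
  push(value: number): void {
    this.samples.push(value);
    if (this.samples.length > this.size) this.samples.shift();
  }
  mean(): number {
    if (this.samples.length === 0) return 0;
    return this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
  }
}

const frameTimes = new RollingMean(30);

// Inside onVideoFrameProcessed:
//   const t0 = performance.now();
//   ...your per-frame work...
//   frameTimes.push(performance.now() - t0);
//   console.log(`avg frame time: ${frameTimes.mean().toFixed(1)} ms`);
```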

Advantages

🎯 Superior Accuracy

  • State-of-the-art neural network
  • Learns optimal signal extraction
  • Robust to individual variations

🏃 Motion Tolerance

  • AI learns to ignore motion artifacts
  • Spatial attention on stable regions
  • Temporal memory maintains continuity

💡 Lighting Robustness

  • Trained on diverse lighting conditions
  • Adaptive normalization
  • Works in bright and dim environments

⚡ Low Latency

  • Real-time processing at 30 FPS
  • Web Worker prevents UI blocking
  • Fast initial results

When to Use

✅ Best For

  • Consumer fitness and wellness apps
  • Applications with varying lighting
  • Scenarios with user movement
  • Modern devices with good connectivity
  • Projects requiring open-source licensing

❌ Avoid If

  • Instant initialization is critical
  • Bundle size must be minimal (< 1 MB)
  • Offline-first is required immediately
  • Target devices have limited memory
  • Slow network connections are common

Troubleshooting

Models Not Loading

Symptoms: No heart rate estimates, console errors

Solutions:

typescript
// 1. Enable debug mode
const camera = createVitalSignCamera({
  realtimeEstimationConfig: {
    estimatorType: RealtimeEstimatorType.MeRppg,
    debug: true  // Check console
  }
});

// 2. Use absolute paths
{
  modelPath: '/models/me-rppg/model.onnx',  // Not relative
  // ...
}

// 3. Check CORS headers
// Ensure server allows cross-origin requests

// 4. Verify file accessibility
// Open model URL directly in browser

Low Confidence Scores

Symptoms: Confidence < 0.3

Solutions:

  1. Improve lighting (natural light is best)
  2. Ensure face is centered and stable
  3. Wait for full 10 seconds
  4. Reduce motion and talking
  5. Check face detection is stable
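These checks can be automated: if confidence stays below the threshold for several consecutive seconds, prompt the user rather than silently showing nothing. A sketch (the class name, parameters, and callback are ours, not SDK API):

```typescript
// Tracks how long confidence has been continuously below a threshold
// and fires a guidance callback once the grace period is exceeded.
class LowConfidenceWatchdog {
  private belowSince: number | null = null;

  constructor(
    private threshold = 0.3,
    private graceMs = 5000,
    private onPersistentLow: () => void = () => {}
  ) {}

  // Returns true when low confidence has persisted past the grace period.
  update(confidence: number, nowMs: number): boolean {
    if (confidence >= this.threshold) {
      this.belowSince = null; // recovered; reset the timer
      return false;
    }
    this.belowSince ??= nowMs;
    if (nowMs - this.belowSince >= this.graceMs) {
      this.onPersistentLow();
      return true;
    }
    return false;
  }
}
```

In `onVideoFrameProcessed`, call `watchdog.update(estimation.confidence, performance.now())` and surface the lighting/stillness tips above when it fires.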

Unstable Readings

Symptoms: Heart rate jumps significantly

Solutions:

typescript
// Increase lambda for more stability
{
  lambda: 3.0  // Slower adaptation, more stable
}

// Increase minimum confidence
{
  minConfidence: 0.5  // Higher threshold
}
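On top of the SDK-side settings, you can also smooth the displayed value client-side with an exponential moving average, so a single outlier frame does not jump the UI. This smoothing is our addition, not an SDK feature:

```typescript
// Exponential moving average: alpha near 0 = heavy smoothing,
// alpha near 1 = follow the raw value closely.
class EmaSmoother {
  private value: number | null = null;
  constructor(private alpha = 0.2) {}
  push(raw: number): number {
    this.value =
      this.value === null ? raw : this.alpha * raw + (1 - this.alpha) * this.value;
    return this.value;
  }
}

const smoother = new EmaSmoother(0.2);
// In the frame handler: displayHeartRate(smoother.push(estimation.heartRate));
```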

Best Practices

1. Progressive Feedback

typescript
camera.onVideoFrameProcessed = (event) => {
  const estimation = event.realtimeEstimation;
  if (!estimation) return;
  
  if (estimation.isStable && estimation.confidence > 0.6) {
    // High confidence
    displayHeartRate(estimation.heartRate);
    showQualityIndicator('excellent');
  } else if (estimation.confidence > 0.3) {
    // Medium confidence
    displayHeartRate(estimation.heartRate, 'Refining...');
    showQualityIndicator('good');
  } else {
    // Low confidence
    showMessage('Adjusting... Please stay still');
    showQualityIndicator('poor');
  }
};

2. Error Handling

typescript
camera.onError = (error) => {
  if (error.message.includes('ONNX')) {
    showError('Failed to load AI models. Please check your connection.');
  } else {
    showError('An error occurred. Please try again.');
  }
};

3. User Guidance

typescript
const instructions = [
  "Position your face in the center",
  "Stay still and don't talk",
  "Ensure good lighting",
  "Wait for stable indicator"
];

showInstructions(instructions);

Comparison with FDA Estimator

| Feature            | ME-rPPG           | FDA                  |
|--------------------|-------------------|----------------------|
| Accuracy           | ⭐⭐⭐⭐⭐        | ⭐⭐⭐⭐⭐           |
| Motion Tolerance   | ⭐⭐⭐⭐⭐        | ⭐⭐⭐⭐             |
| Lighting Tolerance | ⭐⭐⭐⭐⭐        | ⭐⭐⭐⭐             |
| Initialization     | ⭐⭐⭐ (1-2s)     | ⭐⭐⭐⭐⭐ (instant) |
| Bundle Size        | ⭐⭐⭐ (~10 MB)   | ⭐⭐⭐⭐⭐ (~100 KB) |
| Licensing          | ⭐⭐⭐⭐⭐ (open) | ⭐⭐⭐ (proprietary) |
| First Result       | ~3 seconds        | ~5 seconds           |

Example: Complete Implementation

typescript
import { createVitalSignCamera, RealtimeEstimatorType } from 'ts-vital-sign-camera';

// Create camera with ME-rPPG
const camera = createVitalSignCamera({
  realtimeEstimationConfig: {
    estimatorType: RealtimeEstimatorType.MeRppg,
    earlyEstimation: true,
    minDuration: 3,
    minConfidence: 0.3,
    lambda: 1.0,
    debug: false
  }
});

// Handle model loading
camera.onInitialized = () => {
  console.log('ME-rPPG ready');
  (document.getElementById('start-btn') as HTMLButtonElement).disabled = false;
};

// Handle realtime updates
camera.onVideoFrameProcessed = (event) => {
  const estimation = event.realtimeEstimation;
  if (!estimation) return;
  
  updateHeartRateDisplay(estimation.heartRate);
  updateConfidenceBar(estimation.confidence);
  
  if (estimation.isStable) {
    showStableIndicator();
  }
};

// Handle errors
camera.onError = (error) => {
  console.error('Error:', error);
  showErrorMessage(error.message);
};

// Start scan
document.getElementById('start-btn').onclick = () => {
  camera.startScan();
};

Next Steps