# Testing Guide for Multi-Storage Orchestrator

This document provides comprehensive guidance for testing the multi-storage orchestrator system, including unit tests, integration tests, benchmarks, and performance analysis.

## Overview

The orchestrator uses a multi-tiered testing approach:

  1. Unit Tests: Test individual components in isolation
  2. Integration Tests: Test complete workflows across storage backends
  3. Migration Tests: Validate data migration between backends
  4. Factory Tests: Test configuration and backend selection
  5. Benchmarks: Performance testing and regression detection

## Test Structure

```plaintext
tests/
├── helpers/mod.rs             # Test utilities and mock implementations
├── storage_integration.rs     # Cross-backend integration tests
├── migration_tests.rs         # Migration validation tests
└── factory_tests.rs           # Factory and configuration tests

benches/
├── storage_benchmarks.rs      # Storage performance benchmarks
└── migration_benchmarks.rs    # Migration performance benchmarks

src/
├── storage/                   # Unit tests embedded in modules
├── migration/tests.rs         # Migration unit tests
└── main.rs                    # Application integration tests
```

## Running Tests

### Basic Test Commands

```bash
# Run all tests (filesystem backend only)
cargo test

# Run all tests with SurrealDB backends
cargo test --features surrealdb

# Run specific test suites
cargo test --test storage_integration
cargo test --test migration_tests
cargo test --test factory_tests

# Run unit tests only
cargo test --lib
```

### Using Cargo Aliases

The project includes convenient aliases (defined in `.cargo/config.toml`):

```bash
# Test all backends with all features
cargo test-all

# Test only filesystem backend
cargo test-fs

# Test with SurrealDB features
cargo test-surrealdb

# Test specific areas
cargo test-integration
cargo test-migration
cargo test-factory
cargo test-unit
```

## Test Features and Backends

### Backend Support

- **Filesystem**: Always available, no additional dependencies
- **SurrealDB Embedded**: Requires `--features surrealdb`
- **SurrealDB Server**: Requires `--features surrealdb`

### Feature-Gated Tests

Tests automatically adapt to available features:

```rust
#[cfg(feature = "surrealdb")]
#[tokio::test]
async fn test_surrealdb_specific_feature() {
    // This test only runs when SurrealDB feature is enabled
}
```
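Attribute gating skips the whole test; the `cfg!` macro is a complementary pattern that lets one test branch on the active feature set at compile time. A small sketch (the helper below is illustrative, not from the codebase):

```rust
// Illustrative helper: list the backends the current build can exercise.
fn active_backends() -> Vec<&'static str> {
    let mut backends = vec!["filesystem"]; // always available
    if cfg!(feature = "surrealdb") {
        backends.push("surrealdb-embedded");
        backends.push("surrealdb-server");
    }
    backends
}

#[test]
fn filesystem_backend_is_always_tested() {
    // This invariant holds in every feature configuration.
    assert!(active_backends().contains(&"filesystem"));
}
```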

## Integration Tests

### Storage Integration Tests

Location: `tests/storage_integration.rs`

These tests verify consistent behavior across all storage backends:

```rust
// Example: Test runs against all available backends
test_all_backends!(test_basic_crud_operations, |storage, gen| async move {
    let task = gen.workflow_task();
    storage.enqueue(task.clone(), 1).await?;
    // ... test implementation
    Ok(())
});
```
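`test_all_backends!` is project-specific, but the underlying pattern is a declarative macro that expands one test body into a loop over every backend. A simplified synchronous sketch, with a stand-in `Storage` trait and toy backends (the real macro is async and uses the crate's actual trait):

```rust
// Stand-ins for the project's storage trait and backends.
trait Storage {
    fn enqueue(&mut self, task: &str, priority: u8);
    fn len(&self) -> usize;
}

#[derive(Default)]
struct FilesystemStub(Vec<(String, u8)>);
#[derive(Default)]
struct EmbeddedStub(Vec<(String, u8)>);

impl Storage for FilesystemStub {
    fn enqueue(&mut self, task: &str, priority: u8) { self.0.push((task.into(), priority)); }
    fn len(&self) -> usize { self.0.len() }
}

impl Storage for EmbeddedStub {
    fn enqueue(&mut self, task: &str, priority: u8) { self.0.push((task.into(), priority)); }
    fn len(&self) -> usize { self.0.len() }
}

// Expand one test body once per available backend.
macro_rules! test_all_backends {
    ($name:ident, |$storage:ident| $body:block) => {
        #[test]
        fn $name() {
            let backends: Vec<Box<dyn Storage>> = vec![
                Box::new(FilesystemStub::default()),
                Box::new(EmbeddedStub::default()),
            ];
            for mut $storage in backends {
                $body
            }
        }
    };
}

test_all_backends!(basic_enqueue_works_everywhere, |storage| {
    storage.enqueue("task-1", 1);
    assert_eq!(storage.len(), 1);
});
```

The payoff of this design is that a behavioral regression in any one backend fails the same named test, so cross-backend consistency is enforced by construction.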

**Key Test Scenarios:**

- Basic CRUD operations
- Queue management and priorities
- Task status updates
- Batch operations
- Search and filtering
- Concurrent operations
- Error handling
- Performance characteristics

### Migration Tests

Location: `tests/migration_tests.rs`

Validates data migration between all backend combinations:

```bash
# Run migration tests
cargo test --features surrealdb --test migration_tests

# Test specific migration scenarios
cargo test --features surrealdb test_filesystem_to_embedded_migration
cargo test --features surrealdb test_large_dataset_migration_performance
```

**Migration Test Coverage:**

- Data integrity verification
- Rollback functionality
- Progress tracking
- Error recovery
- Performance scaling
- Filtering and batch operations
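At its core, data integrity verification compares every record in the source against the destination after the copy completes. A minimal sketch using plain maps as stand-in backends (the real tests go through the storage trait):

```rust
use std::collections::BTreeMap;

// Stand-in backends: one key-value map per store.
type Store = BTreeMap<String, String>;

/// Copy every record from source to destination; returns the count moved.
fn migrate(source: &Store, dest: &mut Store) -> usize {
    for (key, value) in source {
        dest.insert(key.clone(), value.clone());
    }
    source.len()
}

/// Integrity check: every source record must exist unchanged in dest.
fn verify(source: &Store, dest: &Store) -> bool {
    source.iter().all(|(key, value)| dest.get(key) == Some(value))
}

#[test]
fn migration_preserves_all_records() {
    let source: Store = (0..100)
        .map(|i| (format!("task-{i}"), format!("payload-{i}")))
        .collect();
    let mut dest = Store::new();
    assert_eq!(migrate(&source, &mut dest), 100);
    assert!(verify(&source, &dest));
}
```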

### Factory Tests

Location: `tests/factory_tests.rs`

Tests configuration validation and backend selection:

```bash
# Run factory tests
cargo test --test factory_tests

# Test configuration validation
cargo test test_storage_config_validation_failures
```

## Benchmarks

### Storage Benchmarks

Location: `benches/storage_benchmarks.rs`

```bash
# Run all storage benchmarks
cargo bench-storage

# Run specific backend benchmarks
cargo bench-fs
cargo bench-surrealdb  # Requires --features surrealdb

# Run specific benchmark categories
cargo bench -- single_enqueue
cargo bench -- batch_operations
cargo bench -- concurrent_operations
```

**Benchmark Categories:**

- Single operations (enqueue/dequeue)
- Batch operations
- Search and retrieval
- Concurrent operations
- Cleanup operations

### Migration Benchmarks

Location: `benches/migration_benchmarks.rs`

```bash
# Run migration benchmarks
cargo bench-migration

# Test migration performance
cargo bench -- basic_migration
cargo bench -- migration_batch_sizes
```

**Migration Benchmarks:**

- Basic migration throughput
- Batch size optimization
- Verification overhead
- Progress tracking overhead
- Dry run performance

## Test Helpers and Utilities

### TestDataGenerator

Provides consistent test data across all tests:

```rust
use crate::helpers::TestDataGenerator;

let gen = TestDataGenerator::new();
let task = gen.workflow_task();
let batch = gen.workflow_tasks_batch(10);
```
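The generator's internals are not shown above; the essential idea is that a counter makes every task unique yet fully reproducible across runs. A hypothetical sketch (using `Cell` for interior mutability so the `&self` call sites above still work):

```rust
use std::cell::Cell;

#[derive(Clone, Debug, PartialEq)]
struct WorkflowTask {
    id: String,
    payload: String,
}

// Hypothetical sketch: deterministic, unique test data from a counter.
struct TestDataGenerator {
    counter: Cell<u64>,
}

impl TestDataGenerator {
    fn new() -> Self {
        Self { counter: Cell::new(0) }
    }

    fn workflow_task(&self) -> WorkflowTask {
        let n = self.counter.get() + 1;
        self.counter.set(n);
        WorkflowTask {
            id: format!("task-{n:04}"),
            payload: format!("payload for task {n}"),
        }
    }

    fn workflow_tasks_batch(&self, count: usize) -> Vec<WorkflowTask> {
        (0..count).map(|_| self.workflow_task()).collect()
    }
}
```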

### StorageTestRunner

Runs tests against all available storage backends:

```rust
use crate::helpers::StorageTestRunner;

let mut runner = StorageTestRunner::new();
runner.run_against_all_backends(test_function).await;
```

### MockStorage

Mock implementation for testing migration scenarios:

```rust
use crate::helpers::MockStorage;

let mock = MockStorage::new();
mock.set_health(false); // Simulate failure
```
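A hypothetical sketch of how such a mock might implement the health toggle, using an `AtomicBool` so tests can flip it through a shared reference while the migration code holds the same handle:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;

// Hypothetical mock: stores tasks in memory and fails on demand.
struct MockStorage {
    healthy: AtomicBool,
    tasks: Mutex<Vec<String>>,
}

impl MockStorage {
    fn new() -> Self {
        Self {
            healthy: AtomicBool::new(true),
            tasks: Mutex::new(Vec::new()),
        }
    }

    /// Toggle simulated backend health; `&self` so shared refs can flip it.
    fn set_health(&self, healthy: bool) {
        self.healthy.store(healthy, Ordering::SeqCst);
    }

    fn enqueue(&self, task: &str) -> Result<(), String> {
        if !self.healthy.load(Ordering::SeqCst) {
            return Err("storage unavailable".to_string());
        }
        self.tasks.lock().unwrap().push(task.to_string());
        Ok(())
    }
}
```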

## Performance Testing

### Benchmark Configuration

Benchmarks are configured with:

- Small sample sizes for expensive operations
- Throughput measurement for batch operations
- Memory usage tracking
- Concurrent operation testing

### Performance Targets

**Storage Operations:**

- Single enqueue: < 1ms average
- Batch enqueue (100 tasks): < 100ms average
- Task retrieval: < 0.5ms average
- Search operations: < 50ms average

**Migration Operations:**

- Small dataset (100 tasks): < 5 seconds
- Large dataset (1000 tasks): < 30 seconds
- Throughput: > 10 tasks/second

## Continuous Integration

### CI Test Matrix

```yaml
# Example CI configuration
strategy:
  matrix:
    features:
      - ""              # Filesystem only
      - "surrealdb"     # All backends
    rust:
      - stable
      - beta
```

### Test Commands for CI

```bash
# Basic functionality tests
cargo test --no-default-features
cargo test --all-features

# Documentation tests
cargo test --doc --all-features

# Benchmark regression tests
cargo bench --all-features -- --test
```

## Debugging and Troubleshooting

### Verbose Test Output

```bash
# Enable detailed logging
RUST_LOG=debug cargo test --features surrealdb

# Show test output
cargo test -- --nocapture

# Run single test with full output
cargo test test_name -- --exact --nocapture
```

### Common Issues

1. **SurrealDB tests failing**: Ensure `--features surrealdb` is specified
2. **Temporary directory errors**: Tests clean up automatically, but stale directories from a crashed run may need manual removal
3. **Port conflicts**: Tests use ephemeral ports, but conflicts can occur
4. **Timing issues**: Some tests sleep while waiting on async operations, which can be flaky on heavily loaded machines

### Test Data Isolation

- Each test uses unique temporary directories
- Mock storage is reset between tests
- Concurrent tests use separate data spaces
- Cleanup is automatic via `Drop` implementations
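These isolation rules boil down to an RAII guard per test. A sketch using only the standard library (a hypothetical `TestDir`; in practice a crate like `tempfile` is the usual choice):

```rust
use std::path::PathBuf;
use std::{env, fs};

// RAII guard: a unique per-test directory, removed when the guard drops.
struct TestDir {
    path: PathBuf,
}

impl TestDir {
    fn new(label: &str) -> std::io::Result<Self> {
        let path = env::temp_dir().join(format!(
            "orchestrator-test-{label}-{}",
            std::process::id()
        ));
        fs::create_dir_all(&path)?;
        Ok(Self { path })
    }
}

impl Drop for TestDir {
    fn drop(&mut self) {
        // Best-effort cleanup; a failed removal must not panic in drop.
        let _ = fs::remove_dir_all(&self.path);
    }
}

#[test]
fn directory_is_removed_when_guard_drops() -> std::io::Result<()> {
    let kept_path;
    {
        let dir = TestDir::new("isolation")?;
        kept_path = dir.path.clone();
        fs::write(kept_path.join("state.json"), b"{}")?;
        assert!(kept_path.exists());
    } // guard dropped here
    assert!(!kept_path.exists());
    Ok(())
}
```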

## Coverage Analysis

```bash
# Generate coverage report
cargo install cargo-tarpaulin
cargo test-coverage

# View coverage report
open target/tarpaulin-report.html
```

## Performance Profiling

```bash
# Profile storage operations
cargo bench --bench storage_benchmarks -- --profile-time=10

# Profile migration operations
cargo bench --bench migration_benchmarks -- --profile-time=10

# Generate flame graphs
cargo install flamegraph
cargo flamegraph --bench storage_benchmarks
```

## Best Practices

### Writing Tests

1. **Use descriptive test names** that explain what is being tested
2. **Test error conditions** as well as success paths
3. **Use feature gates** for backend-specific tests
4. **Clean up resources** using RAII patterns
5. **Test concurrency** where applicable

### Test Data

1. **Use test generators** for consistent data
2. **Test with realistic data sizes**
3. **Include edge cases** (empty data, large data, malformed data)
4. **Use deterministic data** where possible

### Performance Testing

1. **Set appropriate baselines** for performance regression
2. **Test with various data sizes** to understand scaling
3. **Include warmup iterations** for accurate measurements
4. **Document performance expectations** in code comments

## Contributing

When adding new features:

1. Add unit tests for new components
2. Update integration tests for new storage methods
3. Add migration tests for new backends
4. Update benchmarks for performance-critical code
5. Document any new test utilities

For more information on the storage architecture and API, see the main project documentation.