Testing Guide for Multi-Storage Orchestrator
This document provides comprehensive guidance for testing the multi-storage orchestrator system, including unit tests, integration tests, benchmarks, and performance analysis.
Overview
The orchestrator uses a multi-tiered testing approach:
- Unit Tests: Test individual components in isolation
- Integration Tests: Test complete workflows across storage backends
- Migration Tests: Validate data migration between backends
- Factory Tests: Test configuration and backend selection
- Benchmarks: Performance testing and regression detection
Test Structure
tests/
├── helpers/mod.rs # Test utilities and mock implementations
├── storage_integration.rs # Cross-backend integration tests
├── migration_tests.rs # Migration validation tests
└── factory_tests.rs # Factory and configuration tests
benches/
├── storage_benchmarks.rs # Storage performance benchmarks
└── migration_benchmarks.rs # Migration performance benchmarks
src/
├── storage/ # Unit tests embedded in modules
├── migration/tests.rs # Migration unit tests
└── main.rs # Application integration tests
Running Tests
Basic Test Commands
# Run all tests (filesystem backend only)
cargo test
# Run all tests with SurrealDB backends
cargo test --features surrealdb
# Run specific test suites
cargo test --test storage_integration
cargo test --test migration_tests
cargo test --test factory_tests
# Run unit tests only
cargo test --lib
Using Cargo Aliases
The project includes convenient aliases (defined in .cargo/config.toml):
# Test all backends with all features
cargo test-all
# Test only filesystem backend
cargo test-fs
# Test with SurrealDB features
cargo test-surrealdb
# Test specific areas
cargo test-integration
cargo test-migration
cargo test-factory
cargo test-unit
Test Features and Backends
Backend Support
- Filesystem: Always available, no additional dependencies
- SurrealDB Embedded: Requires --features surrealdb
- SurrealDB Server: Requires --features surrealdb
Feature-Gated Tests
Tests automatically adapt to available features:
#[cfg(feature = "surrealdb")]
#[tokio::test]
async fn test_surrealdb_specific_feature() {
    // This test only runs when the SurrealDB feature is enabled
}
Integration Tests
Storage Integration Tests
Location: tests/storage_integration.rs
These tests verify consistent behavior across all storage backends:
// Example: Test runs against all available backends
test_all_backends!(test_basic_crud_operations, |storage, gen| async move {
    let task = gen.workflow_task();
    storage.enqueue(task.clone(), 1).await?;
    // ... test implementation
    Ok(())
});
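The macro is provided by the test helpers. A minimal sketch of how such a macro can expand is shown below; the fixture helper names (filesystem_fixture, surrealdb_embedded_fixture) are illustrative assumptions, not the project's actual API:
macro_rules! test_all_backends {
    ($name:ident, $body:expr) => {
        mod $name {
            use super::*;

            #[tokio::test]
            async fn filesystem() -> anyhow::Result<()> {
                // Assumed helper that builds a filesystem-backed fixture.
                let (storage, gen) = filesystem_fixture().await?;
                ($body)(storage, gen).await
            }

            #[cfg(feature = "surrealdb")]
            #[tokio::test]
            async fn surrealdb_embedded() -> anyhow::Result<()> {
                // Assumed helper that builds an embedded SurrealDB fixture.
                let (storage, gen) = surrealdb_embedded_fixture().await?;
                ($body)(storage, gen).await
            }
        }
    };
}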
Key Test Scenarios:
- Basic CRUD operations
- Queue management and priorities
- Task status updates
- Batch operations
- Search and filtering
- Concurrent operations
- Error handling
- Performance characteristics
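As one example, the concurrent-operations scenario can be exercised by spawning several enqueue calls at once. This is a sketch built on the assumed enqueue API from the example above:
test_all_backends!(test_concurrent_enqueue, |storage, gen| async move {
    // Share the backend across tasks and enqueue from several tasks in parallel.
    let storage = std::sync::Arc::new(storage);
    let handles: Vec<_> = (0..10)
        .map(|i| {
            let storage = storage.clone();
            let task = gen.workflow_task();
            tokio::spawn(async move { storage.enqueue(task, i).await })
        })
        .collect();
    // Every concurrent enqueue should succeed without corrupting the queue.
    for handle in handles {
        handle.await.expect("enqueue task panicked")?;
    }
    Ok(())
});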
Migration Tests
Location: tests/migration_tests.rs
Validates data migration between all backend combinations:
# Run migration tests
cargo test --features surrealdb --test migration_tests
# Test specific migration scenarios
cargo test --features surrealdb test_filesystem_to_embedded_migration
cargo test --features surrealdb test_large_dataset_migration_performance
Migration Test Coverage:
- Data integrity verification
- Rollback functionality
- Progress tracking
- Error recovery
- Performance scaling
- Filtering and batch operations
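A typical data-integrity test seeds one backend, migrates to another, and verifies the result. The helper functions and migration entry point below are assumptions used for illustration, not necessarily the crate's real API:
#[cfg(feature = "surrealdb")]
#[tokio::test]
async fn filesystem_to_embedded_integrity_sketch() -> anyhow::Result<()> {
    let gen = TestDataGenerator::new();
    let source = filesystem_storage().await?; // assumed helper
    let target = embedded_storage().await?;   // assumed helper

    // Seed the source backend with a known batch of tasks.
    for task in gen.workflow_tasks_batch(100) {
        source.enqueue(task, 1).await?;
    }

    // Migrate and verify that every task arrived at the target.
    let report = run_migration(&source, &target).await?; // assumed entry point
    assert_eq!(report.migrated, 100);
    Ok(())
}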
Factory Tests
Location: tests/factory_tests.rs
Tests configuration validation and backend selection:
# Run factory tests
cargo test --test factory_tests
# Test configuration validation
cargo test test_storage_config_validation_failures
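A validation-failure test usually builds a deliberately broken configuration and asserts that the factory rejects it. The type and field names here are illustrative, not the actual StorageConfig schema:
#[test]
fn rejects_server_backend_without_endpoint_sketch() {
    // Hypothetical config shape: a server backend with no endpoint should fail validation.
    let config = StorageConfig {
        backend: BackendKind::SurrealServer,
        endpoint: None,
        ..StorageConfig::default()
    };
    assert!(StorageFactory::create(&config).is_err());
}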
Benchmarks
Storage Benchmarks
Location: benches/storage_benchmarks.rs
# Run all storage benchmarks
cargo bench-storage
# Run specific backend benchmarks
cargo bench-fs
cargo bench-surrealdb # Requires --features surrealdb
# Run specific benchmark categories
cargo bench -- single_enqueue
cargo bench -- batch_operations
cargo bench -- concurrent_operations
Benchmark Categories:
- Single operations (enqueue/dequeue)
- Batch operations
- Search and retrieval
- Concurrent operations
- Cleanup operations
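A single-operation benchmark typically builds the fixture once, outside the measured loop, and drives the async call through Criterion. This sketch assumes Criterion with the async_tokio feature and the fixture helper names used earlier in this guide:
use criterion::{criterion_group, criterion_main, Criterion};

fn single_enqueue(c: &mut Criterion) {
    let rt = tokio::runtime::Runtime::new().unwrap();
    // Fixture setup happens once, outside the measured loop (helper name is assumed).
    let (storage, gen) = rt.block_on(filesystem_fixture()).unwrap();

    c.bench_function("single_enqueue", |b| {
        b.to_async(&rt).iter(|| async {
            // Only the enqueue call itself is measured.
            storage.enqueue(gen.workflow_task(), 1).await.unwrap();
        })
    });
}

criterion_group!(benches, single_enqueue);
criterion_main!(benches);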
Migration Benchmarks
Location: benches/migration_benchmarks.rs
# Run migration benchmarks
cargo bench-migration
# Test migration performance
cargo bench -- basic_migration
cargo bench -- migration_batch_sizes
Migration Benchmarks:
- Basic migration throughput
- Batch size optimization
- Verification overhead
- Progress tracking overhead
- Dry run performance
Test Helpers and Utilities
TestDataGenerator
Provides consistent test data across all tests:
use crate::helpers::TestDataGenerator;
let gen = TestDataGenerator::new();
let task = gen.workflow_task();
let batch = gen.workflow_tasks_batch(10);
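Internally, a generator like this only needs a counter to stay deterministic while keeping IDs unique. The sketch below is one possible shape, not the actual implementation in tests/helpers/mod.rs:
use std::sync::atomic::{AtomicU64, Ordering};

pub struct TestDataGenerator {
    counter: AtomicU64,
}

impl TestDataGenerator {
    pub fn new() -> Self {
        Self { counter: AtomicU64::new(0) }
    }

    // Each call produces a unique but reproducible task (WorkflowTask::new is assumed).
    pub fn workflow_task(&self) -> WorkflowTask {
        let id = self.counter.fetch_add(1, Ordering::Relaxed);
        WorkflowTask::new(format!("test-task-{id}"))
    }

    pub fn workflow_tasks_batch(&self, n: usize) -> Vec<WorkflowTask> {
        (0..n).map(|_| self.workflow_task()).collect()
    }
}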
StorageTestRunner
Runs tests against all available storage backends:
use crate::helpers::StorageTestRunner;
let mut runner = StorageTestRunner::new();
runner.run_against_all_backends(test_function).await;
MockStorage
Mock implementation for testing migration scenarios:
use crate::helpers::MockStorage;
let mock = MockStorage::new();
mock.set_health(false); // Simulate failure
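The health toggle can be as simple as an atomic flag that every mocked operation checks before succeeding. This is a sketch of that idea, not the real mock:
use std::sync::atomic::{AtomicBool, Ordering};

pub struct MockStorage {
    healthy: AtomicBool,
}

impl MockStorage {
    pub fn new() -> Self {
        Self { healthy: AtomicBool::new(true) }
    }

    // Flip this to false to make subsequent operations report failures.
    pub fn set_health(&self, healthy: bool) {
        self.healthy.store(healthy, Ordering::SeqCst);
    }

    fn check_health(&self) -> Result<(), StorageError> {
        if self.healthy.load(Ordering::SeqCst) {
            Ok(())
        } else {
            Err(StorageError::Unavailable) // error type and variant are assumed
        }
    }
}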
Performance Testing
Benchmark Configuration
Benchmarks are configured with:
- Small sample sizes for expensive operations
- Throughput measurement for batch operations
- Memory usage tracking
- Concurrent operation testing
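With Criterion, this configuration is typically expressed on a benchmark group, as in this illustrative snippet:
use criterion::{BenchmarkId, Criterion, Throughput};

fn batch_operations(c: &mut Criterion) {
    let mut group = c.benchmark_group("batch_operations");
    group.sample_size(10); // keep expensive batch runs short
    for size in [10usize, 100, 1_000] {
        // Report tasks/second rather than raw iteration time.
        group.throughput(Throughput::Elements(size as u64));
        group.bench_with_input(BenchmarkId::from_parameter(size), &size, |b, &_size| {
            b.iter(|| {
                // ... enqueue `size` tasks against the backend under test
            });
        });
    }
    group.finish();
}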
Performance Targets
Storage Operations:
- Single enqueue: < 1ms average
- Batch enqueue (100 tasks): < 100ms average
- Task retrieval: < 0.5ms average
- Search operations: < 50ms average
Migration Operations:
- Small dataset (100 tasks): < 5 seconds
- Large dataset (1000 tasks): < 30 seconds
- Throughput: > 10 tasks/second
Continuous Integration
CI Test Matrix
# Example CI configuration
strategy:
  matrix:
    features:
      - ""          # Filesystem only
      - "surrealdb" # All backends
    rust:
      - stable
      - beta
Test Commands for CI
# Basic functionality tests
cargo test --no-default-features
cargo test --all-features
# Documentation tests
cargo test --doc --all-features
# Benchmark regression tests
cargo bench --all-features -- --test
Debugging and Troubleshooting
Verbose Test Output
# Enable detailed logging
RUST_LOG=debug cargo test --features surrealdb
# Show test output
cargo test -- --nocapture
# Run single test with full output
cargo test test_name -- --exact --nocapture
Common Issues
- SurrealDB tests failing: Ensure --features surrealdb is specified
- Temporary directory errors: Tests clean up automatically, but manual cleanup may be needed
- Port conflicts: Tests use ephemeral ports, but conflicts can occur
- Timing issues: Some tests use sleeps for async operations
Test Data Isolation
- Each test uses unique temporary directories
- Mock storage is reset between tests
- Concurrent tests use separate data spaces
- Cleanup is automatic via Drop implementations
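A common pattern behind this, sketched here with the tempfile crate (which the project may or may not use), is a fixture struct that owns a TempDir and relies on its Drop implementation for cleanup:
use tempfile::TempDir;

// Hypothetical fixture: each test gets a unique directory that is removed on drop.
struct FsTestFixture {
    dir: TempDir,
}

impl FsTestFixture {
    fn new() -> std::io::Result<Self> {
        Ok(Self { dir: TempDir::new()? })
    }

    fn path(&self) -> &std::path::Path {
        self.dir.path()
    }
}
// When an FsTestFixture goes out of scope, TempDir's Drop removes the directory,
// so even failing tests do not leak data into later runs.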
Coverage Analysis
# Generate coverage report
cargo install cargo-tarpaulin
cargo test-coverage
# View coverage report
open target/tarpaulin-report.html
Performance Profiling
# Profile storage operations
cargo bench --bench storage_benchmarks -- --profile-time=10
# Profile migration operations
cargo bench --bench migration_benchmarks -- --profile-time=10
# Generate flame graphs
cargo install flamegraph
cargo flamegraph --bench storage_benchmarks
Best Practices
Writing Tests
- Use descriptive test names that explain what is being tested
- Test error conditions as well as success paths
- Use feature gates for backend-specific tests
- Clean up resources using RAII patterns
- Test concurrency where applicable
Test Data
- Use test generators for consistent data
- Test with realistic data sizes
- Include edge cases (empty data, large data, malformed data)
- Use deterministic data where possible
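When random content is unavoidable, a fixed seed keeps it deterministic. For example, with the rand crate (not necessarily a dependency of this project):
use rand::{rngs::StdRng, Rng, SeedableRng};

// A fixed seed makes generated payloads identical on every run, so failures reproduce.
fn deterministic_payload(len: usize) -> Vec<u8> {
    let mut rng = StdRng::seed_from_u64(42);
    (0..len).map(|_| rng.gen()).collect()
}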
Performance Testing
- Set appropriate baselines for performance regression
- Test with various data sizes to understand scaling
- Include warmup iterations for accurate measurements
- Document performance expectations in code comments
Contributing
When adding new features:
- Add unit tests for new components
- Update integration tests for new storage methods
- Add migration tests for new backends
- Update benchmarks for performance-critical code
- Document any new test utilities
For more information on the storage architecture and API, see the main project documentation.