# Testing Guide for Multi-Storage Orchestrator

This document provides comprehensive guidance for testing the multi-storage orchestrator system, including unit tests, integration tests, benchmarks, and performance analysis.

## Overview

The orchestrator uses a multi-tiered testing approach:

1. **Unit Tests**: Test individual components in isolation
2. **Integration Tests**: Test complete workflows across storage backends
3. **Migration Tests**: Validate data migration between backends
4. **Factory Tests**: Test configuration and backend selection
5. **Benchmarks**: Performance testing and regression detection

## Test Structure

```text
tests/
├── helpers/mod.rs           # Test utilities and mock implementations
├── storage_integration.rs   # Cross-backend integration tests
├── migration_tests.rs       # Migration validation tests
└── factory_tests.rs         # Factory and configuration tests

benches/
├── storage_benchmarks.rs    # Storage performance benchmarks
└── migration_benchmarks.rs  # Migration performance benchmarks

src/
├── storage/                 # Unit tests embedded in modules
├── migration/tests.rs       # Migration unit tests
└── main.rs                  # Application integration tests
```

## Running Tests

### Basic Test Commands

```bash
# Run all tests (filesystem backend only)
cargo test

# Run all tests with SurrealDB backends
cargo test --features surrealdb

# Run specific test suites
cargo test --test storage_integration
cargo test --test migration_tests
cargo test --test factory_tests

# Run unit tests only
cargo test --lib
```

### Using Cargo Aliases

The project includes convenient aliases (defined in `.cargo/config.toml`):

```bash
# Test all backends with all features
cargo test-all

# Test only filesystem backend
cargo test-fs

# Test with SurrealDB features
cargo test-surrealdb

# Test specific areas
cargo test-integration
cargo test-migration
cargo test-factory
cargo test-unit
```
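The alias definitions themselves are project-specific; a hypothetical `.cargo/config.toml` matching the commands above might look like this (a sketch only — the project's actual alias definitions may differ):

```toml
# Hypothetical alias definitions; check the project's .cargo/config.toml
# for the authoritative versions.
[alias]
test-all       = "test --all-features"
test-fs        = "test --no-default-features"
test-surrealdb = "test --features surrealdb"
test-unit      = "test --lib"
```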
## Test Features and Backends

### Backend Support

- **Filesystem**: Always available, no additional dependencies
- **SurrealDB Embedded**: Requires `--features surrealdb`
- **SurrealDB Server**: Requires `--features surrealdb`

### Feature-Gated Tests

Tests automatically adapt to the features that are enabled:

```rust
#[cfg(feature = "surrealdb")]
#[tokio::test]
async fn test_surrealdb_specific_feature() {
    // This test only runs when the SurrealDB feature is enabled
}
```

## Integration Tests

### Storage Integration Tests

Location: `tests/storage_integration.rs`

These tests verify consistent behavior across all storage backends:

```rust
// Example: the test runs against all available backends
test_all_backends!(test_basic_crud_operations, |storage, gen| async move {
    let task = gen.workflow_task();
    storage.enqueue(task.clone(), 1).await?;
    // ... test implementation
    Ok(())
});
```

**Key Test Scenarios:**

- Basic CRUD operations
- Queue management and priorities
- Task status updates
- Batch operations
- Search and filtering
- Concurrent operations
- Error handling
- Performance characteristics

### Migration Tests

Location: `tests/migration_tests.rs`

Validates data migration between all backend combinations:

```bash
# Run migration tests
cargo test --features surrealdb --test migration_tests

# Test specific migration scenarios
cargo test --features surrealdb test_filesystem_to_embedded_migration
cargo test --features surrealdb test_large_dataset_migration_performance
```

**Migration Test Coverage:**

- Data integrity verification
- Rollback functionality
- Progress tracking
- Error recovery
- Performance scaling
- Filtering and batch operations

### Factory Tests

Location: `tests/factory_tests.rs`

Tests configuration validation and backend selection:

```bash
# Run factory tests
cargo test --test factory_tests

# Test configuration validation
cargo test test_storage_config_validation_failures
```
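The `test_all_backends!` macro used in the integration tests can be approximated with a simplified, synchronous sketch. The `MemStorage` type below is a hypothetical stand-in for illustration only; the real macro is async and stamps the test body out once per available backend:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a storage backend; the real backends are
// async and live behind the storage trait in src/storage/.
trait Storage {
    fn enqueue(&mut self, id: &str, priority: u8) -> Result<(), String>;
    fn pending(&self) -> usize;
}

struct MemStorage {
    tasks: HashMap<String, u8>,
}

impl Storage for MemStorage {
    fn enqueue(&mut self, id: &str, priority: u8) -> Result<(), String> {
        self.tasks.insert(id.to_string(), priority);
        Ok(())
    }
    fn pending(&self) -> usize {
        self.tasks.len()
    }
}

// Simplified version of the test_all_backends! idea: reuse one test body
// across backends by expanding it into a function per backend constructor.
macro_rules! test_all_backends {
    ($name:ident, |$storage:ident| $body:block) => {
        fn $name() -> usize {
            let mut $storage = MemStorage { tasks: HashMap::new() };
            $body
            $storage.pending()
        }
    };
}

test_all_backends!(basic_enqueue_check, |storage| {
    storage.enqueue("task-1", 1).unwrap();
    storage.enqueue("task-2", 2).unwrap();
});
```

The design point is that the test body is written once and the macro handles backend setup, so adding a backend does not require touching every test.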
## Benchmarks

### Storage Benchmarks

Location: `benches/storage_benchmarks.rs`

```bash
# Run all storage benchmarks
cargo bench-storage

# Run specific backend benchmarks
cargo bench-fs
cargo bench-surrealdb   # Requires --features surrealdb

# Run specific benchmark categories
cargo bench -- single_enqueue
cargo bench -- batch_operations
cargo bench -- concurrent_operations
```

**Benchmark Categories:**

- Single operations (enqueue/dequeue)
- Batch operations
- Search and retrieval
- Concurrent operations
- Cleanup operations

### Migration Benchmarks

Location: `benches/migration_benchmarks.rs`

```bash
# Run migration benchmarks
cargo bench-migration

# Test migration performance
cargo bench -- basic_migration
cargo bench -- migration_batch_sizes
```

**Migration Benchmarks:**

- Basic migration throughput
- Batch size optimization
- Verification overhead
- Progress tracking overhead
- Dry run performance

## Test Helpers and Utilities

### TestDataGenerator

Provides consistent test data across all tests:

```rust
use crate::helpers::TestDataGenerator;

let gen = TestDataGenerator::new();
let task = gen.workflow_task();
let batch = gen.workflow_tasks_batch(10);
```

### StorageTestRunner

Runs tests against all available storage backends:

```rust
use crate::helpers::StorageTestRunner;

let mut runner = StorageTestRunner::new();
runner.run_against_all_backends(test_function).await;
```

### MockStorage

Mock implementation for testing migration scenarios:

```rust
use crate::helpers::MockStorage;

let mock = MockStorage::new();
mock.set_health(false); // Simulate failure
```

## Performance Testing

### Benchmark Configuration

Benchmarks are configured with:

- Small sample sizes for expensive operations
- Throughput measurement for batch operations
- Memory usage tracking
- Concurrent operation testing
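Throughput measurement with warmup can be sketched without any benchmark harness; this dependency-free illustration (the `batch_enqueue` stand-in is hypothetical) shows the shape of what the benchmarks in `benches/` measure:

```rust
use std::collections::HashMap;
use std::time::Instant;

// Hypothetical stand-in for a batch enqueue; the real benchmarks exercise
// the storage backends via benches/storage_benchmarks.rs.
fn batch_enqueue(store: &mut HashMap<u64, u8>, n: u64) {
    for i in 0..n {
        store.insert(i, 1);
    }
}

// Returns operations per second for `iters` timed runs of `batch`-sized
// batches, after `warmup` untimed runs.
fn measure_throughput(batch: u64, warmup: u32, iters: u32) -> f64 {
    // Warmup iterations let caches and allocators settle before measuring.
    for _ in 0..warmup {
        batch_enqueue(&mut HashMap::new(), batch);
    }
    let start = Instant::now();
    for _ in 0..iters {
        batch_enqueue(&mut HashMap::new(), batch);
    }
    let elapsed = start.elapsed().as_secs_f64();
    (batch as f64 * iters as f64) / elapsed
}
```

The real benchmarks use the bench harness under `cargo bench`, which handles warmup and sampling automatically; the sketch just makes the warmup-then-measure structure explicit.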
### Performance Targets

**Storage Operations:**

- Single enqueue: < 1 ms average
- Batch enqueue (100 tasks): < 100 ms average
- Task retrieval: < 0.5 ms average
- Search operations: < 50 ms average

**Migration Operations:**

- Small dataset (100 tasks): < 5 seconds
- Large dataset (1000 tasks): < 30 seconds
- Throughput: > 10 tasks/second

## Continuous Integration

### CI Test Matrix

```yaml
# Example CI configuration
strategy:
  matrix:
    features:
      - ""          # Filesystem only
      - "surrealdb" # All backends
    rust:
      - stable
      - beta
```

### Test Commands for CI

```bash
# Basic functionality tests
cargo test --no-default-features
cargo test --all-features

# Documentation tests
cargo test --doc --all-features

# Benchmark regression tests
cargo bench --all-features -- --test
```

## Debugging and Troubleshooting

### Verbose Test Output

```bash
# Enable detailed logging
RUST_LOG=debug cargo test --features surrealdb

# Show test output
cargo test -- --nocapture

# Run a single test with full output
cargo test test_name -- --exact --nocapture
```

### Common Issues

1. **SurrealDB tests failing**: Ensure `--features surrealdb` is specified
2. **Temporary directory errors**: Tests clean up automatically, but manual cleanup may occasionally be needed
3. **Port conflicts**: Tests use ephemeral ports, but conflicts can still occur
4. **Timing issues**: Some tests use sleeps to wait for async operations
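The automatic cleanup mentioned under temporary directory errors relies on RAII; a minimal sketch of the pattern (similar in spirit to a temp-dir guard, not the project's actual helper) looks like this:

```rust
use std::fs;
use std::path::PathBuf;

// Minimal RAII temp-directory guard: the directory is removed when the
// guard goes out of scope, even if the test panics and unwinds.
struct TestDir {
    path: PathBuf,
}

impl TestDir {
    fn new(name: &str) -> std::io::Result<Self> {
        let path = std::env::temp_dir().join(name);
        fs::create_dir_all(&path)?;
        Ok(TestDir { path })
    }
}

impl Drop for TestDir {
    fn drop(&mut self) {
        // Best-effort cleanup; errors here are deliberately ignored.
        let _ = fs::remove_dir_all(&self.path);
    }
}
```

Because cleanup runs in `Drop`, a failing test still removes its directory on unwind; the manual cleanup mentioned above is only needed if a test process is killed outright.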
### Test Data Isolation

- Each test uses unique temporary directories
- Mock storage is reset between tests
- Concurrent tests use separate data spaces
- Cleanup is automatic via `Drop` implementations

## Coverage Analysis

```bash
# Generate a coverage report
cargo install cargo-tarpaulin
cargo test-coverage

# View the coverage report
open target/tarpaulin-report.html
```

## Performance Profiling

```bash
# Profile storage operations
cargo bench --bench storage_benchmarks -- --profile-time=10

# Profile migration operations
cargo bench --bench migration_benchmarks -- --profile-time=10

# Generate flame graphs
cargo install flamegraph
cargo flamegraph --bench storage_benchmarks
```

## Best Practices

### Writing Tests

1. **Use descriptive test names** that explain what is being tested
2. **Test error conditions** as well as success paths
3. **Use feature gates** for backend-specific tests
4. **Clean up resources** using RAII patterns
5. **Test concurrency** where applicable

### Test Data

1. **Use test generators** for consistent data
2. **Test with realistic data sizes**
3. **Include edge cases** (empty data, large data, malformed data)
4. **Use deterministic data** where possible

### Performance Testing

1. **Set appropriate baselines** for performance regression detection
2. **Test with various data sizes** to understand scaling
3. **Include warmup iterations** for accurate measurements
4. **Document performance expectations** in code comments

## Contributing

When adding new features:

1. Add unit tests for new components
2. Update integration tests for new storage methods
3. Add migration tests for new backends
4. Update benchmarks for performance-critical code
5. Document any new test utilities

For more information on the storage architecture and API, see the main project documentation.
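The deterministic-data practice can be illustrated with a hypothetical, simplified version of `TestDataGenerator` (the real helper lives in `tests/helpers/mod.rs`; the field names and priority scheme here are assumptions):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical sketch: monotonically increasing IDs instead of random
// ones make test failures reproducible across runs.
struct TestDataGenerator {
    counter: AtomicU64,
}

#[derive(Debug)]
struct WorkflowTask {
    id: String,
    priority: u8,
}

impl TestDataGenerator {
    fn new() -> Self {
        TestDataGenerator { counter: AtomicU64::new(0) }
    }

    fn workflow_task(&self) -> WorkflowTask {
        let n = self.counter.fetch_add(1, Ordering::Relaxed);
        WorkflowTask {
            id: format!("task-{n:04}"),
            // Cycle through a small, predictable set of priorities.
            priority: (n % 3) as u8 + 1,
        }
    }

    fn workflow_tasks_batch(&self, n: usize) -> Vec<WorkflowTask> {
        (0..n).map(|_| self.workflow_task()).collect()
    }
}
```

With deterministic IDs, a failure message like "task-0042 missing" identifies exactly which generated task was lost, which random data cannot do.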