chore: review docs and fix fence lint errors

This commit is contained in:
Jesús Pérez 2026-01-14 03:09:18 +00:00
parent b8c3cb22b7
commit 79fc5e4365
Signed by: jesus
GPG Key ID: 9F243E355E0BC939
196 changed files with 959 additions and 959 deletions

View File

@@ -1 +1 @@
# Contributing to provisioning

Thank you for your interest in contributing! This document provides guidelines and instructions for contributing to this project.

## Code of Conduct

This project adheres to a Code of Conduct. By participating, you are expected to uphold this code. Please see [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) for details.

## Getting Started

### Prerequisites

- Rust 1.70+ (if project uses Rust)
- NuShell (if project uses Nushell scripts)
- Git

### Development Setup

1. Fork the repository
2. Clone your fork: `git clone https://repo.jesusperez.pro/jesus/provisioning`
3. Add upstream: `git remote add upstream https://repo.jesusperez.pro/jesus/provisioning`
4. Create a branch: `git checkout -b feature/your-feature`

## Development Workflow

### Before You Code

- Check existing issues and pull requests to avoid duplication
- Create an issue to discuss major changes before implementing
- Assign yourself to let others know you're working on it

### Code Standards

#### Rust

- Run `cargo fmt --all` before committing
- All code must pass `cargo clippy -- -D warnings`
- Write tests for new functionality
- Maintain 100% documentation coverage for public APIs

#### Nushell

- Validate scripts with `nu --ide-check 100 script.nu`
- Follow consistent naming conventions
- Use type hints where applicable

#### Nickel

- Type check schemas with `nickel typecheck`
- Document schema fields with comments
- Test schema validation

### Commit Guidelines

- Write clear, descriptive commit messages
- Reference issues with `Fixes #123` or `Related to #123`
- Keep commits focused on a single concern
- Use imperative mood: "Add feature" not "Added feature"

### Testing

All changes must include tests:

```
# Run all tests
cargo test --workspace

# Run with coverage
cargo llvm-cov --all-features --lcov

# Run locally before pushing
just ci-full
```

### Pull Request Process

1. Update documentation for any changed functionality
2. Add tests for new code
3. Ensure all CI checks pass
4. Request review from maintainers
5. Be responsive to feedback and iterate quickly

## Review Process

- Maintainers will review your PR within 3-5 business days
- Feedback is constructive and meant to improve the code
- All discussions should be respectful and professional
- Once approved, maintainers will merge the PR

## Reporting Bugs

Found a bug? Please file an issue with:

- **Title**: Clear, descriptive title
- **Description**: What happened and what you expected
- **Steps to reproduce**: Minimal reproducible example
- **Environment**: OS, Rust version, etc.
- **Screenshots**: If applicable

## Suggesting Enhancements

Have an idea? Please file an issue with:

- **Title**: Clear feature title
- **Description**: What, why, and how
- **Use cases**: Real-world scenarios where this would help
- **Alternative approaches**: If you've considered any

## Documentation

- Keep README.md up to date
- Document public APIs with rustdoc comments
- Add examples for non-obvious functionality
- Update CHANGELOG.md with your changes

## Release Process

Maintainers handle releases following semantic versioning:

- MAJOR: Breaking changes
- MINOR: New features (backward compatible)
- PATCH: Bug fixes

## Questions?

- Check existing documentation and issues
- Ask in discussions or open an issue
- Join our community channels

Thank you for contributing!
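The checks listed under Code Standards can be run together before pushing. A minimal Nushell sketch, not part of the repository; `scripts/example.nu` and `schemas/main.ncl` are placeholder paths to adapt:

```nu
# check.nu: hypothetical pre-push helper combining the checks above.
# Paths are placeholders; point them at your real scripts and schemas.
def main [] {
    cargo fmt --all -- --check
    cargo clippy --workspace -- -D warnings
    nu --ide-check 100 scripts/example.nu
    nickel typecheck schemas/main.ncl
}
```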

File diff suppressed because it is too large

Image changed: 434 KiB (before) → 433 KiB (after)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -18,7 +18,7 @@ with others.
The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The

File diff suppressed because one or more lines are too long

View File

@@ -3487,7 +3487,7 @@ provisioning server create --infra my-infra --check
# Expected output:
# ✓ Validation passed
# ⚠ Check mode: No changes will be made
#
# Would create:
#   - Server: dev-server-01 (2 cores, 4 GB RAM, 50 GB disk)
</code></pre>
@@ -4663,7 +4663,7 @@ let {
db_url = "surreal://localhost:8000",
namespace = "provisioning",
database = "ai_rag",
# Collections for different document types
collections = {
documentation = {
@@ -4682,14 +4682,14 @@ let {
overlap = 512,
},
},
# Embedding configuration
embedding = {
provider = "openai", # or "anthropic", "local"
model = "text-embedding-3-small",
cache_vectors = true,
},
# Search configuration
search = {
hybrid_enabled = true,
@@ -4739,7 +4739,7 @@ Each chunk preserves:
<pre><code class="language-text">// Find semantically similar documents
async fn vector_search(query: &amp;str, top_k: usize) -&gt; Vec&lt;Document&gt; {
let embedding = embed(query).await?;
// L2 distance in SurrealDB
db.query("
SELECT *, vector::similarity::cosine(embedding, $embedding) AS score
@@ -4788,21 +4788,21 @@ async fn keyword_search(query: &amp;str, top_k: usize) -&gt; Vec&lt;Document&gt;
) -&gt; Vec&lt;Document&gt; {
let vector_results = vector_search(query, top_k * 2).await?;
let keyword_results = keyword_search(query, top_k * 2).await?;
let mut scored = HashMap::new();
// Score from vector search
for (i, doc) in vector_results.iter().enumerate() {
*scored.entry(doc.id).or_insert(0.0) +=
vector_weight * (1.0 - (i as f32 / top_k as f32));
}
// Score from keyword search
for (i, doc) in keyword_results.iter().enumerate() {
*scored.entry(doc.id).or_insert(0.0) +=
keyword_weight * (1.0 - (i as f32 / top_k as f32));
}
// Return top-k by combined score
let mut results: Vec&lt;_&gt; = scored.into_iter().collect();
results.sort_by(|a, b| b.1.partial_cmp(&amp;a.1).unwrap());
@@ -4819,7 +4819,7 @@ async fn keyword_search(query: &amp;str, top_k: usize) -&gt; Vec&lt;Document&gt;
impl SemanticCache {
async fn get(&amp;self, query: &amp;str) -&gt; Option&lt;CachedResult&gt; {
let embedding = embed(query).await?;
// Find cached query with similar embedding
// (cosine distance &lt; threshold)
for entry in self.queries.iter() {
@@ -4830,7 +4830,7 @@ impl SemanticCache {
}
None
}
async fn insert(&amp;self, query: &amp;str, result: CachedResult) {
let embedding = embed(query).await?;
self.queries.insert(embedding, result);
@@ -4861,19 +4861,19 @@ provisioning ai watch docs provisioning/docs/src
<pre><code class="language-text">// In ai-service on startup
async fn initialize_rag() -&gt; Result&lt;()&gt; {
let rag = RAGSystem::new(&amp;config.rag).await?;
// Index documentation
let docs = load_markdown_docs("provisioning/docs/src")?;
for doc in docs {
rag.ingest_document(&amp;doc).await?;
}
// Index schemas
let schemas = load_nickel_schemas("provisioning/schemas")?;
for schema in schemas {
rag.ingest_schema(&amp;schema).await?;
}
Ok(())
}
</code></pre>
@@ -4894,16 +4894,16 @@ provisioning ai chat
async fn generate_config(user_request: &amp;str) -&gt; Result&lt;String&gt; {
// Retrieve relevant context
let context = rag.search(user_request, top_k=5).await?;
// Build prompt with context
let prompt = build_prompt_with_context(user_request, &amp;context);
// Generate configuration
let config = llm.generate(&amp;prompt).await?;
// Validate against schemas
validate_nickel_config(&amp;config)?;
Ok(config)
}
</code></pre>
@@ -4915,14 +4915,14 @@ async function suggestFieldValue(fieldName, currentInput) {
`Field: ${fieldName}, Input: ${currentInput}`,
{ topK: 3, semantic: true }
);
// Generate suggestion using context
const suggestion = await ai.suggest({
field: fieldName,
input: currentInput,
context: context,
});
return suggestion;
}
</code></pre>
@@ -5098,27 +5098,27 @@ mcp-client provisioning generate_config \
database = {
engine = "postgresql",
version = "15.0",
instance = {
instance_class = "db.r6g.xlarge",
allocated_storage_gb = 100,
iops = 3000,
},
security = {
encryption_enabled = true,
encryption_key_id = "kms://prod-db-key",
tls_enabled = true,
tls_version = "1.3",
},
backup = {
enabled = true,
retention_days = 30,
preferred_window = "03:00-04:00",
copy_to_region = "us-west-2",
},
monitoring = {
enhanced_monitoring_enabled = true,
monitoring_interval_seconds = 60,
@@ -6780,12 +6780,12 @@ provisioning ai analytics failures --period month
Month Summary:
Total deployments: 50
Failed: 5 (10% failure rate)
Common causes:
1. Security group rules (3 failures, 60%)
2. Resource limits (1 failure, 20%)
3. Configuration error (1 failure, 20%)
Improvement opportunities:
- Pre-check security groups before deployment
- Add health checks for resource sizing
@@ -7088,7 +7088,7 @@ Complex (agents): Claude Opus 4 ($15/$45)
Example optimization:
Before: All tasks use Sonnet 4
- 5000 form assists/month: 5000 × $0.006 = $30
After: Route by complexity
- 5000 form assists → Haiku: 5000 × $0.001 = $5 (83% savings)
- 200 config gen → Sonnet: 200 × $0.005 = $1
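To make the routing arithmetic above explicit, here is an illustrative Nushell helper (not a platform command; the per-request costs are the example figures quoted above, and the agent-tier figure is assumed):

```nu
# Illustrative complexity-based routing; per-request costs come from the example above.
def cost-per-request [task_type: string] {
    match $task_type {
        "form_assist" => 0.001,  # simple task, routed to Haiku
        "config_gen" => 0.005,   # standard task, routed to Sonnet
        _ => 0.045               # complex/agent work, routed to Opus (assumed figure)
    }
}

def monthly-cost [task_type: string, requests: int] {
    (cost-per-request $task_type) * $requests
}

# 5000 form assists on Haiku ≈ $5/month, versus 5000 × $0.006 = $30 when everything uses Sonnet
monthly-cost "form_assist" 5000
```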
@@ -7212,7 +7212,7 @@ provisioning admin costs report ai \
Time saved: 1.83 hours/config
Hourly rate: $100
Value: $183/config
AI cost: $0.005/config
ROI: 36,600x (far exceeds cost)
@@ -7221,7 +7221,7 @@ Scenario 2: Troubleshooting Efficiency
Solution: AI troubleshooting analysis, 2 minutes
Time saved: 3.97 hours
Value: $397/incident
AI cost: $0.045/incident
ROI: 8,822x
@@ -7229,11 +7229,11 @@ Scenario 3: Reduction in Failed Deployments
Before: 5% of 1000 deployments fail (50 failures)
Failure cost: $500 each (lost time, data cleanup)
Total: $25,000/month
After: With AI analysis, 2% fail (20 failures)
Total: $10,000/month
Savings: $15,000/month
AI cost: $200/month
Net savings: $14,800/month
ROI: 74:1
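The ROI figures above are the value of time saved divided by AI spend. A small sketch (illustrative helper, not a CLI command) reproducing Scenario 1:

```nu
# ROI = (hours saved × hourly rate) / AI cost, as in the scenarios above.
def roi [time_saved_hours: number, hourly_rate: number, ai_cost: number] {
    let value = $time_saved_hours * $hourly_rate
    {
        value: $value
        ai_cost: $ai_cost
        roi: ($value / $ai_cost)
    }
}

# Scenario 1: 1.83 h saved × $100/h = $183 per config; $183 / $0.005 ≈ 36,600x
roi 1.83 100 0.005
```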
@@ -7483,19 +7483,19 @@ Configuration saved to: workspaces/prod/database.ncl
database = {
engine = "postgresql",
version = "15.0",
instance = {
instance_class = "db.t3.medium",
allocated_storage_gb = 50,
iops = 1000,
},
security = {
encryption_enabled = true,
tls_enabled = true,
tls_version = "1.3",
},
backup = {
enabled = true,
retention_days = 7,
@@ -7519,26 +7519,26 @@ auto-scaling from 3 to 10 nodes, managed PostgreSQL, and monitoring"
<pre><code class="language-text">let {
kubernetes = {
version = "1.28.0",
cluster = {
name = "prod-cluster",
region = "us-east-1",
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"],
},
node_group = {
min_size = 3,
max_size = 10,
desired_size = 3,
instance_type = "t3.large",
auto_scaling = {
enabled = true,
target_cpu = 70,
scale_down_delay = 300,
},
},
managed_services = {
postgres = {
enabled = true,
@@ -7547,13 +7547,13 @@ auto-scaling from 3 to 10 nodes, managed PostgreSQL, and monitoring"
storage_gb = 100,
},
},
monitoring = {
prometheus = {enabled = true},
grafana = {enabled = true},
cloudwatch_integration = true,
},
networking = {
vpc_cidr = "10.0.0.0/16",
enable_nat_gateway = true,
@@ -7871,7 +7871,7 @@ Suggestions appear:
<pre><code class="language-text">Scenario: User filling database configuration form
1. Engine selection
User types: "post"
Suggestion: "postgresql" (99% match)
Explanation: "PostgreSQL is the most popular open-source relational database"
@@ -7899,10 +7899,10 @@ Current behavior:
Planned AI behavior:
✗ Storage must be positive (1-65535 GB)
Why: Negative storage doesn't make sense.
Storage capacity must be at least 1 GB.
Fix suggestions:
• Use 100 GB (typical production size)
• Use 50 GB (development environment)
@@ -7984,7 +7984,7 @@ interface AIFieldProps {
function AIAssistedField({fieldName, formContext, schema}: AIFieldProps) {
const [suggestions, setSuggestions] = useState&lt;Suggestion[]&gt;([]);
const [explanation, setExplanation] = useState&lt;string&gt;("");
// Debounced suggestion generation
useEffect(() =&gt; {
const timer = setTimeout(async () =&gt; {
@@ -7996,17 +7996,17 @@ function AIAssistedField({fieldName, formContext, schema}: AIFieldProps) {
setSuggestions(suggestions);
setExplanation(suggestions[0]?.explanation || "");
}, 300); // Debounce 300ms
return () =&gt; clearTimeout(timer);
}, [formContext[fieldName]]);
return (
&lt;div className="ai-field"&gt;
&lt;input
value={formContext[fieldName]}
onChange={(e) =&gt; handleChange(e.target.value)}
/&gt;
{suggestions.length &gt; 0 &amp;&amp; (
&lt;div className="ai-suggestions"&gt;
{suggestions.map((s) =&gt; (
@@ -8030,10 +8030,10 @@ async fn suggest_field_value(
) -&gt; Result&lt;Vec&lt;Suggestion&gt;&gt; {
// Build context for the suggestion
let context = build_field_context(&amp;req.form_context, &amp;req.field_name)?;
// Retrieve relevant examples from RAG
let examples = rag.search_by_field(&amp;req.field_name, &amp;context)?;
// Generate suggestions via LLM
let suggestions = llm.generate_suggestions(
&amp;req.field_name,
@@ -8041,10 +8041,10 @@ async fn suggest_field_value(
&amp;context,
&amp;examples,
).await?;
// Rank and format suggestions
let ranked = rank_suggestions(suggestions, &amp;context);
Ok(ranked)
}
</code></pre>
@@ -8111,7 +8111,7 @@ track_rejected_suggestions = true
6. Validation error on next field
- Old behavior: "Invalid backup_days value"
- New behavior:
"Backup retention must be 1-35 days. Recommended: 30 days.
30-day retention meets compliance requirements for production systems."
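The planned message above corresponds to a plain range check. A minimal sketch (illustrative only; the 1-35 day bounds come from the message itself):

```nu
# Range check behind the friendlier validation message shown above.
def validate-backup-days [days: int] {
    if $days < 1 or $days > 35 {
        error make {
            msg: $"Backup retention must be 1-35 days, got ($days). Recommended: 30 days."
        }
    }
    $days
}
```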
@@ -8237,7 +8237,7 @@ Agent approach:
- Try smaller instance: db.r6g.large (may be insufficient)
- Try different region: different cost, latency
- Request quota increase (requires human approval)
4. Ask human: "Quota exceeded. Suggest: use db.r6g.large instead
(slightly reduced performance). Approve? [yes/no/try-other]"
5. Execute based on approval
6. Continue workflow
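Step 4 of the agent flow is essentially a blocking approval gate. A hypothetical sketch of that gate (not the agent's actual implementation):

```nu
# Hypothetical approval prompt mirroring step 4 above.
def ask-approval [suggestion: string] {
    let answer = (input $"Quota exceeded. Suggest: ($suggestion). Approve? [yes/no/try-other] ")
    $answer | str trim | str downcase
}

# Branch on the operator's answer before continuing the workflow.
match (ask-approval "use db.r6g.large instead (slightly reduced performance)") {
    "yes" => "execute fallback and continue",
    "no" => "abort workflow",
    _ => "evaluate the next alternative"
}
```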
@@ -53874,7 +53874,7 @@ curl -X GET http://localhost:8083/v1/servers \
-H "Content-Type: application/json" \
-d '{"username":"${{ secrets.API_USER }}","password":"${{ secrets.API_PASS }}"}' \
| jq -r '.token')
curl -X POST https://api.example.com/v1/servers/create \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
# Cost-Optimized Multi-Provider Workspace

View File

@@ -1 +1 @@
# Multi-Provider Web App Workspace

View File

@@ -1 +1 @@
# Multi-Region High Availability Workspace

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
# Configuration Generation (typdialog-prov-gen)

**Status**: 🔴 Planned for Q2 2025

## Overview

The Configuration Generator (typdialog-prov-gen) will provide template-based Nickel configuration generation with AI-powered customization.

## Planned Features

### Template Selection
- Library of production-ready infrastructure templates
- AI recommends templates based on requirements
- Preview before generation

### Customization via Natural Language
```
provisioning ai config-gen \
  --template "kubernetes-cluster" \
  --customize "Add Prometheus monitoring, increase replicas to 5, use us-east-1"
```

### Multi-Provider Support
- AWS, Hetzner, UpCloud, local infrastructure
- Automatic provider-specific optimizations
- Cost estimation across providers

### Validation and Testing
- Type-checking via Nickel before deployment
- Dry-run execution for safety
- Test data fixtures for verification

## Architecture

```
Template Library
    ↓
Template Selection (AI + User)
    ↓
Customization Layer (NL → Nickel)
    ↓
Validation (Type + Runtime)
    ↓
Generated Configuration
```

## Integration Points

- typdialog web UI for template browsing
- CLI for batch generation
- AI service for customization suggestions
- Nickel for type-safe validation

## Related Documentation

- [Natural Language Configuration](natural-language-config.md) - NL to config generation
- [Architecture](architecture.md) - AI system overview
- [Configuration Guide](configuration.md) - Setup instructions

---

**Status**: 🔴 Planned
**Expected Release**: Q2 2025
**Priority**: High (enables non-technical users to generate configs)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
# API Documentation

API reference for programmatic access to the Provisioning Platform.

## Available APIs

- [REST API](rest-api.md) - HTTP endpoints for all operations
- [WebSocket API](websocket.md) - Real-time event streams
- [Extensions API](extensions.md) - Extension integration interfaces
- [SDKs](sdks.md) - Client libraries for multiple languages
- [Integration Examples](integration-examples.md) - Code examples and patterns

## Quick Start

```
# Check API health
curl http://localhost:9090/health

# List tasks via API
curl http://localhost:9090/tasks

# Submit workflow
curl -X POST http://localhost:9090/workflows/servers/create \
  -H "Content-Type: application/json" \
  -d '{"infra": "my-project", "servers": ["web-01"]}'
```

See [REST API](rest-api.md) for complete endpoint documentation.

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
# Nushell API Reference

API documentation for Nushell library functions in the provisioning platform.

## Overview

The provisioning platform provides a comprehensive Nushell library with reusable functions for infrastructure automation.

## Core Modules

### Configuration Module

**Location**: `provisioning/core/nulib/lib_provisioning/config/`

- `get-config <key>` - Retrieve configuration values
- `validate-config` - Validate configuration files
- `load-config <path>` - Load configuration from file

### Server Module

**Location**: `provisioning/core/nulib/lib_provisioning/servers/`

- `create-servers <plan>` - Create server infrastructure
- `list-servers` - List all provisioned servers
- `delete-servers <ids>` - Remove servers

### Task Service Module

**Location**: `provisioning/core/nulib/lib_provisioning/taskservs/`

- `install-taskserv <name>` - Install infrastructure service
- `list-taskservs` - List installed services
- `generate-taskserv-config <name>` - Generate service configuration

### Workspace Module

**Location**: `provisioning/core/nulib/lib_provisioning/workspace/`

- `init-workspace <name>` - Initialize new workspace
- `get-active-workspace` - Get current workspace
- `switch-workspace <name>` - Switch to different workspace

### Provider Module

**Location**: `provisioning/core/nulib/lib_provisioning/providers/`

- `discover-providers` - Find available providers
- `load-provider <name>` - Load provider module
- `list-providers` - List loaded providers

## Diagnostics & Utilities

### Diagnostics Module

**Location**: `provisioning/core/nulib/lib_provisioning/diagnostics/`

- `system-status` - Check system health (13+ checks)
- `health-check` - Deep validation (7 areas)
- `next-steps` - Get progressive guidance
- `deployment-phase` - Check deployment progress

### Hints Module

**Location**: `provisioning/core/nulib/lib_provisioning/utils/hints.nu`

- `show-next-step <context>` - Display next step suggestion
- `show-doc-link <topic>` - Show documentation link
- `show-example <command>` - Display command example

## Usage Example

```
# Load provisioning library
use provisioning/core/nulib/lib_provisioning *

# Check system status
system-status | table

# Create servers
create-servers --plan "3-node-cluster" --check

# Install kubernetes
install-taskserv kubernetes --check

# Get next steps
next-steps
```

## API Conventions

All API functions follow these conventions:

- **Explicit types**: All parameters have type annotations
- **Early returns**: Validate first, fail fast
- **Pure functions**: No side effects (mutations marked with `!`)
- **Pipeline-friendly**: Output designed for Nu pipelines

## Best Practices

See [Nushell Best Practices](../development/NUSHELL_BEST_PRACTICES.md) for coding guidelines.

## Source Code

Browse the complete source code:

- **Core library**: `provisioning/core/nulib/lib_provisioning/`
- **Module index**: `provisioning/core/nulib/lib_provisioning/mod.nu`

---

For integration examples, see [Integration Examples](integration-examples.md).

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
# Provider API Reference

API documentation for creating and using infrastructure providers.

## Overview

Providers handle cloud-specific operations and resource provisioning. The provisioning platform supports multiple cloud providers through a unified API.

## Supported Providers

- **UpCloud** - European cloud provider
- **AWS** - Amazon Web Services
- **Local** - Local development environment

## Provider Interface

All providers must implement the following interface:

### Required Functions

```
# Provider initialization
export def init [] -> record { ... }

# Server operations
export def create-servers [plan: record] -> list { ... }
export def delete-servers [ids: list] -> bool { ... }
export def list-servers [] -> table { ... }

# Resource information
export def get-server-plans [] -> table { ... }
export def get-regions [] -> list { ... }
export def get-pricing [plan: string] -> record { ... }
```

### Provider Configuration

Each provider requires configuration in Nickel format:

```
# Example: UpCloud provider configuration
{
  provider = {
    name = "upcloud",
    type = "cloud",
    enabled = true,
    config = {
      username = "{{env.UPCLOUD_USERNAME}}",
      password = "{{env.UPCLOUD_PASSWORD}}",
      default_zone = "de-fra1",
    },
  }
}
```

## Creating a Custom Provider

### 1. Directory Structure

```
provisioning/extensions/providers/my-provider/
├── nulib/
│   └── my_provider.nu    # Provider implementation
├── schemas/
│   ├── main.ncl          # Nickel schema
│   └── defaults.ncl      # Default configuration
└── README.md             # Provider documentation
```

### 2. Implementation Template

```
# my_provider.nu
export def init [] {
  {
    name: "my-provider"
    type: "cloud"
    ready: true
  }
}

export def create-servers [plan: record] {
  # Implementation here
  []
}

export def list-servers [] {
  # Implementation here
  []
}

# ... other required functions
```

### 3. Nickel Schema

```
# main.ncl
{
  MyProvider = {
    # My custom provider schema
    name | String = "my-provider",
    type | String | "cloud" | "local" = "cloud",
    config | MyProviderConfig,
  },

  MyProviderConfig = {
    api_key | String,
    region | String = "us-east-1",
  },
}
```

## Provider Discovery

Providers are automatically discovered from:

- `provisioning/extensions/providers/*/nu/*.nu`
- User workspace: `workspace/extensions/providers/*/nu/*.nu`

```
# Discover available providers
provisioning module discover providers

# Load provider
provisioning module load providers workspace my-provider
```

## Provider API Examples

### Create Servers

```
use my_provider.nu *

let plan = {
  count: 3
  size: "medium"
  zone: "us-east-1"
}

create-servers $plan
```

### List Servers

```
list-servers | where status == "running" | select hostname ip_address
```

### Get Pricing

```
get-pricing "small" | to yaml
```

## Testing Providers

Use the test environment system to test providers:

```
# Test provider without real resources
provisioning test env single my-provider --check
```

## Provider Development Guide

For complete provider development guide, see:

- **[Provider Development](../development/QUICK_PROVIDER_GUIDE.md)** - Quick start guide
- **[Extension Development](../development/extensions.md)** - Complete extension guide
- **[Integration Examples](integration-examples.md)** - Example implementations

## API Stability

Provider API follows semantic versioning:

- **Major**: Breaking changes
- **Minor**: New features, backward compatible
- **Patch**: Bug fixes

Current API version: `2.0.0`

---

For more examples, see [Integration Examples](integration-examples.md).

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
# Architecture Decision Records (ADRs)

This directory contains all Architecture Decision Records for the provisioning platform. ADRs document significant architectural decisions and their rationale.

## Index of Decisions

### Core Architecture (ADR-001 to ADR-006)

- **ADR-001**: [Project Structure](adr-001-project-structure.md) - Overall project organization and directory layout
- **ADR-002**: [Distribution Strategy](adr-002-distribution-strategy.md) - How the platform is packaged and distributed
- **ADR-003**: [Workspace Isolation](adr-003-workspace-isolation.md) - Workspace management and isolation boundaries
- **ADR-004**: [Hybrid Architecture](adr-004-hybrid-architecture.md) - Rust/Nushell hybrid system design
- **ADR-005**: [Extension Framework](adr-005-extension-framework.md) - Plugin/extension system architecture
- **ADR-006**: [Provisioning CLI Refactoring](adr-006-provisioning-cli-refactoring.md) - CLI modularization and command handling

### Infrastructure & Configuration (ADR-007 to ADR-011)

- **ADR-007**: [KMS Simplification](adr-007-kms-simplification.md) - Key Management System design
- **ADR-008**: [Cedar Authorization](adr-008-cedar-authorization.md) - Fine-grained authorization via Cedar policies
- **ADR-009**: [Security System Complete](adr-009-security-system-complete.md) - Comprehensive security implementation
- **ADR-010**: [Configuration Format Strategy](adr-010-configuration-format-strategy.md) - When to use Nickel, TOML, YAML, or KCL
- **ADR-011**: [Nickel Migration](adr-011-nickel-migration.md) - Migration from KCL to Nickel as primary IaC language

### Platform Services (ADR-012 to ADR-014)

- **ADR-012**: [Nushell Nickel Plugin CLI Wrapper](adr-012-nushell-nickel-plugin-cli-wrapper.md) - Plugin architecture for Nickel integration
- **ADR-013**: [Typdialog Web UI Backend Integration](adr-013-typdialog-integration.md) - Browser-based configuration forms with multi-user collaboration
- **ADR-014**: [SecretumVault Integration](adr-014-secretumvault-integration.md) - Centralized secrets management with dynamic credentials

### AI and Intelligence (ADR-015)

- **ADR-015**: [AI Integration Architecture](adr-015-ai-integration-architecture.md) - Comprehensive AI system for intelligent infrastructure provisioning

## How to Use ADRs

1. **For decisions affecting architecture**: Create a new ADR with the next sequential number
2. **For reading decisions**: Browse this list or check SUMMARY.md
3. **For understanding context**: Each ADR includes context, rationale, and consequences

## ADR Format

Each ADR follows this standard structure:

- **Context**: What problem we're solving
- **Decision**: What we decided
- **Rationale**: Why we chose this approach
- **Consequences**: Positive and negative impacts
- **Alternatives Considered**: Other options we evaluated

## Status Markers

- **Proposed**: Under review, not yet final
- **Accepted**: Approved and adopted
- **Superseded**: Replaced by a later ADR
- **Deprecated**: No longer recommended

---

**Last Updated**: 2025-01-08
**Total ADRs**: 15

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
# Command Reference

Complete command reference for the provisioning CLI.

## 📖 Service Management Guide

The primary command reference is now part of the Service Management Guide:

**→ [Service Management Guide](SERVICE_MANAGEMENT_GUIDE.md)** - Complete CLI reference

This guide includes:

- All CLI commands and shortcuts
- Command syntax and examples
- Service lifecycle management
- Troubleshooting commands

## Quick Reference

### Essential Commands

```
# System status
provisioning status
provisioning health

# Server management
provisioning server create
provisioning server list
provisioning server ssh <hostname>

# Task services
provisioning taskserv create <service>
provisioning taskserv list

# Workspace management
provisioning workspace list
provisioning workspace switch <name>

# Get help
provisioning help
provisioning <command> help
```

## Additional References

- **[Service Management Guide](SERVICE_MANAGEMENT_GUIDE.md)** - Complete CLI reference
- **[Service Management Quick Reference](SERVICE_MANAGEMENT_QUICKREF.md)** - Quick lookup
- **[Quick Start Cheatsheet](../guides/quickstart-cheatsheet.md)** - All shortcuts
- **[Authentication Guide](AUTHENTICATION_LAYER_GUIDE.md)** - Auth commands

---

For complete command documentation, see [Service Management Guide](SERVICE_MANAGEMENT_GUIDE.md).

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
# MCP Server - Model Context Protocol

A Rust-native Model Context Protocol (MCP) server for infrastructure automation and AI-assisted DevOps operations.

> **Source**: `provisioning/platform/mcp-server/`
> **Status**: Proof of Concept Complete

## Overview

This server replaces the Python implementation, delivering significant performance improvements while maintaining philosophical consistency with the Rust ecosystem approach.

## Performance Results

```text
🚀 Rust MCP Server Performance Analysis
==================================================

📋 Server Parsing Performance:
 • Sub-millisecond latency across all operations
 • 0μs average for configuration access

🤖 AI Status Performance:
 • AI Status: 0μs avg (10000 iterations)

💾 Memory Footprint:
 • ServerConfig size: 80 bytes
 • Config size: 272 bytes

✅ Performance Summary:
 • Server parsing: Sub-millisecond latency
 • Configuration access: Microsecond latency
 • Memory efficient: Small struct footprint
 • Zero-copy string operations where possible
```

## Architecture

```text
src/
├── simple_main.rs       # Lightweight MCP server entry point
├── main.rs              # Full MCP server (with SDK integration)
├── lib.rs               # Library interface
├── config.rs            # Configuration management
├── provisioning.rs      # Core provisioning engine
├── tools.rs             # AI-powered parsing tools
├── errors.rs            # Error handling
└── performance_test.rs  # Performance benchmarking
```

## Key Features

1. **AI-Powered Server Parsing**: Natural language to infrastructure config (see the toy sketch below)
2. **Multi-Provider Support**: AWS, UpCloud, Local
3. **Configuration Management**: TOML-based with environment overrides
4. **Error Handling**: Comprehensive error types with recovery hints
5. **Performance Monitoring**: Built-in benchmarking capabilities
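To make the first two features concrete, here is a toy Rust sketch of what "natural language to infrastructure config" could reduce to. The real parsing lives in `tools.rs` and is AI-assisted; the `Provider` enum and the `ServerConfig` fields below are assumptions for illustration, not the actual types (the document only tells us the real `ServerConfig` is an 80-byte struct).

```rust
/// Hypothetical provider enum; the real definitions live in `config.rs` /
/// `provisioning.rs` and may be shaped differently.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Provider {
    Aws,
    UpCloud,
    Local,
}

/// Hypothetical shape of a parsed server definition; field names are assumptions.
#[derive(Debug)]
struct ServerConfig {
    hostname: String,
    provider: Provider,
    memory_mb: u32,
}

/// Toy stand-in for the AI-powered parsing in `tools.rs`: map a natural
/// language request onto a config using plain keyword matching.
fn parse_server_request(request: &str) -> ServerConfig {
    let lower = request.to_lowercase();
    let provider = if lower.contains("aws") {
        Provider::Aws
    } else if lower.contains("upcloud") {
        Provider::UpCloud
    } else {
        Provider::Local
    };
    ServerConfig {
        hostname: "server-01".to_string(), // placeholder name
        provider,
        memory_mb: if lower.contains("large") { 8192 } else { 2048 },
    }
}

fn main() {
    let cfg = parse_server_request("create a large web server on UpCloud");
    println!("{cfg:?}");
    assert_eq!(cfg.provider, Provider::UpCloud);
}
```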
## Rust vs Python Comparison

| Metric | Python MCP Server | Rust MCP Server | Improvement |
| ------ | ----------------- | --------------- | ----------- |
| **Startup Time** | ~500 ms | ~50 ms | **10x faster** |
| **Memory Usage** | ~50 MB | ~5 MB | **10x less** |
| **Parsing Latency** | ~1 ms | ~0.001 ms | **1000x faster** |
| **Binary Size** | Python + deps | ~15 MB static | **Portable** |
| **Type Safety** | Runtime errors | Compile-time | **Fewer runtime errors** |

## Usage

```bash
# Build and run
cargo run --bin provisioning-mcp-server --release

# Run with custom config
PROVISIONING_PATH=/path/to/provisioning cargo run --bin provisioning-mcp-server -- --debug

# Run tests
cargo test

# Run benchmarks
cargo run --bin provisioning-mcp-server --release
```

## Configuration

Set via environment variables:

```bash
export PROVISIONING_PATH=/path/to/provisioning
export PROVISIONING_AI_PROVIDER=openai
export OPENAI_API_KEY=your-key
export PROVISIONING_DEBUG=true
```
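These variables are the only configuration surface documented here. A minimal, std-only sketch of how a binary might pick them up follows; the real loading logic lives in `config.rs` and may use different defaults (the fallbacks below are assumptions).

```rust
use std::env;
use std::path::PathBuf;

/// Read the documented environment variables, with assumed fallbacks.
fn load_env_config() -> (PathBuf, String, bool) {
    // PROVISIONING_PATH: root of the provisioning tree (default "." is an assumption).
    let path = env::var("PROVISIONING_PATH")
        .map(PathBuf::from)
        .unwrap_or_else(|_| PathBuf::from("."));
    // PROVISIONING_AI_PROVIDER: e.g. "openai", as in the export example above.
    let ai_provider = env::var("PROVISIONING_AI_PROVIDER").unwrap_or_else(|_| "openai".to_string());
    // PROVISIONING_DEBUG: treated as a boolean flag.
    let debug = matches!(env::var("PROVISIONING_DEBUG").as_deref(), Ok("true") | Ok("1"));
    // OPENAI_API_KEY is deliberately left to the AI layer in this sketch.
    (path, ai_provider, debug)
}

fn main() {
    let (path, ai_provider, debug) = load_env_config();
    println!("path={} ai_provider={ai_provider} debug={debug}", path.display());
}
```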
## Integration Benefits

1. **Philosophical Consistency**: Rust throughout the stack
2. **Performance**: Sub-millisecond response times
3. **Memory Safety**: No segfaults, no data races
4. **Concurrency**: Native async/await support
5. **Distribution**: Single static binary
6. **Cross-compilation**: ARM64/x86_64 support

## Next Steps

1. Full MCP SDK integration (schema definitions)
2. WebSocket/TCP transport layer
3. Plugin system for extensibility
4. Metrics collection and monitoring
5. Documentation and examples

## Related Documentation

- **Architecture**: [MCP Integration](../architecture/orchestrator-integration-model.md)
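For context on the figures quoted under Performance Results, averages such as "0μs avg (10000 iterations)" come from a benchmarking harness like `performance_test.rs`. The sketch below is not that harness, only a minimal illustration of timing an operation over 10000 iterations with `std::time::Instant`; the workload inside the closure is a placeholder.

```rust
use std::time::Instant;

/// Measure the average latency of `f` over `iterations` runs, in microseconds.
fn average_micros<F: FnMut()>(iterations: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iterations {
        f();
    }
    start.elapsed().as_micros() as f64 / iterations as f64
}

fn main() {
    // Stand-in workload: formatting a small config-like string.
    let avg = average_micros(10_000, || {
        let _ = format!("provider={} memory_mb={}", "upcloud", 4096);
    });
    println!("avg latency: {avg:.3} μs over 10000 iterations");
}
```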

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
# Taskserv Categorization Plan

## Categories and Taskservs (38 total)

### **kubernetes/** (1)

- kubernetes

### **networking/** (6)

- cilium
- coredns
- etcd
- ip-aliases
- proxy
- resolv

### **container-runtime/** (6)

- containerd
- crio
- crun
- podman
- runc
- youki

### **storage/** (4)

- external-nfs
- mayastor
- oci-reg
- rook-ceph

### **databases/** (2)

- postgres
- redis

### **development/** (6)

- coder
- desktop
- gitea
- nushell
- oras
- radicle

### **infrastructure/** (6)

- kms
- os
- provisioning
- polkadot
- webhook
- kubectl

### **misc/** (1)

- generate

### **Keep in root/** (6)

- info.md
- manifest.toml
- manifest.lock
- README.md
- REFERENCE.md
- version.ncl

Total categorized: 32 taskservs + 6 root files = 38 items ✓
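As a quick sanity check of the totals above, a short Rust snippet can recompute the per-category counts; the numbers are copied verbatim from the lists in this plan, and the snippet is illustrative only.

```rust
fn main() {
    // Per-category counts copied from the plan above.
    let categories: &[(&str, usize)] = &[
        ("kubernetes", 1),
        ("networking", 6),
        ("container-runtime", 6),
        ("storage", 4),
        ("databases", 2),
        ("development", 6),
        ("infrastructure", 6),
        ("misc", 1),
    ];
    // info.md, manifest.toml, manifest.lock, README.md, REFERENCE.md, version.ncl
    let root_files = 6;

    let categorized: usize = categories.iter().map(|&(_, n)| n).sum();
    assert_eq!(categorized, 32);
    assert_eq!(categorized + root_files, 38);
    println!(
        "{categorized} taskservs + {root_files} root files = {} items ✓",
        categorized + root_files
    );
}
```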

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff.