What is Kiro Steering?
Kiro steering files provide context and guidance to AI agents working on your project. Think of them as configuration files that teach the AI about your project's structure, conventions, preferences, and goals.
Without steering, AI agents work with generic knowledge. With proper steering, they understand your specific codebase, follow your team's conventions, and make decisions aligned with your project goals.
Steering Benefits
- Consistent code style - AI follows your formatting and naming conventions
- Architecture awareness - AI understands your project structure and patterns
- Technology preferences - AI chooses libraries and approaches you prefer
- Team alignment - All AI agents work according to team standards
- Domain knowledge - AI learns your business logic and terminology
Setting Up Steering Files
Steering files live in the .kiro/ directory at your project root:
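The exact set of files depends on what you configure; a project that uses everything covered in this guide might look roughly like this (file names match the examples below):

```
.kiro/
├── steering.yml      # Project-wide settings
├── personas.yml      # AI agent personas
├── conventions.yml   # Code style and conventions
├── environments.yml  # Environment-specific behavior
├── glossary.yml      # Domain terminology
└── team.yml          # Team configuration
```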
To create your first steering configuration:
- Open the Command Palette (Ctrl+Shift+P)
- Run "Kiro: Initialize Steering"
- Choose from templates or start with a blank configuration
- Customize the generated files for your project
Project Configuration
The main steering.yml file contains project-wide settings:
```yaml
# .kiro/steering.yml
name: "E-commerce Platform"
description: "Multi-tenant e-commerce platform with React frontend and Node.js backend"

# Project metadata
project:
  type: "web-application"
  primary_language: "typescript"
  framework: "react"
  backend: "express"
  database: "postgresql"
  deployment: "aws"

# Technology stack
tech_stack:
  frontend:
    - "react@18"
    - "typescript@5"
    - "tailwindcss@3"
    - "react-router@6"
    - "react-query@4"
  backend:
    - "express@4"
    - "prisma@5"
    - "jsonwebtoken@9"
    - "bcryptjs@2"
    - "helmet@7"
  testing:
    - "jest@29"
    - "react-testing-library@14"
    - "cypress@13"
    - "supertest@6"

# Architecture patterns
architecture:
  frontend_pattern: "component-composition"
  state_management: "react-query + context"
  api_pattern: "rest"
  auth_pattern: "jwt-tokens"
  data_pattern: "prisma-orm"

# Development preferences
preferences:
  code_style: "prettier + eslint"
  import_style: "absolute-imports"
  component_style: "functional-components"
  error_handling: "error-boundaries"
  testing_style: "test-driven-development"
```
Configuration Sections
Project Metadata
Basic information about your project type and primary technologies, e.g. primary_language: "typescript".
Technology Stack
Specific versions and libraries the AI should prefer, e.g. "react@18", "tailwindcss@3".
Architecture Patterns
Design patterns and architectural decisions to follow, e.g. api_pattern: "rest".
Development Preferences
Code style, testing approach, and development practices, e.g. error_handling: "error-boundaries".
AI Agent Personas
Personas define different AI agent behaviors for specific tasks:
```yaml
# .kiro/personas.yml
personas:
  architect:
    role: "Senior Software Architect"
    personality: "methodical, security-conscious, performance-focused"
    expertise:
      - "system design"
      - "scalability patterns"
      - "security best practices"
      - "performance optimization"
    guidelines: |
      When designing systems:
      1. Always consider scalability from the start
      2. Design for security by default
      3. Choose battle-tested technologies
      4. Document architectural decisions
      5. Consider operational complexity

  developer:
    role: "Full-Stack Developer"
    personality: "pragmatic, test-driven, detail-oriented"
    expertise:
      - "react development"
      - "node.js backends"
      - "database design"
      - "testing strategies"
    guidelines: |
      When writing code:
      1. Write tests before implementation
      2. Use TypeScript strictly
      3. Follow existing patterns
      4. Handle errors gracefully
      5. Write self-documenting code

  reviewer:
    role: "Senior Code Reviewer"
    personality: "thorough, constructive, quality-focused"
    expertise:
      - "code quality"
      - "security vulnerabilities"
      - "performance issues"
      - "maintainability"
    guidelines: |
      When reviewing code:
      1. Check for security vulnerabilities
      2. Verify test coverage
      3. Ensure consistent style
      4. Look for performance issues
      5. Suggest improvements kindly

  debugger:
    role: "Senior Debug Specialist"
    personality: "analytical, patient, systematic"
    expertise:
      - "root cause analysis"
      - "performance debugging"
      - "error tracking"
      - "log analysis"
    guidelines: |
      When debugging:
      1. Reproduce the issue first
      2. Check logs systematically
      3. Use debugging tools effectively
      4. Document findings clearly
      5. Verify fixes thoroughly
```
How to Use Personas
Reference personas in your specs or chat with specific roles:
```
# In tasks.md
- [ ] @architect: Design the user authentication system
- [ ] @developer: Implement login form with validation
- [ ] @reviewer: Review authentication implementation
- [ ] @debugger: Investigate login performance issues

# In Kiro chat
/persona architect
How should I structure the microservices architecture?

/persona reviewer
Please review this component for security issues
```
Creating Custom Personas
Define personas specific to your domain:
```yaml
personas:
  ux_designer:
    role: "UX/UI Designer"
    personality: "user-focused, empathetic, creative"
    expertise:
      - "user experience design"
      - "accessibility standards"
      - "design systems"
      - "user research"
    guidelines: |
      When designing interfaces:
      1. Put user needs first
      2. Follow WCAG accessibility guidelines
      3. Maintain design system consistency
      4. Test with real users
      5. Consider mobile-first design

  data_analyst:
    role: "Senior Data Analyst"
    personality: "analytical, detail-oriented, insight-driven"
    expertise:
      - "data modeling"
      - "analytics implementation"
      - "performance metrics"
      - "reporting"
    guidelines: |
      When working with data:
      1. Validate data quality first
      2. Choose appropriate metrics
      3. Design for performance
      4. Document assumptions
      5. Ensure privacy compliance
```
Persona Output Examples
See how different personas respond to the same request:
Request: "How should I handle user authentication?"
@architect response:
"For enterprise-grade authentication, I recommend:
1. OAuth 2.0 with PKCE for web apps
2. JWT tokens with short expiry (15 min)
3. Refresh token rotation
4. Rate limiting on auth endpoints
5. Multi-factor authentication support
Consider AWS Cognito or Auth0 for managed solutions."
@developer response:
"Let's implement JWT authentication step by step:
1. Install jsonwebtoken and bcryptjs
2. Create user model with hashed passwords
3. Build login/register endpoints
4. Add middleware for token verification
5. Write tests for all auth flows
I'll start with the user registration endpoint..."
@reviewer response:
"Reviewing your auth implementation:
✅ Passwords are properly hashed
❌ Missing rate limiting on login attempts
❌ JWT secret should be environment variable
❌ No input validation on auth endpoints
⚠️ Consider adding refresh token mechanism
Overall: Secure foundation but needs hardening."
Code Style & Conventions
Define specific coding standards for consistent AI output:
```yaml
# .kiro/conventions.yml
code_style:
  # General formatting
  formatting:
    line_length: 100
    indent_style: "spaces"
    indent_size: 2
    quote_style: "single"
    trailing_commas: true
    semicolons: true

  # Naming conventions
  naming:
    variables: "camelCase"
    functions: "camelCase"
    classes: "PascalCase"
    components: "PascalCase"
    files: "kebab-case"
    directories: "kebab-case"
    constants: "UPPER_SNAKE_CASE"

  # TypeScript specific
  typescript:
    strict_mode: true
    no_any: true
    explicit_return_types: false
    interface_prefix: false
    prefer_type_over_interface: false

# React conventions
react:
  # Component structure
  component_style: "functional"
  prop_types: "typescript"
  default_props: "default-parameters"

  # File organization
  component_files:
    - "ComponentName/index.tsx"         # Main component
    - "ComponentName/ComponentName.tsx" # Implementation
    - "ComponentName/types.ts"          # Type definitions
    - "ComponentName/hooks.ts"          # Custom hooks
    - "ComponentName/styles.ts"         # Styled components
    - "ComponentName/test.tsx"          # Tests

  # Naming patterns
  hooks: "use[PascalCase]"
  contexts: "[PascalCase]Context"
  providers: "[PascalCase]Provider"
  hocs: "with[PascalCase]"

# Import/Export patterns
imports:
  # Import order
  order:
    - "react and react-dom"
    - "external libraries"
    - "internal utilities"
    - "components"
    - "types"
    - "styles"

  # Import styles
  default_imports: "prefer"
  named_imports: "destructured"
  path_style: "absolute"

  # Examples
  examples:
    react: "import React from 'react';"
    external: "import { z } from 'zod';"
    internal: "import { formatDate } from '@/utils/date';"
    component: "import { Button } from '@/components/ui/Button';"
    type: "import type { User } from '@/types/user';"

# Testing conventions
testing:
  # File naming
  test_files: "[ComponentName].test.tsx"
  test_directory: "__tests__"

  # Test structure
  describe_blocks: "Component: [ComponentName]"
  test_names: "should [expected behavior] when [condition]"

  # Testing patterns
  patterns:
    - "Arrange, Act, Assert"
    - "One assertion per test"
    - "Descriptive test names"
    - "Mock external dependencies"
    - "Test user interactions, not implementation"

# API conventions
api:
  # REST endpoints
  endpoint_naming: "kebab-case"
  http_methods:
    get: "retrieve data"
    post: "create new resource"
    put: "update entire resource"
    patch: "partial update"
    delete: "remove resource"

  # Response format
  response_structure:
    success: "{ data: T, meta?: object }"
    error: "{ error: { message: string, code: string, details?: object } }"

  # Status codes
  status_codes:
    200: "successful GET, PUT, PATCH"
    201: "successful POST (creation)"
    204: "successful DELETE"
    400: "bad request/validation error"
    401: "authentication required"
    403: "insufficient permissions"
    404: "resource not found"
    422: "unprocessable entity"
    500: "internal server error"
```
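For instance, a component written under these conventions might look roughly like the following sketch (the Button component and its props are illustrative, not part of the configuration): functional component, TypeScript props, default parameters instead of defaultProps, single quotes, and two-space indentation.

```tsx
// src/components/ui/Button/Button.tsx -- hypothetical example following the conventions above
import React from 'react';

// Props are typed with TypeScript (prop_types: "typescript"); no "I" prefix (interface_prefix: false).
interface ButtonProps extends React.ButtonHTMLAttributes<HTMLButtonElement> {
  variant?: 'primary' | 'secondary';
}

// Functional component with default parameters (default_props: "default-parameters").
export function Button({ variant = 'primary', children, ...rest }: ButtonProps) {
  return (
    <button className={`btn btn-${variant}`} {...rest}>
      {children}
    </button>
  );
}
```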
Advanced Steering Configurations
Environment-Specific Steering
Different configurations for development, staging, and production:
```yaml
# .kiro/environments.yml
default: &default
  logging_level: "info"
  error_reporting: true
  performance_monitoring: true

development:
  <<: *default
  logging_level: "debug"
  hot_reload: true
  strict_type_checking: true
  ai_behavior:
    verbosity: "high"
    explain_decisions: true
    suggest_alternatives: true

staging:
  <<: *default
  logging_level: "warn"
  ai_behavior:
    verbosity: "medium"
    focus_on_testing: true
    validate_deployments: true

production:
  <<: *default
  logging_level: "error"
  strict_validation: true
  ai_behavior:
    verbosity: "low"
    conservative_changes: true
    require_approval: true
```
Domain-Specific Glossary
Teach AI agents your business terminology:
```yaml
# .kiro/glossary.yml
business_terms:
  tenant: "A customer organization using our multi-tenant platform"
  workspace: "A tenant's isolated environment with their data and settings"
  billing_cycle: "Monthly or yearly subscription period for a tenant"
  seat: "A licensed user account within a workspace"
  usage_quota: "Monthly limits on API calls, storage, or features"

technical_terms:
  service_layer: "Business logic layer between controllers and repositories"
  repository_pattern: "Data access layer abstracting database operations"
  aggregate: "Domain entity with related objects treated as a single unit"
  event_sourcing: "Storing state changes as a sequence of events"
  cqrs: "Command Query Responsibility Segregation pattern"

api_conventions:
  endpoints:
    "/tenants/{tenantId}/workspaces": "Workspace management for a tenant"
    "/workspaces/{workspaceId}/users": "User management within workspace"
    "/billing/subscriptions": "Subscription and billing management"
    "/usage/metrics": "Usage tracking and quota monitoring"

common_patterns:
  multi_tenancy: |
    All database queries must include tenant_id for data isolation.
    Use row-level security policies in PostgreSQL.
    Validate tenant access in middleware.
  event_handling: |
    Use domain events for cross-aggregate communication.
    Events are stored in the event_store table.
    Event handlers are idempotent and fault-tolerant.
  caching_strategy: |
    Cache tenant configuration in Redis with TTL.
    Use workspace-scoped cache keys.
    Invalidate cache on configuration changes.
```
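To make the multi_tenancy pattern concrete, here is a minimal sketch of tenant-scoped access with Express and Prisma (both in the example tech stack). The requireTenant middleware, the workspace model, the tenantId field, and the x-tenant-id header are assumptions for illustration only.

```ts
import express from 'express';
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();
const app = express();

// Hypothetical middleware: validate tenant access before any data access,
// as described in common_patterns.multi_tenancy.
function requireTenant(req: express.Request, res: express.Response, next: express.NextFunction) {
  const { tenantId } = req.params;
  const userTenantId = req.header('x-tenant-id'); // assumption: set by an upstream auth step
  if (!tenantId || tenantId !== userTenantId) {
    // Error shape and 403 status follow the API conventions above.
    return res.status(403).json({ error: { message: 'Insufficient permissions', code: 'FORBIDDEN' } });
  }
  next();
}

// Every query is scoped by tenant_id; the `workspace` Prisma model is hypothetical.
app.get('/tenants/:tenantId/workspaces', requireTenant, async (req, res) => {
  const workspaces = await prisma.workspace.findMany({
    where: { tenantId: req.params.tenantId },
  });
  res.json({ data: workspaces });
});
```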
Team Configurations
Share steering configurations across your team:
```yaml
# .kiro/team.yml
team:
  name: "Platform Engineering Team"
  size: 8
  experience_level: "senior"

# Team preferences
communication:
  code_review_style: "collaborative"
  documentation_level: "comprehensive"
  meeting_frequency: "weekly"

# Shared tools and practices
tools:
  version_control: "git"
  ci_cd: "github-actions"
  monitoring: "datadog"
  error_tracking: "sentry"
  documentation: "notion"

# Development workflow
workflow:
  branching_strategy: "git-flow"
  review_process: "pull-request"
  deployment_strategy: "blue-green"
  testing_strategy: "test-pyramid"

# Role-based permissions
roles:
  senior_engineer:
    can_modify_architecture: true
    can_approve_prs: true
    can_deploy_production: true
    steering_override: true
  engineer:
    can_modify_architecture: false
    can_approve_prs: true
    can_deploy_production: false
    steering_override: false
  junior_engineer:
    can_modify_architecture: false
    can_approve_prs: false
    can_deploy_production: false
    steering_override: false

# Shared guidelines
shared_guidelines: |
  Our team values:
  1. Code quality over speed
  2. Testing is not optional
  3. Security by design
  4. Performance matters
  5. Documentation enables collaboration

  When AI agents work on our code:
  - Always follow our conventions.yml
  - Write comprehensive tests
  - Update documentation
  - Consider security implications
  - Ask for clarification when uncertain
```
Steering Best Practices
Common Steering Mistakes
- Too generic - Steering should be specific to your project
- Overly complex - Start simple and add complexity gradually
- Outdated information - Keep steering files current with your tech stack
- Conflicting rules - Ensure consistency across all steering files
- Missing context - Explain the "why" behind your conventions
Effective Steering Principles
- Start small - Begin with basic project info and core conventions
- Be specific - Vague guidance leads to inconsistent AI behavior
- Include examples - Show AI agents exactly what you want
- Explain rationale - Help AI understand why rules exist
- Update regularly - Keep steering current with your evolving project
Testing Your Steering
Validate that your steering configuration works effectively:
```bash
# Test steering with specific requests
kiro chat "Create a new React component following our conventions"
kiro chat "@developer: Implement user authentication"
kiro chat "@reviewer: Check this component for issues"

# Verify consistent outputs across team members:
# compare AI responses produced with the same steering configuration,
# and check that generated code follows your style guide.
```
Steering Success Indicators
- Consistent code style - AI output matches your project patterns
- Appropriate technology choices - AI selects libraries from your stack
- Domain-aware responses - AI uses your business terminology correctly
- Architecture alignment - AI suggestions fit your system design
- Team satisfaction - Developers find AI assistance more helpful
Steering Maintenance
Keep your steering configuration effective over time:
- Review monthly - Check if steering matches current practices
- Update with tech stack changes - Add new libraries and frameworks
- Gather team feedback - Ask developers how AI assistance could improve
- Version control steering - Track changes and maintain history
- Share improvements - Document effective steering patterns
Related Resources
- Getting Started with Kiro - Basic project setup
- Hooks Automation Guide - Automated workflows
- Spec Writing Guide - Requirements and design
- Kiro Cheat Sheets - Quick reference guides