Kiro Best Practices Guide

Production-ready development patterns and proven practices for teams building with AWS Kiro

Kiro Fundamentals

Before diving into specific practices, understand these core Kiro principles that drive successful implementations:

📋 Spec-First Development

Always start with clear requirements and design before implementation. This prevents scope creep and ensures AI agents understand your intent.

✅ Write specs before code
❌ Skip to implementation directly
🔄 Iterative Refinement

Kiro works best with incremental improvements. Build features in small, testable chunks rather than monolithic implementations.

✅ Small, frequent iterations
❌ Big bang releases
🤖 AI Collaboration

Treat AI agents as junior developers who need clear instructions and context. The better your guidance, the better their output.

✅ Provide clear context and examples
❌ Assume AI knows your preferences
🔍 Quality Gates

Use hooks and automation to maintain consistent quality. Never rely on manual checks alone for critical quality measures.

✅ Automated quality checks
❌ Manual code review only

Spec Writing Best Practices

Well-written specs are the foundation of successful Kiro projects. Follow these patterns for maximum effectiveness:

Requirements (requirements.md)

# ✅ Good Requirements Structure

## User Stories (EARS Format)

### Authentication
WHEN a user enters valid login credentials THE SYSTEM SHALL:
- Authenticate against the user database
- Create a secure JWT token with 15-minute expiry  
- Set refresh token cookie with HttpOnly flag
- Redirect to user dashboard
- Log successful login event

WHEN login fails THE SYSTEM SHALL:
- Increment failed attempt counter
- Display generic "Invalid credentials" message
- Rate limit after 5 failures (15-minute lockout)
- Log security event with IP address

### Data Validation
WHEN a user submits a form THE SYSTEM SHALL:
- Validate all required fields are present
- Sanitize input to prevent XSS attacks
- Enforce field-specific validation rules
- Display field-level error messages
- Preserve valid input on validation failure

## Non-Functional Requirements

### Performance
- API responses ≤ 200ms for 95th percentile
- Page load time ≤ 3 seconds on 3G connection
- Support 1000 concurrent users

### Security  
- All data transmission over HTTPS/TLS 1.3
- Passwords hashed with bcrypt (12+ rounds)
- Input validation on all user data
- OWASP security headers implemented

### Accessibility
- WCAG 2.1 AA compliance
- Keyboard navigation support
- Screen reader compatibility
- Color contrast ratio ≥ 4.5:1

Design (design.md)

# ✅ Good Design Structure

## Architecture Overview

### System Components
```
┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│   React     │    │   Express   │    │ PostgreSQL  │
│   Frontend  │◄──►│   Backend   │◄──►│  Database   │
└─────────────┘    └─────────────┘    └─────────────┘
       │                   │                   │
       │            ┌─────────────┐           │
       └───────────►│    Redis    │◄──────────┘
                    │    Cache    │
                    └─────────────┘
```

### Database Schema
```sql
-- Users table with proper indexing
CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email VARCHAR(255) UNIQUE NOT NULL,
  password_hash VARCHAR(60) NOT NULL,
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);

-- email is already indexed by its UNIQUE constraint
CREATE INDEX idx_users_created_at ON users(created_at);
```

### API Design
```
// RESTful API endpoints
GET    /api/users          // List users (admin only)
POST   /api/users          // Create user  
GET    /api/users/:id      // Get user details
PUT    /api/users/:id      // Update user
DELETE /api/users/:id      // Delete user

// Authentication endpoints
POST   /api/auth/login     // User login
POST   /api/auth/logout    // User logout  
POST   /api/auth/refresh   // Refresh token
```

### Component Architecture
```
// Component hierarchy
App
├── Layout
│   ├── Header
│   ├── Navigation
│   └── Footer
├── Pages
│   ├── Dashboard
│   ├── Profile
│   └── Settings
└── Shared
    ├── Forms
    ├── UI Components
    └── Utilities
```

Tasks (tasks.md)

# ✅ Good Task Structure

## Phase 1: Foundation
- [ ] Set up project structure with TypeScript
- [ ] Configure ESLint, Prettier, and pre-commit hooks
- [ ] Create database schema and migrations  
- [ ] Set up basic Express server with middleware
- [ ] Implement error handling and logging

## Phase 2: Authentication
- [ ] Create User model with Prisma
- [ ] Implement password hashing utilities
- [ ] Build registration endpoint with validation
- [ ] Build login endpoint with JWT tokens
- [ ] Add refresh token mechanism
- [ ] Create authentication middleware

## Phase 3: Frontend Core
- [ ] Set up React with TypeScript and Tailwind
- [ ] Create routing structure with React Router
- [ ] Build reusable UI components (Button, Input, etc.)
- [ ] Implement authentication context
- [ ] Create login and registration forms

## Phase 4: Testing
- [ ] Write unit tests for authentication logic
- [ ] Add integration tests for API endpoints
- [ ] Create E2E tests for user registration flow
- [ ] Set up test database and CI pipeline
- [ ] Add performance testing for API endpoints

## Phase 5: Security & Polish
- [ ] Implement rate limiting
- [ ] Add security headers
- [ ] Perform security audit
- [ ] Optimize database queries
- [ ] Add monitoring and alerting
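
To make the Phase 2 tasks concrete, here is a minimal sketch of a login endpoint that lines up with the authentication requirements above (15-minute JWT, HttpOnly refresh cookie, generic error message). It assumes Express, bcrypt, and jsonwebtoken; findUserByEmail and issueRefreshToken are hypothetical helpers standing in for your own data layer:

```typescript
// Hypothetical login handler sketch for the Phase 2 tasks above.
// Assumes Express, bcrypt, and jsonwebtoken; findUserByEmail and
// issueRefreshToken are placeholders for your own data layer.
import { Router, Request, Response } from 'express';
import bcrypt from 'bcrypt';
import jwt from 'jsonwebtoken';

import { findUserByEmail, issueRefreshToken } from './services/auth';

const router = Router();

router.post('/api/auth/login', async (req: Request, res: Response) => {
  const { email, password } = req.body;

  const user = await findUserByEmail(email);
  const passwordOk = user && (await bcrypt.compare(password, user.passwordHash));

  if (!passwordOk) {
    // Generic message per the requirements: no hint about which field failed
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  // Short-lived access token (15-minute expiry, as specified in requirements.md)
  const accessToken = jwt.sign({ sub: user.id }, process.env.JWT_SECRET!, {
    expiresIn: '15m',
  });

  // Refresh token in an HttpOnly cookie so it is not readable from JavaScript
  res.cookie('refreshToken', await issueRefreshToken(user.id), {
    httpOnly: true,
    secure: true,
    sameSite: 'strict',
  });

  return res.json({ accessToken });
});

export default router;
```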

Testing Strategy

Implement comprehensive testing that catches issues early and provides confidence for deployment:

Test Pyramid Implementation

# Testing Strategy for Kiro Projects

## Unit Tests (70% of tests)
- Test individual functions and components
- Mock external dependencies
- Fast execution (< 1 second per test)
- High code coverage (>90%)

## Integration Tests (20% of tests)  
- Test API endpoints with real database
- Test component interaction
- Database transactions and rollbacks
- Authentication flows

## E2E Tests (10% of tests)
- Critical user journeys
- Cross-browser compatibility
- Performance benchmarks
- Security scenarios

## Hook-Driven Testing
# .kiro/hooks/test-on-save.yml
name: "Smart Test Runner"
trigger: onSave
pattern: "**/*.{js,ts,jsx,tsx}"
action: |
  Run relevant tests for the changed file:
  1. Unit tests for the specific file
  2. Integration tests that depend on this file
  3. Report coverage changes
  4. Suggest additional test cases if coverage drops

Testing Best Practices

  • Test behavior, not implementation - Focus on what the code does, not how it does it
  • Use descriptive test names - "should return error when password is too short"
  • Arrange, Act, Assert pattern - Clear test structure with setup, execution, and verification
  • Test edge cases - Empty inputs, boundary values, error conditions
  • Keep tests isolated - Each test should be independent and repeatable
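
For example, a unit test that follows these practices might look like the sketch below, assuming Jest (or Vitest) and a hypothetical validatePassword utility:

```typescript
// Hypothetical unit test illustrating the practices above.
// Assumes Jest (or Vitest); validatePassword is a placeholder utility.
import { validatePassword } from '../utils/validatePassword';

describe('validatePassword', () => {
  it('should return an error when the password is too short', () => {
    // Arrange
    const shortPassword = 'abc123';

    // Act
    const result = validatePassword(shortPassword);

    // Assert
    expect(result.valid).toBe(false);
    expect(result.errors).toContain('Password must be at least 12 characters');
  });

  it('should accept a password that meets all rules', () => {
    // Arrange
    const strongPassword = 'correct-horse-battery-staple-42';

    // Act
    const result = validatePassword(strongPassword);

    // Assert
    expect(result.valid).toBe(true);
    expect(result.errors).toHaveLength(0);
  });
});
```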

Security Practices

Security must be built in from the start, not added as an afterthought:

Security Checklist

  • All user input validated and sanitized
  • Passwords hashed with bcrypt (12+ rounds)
  • HTTPS enforced in production
  • Security headers implemented (HSTS, CSP, etc.)
  • SQL injection prevention (parameterized queries)
  • XSS prevention (input sanitization, CSP)
  • Authentication tokens properly secured
  • Rate limiting on sensitive endpoints
  • Regular security dependency updates
  • Secrets stored in environment variables
  • Database access with least privilege
  • Audit logging for sensitive operations
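
Several of these items can be enforced in a few lines of application code. The sketch below assumes an Express backend with the helmet, express-rate-limit, and bcrypt packages; adapt it to your own stack:

```typescript
// Minimal sketch of wiring several checklist items into an Express app.
// Assumes the helmet, express-rate-limit, and bcrypt packages.
import express from 'express';
import helmet from 'helmet';
import rateLimit from 'express-rate-limit';
import bcrypt from 'bcrypt';

const app = express();

// Security headers (HSTS, CSP, X-Content-Type-Options, ...)
app.use(helmet());
app.use(express.json());

// Rate limit the login endpoint: 5 attempts per 15 minutes per IP
const loginLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 5 });
app.use('/api/auth/login', loginLimiter);

// Password hashing with bcrypt, 12 rounds
export async function hashPassword(plain: string): Promise<string> {
  return bcrypt.hash(plain, 12);
}
```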

Security Automation

# .kiro/hooks/security-scan.yml
name: "Security Scanner"
trigger: onSave
pattern: "**/*.{js,ts,jsx,tsx}"
action: |
  Perform security analysis on the changed file:
  
  1. **Vulnerability Scan**:
     - Check for hardcoded secrets/API keys
     - Identify potential SQL injection vectors
     - Look for XSS vulnerabilities
     - Verify input validation
  
  2. **Authentication Security**:
     - Ensure proper authentication checks
     - Verify authorization logic
     - Check session management
     - Validate token handling
  
  3. **Data Protection**:
     - Confirm sensitive data encryption
     - Check for information leakage
     - Verify secure data transmission
     - Ensure GDPR compliance
  
  Report findings with severity levels and fix suggestions.

Performance Optimization

Build performance considerations into every aspect of your Kiro project:

Frontend Performance

  • Code splitting - Use React.lazy() and Suspense for route-based splitting
  • Image optimization - Use next/image or similar for automatic optimization
  • Bundle analysis - Regular bundle size monitoring and optimization
  • Caching strategies - Browser caching, CDN, and service worker caching
  • Core Web Vitals - Monitor LCP, INP (which replaced FID), and CLS metrics
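
As an example of route-based code splitting, a React Router setup with React.lazy() and Suspense might look like this (a sketch only; the page components and paths are placeholders):

```typescript
// Route-based code splitting sketch with React.lazy and Suspense.
// Each lazy page must have a default export; paths are placeholders.
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

const Dashboard = lazy(() => import('./pages/Dashboard'));
const Profile = lazy(() => import('./pages/Profile'));
const Settings = lazy(() => import('./pages/Settings'));

export function AppRoutes() {
  return (
    // Each page loads as a separate chunk only when its route is visited
    <Suspense fallback={<div>Loading…</div>}>
      <Routes>
        <Route path="/" element={<Dashboard />} />
        <Route path="/profile" element={<Profile />} />
        <Route path="/settings" element={<Settings />} />
      </Routes>
    </Suspense>
  );
}
```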

Backend Performance

  • Database optimization - Proper indexing, query optimization, connection pooling
  • API caching - Redis for session storage and frequent queries
  • Response compression - Gzip/Brotli compression for all responses
  • Async processing - Background jobs for heavy operations
  • Rate limiting - Protect against abuse while maintaining performance
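
As a concrete example of the API caching item, a cache-aside read might look like the sketch below, assuming ioredis and a hypothetical getUserProfileFromDb helper:

```typescript
// Cache-aside sketch for a frequently requested query, assuming ioredis.
// getUserProfileFromDb is a placeholder for your data-access layer.
import Redis from 'ioredis';
import { getUserProfileFromDb } from './services/users';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

export async function getUserProfile(userId: string) {
  const cacheKey = `user:profile:${userId}`;

  // Serve from cache when possible
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // Fall back to the database and cache the result for 5 minutes
  const profile = await getUserProfileFromDb(userId);
  await redis.set(cacheKey, JSON.stringify(profile), 'EX', 300);
  return profile;
}
```
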
# Performance Monitoring Hook
# .kiro/hooks/performance-check.yml
name: "Performance Monitor"
trigger: onSave
pattern: "**/*.{js,ts,jsx,tsx}"
action: |
  Analyze performance impact of changes:
  
  1. **Bundle Size Impact**:
     - Check if new dependencies increase bundle size
     - Suggest alternatives for heavy libraries
     - Identify opportunities for tree shaking
  
  2. **Runtime Performance**:
     - Look for potential memory leaks
     - Identify expensive operations
     - Check for unnecessary re-renders
     - Verify proper cleanup in useEffect
  
  3. **Database Performance**:
     - Analyze new queries for optimization
     - Check for N+1 query problems
     - Verify proper indexing usage
     - Suggest caching opportunities
  
  4. **Core Web Vitals**:
     - Estimate impact on LCP, INP, CLS
     - Suggest performance improvements
     - Flag potential regressions

Team Collaboration

Effective team practices ensure consistent quality and knowledge sharing:

Code Review Process

👥 Review Standards

Establish clear criteria for code reviews to maintain consistency across the team.

✅ Use review checklists
❌ Inconsistent review quality
🔄 Feedback Culture

Create a supportive environment where feedback improves code quality and team knowledge.

✅ Constructive, educational feedback
❌ Criticism without context
📚 Knowledge Sharing

Document decisions and share learning to prevent knowledge silos.

✅ Decision records and documentation
❌ Undocumented architectural choices

Async Collaboration

Use tools and practices that support distributed teams and async work.

✅ Detailed PR descriptions
❌ Context-free code changes

Shared Kiro Configuration

# .kiro/team-standards.yml
team_practices:
  code_review:
    required_reviewers: 2
    review_checklist:
      - "Code follows project conventions"
      - "Tests are comprehensive and passing"
      - "Security considerations addressed"
      - "Performance impact evaluated"
      - "Documentation updated if needed"
  
  definition_of_done:
    - "Feature requirements fully implemented"
    - "Unit tests written and passing"
    - "Integration tests passing"
    - "Code reviewed and approved"
    - "Documentation updated"
    - "Security review completed"
    - "Performance impact assessed"
  
  communication:
    pr_template: |
      ## What Changed
      Brief description of the changes
      
      ## Why
      Reasoning behind the changes
      
      ## Testing
      How the changes were tested
      
      ## Screenshots (if UI changes)
      
      ## Checklist
      - [ ] Tests added/updated
      - [ ] Documentation updated
      - [ ] Breaking changes documented
    
  shared_hooks:
    enabled:
      - "test-on-save"
      - "lint-on-save" 
      - "security-scan"
      - "performance-check"
    
    disabled_in_production:
      - "debug-logging"
      - "verbose-output"

Project Structure

Organize your Kiro project for maximum maintainability and team productivity:

# Recommended Kiro Project Structure

my-kiro-project/
├── .kiro/                          # Kiro configuration
│   ├── steering.yml                # Project context and preferences  
│   ├── personas.yml                # AI agent personalities
│   ├── conventions.yml             # Code style and patterns
│   ├── team-standards.yml          # Team collaboration rules
│   └── hooks/                      # Automation hooks
│       ├── test-on-save.yml
│       ├── security-scan.yml
│       └── performance-check.yml
│
├── docs/                           # Project documentation
│   ├── requirements.md             # EARS format requirements
│   ├── design.md                   # Architecture and design
│   ├── tasks.md                    # Implementation tasks
│   ├── adr/                        # Architecture decision records
│   └── api/                        # API documentation
│
├── src/                            # Source code
│   ├── components/                 # Reusable UI components
│   │   ├── ui/                     # Basic UI components
│   │   ├── forms/                  # Form components
│   │   └── layout/                 # Layout components
│   ├── pages/                      # Page components
│   ├── hooks/                      # Custom React hooks
│   ├── services/                   # API and external services
│   ├── utils/                      # Utility functions
│   ├── types/                      # TypeScript type definitions
│   └── __tests__/                  # Test files
│
├── server/                         # Backend code (if applicable)
│   ├── routes/                     # API routes
│   ├── middleware/                 # Express middleware
│   ├── models/                     # Data models
│   ├── services/                   # Business logic
│   └── __tests__/                  # Backend tests
│
├── .env.example                    # Environment variables template
├── .gitignore                      # Git ignore rules
├── package.json                    # Dependencies and scripts
├── tsconfig.json                   # TypeScript configuration
├── tailwind.config.js              # Tailwind CSS configuration
└── README.md                       # Project overview

Common Anti-patterns

Avoid these common mistakes that lead to maintenance headaches and poor performance:

❌ Vague Specifications

Problem: Writing requirements like "The system should be user-friendly" or "Add good error handling"
Solution: Use EARS format with specific, measurable criteria: "WHEN form validation fails THE SYSTEM SHALL display field-specific error messages within 100ms"

❌ Monolithic Task Lists

Problem: Creating massive tasks like "Build entire user management system"
Solution: Break into small, testable tasks: "Create User model", "Implement registration endpoint", "Add email validation"

❌ Skipping Steering Configuration

Problem: Letting AI agents work without project context, leading to inconsistent code
Solution: Set up comprehensive steering files with your tech stack, conventions, and preferences before AI implementation

❌ Testing as an Afterthought

Problem: Writing tests after implementation, leading to poor coverage and fragile tests
Solution: Include testing requirements in specs and use hooks to enforce test coverage on every change

❌ Ignoring Performance from the Start

Problem: Building features without considering performance impact, leading to optimization nightmares later
Solution: Include performance requirements in specs and use performance monitoring hooks to catch regressions early

Production Readiness

Ensure your Kiro project is ready for production deployment:

Production Deployment Checklist

  • Environment variables configured for production
  • Database migrations tested and documented
  • SSL/TLS certificates configured
  • Security headers implemented
  • Error monitoring and alerting configured
  • Performance monitoring in place
  • Backup and disaster recovery plan
  • Health check endpoints implemented
  • Log aggregation and monitoring
  • Load testing completed
  • Security audit performed
  • Documentation updated for operations team
  • Rollback plan tested
  • CI/CD pipeline validated
  • Rate limiting configured
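
For the health check item, a common pattern is to expose separate liveness and readiness endpoints. A minimal sketch, assuming Express and hypothetical checkDatabase/checkRedis probes:

```typescript
// Minimal health check endpoint sketch for the checklist item above.
// Assumes Express; checkDatabase and checkRedis are placeholders for
// your own dependency probes.
import { Router } from 'express';
import { checkDatabase, checkRedis } from './services/health';

const router = Router();

// Liveness: the process is up and able to respond
router.get('/healthz', (_req, res) => {
  res.status(200).json({ status: 'ok' });
});

// Readiness: critical dependencies are reachable
router.get('/readyz', async (_req, res) => {
  const [db, cache] = await Promise.all([checkDatabase(), checkRedis()]);
  const ready = db && cache;
  res.status(ready ? 200 : 503).json({ database: db, cache });
});

export default router;
```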

Monitoring and Observability

# Production Monitoring Setup

## Application Metrics
- Response time percentiles (p50, p95, p99)
- Error rates by endpoint
- Request volume and patterns
- Database query performance
- Memory and CPU usage

## Business Metrics  
- User registration rate
- Feature adoption metrics
- Conversion funnel analytics
- Customer satisfaction scores

## Infrastructure Metrics
- Server health and availability
- Database connection pool status
- Cache hit rates
- CDN performance
- Third-party service health

## Alerting Rules
Critical:
- Error rate > 5% for 5 minutes
- Response time p95 > 1 second for 10 minutes  
- Database connection pool utilization > 80% for 5 minutes
- Any 5xx errors in authentication endpoints

Warning:
- Error rate > 1% for 15 minutes
- Response time p95 > 500ms for 15 minutes
- Cache hit rate < 80% for 30 minutes
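
The application metrics above can be exposed from the service itself. A minimal sketch, assuming an Express backend and the prom-client library (metric names and labels are illustrative):

```typescript
// Sketch of exposing the application metrics above with prom-client.
// Metric names and label choices are illustrative, not prescriptive.
import express from 'express';
import client from 'prom-client';

const app = express();
client.collectDefaultMetrics(); // memory, CPU, event loop lag, ...

const httpDuration = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'HTTP request duration in seconds',
  labelNames: ['method', 'route', 'status'],
  buckets: [0.05, 0.1, 0.2, 0.5, 1, 2],
});

// Record response time and status for every request
app.use((req, res, next) => {
  const end = httpDuration.startTimer();
  res.on('finish', () => {
    end({ method: req.method, route: req.route?.path ?? req.path, status: res.statusCode });
  });
  next();
});

// Scrape endpoint for Prometheus (or a compatible collector)
app.get('/metrics', async (_req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.send(await client.register.metrics());
});
```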

Success Indicators

You know your Kiro practices are working when:

  • Consistent code quality - New code follows established patterns
  • Fast feature delivery - Requirements to production in days, not weeks
  • Low bug rates - Comprehensive testing catches issues early
  • Team confidence - Developers feel safe making changes
  • Stable performance - Production metrics remain within acceptable ranges
  • Security posture - Regular security audits pass without major issues

Continuous Improvement

Kiro best practices evolve with your team and project. Regularly assess and improve:

  • Monthly retrospectives - Review what's working and what isn't
  • Metric-driven decisions - Use data to guide process improvements
  • Tool evaluation - Regularly assess if tools still serve your needs
  • Knowledge sharing - Share successes and lessons learned with the community
  • Experimentation - Try new approaches in low-risk environments
