Build a custom factory for each software project and team using features common to most AI IDEs.

- **Skills**: Reusable capabilities that extend your AI assistant with specialized knowledge, opinions, and tools.
- **Commands**: Single-purpose instructions for specific tasks and procedures, composable and easy to use.
- **Agents**: Roles with memory that handle complex construction and decision-making processes.
- **Workflows**: Multi-step processes that coordinate multiple agents to accomplish sophisticated orchestrations.

See how skills, commands, agents, and workflows compose for different application styles.
---
description: Guidelines for writing LINQ queries using Entity Framework Core. Use when writing LINQ queries.
---
## Separate specification from execution
Always use query syntax (`from...select`) and separate IQueryable definition from async execution:
```csharp
// ✅ Preferred
var customersSpec =
    from customer in context.Customer
    where customer.CustomerGUID == customerGuid
    select new { customer.CustomerID, customer.Name };
var customers = await customersSpec.ToListAsync();

// ❌ Avoid: Method chaining with immediate execution
var customers = await context.Customer
    .Where(customer => customer.CustomerGUID == customerGuid)
    .Select(customer => new { customer.CustomerID, customer.Name })
    .ToListAsync();
```
---
description: Guidelines for writing Entity Framework Core migrations. Use when adding, modifying, or reviewing EF Core migrations.
---
## One migration per logical change
Each migration should represent a single, focused schema change. If a change requires backfilling data, split it into two migrations: one for the schema change, one for the data.
```csharp
// ✅ Preferred: focused migration
public partial class AddCustomerEmail : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "Email",
            table: "Customer",
            nullable: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(
            name: "Email",
            table: "Customer");
    }
}

// ❌ Avoid: combining schema change with data migration
public partial class AddEmailAndBackfill : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>("Email", "Customer", nullable: true);
        migrationBuilder.Sql("UPDATE Customer SET Email = ...");
        migrationBuilder.AlterColumn<string>("Email", "Customer", nullable: false);
    }
}
```
## Always implement Down
Every migration should be reversible. Implement `Down` so that failed deployments can roll back cleanly.
---
description: Guidelines for building Razor Pages in ASP.NET Core. Use when creating or editing page models, handlers, and views.
---
## Page model owns the logic
Keep the `.cshtml` file focused on markup. All data loading, validation, and business logic belong in the `PageModel`:
```csharp
// ✅ Preferred: logic in the page model
public class EditModel : PageModel
{
    [BindProperty]
    public CustomerInput Input { get; set; }

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid) return Page();
        await service.UpdateCustomer(Input);
        return RedirectToPage("./Index");
    }
}

// ❌ Avoid: logic in the view
@if (Model.Customer != null && Model.Customer.IsActive && ...)
{
    // complex branching in cshtml
}
```
## Use tag helpers over raw HTML helpers
Tag helpers read like HTML and integrate with model binding and validation automatically:
```html
<!-- ✅ Preferred -->
<input asp-for="Input.Name" />
<span asp-validation-for="Input.Name"></span>
<!-- ❌ Avoid -->
@Html.TextBoxFor(m => m.Input.Name)
@Html.ValidationMessageFor(m => m.Input.Name)
```
Add a new EF Core migration.
1. Define the schema change in code.
2. Run the migration command.
3. Verify Up and Down.

Scaffold a new Razor Page.
1. Add the PageModel and view.
2. Wire routing and handlers.

The user will provide the link to a pull request. Use a git worktree to check out the source branch and perform the code review.
1. Find the closest commit in common between the source branch and the target branch.
2. Generate a diff between the common commit and the source branch.
3. Review the changes represented in the diff using the standards in `docs/standards/rubric.md`.
4. Produce a summary report.

Write a user story for a feature or fix.
1. Use the As a / I want / So that format.
2. Add acceptance criteria.

Write a technical spec for implementation.
1. Describe context, approach, interfaces, and rollout.
---
description: Expert frontend developer. Use for UI, forms, and browser-facing behavior.
---
You are an elite frontend developer. You build pages, components, and client-side logic. You are proficient in the following technologies:
- .NET Razor Pages
- HTML and CSS
- JavaScript

---
description: Expert backend developer. Use for APIs and server-side behavior.
---
You are an elite backend developer. You implement server-side logic and APIs. You are proficient in the following technologies:
- .NET
- ASP.NET Core
- Entity Framework Core

---
description: Expert database developer. Use for schema changes and data access.
---
You are an elite database developer. You define entities, migrations, and queries. You are proficient in the following technologies:
- Entity Framework Core
- SQL Server
- Migrations
Deliver a feature from story to production. The user will provide a specification.
1. Delegate to **fe-dev** to implement using stub data.
2. Delegate to **db-dev** to implement the database schema.
3. Delegate to **be-dev** to implement CRUD operations for the database schema.
4. Delegate to **fe-dev** to use the CRUD operations.
5. Continue until the feature is complete.
---
description: Guidelines for Next.js App Router conventions. Use when creating routes, layouts, loading states, or choosing between server and client components.
---
## Server components by default
Every component is a server component unless it needs browser APIs or interactivity. Only add `"use client"` when the component uses hooks, event handlers, or browser-only APIs:
```tsx
// ✅ Preferred: server component fetches its own data
export default async function CustomersPage() {
  const customers = await getCustomers();
  return <CustomerList customers={customers} />;
}

// ❌ Avoid: unnecessary client component for static rendering
"use client";
export default function CustomersPage() {
  const [customers, setCustomers] = useState([]);
  useEffect(() => { fetchCustomers().then(setCustomers); }, []);
  return <CustomerList customers={customers} />;
}
}
```
## Colocate data fetching with the route segment
Fetch data in the `page.tsx` or `layout.tsx` that needs it, not in a parent that passes it down. Use `loading.tsx` for Suspense boundaries at each route segment.
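For the Suspense boundary, a minimal `loading.tsx` sketch for a hypothetical `app/customers/` segment (the file name and location follow App Router conventions; the markup is illustrative):

```tsx
// app/customers/loading.tsx — rendered while page.tsx awaits its data
export default function Loading() {
  return <p>Loading customers…</p>;
}
```

Next.js wraps the segment's `page.tsx` in a Suspense boundary automatically when a `loading.tsx` is present.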
---
description: Guidelines for extracting and composing React custom hooks. Use when encapsulating reusable state logic, side effects, or subscriptions.
---
## Extract shared logic into hooks
When two or more components share the same stateful logic, extract it into a custom hook. Each hook should do one thing:
```tsx
// ✅ Preferred: focused hook with clear contract
function useDebounce<T>(value: T, delayMs: number): T {
  const [debounced, setDebounced] = useState(value);
  useEffect(() => {
    const id = setTimeout(() => setDebounced(value), delayMs);
    return () => clearTimeout(id);
  }, [value, delayMs]);
  return debounced;
}

// ❌ Avoid: god hook that manages unrelated concerns
function useEverything() {
  const [search, setSearch] = useState("");
  const [cart, setCart] = useState([]);
  const [theme, setTheme] = useState("light");
  // ...hundreds of lines
}
```
## Hooks are testable units
Design hooks so they can be tested with `renderHook` in isolation. Avoid coupling hooks to specific UI components or global singletons.
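For example, a debounce hook like the one above can be exercised in isolation. A sketch using `renderHook` from `@testing-library/react` and Jest fake timers (the test body is illustrative, not a prescribed suite):

```tsx
import { renderHook, act } from "@testing-library/react";

test("useDebounce returns the new value only after the delay", () => {
  jest.useFakeTimers();
  const { result, rerender } = renderHook(
    ({ value }) => useDebounce(value, 200),
    { initialProps: { value: "a" } }
  );

  rerender({ value: "ab" });
  expect(result.current).toBe("a"); // not yet debounced

  act(() => jest.advanceTimersByTime(200));
  expect(result.current).toBe("ab");
});
```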
---
description: Guidelines for structuring React components using atomic design. Use when deciding component granularity, folder structure, or composition patterns.
---
## Build from atoms up
Compose UI from small, reusable pieces. Each level builds on the one below — atoms are indivisible, molecules combine atoms, organisms combine molecules:
```
components/
  atoms/      # Button, Input, Label, Icon
  molecules/  # SearchField (Input + Button), FormField (Label + Input)
  organisms/  # ProductCard, NavigationBar, CheckoutForm
  templates/  # PageLayout, DashboardLayout
  pages/      # HomePage, ProductDetailPage
```
## No one-off composite components
If a component is used in only one place and mixes multiple concerns, break it apart. Every molecule or organism should be independently meaningful:
```tsx
// ✅ Preferred: composable organisms
<ProductCard>
  <ProductImage src={product.image} />
  <ProductInfo name={product.name} price={product.price} />
  <AddToCartButton productId={product.id} />
</ProductCard>

// ❌ Avoid: monolithic blob
<ProductCardWithImageAndInfoAndCartButton product={product} />
```
Add a new React component.
1. Create the file, define props, and export.
2. Add to Storybook if used.

Add a new Next.js page or route.
1. Add the route file under app/.
2. Define layout and metadata.

Add a new API route (Route Handler).
1. Create route.ts.
2. Implement GET/POST (or other methods) and validation.

Add a new DB migration.
1. Define the change.
2. Run the migration.
3. Verify Up and Down.

Add a client or server state store.
1. Define the slice or store.
2. Wire it to components or server actions.

The user will provide the link to a pull request. Use a git worktree to check out the source branch and perform the code review.
1. Find the closest commit in common between the source branch and the target branch.
2. Generate a diff between the common commit and the source branch.
3. Review the changes represented in the diff using the standards in `docs/standards/rubric.md`.
4. Produce a summary report.

Write a user story for a feature or fix.
1. Use the As a / I want / So that format.
2. Add acceptance criteria.

Write a technical spec for implementation.
1. Describe context, approach, interfaces, and rollout.
---
description: Expert frontend developer. Use for UI, forms, and browser-facing behavior.
---
You are an elite frontend developer. You build pages, components, and client-side logic. You are proficient in the following technologies:
- Next.js
- React
- TypeScript

---
description: Expert backend developer. Use for APIs and server-side behavior.
---
You are an elite backend developer. You build endpoints, services, and business rules. You are proficient in the following technologies:
- Next.js API routes
- Server actions
- TypeScript

---
description: Expert database developer. Use for schema changes and data access.
---
You are an elite database developer. You define entities and relationships.
- Define primary keys as identity columns.
- Create indexes for all foreign key relationships.
- Favor third normal form.

---
description: Expert TDD planner. Use before writing code in a TDD cycle.
---
You are an elite TDD planner. You turn requirements into a small, test-first plan.
1. Analyze the goal or specification thoroughly to understand all required behaviors and edge cases.
2. Design a test progression that follows these principles:
   - First tests: Test minimal behavior under empty/initial state.
   - Middle tests: Build complexity incrementally, one degree at a time.
   - Final tests: Exercise edge cases, boundary conditions, error handling, and failure modes.
3. For each proposed test, describe:
   - What specific behavior it verifies
   - What code structure it drives
   - Why it comes at this point in the progression

---
description: Expert TDD test writer. Use at the start of each TDD micro-cycle.
---
You are an elite TDD test writer. You write the failing test for the next behavior.
- Write **ONLY ONE** failing test.
- Define meaningful failure messages.
- Run the test to confirm it fails for the expected reason.

---
description: Expert TDD implementer. Use after a failing test in the TDD cycle.
---
You are an elite TDD implementer. You write the minimum code to pass the test.
1. Run the test to understand the failure reason.
2. Write just enough code to pass.
3. Avoid extra implementation.
4. Run the test to confirm it passes.

---
description: Expert at refactoring without changing behavior. Use after tests are green.
---
You are an elite refactoring specialist. You improve code quality without changing behavior.
1. VERIFY all tests are passing before starting refactoring.
2. Identify code smells: duplication, long methods, poor naming, complex conditionals, etc.
3. Apply refactoring techniques: Extract Method, Rename, Extract Class, Inline, etc.
4. Run tests to confirm no regression.
Deliver a feature from story to production. The user will provide a specification.
1. Delegate to **fe-dev** to implement using stub data.
2. Delegate to **db-dev** to implement the database schema.
3. Delegate to **be-dev** to implement API endpoints that read and write data to the database.
4. Delegate to **fe-dev** to call the API endpoints.
5. Continue until the feature is complete.

Implement a small slice of behavior using TDD.
1. Delegate to **tdd-plan**.
2. Repeat until the story is done:
   - Delegate to **tdd-test** (write one failing test).
   - Delegate to **tdd-code** (make it pass).
   - If the code or tests could be cleaned up, delegate to **tdd-refactor**.
---
description: Guidelines for structuring Spring Boot services. Use when creating a new service, defining its API boundary, or configuring health checks and profiles.
---
## One bounded context per service
Each Spring Boot application owns exactly one bounded context. Expose a clear API (REST or messaging) and never reach directly into another service's database:
```java
// ✅ Preferred: service with a clear domain boundary
@SpringBootApplication
public class OrderServiceApplication { }

@RestController
@RequestMapping("/api/orders")
class OrderController {
    private final OrderService orderService;
    // endpoints scoped to the Order bounded context
}

// ❌ Avoid: one service reaching into another's schema
@Repository
interface InventoryRepository extends JpaRepository<InventoryItem, Long> { }
// InventoryItem belongs to a different service — use an API call instead
```
## Health checks and configuration
Always define a health indicator and externalize configuration with `@ConfigurationProperties`. Use Spring profiles for environment-specific settings, not conditionals in code.
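A minimal sketch of externalized configuration, assuming a hypothetical `fulfillment.shipping` property group (names and defaults are illustrative):

```java
@ConfigurationProperties(prefix = "fulfillment.shipping")
public class ShippingProperties {
    private String carrierBaseUrl;  // bound from application-{profile}.yml
    private int retryLimit = 3;

    public String getCarrierBaseUrl() { return carrierBaseUrl; }
    public void setCarrierBaseUrl(String url) { this.carrierBaseUrl = url; }
    public int getRetryLimit() { return retryLimit; }
    public void setRetryLimit(int limit) { this.retryLimit = limit; }
}
```

Register the class with `@ConfigurationPropertiesScan` or `@EnableConfigurationProperties`; each Spring profile then supplies its own values without any conditionals in code.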
---
description: Guidelines for Spring Data JPA repository design. Use when writing repositories, queries, or projections against JPA entities.
---
## Repository per aggregate root
Define one repository per aggregate root, not per table. Avoid exposing child entities through their own repositories:
```java
// ✅ Preferred: repository for the aggregate root only
interface OrderRepository extends JpaRepository<Order, Long> {
    @Query("SELECT o FROM Order o JOIN FETCH o.lineItems WHERE o.id = :id")
    Optional<Order> findWithLineItems(@Param("id") Long id);
}

// ❌ Avoid: separate repository for a child entity
interface OrderLineItemRepository extends JpaRepository<OrderLineItem, Long> { }
```
## Prevent N+1 queries
Use `JOIN FETCH`, `@EntityGraph`, or projections to load associations in a single query. Never rely on lazy loading inside a loop:
```java
// ✅ Preferred: fetch association eagerly in the query
@EntityGraph(attributePaths = "lineItems")
List<Order> findByCustomerId(Long customerId);

// ❌ Avoid: lazy loading triggered per iteration
orders.forEach(order -> order.getLineItems().size());
```
---
description: Guidelines for defining Kafka event schemas and topic contracts. Use when creating new events, evolving existing schemas, or documenting topic ownership.
---
## Schema-first event design
Define the event schema before writing producer or consumer code. Every event must document its topic, key strategy, and payload fields:
```json
{
  "type": "record",
  "name": "OrderShipped",
  "namespace": "com.fulfillment.events",
  "fields": [
    { "name": "orderId", "type": "long" },
    { "name": "shippedAt", "type": { "type": "long", "logicalType": "timestamp-millis" } },
    { "name": "trackingNumber", "type": ["null", "string"], "default": null }
  ]
}
```
## Evolve schemas with backward compatibility
New fields must have defaults. Never remove or rename existing fields — add new ones and deprecate the old:
```json
// ✅ Preferred: add optional field with default
{ "name": "carrier", "type": ["null", "string"], "default": null }
// ❌ Avoid: renaming a field (breaks existing consumers)
// "shippingProvider" renamed to "carrier"
```
---
description: Guidelines for implementing saga orchestration in distributed systems. Use when coordinating multi-step transactions across services that require compensating actions on failure.
---
## Explicit state machine with compensation
Model each saga as a state machine. Every forward step has a corresponding compensating action. Never rely on distributed locks:
```java
// ✅ Preferred: clear states and compensations
public enum OrderSagaState {
    STARTED, PAYMENT_RESERVED, INVENTORY_RESERVED, CONFIRMED, COMPENSATING, FAILED
}

public class OrderSaga {
    void onPaymentReserved() { state = PAYMENT_RESERVED; reserveInventory(); }
    void onInventoryFailed() { state = COMPENSATING; releasePayment(); }
}

// ❌ Avoid: ad-hoc try/catch without state tracking
try {
    reservePayment();
    reserveInventory();
    confirm();
} catch (Exception e) {
    // unclear which steps succeeded — can't compensate reliably
}
```
## Idempotent steps
Every saga step and compensation must be idempotent. Use unique request IDs so retries and replays produce the same result.
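The request-ID idea can be sketched independent of any framework: a hypothetical in-memory guard that stores each step's result by request ID, so a retried call returns the stored result instead of re-executing the step. (Real systems would persist this table; `IdempotencyGuard` and its method names are illustrative.)

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch: executes a saga step at most once per request ID.
class IdempotencyGuard {
    private final Map<String, String> results = new ConcurrentHashMap<>();

    String execute(String requestId, Supplier<String> step) {
        // A replay with the same requestId short-circuits to the stored result.
        return results.computeIfAbsent(requestId, id -> step.get());
    }
}
```

The same guard applies to compensations: releasing a payment twice under one request ID must behave exactly like releasing it once.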
---
description: Guidelines for implementing the transactional outbox pattern. Use when publishing domain events to Kafka while guaranteeing consistency with the local database.
---
## Write event and state in one transaction
Insert the domain event into an outbox table inside the same database transaction that modifies state. A separate process reads the outbox and publishes to Kafka:
```java
// ✅ Preferred: single transaction for state + outbox
@Transactional
public void completeOrder(Long orderId) {
    Order order = orderRepository.findById(orderId).orElseThrow();
    order.markCompleted();
    orderRepository.save(order);
    outboxRepository.save(new OutboxEvent(
        "order.completed", orderId.toString(), serialize(order)));
}

// ❌ Avoid: publishing directly — if Kafka is down, state and events diverge
@Transactional
public void completeOrder(Long orderId) {
    Order order = orderRepository.findById(orderId).orElseThrow();
    order.markCompleted();
    orderRepository.save(order);
    kafkaTemplate.send("order.completed", serialize(order)); // not transactional
}
```
## At-least-once delivery
The outbox publisher guarantees at-least-once delivery. Consumers must be idempotent — use event IDs to deduplicate.
---
description: Guidelines for integration testing with Testcontainers. Use when writing tests that need real Kafka, PostgreSQL, or other infrastructure dependencies.
---
## Real dependencies, not mocks
Use Testcontainers to start actual infrastructure in tests. One container instance per type, shared across the test class:
```java
// ✅ Preferred: shared container started once per test class
@Testcontainers
@SpringBootTest
class OrderRepositoryTest {
    @Container
    static PostgreSQLContainer<?> postgres =
        new PostgreSQLContainer<>("postgres:16-alpine");

    @DynamicPropertySource
    static void configure(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }
}

// ❌ Avoid: new container per test method — slow and wasteful
@Container
PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16-alpine");
```
## Keep tests independent
Even though the container is shared, each test should set up and tear down its own data. Never rely on ordering or state from a previous test.
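One way to enforce that, assuming a JUnit 5 test with the repository injected, is to reset the tables each test touches before it runs:

```java
@BeforeEach
void resetData() {
    // Shared container, but every test starts from a known-empty state.
    orderRepository.deleteAll();
}
```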
---
description: Guidelines for instrumenting Spring Boot services with Micrometer metrics and distributed tracing. Use when adding timers, counters, or trace propagation.
---
## Low-cardinality tags only
Every metric must use bounded tag values. Never use user IDs, request IDs, or other unbounded values as tags:
```java
// ✅ Preferred: bounded tag values
registry.counter("orders.created", "region", order.getRegion()).increment();
// ❌ Avoid: high-cardinality tag — explodes metric storage
registry.counter("orders.created", "orderId", order.getId().toString()).increment();
```
## Correlate logs with trace IDs
Configure log patterns to include the trace and span IDs automatically. Every log line in a request should be traceable to its distributed context:
```properties
# application.properties
logging.pattern.level=%5p [${spring.application.name},%X{traceId},%X{spanId}]
```
Bootstrap a new Spring Boot service.
1. Create the module and add dependencies.
2. Add the main class and config.

Add a REST endpoint to a service.
1. Add the controller and DTOs.
2. Add validation.
3. Add OpenAPI spec if used.

Define and publish a domain event.
1. Define the event class.
2. Add the producer.
3. Define schema and topic.
4. Document the contract.

Add a Kafka consumer for a topic.
1. Add the listener and deserializer.
2. Add error handling.
3. Add idempotency if needed.

Add a Kafka producer for a topic.
1. Configure the producer.
2. Send with key and schema.
3. Add error and retry handling.

Add a step to a saga (orchestration or choreography).
1. Define the step and compensation.
2. Update saga state.
3. Ensure idempotency.

Add a JPA entity and repository.
1. Define the entity and mapping.
2. Add the repository interface.
3. Add a migration if needed.
Add a Flyway migration.
1. Create V{n}__description.sql.
2. Implement Up only or pair with a down script.
3. Test.
Add an integration test using Testcontainers.
1. Define the containers.
2. Use @DynamicPropertySource or config to wire them.
3. Run the test.

The user will provide the link to a pull request. Use a git worktree to check out the source branch and perform the code review.
1. Find the closest commit in common between the source branch and the target branch.
2. Generate a diff between the common commit and the source branch.
3. Review the changes represented in the diff using the standards in `docs/standards/rubric.md`.
4. Produce a summary report.

Write a user story for a feature or fix.
1. Use the As a / I want / So that format.
2. Add acceptance criteria.

Write a technical spec for implementation.
1. Describe context, approach, interfaces, and rollout.
---
description: Expert microservice developer. Use for service implementation and evolution.
---
You are an elite microservice developer. You build endpoints, events, and domain logic. You are proficient in the following technologies:
- Spring Boot
- Java
- REST and domain events

---
description: Expert integration developer. Use for cross-service flows and event wiring.
---
You are an elite integration developer. You define contracts, consumers, and saga steps. You are proficient in the following technologies:
- Kafka
- Event contracts
- Testcontainers

---
description: Expert data developer. Use for schema and data access in a service.
---
You are an elite data developer. You define entities, Flyway migrations, and queries. You are proficient in the following technologies:
- Spring Data JPA
- Flyway
- PostgreSQL

---
description: Expert QA engineer. Use for test strategy and CI coverage.
---
You are an elite QA engineer. You write and run integration and E2E tests. You are proficient in the following technologies:
- Testcontainers
- Contract and scenario tests
- Integration testing

---
description: Expert SRE. Use for deployment, observability, and incidents.
---
You are an elite SRE. You set up metrics, alerts, runbooks, and reliability. You are proficient in the following technologies:
- Micrometer
- Dashboards and alerting
- On-call and postmortems

---
description: Expert release manager. Use for multi-service releases and compatibility.
---
You are an elite release manager. You manage the release train, dependencies, and rollback. You are proficient in the following:
- Version policy
- Release notes
- Staged rollout
Deliver a feature from story to production. The user will provide a specification.
1. Delegate to **integration-dev** to write service contracts and events.
2. Delegate to **data-dev** to design topics and schemas.
3. Delegate to **service-dev** to implement the service.
4. Continue until the feature is complete.

Resolve a production incident with a targeted fix.
1. Delegate to **sre-dev** to triage and confirm impact.
2. Delegate to **service-dev**, **integration-dev**, or **data-dev** to implement the fix.
3. Delegate to **qa-dev** to test the fix.
4. Delegate to **release-manager** to deploy and verify.
5. Delegate to **sre-dev** for postmortem and follow-up.
---
description: Guidelines for writing PostgreSQL queries and schema design. Use when creating tables, writing queries, or tuning performance in PostgreSQL.
---
## Set-based operations over row-by-row
Express logic as single SQL statements rather than cursors or application-side loops. Use CTEs and window functions to keep complex queries readable:
```sql
-- ✅ Preferred: CTE with window function
WITH ranked_orders AS (
    SELECT customer_id, total,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY total DESC) AS rn
    FROM orders
)
SELECT customer_id, total
FROM ranked_orders
WHERE rn = 1;
-- ❌ Avoid: fetching all rows and filtering in application code
SELECT customer_id, total FROM orders;
-- then loop in C# to find the max per customer
```
## Index and constrain early
Define constraints (`NOT NULL`, `UNIQUE`, `CHECK`, foreign keys) at table creation time. Add indexes for columns that appear in `WHERE`, `JOIN`, or `ORDER BY` clauses. Run `EXPLAIN ANALYZE` to verify the query plan uses them.
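A sketch of both habits together, using a hypothetical `orders` table (names and columns are illustrative):

```sql
-- Constraints declared at creation time, not bolted on later.
CREATE TABLE orders (
    order_id    BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_id BIGINT NOT NULL REFERENCES customers (customer_id),
    total       NUMERIC(12,2) NOT NULL CHECK (total >= 0)
);

-- Index the column used in WHERE/JOIN lookups.
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- Verify the plan actually uses the index.
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
```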
---
description: Guidelines for writing FluentMigrator migrations. Use when adding or modifying database schema through FluentMigrator in .NET projects.
---
## One migration per change, always reversible
Each migration class handles a single schema change. Always implement both `Up` and `Down` so deployments can roll back:
```csharp
// ✅ Preferred: focused and reversible
[Migration(20260115_0001)]
public class AddCustomerRegion : Migration
{
    public override void Up()
    {
        Alter.Table("Customer").AddColumn("Region").AsString(50).Nullable();
    }

    public override void Down()
    {
        Delete.Column("Region").FromTable("Customer");
    }
}

// ❌ Avoid: multiple unrelated changes in one migration
[Migration(20260115_0002)]
public class MixedChanges : Migration
{
    public override void Up()
    {
        Alter.Table("Customer").AddColumn("Region").AsString(50).Nullable();
        Create.Table("AuditLog").WithColumn("Id").AsInt64().PrimaryKey();
        Delete.Column("LegacyFlag").FromTable("Orders");
    }
}
```
## Use descriptive, timestamped names
Name migration classes to describe the change (e.g., `AddCustomerRegion`). Use a timestamp-based version number to avoid ordering conflicts across branches.
---
description: Guidelines for designing star/snowflake schemas and reporting views. Use when creating fact tables, dimension tables, or materialized reporting views.
---
## Star schema with consistent grain
Every fact table must have a clearly documented grain — one row represents one measurable event. Dimension tables hold descriptive attributes and are joined by surrogate keys:
```sql
-- ✅ Preferred: clear grain and surrogate keys
CREATE TABLE fact_order_line (
    order_line_key BIGINT PRIMARY KEY,
    order_key      BIGINT REFERENCES dim_order(order_key),
    product_key    BIGINT REFERENCES dim_product(product_key),
    date_key       INT REFERENCES dim_date(date_key),
    quantity       INT NOT NULL,
    line_total     NUMERIC(12,2) NOT NULL
);

-- ❌ Avoid: mixed grain — order-level and line-level in the same table
CREATE TABLE fact_orders (
    order_id     BIGINT,
    line_item_id BIGINT,        -- sometimes NULL for order-level rows
    order_total  NUMERIC(12,2),
    line_total   NUMERIC(12,2)
);
```
## Document definitions and refresh cadence
Every reporting view or materialized table must document what it measures, its grain, and how often it refreshes. If a downstream dashboard depends on it, note the SLA.
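One lightweight way to keep that documentation next to the object itself, assuming a hypothetical `mv_daily_revenue` materialized view (the name, grain, and schedule below are illustrative):

```sql
COMMENT ON MATERIALIZED VIEW mv_daily_revenue IS
  'Grain: one row per calendar day. Measures: gross revenue per day. '
  'Refresh: nightly at 02:00 UTC; downstream finance dashboard SLA is 06:00 UTC.';

-- Run on schedule by the pipeline:
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_daily_revenue;
```

`COMMENT ON` keeps the definition queryable from the catalog, so consumers can discover the grain and SLA without leaving the database.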
Add a new FluentMigrator migration.
1. Create the migration class.
2. Implement Up and Down.
3. Test.

Add a SQL script for ETL or one-off use.
1. Make it idempotent where possible.
2. Document inputs and outputs.

The user will provide the link to a pull request. Use a git worktree to check out the source branch and perform the code review.
1. Find the closest commit in common between the source branch and the target branch.
2. Generate a diff between the common commit and the source branch.
3. Review the changes represented in the diff using the standards in `docs/standards/rubric.md`.
4. Produce a summary report.

Write a user story for a feature or fix.
1. Use the As a / I want / So that format.
2. Add acceptance criteria.

Write a technical spec for implementation.
1. Describe context, approach, interfaces, and rollout.
---
description: Expert data engineer. Use for schema, ingestion, and data quality.
---
You are an elite data engineer. You define migrations, ELT (extract, load, transform), and data models. You are proficient in the following technologies:
- PostgreSQL
- FluentMigrator
- ELT pipelines

---
description: Expert analytics developer. Use for analytics logic and consistency.
---
You are an elite analytics developer. You define aggregations, models, and definitions. You are proficient in the following technologies:
- SQL and aggregations
- Data models and definitions
- Sample data validation

---
description: Expert reporting developer. Use for stakeholder-facing reporting.
---
You are an elite reporting developer. You create reports, visualizations, and exports. You are proficient in the following technologies:
- Reports and dashboards
- Approved definitions
- Source documentation and validation
Implement or update an ETL pipeline.
1. Use **write-spec** for source, target, and transform.
2. Delegate to **data-dev** (and **analytics-dev** if needed) for implementation; use **create-migration** and **create-script**.
3. Schedule and validate; delegate to **data-dev** or **analytics-dev** for validation.

Implement or update a report or dataset.
1. Capture requirements; delegate to **reporting-dev** for definition and **data-dev** or **analytics-dev** for data shape.
2. Delegate to **data-dev** or **analytics-dev** for query/view implementation; use **create-script** where needed.
3. Delegate to **reporting-dev** to deliver and validate.
Learn concepts, see examples, and discover tools to build your custom development factory.

- Get answers to common questions about Factory Engineering.
- Understand factory engineering basics.
- Learn how to compose and configure factories.
- Explore available components and how to use them.
- Find tips, resources, and guidance to get the most out of your software factory.