Mark fields as deprecated in GraphQL
When removing fields from a GraphQL schema, follow a progressive deprecation process:

1. First, mark the field to be removed as deprecated using the `@deprecated` directive
2. Introduce the new alternative field in the same operation
3. Only remove deprecated fields after they have been deprecated for a sufficient time period

Bad:

```graphql
# Before
type User {
  id: ID!
  oldEmail: String!
}

# After (directly removing the field)
type User {
  id: ID!
}
```

Good:

```graphql
# Step 1: Mark as deprecated and introduce the alternative
type User {
  id: ID!
  oldEmail: String! @deprecated(reason: "Use 'email' field instead")
  email: String!
}

# Step 2: Only later, remove the deprecated field
type User {
  id: ID!
  email: String!
}
```
Do not rename columns
When renaming columns in PostgreSQL, follow a safe migration pattern to avoid breaking changes to applications:

1. Create a new column
2. Update application code to write to both the old and new columns
3. Backfill data from the old column to the new column
4. Update application code to read from the new column instead of the old one
5. Once all deployments are complete, stop writing to the old column
6. Drop the old column in a later migration

Bad:

```sql
ALTER TABLE users RENAME COLUMN some_column TO new_name;
```
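Good (a sketch of the SQL portions of the steps above; the column type is illustrative, and the dual-write and read-switch steps happen in application code, not SQL):

```sql
-- Step 1: Create the new column (assuming the column holds text)
ALTER TABLE users ADD COLUMN new_name text;

-- Step 3: Backfill data from the old column (batch this on large tables)
UPDATE users SET new_name = some_column WHERE new_name IS NULL;

-- Step 6: In a later migration, once nothing reads or writes the old column
ALTER TABLE users DROP COLUMN some_column;
```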
postgresql
mysql
drizzle
migrations
Do not rename tables
When renaming tables, use a multi-step approach instead of direct renaming to prevent downtime:

1. Create a new table
2. Write to both tables
3. Backfill data from the old table to the new table
4. Move reads from the old table to the new table
5. Stop writing to the old table
6. Drop the old table

Bad:

```sql
ALTER TABLE users RENAME TO customers;
```
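Good (a sketch of the SQL portions of the steps above; steps 2, 4, and 5 happen in application code, and the backfill should be batched on large tables):

```sql
-- Step 1: Create the new table with the same structure
CREATE TABLE customers (LIKE users INCLUDING ALL);

-- Step 3: Backfill data from the old table
INSERT INTO customers SELECT * FROM users;

-- Step 6: Once nothing reads or writes the old table, drop it
DROP TABLE users;
```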
postgresql
mysql
drizzle
migrations
Limit non-unique indexes in PostgreSQL
Limit non-unique indexes to a maximum of three columns in PostgreSQL databases:

Bad:

```sql
CREATE INDEX index_users_on_multiple_columns ON users (column_a, column_b, column_c, column_d);
```

Good:

```sql
CREATE INDEX CONCURRENTLY index_users_on_selective_columns ON users (column_d, column_b);
```
postgresql
drizzle
migrations
Only concurrent indexes in PostgreSQL
When creating indexes in PostgreSQL, always use the CONCURRENTLY option to prevent blocking writes during index creation.

Bad:

```sql
CREATE INDEX idx_users_email ON users(email);
```

Good:

```sql
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
```
postgresql
drizzle
migrations
Always use JSONB in PostgreSQL
Always use jsonb instead of the json data type when creating columns in PostgreSQL databases.

Bad:

```sql
ALTER TABLE users
ADD COLUMN properties json;
```

Good:

```sql
ALTER TABLE users
ADD COLUMN properties jsonb;
```
postgresql
drizzle
migrations
Use check constraints for setting NOT NULL columns in PostgreSQL
When adding a NOT NULL constraint to an existing column in PostgreSQL, use a check constraint first to avoid blocking reads and writes while every row is checked.

Bad:

```sql
-- This can cause performance issues with large tables
ALTER TABLE users
ALTER COLUMN some_column SET NOT NULL;
```

Good:

```sql
-- Step 1: Add a check constraint without validation
ALTER TABLE users
ADD CONSTRAINT users_some_column_null
CHECK (some_column IS NOT NULL) NOT VALID;

-- Step 2: In a separate transaction, validate the constraint
ALTER TABLE users
VALIDATE CONSTRAINT users_some_column_null;

-- Step 3: Add the NOT NULL constraint and remove the check constraint
ALTER TABLE users
ALTER COLUMN some_column SET NOT NULL;

ALTER TABLE users
DROP CONSTRAINT users_some_column_null;
```
postgresql
drizzle
migrations
Split foreign keys in PostgreSQL
When adding foreign keys in Postgres migrations, split the operation into two steps to avoid blocking writes on both tables:

1. First create the foreign key constraint without validation
2. Then validate existing data in a separate migration

Bad:

```sql
-- In a single migration
ALTER TABLE users
ADD CONSTRAINT fk_users_orders
FOREIGN KEY (order_id)
REFERENCES orders (id);
```

Good:

```sql
-- In first migration: add without validating
ALTER TABLE users
ADD CONSTRAINT fk_users_orders
FOREIGN KEY (order_id)
REFERENCES orders (id) NOT VALID;

-- In second migration: validate existing data
ALTER TABLE users
VALIDATE CONSTRAINT fk_users_orders;
```
postgresql
drizzle
migrations
Split unique constraints in PostgreSQL
When adding unique constraints in PostgreSQL, create the unique index concurrently first before adding the constraint to avoid blocking reads and writes.

Bad:

```sql
-- Creates a unique constraint directly, which blocks reads and writes
ALTER TABLE users
ADD CONSTRAINT users_email_unique UNIQUE (email);
```

Good:

```sql
-- First create a unique index concurrently (non-blocking)
CREATE UNIQUE INDEX CONCURRENTLY users_email_unique_idx ON users (email);

-- Then add the constraint using the existing index
ALTER TABLE users
ADD CONSTRAINT users_email_unique UNIQUE USING INDEX users_email_unique_idx;
```
postgresql
drizzle
migrations
Python DRY
Avoid duplicating code in Python. Extract repeated logic into reusable functions, classes, or constants. You may have to search the codebase to see if the function or class is already defined.

Bad:

```python
# Duplicated class definitions
class User:
    def __init__(self, id: str, name: str):
        self.id = id
        self.name = name

class UserProfile:
    def __init__(self, id: str, name: str):
        self.id = id
        self.name = name

# Magic numbers repeated
page_size = 10
items_per_page = 10
```

Good:

```python
# Reusable class and constant
class User:
    def __init__(self, id: str, name: str):
        self.id = id
        self.name = name

PAGE_SIZE = 10
```
Avoid duplicate variable reassignment in Python
Avoid assigning a variable to itself or reassigning a variable with the same value.

Bad:

```python
# Redundant self-assignment
x = 10
x = x  # Unnecessary reassignment

# Duplicate assignment with the same value
y = "hello"
# ... some code ...
y = "hello"  # Unnecessary reassignment with identical value
```

Good:

```python
# Single, clear assignment
x = 10

# Only reassign when the value changes
y = "hello"
# ... some code ...
y = "updated value"  # Value actually changes
```
No unused code in Python
Do not leave commented-out code blocks in Python files. If code is no longer needed, remove it entirely rather than commenting it out.

Bad:

```python
def calculate_total(items):
    total = 0
    for item in items:
        total += item.price

    # Old calculation method that we might need later
    # subtotal = 0
    # for item in items:
    #     if item.type != 'tax':
    #         subtotal += item.price
    # tax = calculate_tax(subtotal)
    # total = subtotal + tax

    return total
```

Good:

```python
def calculate_total(items):
    total = 0
    for item in items:
        total += item.price
    return total
```
Avoid unnecessary try except in Python
When using try-except blocks in Python, keep the try block focused only on the code that can raise the expected exception.

Bad:

```python
try:
    # Large block of code with many potential errors
    user_data = get_user_data()
    process_data(user_data)
    save_to_db(user_data)
except (NetworkError, DBError):
    logger.error("Operation failed")
```

Bad:

```python
try:
    # Contains only one potential error but still
    # has a block of code unrelated to the exception
    url = "https://google.com"
    url += "/?search=hello"
    response = requests.get(url)
    data = response.json()
    print(data)
except NetworkError as e:
    logger.error(f"Error: {e}")
```

Bad:

```python
# Try-except blocks are nested into each other
try:
    response = client.beta.chat.completions.parse(
        model="some-model",
        messages=[
            {"role": "system", "content": "hello"},
            {"role": "user", "content": "how are you"},
        ],
    )
    try:
        json.loads(response.choices[0].message.parsed)
    except json.JSONDecodeError as e:
        logger.error(f"Decode failed: {e}")
except requests.RequestException as e:
    logger.error(f"Error: {e}")
```

Good:

```python
try:
    # Only one function that could have an error
    user_data = get_user_data()
except NetworkError:
    logger.error("Failed to fetch user data")
    return

# Cannot raise an exception so it doesn't need to be handled
process_data(user_data)

try:
    # Only one potential error
    save_to_db(user_data)
except DBError:
    logger.error("Failed to save to database")
    return
```

Good:

```python
url = "https://google.com"
url += "/?search=hello"

# Network call is a separate try-except block
try:
    response = requests.get(url)
    response.raise_for_status()
except RequestException as e:
    logger.error(f"Error: {e}")

# Getting the response as JSON is a separate try-except block
try:
    data = response.json()
except JSONDecodeError as e:
    logger.error(f"Error: {e}")
```

Good:

```python
# Blocks that were nested before are now unnested into separate blocks
try:
    response = client.beta.chat.completions.parse(
        model="some-model",
        messages=[
            {"role": "system", "content": "hello"},
            {"role": "user", "content": "how are you"},
        ],
    )
except requests.RequestException as e:
    logger.error(f"Error: {e}")

try:
    json.loads(response.choices[0].message.parsed)
except json.JSONDecodeError as e:
    logger.error(f"Decode failed: {e}")
```
No random numbers in React
Do not generate non-deterministic values like random IDs during render in React components. This causes hydration errors because the server-rendered HTML will not match what the client generates. Avoid using functions like Math.random(), Date.now(), uuid(), or any other source of randomness directly in your render function or JSX.

Instead:

- Generate IDs in useEffect hooks
- Use stable IDs based on props or state
- Use refs to store generated values
- Use libraries that support SSR (like uuid with specific configuration)

Bad:

```jsx
function UserCard() {
  // This will generate different values on server and client
  const id = `user-${Math.random()}`
  return (
    <div id={id}>
      <input aria-labelledby={`label-${Math.floor(Math.random() * 1000)}`} />
    </div>
  )
}
```

Good:

```jsx
function UserCard({ userId }) {
  // Using stable props for IDs
  const id = `user-${userId}`

  // For dynamic IDs, use useEffect and useState
  const [randomId, setRandomId] = useState(null)

  useEffect(() => {
    // Generate random values after mounting
    setRandomId(`label-${Math.floor(Math.random() * 1000)}`)
  }, [])

  return <div id={id}>{randomId && <input aria-labelledby={randomId} />}</div>
}
```
Split check constraints
When adding check constraints in migrations, split the operation into two steps to avoid blocking writes during the table scan:

1. First create the check constraint without validation
2. Then validate existing data in a separate migration

Bad:

```sql
-- In a single migration
ALTER TABLE users
ADD CONSTRAINT ck_users_age_positive
CHECK (age >= 0);
```

Good:

```sql
-- In first migration: add without validating
ALTER TABLE users
ADD CONSTRAINT ck_users_age_positive
CHECK (age >= 0) NOT VALID;

-- In second migration: validate existing data
ALTER TABLE users
VALIDATE CONSTRAINT ck_users_age_positive;
```
postgresql
mysql
drizzle
migrations
Change column types safely in SQLAlchemy
When changing a column type that requires a table rewrite, follow these steps:

1. Create a new column with the desired type
2. Write to both columns during the transition period
3. Backfill data from the old column to the new column
4. Move reads from the old column to the new column
5. Stop writing to the old column
6. Drop the old column

Bad:

```python
def upgrade():
    # Directly changing a column type can cause table locks
    op.alter_column('users', 'some_column',
                    type_=sa.String(50),
                    existing_type=sa.Integer())

def downgrade():
    op.alter_column('users', 'some_column',
                    type_=sa.Integer(),
                    existing_type=sa.String(50))
```

Good:

```python
# Migration 1: Add new column
def upgrade():
    # Adding a new column first
    op.add_column('users', sa.Column('some_column_new', sa.String(50)))

def downgrade():
    op.drop_column('users', 'some_column_new')
```

```python
# Migration 2: Complete the transition (after backfilling data)
def upgrade():
    # After ensuring all data is migrated
    op.drop_column('users', 'some_column')
    op.alter_column('users', 'some_column_new', new_column_name='some_column')

def downgrade():
    op.alter_column('users', 'some_column', new_column_name='some_column_new')
    op.add_column('users', sa.Column('some_column', sa.Integer()))
```
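The backfill in step 3 is not shown above; a minimal sketch of what it might look like as its own migration, assuming the old integer values can be cast directly to the new string type (names follow the example above):

```python
# Hypothetical backfill migration, run between Migration 1 and Migration 2
def upgrade():
    # Batch this UPDATE on large tables to avoid a long-running lock
    op.execute(
        "UPDATE users SET some_column_new = some_column::varchar(50) "
        "WHERE some_column_new IS NULL"
    )

def downgrade():
    # The backfill is not reversed; the old column still holds the data
    pass
```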
postgresql
mysql
sqlalchemy
migrations
alembic
Limit non-unique indexes in SQLAlchemy
Limit non-unique indexes to a maximum of three columns in PostgreSQL databases:

Bad:

```python
def upgrade():
    with op.get_context().autocommit_block():
        op.create_index(
            'index_users_on_multiple_columns',
            'users',
            ['column_a', 'column_b', 'column_c', 'column_d'],
            postgresql_concurrently=True
        )
```

Good:

```python
def upgrade():
    # Limit to most selective columns for better performance
    with op.get_context().autocommit_block():
        op.create_index(
            'index_users_on_selective_columns',
            'users',
            ['column_d', 'column_b'],
            postgresql_concurrently=True
        )
```
postgresql
sqlalchemy
alembic
migrations
Only concurrent indexes in SQLAlchemy
When creating or dropping indexes in PostgreSQL using SQLAlchemy migrations, always use the postgresql_concurrently=True option within an autocommit block. This prevents blocking writes during index operations.

For upgrade():

Bad:

```python
def upgrade():
    op.create_index('idx_users_email', 'users', ['email'])
```

Good:

```python
def upgrade():
    with op.get_context().autocommit_block():
        op.create_index('idx_users_email', 'users', ['email'], postgresql_concurrently=True)
```

For downgrade():

Bad:

```python
def downgrade():
    op.drop_index('idx_users_email', 'users')
```

Good:

```python
def downgrade():
    with op.get_context().autocommit_block():
        op.drop_index('idx_users_email', 'users', postgresql_concurrently=True)
```
postgresql
sqlalchemy
migrations
alembic
Add check constraints safely in SQLAlchemy
When adding check constraints that could affect large tables, create the constraint with NOT VALID first to avoid blocking writes during the validation scan.

Bad:

```python
def upgrade():
    # Directly creating a check constraint blocks writes during table scan
    op.create_check_constraint(
        'ck_users_age_positive',
        'users',
        'age >= 0'
    )
```

Good:

```python
# Migration 1: Create check constraint without validation
def upgrade():
    # Create the check constraint without validating existing data (non-blocking)
    op.create_check_constraint(
        'ck_users_age_positive',
        'users',
        'age >= 0',
        postgresql_not_valid=True
    )
```

```python
# Migration 2: Validate existing data
def upgrade():
    op.execute('ALTER TABLE users VALIDATE CONSTRAINT ck_users_age_positive')
```
postgresql
sqlalchemy
alembic
migrations
Add foreign keys safely in SQLAlchemy
When adding foreign keys in SQLAlchemy migrations, split the operation into two steps to avoid blocking writes on both tables:

1. First create the foreign key constraint without validation
2. Then validate existing data in a separate migration

Bad:

```python
def upgrade():
    # Directly creating a foreign key constraint can block writes on both tables
    op.create_foreign_key(
        'fk_users_orders',
        'users',
        'orders',
        ['order_id'],
        ['id']
    )
```

Good:

```python
# Migration 1: Add foreign key without validation
def upgrade():
    # Create the foreign key constraint without validating existing data
    op.create_foreign_key(
        'fk_users_orders',
        'users',
        'orders',
        ['order_id'],
        ['id'],
        postgresql_not_valid=True
    )
```

```python
# Migration 2: Validate existing data
def upgrade():
    op.execute('ALTER TABLE users VALIDATE CONSTRAINT fk_users_orders')
```
postgresql
sqlalchemy
alembic
migrations
Add unique constraints safely in SQLAlchemy
When adding unique constraints that could affect large tables, create the unique index concurrently first to avoid blocking reads and writes during the migration.

Bad:

```python
def upgrade():
    # Directly creating a unique constraint can block reads and writes
    op.create_unique_constraint('users_email_unique', 'users', ['email'])
```

Good:

```python
# Migration 1: Create unique index concurrently
def upgrade():
    # Create the unique index concurrently (non-blocking)
    with op.get_context().autocommit_block():
        op.create_index(
            'users_email_unique_idx',
            'users',
            ['email'],
            unique=True,
            postgresql_concurrently=True
        )
```

```python
# Migration 2: Add constraint using existing index
def upgrade():
    # Add the unique constraint using the existing index
    op.create_unique_constraint(
        'users_email_unique',
        'users',
        ['email'],
        postgresql_using_index='users_email_unique_idx'
    )
```
postgresql
sqlalchemy
alembic
migrations
Prefer tailwind design tokens
Use Tailwind's predefined design tokens instead of arbitrary values. Do not use custom pixel values, color codes, or arbitrary numbers in your Tailwind CSS classes.

1. Use Tailwind's spacing scale instead of arbitrary pixel values
2. Use Tailwind's color palette instead of custom color codes
3. Use Tailwind's z-index scale instead of arbitrary z-index values
4. Use Tailwind's percentage-based positioning values instead of arbitrary percentages

Bad:

```html
<div class="mt-[37px] text-[#3366FF] z-[9999] top-[37%] w-[142px]">
  Custom content
</div>
```

Good:

```html
<div class="mt-10 text-blue-600 z-50 top-1/3 w-36">Custom content</div>
```
TypeScript DRY
Avoid duplicating code in TypeScript. Extract repeated logic into reusable functions, types, or constants. You may have to search the codebase to see if the method or type is already defined.

Bad:

```typescript
// Duplicated type definitions
interface User {
  id: string
  name: string
}

interface UserProfile {
  id: string
  name: string
}

// Magic numbers repeated
const pageSize = 10
const itemsPerPage = 10
```

Good:

```typescript
// Reusable type and constant
type User = {
  id: string
  name: string
}

const PAGE_SIZE = 10
```
Avoid duplicate assignment in TypeScript
Avoid assigning values to the same variable multiple times in succession without using the variable in between.

Bad:

```typescript
let count = 0
count = 1 // Duplicate assignment without using the initial value
count = 2 // Another duplicate assignment

function process() {
  let result = calculateSomething()
  result = transformData() // Original calculation is discarded
  return result
}
```

Good:

```typescript
let count = 2 // Direct assignment to final value

function process() {
  const initialResult = calculateSomething()
  const finalResult = transformData(initialResult) // Or use the value before reassigning
  return finalResult
}
```
No unused code in TypeScript
Do not leave commented-out code blocks. Delete unused code instead of commenting it out.

Bad:

```typescript
function calculateTotal(items: Item[]): number {
  let total = 0

  // Old implementation
  // for (let i = 0; i < items.length; i++) {
  //   const item = items[i];
  //   total += item.price * item.quantity;
  //   if (item.discounted) {
  //     total -= item.discountAmount;
  //   }
  // }

  // New implementation
  for (const item of items) {
    total += item.price * item.quantity * (item.discounted ? 0.9 : 1)
  }

  return total
}
```

Good:

```typescript
function calculateTotal(items: Item[]): number {
  let total = 0
  for (const item of items) {
    total += item.price * item.quantity * (item.discounted ? 0.9 : 1)
  }
  return total
}
```
Avoid unnecessary try catch in TypeScript
Don't wrap large blocks of code in try/catch when you're only logging the error message without preserving the stack trace.

Bad:

```typescript
async function doSomething() {
  try {
    // Large block of code with multiple potential error sources
    await fetchData()
    await processData()
    await saveResults()
  } catch (error) {
    console.error(`Error: ${(error as Error).message}`)
    process.exit(1)
  }
}
```

Good:

```typescript
async function doSomething() {
  // Let errors propagate with their full stack trace
  // or handle specific errors at appropriate points
  await fetchData()
  await processData()
  await saveResults()
}

// If you need top-level error handling:
async function main() {
  try {
    await doSomething()
  } catch (error) {
    console.error("Unexpected error:", error)
    process.exit(1)
  }
}
```
Prefer Composition API over Options API in Vue components
Favor the Composition API (`<script setup>` or the setup() function) instead of the Options API when writing new Vue components.

Bad – Options API component:

```vue
<script>
export default {
  name: "Counter",
  data() {
    return {
      count: 0,
    }
  },
  methods: {
    increment() {
      this.count++
    },
  },
  mounted() {
    console.log(`The initial count is ${this.count}.`)
  },
}
</script>

<template>
  <button @click="increment">Count is: {{ count }}</button>
</template>
```

Good – Composition API component (`<script setup>`):

```vue
<script setup lang="ts">
import { ref, onMounted } from "vue"

const count = ref(0)

function increment() {
  count.value++
}

onMounted(() => {
  console.log(`The initial count is ${count.value}.`)
})
</script>

<template>
  <button @click="increment">Count is: {{ count }}</button>
</template>
```