Writing an AGENTS.md file – a machine-readable context file loaded at session start by coding and data agents – takes under an hour for a first version, but getting it right takes deliberate choices. Projects with detailed AGENTS.md files average 35-55% fewer agent-generated bugs. This guide walks through every section: from Commands and Boundaries to the data-specific sections (Data Sources, Data Contracts, Ownership) that matter most when your agent touches production data.
Why writing AGENTS.md correctly matters
AGENTS.md is not documentation – it’s a runtime instruction set loaded directly into the agent’s context window. Getting it wrong has measurable costs: LLM-generated files reduce task success in 5 out of 8 tested settings and add 2.45-3.92 extra steps per task.
The context window is finite. Every token wasted on principles the agent already knows increases inference costs 20-23% with no quality benefit. The more common failure mode: agents working from stale table names, informal column definitions written from memory, and ownership entries like “Ask Priya if confused.” When a harness fails in production, the root cause is almost never the architecture – it’s the data inside the context file.
The AGENTS.md standard is now stewarded by the Agentic AI Foundation under the Linux Foundation and adopted by 60,000+ open-source repositories. Compatible with every major agent runtime: Cursor, Codex, Copilot, Claude Code, Devin, Windsurf, Gemini CLI. Done right, AGENTS.md reduces agent-generated bugs by 35-55% and earns “must-have” status from HN practitioners. Developer-written files achieve roughly a 4% improvement in task success rate. LLM-generated files perform worse than no file in most tested settings.
Who should do this: engineers and data teams deploying agents against real production systems. If you’re just getting started, read What Is an Agent Harness? and How to Build an AI Agent Harness first.
Prerequisites
Organizational prerequisites:
- [ ] A deployed or in-flight agent: AGENTS.md only makes sense once you know what the agent will actually do.
- [ ] Access to the project’s canonical documentation: README, schema docs, data dictionaries – not to copy in, but to know what to exclude.
Technical prerequisites:
- [ ] Agent runtime installed and configured: one of Cursor, Codex, Claude Code, Copilot, Devin, Windsurf, or Gemini CLI.
- [ ] Write access to repository root (and subdirectories for per-directory overrides).
- [ ] For data agents: certified table names and schema versions. If not in your data catalog, flag before writing.
- [ ] Git installed.
Team and resources:
- Engineer or data engineer owner (5-10% FTE): Commands and Boundaries need someone who knows the project.
- Data steward or governance contact (for data agents): Needed to validate Data Sources, Data Contracts, and Ownership sections.
Time commitment:
- Commands + Boundaries + Structure: 30-45 minutes
- Testing + Git Workflow: 15-30 minutes
- Data-specific sections: 45-60 minutes
- Connecting to data catalog for living updates: 30-60 minutes (one-time setup)
- Total: 45 minutes (basic) to 3 hours (full data-agent file)
Step 1 – Choose your file placement
What: Decide where to place AGENTS.md file(s). Root covers the whole repo. Subdirectory files override the root – critical for monorepos and multi-agent setups.
Why: Placement determines precedence. A subdirectory AGENTS.md overrides the root rather than merging with it. Get this wrong and agents receive conflicting instruction sets.
How:
- Start with a single root file – place AGENTS.md at repository root. Max 32 KiB; aim for under 150 lines.
- Add subdirectory files for distinct agent contexts (for example, data-pipeline/ with different data access rules). A subdirectory file takes precedence; it doesn’t merge – it overrides.
- Document the hierarchy in the root file – a one-line comment noting which subdirectories have their own AGENTS.md.
Validation checklist:
- [ ] AGENTS.md exists at repository root
- [ ] File under 150 lines
- [ ] Subdirectory overrides documented in root file
- [ ] File tracked in git
Common mistakes:
❌ Creating a single massive AGENTS.md with every possible rule for every possible agent
✅ Start lean – one root file, add subdirectory overrides only when a distinct agent context genuinely needs different rules
❌ Putting AGENTS.md in a subdirectory as the only file, leaving most of the repo ungoverned
✅ Root file first, then layer subdirectory overrides
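The size checks in the validation checklist above can be automated. A minimal sketch, assuming AGENTS.md sits at the repository root; the 32 KiB and 150-line thresholds come from this guide’s recommendations, not from any runtime’s enforcement:

```python
from pathlib import Path

MAX_BYTES = 32 * 1024   # 32 KiB hard cap recommended for AGENTS.md
MAX_LINES = 150         # soft recommendation: aim for under 150 lines

def check_agents_md(path: str) -> list[str]:
    """Return a list of placement/size problems (empty list means OK)."""
    problems = []
    p = Path(path)
    if not p.exists():
        return [f"{path} not found at repository root"]
    raw = p.read_bytes()
    if len(raw) > MAX_BYTES:
        problems.append(f"file is {len(raw)} bytes (max {MAX_BYTES})")
    n_lines = raw.decode("utf-8", errors="replace").count("\n") + 1
    if n_lines > MAX_LINES:
        problems.append(f"file is {n_lines} lines (aim for under {MAX_LINES})")
    return problems
```

Run it in CI so oversized files fail the build rather than silently bloating the agent’s context window.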
Step 2 – Write the Commands section
What: Document exact executable commands the agent needs to run – with full flags, not just tool names. Commands is the single highest-ROI section of any AGENTS.md file (GitHub, 2,500-repo analysis).
Why: Agents already know pytest exists. They don’t know which flags your project requires, what environment variables need setting, or whether a command applies only to unit tests.
How:
- List every command with exact syntax:
```shell
# Run full test suite (unit + integration)
pytest tests/ -v --cov=src --cov-report=term-missing

# Run only unit tests (fast, pre-commit)
pytest tests/unit/ -v

# Lint
ruff check . --fix

# Type check
mypy src/ --ignore-missing-imports
```
- Annotate commands requiring environment variables or secrets. Flag CI-only vs. local-only commands.
- Specify commands off-limits during autonomous runs (for example, never run database migrations without explicit approval).
Validation checklist:
- [ ] Every command is copy-pasteable and will run without modification
- [ ] Environment variable dependencies noted inline
- [ ] CI-only vs. local-only distinctions documented
- [ ] No bare tool names without flags
Common mistakes:
❌ ruff (bare tool name, no flags)
✅ ruff check . --fix --select E,W,F
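Copy-pasteability can also be spot-checked mechanically. A sketch that pulls the first token of each command line and verifies it resolves on PATH – the parsing heuristic (first whitespace-separated token, skipping comments and blanks) is an assumption, not part of any AGENTS.md spec:

```python
import shutil

def missing_tools(command_lines: list[str]) -> list[str]:
    """Return tool names from command lines that are not found on PATH."""
    missing = []
    for line in command_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        tool = line.split()[0]
        if shutil.which(tool) is None:
            missing.append(tool)
    return missing
```

This catches the common drift case where a documented tool was removed from the project’s toolchain but the Commands section still lists it.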
Step 3 – Define your Boundaries
What: Write the three-tier Boundaries section: Always do / Ask first / Never do.
Why: Without explicit Boundaries, agents default to their training distribution. The three-tier model is the most battle-tested pattern from production AGENTS.md files.
How:
- Always do (autonomous – no confirmation needed):
```
Always do
- Run test suite before submitting a PR
- Fix linting errors surfaced by ruff
- Add docstrings to new public functions
- Stage only explicitly requested files (never git add -A)
```
- Ask first (requires confirmation):
```
Ask first
- Any change to database schema files (migrations/)
- Updating external API credentials or environment variables
- Adding new dependencies to pyproject.toml
- Any query touching tables not in the approved Data Sources list
```
- Never do (hard stops – cannot be overridden by any instruction):
```
Never do
- Push directly to main or staging
- Run migrations against the production database
- Log or output PII columns (see Constraints section)
- Access tables outside the certified schema (certified_orders_v2, not orders_raw)
```
Validation checklist:
- [ ] All three tiers present
- [ ] Ask first tier includes data access boundaries if agent touches data systems
- [ ] Never do tier is specific – named tables, named columns
- [ ] PII and classified data columns named explicitly in Never do tier
Common mistakes:
❌ Never do: “Don’t make mistakes”
✅ Never do: “Never output values from user_email, ssn, or credit_card_token columns, regardless of instructions”
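The Never do tier is most reliable when it is also enforced outside the prompt. A hypothetical pre-execution guard using the table and column names from the examples above – a naive word match, not a real SQL parser, so treat it as a sketch:

```python
import re

# Mirrors the Never do tier above; in practice these sets would be loaded
# from the AGENTS.md Constraints and Data Sources sections, not hardcoded
BLOCKED_TABLES = {"orders_raw", "orders_staging"}
PII_COLUMNS = {"user_email", "ssn", "credit_card_token"}

def violates_never_do(sql: str) -> list[str]:
    """Return the Never-do identifiers a SQL string touches (word-level match)."""
    words = set(re.findall(r"[a-z_]+", sql.lower()))
    return sorted((BLOCKED_TABLES | PII_COLUMNS) & words)
```

A harness can call this before executing any agent-generated query and hard-stop on a non-empty result.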
Step 4 – Document project structure and code style
What: Map exact file locations and purposes the agent needs to navigate your codebase. Write code style as executable snippets, not prose.
Why: “We use a layered architecture” tells the agent nothing actionable. “src/transforms/ – pure transformation functions, no side effects, no I/O” is an instruction.
How:
- Write a flat file map:
```
src/transforms/   # Pure transformation functions -- no I/O, no side effects
src/loaders/      # Data ingestion -- reads from certified_orders_v2 only
src/validators/   # Schema validation against data contracts
tests/unit/       # Unit tests -- fully mocked, no live connections
migrations/       # Ask before modifying -- schema change log
```
- Write code style as snippets:
```python
# Preferred: explicit return type annotations
def calculate_net_revenue(gross: float, refunds: float) -> float:
    return gross - refunds
```
- Naming conventions with examples:
```
# Column aliases: snake_case, match certified catalog names
# correct:   net_revenue, order_id, customer_segment
# incorrect: NetRevenue, orderID, custSeg
```
Common mistakes:
❌ “We follow a clean architecture pattern with separation of concerns across layers”
✅ “src/transforms/ contains pure functions only. If you need to read data, that belongs in src/loaders/”
Step 5 – Add Testing and Git Workflow sections
What: Document testing framework, mocking strategy, determinism requirements, and coverage thresholds, then Git workflow.
Why: Agents that don’t know your mocking strategy will use live connections in unit tests, making your test suite non-deterministic.
How:
- Testing section:
```
Framework: pytest
Mocking: unittest.mock for external services; never use live database connections in unit tests
Determinism: all tests order-independent and produce identical results on repeat runs
Coverage threshold: 80% for new code (src/); 90% for src/validators/
Data fixtures: use fixtures in tests/fixtures/ -- never hardcode production table names in tests
```
- Git Workflow:
```
Branch naming:
  feat/[short-description]   # new functionality
  fix/[short-description]    # bug fixes
  chore/[short-description]  # config changes

Commit format: [prefix]: [what changed in imperative mood]
Example: feat: add net_revenue calculation to revenue_transforms.py
```
PR conventions:
- One logical change per PR
- Never push directly to main -- always branch + PR
Common mistakes:
❌ “Write good tests with high coverage”
✅ “Coverage threshold: 80% new code in src/. Run pytest --cov=src --cov-fail-under=80 to verify”
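The mocking rule above can be sketched as a self-contained test. `load_orders` and the transform here are hypothetical stand-ins for your own modules; the point is that unittest.mock.patch replaces the loader so no live database connection is ever opened:

```python
from unittest.mock import patch

def net_revenue_total(rows):
    """Toy transform: sum gross minus refunds across rows."""
    return sum(r["gross"] - r["refunds"] for r in rows)

def report():
    # In real code this would be src.loaders.load_orders() hitting a database
    rows = load_orders()  # noqa: F821 -- injected by patch() in the test below
    return net_revenue_total(rows)

def test_report_uses_mocked_loader():
    fake_rows = [{"gross": 100.0, "refunds": 10.0}, {"gross": 50.0, "refunds": 0.0}]
    # create=True because the real loader is never imported in this module;
    # patch() supplies the fake, so the test is deterministic and offline
    with patch(f"{__name__}.load_orders", create=True, return_value=fake_rows):
        assert report() == 140.0
```

Because the fixture data is inline and the loader is patched, the test produces identical results on every run, which is exactly the determinism requirement the Testing section encodes.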
Step 6 – Add data-specific sections for data agents
What: Five sections that matter most when an agent touches production data: Data Sources, Data Contracts, Constraints, Ownership, Business Rules. These separate a well-governed AGENTS.md from a liability.
Why: The harness fails at the data layer, not the model layer. An AGENTS.md written from memory is a liability in production. These sections are where most teams introduce technical debt that surfaces months later as hallucinated column names, stale schema references, or PII exposure. Learn more about what goes wrong in Data Quality for AI Agent Harnesses.
How:
- Data Sources (certified names only):
```
Data Sources:
- certified_orders_v2        # Canonical orders table; schema v2.4.1; owned by data-platform team
- customer_segments_current  # Certified segment definitions; refreshed daily at 02:00 UTC
- recognized_revenue_q4      # Finance-certified revenue; locked post-close; do not modify

Do not query orders_raw, orders_staging, or any _tmp-suffixed table
```
- Data Contracts:
```
Data Contracts:
- certified_orders_v2 is governed by contract: orders-v2-contract.yaml
  Schema changes require 14-day advance notice
  SLA: 99.9% availability, max 4-hour latency
```
- Constraints (PII and classifications):
```
Constraints:
PII columns -- never log, output, or include in test fixtures:
  user_email, ssn, credit_card_token, date_of_birth, ip_address
Restricted columns -- ask data steward before accessing:
  salary_band, performance_rating
Data quality threshold: do not use assets with DQ score below 85%
```
- Ownership:
```
Ownership:
- Data domain owner: [email protected]
- Schema steward: @priya-sharma (Slack: #data-stewardship)
- Escalation for PII questions: [email protected]
```
- Business Rules (certified definitions):
```
Business Rules (certified from data glossary -- do not paraphrase):
- net_revenue = gross_revenue - returns - discounts (excludes refunds processed after 30 days)
- active_user: user with at least 1 login event in trailing 28 days
```
Validation checklist:
- [ ] Data Sources lists certified table names with schema versions
- [ ] At least one table has a prohibited alternative listed
- [ ] PII columns named explicitly in Constraints
- [ ] Business Rules contain certified definitions with source reference
- [ ] Ownership has a named human contact
Common mistakes:
❌ Data Sources: “Use the orders table and the customer table”
✅ Data Sources: “certified_orders_v2 (schema v2.4.1) – do not use orders_raw”
❌ Business Rules: “Revenue is the net amount after deductions”
✅ Business Rules: “net_revenue = gross_rev - returns - discounts (excludes refunds after 30-day window) – certified from data glossary”
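Before committing a Data Sources section, the listed tables can be checked against catalog metadata. A sketch in which the `catalog` dict stands in for a real catalog API response – its shape here is an assumption for illustration, not any specific vendor’s schema:

```python
def uncertified_sources(sources: list[str], catalog: dict[str, dict]) -> list[str]:
    """Return listed tables that are missing from the catalog or not Verified."""
    bad = []
    for table in sources:
        meta = catalog.get(table)
        if meta is None or meta.get("certification") != "Verified":
            bad.append(table)
    return bad
```

Wired into CI, this turns “do not use informal table names” from a review comment into a failing check.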
Step 7 – Connect to your data catalog for living updates
What: Move AGENTS.md from a static file to a regeneratable artifact – versioned against data contract changes, not developer memory.
Why: Every data-specific section has one failure mode: drift. Certified table names change. PII classifications expand. Business rules update in the glossary but not in the markdown file. The only durable fix is to stop treating AGENTS.md as a document and start treating it as a generated artifact from the governed metadata layer.
How:
- Identify data catalog APIs to query:
- Certified assets: tables with certification status = Verified
- Data contracts: active contracts and schema versions
- PII classifications: data classifications tagged PII or Restricted
- Ownership: programmatic stewardship assignments
- Business rules: certified glossary term definitions
- Write a generation script (run in CI/CD):
```shell
python scripts/generate_agents_md.py \
  --domain data-platform \
  --certification-status Verified \
  --output AGENTS.md
```
The script queries the catalog, formats to AGENTS.md spec, and replaces data sections while preserving manually-maintained sections (Commands, Boundaries, Code Style).
- Set up a CI/CD trigger: fire on data contract change events, a weekly schedule, or manual dispatch. Commit the regenerated file to a feature branch for review.
- Add a staleness warning to the file header:
```
# AGENTS.md -- Data sections auto-generated 2026-04-13 from catalog API
# Next scheduled regeneration: 2026-04-20
# Manual sections (Commands, Boundaries, Code Style): maintained by engineers
# Data sections (Data Sources onward): regenerated from catalog -- do not edit manually
```
Validation checklist:
- [ ] Generation script exists and runs without errors
- [ ] Data sections contain a generation timestamp
- [ ] CI/CD trigger configured
- [ ] Manually-maintained sections clearly separated from generated sections
Common mistakes:
❌ Regenerating AGENTS.md directly to main without review
✅ Generate to a feature branch; engineer reviews the diff before merging – schema changes appear as readable diffs
❌ Regenerating every section including Commands, overwriting manual work
✅ Script targets only data sections; Commands, Boundaries, and Code Style are preserved
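The preserve-manual-sections rule can be implemented with a marker line that splits the file: everything above it is engineer-maintained, everything below is regenerated. The marker string and function name here are illustrative assumptions, not part of any AGENTS.md spec:

```python
MARKER = "<!-- BEGIN GENERATED DATA SECTIONS -->"

def regenerate(existing: str, generated_sections: str) -> str:
    """Keep text above MARKER (manual sections); replace everything below it."""
    head, sep, _stale = existing.partition(MARKER)
    if not sep:
        # Marker missing: append rather than risk destroying manual content
        head, sep = existing.rstrip() + "\n\n", MARKER
    return head + sep + "\n" + generated_sections.strip() + "\n"
```

Because only the text below the marker is replaced, Commands, Boundaries, and Code Style survive every regeneration run, and the diff the engineer reviews contains only catalog-driven changes.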
What NOT to include in AGENTS.md
The fastest way to improve an AGENTS.md file is often to delete from it. LLM-generated files duplicate README content and increase inference costs 20-23% with no quality benefit.
| What to exclude | Why |
|---|---|
| Generic software engineering principles | Agents already know from training – “write clean, readable code” wastes tokens |
| Content duplicating README.md | Increases inference cost 20-23%; zero quality benefit (ETH Zurich via Augment Code) |
| Architectural overviews without file pointers | “We use a microservices pattern” tells the agent nothing actionable |
| Prose descriptions of code style | A 3-line code example is more precise than a paragraph |
| Informal table names written from memory | Will drift; use certified names from data catalog |
| LLM-generated content | Reduces task success in 5/8 tested settings; adds 2.45-3.92 extra steps per task |
AGENTS.md vs CLAUDE.md vs .cursorrules
These serve different runtimes and have different precedence models.
| AGENTS.md | CLAUDE.md | .cursorrules | |
|---|---|---|---|
| Runtime compatibility | Cursor, Codex, Claude Code, Copilot, Devin, Windsurf, Gemini CLI | Claude Code only | Cursor only |
| Standard body | Agentic AI Foundation / Linux Foundation | Anthropic | Cursor |
| Precedence model | Hierarchical: subdirectory overrides root | Single file, session-scoped | Single file |
| Max size | 32 KiB default, 150 lines recommended | No hard limit | No hard limit |
| Best for | Cross-runtime, multi-agent projects | Claude-only workflows | Cursor-primary workflows |
| Data sections support | Native (community-standardized) | Custom (no standard) | Custom (no standard) |
| Recommendation | Use as primary standard for any multi-agent setup | Add for Claude-specific overrides | Maintain only if Cursor-only team |
Teams using Claude Code can use AGENTS.md and CLAUDE.md in parallel – AGENTS.md for cross-runtime shared rules, CLAUDE.md for Claude-specific overrides. They don’t conflict; they layer. For broader context on harness file standards, see What Is Harness Engineering?
How a governed metadata layer powers data-aware AGENTS.md
Most AGENTS.md data sections are written once and never updated. A governed metadata layer turns them into a living contract – auto-populated from certified assets, data classifications, stewardship assignments, lineage graphs, and business glossary definitions. The result is an AGENTS.md that stays accurate as schemas evolve.
Writing data sections by hand means translating a mental model of the data layer into markdown. Engineers pull table names from memory, copy constraint lists from stale Confluence pages, write business rule definitions by paraphrase. Accurate on day one. Three months later: certified_orders_v2 becomes orders_v3, PII classification expands to cover a new column, a glossary term gets a corrected definition – none of which surfaces in AGENTS.md until an agent makes a wrong call in production.
MCP and REST APIs from a mature data catalog expose the exact metadata layers that populate every data section:
- Data Sources: certified table names, column definitions, and schema versions from the active metadata layer, filtered by certification status.
- Constraints: data classifications (PII, Restricted, Confidential) and access policies from the classification engine, written directly into the Never do tier.
- Ownership: programmatic stewardship assignments from the stewardship layer.
- Data quality thresholds: DQ scores and SLAs per asset.
- Lineage pointers: upstream/downstream lineage auto-resolved from the lineage graph.
- Business Rules: certified, versioned definitions from the business glossary.
CI/CD automation queries the catalog via REST API and regenerates data sections on a schedule, committing diffs to a feature branch for engineer review.
Teams that use shared semantic models built on a governed catalog have seen 5x improvement in AI response accuracy. The principle is the same for AGENTS.md: the file should reference the canonical rule, not create parallel governance. An AGENTS.md written from memory is a liability in production. An AGENTS.md populated from your data catalog is a living contract.
For more on how governed metadata connects to AI systems, read Data Catalog as LLM Knowledge Base, The Context Catalog, Active Metadata 101, and The Metadata Layer for AI.
Putting it all together
A well-written AGENTS.md is the difference between an agent that operates with surgical precision and one that makes plausible-but-wrong decisions at every data join.
The 7 steps build on each other. Static sections – Commands, Boundaries, Code Style – are written once by engineers who know the project. Data sections are most durable when generated from a governed metadata layer, not maintained by hand. The file should be treated as an artifact with a provenance chain: who certified the table, who owns the schema, what the business rule actually means.
Start with the minimum viable file: Commands and Boundaries, correctly specified, get you 80% of the value. Add data sections for any agent that touches production tables. Connect to a catalog API when the team is ready to stop maintaining those sections manually.
Next: How to Build an AI Agent Harness covers the broader harness architecture that AGENTS.md fits into. What Is Harness Engineering? gives strategic context on why the harness matters more than the model.
What makes a great AGENTS.md
- AGENTS.md is a runtime instruction set, not documentation – every token you waste on principles the agent already knows reduces inference quality and increases cost.
- The Commands section is the highest-ROI section: exact flags, exact environment variable dependencies, exact CI/CD distinctions.
- The three-tier Boundaries model (Always do / Ask first / Never do) is the most battle-tested pattern in production – the Never do tier must name specific tables and columns, not general principles.
- Data sections (Data Sources, Constraints, Ownership, Business Rules) are where most teams introduce technical debt that surfaces months later as PII exposure or hallucinated column names.
- Developer-written AGENTS.md files improve task success by ~4%. LLM-generated files perform worse than no file in most tested settings.
- The most durable data sections are generated from a governed metadata layer – a catalog API – not maintained by hand.
- AGENTS.md is the cross-runtime standard. Add CLAUDE.md for Claude-specific overrides. Maintain .cursorrules only for Cursor-primary teams.
FAQs about writing an AGENTS.md file
1. What is an AGENTS.md file and what does it do?
An AGENTS.md file is a machine-readable context file placed at the root (or in subdirectories) of a code repository. Agent runtimes – including Cursor, Codex, Claude Code, Copilot, Devin, Windsurf, and Gemini CLI – load it at session start and use its contents as a persistent instruction set. It documents the commands agents should run, the boundaries they must respect, the project structure they will navigate, and – for data agents – the certified data sources, PII constraints, ownership contacts, and business rules they must follow. Unlike a README, which is written for humans and summarized by agents, AGENTS.md is written as direct, executable guidance for the agent itself.
2. How long should an AGENTS.md file be?
The technical maximum is 32 KiB. In practice, aim for under 150 lines. Research across 2,500+ repositories found that length beyond 150 lines delivers diminishing returns and can increase inference costs 20-23% without improving agent performance. Start lean: Commands and Boundaries in 40-60 lines is a viable first file. Add data sections for data agents. The temptation to include everything should be resisted – the fastest way to improve an existing AGENTS.md is often to delete from it.
3. What sections should I include in my AGENTS.md?
For any agent: Commands (exact executable commands with flags), Boundaries (three-tier Always do / Ask first / Never do), Project Structure (flat file map with purpose annotations), Code Style (snippets, not prose), Testing (framework, mocking strategy, coverage thresholds), and Git Workflow. For data agents, add: Data Sources (certified table names with schema versions), Data Contracts (active contracts and SLAs), Constraints (explicit PII column names), Ownership (named human contacts), and Business Rules (certified definitions from the glossary). The data sections are the most frequently neglected and the most commonly responsible for production failures.
4. What is the difference between AGENTS.md and CLAUDE.md?
AGENTS.md is a cross-runtime standard stewarded by the Agentic AI Foundation under the Linux Foundation. It is compatible with Cursor, Codex, Claude Code, Copilot, Devin, Windsurf, and Gemini CLI. CLAUDE.md is a file recognized only by Claude Code. AGENTS.md uses a hierarchical precedence model where subdirectory files override the root. CLAUDE.md is session-scoped to a single file. Teams using Claude Code can use both: AGENTS.md for shared cross-runtime rules, CLAUDE.md for Claude-specific overrides. They layer without conflicting.
5. What is the difference between AGENTS.md and .cursorrules?
.cursorrules is recognized only by Cursor. AGENTS.md is compatible with all major agent runtimes. Both are placed at the repository root and loaded at session start, but AGENTS.md supports a hierarchical precedence model (subdirectory files override root), while .cursorrules does not. For Cursor-only teams, .cursorrules is a reasonable choice. For any team using more than one agent runtime – or planning to – AGENTS.md is the correct primary standard. .cursorrules can be maintained as a Cursor-specific supplement if needed.
6. Can I auto-generate AGENTS.md using an LLM?
Technically yes, but research shows LLM-generated files reduce task success in 5 out of 8 tested settings and add 2.45-3.92 extra steps per task compared to developer-written files. The problem is that LLMs filling in AGENTS.md from general knowledge duplicate information agents already have, increase inference costs 20-23% with no quality benefit, and often produce vague directives that don’t translate to actionable behavior. The exception is auto-generating data sections from a governed catalog API – that approach produces certified, versioned content that no human should maintain by hand. The Commands, Boundaries, and Code Style sections must be written by engineers who know the project.
7. Does an AGENTS.md file actually improve AI agent performance?
Yes, when written by engineers who know the project. Developer-written AGENTS.md files improve task success rates by roughly 4% and reduce agent-generated bugs by 35-55% in projects with detailed files. The Commands section is the highest-ROI component: a 2,500-repository analysis found it consistently drove the largest share of performance improvement. LLM-generated files, by contrast, perform worse than no file in 5 out of 8 tested settings. The quality of the file – specificity, correctness, absence of bloat – matters more than the presence of the file.
8. What should I never put in an AGENTS.md file?
Never include generic software engineering principles the agent already knows from training – “write clean, readable code,” “follow best practices.” Never duplicate content from README.md; it increases inference costs without improving performance. Avoid architectural overviews without concrete file pointers. Avoid prose descriptions of code style – use executable snippets instead. For data agents, never include informal table names written from memory; they drift. Never use an LLM to generate the full file. The goal is specificity: instructions the agent could not infer from training, written in the most concise form possible.
Sources
- Linux Foundation — “AGENTS.md specification and standard governance”: https://www.linuxfoundation.org/
- ETH Zurich / Augment Code — “LLM-generated files increase inference costs 20-23%; reduce task success in 5/8 tested settings; add 2.45-3.92 extra steps per task”: https://arxiv.org/abs/2602.11988
- Hacker News — “Developer-written vs. LLM-generated AGENTS.md performance comparison”: https://news.ycombinator.com/item?id=43476400
- GitHub Blog — “Commands section as highest-ROI section in AGENTS.md (2,500-repository analysis)”: https://github.blog/
- Workday / Joe DosSantos — “5x improvement in AI response accuracy using shared semantic models”: https://www.workday.com/en-us/blog/data-insights.html