API Documentation
Programmatic access to the GEO (Generative Engine Optimization) Audit Engine. All endpoints require an API key.
The API lets you run GEO audits from your own tools. You can trigger scans, poll for results, pull completed reports as JSON (JavaScript Object Notation), and delete scans you no longer need. Common use cases: batch-scanning a list of client sites, integrating audit scores into a dashboard, or building automated monitoring that re-scans on a schedule.
Getting Started
- Sign in to your AgentLayer account.
- Go to API Keys and create a new key.
- Include the key in every request as an X-API-Key header.
Example request:
curl -H "X-API-Key: geo_your_key_here" \
https://agent-layer.ai/api/v1/scans
Your API key starts with geo_. Keep it secret.
If a key is compromised, revoke it from the API Keys page and create a new one.
Base URL
https://agent-layer.ai/api/v1
All endpoints are prefixed with /api/v1. Responses are JSON with Content-Type: application/json.
Scan Lifecycle
A scan moves through four statuses after creation:
- pending means the scan is queued and waiting for a worker to pick it up.
- running means the audit engine is actively crawling and analyzing the site. The current_stage and progress_pct fields update as it progresses.
- complete means all checks finished. Scores are populated and the report is available at GET /reports/{scan_id}.
- failed means something went wrong. Check error_message for details.
Polling for completion:
# Poll every 5 seconds until the scan finishes
while true; do
  STATUS=$(curl -s -H "X-API-Key: geo_your_key_here" \
    https://agent-layer.ai/api/v1/scans/YOUR_SCAN_ID \
    | python3 -c "import sys,json; print(json.load(sys.stdin)['status'])")
  echo "Status: $STATUS"
  [ "$STATUS" = "complete" ] || [ "$STATUS" = "failed" ] && break
  sleep 5
done
Most scans complete in 30 to 90 seconds. Larger sites with many pages may take longer.
Endpoints
POST /api/v1/scans
Create a new scan. The engine queues it immediately and returns the scan object with status pending. Poll GET /scans/{scan_id} to track progress.
Request body (JSON)
{
"url": "https://example.com",
"brand_name": "Example Corp"
}
- url (required): the site to audit. Must be a valid HTTP or HTTPS URL.
- brand_name (optional): improves entity checks by telling the engine what brand name to look for in AI responses.
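A complete request with curl (substitute your own key and target site):
curl -X POST \
  -H "X-API-Key: geo_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "brand_name": "Example Corp"}' \
  https://agent-layer.ai/api/v1/scans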
Example response (201 Created)
{
"id": "a1b2c3d4-...",
"url": "https://example.com",
"brand_name": "Example Corp",
"status": "pending",
"overall_score": null,
"technical_score": null,
"structural_score": null,
"entity_score": null,
"seo_score": null,
"current_stage": "Queued",
"progress_pct": 0,
"error_message": null,
"created_at": "2026-02-15T12:00:00+00:00",
"updated_at": "2026-02-15T12:00:00+00:00"
}
GET /api/v1/scans
List your scans, newest first. Supports pagination and filtering by status.
Query parameters
- limit (default: 50, max: 100): how many scans to return per page.
- offset (default: 0): skip this many scans before returning results.
- status (optional): filter by scan status. One of: pending, running, complete, failed.
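For example, to list the ten most recent completed scans:
curl -H "X-API-Key: geo_your_key_here" \
  "https://agent-layer.ai/api/v1/scans?status=complete&limit=10"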
Example response
{
"data": [
{
"id": "a1b2c3d4-...",
"url": "https://example.com",
"brand_name": "Example Corp",
"status": "complete",
"overall_score": 72.5,
"technical_score": 85.0,
"structural_score": 60.0,
"entity_score": 72.5,
"seo_score": 74.0,
"current_stage": "Done",
"progress_pct": 100,
"error_message": null,
"created_at": "2026-02-15T12:00:00+00:00",
"updated_at": "2026-02-15T12:01:15+00:00"
}
],
"total": 1,
"limit": 50,
"offset": 0,
"has_more": false
}
GET /api/v1/scans/{scan_id}
Get a single scan by its ID. Use this to poll for progress after creating a scan. Returns 404 if the scan does not exist or belongs to another user.
The response shape is the same as a single item in the GET /scans data array (see example above).
GET /api/v1/reports/{scan_id}
Get the full audit report for a completed scan. Returns 400 if the scan has not finished yet. This is where you get the actual audit findings.
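Example request (replace YOUR_SCAN_ID with the id of a completed scan):
curl -H "X-API-Key: geo_your_key_here" \
  https://agent-layer.ai/api/v1/reports/YOUR_SCAN_ID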
Response fields
- overall_score, overall_grade: the site's GEO score (0-100) and letter grade (A through F).
- technical_score, structural_score, entity_score, seo_score: pillar scores, each 0-100. The SEO score is independent from the GEO score.
- total_checks, passing_checks, warning_checks, failing_checks: summary counts.
- checks: array of individual check results (see below).
- access_level: either "lite" or "full". Lite reports include scores and summaries but omit detailed findings and recommendations.
Each check object contains
- check_id: machine-readable identifier (e.g. robots_txt).
- check_name: human-readable name (e.g. "Robots.txt AI Access").
- pillar: one of technical, structural, entity.
- score: 0-100 for this check.
- severity: one of pass, info, warning, fail.
- summary: one-line description of the finding.
- details: structured data specific to each check (full access only). Every check includes a score_components object. See Score Components below.
- recommendations: actionable fix suggestions (full access only).
Example response (abbreviated)
{
"scan_id": "a1b2c3d4-...",
"url": "https://example.com",
"overall_score": 72.5,
"overall_grade": "C",
"technical_score": 85.0,
"structural_score": 60.0,
"entity_score": 72.5,
"seo_score": 74.0,
"total_checks": 18,
"passing_checks": 7,
"warning_checks": 3,
"failing_checks": 2,
"checks": [
{
"check_id": "robots_txt",
"check_name": "Robots.txt AI Access",
"pillar": "technical",
"score": 90.0,
"severity": "pass",
"summary": "AI crawlers are allowed by robots.txt.",
"details": { "...": "..." },
"recommendations": []
}
],
"access_level": "full",
"is_lite": false
}
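To pull out just the failing checks, you can reuse the curl + python3 pattern from the polling example. A minimal sketch (replace YOUR_SCAN_ID with a completed scan's id):
curl -s -H "X-API-Key: geo_your_key_here" \
  https://agent-layer.ai/api/v1/reports/YOUR_SCAN_ID \
  | python3 -c "
import sys, json
for check in json.load(sys.stdin)['checks']:
    if check['severity'] == 'fail':
        print(check['check_name'], '-', check['summary'])
"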
DELETE /api/v1/scans/{scan_id}
Permanently delete a scan and all its check results. Returns 204 with no body on success, 404 if the scan does not exist or belongs to another user.
This cannot be undone. If you need the report data, fetch it before deleting.
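Example request:
curl -X DELETE \
  -H "X-API-Key: geo_your_key_here" \
  https://agent-layer.ai/api/v1/scans/YOUR_SCAN_ID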
Score Components
Every check includes a score_components object inside its
details field. This object breaks the check's score into
weighted components so you can see exactly what contributed to the number.
Three guarantees hold for every check:
- Each component has a weight (its maximum contribution) and an earned value (what the page actually scored).
- Weights always sum to 1.0.
- The check's score equals the sum of all earned values.
Example: Content Freshness check
{
"check_id": "technical.freshness",
"score": 0.85,
"details": {
"date_modified": "2026-01-20",
"date_published": "2025-06-01",
"score_components": {
"recency": { "weight": 0.50, "earned": 0.50 },
"modified_present": { "weight": 0.20, "earned": 0.20 },
"published_present": { "weight": 0.15, "earned": 0.15 },
"dual_dates": { "weight": 0.15, "earned": 0.00 }
}
}
}
In this example the page was recently modified (full recency credit) and has both date fields,
but the published date was more than 90 days before the modified date so dual_dates earned zero.
The total: 0.50 + 0.20 + 0.15 + 0.00 = 0.85.
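Because earned values sum to the check score, you can sanity-check a report programmatically. A minimal sketch in the same curl + python3 style as the polling loop (replace YOUR_SCAN_ID; lite reports omit details, so checks without components are skipped):
curl -s -H "X-API-Key: geo_your_key_here" \
  https://agent-layer.ai/api/v1/reports/YOUR_SCAN_ID \
  | python3 -c "
import sys, json
for check in json.load(sys.stdin)['checks']:
    # details (and score_components) are only present in full reports
    components = (check.get('details') or {}).get('score_components', {})
    if components:
        earned = sum(c['earned'] for c in components.values())
        print(check['check_id'], 'earned total:', round(earned, 2), 'score:', check['score'])
"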
Components by check
Grouped by pillar. Components listed in weight order, highest first.
| Check | Components (weight) |
|---|---|
| Technical | |
| Content Freshness | recency (.50), modified_present (.20), published_present (.15), dual_dates (.15) |
| AI Crawler Access | high_impact_access (.60), general_access (.40) |
| llms.txt | title (.25), links (.20), description (.15), body (.15), sections (.15), quality_signals (.10) |
| JS Rendering Gap | content_gap (.40), structural_survival (.25), meta_in_raw (.15), noscript_fallback (.10), script_footprint (.10) |
| Schema Markup | organization (.20), content_type (.20), property_completeness (.20), faq (.15), dates (.15), breadcrumb (.10) |
| Structural | |
| Heading Hierarchy | descriptive_headings (.25), single_h1 (.20), no_level_skips (.20), heading_quality (.20), has_h2s (.15) |
| Markdown Fidelity | content_preservation (.25), headings_survive (.20), low_nesting (.20), lists_survive (.20), tables_survive (.15) |
| Answer-First Content | no_filler (.50), concrete_openings (.50) |
| Passage Self-Containment | low_anaphora (.35), specificity (.35), citation_length (.30) |
| Fact Density | density (.60), variety (.25), proper_nouns (.15) |
| Entity | |
| AI Brand Check | recognition (.40), consistency (.30), richness (.30) |
| Knowledge Graph Presence | has_description (.30), property_coverage (.30), entity_exists (.20), sitelinks (.20) |
Errors
When something goes wrong, every endpoint returns a JSON object with a detail field explaining the problem:
{
"detail": "Scan is not complete (status: running)."
}
Status codes
| Code | Meaning |
|---|---|
| 200 | Success. The response body contains the requested data. |
| 201 | Scan created. The response body contains the new scan object. |
| 204 | Deleted. No response body. |
| 400 | Bad request. Usually means you tried to fetch a report for a scan that is not complete yet. |
| 401 | Missing or invalid API key. Check the X-API-Key header. |
| 404 | Scan not found, or it belongs to a different account. |
| 422 | Validation error. The URL you submitted is not valid. |
Rate Limits
Scan creation (POST /scans) is limited to 10 requests per minute per IP address. Other endpoints are not rate limited.
If you hit the rate limit, you will receive a 429 Too Many Requests response. Wait a few seconds before retrying.
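When scripting scan creation, you can check the status code and back off before retrying. A minimal sketch (the 10-second wait is an arbitrary choice, not a documented retry interval):
CODE=$(curl -s -o /dev/null -w "%{http_code}" -X POST \
  -H "X-API-Key: geo_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}' \
  https://agent-layer.ai/api/v1/scans)
if [ "$CODE" = "429" ]; then
  sleep 10   # back off, then retry the POST
fi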
Full Example: Scan and Retrieve a Report
This script creates a scan, waits for it to finish, and prints the overall score:
import time, requests

API_KEY = "geo_your_key_here"
BASE = "https://agent-layer.ai/api/v1"
HEADERS = {"X-API-Key": API_KEY}

# 1. Create a scan
scan = requests.post(
    f"{BASE}/scans",
    json={"url": "https://example.com", "brand_name": "Example Corp"},
    headers=HEADERS,
).json()
print(f"Scan created: {scan['id']}")

# 2. Poll until complete
while scan["status"] not in ("complete", "failed"):
    time.sleep(5)
    scan = requests.get(f"{BASE}/scans/{scan['id']}", headers=HEADERS).json()
    print(f"  {scan['status']} ({scan['progress_pct']}%)")

# 3. Fetch the report
if scan["status"] == "complete":
    report = requests.get(f"{BASE}/reports/{scan['id']}", headers=HEADERS).json()
    print(f"Score: {report['overall_score']}/100 ({report['overall_grade']})")
    for check in report["checks"]:
        print(f"  [{check['severity']}] {check['check_name']}: {check['score']}")
Interactive API docs are also available at /docs (Swagger UI).