OpenClaw
Use XCrawl with OpenClaw to give your agents scraping, URL discovery, crawling, and search capabilities through reusable local skills.
XCrawl currently provides four OpenClaw-compatible skills:
- `xcrawl-scrape`
- `xcrawl-map`
- `xcrawl-crawl`
- `xcrawl-search`
These skills are defined in the xcrawl-skills repository and already include metadata.openclaw fields for OpenClaw discovery.
Published ClawHub pages:
Why XCrawl + OpenClaw
- One provider for scrape, map, crawl, and search workflows
- Skill files are API-first and production-oriented
- Skills read credentials from a local config file instead of relying on ad-hoc prompt state
- Output stays close to raw XCrawl API responses, which makes agent behavior easier to audit
Setup
1. Prerequisites
Make sure you have:
- OpenClaw installed
- An XCrawl account and API key
- `curl` and `node` available on the machine where OpenClaw runs
- A local clone or copy of the `xcrawl-skills` repository
If you do not have an XCrawl account yet, register at dash.xcrawl.com and activate the free 1,000-credit plan.
2. Configure your XCrawl API key
Create the local config file used by all XCrawl skills:
```shell
mkdir -p ~/.xcrawl
cat > ~/.xcrawl/config.json <<'EOF'
{
  "XCRAWL_API_KEY": "<your_api_key>"
}
EOF
```

All current XCrawl OpenClaw skills read credentials from `~/.xcrawl/config.json`.
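Since `node` is already a prerequisite, you can use it to sanity-check the file before moving on. The sketch below validates a config file at a temporary path for illustration; in practice, point `CONFIG` at `~/.xcrawl/config.json`.

```shell
# Sketch: verify an XCrawl config file is valid JSON and the key is set.
# Uses a temporary file here; use CONFIG=~/.xcrawl/config.json in real use.
CONFIG=$(mktemp)
printf '{ "XCRAWL_API_KEY": "sk-example" }\n' > "$CONFIG"

node -e '
  const cfg = JSON.parse(require("fs").readFileSync(process.argv[1], "utf8"));
  if (!cfg.XCRAWL_API_KEY || cfg.XCRAWL_API_KEY.startsWith("<")) {
    console.error("XCRAWL_API_KEY is missing or still a placeholder");
    process.exit(1);
  }
  console.log("config OK");
' "$CONFIG"
```

A non-zero exit here means the skills would fail the same way at runtime, so it is worth catching before reloading OpenClaw.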
3. Install the skills into OpenClaw
OpenClaw can load skills from:
- `~/.openclaw/skills`
- `<workspace>/skills`
- Extra directories configured with `skills.load.extraDirs`
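If you keep the repository clone in place rather than copying skills, you can point OpenClaw at it directly. Assuming the gateway config is JSON and `skills.load.extraDirs` accepts an array of paths (the key is named above; the file layout shown here is an assumption), an entry might look like:

```json
{
  "skills": {
    "load": {
      "extraDirs": ["/home/you/xcrawl-skills/skills"]
    }
  }
}
```

The path is illustrative; substitute wherever your clone lives.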
You can install XCrawl skills as shared local skills:
```shell
git clone https://github.com/xcrawl-api/xcrawl-skills.git
mkdir -p ~/.openclaw/skills
cp -R xcrawl-skills/skills/xcrawl-* ~/.openclaw/skills/
```

Or install them only for the current workspace:
```shell
git clone https://github.com/xcrawl-api/xcrawl-skills.git
mkdir -p ./skills
cp -R xcrawl-skills/skills/xcrawl-* ./skills/
```

If you already maintain the repo locally, you can copy or symlink the individual skill directories from `xcrawl-skills/skills/`.
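The symlink route keeps skills in sync with the repository: a `git pull` in the clone updates them in place, with no re-copying. A self-contained sketch, using temporary stand-in directories so it runs anywhere; in practice `SRC` is your `xcrawl-skills/skills` clone and `DEST` is `~/.openclaw/skills`:

```shell
# Sketch: symlink each skill directory instead of copying it.
# SRC/DEST are temporary stand-ins here for illustration.
SRC=$(mktemp -d); DEST=$(mktemp -d)
mkdir -p "$SRC/xcrawl-scrape" "$SRC/xcrawl-map"   # stand-ins for the cloned skills

for skill in "$SRC"/xcrawl-*; do
  # -sfn replaces any existing link, so the loop is safe to re-run
  ln -sfn "$skill" "$DEST/$(basename "$skill")"
done
ls -l "$DEST"
```

Re-running the loop after adding new skill directories to the clone picks them up without disturbing existing links.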
4. Reload OpenClaw
Start a new OpenClaw session, or refresh the gateway so it reloads the skill directories. Once reloaded, OpenClaw will discover the XCrawl skills automatically.
Skill Catalog
| Skill | Purpose |
|---|---|
| `xcrawl-scrape` | Single-page extraction, sync/async scraping, JSON extraction |
| `xcrawl-map` | Site URL discovery and crawl planning |
| `xcrawl-crawl` | Bounded site crawling and async result polling |
| `xcrawl-search` | Query-based web search with location/language controls |
Example Prompts
Scrape
```
Use xcrawl-scrape to fetch https://example.com in sync mode and return markdown and links.
```

Map

```
Use xcrawl-map to list only /docs/ URLs under https://docs.xcrawl.com with a limit of 2000.
```

Crawl

```
Use xcrawl-crawl to start a crawl for https://docs.xcrawl.com/doc/ with max depth 2 and limit 100, then poll until it completes.
```

Search

```
Use xcrawl-search to search for "XCrawl API" in US English and return the top 10 results.
```

Notes

- `xcrawl-scrape` and `xcrawl-crawl` include async flows, so the agent may create a task first and then poll for results.
- Current XCrawl OpenClaw skills require `curl`, `node`, and the local config file `~/.xcrawl/config.json`.
- The skills are designed to return upstream XCrawl responses directly unless you explicitly ask the agent to summarize or transform them.
