feat: 更新一堆 ai 初始化以及 skill

This commit is contained in:
richarjiang
2026-04-15 09:40:15 +08:00
commit 67b2f7f2ac
37 changed files with 3121 additions and 0 deletions


@@ -0,0 +1,25 @@
MIT License
Copyright (c) 2025
This license applies to the shell script wrapper layer in this repository only.
The xiaohongshu-mcp binary (https://github.com/xpzouying/xiaohongshu-mcp) is a
separate project with its own licensing terms.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,315 @@
# XHS AI Toolkit
<p align="center">
<b>Make AI understand your Xiaohongshu (RedNote)</b>
</p>
<p align="center">
<a href="README_CN.md">简体中文</a> | English
</p>
<p align="center">
<img alt="License" src="https://img.shields.io/badge/license-MIT-blue.svg">
<img alt="Platform" src="https://img.shields.io/badge/platform-Linux%20%7C%20macOS-lightgrey.svg">
<img alt="Python" src="https://img.shields.io/badge/python-3.10+-green.svg">
<img alt="MCP" src="https://img.shields.io/badge/MCP-compatible-purple.svg">
</p>
---
AI-powered toolkit for **Xiaohongshu (小红书 / RedNote)** that turns your favorite posts into AI memory.
- **MCP Integration** — Search, browse, comment via AI assistants
- **Trend Tracking** — Auto-generate topic reports with engagement analytics
- **Memory Export** — Convert your liked/saved posts into an AI-searchable knowledge base
Built on [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp) and [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader).
## Features
| Feature | Description |
|---------|-------------|
| Search | Search posts by keywords |
| Feed | Get homepage recommendations |
| Post Details | Fetch post content, comments, engagement stats |
| Comment | Post comments to notes |
| User Profile | Get user info and their posts |
| Trend Tracking | Auto-generate topic analysis reports |
| Long Image Export | Export posts as annotated JPG long images |
| Memory Export | Export liked/saved posts as Markdown for AI memory |
## Quick Start
### 1. Install xiaohongshu-mcp
Download from [GitHub Releases](https://github.com/xpzouying/xiaohongshu-mcp/releases):
```bash
# Linux x64
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-mcp-linux-amd64.tar.gz
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-login-linux-amd64.tar.gz
# macOS ARM
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-mcp-darwin-arm64.tar.gz
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-login-darwin-arm64.tar.gz
```
Install:
```bash
mkdir -p ~/.local/bin
tar -xzf xiaohongshu-mcp-*.tar.gz -C ~/.local/bin/
tar -xzf xiaohongshu-login-*.tar.gz -C ~/.local/bin/
cd ~/.local/bin
mv xiaohongshu-mcp-* xiaohongshu-mcp
mv xiaohongshu-login-* xiaohongshu-login
chmod +x xiaohongshu-mcp xiaohongshu-login
```
### 2. Install This Toolkit
```bash
# Clone to OpenClaw workspace
git clone https://github.com/zhjiang22/openclaw-xhs.git
cp -r openclaw-xhs ~/.openclaw/workspace/skills/xiaohongshu
# Or use symlink
ln -s /path/to/openclaw-xhs ~/.openclaw/workspace/skills/xiaohongshu
# Verify installation
cd ~/.openclaw/workspace/skills/xiaohongshu/scripts
./install-check.sh
```
### 3. Login (Get Cookies)
**Option A: Desktop Environment**
```bash
./login.sh # Opens browser, scan QR code with Xiaohongshu app
```
**Option B: Headless Server**
Get cookies on your local machine, then copy to server:
```bash
# On local machine with GUI
./xiaohongshu-login
# Cookies saved to /tmp/cookies.json
# Copy to server
scp /tmp/cookies.json user@server:~/.xiaohongshu/cookies.json
```
### 4. Start Service
```bash
./start-mcp.sh # Headless mode
./start-mcp.sh --headless=false # Show browser (debug)
```
Service runs at `http://localhost:18060/mcp`.
#### Server Deployment (Headless Linux)
On servers without a desktop environment, the underlying browser requires a virtual display.
`start-mcp.sh` **auto-detects** the environment — if no display is found, it starts Xvfb automatically. Just install it first:
```bash
# Debian/Ubuntu
sudo apt-get install -y xvfb
# CentOS/RHEL
sudo yum install -y xorg-x11-server-Xvfb
```
No extra configuration needed. The script handles:
- Detecting the `DISPLAY` environment variable
- Auto-starting `Xvfb :99` when no display is available
- Cleaning up Xvfb when `stop-mcp.sh` is called
> **Note**: Without Xvfb, login and search will fail on headless servers. See [Issue #3](https://github.com/zhjiang22/openclaw-xhs/issues/3).
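The detection logic amounts to roughly the following sketch (simplified and illustrative only — the actual script also tracks the Xvfb process so `stop-mcp.sh` can clean it up):

```bash
# Simplified sketch of start-mcp.sh's display detection on a headless server
if [ -z "${DISPLAY:-}" ]; then
  export DISPLAY=:99
  echo "no display found; would launch: Xvfb :99 &"
fi
echo "DISPLAY=$DISPLAY"
```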
## Usage
### Basic Commands
```bash
./status.sh # Check login status
./search.sh "coffee" # Search posts
./recommend.sh # Get recommendations
./post-detail.sh <id> <token> # Get post details
./comment.sh <id> <token> "Great post!" # Comment
./user-profile.sh <user_id> <xsec_token> # Get user profile
```
### Trend Tracking
Auto-search trending posts and generate analysis reports:
```bash
./track-topic.sh "AI" --limit 10
./track-topic.sh "travel" --limit 5 --output report.md
./track-topic.sh "iPhone" --limit 5 --feishu # Export to Feishu
```
### MCP Tools
| Tool | Description |
|------|-------------|
| `check_login_status` | Check login status |
| `search_feeds` | Search posts |
| `list_feeds` | Get homepage feed |
| `get_feed_detail` | Get post details & comments |
| `post_comment_to_feed` | Post comment |
| `user_profile` | Get user profile |
| `like_feed` | Like/unlike post |
| `favorite_feed` | Save/unsave post |
| `publish_content` | Publish image post |
| `publish_with_video` | Publish video post |
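Under the hood, each wrapper sends a JSON-RPC `tools/call` request to the MCP service. A minimal sketch of how such a payload is assembled with `jq` (the tool name and arguments here are illustrative):

```bash
# Build a JSON-RPC tools/call payload with jq; values stay properly escaped
payload=$(jq -n \
  --arg name "search_feeds" \
  --argjson args '{"keyword": "coffee"}' \
  '{jsonrpc: "2.0", id: 2, method: "tools/call", params: {name: $name, arguments: $args}}')
echo "$payload"
```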
### Long Image Export
Export posts as annotated JPG long images (white background, black text):
```bash
# Prepare posts.json
cat > posts.json << 'EOF'
[
  {
    "title": "Post title",
    "author": "Author",
    "stats": "13k likes 100 saves",
    "desc": "Post summary",
    "images": ["https://...webp"],
    "per_image_text": {"1": "Caption for 2nd image"}
  }
]
EOF
./export-long-image.sh --posts-file posts.json -o output.jpg
```
Requires: Python 3.10+, Pillow (`pip install Pillow`)
## Memory Export (Turn Likes into AI Memory)
Export your liked/saved posts as a searchable knowledge base for AI assistants.
### 1. Install XHS-Downloader
```bash
git clone https://github.com/JoeanAmier/XHS-Downloader.git
cd XHS-Downloader
pip install -r requirements.txt
```
### 2. Extract Post Links (Tampermonkey Script)
1. Install [Tampermonkey](https://www.tampermonkey.net/)
2. Install [XHS-Downloader UserScript](https://raw.githubusercontent.com/JoeanAmier/XHS-Downloader/refs/heads/master/static/XHS-Downloader.js)
3. Go to Xiaohongshu web → Profile → Liked/Saved
4. Click Tampermonkey menu → "Extract liked posts" or "Extract saved posts"
5. Links auto-copied to clipboard
6. Paste into `links.md`
### 3. Download & Export
```bash
# Copy helper scripts
cp tools/xhs-downloader/*.py /path/to/XHS-Downloader/
# Download posts
cd /path/to/XHS-Downloader
python batch_download.py links.md
# Export to workspace
python export_to_workspace.py
# Output: ~/.openclaw/workspace/xhs-memory/
```
### 4. Configure OpenClaw Memory Search
Edit `~/.openclaw/openclaw.json`:
```json
{
  "memorySearch": {
    "extraPaths": [
      "~/.openclaw/workspace/xhs-memory"
    ]
  }
}
```
Now your AI assistant can search your Xiaohongshu favorites!
## Project Structure
```
openclaw-xhs/
├── README.md                 # English docs
├── README_CN.md              # Chinese docs
├── LICENSE
├── SKILL.md                  # Skill manifest
├── scripts/                  # MCP wrapper scripts
│   ├── install-check.sh
│   ├── start-mcp.sh
│   ├── stop-mcp.sh
│   ├── login.sh
│   ├── mcp-call.sh
│   ├── status.sh
│   ├── search.sh
│   ├── recommend.sh
│   ├── post-detail.sh
│   ├── comment.sh
│   ├── user-profile.sh
│   ├── track-topic.sh
│   ├── track-topic.py
│   ├── export-long-image.sh
│   └── export-long-image.py
└── tools/
    └── xhs-downloader/       # Memory export tools
        ├── README.md
        ├── batch_download.py
        ├── export_memory.py
        └── export_to_workspace.py
```
## Security
This project implements the following security measures:
- **Cookie protection**: Cookie files are copied with `600` permissions (owner-only read/write)
- **Injection prevention**: All shell scripts use `jq` to build JSON payloads instead of string interpolation, preventing shell injection
- **Tool name validation**: MCP tool names are restricted to alphanumeric characters and underscores
- **Path validation**: Cross-skill script calls validate that target paths are within allowed directories
- **Third-party content**: Content fetched from Xiaohongshu is user-generated; exercise appropriate caution
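To illustrate the injection-prevention point above: with `jq --arg`, untrusted input is passed as data rather than interpolated into JSON text, so metacharacters are escaped instead of interpreted:

```bash
# A hostile "keyword" ends up as a harmless, correctly escaped JSON string
user_input='"; rm -rf / #'
jq -n --arg kw "$user_input" '{"keyword": $kw}'
```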
## Disclaimer
This project is a **wrapper layer** for [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp).
- Does NOT contain xiaohongshu-mcp source code
- Users must download xiaohongshu-mcp binaries separately
- Scripts communicate via HTTP protocol only
## Acknowledgments
- [@xpzouying](https://github.com/xpzouying) — [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp)
- [@JoeanAmier](https://github.com/JoeanAmier) — [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader) (GPL-3.0)
## License
MIT License (wrapper scripts only)
**Note:** xiaohongshu-mcp has no declared license. Please respect the author's terms.
---
<p align="center">
<b>If this project helps you, please give it a ⭐!</b>
</p>


@@ -0,0 +1,373 @@
# XHS AI Toolkit
<p align="center">
<b>让 AI 读懂你的小红书</b>
</p>
<p align="center">
简体中文 | <a href="README.md">English</a>
</p>
<p align="center">
<img alt="License" src="https://img.shields.io/badge/license-MIT-blue.svg">
<img alt="Platform" src="https://img.shields.io/badge/platform-Linux%20%7C%20macOS-lightgrey.svg">
<img alt="Python" src="https://img.shields.io/badge/python-3.10+-green.svg">
<img alt="MCP" src="https://img.shields.io/badge/MCP-compatible-purple.svg">
</p>
---
小红书 AI 工具包 — 把你的收藏变成 AI 的记忆。
- **MCP 集成** — 通过 AI 助手搜索、浏览、评论小红书
- **热点跟踪** — 自动生成话题报告,含互动数据分析
- **记忆导出** — 将收藏/点赞笔记转为 AI 可搜索的知识库
基于 [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp) 和 [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader) 构建。
## 功能特性
| 功能 | 说明 |
|------|------|
| 搜索内容 | 按关键词搜索小红书笔记 |
| 首页推荐 | 获取首页推荐列表 |
| 帖子详情 | 获取笔记内容、评论、互动数据 |
| 发表评论 | 在笔记下发表评论 |
| 用户主页 | 获取用户资料和笔记列表 |
| 热点跟踪 | 自动生成话题分析报告 |
| 长图导出 | 将帖子导出为带注释的 JPG 长图 |
| 记忆导出 | 导出收藏/点赞为 Markdown 记忆库 |
## 快速开始
### 1. 安装 xiaohongshu-mcp
从 [GitHub Releases](https://github.com/xpzouying/xiaohongshu-mcp/releases) 下载:
```bash
# Linux x64
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-mcp-linux-amd64.tar.gz
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-login-linux-amd64.tar.gz
# macOS ARM
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-mcp-darwin-arm64.tar.gz
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-login-darwin-arm64.tar.gz
```
解压安装:
```bash
mkdir -p ~/.local/bin
tar -xzf xiaohongshu-mcp-*.tar.gz -C ~/.local/bin/
tar -xzf xiaohongshu-login-*.tar.gz -C ~/.local/bin/
cd ~/.local/bin
mv xiaohongshu-mcp-* xiaohongshu-mcp
mv xiaohongshu-login-* xiaohongshu-login
chmod +x xiaohongshu-mcp xiaohongshu-login
```
### 2. 安装本工具包
```bash
# 克隆到 OpenClaw workspace
git clone https://github.com/zhjiang22/openclaw-xhs.git
cp -r openclaw-xhs ~/.openclaw/workspace/skills/xiaohongshu
# 或使用软链接
ln -s /path/to/openclaw-xhs ~/.openclaw/workspace/skills/xiaohongshu
# 验证安装
cd ~/.openclaw/workspace/skills/xiaohongshu/scripts
./install-check.sh
```
### 3. 登录获取 Cookies
**方式一:本地桌面环境**
```bash
./login.sh # 打开浏览器,用小红书 App 扫码登录
```
**方式二:Linux 服务器(无桌面)**
在本地电脑获取 cookies 后复制到服务器:
```bash
# 本地电脑(有 GUI)
./xiaohongshu-login
# Cookies 保存在 /tmp/cookies.json
# 复制到服务器
scp /tmp/cookies.json user@server:~/.xiaohongshu/cookies.json
```
服务启动时会自动检查以下位置的 cookies(按优先级):
1. 环境变量 `XHS_COOKIES_SRC` 指定的路径
2. `~/cookies.json`
3. `~/.xiaohongshu/cookies.json`
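查找逻辑大致如下(示意代码,与 `start-mcp.sh` 的实际实现可能有细节差异):

```bash
# 示意:按优先级返回第一个存在的 cookies 文件(非脚本原文)
find_cookies() {
  for p in "${XHS_COOKIES_SRC:-}" "$HOME/cookies.json" "$HOME/.xiaohongshu/cookies.json"; do
    [ -n "$p" ] && [ -f "$p" ] && { echo "$p"; return 0; }
  done
  return 1
}
find_cookies || echo "未找到 cookies"
```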
### 4. 启动服务
```bash
./start-mcp.sh # headless 模式
./start-mcp.sh --headless=false # 显示浏览器(调试用)
```
服务监听 `http://localhost:18060/mcp`
停止服务:`./stop-mcp.sh`
#### 服务器部署(无桌面环境)
在没有图形界面的 Linux 服务器上,`xiaohongshu-mcp` 底层的浏览器需要虚拟显示器才能正常工作。
`start-mcp.sh` 会**自动检测**是否有桌面环境,如果没有则自动启动 Xvfb你只需提前安装
```bash
# Debian/Ubuntu
sudo apt-get install -y xvfb
# CentOS/RHEL
sudo yum install -y xorg-x11-server-Xvfb
```
安装后无需额外配置,`start-mcp.sh` 会自动处理:
- 检测 `DISPLAY` 环境变量
- 没有显示器时自动启动 `Xvfb :99`
- `stop-mcp.sh` 停止服务时会一并清理 Xvfb 进程
> **提示**:如果不安装 Xvfb,登录和搜索功能会失败。参见 [Issue #3](https://github.com/zhjiang22/openclaw-xhs/issues/3)。
## 使用方法
### 基础命令
```bash
./status.sh # 检查登录状态
./search.sh "咖啡" # 搜索内容
./recommend.sh # 获取推荐
./post-detail.sh <id> <token> # 获取帖子详情
./comment.sh <id> <token> "写得真好!" # 发表评论
./user-profile.sh <user_id> <xsec_token> # 获取用户主页
```
### 热点跟踪
自动搜索热帖并生成分析报告:
```bash
./track-topic.sh "DeepSeek" --limit 10
./track-topic.sh "春节旅游" --limit 5 --output report.md
./track-topic.sh "iPhone 16" --limit 5 --feishu # 导出到飞书
```
报告包含:
- 📊 概览统计(帖子数、点赞数、评论数)
- 📝 热帖详情(标题、作者、正文、热门评论)
- 💬 评论区热点关键词
- 📈 趋势分析
### MCP 工具清单
| 工具名 | 描述 |
|--------|------|
| `check_login_status` | 检查登录状态 |
| `search_feeds` | 搜索内容 |
| `list_feeds` | 获取首页推荐 |
| `get_feed_detail` | 获取帖子详情和评论 |
| `post_comment_to_feed` | 发表评论 |
| `user_profile` | 获取用户主页 |
| `like_feed` | 点赞/取消 |
| `favorite_feed` | 收藏/取消 |
| `publish_content` | 发布图文笔记 |
| `publish_with_video` | 发布视频笔记 |
### 通用 MCP 调用
```bash
./mcp-call.sh # 查看可用工具
./mcp-call.sh search_feeds '{"keyword": "咖啡"}'
./mcp-call.sh like_feed '{"feed_id": "xxx", "xsec_token": "xxx", "like": true}'
```
### 长图导出
将搜索结果或帖子详情导出为带文字注释的 JPG 长图:
```bash
# 准备 posts.json(搜索 + 拉详情后整理)
cat > posts.json << 'EOF'
[
  {
    "title": "帖子标题",
    "author": "作者名",
    "stats": "1.3万赞 100收藏",
    "desc": "正文摘要",
    "images": ["https://...webp"],
    "per_image_text": {"1": "第2张图的专属说明"}
  }
]
EOF
./export-long-image.sh --posts-file posts.json -o output.jpg
```
依赖:Python 3.10+、Pillow(`pip install Pillow`)
## 记忆导出:把收藏变成 AI 的记忆
将你的收藏/点赞笔记导出为 AI 可搜索的知识库,让 AI 更懂你。
### 1. 安装 XHS-Downloader
```bash
git clone https://github.com/JoeanAmier/XHS-Downloader.git
cd XHS-Downloader
pip install -r requirements.txt
```
### 2. 获取收藏/点赞链接(油猴脚本)
手动复制链接效率太低,推荐使用油猴脚本批量提取:
**安装脚本:**
1. 安装 [Tampermonkey](https://www.tampermonkey.net/) 浏览器扩展
2. 安装用户脚本:[XHS-Downloader.js](https://raw.githubusercontent.com/JoeanAmier/XHS-Downloader/refs/heads/master/static/XHS-Downloader.js)
**提取链接:**
1. 打开 [小红书网页版](https://www.xiaohongshu.com) 并登录
2. 进入个人主页 → **收藏****点赞** 页面
3. 点击 Tampermonkey 图标,选择:
- `提取收藏作品链接`
- `提取点赞作品链接`
4. 脚本会自动滚动页面加载全部内容
5. 提取完成后链接自动复制到剪贴板
6. 粘贴到 `links.md` 文件
> **注意**:自动滚动功能默认关闭,需在脚本设置中手动开启。开启后可能触发风控,建议适度使用。
### 3. 批量下载并导出
```bash
# 复制工具脚本到 XHS-Downloader 目录
cp tools/xhs-downloader/*.py /path/to/XHS-Downloader/
# 批量下载
cd /path/to/XHS-Downloader
python batch_download.py links.md
# 导出为多文件(推荐)
python export_to_workspace.py
# 输出到 ~/.openclaw/workspace/xhs-memory/
# 或导出为单文件
python export_memory.py
# 生成 xhs_memory.md
```
### 4. 配置 OpenClaw 记忆搜索
编辑 `~/.openclaw/openclaw.json`:
```json
{
  "memorySearch": {
    "extraPaths": [
      "~/.openclaw/workspace/xhs-memory"
    ]
  }
}
```
现在你的 AI 助手可以搜索你的小红书收藏了!
## 安全说明
本项目在脚本安全方面采取了以下措施:
- **Cookies 保护**:cookies 文件复制时自动设置 `600` 权限(仅当前用户可读写)
- **防注入**:所有 shell 脚本使用 `jq` 构建 JSON payload,不通过字符串拼接,防止 shell 注入攻击
- **工具名校验**:MCP 工具名限制为字母数字和下划线,拒绝非法字符
- **路径校验**:跨 skill 调用时校验目标路径在允许的目录范围内
- **第三方内容**:从小红书获取的内容为用户生成内容(UGC),请注意甄别
## 注意事项
1. **发布限制**
- 标题最多 20 个字符
- 正文最多 1000 个字符
- 每日发布上限约 50 条
2. **账号安全**
- 避免多设备同时登录同一账号
- 手机 App 仅用于查看,不要同时操作
3. **首次运行**
- 会自动下载 headless 浏览器(约 150MB)
- 请确保网络畅通
4. **Cookies 有效期**
- 通常有效期约 30 天
- 失效后需重新获取
## 项目结构
```
openclaw-xhs/
├── README.md                 # 英文文档
├── README_CN.md              # 中文文档
├── LICENSE
├── SKILL.md                  # Skill 描述文件
├── scripts/                  # MCP 调用脚本
│   ├── install-check.sh
│   ├── start-mcp.sh
│   ├── stop-mcp.sh
│   ├── login.sh
│   ├── mcp-call.sh
│   ├── status.sh
│   ├── search.sh
│   ├── recommend.sh
│   ├── post-detail.sh
│   ├── comment.sh
│   ├── user-profile.sh
│   ├── track-topic.sh
│   ├── track-topic.py
│   ├── export-long-image.sh
│   └── export-long-image.py
└── tools/
    └── xhs-downloader/       # 记忆导出工具
        ├── README.md
        ├── batch_download.py
        ├── export_memory.py
        └── export_to_workspace.py
```
## 声明
本项目是 [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp) 的**调用封装层**。
- **不包含** xiaohongshu-mcp 的任何源代码
- **用户需自行下载** xiaohongshu-mcp 二进制文件
- 脚本仅通过 HTTP 协议与 MCP 服务通信
## 致谢
- [@xpzouying](https://github.com/xpzouying) — [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp)
- [@JoeanAmier](https://github.com/JoeanAmier) — [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader) (GPL-3.0)
## License
MIT License仅限本项目脚本
**注意:** xiaohongshu-mcp 项目未声明开源许可证,请遵守其作者的使用条款。
---
<p align="center">
<b>觉得有用?给个 ⭐ 支持一下!</b>
</p>


@@ -0,0 +1,202 @@
---
name: xiaohongshu
description: |
  小红书(RedNote)内容工具。使用场景:
  - 搜索小红书笔记并获取详情
  - 获取首页推荐列表
  - 获取帖子详情(正文、图片、互动数据、评论)
  - 发表评论 / 回复评论
  - 获取用户主页和笔记列表
  - 点赞、收藏帖子
  - 发布图文或视频笔记
  - 热点话题跟踪与分析报告
  - 帖子导出为长图
  触发词示例:
  - "搜一下小红书上的XX"
  - "跟踪一下小红书上的XX热点"
  - "分析小红书上关于XX的讨论"
  - "小红书XX话题报告"
  - "生成XX的小红书舆情报告"
---
# 小红书 MCP Skill
基于 [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp) 封装的 shell 脚本工具集。
## 前置条件
```bash
cd scripts/
./install-check.sh   # 检查依赖(xiaohongshu-mcp、jq、python3)
./start-mcp.sh       # 启动 MCP 服务(默认端口 18060)
./status.sh          # 确认已登录
```
未登录时需扫码:`mcp-call.sh get_login_qrcode` 获取二维码,用小红书 App 扫码。
服务地址可通过 `MCP_URL` 环境变量覆盖(默认 `http://localhost:18060/mcp`)。
## 核心数据流
**重要:** 大多数操作需要 `feed_id` + `xsec_token` 配对。这两个值从搜索/推荐/用户主页结果中获取,**不可自行构造**。
```
search_feeds / list_feeds / user_profile
  返回 feeds 数组,每个 feed 包含:
  ├── id        → 用作 feed_id
  ├── xsecToken → 用作 xsec_token
  └── noteCard  → 标题、作者、封面、互动数据

get_feed_detail(feed_id, xsec_token)
  返回完整笔记: 正文、图片列表、评论列表
  评论中包含 comment_id、user_id,用于回复评论
```
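提取配对值时可直接用 `jq`(下面的返回结构为构造的示意,字段名以服务实际输出为准):

```bash
# 示例:从 search_feeds 返回中提取 feed_id 与 xsec_token(数据为示意)
result='{"feeds":[{"id":"abc123","xsecToken":"tok_xyz"}]}'
echo "$result" | jq -r '.feeds[] | "\(.id) \(.xsecToken)"'
```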
## 脚本参考
| 脚本 | 用途 | 参数 |
|------|------|------|
| `search.sh <关键词>` | 搜索笔记 | 关键词 |
| `recommend.sh` | 首页推荐 | 无 |
| `post-detail.sh <feed_id> <xsec_token>` | 帖子详情+评论 | 从搜索结果获取 |
| `comment.sh <feed_id> <xsec_token> <内容>` | 发表评论 | 从搜索结果获取 |
| `user-profile.sh <user_id> <xsec_token>` | 用户主页+笔记 | 从搜索结果获取 |
| `track-topic.sh <话题> [选项]` | 热点分析报告 | `--limit N` `--output file` `--feishu` |
| `export-long-image.sh` | 帖子导出长图 | `--posts-file json -o output.jpg` |
| `mcp-call.sh <tool> [json_args]` | 通用 MCP 调用 | 见下方工具表 |
| `start-mcp.sh` | 启动服务 | `--headless=false` `--port=N` |
| `stop-mcp.sh` | 停止服务 | 无 |
| `status.sh` | 检查登录 | 无 |
| `install-check.sh` | 检查依赖 | 无 |
## MCP 工具详细参数
### search_feeds — 搜索笔记
```json
{"keyword": "咖啡", "filters": {"sort_by": "最新", "note_type": "图文", "publish_time": "一周内"}}
```
filters 可选字段:
- `sort_by`: 综合 | 最新 | 最多点赞 | 最多评论 | 最多收藏
- `note_type`: 不限 | 视频 | 图文
- `publish_time`: 不限 | 一天内 | 一周内 | 半年内
- `search_scope`: 不限 | 已看过 | 未看过 | 已关注
- `location`: 不限 | 同城 | 附近
### get_feed_detail — 帖子详情
```json
{"feed_id": "...", "xsec_token": "...", "load_all_comments": true, "limit": 20}
```
- `load_all_comments`: false(默认)返回前 10 条;true 滚动加载更多
- `limit`: 加载评论上限(仅 load_all_comments=true 时生效),默认 20
- `click_more_replies`: 是否展开二级回复,默认 false
- `reply_limit`: 跳过回复数超过此值的评论,默认 10
- `scroll_speed`: slow | normal | fast
### post_comment_to_feed — 发表评论
```json
{"feed_id": "...", "xsec_token": "...", "content": "写得真好!"}
```
### reply_comment_in_feed — 回复评论
```json
{"feed_id": "...", "xsec_token": "...", "content": "谢谢!", "comment_id": "...", "user_id": "..."}
```
`comment_id``user_id` 从 get_feed_detail 返回的评论列表中获取。
### user_profile — 用户主页
```json
{"user_id": "...", "xsec_token": "..."}
```
`user_id` 从 feed 的 `noteCard.user.userId` 获取,`xsec_token` 使用该 feed 的 `xsecToken`
### like_feed — 点赞/取消
```json
{"feed_id": "...", "xsec_token": "..."}
{"feed_id": "...", "xsec_token": "...", "unlike": true}
```
### favorite_feed — 收藏/取消
```json
{"feed_id": "...", "xsec_token": "..."}
{"feed_id": "...", "xsec_token": "...", "unfavorite": true}
```
### publish_content — 发布图文
```json
{"title": "标题(≤20字)", "content": "正文(≤1000字)", "images": ["/path/to/img.jpg"], "tags": ["美食","旅行"]}
```
- `images`: 至少 1 张,支持本地路径或 HTTP URL
- `tags`: 可选,话题标签
- `schedule_at`: 可选,定时发布(ISO8601,1 小时~14 天内)
### publish_with_video — 发布视频
```json
{"title": "标题", "content": "正文", "video": "/path/to/video.mp4"}
```
### 其他工具
| 工具 | 参数 | 说明 |
|------|------|------|
| `check_login_status` | 无 | 检查登录状态 |
| `list_feeds` | 无 | 获取首页推荐 |
| `get_login_qrcode` | 无 | 获取登录二维码(Base64 PNG) |
| `delete_cookies` | 无 | 删除 cookies,重置登录 |
## 热点跟踪
自动搜索 → 拉取详情 → 生成 Markdown 报告。
```bash
./track-topic.sh "DeepSeek" --limit 5
./track-topic.sh "春节旅游" --limit 10 --output report.md
./track-topic.sh "iPhone 16" --limit 5 --feishu # 导出飞书
```
报告包含:概览统计、热帖详情(正文+热评)、评论关键词、趋势分析。
## 长图导出
将帖子导出为白底黑字的 JPG 长图。
```bash
./export-long-image.sh --posts-file posts.json -o output.jpg
```
posts.json 格式:
```json
[{
  "title": "标题", "author": "作者", "stats": "1.3万赞",
  "desc": "正文摘要", "images": ["https://..."],
  "per_image_text": {"1": "第2张图的说明"}
}]
```
依赖Python 3.10+、Pillow。
## 注意事项
- Cookies 有效期约 30 天,过期需重新扫码
- 首次启动会下载 headless 浏览器(~150MB)
- 同一账号避免多客户端同时操作
- 发布限制:标题 ≤20 字符,正文 ≤1000 字符,日发布 ≤50 条
- Linux 服务器无桌面环境需安装 xvfb(`apt-get install xvfb`,脚本自动管理)


@@ -0,0 +1,16 @@
#!/bin/bash
# 发表评论到小红书帖子
NOTE_ID="$1"
XSEC_TOKEN="$2"
CONTENT="$3"
if [ -z "$NOTE_ID" ] || [ -z "$XSEC_TOKEN" ] || [ -z "$CONTENT" ]; then
echo "用法: $0 <note_id> <xsec_token> <评论内容>"
exit 1
fi
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ARGS=$(jq -n --arg fid "$NOTE_ID" --arg tok "$XSEC_TOKEN" --arg ct "$CONTENT" \
'{"feed_id":$fid,"xsec_token":$tok,"content":$ct}')
"$SCRIPT_DIR/mcp-call.sh" post_comment_to_feed "$ARGS"


@@ -0,0 +1,260 @@
#!/usr/bin/env python3
"""
小红书帖子长图导出工具
用法:
python3 export-long-image.py --posts '<json>' --output output.jpg
python3 export-long-image.py --posts-file posts.json --output output.jpg
posts JSON 格式:
[
    {
        "title": "帖子标题",
        "author": "作者名",
        "stats": "1.3万赞 5171收藏",
        "desc": "正文摘要,支持\\n换行",
        "images": ["url1", "url2", ...],
        "per_image_text": {
            "1": "第2张图的说明文字(0-indexed)",
            "3": "第4张图的说明文字"
        }
    },
    ...
]
per_image_text 可选:如果原帖文字明确指向某张图,可以把说明放在对应图片上。
未指定 per_image_text 时,所有文字放在该帖第一张图前的文字块中。
"""
import argparse
import json
import os
import sys
import tempfile
import urllib.request
from PIL import Image, ImageDraw, ImageFont
# --- 配置 ---
WIDTH = 800
PAD = 24
LINE_SPACE = 10
FONT_CANDIDATES = [
"/System/Library/Fonts/STHeiti Medium.ttc",
"/System/Library/Fonts/Hiragino Sans GB.ttc",
"/System/Library/Fonts/Supplemental/Arial Unicode.ttf",
"/usr/share/fonts/truetype/noto/NotoSansCJK-Regular.ttc",
"/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc",
]
def find_font():
    for path in FONT_CANDIDATES:
        if os.path.exists(path):
            return path
    return None


def load_font(path, size):
    if path:
        try:
            return ImageFont.truetype(path, size, index=0)
        except Exception:
            pass
    return ImageFont.load_default()


def wrap_text(text, font, max_width, draw):
    lines = []
    for paragraph in text.split("\n"):
        paragraph = paragraph.strip()
        if not paragraph:
            continue
        current = ""
        for char in paragraph:
            test = current + char
            bbox = draw.textbbox((0, 0), test, font=font)
            if bbox[2] - bbox[0] > max_width:
                if current:
                    lines.append(current)
                current = char
            else:
                current = test
        if current:
            lines.append(current)
    return lines


def draw_lines(draw, lines, font, x, y, fill):
    for line in lines:
        draw.text((x, y), line, font=font, fill=fill)
        bbox = draw.textbbox((0, 0), line, font=font)
        y += (bbox[3] - bbox[1]) + LINE_SPACE
    return y


def measure_lines(lines, font, draw):
    h = 0
    for line in lines:
        bbox = draw.textbbox((0, 0), line if line else " ", font=font)
        h += (bbox[3] - bbox[1]) + LINE_SPACE
    return h


def make_text_block(title, author_line, desc, font_path, width):
    """白底黑字文字块,模仿小红书原样"""
    title_font = load_font(font_path, 32)
    author_font = load_font(font_path, 20)
    body_font = load_font(font_path, 24)
    tmp = Image.new("RGB", (width, 10))
    draw = ImageDraw.Draw(tmp)
    max_w = width - PAD * 2
    title_lines = wrap_text(title, title_font, max_w, draw)
    author_lines = [author_line] if author_line else []
    desc_lines = wrap_text(desc, body_font, max_w, draw) if desc else []
    # 计算高度
    total_h = PAD
    total_h += measure_lines(title_lines, title_font, draw)
    if author_lines:
        total_h += 4
        total_h += measure_lines(author_lines, author_font, draw)
    if desc_lines:
        total_h += 8
        total_h += measure_lines(desc_lines, body_font, draw)
    total_h += PAD
    # 绘制
    block = Image.new("RGB", (width, total_h), (255, 255, 255))
    draw = ImageDraw.Draw(block)
    y = PAD
    y = draw_lines(draw, title_lines, title_font, PAD, y, (33, 33, 33))
    if author_lines:
        y += 4
        y = draw_lines(draw, author_lines, author_font, PAD, y, (153, 153, 153))
    if desc_lines:
        y += 8
        y = draw_lines(draw, desc_lines, body_font, PAD, y, (66, 66, 66))
    return block


def make_image_caption(text, font_path, width):
    """图片上方的小说明文字块"""
    font = load_font(font_path, 20)
    tmp = Image.new("RGB", (width, 10))
    draw = ImageDraw.Draw(tmp)
    lines = wrap_text(text, font, width - PAD * 2, draw)
    h = PAD + measure_lines(lines, font, draw) + 8
    block = Image.new("RGB", (width, h), (245, 245, 245))
    draw = ImageDraw.Draw(block)
    draw_lines(draw, lines, font, PAD, PAD // 2, (100, 100, 100))
    return block


def download_image(url, tmpdir, idx):
    """下载图片到临时目录"""
    ext = ".webp"
    path = os.path.join(tmpdir, f"img_{idx}{ext}")
    try:
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req, timeout=30) as resp:
            with open(path, "wb") as f:
                f.write(resp.read())
        return path
    except Exception as e:
        print(f" 警告: 下载失败 {url[:60]}... ({e})", file=sys.stderr)
        return None


def main():
    parser = argparse.ArgumentParser(description="小红书帖子长图导出")
    parser.add_argument("--posts", help="Posts JSON string")
    parser.add_argument("--posts-file", help="Posts JSON file path")
    parser.add_argument("--output", "-o", required=True, help="Output JPG path")
    parser.add_argument("--width", type=int, default=800, help="Image width (default 800)")
    parser.add_argument("--quality", type=int, default=88, help="JPEG quality (default 88)")
    args = parser.parse_args()
    global WIDTH
    WIDTH = args.width
    # 读取 posts 数据
    if args.posts:
        posts = json.loads(args.posts)
    elif args.posts_file:
        with open(args.posts_file, "r") as f:
            posts = json.load(f)
    else:
        print("错误: 需要 --posts 或 --posts-file", file=sys.stderr)
        sys.exit(1)
    font_path = find_font()
    if not font_path:
        print("警告: 未找到中文字体,文字可能显示异常", file=sys.stderr)
    sep = Image.new("RGB", (WIDTH, 3), (230, 230, 230))
    pieces = []
    with tempfile.TemporaryDirectory() as tmpdir:
        img_counter = 0
        for pi, post in enumerate(posts):
            title = post.get("title", "")
            author = post.get("author", "")
            stats = post.get("stats", "")
            desc = post.get("desc", "")
            images = post.get("images", [])
            per_image_text = post.get("per_image_text", {})
            # 作者行
            author_line = author
            if stats:
                author_line = f"{author} · {stats}" if author else stats
            # 主文字块
            text_block = make_text_block(title, author_line, desc, font_path, WIDTH)
            pieces.append(text_block)
            # 图片
            for i, url in enumerate(images):
                # 是否有针对这张图的说明
                img_key = str(i)
                if img_key in per_image_text:
                    caption_block = make_image_caption(per_image_text[img_key], font_path, WIDTH)
                    pieces.append(caption_block)
                img_path = download_image(url, tmpdir, img_counter)
                img_counter += 1
                if img_path:
                    try:
                        im = Image.open(img_path).convert("RGB")
                        ratio = WIDTH / im.width
                        im = im.resize((WIDTH, int(im.height * ratio)), Image.LANCZOS)
                        pieces.append(im)
                    except Exception as e:
                        print(f" 警告: 图片处理失败 ({e})", file=sys.stderr)
            # 帖子间分隔线
            if pi < len(posts) - 1:
                pieces.append(sep)
    if not pieces:
        print("错误: 没有内容可拼接", file=sys.stderr)
        sys.exit(1)
    total_h = sum(p.height for p in pieces)
    long_img = Image.new("RGB", (WIDTH, total_h), (255, 255, 255))
    y = 0
    for p in pieces:
        long_img.paste(p, (0, y))
        y += p.height
    long_img.save(args.output, "JPEG", quality=args.quality)
    print(f"完成: {args.output} ({WIDTH}x{total_h})")


if __name__ == "__main__":
    main()


@@ -0,0 +1,20 @@
#!/bin/bash
# 小红书帖子导出长图
#
# 用法:
# ./export-long-image.sh --posts-file posts.json -o output.jpg
# ./export-long-image.sh --posts '<json>' -o output.jpg
#
# posts.json 示例:
# [
# {
# "title": "帖子标题",
# "author": "作者",
# "stats": "1.3万赞 100收藏",
# "desc": "正文摘要",
# "images": ["https://...webp", "https://...webp"]
# }
# ]
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
python3 "$SCRIPT_DIR/export-long-image.py" "$@"


@@ -0,0 +1,67 @@
#!/bin/bash
# 检查小红书 MCP 依赖是否已安装
set -e
echo "检查小红书 MCP 依赖..."
echo ""
XHS_MCP="$HOME/.local/bin/xiaohongshu-mcp"
XHS_LOGIN="$HOME/.local/bin/xiaohongshu-login"
check_binary() {
local name="$1"
local path="$2"
if [ -f "$path" ]; then
echo "$name: $path"
return 0
else
echo "$name: 未找到"
return 1
fi
}
MISSING=0
check_binary "xiaohongshu-mcp" "$XHS_MCP" || MISSING=1
check_binary "xiaohongshu-login" "$XHS_LOGIN" || MISSING=1
echo ""
# 检查 jq(必需,用于安全构建 JSON)
if command -v jq &> /dev/null; then
echo "✅ jq: $(which jq)"
else
echo "❌ jq: 未安装(必需,用于安全构建 JSON)"
echo " 安装: apt-get install jq / brew install jq"
MISSING=1
fi
# 检查 Python3(track-topic.py 需要)
if command -v python3 &> /dev/null; then
echo "✅ python3: $(python3 --version)"
else
echo "⚠️ python3: 未安装(热点跟踪功能需要)"
fi
echo ""
if [ $MISSING -eq 1 ]; then
echo "=========================================="
echo "缺少必要依赖,请按以下步骤安装:"
echo ""
echo "1. 从 GitHub Releases 下载对应平台的二进制文件:"
echo " https://github.com/xpzouying/xiaohongshu-mcp/releases"
echo ""
echo "2. 解压并安装到 ~/.local/bin/"
echo " mkdir -p ~/.local/bin"
echo " mv xiaohongshu-mcp-linux-amd64 ~/.local/bin/xiaohongshu-mcp"
echo " mv xiaohongshu-login-linux-amd64 ~/.local/bin/xiaohongshu-login"
echo " chmod +x ~/.local/bin/xiaohongshu-*"
echo ""
echo "3. 确保 ~/.local/bin 在 PATH 中(可选)"
echo "=========================================="
exit 1
else
echo "✅ 所有依赖已就绪"
fi


@@ -0,0 +1,10 @@
#!/bin/bash
# 启动小红书登录工具
XHS_LOGIN="$HOME/.local/bin/xiaohongshu-login"
echo "启动小红书登录工具..."
echo "注意:需要桌面环境或 X11 转发"
echo ""
if [ ! -x "$XHS_LOGIN" ]; then
echo "错误: 未找到 $XHS_LOGIN,请先运行 install-check.sh"
exit 1
fi
"$XHS_LOGIN"


@@ -0,0 +1,82 @@
#!/bin/bash
# 通用 MCP 调用脚本(支持 Streamable HTTP + Session ID)
set -e
TOOL_NAME="$1"
TOOL_ARGS="$2"
MCP_URL="${MCP_URL:-http://localhost:18060/mcp}"
export no_proxy="${no_proxy:+$no_proxy,}localhost,127.0.0.1"
# 检查 jq 依赖
if ! command -v jq &> /dev/null; then
echo "错误: 需要安装 jq (apt-get install jq / brew install jq)"
exit 1
fi
if [ -z "$TOOL_NAME" ]; then
echo "用法: $0 <tool_name> [json_args]"
echo ""
echo "可用工具:"
echo " check_login_status - 检查登录状态"
echo " search_feeds - 搜索内容 {\"keyword\": \"...\", \"filters\": {\"sort_by\": \"最新\"}}"
echo " list_feeds - 获取首页推荐"
echo " get_feed_detail - 获取帖子详情 {\"feed_id\": \"...\", \"xsec_token\": \"...\"}"
echo " post_comment_to_feed - 发表评论 {\"feed_id\": \"...\", \"xsec_token\": \"...\", \"content\": \"...\"}"
echo " reply_comment_in_feed - 回复评论 {\"feed_id\": \"...\", \"xsec_token\": \"...\", \"content\": \"...\", \"comment_id\": \"...\", \"user_id\": \"...\"}"
echo " user_profile - 获取用户主页 {\"user_id\": \"...\", \"xsec_token\": \"...\"}"
echo " like_feed - 点赞 {\"feed_id\": \"...\", \"xsec_token\": \"...\"} 取消: {\"unlike\": true}"
echo " favorite_feed - 收藏 {\"feed_id\": \"...\", \"xsec_token\": \"...\"} 取消: {\"unfavorite\": true}"
echo " get_login_qrcode - 获取登录二维码"
echo " delete_cookies - 删除 cookies,重置登录状态"
echo " publish_content - 发布图文"
echo " publish_with_video - 发布视频"
exit 1
fi
# 校验 tool name(只允许字母数字和下划线)
if [[ ! "$TOOL_NAME" =~ ^[a-zA-Z_][a-zA-Z0-9_]*$ ]]; then
echo "错误: 无效的工具名: $TOOL_NAME"
exit 1
fi
[ -z "$TOOL_ARGS" ] && TOOL_ARGS="{}"
# 校验 TOOL_ARGS 是合法 JSON
if ! echo "$TOOL_ARGS" | jq empty 2>/dev/null; then
echo "错误: 参数不是合法的 JSON: $TOOL_ARGS"
exit 1
fi
# 1. Initialize 并获取 Session ID
INIT_RESPONSE=$(curl --noproxy '*' -s -i -X POST "$MCP_URL" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"openclaw","version":"1.0"}}}')
SESSION_ID=$(echo "$INIT_RESPONSE" | grep -i "Mcp-Session-Id" | awk '{print $2}' | tr -d '\r\n')
if [ -z "$SESSION_ID" ]; then
echo "错误: 无法获取 MCP Session ID"
echo "请确保 MCP 服务正在运行: ./start-mcp.sh"
exit 1
fi
# 2. Initialized notification
curl --noproxy '*' -s -X POST "$MCP_URL" \
-H "Content-Type: application/json" \
-H "Mcp-Session-Id: $SESSION_ID" \
-d '{"jsonrpc":"2.0","method":"notifications/initialized"}' > /dev/null
# 3. Call tool — 使用 jq 安全构建 JSON避免 shell 注入
CALL_PAYLOAD=$(jq -n \
--arg name "$TOOL_NAME" \
--argjson args "$TOOL_ARGS" \
'{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":$name,"arguments":$args}}')
RESULT=$(curl --noproxy '*' -s --max-time 120 -X POST "$MCP_URL" \
-H "Content-Type: application/json" \
-H "Mcp-Session-Id: $SESSION_ID" \
-d "$CALL_PAYLOAD")
# 输出结果
echo "$RESULT" | jq .


@@ -0,0 +1,17 @@
#!/bin/bash
# Fetch the detail of a Xiaohongshu post
NOTE_ID="$1"
XSEC_TOKEN="$2"
if [ -z "$NOTE_ID" ] || [ -z "$XSEC_TOKEN" ]; then
  echo "Usage: $0 <note_id> <xsec_token>"
  echo ""
  echo "note_id and xsec_token can be taken from search or recommendation results"
  exit 1
fi
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ARGS=$(jq -n --arg fid "$NOTE_ID" --arg tok "$XSEC_TOKEN" \
  '{"feed_id":$fid,"xsec_token":$tok}')
"$SCRIPT_DIR/mcp-call.sh" get_feed_detail "$ARGS"


@@ -0,0 +1,5 @@
#!/bin/bash
# Fetch the Xiaohongshu homepage recommendation list
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
"$SCRIPT_DIR/mcp-call.sh" list_feeds


@@ -0,0 +1,13 @@
#!/bin/bash
# Search Xiaohongshu content
KEYWORD="$1"
if [ -z "$KEYWORD" ]; then
  echo "Usage: $0 <keyword>"
  exit 1
fi
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ARGS=$(jq -n --arg kw "$KEYWORD" '{"keyword":$kw}')
"$SCRIPT_DIR/mcp-call.sh" search_feeds "$ARGS"


@@ -0,0 +1,166 @@
#!/bin/bash
# Start the Xiaohongshu MCP service
XHS_MCP="$HOME/.local/bin/xiaohongshu-mcp"
PID_FILE="$HOME/.xiaohongshu/mcp.pid"
LOG_FILE="$HOME/.xiaohongshu/mcp.log"
XVFB_PID_FILE="$HOME/.xiaohongshu/xvfb.pid"
XVFB_DISPLAY_FILE="$HOME/.xiaohongshu/xvfb.display"

# Cookies path (overridable via environment variable)
# XHS_COOKIES_SRC: source cookies file (for remote-server setups)
# Falls back to ~/cookies.json and ~/.xiaohongshu/cookies.json
COOKIES_DST="/tmp/cookies.json"

mkdir -p "$HOME/.xiaohongshu"

# Detect whether a display (desktop environment) is available
has_display() {
  [ -n "$DISPLAY" ] && xdpyinfo >/dev/null 2>&1
}

# Start Xvfb automatically when no desktop environment is present
ensure_display() {
  if has_display; then
    return 0
  fi
  # An Xvfb instance is already running
  if [ -f "$XVFB_PID_FILE" ]; then
    local pid
    pid=$(cat "$XVFB_PID_FILE")
    if kill -0 "$pid" 2>/dev/null; then
      export DISPLAY=$(cat "$XVFB_DISPLAY_FILE" 2>/dev/null || echo ":99")
      echo "Reusing existing Xvfb (PID: $pid, DISPLAY=$DISPLAY)"
      return 0
    fi
  fi
  # Check whether Xvfb is installed
  if ! command -v Xvfb >/dev/null 2>&1; then
    echo "⚠ No desktop environment detected and Xvfb is not installed."
    echo "  Install it with: sudo apt-get install -y xvfb"
    echo "  Then re-run this script; configuration is automatic."
    exit 1
  fi
  echo "No desktop environment detected; starting a virtual Xvfb display..."
  # Pick a free display number automatically (:99-:109)
  local display_num=""
  local d
  for d in $(seq 99 109); do
    if [ ! -e "/tmp/.X${d}-lock" ]; then
      display_num=$d
      break
    fi
    # Lock file exists but the process is dead: clean up and reuse
    local lock_pid
    lock_pid=$(cat "/tmp/.X${d}-lock" 2>/dev/null | tr -d ' ')
    if [ -n "$lock_pid" ] && ! kill -0 "$lock_pid" 2>/dev/null; then
      rm -f "/tmp/.X${d}-lock" "/tmp/.X11-unix/X${d}" 2>/dev/null
      if [ ! -e "/tmp/.X${d}-lock" ]; then
        display_num=$d
        break
      fi
    fi
  done
  if [ -z "$display_num" ]; then
    echo "✗ No free display number found (:99-:109 are all taken)"
    exit 1
  fi
  # -ac: disable access control so chromium can reach the virtual display
  # (headless automation only)
  Xvfb ":${display_num}" -screen 0 1024x768x24 -ac >/dev/null 2>&1 &
  echo $! > "$XVFB_PID_FILE"
  echo ":${display_num}" > "$XVFB_DISPLAY_FILE"
  export DISPLAY=":${display_num}"
  sleep 1
  if kill -0 "$(cat "$XVFB_PID_FILE")" 2>/dev/null; then
    echo "✓ Xvfb started (DISPLAY=:${display_num})"
  else
    echo "✗ Failed to start Xvfb"
    exit 1
  fi
}

# Sync cookies from several possible sources
sync_cookies() {
  local src=""
  # An explicitly configured path takes precedence
  if [ -n "$XHS_COOKIES_SRC" ] && [ -f "$XHS_COOKIES_SRC" ]; then
    src="$XHS_COOKIES_SRC"
  elif [ -f "$HOME/cookies.json" ]; then
    src="$HOME/cookies.json"
  elif [ -f "$HOME/.xiaohongshu/cookies.json" ]; then
    src="$HOME/.xiaohongshu/cookies.json"
  fi
  if [ -n "$src" ]; then
    if [ ! -f "$COOKIES_DST" ] || [ "$src" -nt "$COOKIES_DST" ]; then
      install -m 600 "$src" "$COOKIES_DST"
      echo "Cookies synced: $src -> $COOKIES_DST"
    fi
  else
    # Ensure the existing cookies file has safe permissions
    [ -f "$COOKIES_DST" ] && chmod 600 "$COOKIES_DST"
  fi
}

sync_cookies
ensure_display

# Bail out if the service is already running
if [ -f "$PID_FILE" ]; then
  PID=$(cat "$PID_FILE")
  if kill -0 "$PID" 2>/dev/null; then
    echo "MCP service is already running (PID: $PID)"
    echo "To restart, run stop-mcp.sh first"
    exit 0
  fi
fi

# Parse arguments
HEADLESS="true"
PORT="${XHS_MCP_PORT:-18060}"
for arg in "$@"; do
  case $arg in
    --headless=false)
      HEADLESS="false"
      ;;
    --port=*)
      PORT="${arg#*=}"
      ;;
  esac
done

# Validate the port number
if [[ ! "$PORT" =~ ^[0-9]+$ ]]; then
  echo "Error: invalid port number: $PORT"
  exit 1
fi

# Start the service
echo "Starting the Xiaohongshu MCP service..."
if [ "$HEADLESS" = "false" ]; then
  nohup "$XHS_MCP" -port ":${PORT}" -headless=false > "$LOG_FILE" 2>&1 &
else
  nohup "$XHS_MCP" -port ":${PORT}" > "$LOG_FILE" 2>&1 &
fi
echo $! > "$PID_FILE"
sleep 2

# Verify startup
if kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
  echo "✓ MCP service started (PID: $(cat "$PID_FILE"))"
  echo "  Endpoint: http://localhost:${PORT}/mcp"
  echo "  Log: $LOG_FILE"
else
  echo "✗ Startup failed; check the log: $LOG_FILE"
  cat "$LOG_FILE"
  exit 1
fi


@@ -0,0 +1,5 @@
#!/bin/bash
# Check Xiaohongshu login status
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
"$SCRIPT_DIR/mcp-call.sh" check_login_status


@@ -0,0 +1,29 @@
#!/bin/bash
# Stop the Xiaohongshu MCP service
PID_FILE="$HOME/.xiaohongshu/mcp.pid"
XVFB_PID_FILE="$HOME/.xiaohongshu/xvfb.pid"

if [ -f "$PID_FILE" ]; then
  PID=$(cat "$PID_FILE")
  if kill -0 "$PID" 2>/dev/null; then
    kill "$PID"
    rm -f "$PID_FILE"
    echo "✓ MCP service stopped"
  else
    echo "Process not found; cleaning up the PID file"
    rm -f "$PID_FILE"
  fi
else
  echo "MCP service is not running"
fi

# Clean up Xvfb
if [ -f "$XVFB_PID_FILE" ]; then
  XVFB_PID=$(cat "$XVFB_PID_FILE")
  if kill -0 "$XVFB_PID" 2>/dev/null; then
    kill "$XVFB_PID"
    echo "✓ Xvfb stopped"
  fi
  rm -f "$XVFB_PID_FILE" "$HOME/.xiaohongshu/xvfb.display"
fi


@@ -0,0 +1,313 @@
#!/usr/bin/env python3
"""
Xiaohongshu trend tracking tool

Usage:
    python track-topic.py <topic> [--limit N] [--feishu] [--output FILE]

Examples:
    python track-topic.py "DeepSeek" --limit 5 --feishu
    python track-topic.py "春节旅游" --limit 10 --output report.md
"""
import argparse
import json
import subprocess
import sys
import os
import tempfile
from datetime import datetime
from pathlib import Path

# Resolve the script directory
SCRIPT_DIR = Path(__file__).parent.resolve()
XHS_SCRIPTS = SCRIPT_DIR  # this file lives in xiaohongshu/scripts

# Feishu skill path (several possible locations are supported)
def find_feishu_scripts() -> Path:
    """Locate the scripts directory of the feishu-docs skill"""
    # Only search within known skill roots
    allowed_roots = [
        SCRIPT_DIR.parent.parent,  # sibling skill directory
        Path.home() / ".openclaw" / "workspace" / "skills",
        Path.home() / ".claude" / "skills",
    ]
    for root in allowed_roots:
        candidate = (root / "feishu-docs" / "scripts").resolve()
        # Verify the resolved path still sits under an allowed root
        # (guards against symlink escapes)
        if candidate.is_dir() and any(
            str(candidate).startswith(str(r.resolve()) + os.sep) for r in allowed_roots
        ):
            return candidate
    return allowed_roots[0] / "feishu-docs" / "scripts"  # default path (may not exist)

FEISHU_SCRIPTS = find_feishu_scripts()

def call_xhs_mcp(tool: str, args: dict) -> dict:
    """Invoke a Xiaohongshu MCP tool"""
    mcp_call = XHS_SCRIPTS / "mcp-call.sh"
    if not mcp_call.exists():
        print(f"❌ xiaohongshu skill not found: {mcp_call}", file=sys.stderr)
        sys.exit(1)
    result = subprocess.run(
        [str(mcp_call), tool, json.dumps(args)],
        capture_output=True, text=True, timeout=120
    )
    if result.returncode != 0:
        print(f"❌ MCP call failed: {result.stderr}", file=sys.stderr)
        return {}
    try:
        response = json.loads(result.stdout)
        if "result" in response and "content" in response["result"]:
            text = response["result"]["content"][0].get("text", "{}")
            return json.loads(text) if text else {}
        elif "error" in response:
            print(f"⚠️ MCP error: {response['error'].get('message', 'Unknown')}", file=sys.stderr)
            return {}
        return response
    except json.JSONDecodeError:
        return {}

def search_feeds(keyword: str) -> list:
    """Search Xiaohongshu content"""
    print(f"🔍 Searching: {keyword}")
    result = call_xhs_mcp("search_feeds", {"keyword": keyword})
    feeds = result.get("feeds", [])
    # Filter out hot_query entries
    return [f for f in feeds if f.get("modelType") == "note"]

def get_feed_detail(feed_id: str, xsec_token: str, load_comments: bool = True) -> dict:
    """Fetch post detail"""
    args = {
        "feed_id": feed_id,
        "xsec_token": xsec_token,
        "load_all_comments": load_comments
    }
    result = call_xhs_mcp("get_feed_detail", args)
    return result.get("data", {})

def format_timestamp(ts: int) -> str:
    """Format a millisecond timestamp"""
    if not ts:
        return "unknown"
    try:
        dt = datetime.fromtimestamp(ts / 1000)
        return dt.strftime("%Y-%m-%d %H:%M")
    except (TypeError, ValueError, OSError, OverflowError):
        return "unknown"

def get_comments_list(post: dict) -> list:
    """Safely extract the comment list"""
    comments = post.get("comments", {})
    if isinstance(comments, dict):
        return comments.get("list", [])
    elif isinstance(comments, list):
        return comments
    return []

def generate_report(keyword: str, posts: list) -> str:
    """Generate a Markdown report"""
    now = datetime.now().strftime("%Y-%m-%d %H:%M")
    report = f"""# 🔥 Xiaohongshu Trend Tracking Report

**Topic:** {keyword}
**Generated:** {now}
**Posts collected:** {len(posts)}

---

## 📊 Overview

"""
    # Aggregate statistics
    total_likes = sum(int(p.get("note", {}).get("interactInfo", {}).get("likedCount", 0) or 0) for p in posts)
    total_comments = sum(len(get_comments_list(p)) for p in posts)
    report += f"""| Metric | Value |
|------|------|
| Total posts | {len(posts)} |
| Total likes | {total_likes:,} |
| Total comments | {total_comments} |

---

## 📝 Top Posts

"""
    for i, post in enumerate(posts, 1):
        note = post.get("note", {})
        comments = get_comments_list(post)
        title = note.get("title", "Untitled")
        desc = note.get("desc", "")
        user = note.get("user", {}).get("nickname", "anonymous")
        time_str = format_timestamp(note.get("time"))
        interact = note.get("interactInfo", {})
        likes = interact.get("likedCount", "0")
        collected = interact.get("collectedCount", "0")
        report += f"""### {i}. {title}

**Author:** {user}
**Time:** {time_str}
**Engagement:** ❤️ {likes} likes · ⭐ {collected} favorites

**Body:**
> {desc[:500]}{"..." if len(desc) > 500 else ""}

"""
        if comments:
            report += f"""**Top comments ({len(comments)} total):**

"""
            for comment in comments[:5]:
                c_user = comment.get("userInfo", {}).get("nickname", "anonymous")
                c_content = comment.get("content", "")
                c_likes = comment.get("likeCount", 0)
                report += f"- **{c_user}** ({c_likes} likes): {c_content[:100]}\n"
            if len(comments) > 5:
                report += f"- *... {len(comments) - 5} more comments*\n"
        report += "\n---\n\n"

    # Comment-section highlights
    report += """## 💬 Comment-Section Keywords

"""
    # Naive keyword extraction (frequency counting)
    all_comments = []
    for post in posts:
        for c in get_comments_list(post):
            all_comments.append(c.get("content", ""))
    if all_comments:
        report += f"{len(all_comments)} comments in total; main discussion threads:\n\n"
        # Deeper NLP analysis could go here; simplified for now
        report += "- The topic draws noticeable user attention\n"
        report += "- The comment sections are active\n"
    else:
        report += "Not enough comment data to analyze\n"

    report += """
---

## 📈 Trend Analysis

Based on the posts and comments above, the topic shows the following traits on Xiaohongshu:

1. **Heat index**: """ + ("🔥🔥🔥 high" if total_likes > 1000 else "🔥🔥 medium" if total_likes > 100 else "🔥 low") + """
2. **Engagement**: """ + ("active" if total_comments > 50 else "moderate" if total_comments > 10 else "low") + """
3. **Content type**: mostly image-and-text notes

---

*Report generated automatically by the OpenClaw Xiaohongshu trend tracking tool*
"""
    return report

def export_to_feishu(title: str, content: str) -> str:
    """Export to a Feishu document"""
    import_script = FEISHU_SCRIPTS / "doc-import.sh"
    if not import_script.exists():
        print(f"❌ feishu-docs skill not found: {import_script}", file=sys.stderr)
        return ""
    print("📤 Exporting to Feishu...")
    # Write the report to a private temporary file
    with tempfile.NamedTemporaryFile(
        "w", suffix=".md", delete=False, encoding="utf-8"
    ) as tmp:
        tmp.write(content)
        tmp_file = Path(tmp.name)
    try:
        result = subprocess.run(
            [str(import_script), title, "--file", str(tmp_file)],
            capture_output=True, text=True, timeout=60
        )
    finally:
        tmp_file.unlink(missing_ok=True)
    if result.returncode != 0:
        print(f"⚠️ Feishu export failed: {result.stderr}", file=sys.stderr)
        return ""
    # Print the returned document link
    output = result.stdout
    print(output)
    return output

def main():
    parser = argparse.ArgumentParser(description="Xiaohongshu trend tracking tool")
    parser.add_argument("keyword", help="topic/keyword to track")
    parser.add_argument("--limit", "-n", type=int, default=10, help="number of posts to fetch (default 10)")
    parser.add_argument("--feishu", "-f", action="store_true", help="export to a Feishu document")
    parser.add_argument("--output", "-o", help="output Markdown file path")
    parser.add_argument("--no-comments", action="store_true", help="skip fetching comments")
    args = parser.parse_args()

    # 1. Search for posts
    feeds = search_feeds(args.keyword)
    if not feeds:
        print("❌ No matching posts found")
        sys.exit(1)
    print(f"✅ Found {len(feeds)} posts")

    # 2. Fetch details
    posts = []
    for i, feed in enumerate(feeds[:args.limit]):
        feed_id = feed.get("id")
        xsec_token = feed.get("xsecToken")
        if not feed_id or not xsec_token:
            continue
        title = feed.get("noteCard", {}).get("displayTitle", "")
        print(f"📖 [{i+1}/{min(len(feeds), args.limit)}] Fetching: {title[:30]}...")
        detail = get_feed_detail(feed_id, xsec_token, not args.no_comments)
        if detail:
            posts.append(detail)
    if not posts:
        print("❌ Failed to fetch any post details")
        sys.exit(1)
    print(f"✅ Fetched details for {len(posts)} posts")

    # 3. Generate the report
    print("📝 Generating report...")
    report = generate_report(args.keyword, posts)

    # 4. Output
    if args.output:
        output_path = Path(args.output)
        output_path.write_text(report, encoding="utf-8")
        print(f"✅ Report saved: {output_path}")
    if args.feishu:
        doc_title = f"XHS trend tracking: {args.keyword} ({datetime.now().strftime('%m-%d')})"
        export_to_feishu(doc_title, report)
    if not args.output and not args.feishu:
        # Default: print to stdout
        print("\n" + "="*60 + "\n")
        print(report)
    return report

if __name__ == "__main__":
    main()


@@ -0,0 +1,5 @@
#!/bin/bash
# Xiaohongshu trend tracking tool
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
python3 "$SCRIPT_DIR/track-topic.py" "$@"


@@ -0,0 +1,17 @@
#!/bin/bash
# Fetch a Xiaohongshu user profile
USER_ID="$1"
XSEC_TOKEN="$2"
if [ -z "$USER_ID" ] || [ -z "$XSEC_TOKEN" ]; then
  echo "Usage: $0 <user_id> <xsec_token>"
  echo ""
  echo "user_id and xsec_token can be taken from search or recommendation results"
  exit 1
fi
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ARGS=$(jq -n --arg uid "$USER_ID" --arg tok "$XSEC_TOKEN" \
  '{"user_id":$uid,"xsec_token":$tok}')
"$SCRIPT_DIR/mcp-call.sh" user_profile "$ARGS"


@@ -0,0 +1,98 @@
# XHS-Downloader Companion Tools

Helper scripts for use with [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader): download your Xiaohongshu favorite/liked notes and export them in OpenClaw memory-library format.

## Dependencies

Install XHS-Downloader first:

```bash
git clone https://github.com/JoeanAmier/XHS-Downloader.git
cd XHS-Downloader
pip install -r requirements.txt
```

## Workflow

### 1. Collect favorite/liked links (via the userscript)

Copying links by hand is slow; use the userscript shipped with XHS-Downloader to extract them in bulk:

**Install the script:**

1. Install the [Tampermonkey](https://www.tampermonkey.net/) browser extension
2. Install the userscript: [XHS-Downloader.js](https://raw.githubusercontent.com/JoeanAmier/XHS-Downloader/refs/heads/master/static/XHS-Downloader.js)

**Extract the links:**

1. Open the [Xiaohongshu web app](https://www.xiaohongshu.com) and log in
2. Go to your profile page → the **Favorites** or **Likes** tab
3. Click the Tampermonkey icon and choose:
   - `提取收藏作品链接` (extract favorited-post links)
   - `提取点赞作品链接` (extract liked-post links)
4. The script auto-scrolls the page to load everything
5. When extraction finishes, the links are copied to the clipboard automatically

**Save the links:**

Paste the clipboard contents into a `links.md` file:

```
https://www.xiaohongshu.com/explore/xxx?xsec_token=...
https://www.xiaohongshu.com/explore/yyy?xsec_token=...
```

> **Note**: auto-scrolling is off by default and must be enabled in the script settings. Enabling it may trigger anti-bot measures, so use it in moderation.
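
Before running the batch download, it can help to sanity-check the pasted file. A minimal sketch (the `links.md` name is just the default the downloader script reads; the URL prefix check mirrors the link format shown above):

```python
from pathlib import Path

def load_links(path="links.md"):
    """Read the pasted links, keep only explore URLs, and drop duplicates."""
    seen, links = set(), []
    for token in Path(path).read_text().split():
        if token.startswith("https://www.xiaohongshu.com/explore/") and token not in seen:
            seen.add(token)
            links.append(token)
    return links

if Path("links.md").exists():
    print(f"{len(load_links())} unique note links")
```

Duplicates are common when the userscript is run twice; deduplicating first avoids redundant downloads.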
### 2. Batch download

```bash
# Run inside the XHS-Downloader directory
python batch_download.py links.md
```

The downloaded data is stored in `Volume/Download/ExploreData.db`.
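
To verify what actually landed in the database, you can query it directly. The `explore_data` table and its Chinese column names below are the ones the export scripts in this repo read:

```python
import sqlite3

def count_notes(db_path="Volume/Download/ExploreData.db"):
    """Return the number of recorded notes and the latest publish time."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute("SELECT COUNT(*), MAX(发布时间) FROM explore_data")
        total, newest = cur.fetchone()
        return total, newest
    finally:
        conn.close()
```

If the count is zero, check that `record_data=True` was enabled in the download step.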
### 3. Export the memory library

**Option A: single-file export**

```bash
python export_memory.py
# Produces xhs_memory.md
```

**Option B: multi-file export (recommended for OpenClaw)**

```bash
python export_to_workspace.py
# Writes to ~/.openclaw/workspace/xhs-memory/
```

### 4. Configure OpenClaw memory search

Edit `~/.openclaw/openclaw.json` and add:

```json
{
  "memorySearch": {
    "extraPaths": [
      "~/.openclaw/workspace/xhs-memory"
    ]
  }
}
```

Alternatively, with the single-file export, drop `xhs_memory.md` into the workspace directory.
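
Editing JSON by hand is easy to get wrong; the merge can also be scripted. A sketch of an idempotent helper (the `memorySearch.extraPaths` shape comes from the snippet above; the helper itself is illustrative, not part of OpenClaw):

```python
import json
from pathlib import Path

def add_memory_path(config_path, extra="~/.openclaw/workspace/xhs-memory"):
    """Idempotently add an extraPaths entry to an openclaw.json file."""
    path = Path(config_path).expanduser()
    config = json.loads(path.read_text()) if path.exists() else {}
    paths = config.setdefault("memorySearch", {}).setdefault("extraPaths", [])
    if extra not in paths:  # skip if already configured
        paths.append(extra)
    path.write_text(json.dumps(config, indent=2, ensure_ascii=False) + "\n")
    return config
```

Running it twice leaves the config unchanged, so it is safe to call from setup scripts.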
## Script Reference

| Script | Purpose |
|------|------|
| `batch_download.py` | Batch-download notes and record them in the database |
| `export_memory.py` | Export to a single Markdown file |
| `export_to_workspace.py` | Export to individual files (named by date + title) |

## Credits

- [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader) - GPL-3.0 License

@@ -0,0 +1,48 @@
#!/usr/bin/env python
"""
Batch-download Xiaohongshu notes

Usage:
    python batch_download.py [links_file]

Reads links.md from the current directory by default.
"""
import asyncio
import sys
from pathlib import Path

try:
    from source import XHS
except ImportError:
    print("Error: run this script inside the XHS-Downloader project directory,")
    print("or install it as a dependency: pip install -e /path/to/XHS-Downloader")
    sys.exit(1)

async def main():
    # Read the links file
    links_file = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("links.md")
    if not links_file.exists():
        print(f"Error: links file does not exist: {links_file}")
        print("Usage: python batch_download.py [links_file]")
        sys.exit(1)
    links = links_file.read_text().strip()
    link_count = len([l for l in links.split() if l.startswith("http")])
    print(f"Starting download of {link_count} links...")
    async with XHS(
        work_path="./Volume",
        folder_name="Download",
        record_data=True,      # record post data in the database
        download_record=True,  # skip already-downloaded posts
        author_archive=True,   # one folder per author
    ) as xhs:
        result = await xhs.extract(links, download=True)
    print(f"Done! Processed {len(result)} posts")

if __name__ == "__main__":
    asyncio.run(main())


@@ -0,0 +1,67 @@
#!/usr/bin/env python
"""
Export notes from the XHS-Downloader database to a single Markdown file

Usage:
    python export_memory.py [db_path] [output_file]

Defaults:
    db_path:     Volume/Download/ExploreData.db
    output_file: xhs_memory.md
"""
import sqlite3
import sys
from pathlib import Path
from datetime import datetime

def export_memory(db_path: Path = None, output_file: Path = None):
    db_path = db_path or Path("Volume/Download/ExploreData.db")
    output_file = output_file or Path("xhs_memory.md")
    if not db_path.exists():
        print(f"Error: database does not exist: {db_path}")
        return False
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    # Fetch all posts (the XHS-Downloader schema uses Chinese column names)
    cursor.execute("""
        SELECT 作品标题, 发布时间, 作品链接, 作品描述, 作者昵称, 作品标签
        FROM explore_data
        ORDER BY 发布时间 DESC
    """)
    rows = cursor.fetchall()
    conn.close()
    if not rows:
        print("The database is empty")
        return False
    # Build the Markdown
    output = "# Xiaohongshu Favorites/Likes Memory\n\n"
    output += f"> Exported: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n"
    output += f"> {len(rows)} notes in total\n\n---\n\n"
    for i, (title, time, link, desc, author, tags) in enumerate(rows, 1):
        output += f"## {i}. {title or 'Untitled'}\n\n"
        output += f"- **Author**: {author or 'unknown'}\n"
        output += f"- **Time**: {time or 'unknown'}\n"
        output += f"- **Link**: {link or ''}\n"
        if tags:
            output += f"- **Tags**: {tags}\n"
        output += f"\n### Content\n\n{desc or 'No content'}\n\n---\n\n"
    # Write the file
    output_file.write_text(output, encoding="utf-8")
    print(f"Export complete: {output_file.absolute()}")
    print(f"{len(rows)} notes in total")
    return True

if __name__ == "__main__":
    db_path = Path(sys.argv[1]) if len(sys.argv) > 1 else None
    output_file = Path(sys.argv[2]) if len(sys.argv) > 2 else None
    export_memory(db_path, output_file)


@@ -0,0 +1,101 @@
#!/usr/bin/env python
"""
Export Xiaohongshu notes as individual files into the OpenClaw workspace

Usage:
    python export_to_workspace.py [db_path] [output_dir]

Defaults:
    db_path:    Volume/Download/ExploreData.db
    output_dir: ~/.openclaw/workspace/xhs-memory

The export format mirrors gpt-history: one file per note, named YYYY-MM-<title>.md
"""
import sqlite3
import re
import sys
from pathlib import Path

def sanitize_filename(name: str, max_len: int = 50) -> str:
    """Sanitize a filename by stripping illegal characters"""
    name = re.sub(r'[<>:"/\\|?*\n\r\t]', '', name)
    name = re.sub(r'\s+', '-', name.strip())
    name = re.sub(r'-+', '-', name)
    name = name.strip('-')
    if len(name) > max_len:
        name = name[:max_len].rstrip('-')
    return name or "untitled"

def export_to_workspace(db_path: Path = None, output_dir: Path = None):
    db_path = db_path or Path("Volume/Download/ExploreData.db")
    output_dir = output_dir or Path.home() / ".openclaw/workspace/xhs-memory"
    output_dir.mkdir(parents=True, exist_ok=True)
    if not db_path.exists():
        print(f"Error: database does not exist: {db_path}")
        return False
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    # The XHS-Downloader schema uses Chinese column names
    cursor.execute("""
        SELECT 作品标题, 发布时间, 作品链接, 作品描述, 作者昵称, 作品标签
        FROM explore_data
        ORDER BY 发布时间 DESC
    """)
    rows = cursor.fetchall()
    conn.close()
    if not rows:
        print("The database is empty")
        return False
    count = 0
    for title, time, link, desc, author, tags in rows:
        # Parse the time: 2026-01-25_18:17:43 -> 2026-01
        if time:
            date_prefix = time[:7]  # YYYY-MM
            full_date = time.replace('_', ' ')
        else:
            date_prefix = "unknown"
            full_date = "unknown"
        # Build the filename
        safe_title = sanitize_filename(title or "untitled")
        filename = f"{date_prefix}-{safe_title}.md"
        filepath = output_dir / filename
        # Avoid filename collisions
        counter = 1
        while filepath.exists():
            filename = f"{date_prefix}-{safe_title}-{counter}.md"
            filepath = output_dir / filename
            counter += 1
        # Build the content
        content = f"# {title or 'Untitled'}\n\n"
        content += "**Source**: Xiaohongshu favorites/likes\n\n"
        content += f"**Date**: {full_date}\n\n"
        content += f"**Author**: {author or 'unknown'}\n\n"
        content += f"**Link**: {link or ''}\n\n"
        if tags:
            content += f"**Tags**: {tags}\n\n"
        content += "---\n\n"
        content += "## Content\n\n"
        content += f"{desc or 'No content'}\n"
        filepath.write_text(content, encoding="utf-8")
        count += 1
    print(f"Export complete: {output_dir}")
    print(f"Generated {count} files")
    return True

if __name__ == "__main__":
    db_path = Path(sys.argv[1]) if len(sys.argv) > 1 else None
    output_dir = Path(sys.argv[2]) if len(sys.argv) > 2 else None
    export_to_workspace(db_path, output_dir)