feat: update AI initialization files and skills
.agents/skills/xiaohongshu-note-analyzer/SKILL.md (new file, 190 lines)
@@ -0,0 +1,190 @@
---
name: xiaohongshu-note-analyzer
description: 全面分析小红书笔记的内容质量、关键词优化、标题吸引力、敏感内容风险、商业化程度、互动潜力等。适用于发布前审核、内容优化建议、提升笔记曝光率。触发词包括"分析小红书笔记"、"小红书内容审核"、"笔记优化"、"XHS分析",或上传小红书笔记内容请求分析。
---

# 小红书笔记分析器 (XiaoHongShu Note Analyzer)

对小红书笔记进行全方位分析,提供优化建议,提升内容质量和曝光率。

## 分析维度

1. **关键词分析** — 搜索热度、关键词布局、标签优化
2. **标题/首段吸引力** — 爆款标题元素、首图文案
3. **敏感内容风险** — 违规词检测、限流风险评估
4. **商业化程度** — 软广硬广识别、自然度评分
5. **互动触发潜力** — 讨论点、分享动机、收藏价值
6. **内容结构** — 排版、emoji使用、段落节奏

## 分析流程

```
1. 提取笔记内容 → 标题、正文、标签、图片描述
2. 关键词分析 → 核心词、长尾词、布局检查
3. 敏感词扫描 → 违规风险、限流风险
4. 商业化评估 → 广告痕迹、自然度
5. 互动潜力评估 → 讨论点、情感共鸣
6. 生成优化建议 → 具体修改方案
```
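上面的六步流程可以用一个极简的 Python 草图表示(`Note`、`analyze_note` 等命名均为示意,并非本仓库的实际实现):

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    title: str
    body: str
    tags: list[str] = field(default_factory=list)

def analyze_note(note: Note) -> dict:
    """按分析流程依次评估,返回报告字典(各步骤均为极简示意)。"""
    report = {}
    # 1-2. 关键词:此处仅以去重排序后的标签近似
    report["keywords"] = sorted({t for t in note.tags if t})
    # 3. 敏感词扫描:仅示意,真实词库见 references/sensitive-words.md
    report["risk"] = "high" if "微信" in note.body else "safe"
    # 5. 互动潜力:结尾是否以提问收束
    report["has_question"] = note.body.rstrip().endswith(("?", "?"))
    return report
```

真实实现还需要接入完整词库与评分规则,这里只展示流程的骨架。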

## 1. 关键词分析

### 检查要点

| 维度 | 优秀 | 待改进 |
|------|------|--------|
| 核心关键词 | 标题+首段+正文+标签都包含 | 仅出现1-2处 |
| 长尾关键词 | 3-5个相关长尾词自然分布 | 无长尾词或堆砌 |
| 标签数量 | 5-10个相关标签 | <3个或>15个 |
| 关键词密度 | 2-3%自然出现 | <1%或>5%堆砌 |

### 关键词布局公式

```
标题: 必含核心关键词 + 吸引词
首段(前50字): 核心关键词 + 痛点/好奇点
正文: 长尾关键词自然分布
标签: #核心词 #长尾词 #场景词 #人群词
```

详见 `references/keyword-strategy.md`

## 2. 标题/首段吸引力

### 爆款标题公式

| 类型 | 公式 | 示例 |
|------|------|------|
| 数字型 | 数字+关键词+结果 | "5个技巧让你月瘦10斤" |
| 痛点型 | 痛点+解决方案 | "毛孔粗大?这个方法亲测有效" |
| 好奇型 | 悬念+关键词 | "闺蜜问我怎么突然变白的..." |
| 对比型 | Before/After | "用了这个精华,同事都问我怎么了" |
| 权威型 | 身份+干货 | "皮肤科医生自用的5款防晒" |
| 情绪型 | 强烈情绪词 | "后悔没早点知道!这个神器太绝了" |

### 首段黄金50字

首段必须包含:
- ✅ 核心关键词
- ✅ 痛点/需求点
- ✅ 吸引继续阅读的钩子
- ✅ 与封面图呼应

详见 `references/title-formulas.md`

## 3. 敏感内容风险评估

### 风险等级

| 等级 | 说明 | 后果 |
|------|------|------|
| 🔴 高危 | 明确违规词 | 删帖/封号 |
| 🟠 中危 | 灰色地带词 | 限流/不推荐 |
| 🟡 低危 | 可能触发审核 | 延迟发布 |
| 🟢 安全 | 无敏感内容 | 正常推荐 |

### 常见敏感类别

1. **医疗健康类** — 疾病名称、药品、治疗效果承诺
2. **金融理财类** — 收益承诺、投资建议、借贷
3. **政治敏感类** — 时政、领导人、敏感事件
4. **虚假宣传类** — 最、第一、100%、绝对
5. **引流违规类** — 微信号、外链、二维码暗示
6. **低俗擦边类** — 性暗示、身材过度暴露

详见 `references/sensitive-words.md`

## 4. 商业化程度评估

### 自然度评分标准

| 分数 | 描述 | 特征 |
|------|------|------|
| 9-10 | 纯分享 | 无品牌露出,真实体验 |
| 7-8 | 软植入 | 自然提及品牌,不刻意 |
| 5-6 | 明显软广 | 品牌多次出现,有推荐意图 |
| 3-4 | 硬广 | 明显推销,价格引导 |
| 1-2 | 纯广告 | 通篇产品介绍,无真实体验 |

### 降低商业感技巧

- ✅ 先讲痛点/故事,后引出产品
- ✅ 提及缺点(真实感)
- ✅ 对比其他产品(客观感)
- ✅ 强调个人体验而非产品功能
- ❌ 避免价格、购买链接、促销信息
- ❌ 避免品牌名出现在标题

## 5. 互动触发潜力

### 讨论触发点

| 类型 | 方法 | 示例 |
|------|------|------|
| 提问式 | 结尾抛出问题 | "你们觉得哪个颜色更好看?" |
| 争议式 | 轻度争议观点 | "我觉得XX比XX好用,有人同意吗?" |
| 求助式 | 请求建议 | "姐妹们帮我看看选哪个!" |
| 共鸣式 | 引发情感共鸣 | "有没有和我一样的..." |
| 抽奖式 | 互动福利 | "评论区抽3位送同款" |

### 分享动机触发

用户分享笔记的原因:
1. **实用价值** — 干货教程、省钱攻略
2. **社交货币** — 显得有品味/见识
3. **情感共鸣** — "说出了我的心声"
4. **收藏备用** — 清单、合集、测评

### 收藏价值评估

高收藏内容特征:
- ✅ 清单/合集形式
- ✅ 步骤教程
- ✅ 对比测评
- ✅ 省钱/避坑指南
- ✅ 信息密度高

## 输出格式

```markdown
# 小红书笔记分析报告

## 📊 综合评分: X/10

## 1️⃣ 关键词分析
- **核心关键词**: [识别的关键词]
- **关键词布局**: ✅/❌ [评价]
- **标签优化**: [建议]

## 2️⃣ 标题/首段评估
- **标题类型**: [数字型/痛点型/...]
- **吸引力评分**: X/10
- **优化建议**: [具体建议]

## 3️⃣ 敏感内容风险
- **风险等级**: 🟢/🟡/🟠/🔴
- **检测到的敏感词**: [列表]
- **修改建议**: [具体建议]

## 4️⃣ 商业化程度
- **自然度评分**: X/10
- **商业痕迹**: [分析]
- **降低商业感建议**: [具体建议]

## 5️⃣ 互动潜力
- **讨论触发点**: ✅/❌
- **分享动机**: [分析]
- **收藏价值**: X/10

## 6️⃣ 优化后版本
[提供优化后的标题和首段]

## 📝 修改优先级
1. [最重要的修改]
2. [次重要的修改]
3. [可选优化]
```

## 分析示例

详见 `references/analysis-examples.md`
@@ -0,0 +1,229 @@

# 分析示例 (Analysis Examples)

## 示例1: 护肤类笔记分析

### 原始笔记

**标题**: 分享一个好用的精华

**正文**:
```
最近用了一款精华感觉还不错,分享给大家。
这个精华质地很清爽,吸收也快。
我用了大概两周,感觉皮肤状态好了一些。
推荐给大家试试。

#精华 #护肤
```

### 分析报告

```markdown
# 小红书笔记分析报告

## 📊 综合评分: 4/10

## 1️⃣ 关键词分析
- **核心关键词**: 精华、护肤
- **关键词布局**: ❌ 仅在标签出现,标题和正文缺乏
- **标签优化**: ❌ 只有2个标签,过少
- **问题**: 无长尾关键词,难以被精准搜索触达

**优化建议**:
- 标题加入具体关键词:"油皮精华"、"补水精华"
- 增加标签至5-8个
- 正文加入肤质、功效等长尾词

## 2️⃣ 标题/首段评估
- **标题类型**: 无类型(过于笼统)
- **吸引力评分**: 2/10
- **问题**:
  - "好用"无具体信息
  - 无数字、无痛点、无悬念
  - 不知道是什么精华、适合什么肤质

**优化建议**:
将 "分享一个好用的精华"
改为 "油皮亲测|这款平价精华控油效果绝了"

## 3️⃣ 敏感内容风险
- **风险等级**: 🟢 安全
- **检测到的敏感词**: 无
- **说明**: 内容较为保守,无违规风险

## 4️⃣ 商业化程度
- **自然度评分**: 6/10
- **商业痕迹**: 低,但内容过于单薄
- **问题**: 缺乏真实体验细节,反而显得不够真诚

**优化建议**:
- 加入使用前的皮肤问题
- 加入具体使用感受(气味、肤感)
- 可以提一个小缺点增加真实感

## 5️⃣ 互动潜力
- **讨论触发点**: ❌ 无
- **分享动机**: 低(无实用价值)
- **收藏价值**: 2/10

**问题**:
- 无提问引导评论
- 内容信息量低,无收藏必要

**优化建议**:
- 结尾加问题:"姐妹们还有什么平价精华推荐吗?"
- 加入使用步骤增加实用性

## 6️⃣ 优化后版本

**标题**: 油皮亲测|这款百元精华控油效果绝了

**正文**:
```
作为一个大油皮,夏天真的太难了😭
出油、毛孔粗、还容易闷痘...

试了很多精华,这款终于让我看到效果!

💧质地:水状精华,清爽不粘腻
👃气味:淡淡的草本味,很舒服
✨效果:用了两周,出油明显减少,毛孔也没那么明显了

唯一小缺点是瓶口设计不太好控制用量

油皮姐妹可以试试!
你们还有什么控油精华推荐吗?评论区告诉我👇

#油皮护肤 #控油精华 #平价精华 #夏日护肤 #毛孔护理 #学生党护肤
```

## 📝 修改优先级
1. 【必改】标题重写,加入关键词和吸引点
2. 【必改】增加标签至5-8个
3. 【建议】丰富正文内容,加入真实体验
4. 【建议】结尾加互动问题
```

---

## 示例2: 检测到敏感内容的笔记

### 原始笔记

**标题**: 这个方法治好了我的痘痘!

**正文**:
```
长了三年的痘痘终于治好了!
用了这个产品,痘痘全消了,绝对有效!
加我VX:xxx123,我告诉你具体方法
这是最好的祛痘方法,100%有效!
```

### 分析报告

```markdown
# 小红书笔记分析报告

## 📊 综合评分: 1/10 (高风险,不建议发布)

## 3️⃣ 敏感内容风险
- **风险等级**: 🔴 高危
- **检测到的敏感词**:
  1. "治好" — 医疗效果承诺
  2. "绝对有效" — 绝对化用语
  3. "加我VX" — 引流违规
  4. "最好的" — 绝对化用语
  5. "100%有效" — 虚假宣传

**后果**: 大概率被删帖,可能影响账号权重

## 修改建议

| 原文 | 问题 | 修改为 |
|------|------|--------|
| 治好了我的痘痘 | 医疗效果承诺 | 我的痘痘情况改善了很多 |
| 绝对有效 | 绝对化用语 | 我个人体验很有效 |
| 加我VX | 引流违规 | 删除,或改为"可以评论区交流" |
| 最好的 | 绝对化用语 | 我用过很有效的 |
| 100%有效 | 虚假宣传 | 对我来说效果很好 |

## 优化后版本

**标题**: 困扰我三年的痘痘终于好转了!分享我的方法

**正文**:
```
长了三年的痘痘,最近终于有好转了!
分享一下我的祛痘心得,希望对姐妹们有帮助~

⚠️ 每个人肤质不同,我的方法仅供参考哦

[具体方法内容]

有同样困扰的姐妹可以评论区交流~
你们有什么祛痘好方法吗?👇
```
```

---

## 示例3: 商业化程度过高的笔记

### 原始笔记

**标题**: XX品牌精华测评

**正文**:
```
今天给大家介绍XX品牌的明星产品!
这款精华含有专利成分XXX,官方介绍可以XXX。
现在官方旗舰店有活动,原价299现在只要199!
姐妹们快去抢!链接在评论区!
```

### 分析报告

```markdown
## 4️⃣ 商业化程度
- **自然度评分**: 2/10
- **商业痕迹**:
  - ❌ 标题直接品牌名
  - ❌ 复述官方介绍而非真实体验
  - ❌ 强调价格促销
  - ❌ 引导购买链接

**问题**: 这是一篇明显的硬广,用户信任度低,平台可能限流

## 优化建议

1. **标题去品牌化**: 改为功效/场景导向
2. **先讲故事/痛点**: 不要上来就介绍产品
3. **真实体验为主**: 质地、气味、使用感受
4. **加入小缺点**: 增加可信度
5. **删除价格促销**: 避免商业感
6. **不提链接**: 想了解的自然会搜

## 优化后版本

**标题**: 换季维稳精华|敏感肌亲测不踩雷

**正文**:
```
换季皮肤又开始闹脾气了😢
泛红、起皮、还有点刺痛...

朋友推荐我试了这款精华,用了两周来反馈!

💧质地:精华液偏水状,流动性很好
👃气味:几乎没什么味道,敏感肌友好
✨使用感:上脸很温和,没有刺激感

我连续用了两周,泛红情况确实好了很多
唯一觉得不太好的是按压头有点难控制用量

敏感肌姐妹可以考虑~你们换季用什么维稳?👇

#敏感肌护肤 #换季护肤 #维稳精华 #敏感肌精华推荐
```
```

@@ -0,0 +1,170 @@

# 小红书关键词策略 (Keyword Strategy)

## 关键词类型

### 1. 核心关键词 (Core Keywords)
用户搜索的主要词汇,竞争激烈但流量大

**特征**:
- 1-3个字
- 高搜索量
- 竞争激烈

**示例**: 护肤、穿搭、减肥、美妆、旅游

### 2. 长尾关键词 (Long-tail Keywords)
更具体的搜索词,竞争小但精准

**特征**:
- 4-10个字
- 搜索量较小
- 用户意图明确
- 转化率高

**示例**:
- 核心词"护肤" → 长尾词"油皮夏天护肤步骤"
- 核心词"穿搭" → 长尾词"小个子梨形身材穿搭"

### 3. 场景关键词 (Scenario Keywords)
描述使用场景的词汇

**示例**: 约会、通勤、旅游、居家、上班

### 4. 人群关键词 (Audience Keywords)
描述目标人群的词汇

**示例**: 学生党、上班族、宝妈、新手、敏感肌

### 5. 修饰关键词 (Modifier Keywords)
增加描述的词汇

**示例**: 平价、大牌、小众、百元、高性价比

## 关键词布局策略

### 布局位置优先级

```
1. 标题 (权重最高)
2. 首段前50字 (权重高)
3. 正文小标题 (权重中)
4. 正文内容 (权重中)
5. 标签 (权重中)
6. 图片OCR文字 (权重低)
```

### 最佳布局公式

```
标题: [核心关键词] + [修饰词] + [吸引词]
例: "油皮护肤|平价好用的夏日护肤清单"

首段: 包含核心关键词 + 1-2个长尾关键词
例: "作为一个油皮,夏天护肤真的太难了,
出油、毛孔、闷痘...今天分享我的油皮夏日护肤心得"

正文: 每200字自然出现1次关键词变体
例: "这款控油精华"、"适合油皮的XX"

标签: 5-10个,覆盖不同类型关键词
例: #油皮护肤 #夏日护肤 #控油 #平价护肤 #学生党护肤
```

## 标签策略

### 标签数量
- **最佳**: 5-10个
- **最少**: 3个
- **最多**: 不超过15个

### 标签组合公式

```
2-3个 核心词标签: #护肤 #美妆
2-3个 长尾词标签: #油皮护肤 #夏日护肤推荐
1-2个 场景词标签: #通勤妆容 #约会穿搭
1-2个 人群词标签: #学生党 #新手化妆
1-2个 热门话题标签: #好物分享 #我的日常
```
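上面的数量建议可以落成一个简单的标签检查函数(阈值取自本文,函数名为示意):

```python
def check_tags(tags: list[str]) -> list[str]:
    """按上文标签策略检查标签列表,返回问题描述列表(为空表示通过)。"""
    problems = []
    if len(tags) < 3:
        problems.append("标签少于3个")
    elif len(tags) > 15:
        problems.append("标签超过15个")
    elif not 5 <= len(tags) <= 10:
        problems.append("数量可用,但建议控制在5-10个")
    if any(not t.startswith("#") for t in tags):
        problems.append("标签应以#开头")
    return problems
```

类别搭配(核心/长尾/场景/人群/话题)仍需人工核对,这里只检查可量化的部分。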

### 标签注意事项

✅ **正确做法**:
- 标签与内容高度相关
- 混合大词和小词
- 使用平台热门话题标签

❌ **错误做法**:
- 堆砌不相关热门标签
- 只用大词不用长尾词
- 标签与内容不符(影响推荐精准度)

## 关键词密度

### 理想密度
- **核心关键词**: 2-3% (每100字出现2-3次)
- **长尾关键词**: 每个出现1-2次即可

### 检查方法
```
关键词密度 = (关键词出现次数 × 关键词字数) ÷ 总字数 × 100%
```
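上面的公式可以直接落成代码(极简 Python 实现,按字符数计数,中英文混排时仅作近似):

```python
def keyword_density(text: str, keyword: str) -> float:
    """关键词密度 = (出现次数 × 关键词字数) ÷ 总字数 × 100%,即上文公式。"""
    if not text or not keyword:
        return 0.0
    return text.count(keyword) * len(keyword) / len(text) * 100
```

例如一篇100字、出现1次"护肤"(2字)的笔记,密度为 2%,落在理想区间的下沿。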

### 密度问题

| 密度 | 问题 | 解决方案 |
|------|------|----------|
| <1% | 关键词不足,搜索难触达 | 在标题、首段、小标题增加关键词 |
| 2-3% | ✅ 理想状态 | 保持 |
| >5% | 关键词堆砌,影响阅读 | 使用同义词替换,减少重复 |

## 关键词挖掘方法

### 1. 小红书搜索框
- 输入核心词,查看下拉推荐词
- 这些是用户真实搜索的长尾词

### 2. 相关笔记分析
- 查看同类爆款笔记使用的关键词和标签
- 分析评论区用户使用的词汇

### 3. 关键词组合矩阵

```
核心词 × 人群词 = 长尾词
护肤 × 油皮 = 油皮护肤
护肤 × 学生党 = 学生党护肤
护肤 × 敏感肌 = 敏感肌护肤

核心词 × 场景词 = 长尾词
穿搭 × 通勤 = 通勤穿搭
穿搭 × 约会 = 约会穿搭

核心词 × 修饰词 = 长尾词
护肤 × 平价 = 平价护肤
护肤 × 大牌 = 大牌护肤
```
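上面的组合矩阵本质上是一个笛卡尔积,可以用几行 Python 批量生成(函数名为示意):

```python
from itertools import product

def longtail(cores: list[str], qualifiers: list[str]) -> list[str]:
    """核心词 × 人群/场景/修饰词 → 长尾词,与上文矩阵同构。"""
    return [q + c for c, q in product(cores, qualifiers)]
```

生成后仍应回到搜索框验证每个组合是否有真实搜索量,再决定是否采用。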

## 关键词分析输出模板

```markdown
### 关键词分析结果

**识别的核心关键词**: [词汇]
**识别的长尾关键词**: [词汇列表]

**当前布局情况**:
- 标题: ✅/❌ 包含核心词
- 首段: ✅/❌ 包含核心词
- 正文: X次出现
- 标签: X个相关标签

**关键词密度**: X%

**优化建议**:
1. [具体建议]
2. [具体建议]

**推荐增加的标签**:
#标签1 #标签2 #标签3
```
@@ -0,0 +1,113 @@

# 小红书敏感词库 (Sensitive Words Reference)

## 🔴 高危词汇 (删帖/封号风险)

### 医疗违规
- 治愈、根治、药到病除
- 处方药名称
- 癌症、肿瘤等重大疾病+治疗方案
- 医美手术具体操作描述

### 金融违规
- 稳赚、保本、高收益
- 具体投资回报率承诺
- 代投、跟单、带单
- 虚拟货币交易引导

### 政治敏感
- 国家领导人姓名+评价
- 敏感政治事件
- 境外政治内容

### 违法内容
- 赌博、博彩相关
- 毒品、违禁品
- 枪支、管制刀具
- 色情、低俗内容

### 引流违规
- 直接微信号/QQ号
- "私我"、"滴滴"、"DD"
- 明显外链、二维码
- "主页有联系方式"

## 🟠 中危词汇 (限流风险)

### 绝对化用语
- 最好、最佳、第一、首选
- 100%、绝对、肯定
- 全网最低、史上最强
- 国家级、世界级

### 医疗擦边
- 祛痘、美白、抗衰(需谨慎表述)
- 减肥、瘦身(避免效果承诺)
- 功效承诺类描述
- "亲测有效"+"医疗效果"

### 商业敏感
- 价格对比贬低竞品
- "比XX便宜"
- 购买链接暗示
- 过度促销用语

### 版权风险
- 未授权品牌logo
- 影视剧截图过多
- 音乐版权内容
- 他人原创内容搬运

## 🟡 低危词汇 (可能延迟审核)

### 可能触发审核
- 赚钱、副业、兼职
- 借贷、贷款、信用卡
- 整容、医美、手术
- 政策、法规相关

### 行业特殊词
- 保健品功效描述
- 母婴产品安全声明
- 食品功效暗示
- 化妆品成分功效

## ✅ 安全表述替换

| 危险表述 | 安全替换 |
|----------|----------|
| 最好的 | 我用过很不错的 |
| 100%有效 | 我个人体验很好 |
| 治愈了我的XX | 改善了我的XX情况 |
| 加我微信 | 可以交流(不留号) |
| 绝对好用 | 亲测感觉很棒 |
| 全网最低价 | 我买的时候很划算 |
| 药到病除 | 对我的情况有帮助 |
| 专家推荐 | 我看到有人推荐 |
| 官方认证 | 我在官方渠道买的 |

## 行业特殊规则

### 美妆护肤
- ❌ 避免:药妆、医学护肤、治疗效果
- ✅ 使用:护肤体验、个人感受、肤感描述

### 母婴育儿
- ❌ 避免:治疗婴儿疾病、代替医嘱
- ✅ 使用:个人育儿经验、仅供参考

### 健身减肥
- ❌ 避免:X天瘦X斤、快速减肥
- ✅ 使用:健身记录、饮食分享、个人体验

### 金融理财
- ❌ 避免:投资建议、收益承诺
- ✅ 使用:个人理财记录、学习笔记

## 检测方法

分析笔记时,按以下优先级检查:

1. **全文扫描** — 检查是否包含高危词库中的词汇
2. **上下文分析** — 判断中危词汇的使用语境
3. **意图识别** — 判断是否有引流、违规推广意图
4. **替换建议** — 提供安全表述替换方案
@@ -0,0 +1,142 @@

# 爆款标题公式 (Title Formulas)

## 核心原则

好标题 = **关键词** + **情绪触发** + **价值预期**

## 八大爆款标题公式

### 1️⃣ 数字具象型
**公式**: 数字 + 关键词 + 具体结果

```
✅ "5个技巧让我月瘦10斤"
✅ "3步画出日杂妆容"
✅ "7天养成早起习惯的方法"
✅ "100元搞定一周穿搭"
```

**为什么有效**: 数字给人具体、可执行、有结果的感觉

### 2️⃣ 痛点解决型
**公式**: 痛点问题 + 解决方案暗示

```
✅ "毛孔粗大?这个方法亲测有效"
✅ "总是存不下钱?试试这个记账法"
✅ "拍照不上镜?学会这几个姿势"
✅ "皮肤暗沉怎么办?我的美白心得"
```

**为什么有效**: 直击用户痛点,暗示有解决方案

### 3️⃣ 好奇悬念型
**公式**: 制造信息缺口 + 引发好奇

```
✅ "闺蜜问我怎么突然变白的..."
✅ "用了这个之后,同事都问我怎么了"
✅ "原来一直用错了!难怪没效果"
✅ "看完这个,我把XX扔了"
```

**为什么有效**: 信息缺口让人想知道答案

### 4️⃣ 对比反差型
**公式**: Before vs After / 意外反差

```
✅ "从120斤到100斤,我做了这件事"
✅ "黄皮穿对颜色,气质直接翻倍"
✅ "换了这个枕头,睡眠质量完全不一样"
✅ "以前嫌贵,用了才知道值"
```

**为什么有效**: 对比产生冲击力,暗示巨大改变

### 5️⃣ 身份权威型
**公式**: 身份背书 + 干货内容

```
✅ "皮肤科医生自用的5款防晒"
✅ "健身教练不会告诉你的减脂真相"
✅ "10年化妆师的底妆技巧"
✅ "留学生的省钱购物攻略"
```

**为什么有效**: 专业身份增加可信度

### 6️⃣ 情绪共鸣型
**公式**: 强烈情绪词 + 内容主题

```
✅ "后悔没早点知道!这个神器太绝了"
✅ "救命!这个也太好看了吧"
✅ "绝绝子!一整个爱住"
✅ "哭了,早买早享受"
```

**为什么有效**: 情绪感染,引发共鸣

### 7️⃣ 场景代入型
**公式**: 具体场景 + 解决方案

```
✅ "约会前一晚这样护肤"
✅ "出门旅游必带的10件好物"
✅ "上班通勤听的播客推荐"
✅ "租房改造,500块焕新"
```

**为什么有效**: 场景具体,用户容易代入

### 8️⃣ 清单合集型
**公式**: 主题 + 合集/清单/盘点

```
✅ "2024年度爱用物大盘点"
✅ "学生党平价护肤清单"
✅ "小个子穿搭合集|显高10cm"
✅ "懒人早餐食谱合集"
```

**为什么有效**: 信息密度高,收藏价值高

## 标题优化检查清单

### 必须包含 ✅
- [ ] 核心关键词
- [ ] 情绪触发点或价值点
- [ ] 字数控制在20字以内(最佳15-18字)

### 避免 ❌
- [ ] 纯品牌名做标题
- [ ] 过于笼统无具体信息
- [ ] 绝对化用语(最好、第一)
- [ ] 标题党但内容不符

## 首段黄金50字

首段决定用户是否继续阅读,必须包含:

### 结构公式
```
痛点/场景引入(10-15字) +
解决方案预告(15-20字) +
继续阅读的钩子(10-15字)
```

### 示例
```
❌ "今天给大家分享一个好东西"(无信息量)

✅ "换季皮肤又开始爆痘了😭
试了很多方法终于找到有效的,
往下看我是怎么3周恢复的👇"
```
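"黄金50字"的三项要求可以写成一个小检查函数(阈值与钩子符号取自本文,仅为示意):

```python
def check_opening(first_para: str, keyword: str) -> dict:
    """检查首段:是否在50字内、是否含核心关键词、结尾是否带钩子。"""
    return {
        "len_ok": len(first_para) <= 50,
        "has_keyword": keyword in first_para,
        "has_hook": first_para.rstrip().endswith(("👇", "?", "?", "...")),
    }
```

"与封面图呼应"这一项无法从纯文本判断,仍需人工核对。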

### 首段钩子技巧
- 抛出问题:"你们有没有这种情况?"
- 制造悬念:"但是有个关键点..."
- 预告价值:"下面分享我的方法"
- 引发共鸣:"和我一样的姐妹看过来"
.agents/skills/xiaohongshu/LICENSE (new file, 25 lines)
@@ -0,0 +1,25 @@

MIT License

Copyright (c) 2025

This license applies to the shell script wrapper layer in this repository only.
The xiaohongshu-mcp binary (https://github.com/xpzouying/xiaohongshu-mcp) is a
separate project with its own licensing terms.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
.agents/skills/xiaohongshu/README.md (new file, 315 lines)
@@ -0,0 +1,315 @@

# XHS AI Toolkit

<p align="center">
  <b>Make AI understand your Xiaohongshu (RedNote)</b>
</p>

<p align="center">
  <a href="README_CN.md">简体中文</a> | English
</p>

<p align="center">
  <img alt="License" src="https://img.shields.io/badge/license-MIT-blue.svg">
  <img alt="Platform" src="https://img.shields.io/badge/platform-Linux%20%7C%20macOS-lightgrey.svg">
  <img alt="Python" src="https://img.shields.io/badge/python-3.10+-green.svg">
  <img alt="MCP" src="https://img.shields.io/badge/MCP-compatible-purple.svg">
</p>

---

AI-powered toolkit for **Xiaohongshu (小红书 / RedNote)** that turns your favorite posts into AI memory.

- **MCP Integration** — Search, browse, and comment via AI assistants
- **Trend Tracking** — Auto-generate topic reports with engagement analytics
- **Memory Export** — Convert your liked/saved posts into an AI-searchable knowledge base

Built on [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp) and [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader).

## Features

| Feature | Description |
|---------|-------------|
| Search | Search posts by keywords |
| Feed | Get homepage recommendations |
| Post Details | Fetch post content, comments, engagement stats |
| Comment | Post comments to notes |
| User Profile | Get user info and their posts |
| Trend Tracking | Auto-generate topic analysis reports |
| Long Image Export | Export posts as annotated JPG long images |
| Memory Export | Export liked/saved posts as Markdown for AI memory |

## Quick Start

### 1. Install xiaohongshu-mcp

Download from [GitHub Releases](https://github.com/xpzouying/xiaohongshu-mcp/releases):

```bash
# Linux x64
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-mcp-linux-amd64.tar.gz
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-login-linux-amd64.tar.gz

# macOS ARM
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-mcp-darwin-arm64.tar.gz
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-login-darwin-arm64.tar.gz
```

Install:

```bash
mkdir -p ~/.local/bin
tar -xzf xiaohongshu-mcp-*.tar.gz -C ~/.local/bin/
tar -xzf xiaohongshu-login-*.tar.gz -C ~/.local/bin/

cd ~/.local/bin
mv xiaohongshu-mcp-* xiaohongshu-mcp
mv xiaohongshu-login-* xiaohongshu-login
chmod +x xiaohongshu-mcp xiaohongshu-login
```

### 2. Install This Toolkit

```bash
# Clone to OpenClaw workspace
git clone https://github.com/zhjiang22/openclaw-xhs.git
cp -r openclaw-xhs ~/.openclaw/workspace/skills/xiaohongshu

# Or use a symlink
ln -s /path/to/openclaw-xhs ~/.openclaw/workspace/skills/xiaohongshu

# Verify installation
cd ~/.openclaw/workspace/skills/xiaohongshu/scripts
./install-check.sh
```

### 3. Login (Get Cookies)

**Option A: Desktop Environment**

```bash
./login.sh  # Opens a browser; scan the QR code with the Xiaohongshu app
```

**Option B: Headless Server**

Get cookies on your local machine, then copy them to the server:

```bash
# On a local machine with a GUI
./xiaohongshu-login
# Cookies are saved to /tmp/cookies.json

# Copy to the server
scp /tmp/cookies.json user@server:~/.xiaohongshu/cookies.json
```

### 4. Start Service

```bash
./start-mcp.sh                   # Headless mode
./start-mcp.sh --headless=false  # Show browser (debug)
```

The service runs at `http://localhost:18060/mcp`.
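Under the hood, the wrapper scripts POST JSON-RPC messages to this endpoint. A minimal sketch of the request body a call like `./search.sh` ultimately sends (field names follow the MCP `tools/call` convention; treat the exact shape as an assumption, not this repo's literal code):

```python
import json

def mcp_payload(tool: str, arguments: dict, req_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call body for POSTing to the /mcp endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }, ensure_ascii=False)

# POST this with Content-Type: application/json to http://localhost:18060/mcp
body = mcp_payload("search_feeds", {"keyword": "coffee"})
```

In practice you would just use `./mcp-call.sh`; this only shows what travels over the wire.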

#### Server Deployment (Headless Linux)

On servers without a desktop environment, the underlying browser requires a virtual display.
`start-mcp.sh` **auto-detects** the environment — if no display is found, it starts Xvfb automatically. Just install it first:

```bash
# Debian/Ubuntu
sudo apt-get install -y xvfb

# CentOS/RHEL
sudo yum install -y xorg-x11-server-Xvfb
```

No extra configuration needed. The script handles:
- Detecting the `DISPLAY` environment variable
- Auto-starting `Xvfb :99` when no display is available
- Cleaning up Xvfb when `stop-mcp.sh` is called

> **Note**: Without Xvfb, login and search will fail on headless servers. See [Issue #3](https://github.com/zhjiang22/openclaw-xhs/issues/3).

## Usage

### Basic Commands

```bash
./status.sh                               # Check login status
./search.sh "coffee"                      # Search posts
./recommend.sh                            # Get recommendations
./post-detail.sh <id> <token>             # Get post details
./comment.sh <id> <token> "Great post!"   # Comment
./user-profile.sh <user_id> <xsec_token>  # Get user profile
```

### Trend Tracking

Auto-search trending posts and generate analysis reports:

```bash
./track-topic.sh "AI" --limit 10
./track-topic.sh "travel" --limit 5 --output report.md
./track-topic.sh "iPhone" --limit 5 --feishu  # Export to Feishu
```

### MCP Tools

| Tool | Description |
|------|-------------|
| `check_login_status` | Check login status |
| `search_feeds` | Search posts |
| `list_feeds` | Get homepage feed |
| `get_feed_detail` | Get post details & comments |
| `post_comment_to_feed` | Post comment |
| `user_profile` | Get user profile |
| `like_feed` | Like/unlike post |
| `favorite_feed` | Save/unsave post |
| `publish_content` | Publish image post |
| `publish_with_video` | Publish video post |

### Long Image Export

Export posts as annotated JPG long images (white background, black text):

```bash
# Prepare posts.json
cat > posts.json << 'EOF'
[
  {
    "title": "Post title",
    "author": "Author",
    "stats": "13k likes 100 saves",
    "desc": "Post summary",
    "images": ["https://...webp"],
    "per_image_text": {"1": "Caption for 2nd image"}
  }
]
EOF

./export-long-image.sh --posts-file posts.json -o output.jpg
```

Requires: Python 3.10+, Pillow (`pip install Pillow`)

## Memory Export (Turn Likes into AI Memory)

Export your liked/saved posts as a searchable knowledge base for AI assistants.

### 1. Install XHS-Downloader

```bash
git clone https://github.com/JoeanAmier/XHS-Downloader.git
cd XHS-Downloader
pip install -r requirements.txt
```

### 2. Extract Post Links (Tampermonkey Script)

1. Install [Tampermonkey](https://www.tampermonkey.net/)
2. Install the [XHS-Downloader UserScript](https://raw.githubusercontent.com/JoeanAmier/XHS-Downloader/refs/heads/master/static/XHS-Downloader.js)
3. Go to the Xiaohongshu web app → Profile → Liked/Saved
4. Click the Tampermonkey menu → "Extract liked posts" or "Extract saved posts"
5. Links are auto-copied to the clipboard
6. Paste them into `links.md`

### 3. Download & Export

```bash
# Copy helper scripts
cp tools/xhs-downloader/*.py /path/to/XHS-Downloader/

# Download posts
cd /path/to/XHS-Downloader
python batch_download.py links.md

# Export to workspace
python export_to_workspace.py
# Output: ~/.openclaw/workspace/xhs-memory/
```

### 4. Configure OpenClaw Memory Search

Edit `~/.openclaw/openclaw.json`:

```json
{
  "memorySearch": {
    "extraPaths": [
      "~/.openclaw/workspace/xhs-memory"
    ]
  }
}
```

Now your AI assistant can search your Xiaohongshu favorites!

## Project Structure

```
openclaw-xhs/
├── README.md                  # English docs
├── README_CN.md               # Chinese docs
├── LICENSE
├── SKILL.md                   # Skill manifest
├── scripts/                   # MCP wrapper scripts
│   ├── install-check.sh
│   ├── start-mcp.sh
│   ├── stop-mcp.sh
│   ├── login.sh
│   ├── mcp-call.sh
│   ├── status.sh
│   ├── search.sh
│   ├── recommend.sh
│   ├── post-detail.sh
│   ├── comment.sh
│   ├── user-profile.sh
│   ├── track-topic.sh
│   ├── track-topic.py
│   ├── export-long-image.sh
│   └── export-long-image.py
└── tools/
    └── xhs-downloader/        # Memory export tools
        ├── README.md
        ├── batch_download.py
        ├── export_memory.py
        └── export_to_workspace.py
```

## Security

This project implements the following security measures:

- **Cookie protection**: Cookie files are copied with `600` permissions (owner-only read/write)
- **Injection prevention**: All shell scripts use `jq` to build JSON payloads instead of string interpolation, preventing shell injection
- **Tool name validation**: MCP tool names are restricted to alphanumeric characters and underscores
- **Path validation**: Cross-skill script calls validate that target paths are within allowed directories
- **Third-party content**: Content fetched from Xiaohongshu is user-generated; exercise appropriate caution
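To illustrate the injection-prevention and tool-name rules, here is a Python sketch of the same idea the shell scripts implement with `jq` (illustrative only, not the scripts' actual code):

```python
import json
import re

TOOL_NAME_RE = re.compile(r"^[A-Za-z0-9_]+$")  # alphanumerics and underscores only

def safe_payload(tool: str, args: dict) -> str:
    """Reject suspicious tool names, then serialize args so quotes and
    newlines are escaped rather than interpolated into a command string."""
    if not TOOL_NAME_RE.fullmatch(tool):
        raise ValueError(f"invalid tool name: {tool!r}")
    return json.dumps({"name": tool, "arguments": args})
```

The key property in both implementations is that user input only ever enters the payload through a JSON serializer, never through string concatenation.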

## Disclaimer

This project is a **wrapper layer** for [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp).

- It does NOT contain xiaohongshu-mcp source code
- Users must download the xiaohongshu-mcp binaries separately
- Scripts communicate with the service over HTTP only

## Acknowledgments

- [@xpzouying](https://github.com/xpzouying) — [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp)
- [@JoeanAmier](https://github.com/JoeanAmier) — [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader) (GPL-3.0)

## License

MIT License (wrapper scripts only)

**Note:** xiaohongshu-mcp has no declared license. Please respect the author's terms.

---

<p align="center">
  <b>If this project helps you, please give it a ⭐!</b>
</p>

.agents/skills/xiaohongshu/README_CN.md (new file, 373 lines)
@@ -0,0 +1,373 @@
|
||||
# XHS AI Toolkit
|
||||
|
||||
<p align="center">
|
||||
<b>让 AI 读懂你的小红书</b>
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
简体中文 | <a href="README.md">English</a>
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
<img alt="License" src="https://img.shields.io/badge/license-MIT-blue.svg">
|
||||
<img alt="Platform" src="https://img.shields.io/badge/platform-Linux%20%7C%20macOS-lightgrey.svg">
|
||||
<img alt="Python" src="https://img.shields.io/badge/python-3.10+-green.svg">
|
||||
<img alt="MCP" src="https://img.shields.io/badge/MCP-compatible-purple.svg">
|
||||
</p>
|
||||
|
||||
---
|
||||
|
||||
小红书 AI 工具包 — 把你的收藏变成 AI 的记忆。
|
||||
|
||||
- **MCP 集成** — 通过 AI 助手搜索、浏览、评论小红书
|
||||
- **热点跟踪** — 自动生成话题报告,含互动数据分析
|
||||
- **记忆导出** — 将收藏/点赞笔记转为 AI 可搜索的知识库
|
||||
|
||||
基于 [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp) 和 [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader) 构建。
|
||||
|
||||
## 功能特性
|
||||
|
||||
| 功能 | 说明 |
|
||||
|------|------|
|
||||
| 搜索内容 | 按关键词搜索小红书笔记 |
|
||||
| 首页推荐 | 获取首页推荐列表 |
|
||||
| 帖子详情 | 获取笔记内容、评论、互动数据 |
|
||||
| 发表评论 | 在笔记下发表评论 |
|
||||
| 用户主页 | 获取用户资料和笔记列表 |
|
||||
| 热点跟踪 | 自动生成话题分析报告 |
|
||||
| 长图导出 | 将帖子导出为带注释的 JPG 长图 |
|
||||
| 记忆导出 | 导出收藏/点赞为 Markdown 记忆库 |
|
||||
|
||||
## 快速开始
|
||||
|
||||
### 1. 安装 xiaohongshu-mcp
|
||||
|
||||
从 [GitHub Releases](https://github.com/xpzouying/xiaohongshu-mcp/releases) 下载:
|
||||
|
||||
```bash
|
||||
# Linux x64
|
||||
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-mcp-linux-amd64.tar.gz
|
||||
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-login-linux-amd64.tar.gz
|
||||
|
||||
# macOS ARM
|
||||
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-mcp-darwin-arm64.tar.gz
|
||||
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/latest/download/xiaohongshu-login-darwin-arm64.tar.gz
|
||||
```
|
||||
|
||||
解压安装:
|
||||
|
||||
```bash
|
||||
mkdir -p ~/.local/bin
|
||||
tar -xzf xiaohongshu-mcp-*.tar.gz -C ~/.local/bin/
|
||||
tar -xzf xiaohongshu-login-*.tar.gz -C ~/.local/bin/
|
||||
|
||||
cd ~/.local/bin
|
||||
mv xiaohongshu-mcp-* xiaohongshu-mcp
|
||||
mv xiaohongshu-login-* xiaohongshu-login
|
||||
chmod +x xiaohongshu-mcp xiaohongshu-login
|
||||
```
|
||||
|
||||
### 2. Install This Toolkit

```bash
# Clone into the OpenClaw workspace
git clone https://github.com/zhjiang22/openclaw-xhs.git
cp -r openclaw-xhs ~/.openclaw/workspace/skills/xiaohongshu

# Or use a symlink
ln -s /path/to/openclaw-xhs ~/.openclaw/workspace/skills/xiaohongshu

# Verify the installation
cd ~/.openclaw/workspace/skills/xiaohongshu/scripts
./install-check.sh
```
### 3. Log In and Obtain Cookies

**Option 1: local desktop environment**

```bash
./login.sh  # opens a browser; scan the QR code with the Xiaohongshu app
```

**Option 2: headless Linux server**

Obtain cookies on a local machine, then copy them to the server:

```bash
# Local machine (with a GUI)
./xiaohongshu-login
# Cookies are saved to /tmp/cookies.json

# Copy to the server
scp /tmp/cookies.json user@server:~/.xiaohongshu/cookies.json
```

On startup, the service checks the following locations for cookies, in priority order:

1. The path set in the `XHS_COOKIES_SRC` environment variable
2. `~/cookies.json`
3. `~/.xiaohongshu/cookies.json`
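The lookup order above can be sketched as a small helper. This is an illustrative re-implementation, not the code that `start-mcp.sh` actually runs; the `candidates` list simply mirrors the documented priority.

```python
import os

def resolve_cookies_path(env=os.environ, home=None):
    """Return the first existing cookies file, following the documented priority."""
    home = home or os.path.expanduser("~")
    candidates = [
        env.get("XHS_COOKIES_SRC"),                          # 1. explicit override
        os.path.join(home, "cookies.json"),                  # 2. ~/cookies.json
        os.path.join(home, ".xiaohongshu", "cookies.json"),  # 3. fallback
    ]
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None
```

Passing `env` and `home` explicitly keeps the helper testable without touching the real home directory.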
### 4. Start the Service

```bash
./start-mcp.sh                   # headless mode
./start-mcp.sh --headless=false  # show the browser (for debugging)
```

The service listens on `http://localhost:18060/mcp`.

Stop the service: `./stop-mcp.sh`

#### Server Deployment (No Desktop Environment)

On a Linux server without a graphical interface, the browser that `xiaohongshu-mcp` drives needs a virtual display to work. `start-mcp.sh` **auto-detects** whether a desktop environment is present and starts Xvfb when there is none; you only need to install it beforehand:

```bash
# Debian/Ubuntu
sudo apt-get install -y xvfb

# CentOS/RHEL
sudo yum install -y xorg-x11-server-Xvfb
```
No further configuration is needed after installing; `start-mcp.sh` handles everything automatically:

- Detects the `DISPLAY` environment variable
- Starts `Xvfb :99` automatically when no display is available
- `stop-mcp.sh` also cleans up the Xvfb process when stopping the service
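The display selection can be sketched as follows (the script actually scans `:99` through `:109` for a free slot, not just `:99`). This is an illustrative Python re-implementation; the real logic lives in the shell script and additionally cleans up stale lock files.

```python
import os

def pick_free_display(lock_dir="/tmp", low=99, high=109):
    """Return the first X display number in [low, high] with no lock file, else None."""
    for d in range(low, high + 1):
        if not os.path.exists(os.path.join(lock_dir, f".X{d}-lock")):
            return d
    return None
```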
> **Note**: Without Xvfb, login and search will fail. See [Issue #3](https://github.com/zhjiang22/openclaw-xhs/issues/3).
## Usage

### Basic Commands

```bash
./status.sh                                   # check login status
./search.sh "咖啡"                             # search content
./recommend.sh                                # fetch recommendations
./post-detail.sh <id> <token>                 # fetch post detail
./comment.sh <id> <token> "写得真好!"          # post a comment
./user-profile.sh <user_id> <xsec_token>      # fetch a user profile
```
### Trend Tracking

Automatically search hot posts and generate an analysis report:

```bash
./track-topic.sh "DeepSeek" --limit 10
./track-topic.sh "春节旅游" --limit 5 --output report.md
./track-topic.sh "iPhone 16" --limit 5 --feishu  # export to Feishu
```

The report includes:

- 📊 Overview statistics (post count, likes, comments)
- 📝 Hot-post details (title, author, body, top comments)
- 💬 Trending keywords from the comment sections
- 📈 Trend analysis
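A minimal sketch of how comment keywords could be tallied for such a report (illustrative only; `track-topic.py` may use a different approach, and the naive whitespace tokenizer here would need a proper segmenter for Chinese text):

```python
from collections import Counter

def top_keywords(comments, n=5, stopwords=frozenset({"the", "a", "is"})):
    """Count whitespace-separated tokens across comments; return the n most common."""
    counts = Counter(
        word.lower()
        for text in comments
        for word in text.split()
        if word.lower() not in stopwords
    )
    return [word for word, _ in counts.most_common(n)]
```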
### MCP Tool List

| Tool | Description |
|--------|------|
| `check_login_status` | Check login status |
| `search_feeds` | Search content |
| `list_feeds` | Fetch the home feed |
| `get_feed_detail` | Fetch post detail and comments |
| `post_comment_to_feed` | Post a comment |
| `user_profile` | Fetch a user profile |
| `like_feed` | Like / unlike |
| `favorite_feed` | Favorite / unfavorite |
| `publish_content` | Publish an image-and-text note |
| `publish_with_video` | Publish a video note |
### Generic MCP Calls

```bash
./mcp-call.sh                     # list available tools
./mcp-call.sh search_feeds '{"keyword": "咖啡"}'
./mcp-call.sh like_feed '{"feed_id": "xxx", "xsec_token": "xxx", "like": true}'
```
### Long-Image Export

Export search results or post details as a JPG long image with text annotations:

```bash
# Prepare posts.json (assembled after searching and fetching details)
cat > posts.json << 'EOF'
[
  {
    "title": "帖子标题",
    "author": "作者名",
    "stats": "1.3万赞 100收藏",
    "desc": "正文摘要",
    "images": ["https://...webp"],
    "per_image_text": {"1": "第2张图的专属说明"}
  }
]
EOF

./export-long-image.sh --posts-file posts.json -o output.jpg
```

Dependencies: Python 3.10+ and Pillow (`pip install Pillow`)
## Memory Export: Turn Your Saves into AI Memory

Export your saved/liked notes into an AI-searchable knowledge base so your assistant knows you better.

### 1. Install XHS-Downloader

```bash
git clone https://github.com/JoeanAmier/XHS-Downloader.git
cd XHS-Downloader
pip install -r requirements.txt
```
### 2. Collect Saved/Liked Links (Userscript)

Copying links by hand is slow; use the userscript to extract them in bulk:

**Install the script:**

1. Install the [Tampermonkey](https://www.tampermonkey.net/) browser extension
2. Install the userscript: [XHS-Downloader.js](https://raw.githubusercontent.com/JoeanAmier/XHS-Downloader/refs/heads/master/static/XHS-Downloader.js)

**Extract links:**

1. Open the [Xiaohongshu web app](https://www.xiaohongshu.com) and log in
2. Go to your profile → the **Saves** or **Likes** page
3. Click the Tampermonkey icon and choose:
   - `提取收藏作品链接` (extract saved-post links)
   - `提取点赞作品链接` (extract liked-post links)
4. With auto-scroll enabled, the script scrolls the page to load all content
5. When extraction finishes, the links are copied to the clipboard automatically
6. Paste them into a `links.md` file

> **Note**: Auto-scroll is off by default and must be enabled in the script settings. Enabling it may trigger anti-bot controls, so use it in moderation.
### 3. Batch Download and Export

```bash
# Copy the helper scripts into the XHS-Downloader directory
cp tools/xhs-downloader/*.py /path/to/XHS-Downloader/

# Batch download
cd /path/to/XHS-Downloader
python batch_download.py links.md

# Export as multiple files (recommended)
python export_to_workspace.py
# Output goes to ~/.openclaw/workspace/xhs-memory/

# Or export as a single file
python export_memory.py
# Generates xhs_memory.md
```
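As a rough sketch, one exported memory entry might be rendered like this (illustrative; `export_memory.py` defines the real format, and the field names `title`, `url`, and `desc` are assumptions):

```python
def note_to_markdown(note):
    """Render one downloaded note dict as a Markdown memory entry."""
    lines = [f"## {note['title']}", ""]
    if note.get("url"):
        lines.append(f"Source: {note['url']}")
        lines.append("")
    lines.append(note.get("desc", ""))
    return "\n".join(lines)
```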
### 4. Configure OpenClaw Memory Search

Edit `~/.openclaw/openclaw.json`:

```json
{
  "memorySearch": {
    "extraPaths": [
      "~/.openclaw/workspace/xhs-memory"
    ]
  }
}
```

Your AI assistant can now search your Xiaohongshu saves!
## Security Notes

The scripts take the following precautions:

- **Cookie protection**: copied cookies files are set to `600` permissions (read/write for the current user only)
- **Injection hardening**: all shell scripts build JSON payloads with `jq` rather than string concatenation, preventing shell injection
- **Tool-name validation**: MCP tool names are restricted to alphanumerics and underscores; anything else is rejected
- **Path validation**: cross-skill calls verify that the target path stays within the allowed directories
- **Third-party content**: content fetched from Xiaohongshu is user-generated (UGC); evaluate it critically
## Caveats

1. **Publishing limits**
   - Title: at most 20 characters
   - Body: at most 1000 characters
   - Daily publishing cap: about 50 posts

2. **Account safety**
   - Avoid logging into the same account from multiple devices at once
   - Use the mobile app only for viewing; do not operate both at the same time

3. **First run**
   - A headless browser (about 150 MB) is downloaded automatically
   - Make sure your network connection is stable

4. **Cookie lifetime**
   - Cookies are typically valid for about 30 days
   - Re-authenticate once they expire
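Given the roughly 30-day lifetime, a quick freshness check on the cookies file can be sketched like this (illustrative; the 30-day figure is the rule of thumb above, not an API guarantee, and file mtime is only a proxy for when you last logged in):

```python
import os
import time

def cookies_age_days(path):
    """Days since the cookies file was last modified."""
    return (time.time() - os.path.getmtime(path)) / 86400

def cookies_probably_expired(path, max_days=30):
    """True if the cookies file is missing or older than max_days."""
    return not os.path.exists(path) or cookies_age_days(path) > max_days
```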
## Project Layout

```
openclaw-xhs/
├── README.md              # English documentation
├── README_CN.md           # Chinese documentation
├── LICENSE
├── SKILL.md               # Skill description file
├── scripts/               # MCP call scripts
│   ├── install-check.sh
│   ├── start-mcp.sh
│   ├── stop-mcp.sh
│   ├── login.sh
│   ├── mcp-call.sh
│   ├── status.sh
│   ├── search.sh
│   ├── recommend.sh
│   ├── post-detail.sh
│   ├── comment.sh
│   ├── user-profile.sh
│   ├── track-topic.sh
│   ├── track-topic.py
│   ├── export-long-image.sh
│   └── export-long-image.py
└── tools/
    └── xhs-downloader/    # memory-export tools
        ├── README.md
        ├── batch_download.py
        ├── export_memory.py
        └── export_to_workspace.py
```
## Disclaimer

This project is a **thin invocation wrapper** around [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp).

- It contains **none** of xiaohongshu-mcp's source code
- Users must **download the xiaohongshu-mcp binaries themselves**
- The scripts communicate with the MCP service over HTTP only
## Acknowledgements

- [@xpzouying](https://github.com/xpzouying) — [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp)
- [@JoeanAmier](https://github.com/JoeanAmier) — [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader) (GPL-3.0)
## License

MIT License (covers this project's scripts only)

**Note:** the xiaohongshu-mcp project declares no open-source license; follow its author's terms of use.

---

<p align="center">
  <b>Find this useful? Leave a ⭐!</b>
</p>
202
.agents/skills/xiaohongshu/SKILL.md
Normal file
@@ -0,0 +1,202 @@
---
name: xiaohongshu
description: |
  Xiaohongshu (RedNote) content tools. Use cases:
  - Search Xiaohongshu notes and fetch details
  - Fetch the home recommendation feed
  - Fetch post details (body, images, engagement data, comments)
  - Post comments / reply to comments
  - Fetch user profiles and note lists
  - Like and favorite posts
  - Publish image-and-text or video notes
  - Track trending topics and generate analysis reports
  - Export posts as long images
  Example triggers (literal user phrases, kept in Chinese):
  - "搜一下小红书上的XX"
  - "跟踪一下小红书上的XX热点"
  - "分析小红书上关于XX的讨论"
  - "小红书XX话题报告"
  - "生成XX的小红书舆情报告"
---
# Xiaohongshu MCP Skill

A shell-script toolset wrapping [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp).
## Prerequisites

```bash
cd scripts/
./install-check.sh   # check dependencies (xiaohongshu-mcp, jq, python3)
./start-mcp.sh       # start the MCP service (default port 18060)
./status.sh          # confirm you are logged in
```

If not logged in, fetch a QR code with `mcp-call.sh get_login_qrcode` and scan it with the Xiaohongshu app.

The service URL can be overridden via the `MCP_URL` environment variable (default `http://localhost:18060/mcp`).
## Core Data Flow

**Important:** most operations require a `feed_id` + `xsec_token` pair. Both values come from search/recommendation/profile results and **cannot be constructed by hand**.

```
search_feeds / list_feeds / user_profile
        │
        ▼
returns a feeds array; each feed contains:
  ├── id         → use as feed_id
  ├── xsecToken  → use as xsec_token
  └── noteCard   → title, author, cover, engagement data
        │
        ▼
get_feed_detail(feed_id, xsec_token)
        │
        ▼
returns the full note: body, image list, comment list
comments include comment_id and user_id (needed to reply)
```
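The JSON-RPC envelope that `mcp-call.sh` sends for each tool call can be sketched as follows (illustrative; the shell script builds the same shape with `jq`):

```python
import json

def tools_call_payload(tool, arguments, req_id=2):
    """Build a JSON-RPC 2.0 tools/call request body for the MCP service."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })
```

The `arguments` dict is exactly the per-tool JSON shown in the parameter sections below.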
## Script Reference

| Script | Purpose | Arguments |
|------|------|------|
| `search.sh <keyword>` | Search notes | keyword |
| `recommend.sh` | Home feed | none |
| `post-detail.sh <feed_id> <xsec_token>` | Post detail + comments | from search results |
| `comment.sh <feed_id> <xsec_token> <content>` | Post a comment | from search results |
| `user-profile.sh <user_id> <xsec_token>` | User profile + notes | from search results |
| `track-topic.sh <topic> [options]` | Trend report | `--limit N` `--output file` `--feishu` |
| `export-long-image.sh` | Export posts as a long image | `--posts-file json -o output.jpg` |
| `mcp-call.sh <tool> [json_args]` | Generic MCP call | see the tool table below |
| `start-mcp.sh` | Start the service | `--headless=false` `--port=N` |
| `stop-mcp.sh` | Stop the service | none |
| `status.sh` | Check login | none |
| `install-check.sh` | Check dependencies | none |
## MCP Tool Parameters

### search_feeds — search notes

```json
{"keyword": "咖啡", "filters": {"sort_by": "最新", "note_type": "图文", "publish_time": "一周内"}}
```

Optional `filters` fields (the values are the literal Chinese strings the service expects):

- `sort_by`: 综合 | 最新 | 最多点赞 | 最多评论 | 最多收藏
- `note_type`: 不限 | 视频 | 图文
- `publish_time`: 不限 | 一天内 | 一周内 | 半年内
- `search_scope`: 不限 | 已看过 | 未看过 | 已关注
- `location`: 不限 | 同城 | 附近
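Because the filter values are fixed Chinese enumerations, it is easy to pass an invalid one. A small pre-flight validator can be sketched as follows (an illustrative helper, not part of the toolkit; the sets mirror the lists above):

```python
ALLOWED_FILTERS = {
    "sort_by": {"综合", "最新", "最多点赞", "最多评论", "最多收藏"},
    "note_type": {"不限", "视频", "图文"},
    "publish_time": {"不限", "一天内", "一周内", "半年内"},
    "search_scope": {"不限", "已看过", "未看过", "已关注"},
    "location": {"不限", "同城", "附近"},
}

def validate_filters(filters):
    """Return the (key, value) pairs that are not valid search_feeds filters."""
    return [
        (k, v) for k, v in filters.items()
        if k not in ALLOWED_FILTERS or v not in ALLOWED_FILTERS[k]
    ]
```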
### get_feed_detail — post detail

```json
{"feed_id": "...", "xsec_token": "...", "load_all_comments": true, "limit": 20}
```

- `load_all_comments`: `false` (default) returns the first 10 comments; `true` scrolls to load more
- `limit`: maximum comments to load (only when `load_all_comments=true`), default 20
- `click_more_replies`: whether to expand second-level replies, default `false`
- `reply_limit`: skip comments whose reply count exceeds this value, default 10
- `scroll_speed`: slow | normal | fast
### post_comment_to_feed — post a comment

```json
{"feed_id": "...", "xsec_token": "...", "content": "写得真好!"}
```

### reply_comment_in_feed — reply to a comment

```json
{"feed_id": "...", "xsec_token": "...", "content": "谢谢!", "comment_id": "...", "user_id": "..."}
```

`comment_id` and `user_id` come from the comment list returned by `get_feed_detail`.
### user_profile — user profile

```json
{"user_id": "...", "xsec_token": "..."}
```

`user_id` comes from the feed's `noteCard.user.userId`; for `xsec_token`, use that feed's `xsecToken`.

### like_feed — like / unlike

```json
{"feed_id": "...", "xsec_token": "..."}
{"feed_id": "...", "xsec_token": "...", "unlike": true}
```

### favorite_feed — favorite / unfavorite

```json
{"feed_id": "...", "xsec_token": "..."}
{"feed_id": "...", "xsec_token": "...", "unfavorite": true}
```
### publish_content — publish an image-and-text note

```json
{"title": "标题(≤20字)", "content": "正文(≤1000字)", "images": ["/path/to/img.jpg"], "tags": ["美食","旅行"]}
```

- `images`: at least one image; local paths or HTTP URLs
- `tags`: optional topic tags
- `schedule_at`: optional scheduled publish time (ISO 8601, between 1 hour and 14 days from now)
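A pre-flight check against the publishing limits (title at most 20 characters, body at most 1000, at least one image) can be sketched as follows (an illustrative helper, not part of the toolkit):

```python
def check_publish_payload(payload):
    """Return a list of human-readable problems with a publish_content payload."""
    problems = []
    if len(payload.get("title", "")) > 20:
        problems.append("title exceeds 20 characters")
    if len(payload.get("content", "")) > 1000:
        problems.append("content exceeds 1000 characters")
    if not payload.get("images"):
        problems.append("at least one image is required")
    return problems
```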
### publish_with_video — publish a video note

```json
{"title": "标题", "content": "正文", "video": "/path/to/video.mp4"}
```
### Other Tools

| Tool | Arguments | Description |
|------|------|------|
| `check_login_status` | none | Check login status |
| `list_feeds` | none | Fetch the home feed |
| `get_login_qrcode` | none | Fetch a login QR code (Base64 PNG) |
| `delete_cookies` | none | Delete cookies and reset the login state |
## Trend Tracking

Search → fetch details → generate a Markdown report, automatically.

```bash
./track-topic.sh "DeepSeek" --limit 5
./track-topic.sh "春节旅游" --limit 10 --output report.md
./track-topic.sh "iPhone 16" --limit 5 --feishu  # export to Feishu
```

The report includes overview statistics, hot-post details (body + top comments), comment keywords, and trend analysis.
## Long-Image Export

Export posts as a black-on-white JPG long image.

```bash
./export-long-image.sh --posts-file posts.json -o output.jpg
```

`posts.json` format:

```json
[{
  "title": "标题", "author": "作者", "stats": "1.3万赞",
  "desc": "正文摘要", "images": ["https://..."],
  "per_image_text": {"1": "第2张图的说明"}
}]
```

Dependencies: Python 3.10+ and Pillow.
## Caveats

- Cookies are valid for about 30 days; re-scan the QR code when they expire
- The first start downloads a headless browser (about 150 MB)
- Avoid operating the same account from multiple clients at once
- Publishing limits: title ≤ 20 characters, body ≤ 1000 characters, ≤ 50 posts per day
- Headless Linux servers need xvfb (`apt-get install xvfb`; the scripts manage it automatically)
16
.agents/skills/xiaohongshu/scripts/comment.sh
Executable file
@@ -0,0 +1,16 @@
#!/bin/bash
# Post a comment on a Xiaohongshu post

NOTE_ID="$1"
XSEC_TOKEN="$2"
CONTENT="$3"

if [ -z "$NOTE_ID" ] || [ -z "$XSEC_TOKEN" ] || [ -z "$CONTENT" ]; then
    echo "Usage: $0 <note_id> <xsec_token> <comment text>"
    exit 1
fi

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ARGS=$(jq -n --arg fid "$NOTE_ID" --arg tok "$XSEC_TOKEN" --arg ct "$CONTENT" \
    '{"feed_id":$fid,"xsec_token":$tok,"content":$ct}')
"$SCRIPT_DIR/mcp-call.sh" post_comment_to_feed "$ARGS"
260
.agents/skills/xiaohongshu/scripts/export-long-image.py
Normal file
@@ -0,0 +1,260 @@
#!/usr/bin/env python3
"""
Xiaohongshu post long-image export tool

Usage:
    python3 export-long-image.py --posts '<json>' --output output.jpg
    python3 export-long-image.py --posts-file posts.json --output output.jpg

posts JSON format:
[
  {
    "title": "post title",
    "author": "author name",
    "stats": "1.3万赞 5171收藏",
    "desc": "body excerpt; \\n line breaks supported",
    "images": ["url1", "url2", ...],
    "per_image_text": {
      "1": "caption for image 2 (0-indexed)",
      "3": "caption for image 4"
    }
  },
  ...
]

per_image_text is optional: when the original post's text clearly refers to a
specific image, its caption can be attached to that image. Without
per_image_text, all text goes into the text block before the post's first image.
"""

import argparse
import json
import os
import sys
import tempfile
import urllib.request

from PIL import Image, ImageDraw, ImageFont

# --- Configuration ---
WIDTH = 800
PAD = 24
LINE_SPACE = 10
FONT_CANDIDATES = [
    "/System/Library/Fonts/STHeiti Medium.ttc",
    "/System/Library/Fonts/Hiragino Sans GB.ttc",
    "/System/Library/Fonts/Supplemental/Arial Unicode.ttf",
    "/usr/share/fonts/truetype/noto/NotoSansCJK-Regular.ttc",
    "/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc",
]


def find_font():
    for path in FONT_CANDIDATES:
        if os.path.exists(path):
            return path
    return None


def load_font(path, size):
    if path:
        try:
            return ImageFont.truetype(path, size, index=0)
        except Exception:
            pass
    return ImageFont.load_default()


def wrap_text(text, font, max_width, draw):
    lines = []
    for paragraph in text.split("\n"):
        paragraph = paragraph.strip()
        if not paragraph:
            continue
        current = ""
        for char in paragraph:
            test = current + char
            bbox = draw.textbbox((0, 0), test, font=font)
            if bbox[2] - bbox[0] > max_width:
                if current:
                    lines.append(current)
                current = char
            else:
                current = test
        if current:
            lines.append(current)
    return lines


def draw_lines(draw, lines, font, x, y, fill):
    for line in lines:
        draw.text((x, y), line, font=font, fill=fill)
        bbox = draw.textbbox((0, 0), line, font=font)
        y += (bbox[3] - bbox[1]) + LINE_SPACE
    return y


def measure_lines(lines, font, draw):
    h = 0
    for line in lines:
        bbox = draw.textbbox((0, 0), line if line else " ", font=font)
        h += (bbox[3] - bbox[1]) + LINE_SPACE
    return h


def make_text_block(title, author_line, desc, font_path, width):
    """White background, black text, mimicking the Xiaohongshu layout"""
    title_font = load_font(font_path, 32)
    author_font = load_font(font_path, 20)
    body_font = load_font(font_path, 24)

    tmp = Image.new("RGB", (width, 10))
    draw = ImageDraw.Draw(tmp)
    max_w = width - PAD * 2

    title_lines = wrap_text(title, title_font, max_w, draw)
    author_lines = [author_line] if author_line else []
    desc_lines = wrap_text(desc, body_font, max_w, draw) if desc else []

    # Compute the block height
    total_h = PAD
    total_h += measure_lines(title_lines, title_font, draw)
    if author_lines:
        total_h += 4
        total_h += measure_lines(author_lines, author_font, draw)
    if desc_lines:
        total_h += 8
        total_h += measure_lines(desc_lines, body_font, draw)
    total_h += PAD

    # Draw
    block = Image.new("RGB", (width, total_h), (255, 255, 255))
    draw = ImageDraw.Draw(block)

    y = PAD
    y = draw_lines(draw, title_lines, title_font, PAD, y, (33, 33, 33))
    if author_lines:
        y += 4
        y = draw_lines(draw, author_lines, author_font, PAD, y, (153, 153, 153))
    if desc_lines:
        y += 8
        y = draw_lines(draw, desc_lines, body_font, PAD, y, (66, 66, 66))

    return block


def make_image_caption(text, font_path, width):
    """Small caption block shown above an image"""
    font = load_font(font_path, 20)
    tmp = Image.new("RGB", (width, 10))
    draw = ImageDraw.Draw(tmp)
    lines = wrap_text(text, font, width - PAD * 2, draw)

    h = PAD + measure_lines(lines, font, draw) + 8
    block = Image.new("RGB", (width, h), (245, 245, 245))
    draw = ImageDraw.Draw(block)
    draw_lines(draw, lines, font, PAD, PAD // 2, (100, 100, 100))
    return block


def download_image(url, tmpdir, idx):
    """Download an image into the temporary directory"""
    ext = ".webp"
    path = os.path.join(tmpdir, f"img_{idx}{ext}")
    try:
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req, timeout=30) as resp:
            with open(path, "wb") as f:
                f.write(resp.read())
        return path
    except Exception as e:
        print(f"  warning: download failed {url[:60]}... ({e})", file=sys.stderr)
        return None


def main():
    parser = argparse.ArgumentParser(description="Xiaohongshu post long-image export")
    parser.add_argument("--posts", help="Posts JSON string")
    parser.add_argument("--posts-file", help="Posts JSON file path")
    parser.add_argument("--output", "-o", required=True, help="Output JPG path")
    parser.add_argument("--width", type=int, default=800, help="Image width (default 800)")
    parser.add_argument("--quality", type=int, default=88, help="JPEG quality (default 88)")
    args = parser.parse_args()

    global WIDTH
    WIDTH = args.width

    # Load the posts data
    if args.posts:
        posts = json.loads(args.posts)
    elif args.posts_file:
        with open(args.posts_file, "r") as f:
            posts = json.load(f)
    else:
        print("error: --posts or --posts-file is required", file=sys.stderr)
        sys.exit(1)

    font_path = find_font()
    if not font_path:
        print("warning: no CJK font found; text may render incorrectly", file=sys.stderr)

    sep = Image.new("RGB", (WIDTH, 3), (230, 230, 230))
    pieces = []

    with tempfile.TemporaryDirectory() as tmpdir:
        img_counter = 0
        for pi, post in enumerate(posts):
            title = post.get("title", "")
            author = post.get("author", "")
            stats = post.get("stats", "")
            desc = post.get("desc", "")
            images = post.get("images", [])
            per_image_text = post.get("per_image_text", {})

            # Author line
            author_line = author
            if stats:
                author_line = f"{author} · {stats}" if author else stats

            # Main text block
            text_block = make_text_block(title, author_line, desc, font_path, WIDTH)
            pieces.append(text_block)

            # Images
            for i, url in enumerate(images):
                # Is there a caption targeted at this image?
                img_key = str(i)
                if img_key in per_image_text:
                    caption_block = make_image_caption(per_image_text[img_key], font_path, WIDTH)
                    pieces.append(caption_block)

                img_path = download_image(url, tmpdir, img_counter)
                img_counter += 1
                if img_path:
                    try:
                        im = Image.open(img_path).convert("RGB")
                        ratio = WIDTH / im.width
                        im = im.resize((WIDTH, int(im.height * ratio)), Image.LANCZOS)
                        pieces.append(im)
                    except Exception as e:
                        print(f"  warning: image processing failed ({e})", file=sys.stderr)

            # Separator between posts
            if pi < len(posts) - 1:
                pieces.append(sep)

    if not pieces:
        print("error: nothing to stitch", file=sys.stderr)
        sys.exit(1)

    total_h = sum(p.height for p in pieces)
    long_img = Image.new("RGB", (WIDTH, total_h), (255, 255, 255))
    y = 0
    for p in pieces:
        long_img.paste(p, (0, y))
        y += p.height

    long_img.save(args.output, "JPEG", quality=args.quality)
    print(f"done: {args.output} ({WIDTH}x{total_h})")


if __name__ == "__main__":
    main()
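The character-by-character greedy wrapping in `wrap_text` above can be illustrated without Pillow by substituting a constant glyph width for `draw.textbbox`. This sketch exists purely to show the algorithm, not to replace the script's font-aware version:

```python
def wrap_fixed(text, max_chars):
    """Greedy character wrap, mirroring wrap_text with a constant glyph width."""
    lines = []
    for paragraph in text.split("\n"):
        paragraph = paragraph.strip()
        if not paragraph:
            continue  # blank paragraphs are dropped, as in wrap_text
        for start in range(0, len(paragraph), max_chars):
            lines.append(paragraph[start:start + max_chars])
    return lines
```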
20
.agents/skills/xiaohongshu/scripts/export-long-image.sh
Executable file
@@ -0,0 +1,20 @@
#!/bin/bash
# Export Xiaohongshu posts as a long image
#
# Usage:
#   ./export-long-image.sh --posts-file posts.json -o output.jpg
#   ./export-long-image.sh --posts '<json>' -o output.jpg
#
# posts.json example:
# [
#   {
#     "title": "post title",
#     "author": "author",
#     "stats": "1.3万赞 100收藏",
#     "desc": "body excerpt",
#     "images": ["https://...webp", "https://...webp"]
#   }
# ]

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
python3 "$SCRIPT_DIR/export-long-image.py" "$@"
67
.agents/skills/xiaohongshu/scripts/install-check.sh
Executable file
@@ -0,0 +1,67 @@
#!/bin/bash
# Check whether the Xiaohongshu MCP dependencies are installed

set -e

echo "Checking Xiaohongshu MCP dependencies..."
echo ""

XHS_MCP="$HOME/.local/bin/xiaohongshu-mcp"
XHS_LOGIN="$HOME/.local/bin/xiaohongshu-login"

check_binary() {
    local name="$1"
    local path="$2"
    if [ -f "$path" ]; then
        echo "✅ $name: $path"
        return 0
    else
        echo "❌ $name: not found"
        return 1
    fi
}

MISSING=0

check_binary "xiaohongshu-mcp" "$XHS_MCP" || MISSING=1
check_binary "xiaohongshu-login" "$XHS_LOGIN" || MISSING=1

echo ""

# Check jq (required; used to build JSON safely)
if command -v jq &> /dev/null; then
    echo "✅ jq: $(which jq)"
else
    echo "❌ jq: not installed (required; used to build JSON safely)"
    echo "   install: apt-get install jq / brew install jq"
    MISSING=1
fi

# Check python3 (needed by track-topic.py)
if command -v python3 &> /dev/null; then
    echo "✅ python3: $(python3 --version)"
else
    echo "⚠️  python3: not installed (needed for trend tracking)"
fi

echo ""

if [ $MISSING -eq 1 ]; then
    echo "=========================================="
    echo "Missing required dependencies. To install:"
    echo ""
    echo "1. Download the binaries for your platform from GitHub Releases:"
    echo "   https://github.com/xpzouying/xiaohongshu-mcp/releases"
    echo ""
    echo "2. Extract and install into ~/.local/bin/:"
    echo "   mkdir -p ~/.local/bin"
    echo "   mv xiaohongshu-mcp-linux-amd64 ~/.local/bin/xiaohongshu-mcp"
    echo "   mv xiaohongshu-login-linux-amd64 ~/.local/bin/xiaohongshu-login"
    echo "   chmod +x ~/.local/bin/xiaohongshu-*"
    echo ""
    echo "3. Make sure ~/.local/bin is on your PATH (optional)"
    echo "=========================================="
    exit 1
else
    echo "✅ All dependencies are ready"
fi
10
.agents/skills/xiaohongshu/scripts/login.sh
Executable file
@@ -0,0 +1,10 @@
#!/bin/bash
# Launch the Xiaohongshu login tool

XHS_LOGIN="$HOME/.local/bin/xiaohongshu-login"

echo "Launching the Xiaohongshu login tool..."
echo "Note: requires a desktop environment or X11 forwarding"
echo ""

"$XHS_LOGIN"
82
.agents/skills/xiaohongshu/scripts/mcp-call.sh
Executable file
@@ -0,0 +1,82 @@
#!/bin/bash
# Generic MCP call script (Streamable HTTP + session ID)

set -e

TOOL_NAME="$1"
TOOL_ARGS="$2"
MCP_URL="${MCP_URL:-http://localhost:18060/mcp}"
export no_proxy="${no_proxy:+$no_proxy,}localhost,127.0.0.1"

# Check the jq dependency
if ! command -v jq &> /dev/null; then
    echo "error: jq is required (apt-get install jq / brew install jq)"
    exit 1
fi

if [ -z "$TOOL_NAME" ]; then
    echo "Usage: $0 <tool_name> [json_args]"
    echo ""
    echo "Available tools:"
    echo "  check_login_status    - check login status"
    echo "  search_feeds          - search content {\"keyword\": \"...\", \"filters\": {\"sort_by\": \"最新\"}}"
    echo "  list_feeds            - fetch the home feed"
    echo "  get_feed_detail       - fetch post detail {\"feed_id\": \"...\", \"xsec_token\": \"...\"}"
    echo "  post_comment_to_feed  - post a comment {\"feed_id\": \"...\", \"xsec_token\": \"...\", \"content\": \"...\"}"
    echo "  reply_comment_in_feed - reply to a comment {\"feed_id\": \"...\", \"xsec_token\": \"...\", \"content\": \"...\", \"comment_id\": \"...\", \"user_id\": \"...\"}"
    echo "  user_profile          - fetch a user profile {\"user_id\": \"...\", \"xsec_token\": \"...\"}"
    echo "  like_feed             - like {\"feed_id\": \"...\", \"xsec_token\": \"...\"}; unlike: {\"unlike\": true}"
    echo "  favorite_feed         - favorite {\"feed_id\": \"...\", \"xsec_token\": \"...\"}; unfavorite: {\"unfavorite\": true}"
    echo "  get_login_qrcode      - fetch a login QR code"
    echo "  delete_cookies        - delete cookies and reset the login state"
    echo "  publish_content       - publish an image-and-text note"
    echo "  publish_with_video    - publish a video note"
    exit 1
fi

# Validate the tool name: letters, digits, and underscores only
if [[ ! "$TOOL_NAME" =~ ^[a-zA-Z_][a-zA-Z0-9_]*$ ]]; then
    echo "error: invalid tool name: $TOOL_NAME"
    exit 1
fi

[ -z "$TOOL_ARGS" ] && TOOL_ARGS="{}"

# Validate that TOOL_ARGS is well-formed JSON
if ! echo "$TOOL_ARGS" | jq empty 2>/dev/null; then
    echo "error: arguments are not valid JSON: $TOOL_ARGS"
    exit 1
fi

# 1. Initialize and capture the session ID
INIT_RESPONSE=$(curl --noproxy '*' -s -i -X POST "$MCP_URL" \
    -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"openclaw","version":"1.0"}}}')

SESSION_ID=$(echo "$INIT_RESPONSE" | grep -i "Mcp-Session-Id" | awk '{print $2}' | tr -d '\r\n')

if [ -z "$SESSION_ID" ]; then
    echo "error: could not obtain an MCP session ID"
    echo "Make sure the MCP service is running: ./start-mcp.sh"
    exit 1
fi

# 2. Send the initialized notification
curl --noproxy '*' -s -X POST "$MCP_URL" \
    -H "Content-Type: application/json" \
    -H "Mcp-Session-Id: $SESSION_ID" \
    -d '{"jsonrpc":"2.0","method":"notifications/initialized"}' > /dev/null

# 3. Call the tool -- build the JSON with jq to avoid shell injection
CALL_PAYLOAD=$(jq -n \
    --arg name "$TOOL_NAME" \
    --argjson args "$TOOL_ARGS" \
    '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":$name,"arguments":$args}}')

RESULT=$(curl --noproxy '*' -s --max-time 120 -X POST "$MCP_URL" \
    -H "Content-Type: application/json" \
    -H "Mcp-Session-Id: $SESSION_ID" \
    -d "$CALL_PAYLOAD")

# Print the result
echo "$RESULT" | jq .
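The `grep | awk` extraction of the session ID in `mcp-call.sh` can be mirrored in Python. This sketch parses raw response headers the same way (case-insensitively), assuming the `Mcp-Session-Id` header shape the script expects:

```python
def extract_session_id(raw_headers):
    """Return the Mcp-Session-Id value from raw HTTP response headers, or None."""
    for line in raw_headers.splitlines():
        name, _, value = line.partition(":")
        if name.strip().lower() == "mcp-session-id":
            return value.strip()
    return None
```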
17
.agents/skills/xiaohongshu/scripts/post-detail.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Fetch Xiaohongshu post detail

NOTE_ID="$1"
XSEC_TOKEN="$2"

if [ -z "$NOTE_ID" ] || [ -z "$XSEC_TOKEN" ]; then
    echo "Usage: $0 <note_id> <xsec_token>"
    echo ""
    echo "note_id and xsec_token come from search or recommendation results"
    exit 1
fi

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ARGS=$(jq -n --arg fid "$NOTE_ID" --arg tok "$XSEC_TOKEN" \
    '{"feed_id":$fid,"xsec_token":$tok}')
"$SCRIPT_DIR/mcp-call.sh" get_feed_detail "$ARGS"
5
.agents/skills/xiaohongshu/scripts/recommend.sh
Executable file
@@ -0,0 +1,5 @@
#!/bin/bash
# Fetch the Xiaohongshu home recommendation feed

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
"$SCRIPT_DIR/mcp-call.sh" list_feeds
13
.agents/skills/xiaohongshu/scripts/search.sh
Executable file
@@ -0,0 +1,13 @@
#!/bin/bash
# Search Xiaohongshu content

KEYWORD="$1"

if [ -z "$KEYWORD" ]; then
    echo "Usage: $0 <keyword>"
    exit 1
fi

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ARGS=$(jq -n --arg kw "$KEYWORD" '{"keyword":$kw}')
"$SCRIPT_DIR/mcp-call.sh" search_feeds "$ARGS"
166
.agents/skills/xiaohongshu/scripts/start-mcp.sh
Executable file
@@ -0,0 +1,166 @@
#!/bin/bash
# Start the Xiaohongshu MCP service

XHS_MCP="$HOME/.local/bin/xiaohongshu-mcp"
PID_FILE="$HOME/.xiaohongshu/mcp.pid"
LOG_FILE="$HOME/.xiaohongshu/mcp.log"
XVFB_PID_FILE="$HOME/.xiaohongshu/xvfb.pid"
XVFB_DISPLAY_FILE="$HOME/.xiaohongshu/xvfb.display"

# Cookies path (can be overridden via environment variable)
# XHS_COOKIES_SRC: source cookies file (for remote-server setups)
# By default, ~/cookies.json and ~/.xiaohongshu/cookies.json are checked
COOKIES_DST="/tmp/cookies.json"

mkdir -p "$HOME/.xiaohongshu"

# Detect whether a display is available (desktop environment)
has_display() {
    [ -n "$DISPLAY" ] && xdpyinfo >/dev/null 2>&1
}

# Start Xvfb automatically when no desktop environment is present
ensure_display() {
    if has_display; then
        return 0
    fi

    # Reuse an Xvfb instance that is already running
    if [ -f "$XVFB_PID_FILE" ]; then
        local pid
        pid=$(cat "$XVFB_PID_FILE")
        if kill -0 "$pid" 2>/dev/null; then
            DISPLAY=$(cat "$XVFB_DISPLAY_FILE" 2>/dev/null || echo ":99")
            export DISPLAY
            echo "Reusing existing Xvfb (PID: $pid, DISPLAY=$DISPLAY)"
            return 0
        fi
    fi

    # Check that Xvfb is installed
    if ! command -v Xvfb >/dev/null 2>&1; then
        echo "⚠ No desktop environment detected, and Xvfb is not installed."
        echo "  Install it with: sudo apt-get install -y xvfb"
        echo "  Then re-run this script; it will configure itself automatically."
        exit 1
    fi

    echo "No desktop environment detected; starting an Xvfb virtual display..."

    # Pick an available display number automatically (99-109)
    local display_num=""
    local d
    for d in $(seq 99 109); do
        if [ ! -e "/tmp/.X${d}-lock" ]; then
            display_num=$d
            break
        fi
        # Lock file exists but its process is dead: clean up and reuse
        local lock_pid
        lock_pid=$(cat "/tmp/.X${d}-lock" 2>/dev/null | tr -d ' ')
        if [ -n "$lock_pid" ] && ! kill -0 "$lock_pid" 2>/dev/null; then
            rm -f "/tmp/.X${d}-lock" "/tmp/.X11-unix/X${d}" 2>/dev/null
            if [ ! -e "/tmp/.X${d}-lock" ]; then
                display_num=$d
                break
            fi
        fi
    done

    if [ -z "$display_num" ]; then
        echo "✗ No available display number found (:99-:109 are all in use)"
        exit 1
    fi

    # -ac disables access control so chromium can connect to the virtual
    # display (headless automation only)
    Xvfb ":${display_num}" -screen 0 1024x768x24 -ac >/dev/null 2>&1 &
    echo $! > "$XVFB_PID_FILE"
    echo ":${display_num}" > "$XVFB_DISPLAY_FILE"
    export DISPLAY=":${display_num}"
    sleep 1

    if kill -0 "$(cat "$XVFB_PID_FILE")" 2>/dev/null; then
        echo "✓ Xvfb started (DISPLAY=:${display_num})"
    else
        echo "✗ Failed to start Xvfb"
        exit 1
    fi
}

# Sync cookies (several possible sources are supported)
sync_cookies() {
    local src=""

    # Prefer the path given via the environment variable
    if [ -n "$XHS_COOKIES_SRC" ] && [ -f "$XHS_COOKIES_SRC" ]; then
        src="$XHS_COOKIES_SRC"
    elif [ -f "$HOME/cookies.json" ]; then
        src="$HOME/cookies.json"
    elif [ -f "$HOME/.xiaohongshu/cookies.json" ]; then
        src="$HOME/.xiaohongshu/cookies.json"
    fi

    if [ -n "$src" ]; then
        if [ ! -f "$COOKIES_DST" ] || [ "$src" -nt "$COOKIES_DST" ]; then
            install -m 600 "$src" "$COOKIES_DST"
            echo "Synced cookies: $src -> $COOKIES_DST"
        fi
    else
        # Make sure any existing cookies file has the right permissions
        [ -f "$COOKIES_DST" ] && chmod 600 "$COOKIES_DST"
    fi
}

sync_cookies
ensure_display

# Check whether the service is already running
if [ -f "$PID_FILE" ]; then
    PID=$(cat "$PID_FILE")
    if kill -0 "$PID" 2>/dev/null; then
        echo "MCP service is already running (PID: $PID)"
        echo "To restart, run stop-mcp.sh first"
        exit 0
    fi
fi

# Parse arguments
HEADLESS="true"
PORT="${XHS_MCP_PORT:-18060}"
for arg in "$@"; do
    case $arg in
        --headless=false)
            HEADLESS="false"
            ;;
        --port=*)
            PORT="${arg#*=}"
            ;;
    esac
done

# Validate the port number
if [[ ! "$PORT" =~ ^[0-9]+$ ]]; then
    echo "Error: invalid port number: $PORT"
    exit 1
fi

# Start the service
echo "Starting Xiaohongshu MCP service..."
if [ "$HEADLESS" = "false" ]; then
    nohup "$XHS_MCP" -port ":${PORT}" -headless=false > "$LOG_FILE" 2>&1 &
else
    nohup "$XHS_MCP" -port ":${PORT}" > "$LOG_FILE" 2>&1 &
fi

echo $! > "$PID_FILE"
sleep 2

# Verify startup
if kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
    echo "✓ MCP service started (PID: $(cat "$PID_FILE"))"
    echo "  Endpoint: http://localhost:${PORT}/mcp"
    echo "  Log: $LOG_FILE"
else
    echo "✗ Startup failed; check the log: $LOG_FILE"
    cat "$LOG_FILE"
    exit 1
fi
5
.agents/skills/xiaohongshu/scripts/status.sh
Executable file
@@ -0,0 +1,5 @@
#!/bin/bash
# Check Xiaohongshu login status

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
"$SCRIPT_DIR/mcp-call.sh" check_login_status
29
.agents/skills/xiaohongshu/scripts/stop-mcp.sh
Executable file
@@ -0,0 +1,29 @@
#!/bin/bash
# Stop the Xiaohongshu MCP service

PID_FILE="$HOME/.xiaohongshu/mcp.pid"
XVFB_PID_FILE="$HOME/.xiaohongshu/xvfb.pid"

if [ -f "$PID_FILE" ]; then
    PID=$(cat "$PID_FILE")
    if kill -0 "$PID" 2>/dev/null; then
        kill "$PID"
        rm -f "$PID_FILE"
        echo "✓ MCP service stopped"
    else
        echo "Process not found; cleaning up PID file"
        rm -f "$PID_FILE"
    fi
else
    echo "MCP service is not running"
fi

# Clean up Xvfb
if [ -f "$XVFB_PID_FILE" ]; then
    XVFB_PID=$(cat "$XVFB_PID_FILE")
    if kill -0 "$XVFB_PID" 2>/dev/null; then
        kill "$XVFB_PID"
        echo "✓ Xvfb stopped"
    fi
    rm -f "$XVFB_PID_FILE" "$HOME/.xiaohongshu/xvfb.display"
fi
313
.agents/skills/xiaohongshu/scripts/track-topic.py
Executable file
@@ -0,0 +1,313 @@
#!/usr/bin/env python3
"""
Xiaohongshu trending-topic tracker

Usage:
    python track-topic.py <topic> [--limit N] [--feishu] [--output FILE]

Examples:
    python track-topic.py "DeepSeek" --limit 5 --feishu
    python track-topic.py "春节旅游" --limit 10 --output report.md
"""

import argparse
import json
import subprocess
import sys
import os
from datetime import datetime
from pathlib import Path

# Resolve the script directory
SCRIPT_DIR = Path(__file__).parent.resolve()
XHS_SCRIPTS = SCRIPT_DIR  # now lives directly under xiaohongshu/scripts

# Feishu skill path (several possible locations are supported)
def find_feishu_scripts() -> Path:
    """Locate the scripts directory of the feishu-docs skill."""
    # Only search within known skill directories
    allowed_roots = [
        SCRIPT_DIR.parent.parent,  # sibling skill directory
        Path.home() / ".openclaw" / "workspace" / "skills",
        Path.home() / ".claude" / "skills",
    ]
    for root in allowed_roots:
        candidate = (root / "feishu-docs" / "scripts").resolve()
        # Verify the resolved path is still under an allowed root
        # (guards against symlink escapes)
        if candidate.is_dir() and any(
            str(candidate).startswith(str(r.resolve()) + os.sep) for r in allowed_roots
        ):
            return candidate
    return allowed_roots[0] / "feishu-docs" / "scripts"  # default path (may not exist)

FEISHU_SCRIPTS = find_feishu_scripts()


def call_xhs_mcp(tool: str, args: dict) -> dict:
    """Invoke a Xiaohongshu MCP tool."""
    mcp_call = XHS_SCRIPTS / "mcp-call.sh"
    if not mcp_call.exists():
        print(f"❌ xiaohongshu skill not found: {mcp_call}", file=sys.stderr)
        sys.exit(1)

    result = subprocess.run(
        [str(mcp_call), tool, json.dumps(args)],
        capture_output=True, text=True, timeout=120
    )

    if result.returncode != 0:
        print(f"❌ MCP call failed: {result.stderr}", file=sys.stderr)
        return {}

    try:
        response = json.loads(result.stdout)
        if "result" in response and "content" in response["result"]:
            text = response["result"]["content"][0].get("text", "{}")
            return json.loads(text) if text else {}
        elif "error" in response:
            print(f"⚠️ MCP error: {response['error'].get('message', 'Unknown')}", file=sys.stderr)
            return {}
        return response
    except json.JSONDecodeError:
        return {}


def search_feeds(keyword: str) -> list:
    """Search Xiaohongshu content."""
    print(f"🔍 Searching: {keyword}")
    result = call_xhs_mcp("search_feeds", {"keyword": keyword})
    feeds = result.get("feeds", [])
    # Filter out hot_query entries
    return [f for f in feeds if f.get("modelType") == "note"]


def get_feed_detail(feed_id: str, xsec_token: str, load_comments: bool = True) -> dict:
    """Fetch the details of a post."""
    args = {
        "feed_id": feed_id,
        "xsec_token": xsec_token,
        "load_all_comments": load_comments
    }
    result = call_xhs_mcp("get_feed_detail", args)
    return result.get("data", {})


def format_timestamp(ts: int) -> str:
    """Format a millisecond timestamp."""
    if not ts:
        return "unknown"
    try:
        dt = datetime.fromtimestamp(ts / 1000)
        return dt.strftime("%Y-%m-%d %H:%M")
    except (ValueError, OSError, OverflowError):
        return "unknown"


def get_comments_list(post: dict) -> list:
    """Safely extract the comment list from a post."""
    comments = post.get("comments", {})
    if isinstance(comments, dict):
        return comments.get("list", [])
    elif isinstance(comments, list):
        return comments
    return []


def generate_report(keyword: str, posts: list) -> str:
    """Generate a Markdown report."""
    now = datetime.now().strftime("%Y-%m-%d %H:%M")

    report = f"""# 🔥 Xiaohongshu Trending Topic Report

**Topic:** {keyword}
**Generated:** {now}
**Posts collected:** {len(posts)}

---

## 📊 Overview

"""

    # Summary statistics
    total_likes = sum(int(p.get("note", {}).get("interactInfo", {}).get("likedCount", 0) or 0) for p in posts)
    total_comments = sum(len(get_comments_list(p)) for p in posts)

    report += f"""| Metric | Value |
|------|------|
| Total posts | {len(posts)} |
| Total likes | {total_likes:,} |
| Total comments | {total_comments} |

---

## 📝 Top Posts

"""

    for i, post in enumerate(posts, 1):
        note = post.get("note", {})
        comments = get_comments_list(post)

        title = note.get("title", "Untitled")
        desc = note.get("desc", "")
        user = note.get("user", {}).get("nickname", "Anonymous")
        time_str = format_timestamp(note.get("time"))
        interact = note.get("interactInfo", {})
        likes = interact.get("likedCount", "0")
        collected = interact.get("collectedCount", "0")

        report += f"""### {i}. {title}

**Author:** {user}
**Time:** {time_str}
**Engagement:** ❤️ {likes} likes · ⭐ {collected} saves

**Body:**

> {desc[:500]}{"..." if len(desc) > 500 else ""}

"""

        if comments:
            report += f"""**Top comments ({len(comments)} total):**

"""
            for comment in list(comments)[:5]:
                c_user = comment.get("userInfo", {}).get("nickname", "Anonymous")
                c_content = comment.get("content", "")
                c_likes = comment.get("likeCount", 0)
                report += f"- **{c_user}** ({c_likes} likes): {c_content[:100]}\n"

            if len(comments) > 5:
                report += f"- *... {len(comments) - 5} more comments*\n"

        report += "\n---\n\n"

    # Comment-section hot topics
    report += """## 💬 Comment-Section Keywords

"""

    # Simple keyword extraction (high-frequency word counting)
    all_comments = []
    for post in posts:
        for c in get_comments_list(post):
            all_comments.append(c.get("content", ""))

    if all_comments:
        report += f"{len(all_comments)} comments collected; main discussion directions:\n\n"
        # More sophisticated NLP analysis could go here; simplified for now
        report += "- The topic draws substantial user attention\n"
        report += "- The comment sections are active\n"
    else:
        report += "Not enough comment data for analysis\n"

    report += """
---

## 📈 Trend Analysis

Based on the posts and comments above, the topic shows the following characteristics on Xiaohongshu:

1. **Heat index**: """ + ("🔥🔥🔥 high" if total_likes > 1000 else "🔥🔥 medium" if total_likes > 100 else "🔥 low") + """
2. **Engagement**: """ + ("active" if total_comments > 50 else "moderate" if total_comments > 10 else "low") + """
3. **Content type**: mostly image-and-text notes

---

*Report generated automatically by the OpenClaw Xiaohongshu topic tracker*
"""

    return report


def export_to_feishu(title: str, content: str) -> str:
    """Export the report to a Feishu document."""
    import_script = FEISHU_SCRIPTS / "doc-import.sh"
    if not import_script.exists():
        print(f"❌ feishu-docs skill not found: {import_script}", file=sys.stderr)
        return ""

    print("📤 Exporting to Feishu...")

    # Write to a temporary file
    tmp_file = Path("/tmp/xhs_report.md")
    tmp_file.write_text(content, encoding="utf-8")

    result = subprocess.run(
        [str(import_script), title, "--file", str(tmp_file)],
        capture_output=True, text=True, timeout=60
    )

    if result.returncode != 0:
        print(f"⚠️ Feishu export failed: {result.stderr}", file=sys.stderr)
        return ""

    # Print the returned document link
    output = result.stdout
    print(output)
    return output


def main():
    parser = argparse.ArgumentParser(description="Xiaohongshu trending-topic tracker")
    parser.add_argument("keyword", help="topic/keyword to track")
    parser.add_argument("--limit", "-n", type=int, default=10, help="number of posts to fetch (default 10)")
    parser.add_argument("--feishu", "-f", action="store_true", help="export to a Feishu document")
    parser.add_argument("--output", "-o", help="output Markdown file path")
    parser.add_argument("--no-comments", action="store_true", help="skip fetching comments")

    args = parser.parse_args()

    # 1. Search for posts
    feeds = search_feeds(args.keyword)
    if not feeds:
        print("❌ No matching posts found")
        sys.exit(1)

    print(f"✅ Found {len(feeds)} posts")

    # 2. Fetch details
    posts = []
    for i, feed in enumerate(feeds[:args.limit]):
        feed_id = feed.get("id")
        xsec_token = feed.get("xsecToken")
        title = feed.get("noteCard", {}).get("displayTitle", "")

        print(f"📖 [{i+1}/{min(len(feeds), args.limit)}] Fetching: {title[:30]}...")

        detail = get_feed_detail(feed_id, xsec_token, not args.no_comments)
        if detail:
            posts.append(detail)

    if not posts:
        print("❌ Could not fetch any post details")
        sys.exit(1)

    print(f"✅ Fetched details for {len(posts)} posts")

    # 3. Generate the report
    print("📝 Generating report...")
    report = generate_report(args.keyword, posts)

    # 4. Output
    if args.output:
        output_path = Path(args.output)
        output_path.write_text(report, encoding="utf-8")
        print(f"✅ Report saved: {output_path}")

    if args.feishu:
        doc_title = f"XHS topic tracking: {args.keyword} ({datetime.now().strftime('%m-%d')})"
        export_to_feishu(doc_title, report)

    if not args.output and not args.feishu:
        # Default: print to stdout
        print("\n" + "="*60 + "\n")
        print(report)

    return report


if __name__ == "__main__":
    main()
5
.agents/skills/xiaohongshu/scripts/track-topic.sh
Executable file
@@ -0,0 +1,5 @@
#!/bin/bash
# Xiaohongshu trending-topic tracker

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
python3 "$SCRIPT_DIR/track-topic.py" "$@"
17
.agents/skills/xiaohongshu/scripts/user-profile.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Fetch a Xiaohongshu user's profile page

USER_ID="$1"
XSEC_TOKEN="$2"

if [ -z "$USER_ID" ] || [ -z "$XSEC_TOKEN" ]; then
    echo "Usage: $0 <user_id> <xsec_token>"
    echo ""
    echo "user_id and xsec_token can be obtained from search or feed results"
    exit 1
fi

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ARGS=$(jq -n --arg uid "$USER_ID" --arg tok "$XSEC_TOKEN" \
    '{"user_id":$uid,"xsec_token":$tok}')
"$SCRIPT_DIR/mcp-call.sh" user_profile "$ARGS"
98
.agents/skills/xiaohongshu/tools/xhs-downloader/README.md
Normal file
@@ -0,0 +1,98 @@
# XHS-Downloader Helper Tools

Helper scripts for use with [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader): they download bookmarked/liked Xiaohongshu notes and export them in OpenClaw memory-library format.

## Dependencies

Install XHS-Downloader first:

```bash
git clone https://github.com/JoeanAmier/XHS-Downloader.git
cd XHS-Downloader
pip install -r requirements.txt
```

## Workflow

### 1. Collect bookmark/like links (via the userscript)

Copying links by hand is slow; the userscript that ships with XHS-Downloader can extract them in bulk:

**Install the script:**

1. Install the [Tampermonkey](https://www.tampermonkey.net/) browser extension
2. Install the userscript: [XHS-Downloader.js](https://raw.githubusercontent.com/JoeanAmier/XHS-Downloader/refs/heads/master/static/XHS-Downloader.js)

**Extract the links:**

1. Open the [Xiaohongshu web app](https://www.xiaohongshu.com) and sign in
2. Go to your profile → the **Bookmarks** or **Likes** page
3. Click the Tampermonkey icon and choose:
   - `提取收藏作品链接` (extract bookmarked post links)
   - `提取点赞作品链接` (extract liked post links)
4. The script auto-scrolls the page to load all the content
5. When extraction finishes, the links are copied to the clipboard

**Save the links:**

Paste the clipboard contents into a `links.md` file:

```
https://www.xiaohongshu.com/explore/xxx?xsec_token=...
https://www.xiaohongshu.com/explore/yyy?xsec_token=...
```

> **Note**: Auto-scrolling is disabled by default and must be enabled in the script settings. Enabling it may trigger anti-bot measures, so use it in moderation.

### 2. Batch download

```bash
# Run from the XHS-Downloader directory
python batch_download.py links.md
```

The downloaded data is stored in `Volume/Download/ExploreData.db`.

### 3. Export the memory library

**Option A: export to a single file**

```bash
python export_memory.py
# produces xhs_memory.md
```

**Option B: export to multiple files (recommended for OpenClaw)**

```bash
python export_to_workspace.py
# writes to ~/.openclaw/workspace/xhs-memory/
```

### 4. Configure OpenClaw memory search

Edit `~/.openclaw/openclaw.json` and add:

```json
{
  "memorySearch": {
    "extraPaths": [
      "~/.openclaw/workspace/xhs-memory"
    ]
  }
}
```

Or, if you used the single-file export, place `xhs_memory.md` in the workspace directory.

## Scripts

| Script | Purpose |
|------|------|
| `batch_download.py` | Batch-download notes and record them in the database |
| `export_memory.py` | Export to a single Markdown file |
| `export_to_workspace.py` | Export to individual files (named by date + title) |

## Credits

- [XHS-Downloader](https://github.com/JoeanAmier/XHS-Downloader) - GPL-3.0 License
@@ -0,0 +1,48 @@
#!/usr/bin/env python
"""
Batch-download Xiaohongshu notes

Usage:
    python batch_download.py [links_file]

Reads links.md from the current directory by default
"""
import asyncio
import sys
from pathlib import Path

try:
    from source import XHS
except ImportError:
    print("Error: run this script from the XHS-Downloader project directory")
    print("or install it as a dependency: pip install -e /path/to/XHS-Downloader")
    sys.exit(1)


async def main():
    # Read the links file
    links_file = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("links.md")

    if not links_file.exists():
        print(f"Error: links file not found: {links_file}")
        print("Usage: python batch_download.py [links_file]")
        sys.exit(1)

    links = links_file.read_text().strip()
    link_count = len([line for line in links.split() if line.startswith("http")])

    print(f"Starting download, {link_count} links in total...")

    async with XHS(
        work_path="./Volume",
        folder_name="Download",
        record_data=True,      # record post data in the database
        download_record=True,  # skip posts that were already downloaded
        author_archive=True,   # one folder per author
    ) as xhs:
        result = await xhs.extract(links, download=True)
        print(f"Done! Processed {len(result)} posts")


if __name__ == "__main__":
    asyncio.run(main())
@@ -0,0 +1,67 @@
#!/usr/bin/env python
"""
Export notes from the XHS-Downloader database to a single Markdown file

Usage:
    python export_memory.py [db_path] [output_file]

Defaults:
    db_path: Volume/Download/ExploreData.db
    output_file: xhs_memory.md
"""
import sqlite3
import sys
from pathlib import Path
from datetime import datetime


def export_memory(db_path: Path = None, output_file: Path = None):
    db_path = db_path or Path("Volume/Download/ExploreData.db")
    output_file = output_file or Path("xhs_memory.md")

    if not db_path.exists():
        print(f"Error: database not found: {db_path}")
        return False

    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Query all posts; the column names are the Chinese identifiers used by
    # the XHS-Downloader schema (title, publish time, link, description,
    # author nickname, tags)
    cursor.execute("""
        SELECT 作品标题, 发布时间, 作品链接, 作品描述, 作者昵称, 作品标签
        FROM explore_data
        ORDER BY 发布时间 DESC
    """)

    rows = cursor.fetchall()
    conn.close()

    if not rows:
        print("The database is empty")
        return False

    # Build the Markdown output
    output = "# Xiaohongshu Bookmarks/Likes Memory\n\n"
    output += f"> Exported: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n"
    output += f"> {len(rows)} notes in total\n\n---\n\n"

    for i, (title, time, link, desc, author, tags) in enumerate(rows, 1):
        output += f"## {i}. {title or 'Untitled'}\n\n"
        output += f"- **Author**: {author or 'unknown'}\n"
        output += f"- **Time**: {time or 'unknown'}\n"
        output += f"- **Link**: {link or 'none'}\n"
        if tags:
            output += f"- **Tags**: {tags}\n"
        output += f"\n### Content\n\n{desc or 'No content'}\n\n---\n\n"

    # Save the file
    output_file.write_text(output, encoding="utf-8")
    print(f"Export complete: {output_file.absolute()}")
    print(f"{len(rows)} notes in total")
    return True


if __name__ == "__main__":
    db_path = Path(sys.argv[1]) if len(sys.argv) > 1 else None
    output_file = Path(sys.argv[2]) if len(sys.argv) > 2 else None
    export_memory(db_path, output_file)
@@ -0,0 +1,101 @@
#!/usr/bin/env python
"""
Export Xiaohongshu notes as individual files into the OpenClaw workspace

Usage:
    python export_to_workspace.py [db_path] [output_dir]

Defaults:
    db_path: Volume/Download/ExploreData.db
    output_dir: ~/.openclaw/workspace/xhs-memory

The export format mirrors gpt-history: one file per note, named
YYYY-MM-<title>.md
"""
import sqlite3
import re
import sys
from pathlib import Path


def sanitize_filename(name: str, max_len: int = 50) -> str:
    """Clean a filename by stripping illegal characters."""
    name = re.sub(r'[<>:"/\\|?*\n\r\t]', '', name)
    name = re.sub(r'\s+', '-', name.strip())
    name = re.sub(r'-+', '-', name)
    name = name.strip('-')
    if len(name) > max_len:
        name = name[:max_len].rstrip('-')
    return name or "untitled"


def export_to_workspace(db_path: Path = None, output_dir: Path = None):
    db_path = db_path or Path("Volume/Download/ExploreData.db")
    output_dir = output_dir or Path.home() / ".openclaw/workspace/xhs-memory"
    output_dir.mkdir(parents=True, exist_ok=True)

    if not db_path.exists():
        print(f"Error: database not found: {db_path}")
        return False

    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # The column names are the Chinese identifiers used by the
    # XHS-Downloader schema (title, publish time, link, description,
    # author nickname, tags)
    cursor.execute("""
        SELECT 作品标题, 发布时间, 作品链接, 作品描述, 作者昵称, 作品标签
        FROM explore_data
        ORDER BY 发布时间 DESC
    """)

    rows = cursor.fetchall()
    conn.close()

    if not rows:
        print("The database is empty")
        return False

    count = 0
    for title, time, link, desc, author, tags in rows:
        # Parse the time: 2026-01-25_18:17:43 -> 2026-01
        if time:
            date_prefix = time[:7]  # YYYY-MM
            full_date = time.replace('_', ' ')
        else:
            date_prefix = "unknown"
            full_date = "unknown"

        # Build the filename
        safe_title = sanitize_filename(title or "untitled")
        filename = f"{date_prefix}-{safe_title}.md"
        filepath = output_dir / filename

        # Avoid filename collisions
        counter = 1
        while filepath.exists():
            filename = f"{date_prefix}-{safe_title}-{counter}.md"
            filepath = output_dir / filename
            counter += 1

        # Build the content
        content = f"# {title or 'Untitled'}\n\n"
        content += "**Source**: Xiaohongshu bookmarks/likes\n\n"
        content += f"**Date**: {full_date}\n\n"
        content += f"**Author**: {author or 'unknown'}\n\n"
        content += f"**Link**: {link or 'none'}\n\n"
        if tags:
            content += f"**Tags**: {tags}\n\n"
        content += "---\n\n"
        content += "## Content\n\n"
        content += f"{desc or 'No content'}\n"

        filepath.write_text(content, encoding="utf-8")
        count += 1

    print(f"Export complete: {output_dir}")
    print(f"{count} files generated")
    return True


if __name__ == "__main__":
    db_path = Path(sys.argv[1]) if len(sys.argv) > 1 else None
    output_dir = Path(sys.argv[2]) if len(sys.argv) > 2 else None
    export_to_workspace(db_path, output_dir)
1
.claude/skills/xiaohongshu
Symbolic link
@@ -0,0 +1 @@
../../.agents/skills/xiaohongshu
1
.claude/skills/xiaohongshu-note-analyzer
Symbolic link
@@ -0,0 +1 @@
../../.agents/skills/xiaohongshu-note-analyzer
1
.codebuddy/skills/xiaohongshu
Symbolic link
@@ -0,0 +1 @@
../../.agents/skills/xiaohongshu
1
.codebuddy/skills/xiaohongshu-note-analyzer
Symbolic link
@@ -0,0 +1 @@
../../.agents/skills/xiaohongshu-note-analyzer
1
.kilocode/skills/xiaohongshu
Symbolic link
@@ -0,0 +1 @@
../../.agents/skills/xiaohongshu
1
.kilocode/skills/xiaohongshu-note-analyzer
Symbolic link
@@ -0,0 +1 @@
../../.agents/skills/xiaohongshu-note-analyzer
15
skills-lock.json
Normal file
@@ -0,0 +1,15 @@
{
  "version": 1,
  "skills": {
    "xiaohongshu": {
      "source": "zhjiang22/openclaw-xhs",
      "sourceType": "github",
      "computedHash": "1075f772f0d5fee092fa63c3eac324430e0712b8a7f2c3d42245b736f618978b"
    },
    "xiaohongshu-note-analyzer": {
      "source": "softbread/xiaohongshu-doctor",
      "sourceType": "github",
      "computedHash": "a7a84eb52d3c6fa15c8dc4cf3f25ab673cdc068be2bdfc6174ea7f53e7f3cab9"
    }
  }
}
1
skills/xiaohongshu
Symbolic link
@@ -0,0 +1 @@
../.agents/skills/xiaohongshu
1
skills/xiaohongshu-note-analyzer
Symbolic link
@@ -0,0 +1 @@
../.agents/skills/xiaohongshu-note-analyzer