<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Hekmon</title>
        <link>https://blog.hekmon.com</link>
        <description>Hekmon's blog</description>
        <lastBuildDate>Mon, 26 May 2025 09:46:28 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Hekmon</title>
            <url>https://blog.hekmon.com/favicon.ico</url>
            <link>https://blog.hekmon.com</link>
        </image>
        <copyright>All rights reserved Hekmon 2025</copyright>
        <item>
            <title><![CDATA[Complete Guide to Resolving DeepSeek Busy Errors and Restrictions]]></title>
            <link>https://blog.hekmon.com/blogs/deal-with-deepseek-busy</link>
            <guid>https://blog.hekmon.com/blogs/deal-with-deepseek-busy</guid>
            <pubDate>Sun, 09 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[A complete guide to resolving DeepSeek busy errors and restrictions]]></description>
            <content:encoded><![CDATA[
## Complete Guide to Resolving DeepSeek Busy Errors and Restrictions (2025)

![AI alternatives banner](https://images.pexels.com/photos/8386440/pexels-photo-8386440.jpeg?auto=compress&cs=tinysrgb&w=800)  
*The AI tools landscape in 2025 - plenty of options beyond DeepSeek*

## Why Does DeepSeek Show a "Busy" Status or Get Blocked?
Server overloads and regional restrictions have made DeepSeek API access unstable. The main causes include:
- 🚨 **Mass adoption** of its cost-effective API (roughly 1/50th of OpenAI's pricing)
- 🔒 **Geo-blocking** in certain regions due to compliance concerns
- 🔥 **Popularity of the R1 model**, which rivals GPT-4 on coding/math tasks
- ⚡ **The local-deployment trend**, which reduces cloud dependency

---

### Complete DeepSeek Access Solutions

#### 1. Local Deployment (Recommended)
<iframe 
  src="https://www.youtube.com/embed/e-EG3B5Uj78" 
  title="DeepSeek R1 Local Setup Tutorial"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
  allowFullScreen
></iframe>
*Video guide: deploy DeepSeek-R1 locally*

| Tool | Key Features |
|:-----|:------------|
| **LM Studio** | One-click R1 deployment, CPU/GPU support |
| **Ollama** | Terminal-based, ideal for developers |
| **SiliconFlow** | Enterprise-grade containerization |

#### 2. China-Friendly Platforms
![Chinese AI ecosystem](https://images.pexels.com/photos/6150603/pexels-photo-6150603.jpeg?auto=compress&cs=tinysrgb&w=800)  
*Platforms accessible from mainland China*

| Platform | Specialty | Free Tier |
|:---------|:----------|:-----------|
| [Tencent Cloud](https://cloud.tencent.com/document/product/1772/115969) | Full R1/V3 API access | ✔️ Until 2025-02-25 |
| [Metaso Search](https://metaso.cn/) | Web-connected R1 model | Unlimited |
| [Flowith](https://flowith.io/) | Knowledge base integration | Permanent free plan |

#### 3. Global Alternatives
![Global AI tools](https://images.pexels.com/photos/3861969/pexels-photo-3861969.jpeg?auto=compress&cs=tinysrgb&w=800)  
*International platforms with DeepSeek-like capabilities*

| Service | Strength | Best For |
|:--------|:---------|:---------|
| [Fireworks.ai](https://fireworks.ai/) | R1 API with GPT-4-level performance | Developers |
| [OpenRouter](https://openrouter.ai/) | Multi-model gateway | Budget users |
| [Claude 3](https://www.anthropic.com) | Ethical AI guardrails | Researchers |
| [Llama 3](https://ai.meta.com/llama) | Open-source customization | Privacy-focused users |

---

### Top 5 DeepSeek Alternatives Compared

#### 1. **Claude 3** - The Ethical Powerhouse
- ✅ 200K-token context window
- ✅ Constitutional AI safeguards
- ❌ No free tier

#### 2. **Google Gemini** - Real-Time Data Master
- ✅ Live web data integration
- ✅ Google Workspace compatibility
- ❌ Limited creativity

#### 3. **Llama 3** - Privacy Champion
- ✅ Fully open-source
- ✅ Local GPU deployment
- ❌ Requires coding skills

#### 4. **GitHub Copilot** - Code Specialist
- ✅ IDE integration
- ✅ 40+ programming languages
- ❌ Code-only focus

#### 5. **Perplexity AI** - Research Optimized
- ✅ Citation-backed answers
- ✅ Real-time fact-checking
- ❌ No API access

---

### Enterprise Solutions Matrix
![Cloud providers](https://images.pexels.com/photos/325229/pexels-photo-325229.jpeg?auto=compress&cs=tinysrgb&w=800)  
*Enterprise-grade AI deployment options*

| Provider | Specialization | Compliance |
|:---------|:---------------|:-----------|
| Alibaba Cloud | High-volume processing | China GB standards |
| Azure | Hybrid cloud AI | GDPR-ready |
| NVIDIA NIM | GPU-optimized inference | Global certifications |

---

### Pro Tips for Uninterrupted Access
1. **Bookmark** the [DeepSeek status page](https://status.deepseek.com)  
2. Use **VPN rotation** to work around geo-blocks  
3. Combine **local R1 deployment** with cloud APIs  
4. Monitor the [AI subreddit](https://reddit.com/r/machinelearning) for updates  ]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[How to Deal with "DeepSeek Is Busy" Errors]]></title>
            <link>https://blog.hekmon.com/blogs/deal-with-deepseek-is-busy</link>
            <guid>https://blog.hekmon.com/blogs/deal-with-deepseek-is-busy</guid>
            <pubDate>Sun, 09 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Several alternatives for dealing with "DeepSeek is busy" errors]]></description>
            <content:encoded><![CDATA[
## Overcoming "DeepSeek Is Busy" Errors and Bans: Complete Guide with Alternatives (2025)

![AI Alternatives Banner](https://images.pexels.com/photos/8386440/pexels-photo-8386440.jpeg?auto=compress&cs=tinysrgb&w=800)  
*AI tools landscape in 2025 - Multiple options available beyond DeepSeek*

## Why Does DeepSeek Show a "Busy" Status or Get Banned?
Recent server overloads and regional restrictions have made DeepSeek API access unstable. Key reasons include:
- 🚨 **Mass adoption** of its cost-effective API (roughly 1/50th of OpenAI's pricing)
- 🔒 **Geo-blocking** in certain regions due to compliance issues
- 🔥 **R1 model popularity**, with coding/math performance rivaling GPT-4
- ⚡ **Local deployment trends** reducing cloud dependency

---

### Complete DeepSeek Access Solutions

#### 1. Local Deployment (Recommended)
<iframe 
  src="https://www.youtube.com/embed/e-EG3B5Uj78" 
  title="DeepSeek R1 Local Setup Tutorial"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
  allowFullScreen
></iframe>
*Video Guide: Deploy DeepSeek-R1 Locally*

| Tool | Key Features |
|:-----|:------------|
| **LM Studio** | 1-click R1 deployment, CPU/GPU support |
| **Ollama** | Terminal-based, ideal for developers |
| **SiliconFlow** | Enterprise-grade containerization |
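
For the terminal-based route, a minimal sketch of talking to a locally running Ollama server from Python follows. It assumes Ollama's default port 11434 and that a model tagged `deepseek-r1` has already been pulled; both are assumptions to adjust for your setup.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama instance and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama daemon and `ollama pull deepseek-r1`):
#   print(ask("deepseek-r1", "Why is the sky blue?"))
```

With `stream` set to `False`, the server returns one JSON object whose `response` field holds the full completion, which keeps the client trivially simple.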

#### 2. Chinese-Friendly Platforms
![Chinese AI Ecosystem](https://images.pexels.com/photos/6150603/pexels-photo-6150603.jpeg?auto=compress&cs=tinysrgb&w=800)  
*Platforms accessible from mainland China*

| Platform | Specialty | Free Tier |
|:---------|:----------|:-----------|
| [Tencent Cloud](https://cloud.tencent.com/document/product/1772/115969) | Full R1/V3 API access | ✔️ Until 2025-02-25 |
| [Metaso Search](https://metaso.cn/) | Web-connected R1 model | Unlimited |
| [Flowith](https://flowith.io/) | Knowledge base integration | Permanent free plan |

#### 3. Global Alternatives
![Global AI Tools](https://images.pexels.com/photos/3861969/pexels-photo-3861969.jpeg?auto=compress&cs=tinysrgb&w=800)  
*International platforms with DeepSeek-like capabilities*

| Service | Strength | Best For |
|:--------|:---------|:---------|
| [Fireworks.ai](https://fireworks.ai/) | R1 API + GPT-4 parity | Developers |
| [OpenRouter](https://openrouter.ai/) | Multi-model gateway | Budget users |
| [Claude 3](https://www.anthropic.com) | Ethical AI guardrails | Researchers |
| [Llama 3](https://ai.meta.com/llama) | Open-source customization | Privacy-focused users |

---

### Top 5 DeepSeek Alternatives Compared

#### 1. **Claude 3** - The Ethical Powerhouse
- ✅ 200K token context window  
- ✅ Constitutional AI safeguards  
- ❌ No free tier  

#### 2. **Google Gemini** - Real-Time Master
- ✅ Live web data integration  
- ✅ GWorkspace compatibility  
- ❌ Limited creativity  

#### 3. **Llama 3** - Privacy Champion
- ✅ Fully open-source  
- ✅ Local GPU deployment  
- ❌ Requires coding skills  

#### 4. **GitHub Copilot** - Code Specialist
- ✅ IDE integration  
- ✅ 40+ programming languages  
- ❌ Code-only focus  

#### 5. **Perplexity AI** - Research Optimized
- ✅ Citation-supported answers  
- ✅ Real-time fact-checking  
- ❌ No API access  

---

### Enterprise Solutions Matrix
![Cloud Providers](https://images.pexels.com/photos/325229/pexels-photo-325229.jpeg?auto=compress&cs=tinysrgb&w=800)  
*Enterprise-grade AI deployment options*

| Provider | Specialization | Compliance |
|:---------|:---------------|:-----------|
| Alibaba Cloud | High-volume processing | China GB Standard |
| Azure | Hybrid cloud AI | GDPR Ready |
| NVIDIA NIM | GPU-optimized inference | Global Certifications |

---

### Pro Tips for Uninterrupted Access
1. **Bookmark** [DeepSeek Status Page](https://status.deepseek.com)  
2. Use **VPN rotation** for geo-blocks  
3. Combine **local R1 deployment** with cloud APIs  
4. Monitor [AI Subreddit](https://reddit.com/r/machinelearning) for updates  ]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[Full-Scale Web-Connected DeepSeek R1 with API Support, and Cheap (1 CNY per Million Tokens)!]]></title>
            <link>https://blog.hekmon.com/blogs/deepseek-free-access-web-api</link>
            <guid>https://blog.hekmon.com/blogs/deepseek-free-access-web-api</guid>
            <pubDate>Tue, 18 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[DeepSeek R1 with web access: 500k free tokens, 1 CNY per million tokens.]]></description>
            <content:encoded><![CDATA[
Volcengine has launched a web-connected DeepSeek API, and it is currently giving away 500k free tokens, with limited-time half-price pricing of 1.0 CNY per million tokens. If the official DeepSeek service is too congested, use this instead.
In my testing, Volcengine has been the most stable option: steady output with no stuttering. Well done, ByteDance.

# Prerequisites

First, enable the large-language-model service. ByteDance grants 500k free tokens per model, half price for a limited time at 1.0 CNY per million tokens.
![](../asset/Snipaste_2025-02-18_10-50-14.jpg)
[Link](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false)

# Usage

## API Access
1. Open the model marketplace, select DeepSeek R1, choose Inference, pick the R1 model you enabled earlier, and click Confirm.
[Link](https://console.volcengine.com/ark/region:ark+cn-beijing/model/detail?Id=deepseek-r1)
![](../asset/Snipaste_2025-02-18_10-57-50.jpg)

2. In the endpoint list, select the newly created endpoint, click API Call, and create a key. Keep it secret!

3. Open Cherry Studio. Volcengine is not built in, so add a new provider yourself as shown below. Choose the OpenAI provider type (the API is OpenAI-compatible) and paste in the key you just obtained:

API endpoint: https://ark.cn-beijing.volces.com/api/v3

![](../asset/Snipaste_2025-02-18_15-53-12.png)

4. With the steps above, you can now use DeepSeek R1 in Cherry Studio!
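
Outside Cherry Studio, the same OpenAI-compatible endpoint can be called directly. Below is a minimal stdlib-only sketch; the endpoint ID `ep-xxxx` is a placeholder for the inference endpoint created in step 1, and the key is the one from step 2.

```python
import json
import urllib.request

ARK_BASE = "https://ark.cn-beijing.volces.com/api/v3"  # OpenAI-compatible base URL


def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {"model": model, "messages": [{"role": "user", "content": user_message}]}


def chat(api_key: str, model: str, user_message: str) -> str:
    """POST to the Ark chat-completions endpoint and return the first reply."""
    req = urllib.request.Request(
        f"{ARK_BASE}/chat/completions",
        data=json.dumps(build_chat_request(model, user_message)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]


# Example (needs a valid key and endpoint ID):
#   print(chat("YOUR_ARK_API_KEY", "ep-xxxx", "Hello"))
```

Because the API follows the OpenAI wire format, any OpenAI-compatible SDK should also work by pointing its base URL at the address above.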

## Online Use

For online use, visit the following link:
[Link](https://console.volcengine.com/ark/region:ark+cn-beijing/application/detail?id=bot-20250211130201-pll2f-nocode-preset)

See the screenshot below:
![](../asset/Snipaste_2025-02-18_10-55-17.jpg)

# How do I use the API with web access?

To be updated...]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[A Roundup of Platforms Offering the DeepSeek R1 API]]></title>
            <link>https://blog.hekmon.com/blogs/deepseek-r1-api</link>
            <guid>https://blog.hekmon.com/blogs/deepseek-r1-api</guid>
            <pubDate>Sun, 02 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Deepseek R1 API collection]]></description>
            <content:encoded><![CDATA[
⭕️ Platforms offering the DeepSeek R1 API
DeepSeek's official platform has been under constant attack, and with so many users it is frequently unavailable. Here is a list of working API platforms; they pair nicely with Chatbox or Cherry Studio.

1️⃣ SiliconFlow
An API service launched jointly with Huawei Cloud, offering the 671B version; registration comes with 20M free tokens.
🔗 https://cloud.siliconflow.cn/i/ql7ojxtv

2️⃣ NVIDIA
NVIDIA's free platform: individual users get 1000 free calls after registering (metered per call, not per token).
🔗 https://build.nvidia.com/deepseek-ai/deepseek-r1

3️⃣ Cloudflare
Cloudflare's AI gateway now also supports the DeepSeek R1 model, with several variants included; the beta-stage ones are free, while the Qwen-32B distilled version is paid.
🔗 https://developers.cloudflare.com/workers-ai/models/

4️⃣ Microsoft Azure
From Microsoft, even though the company has talked about investigating DeepSeek, given its heavy investment in OpenAI 🙂‍↕️
🔗 https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-available-on-azure-ai-foundry-and-github/

PS: This was my first time trying Cherry Studio; on desktop it feels much better than Chatbox.]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[Full-Scale DeepSeek R1 APIs (Free to Use)]]></title>
            <link>https://blog.hekmon.com/blogs/deepseek-r1-full-api</link>
            <guid>https://blog.hekmon.com/blogs/deepseek-r1-full-api</guid>
            <pubDate>Sat, 08 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Free Deepseek R1 API]]></description>
            <content:encoded><![CDATA[
⭕️ Full-scale DeepSeek R1 APIs (free to use)

The official DeepSeek site is still overwhelmed by traffic, and I need R1 to help me work through some ideas, so I dug up the places where R1 can be used for free:

1️⃣ Tencent Cloud, free until Feb 25 - grab it while you can
Supports:
1. DeepSeek-V3, a 671B-parameter MoE model
2. DeepSeek-R1, a 671B model
🔗 https://cloud.tencent.com/document/product/1772/115969
Note: an online web-connected version is also available
🔗 [Try it online](https://lke.cloud.tencent.com/lke#/experience-center/detail?expAppBizId=1887453086201675776&appType=knowledge_qa&avatar=https%3A%2F%2Fqidian-qbot-1251316161.cos.ap-guangzhou.myqcloud.com%2Fpublic%2F1773234660389421056%2Fimage%2FnCpqyPwvZLpKKLIogCmk-1887453082234912768.png&name=DeepSeek%E8%81%94%E7%BD%91%E5%8A%A9%E6%89%8B)

2️⃣ AICNN
A third-party platform with full-scale R1 and web access. It also supports other GPT models, offers API access, and can be used for free.
🔗 [AiCNN](http://aicnn.cn/loginPage?aff=CLG4Ws5Vuo)
Key: HAPPYNEWYEAR (this key grants 88,888 credits)

3️⃣ Metaso Search
Full-scale DeepSeek R1 model with web search - and web search is Metaso's specialty.
🔗 https://metaso.cn/

4️⃣ Flowith
Flowith is a knowledge base tool with DeepSeek R1 built in. The founder says it is free to use, and in my testing the speed is good. Free to use!
🔗 https://flowith.io/]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[The Explosive Popularity of the Open-Source DeepSeek R1 Shows How Long the World Has Suffered Under OpenAI]]></title>
            <link>https://blog.hekmon.com/blogs/deepseek-to-openai</link>
            <guid>https://blog.hekmon.com/blogs/deepseek-to-openai</guid>
            <pubDate>Fri, 31 Jan 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[DeepSeek breaks OpenAI's grip]]></description>
            <content:encoded><![CDATA[# DeepSeek: The Chinese Force Breaking America's AI Monopoly - Why Are Developers Worldwide Backing It?

**From DDoS attacks to a rising ecosystem: a counteroffensive in the battle for technological voice**

In 2025, as ChatGPT ignited a global AI arms race, a name from China - **DeepSeek** - began quietly rewriting the rules. Not only is it the first homegrown AI infrastructure to break through America's compute blockade, but a **mysterious DDoS attack** that took down its official site inadvertently exposed a covert war over technological sovereignty. Today, from Cursor to Cloudflare, from Azure to Perplexity, more and more international platforms are choosing to support the DeepSeek API. This "global AI uprising" may well confirm what the developer community has long felt: **the world has suffered under OpenAI for too long**.

---

## **I. Why Was DeepSeek Born? China's Answer to the "AI Iron Curtain"**

### **1.1 A Technical Breakout Under American Sanctions**

When the US Department of Commerce put Chinese AI companies on its entity list and barred NVIDIA from selling high-end GPUs to China, Silicon Valley tried to strangle the competition by choking off compute. DeepSeek's answer was a **heterogeneous computing architecture** and **distributed training optimizations** that achieved **90%+ effective compute efficiency** on domestic chips - not just a technical breakthrough, but a milestone in the "decolonization" of China's AI industry.

### **1.2 The OpenAI Monopoly Trap: Developers' "Sweet Shackles"**

Although OpenAI's GPT series reshaped the AI application ecosystem, its strategy of a **closed API plus steep pricing** has put developers in a bind:

- **Runaway costs**: GPT-4 API calls cost **5-10x** more than open-source models
- **Data sovereignty risk**: sensitive data must pass through US servers
- **An innovation ceiling**: fine-tuning permissions are tightly restricted

As the much-discussed "**OpenAI tax**" on Reddit suggests, technical monopoly is strangling AI democratization.

---

## **II. From DDoS Target to Thriving Ecosystem: DeepSeek's "Normandy Moment"**

### **2.1 The Site Outage: A War Without Gunsmoke**

In early 2025, DeepSeek's official site was suddenly hit by a **DDoS attack exceeding 500 Gbps**, with source IPs mostly in North America. Tellingly, GitHub simultaneously filled with tutorials on "how to reverse-proxy the DeepSeek API through Cloudflare Workers." The attack only accelerated the spread of its ecosystem - **decentralized survival is the strongest defense of the Web3 era**.

### **2.2 Behind the Global Support: Developers Voting with Their Code**

Platforms that explicitly support the DeepSeek stack now include:

- **Cursor**: the first IDE to integrate DeepSeek's code models, with responses **40%** faster than Copilot
- **Cloudflare**: global acceleration nodes for the DeepSeek API with sub-50ms latency
- **Azure**: DeepSeek hybrid-cloud solutions deployed in East Asia, improving compliance by **70%**
- **Perplexity**: a vertical search engine built on DeepSeek's MoE architecture
- **NanoAI/Windsurf**: 360's search app, with a dedicated DeepSeek line

These partnerships are no coincidence - when developers tire of OpenAI's "black-box rule", the **open-source-friendly, cost-transparent, data-controllable** DeepSeek naturally becomes the new option.

---

## **III. The Full Map of DeepSeek Alternatives: How to Join the "AI Uprising"**

### **3.1 Migration Guide**

- **API compatibility**: with a **DeepSeek-to-OpenAI adapter**, migrating is as simple as changing the API endpoint
- **Hybrid deployment**: use Cloudflare's **AI Gateway** to route requests dynamically (example code):
``` python
# Automatically choose between DeepSeek and a local model based on load
response = cloudflare.AI.run(
    model_name="@deepseek/llama-3-70b",
    prompt=user_input,
    fallback_strategy="local-llm"
)
```
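
The snippet above is illustrative shorthand rather than a real client API. The underlying idea - try one OpenAI-compatible endpoint, fall back to the next when it is busy - can be sketched with the standard library alone; the endpoint list and its `base_url`/`api_key` fields are assumptions for the sake of the example.

```python
import json
import urllib.error
import urllib.request


def route_with_fallback(endpoints, payload, timeout=10.0):
    """Try each OpenAI-compatible endpoint in order; fall back on failure.

    `endpoints` is a list of dicts with "base_url" and "api_key" keys.
    """
    last_error = None
    for ep in endpoints:
        req = urllib.request.Request(
            f"{ep['base_url']}/chat/completions",
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {ep['api_key']}",
            },
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return json.loads(resp.read())
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # endpoint busy or unreachable: try the next one
    raise RuntimeError(f"all endpoints failed: {last_error}")
```

A local model served through an OpenAI-compatible server can simply be appended as the last entry in the list, giving the "fall back to local" behavior the bullet describes.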

### **3.2 Capturing the Ecosystem Dividend**

- **Cost advantage**: at equal performance, the DeepSeek API costs just **1/3** of GPT-4
- **Policy hedging**: complies with both China's Interim Measures for the Management of Generative AI Services and the EU's GDPR
- **Freedom to innovate**: supports **native PyTorch fine-tuning**, and custom models may remain commercially closed-source

---

## **IV. The Battle Ahead: A "Multipolar" AI World Is Inevitable**

When a Microsoft Research report shows that "**the DeepSeek architecture is cited by 47% of open-source projects**", and weekly Hugging Face downloads pass 3 million, this shift has outgrown mere product competition. It signals:

1. **Compute equality**: the combination of domestic chips and algorithmic optimization is eroding American hardware hegemony
2. **A protocol revolution**: DeepSeek's proposed **Federated Training Protocol** could reshape how AI collaboration works
3. **Geopolitical restructuring**: from "one GPT ruling the world" to a split into "regional AI alliances"

As Linux Foundation AI director Ibrahim Haddad put it: "**The future of AI is plural.**" And DeepSeek may be the opening chapter of that plural revolution.]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[Trying Out Chrome Extension Development on Windows with the wxt Framework]]></title>
            <link>https://blog.hekmon.com/blogs/develop-plugin-in-win-via-wxt</link>
            <guid>https://blog.hekmon.com/blogs/develop-plugin-in-win-via-wxt</guid>
            <pubDate>Mon, 10 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[windows-wxt-plugin-dev]]></description>
            <content:encoded><![CDATA[
During some downtime at work, I decided to look into browser-extension development. Another extension I use was built with the wxt framework, and plenty of people on Jike recommend it too, so I planned to build my Chrome extension with it as well - little did I know this was the start of a pit-filled journey.
My work setup is a dual-PC arrangement of Windows plus Ubuntu: I normally develop on Ubuntu over VS Code SSH from Windows, while Windows itself is mostly for documents.

## Initializing the Project (Ubuntu)
Initialize the project per the official docs:
```
npx wxt@latest init myhelper
```
Then I ran `npm run dev` to test. Since the Ubuntu box runs headless, Chrome was never installed, so the following error appeared:
```
ERROR  The CHROME_PATH environment variable must be set to a Chrome/Chromium executable no older than Chrome stable.
```
Looks like Chrome needs to be installed. My question: even with Chrome installed, how would I debug? Install it first and see.
```
sudo apt install chromium-browser
which chromium-browser
```
After installation it compiled fine, but then a follow-up error appeared: connections to port 34713 were refused. From what I could find, this seems to be the port wxt's hot reload uses, and it is random on every run, so the VS Code server cannot auto-forward it. The docs say little about it, the AI didn't know either, so I gave up...
```
myhelper$ npm run dev

> wxt-react-starter@0.0.0 dev
> wxt


WXT 0.19.26                                                                                                            4:14:14 PM
✔ Started dev server @ http://localhost:3000                                                                          4:14:15 PM
ℹ Pre-rendering chrome-mv3 for development with Vite 6.0.8                                                            4:14:15 PM
✔ Built extension in 350 ms                                                                                           4:14:15 PM
  ├─ .output/chrome-mv3/manifest.json               1.05 kB 
  ├─ .output/chrome-mv3/popup.html                  744 B   
  ├─ .output/chrome-mv3/background.js               19.96 kB
  ├─ .output/chrome-mv3/chunks/popup-BxzUToPe.js    8.24 kB 
  ├─ .output/chrome-mv3/content-scripts/content.js  38.3 kB 
  ├─ .output/chrome-mv3/icon/128.png                3.07 kB 
  ├─ .output/chrome-mv3/icon/16.png                 559 B   
  ├─ .output/chrome-mv3/icon/32.png                 916 B   
  ├─ .output/chrome-mv3/icon/48.png                 1.33 kB 
  ├─ .output/chrome-mv3/icon/96.png                 2.37 kB 
  └─ .output/chrome-mv3/wxt.svg                     1.07 kB 
Σ Total size: 77.61 kB                                               
✖ Command failed after 26.0 s                                                                                         4:14:40 PM

 ERROR  connect ECONNREFUSED 127.0.0.1:34713                                                                           4:14:40 PM

    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1611:16)
```

## First Attempt on Windows

Configuring environment variables on Windows is a pain - I'm used to Ubuntu and macOS. But now I had to deal with it, trying things inside VS Code.

### Installing Node
I asked an AI and learned about Chocolatey, a package manager for Windows similar to apt or yum on Linux - sounds great!
1. From an elevated PowerShell, install Chocolatey:
```
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
```
2. With Chocolatey installed, install Node from PowerShell:
```
choco install nodejs
```
3. Refresh environment variables to apply the configuration
Here's a gotcha: the previous step had already configured the environment variables, and `node -v` worked in cmd - but I was using PowerShell, which does NOT pick up environment-variable changes automatically!
```
# View environment variables in PowerShell
Get-ChildItem Env:\
```
A fellow user wrote a script to refresh PowerShell's environment variables; I tested it and it works:
```
# RefreshEnv.ps1
# 
# PowerShell script that reads environment variables from the registry and applies them to the current session

Write-Host "Refreshing environment variables from the registry. Please wait..." -NoNewline

function Set-FromReg {
	param ( 
		[string]$regPath,
		[string]$name,
		[string]$varName
	)
	
	$value = Get-ItemProperty -Path $regPath -Name $name -ErrorAction SilentlyContinue
	if ($value) {
		Set-Item -Path Env:$varName -Value $value.$name
	} 
}

function Get-RegEnv {
	param (
		[string]$regPath
	)
	
	$vars = Get-Item -Path $regPath
	foreach ($var in $vars.Property) {
		if ($var -ne "Path")
		{
			Set-FromReg $regPath $var $var
		}
	} 
}

# Get the system and user environment variables

Get-RegEnv "HKLM:\System\CurrentControlSet\Control\Session Manager\Environment"
Get-RegEnv "HKCU:\Environment"

# Special-case PATH: merge the user and system paths

$path_HKLM = (Get-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Session Manager\Environment").Path
$path_HKCU = (Get-ItemProperty -Path "HKCU:\Environment").Path
$env:Path = "$path_HKLM;$path_HKCU"

# Save the original username and architecture
$OriginalUserName = $env:USERNAME
$OriginalArchitecture = $env:PROCESSOR_ARCHITECTURE

# Restore the username and architecture
$env:USERNAME = $OriginalUserName
$env:PROCESSOR_ARCHITECTURE = $OriginalArchitecture

Write-Host "Done"
```

### wxt Extension Dev Environment
1. Initialize the project, the same as before:
```
# Switch to the Taobao npm mirror
npm config set registry https://registry.npmmirror.com/
npx wxt@latest init myhelper
```
2. Everything else proceeds as before.]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[Cross-Platform Cross-Compilation of GDB]]></title>
            <link>https://blog.hekmon.com/blogs/gdb-cross-compile</link>
            <guid>https://blog.hekmon.com/blogs/gdb-cross-compile</guid>
            <pubDate>Sun, 31 Mar 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[GDB cross-compilation for embedded Linux]]></description>
            <content:encoded><![CDATA[
# 1. Environment Setup

## 1.1 Getting the Source

Download the GDB source from: https://ftp.gnu.org/gnu/gdb/

## 1.2 Extracting the Source

Extract the source with:

```
tar -xf gdb-14.1.tar.xz
```

## 1.3 configure

First run configure to set up the build environment:

```
cd gdb-14.1/
 ./configure --enable-mpers=no --host=aarch64-linux-gnu --target=aarch64-linux-gnu CC=${CROSS_COMPILE}gcc LD=${CROSS_COMPILE}ld --prefix=xxx/gdb-14.1/
```

### 1.3.1 configure Errors

Running **configure** produced the following error, likely because the installed **GMP and MPFR** versions were too old:

```
checking build system type... x86_64-pc-linux-gnu
...
checking for aarch64-linux-gnu-gdc... no
checking for gdc... no
checking whether the D compiler works... no
checking how to compare bootstrapped objects... cmp --ignore-initial=16 $$f1 $$f2
checking for objdir... .libs
checking for the correct version of gmp.h... no
configure: error: Building GDB requires GMP 4.2+, and MPFR 3.1.0+.        <<< the error
Try the --with-gmp and/or --with-mpfr options to specify
their locations.  If you obtained GMP and/or MPFR from a vendor
distribution package, make sure that you have installed both the libraries
and the header files.  They may be located in separate packages.
```

The fix from the following post works: fetch recent **GMP and MPFR** sources and build them locally (also cross-compiled):

[https://blog.csdn.net/qq_36393978/article/details/118678521](https://blog.csdn.net/qq_36393978/article/details/118678521)

#### **Building gmp**

Source: [https://ftp.gnu.org/gnu/gmp/](https://ftp.gnu.org/gnu/gmp/)

Extract and build:

```
tar -xf gmp-6.3.0.tar.xz
cd gmp-6.3.0/
./configure --enable-mpers=no --host=aarch64-linux-gnu --target=aarch64-linux-gnu CC=${CROSS_COMPILE}gcc LD=${CROSS_COMPILE}ld --prefix=xxx/gmp-6.3.0
make
sudo make install
```

#### **Building mpfr**

**Source**: [https://ftp.gnu.org/gnu/mpfr/](https://ftp.gnu.org/gnu/mpfr/)

**Extract and build**:

```
tar -xf mpfr-4.2.1.tar.xz
cd mpfr-4.2.1/
./configure --with-gmp=/usr/local/gmp-6.3.0 --host=aarch64-linux-gnu --target=aarch64-linux-gnu CC=${CROSS_COMPILE}gcc LD=${CROSS_COMPILE}ld --prefix=xxx/mpfr-4.2.1
make
sudo make install
```

#### **Building mpc**

**Source**:

[https://ftp.gnu.org/gnu/mpc/](https://ftp.gnu.org/gnu/mpc/)

**Extract and build**:

```
tar -xf mpc-1.3.1.tar.gz
cd mpc-1.3.1/
./configure --with-gmp=/usr/local/gmp-6.3.0 --with-mpfr=/usr/local/mpfr-4.2.1 --host=aarch64-linux-gnu --target=aarch64-linux-gnu CC=${CROSS_COMPILE}gcc LD=${CROSS_COMPILE}ld --prefix=xxx/mpc-1.3.1
make
sudo make install
```

With these dependency libraries built and installed, rerun gdb's configure as follows; this time it completed without errors.

```
./configure --with-mpfr=/usr/local/mpfr-4.2.1 --with-gmp=/usr/local/gmp-6.3.0 --with-mpc=/usr/local/mpc-1.3.1 --host=aarch64-linux-gnu --target=aarch64-linux-gnu CC=${CROSS_COMPILE}gcc LD=${CROSS_COMPILE}ld AR=${CROSS_COMPILE}ar --prefix=xxx/gdb-14.1/
```

# 2. Building

Run make to build.

## 2.1 Build Errors

```
linux-aarch64-low.cc:216:16: note: forward declaration of ‘struct aarch64_store_gregset(regcache*, const void*)::user_pt_regs’
  216 |   const struct user_pt_regs *regset = (const struct user_pt_regs *) buf;
      |                ^~~~~~~~~~~~
...
./../gdb/nat/aarch64-scalable-linux-sigcontext.h:275:28: note: in expansion of macro ‘SVE_PT_FPSIMD_SIZE’
  275 |   : SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags))
      |                            ^~~~~~~~~~~~~~~~~~
linux-aarch64-low.cc:925:21: note: in expansion of macro ‘SVE_PT_SIZE’
  925 |      regset->size = SVE_PT_SIZE (AARCH64_MAX_SVE_VQ, SVE_PT_REGS_SVE);
      |                     ^~~~~~~~~~~
make[2]: *** [Makefile:546: linux-aarch64-low.o] Error 1
make[1]: *** [Makefile:11607: all-gdbserver] Error 2
make: *** [Makefile:1021: all] Error 2
```

On analysis, the error appears to come from the **C++** part of the build; it can be fixed by adding the following configure option:

```
CXX=${CROSS_COMPILE}g++
```

The final working configuration:

```
./configure --with-mpfr=/usr/local/mpfr-4.2.1 --with-gmp=/usr/local/gmp-6.3.0 --with-mpc=/usr/local/mpc-1.3.1 --host=aarch64-linux-gnu --target=aarch64-linux-gnu CXX=${CROSS_COMPILE}g++ CC=${CROSS_COMPILE}gcc LD=${CROSS_COMPILE}ld AR=${CROSS_COMPILE}ar --prefix=xxx/gdb-14.1/
```

## 2.2 Build Artifacts

Once the issues above are resolved, the build produces a gdbserver that can run on the target device.

## 2.3 Using gdbserver

### 2.3.1 **On the Target**

1. TFTP the **gdbserver_stripped** binary from the artifacts above into the target's /bin directory:

```
cd /bin;tftp -gr gdbserver_stripped 192.168.1.105;chmod +x gdbserver_stripped;
```

2. Run gdbserver:

```
gdbserver_stripped 192.168.1.105:1234 --attach `pgrep main`
```

### 2.3.2 **On the Host**

1. Run the toolchain's gdb:

```
aarch64-linux-gnu-gdb # remember to set up the environment variables first
```

2. Connect to the target remotely:

```
set sysroot xxx
target remote 192.168.1.76:1234
```

3. The following output on the target's serial console indicates that **gdbserver has connected successfully:**

```
Attached; pid = 1300
gdbserver: Unable to determine the number of hardware watchpoints available.
gdbserver: Unable to determine the number of hardware breakpoints available.
Listening on port 1234
Remote debugging from host 192.168.1.105, port 3465
```

4. In the host-side gdb, run the following command to save the target's current core state:

```
gcore ./core_dump.test  # or use: generate-core-file ./core_dump.test
```

Running the command above generates the core file **core_dump.test** in the current directory:

```
file core_dump.test
core_dump.test: ELF 64-bit LSB core file, ARM aarch64, version 1 (SYSV), SVR4-style, from '/bin/main'
```

You can now analyze core_dump.test with gdb.]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[A Simple Summary for My Planned "Weekly Updates" (May 2025)]]></title>
            <link>https://blog.hekmon.com/blogs/nothing-but-everything-2025-05</link>
            <guid>https://blog.hekmon.com/blogs/nothing-but-everything-2025-05</guid>
            <pubDate>Sun, 25 May 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[We're all crossing the river by feeling for the stones, aren't we?]]></description>
            <content:encoded><![CDATA[
![](../asset/2505-summary.jpeg)

2025 is already half over. Time seems to fly: it feels like only yesterday we were stunned by the release of ChatGPT 3.5, and today we have largely stepped into the AGI era.

"The moment ChatGPT started getting dry jokes, I knew humanity's last line of humor had fallen."

There has long been a reminder in my notes app called "Summary", but it kept getting silently archived, to the point that this blog's last update was 3-4 months ago.
I simply couldn't open a new file and write the first word; I didn't know where to start summarizing.

## Life Progress Bar: From Office Worker to "New Hangzhou Resident"

2025 counts as a hugely important year in my life: a settled Hangzhou hukou, a little family of my own, the joy of a proposal and a marriage certificate, and a sliver of uncertainty about the future.

- I'm lucky to have met a wonderful partner. Together we decorated a cozy home, lived through many down-to-earth days, and had all kinds of new experiences;
- Moving my hukou out of the company and into our Hangzhou home, and getting a new ID card, feels like a brand-new identity, enjoying the city's social benefits and future opportunities;

And yet, from a god's-eye view, this is just a small number in government statistics - merely 1/n, the 1 among countless n, as if I had arrived in a brand-new world of rat-race competition.

## The Comfort Zone of a Five-Year Workplace Veteran

Call it a comfort zone, I suppose. I caught the tailwind of the era: I've stayed at my first company for over five years since graduation, with decent pay and a passable role and level, which also earned me the label of "veteran employee".
But tailwinds pass. With endless project work, shrinking available headcount, and escalating conflicts among product, engineering, and even management, it's clear this company is on the decline, at least in revenue.
And technically speaking, there isn't much technology to speak of here.

Colleagues in my small circle have been leaving one by one; of a three-person "corporate drone" group chat, I'm the only one still at the company.

So, just hang on for now? But wait - I'm still young. Why am I the "veteran employee"? No way.

## Daily Life of an AI-Anxiety Patient

The AI era changes by the minute. Every big company is shipping new AI models or new AI application ecosystems, from the viral Manus to Coze. Scroll the news any day and you'll see yet another model launch, which keeps me wondering:
- What new application opportunities does this model open up?
- Which jobs has this big company just eliminated with AI?
- ...

I have always felt AGI is within reach - like the Industrial Revolution, which began with the invention of a single steam engine and was followed by centuries of technological explosion.
The Industrial Revolution replaced vast amounts of repetitive physical labor; AGI will replace vast amounts of repetitive mental labor, and even some non-repetitive work.

And for me personally?
- In my day job, AI is an effective tool: what takes a new hire a full day, AI finishes in minutes.
- Beyond that, the hottest thing right now is undoubtedly vibe coding, which let me - with no prior experience in those stacks - stumble my way into new languages and frameworks like Flutter and Next.js (which proves again that practice is the best teacher).

## What's Missing?

I've been doing side projects for a long time. My GitHub contribution graph was lit up almost daily for a while - but how much of it is original? How much real progress have the side projects made?

Writing code with AI feels great, but the wall of green commits on GitHub doesn't lie.
A page I generated with AI last week - opening it today, I can't even understand my own comments. It's like a meal conjured by magic: delicious while eating, but when you do the dishes you find the pot has melted.

After some careful analysis, I found a few things missing:

- Not enough focus. Time and energy are finite, but I kept assuming AI productivity gains would let me run several projects in parallel - and ended up catching none of them, or catching them with negligible traffic;
  * January: I'm going to build cross-platform apps with Flutter!
  * March: Next.js is amazing, a website in three days
  * May: AI image generation is so hot, I must get a piece of it
  * Perhaps July: forget it, let's learn quantitative trading...
- Addicted to self-satisfaction: content with finishing a website or an extension, but never maintaining or iterating on it - the kind of project even Google stops indexing
- Conflicts between planned time and new ideas: within limited time, maybe I should stick to the plan I already have

DeepSeek described an innovator's dilemma for me - simple, but true:

```
def digital_nomad_survival_guide():
    while True:
        idea = sit_up_straight_while_scrolling_twitter()
        mvp = scaffold_with_gpt(idea)
        traffic = post_to_producthunt_and_watch_the_meteor_pass()
        if traffic < threshold:
            announce_strategic_abandonment_on_wechat_moments()
        else:
            let_the_project_rot_on_the_server()
    return technical_debt
```

Rather than satisfying myself, I've been manufacturing "digital ruins".

The key to breaking this infinite loop may be hidden in a metaphor from War and Peace: **what truly changes the world is not a genius's flash of inspiration, but the energy density of sustained focus**.

## Survival Guide for the Second Half of the Year

A colleague put it well: there isn't much time left to prepare for interviews either. So what's the survival strategy for the second half of the year?

- Between tech-squirrel syndrome and deep focus: allow myself to chase only one new tech trend per month
- Install a brake on side projects: either run one product until it makes money on its own, or drop it cleanly!

Finally, to borrow the slogan on my neighborhood's parcel lockers: **every package deserves to be handled with care**. In this era where AGI is rebuilding everything, perhaps we will finally learn to dance with uncertainty.

]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[Digital Nomads: Get a Foreign Phone Number for a Few Yuan a Month]]></title>
            <link>https://blog.hekmon.com/blogs/oversea-sim</link>
            <guid>https://blog.hekmon.com/blogs/oversea-sim</guid>
            <pubDate>Mon, 29 Apr 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[Instructions for Oversea SIM Card]]></description>
            <content:encoded><![CDATA[
⭕️ Digital nomads: start with a foreign phone number

Planning a trip to Hong Kong over the May Day holiday, I was researching overseas data SIMs when it occurred to me that I could also grab a foreign phone number, handy for signing up for foreign apps.
After digging around, here is my guide to keeping a number alive for about 2 CNY 💰/month (it can receive SMS verification codes).

Step 1️⃣: The 5ber eSIM card
Strictly speaking, it's an eSIM-to-physical-SIM adapter. My mainland-China iPhone doesn't support eSIM and I didn't want to carry a second phone, so I found this magic card.
For a bit over a hundred yuan, you can write an unlimited number of eSIMs onto it and switch as needed (the iPhone can't write to the card; you still need an Android phone for that).
🔗 https://esim.5ber.com/ (5% discount code 👉 SLRUP1)
PS: Some people instead recommend buying a used overseas iPhone SE 2nd gen (around 800 💰)

Step 2️⃣: Buy an eSIM
Once the 5ber card arrives, buy an eSIM. There are many places to buy; after shopping around I got the well-known Thai SIM that supports calls and data in 126 countries (split into global and Asia plans). Without a data plan, receiving SMS is free, and keeping the number alive costs 10 THB per month, about 2 CNY 💰. Advantages:
✅ Cheap to keep alive; data pricing depends on the plan, with cheap options available
✅ No real-name registration required
✅ A real Thai number with a native IP, fast; in mainland China it roams on China Mobile (tested in Hangzhou)
✅ Top-up via WeChat Pay or a WeChat official account

There are many places to buy; I bought mine at sim2fly
🔗 https://esim2fly.com (the coupon code ESIMDB saves 1 🔪)
Many similar sites exist:
🌟 https://mobimatter.com/
🌟 https://www.jetpacglobal.com/product-details/?tenant=USA&productregion=HKG (has a 1 🔪 / 1 GB / 4-day plan)
🌟 eSIM aggregator: https://esimdb.com/

Step 3️⃣: Activation
You'll receive a QR code and phone number by email; scan the QR code with the 5ber app to bind it. My plan has no data by default and it needs to be enabled (top up via the official account, enable via USSD). Here are some of the USSD codes (also listed on the purchase site):
👉 *121# check balance
👉 *111*6# check remaining data
👉 data plan activation
![](https://img.xwyue.com/i/2024/04/29/662fac60d3db7.jpeg)

Step 4️⃣: Extending validity
The number I bought is valid for half a year and can be extended by topping up 💰. According to online guides, every top-up extends the number's validity by one month; via the myais app or the third-party WeChat account Fuwii you can top up as little as 10 THB for Thailand AIS. I haven't tested this yet.]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[Run tensorflow in ARM]]></title>
            <link>https://blog.hekmon.com/blogs/run-tensorflow-in-arm</link>
            <guid>https://blog.hekmon.com/blogs/run-tensorflow-in-arm</guid>
            <pubDate>Fri, 14 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Run tensorflow in ARM]]></description>
            <content:encoded><![CDATA[
To bring image recognition and user-behavior recognition to routers, we attempted to introduce AI. Our research showed that TensorFlow has the potential to run on ARM, so we started working in that direction.

## Python Environment Preparation

### Prerequisites

1. A router device.
2. PC: Ubuntu, with access to GitHub, GitLab, and Gitee.
3. Compatible TensorFlow version: 2.18.0-rc1

### Modify Configuration Files

**First, determine the architecture type of your DUT**: Enter `opkg info | grep Architecture | sort | uniq` to see the architecture of the installed packages. In this case, it is `aarch64_cortex-a73_neon-vfpv4`, but the software source download URL does not have `aarch64_cortex-a73_neon-vfpv4`, so we download `aarch64_generic`.

**OpenWrt software source version number**: This is mainly determined by the version of Python 3 you want to install. As long as the architecture is correct, the version doesn't matter much, but for example, 19.07 only has Python 3.7 packages, and 21.02 only has Python 3.9 packages.

Since the software source download URL does not have `aarch64_cortex-a73_neon-vfpv4`, we need to add `aarch64_generic` to the supported package architectures:

```
# Execute on DUT
echo arch all 100 >> /etc/opkg.conf
echo arch aarch64_generic 200 >> /etc/opkg.conf
echo arch aarch64_cortex-a73_neon-vfpv4 300 >> /etc/opkg.conf
```

This number refers to the priority of the software source.

### Install Python and pip (into RAM)

First, modify the environment variable for dynamic libraries:

```
# Execute on DUT
export LD_LIBRARY_PATH=/tmp/usr/lib:$LD_LIBRARY_PATH
alias pip='python -m pip'
```

We also define an alias for pip: installed this way and without the alias, pip can only be invoked as `python -m pip`.

#### OPKG Automated Offline Installation

Since the DUT does not have `wget`, and my previous attempt to install `wget` failed, we cannot use `opkg install` for **online installation** on the DUT.

Initially, my idea was to directly download the required packages from the mirror source and then **install them offline**. However, this approach has many disadvantages:

1. You have to download dependency packages one by one, and you only discover each missing dependency after an installation attempt fails.
2. Changing versions or architectures requires starting over, which is very troublesome.

My current solution is a script that resolves opkg dependencies on the PC: it downloads the dependency packages, generates a shell script that encodes the install order, and transfers that script to the DUT for **offline installation**. (Dependencies are resolved by parsing the `Packages` index file under each feed category.)

The whole process is automated: given the required opkg package names, a TFTP server address, the mirror URL, and a few options, the Python script downloads the packages and their dependencies, works out the dependency order, and generates the DUT installation script.
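The dependency-resolution idea can be sketched in a few lines: parse the feed's `Packages` index into a name-to-dependencies map, then walk the dependencies transitively to get an install order. (A simplified sketch, not the actual `opkg_prepare.py`: only the `Package:` and `Depends:` fields are parsed, and version constraints are ignored.)

```python
def parse_packages_index(text):
    """Parse an opkg Packages index into {package name: [dependency names]}."""
    deps = {}
    name = None
    for line in text.splitlines():
        if line.startswith("Package: "):
            name = line[len("Package: "):].strip()
            deps[name] = []
        elif line.startswith("Depends: ") and name:
            # Strip version constraints such as "libc (>= 1.2)"
            deps[name] = [d.split(" (")[0].strip()
                          for d in line[len("Depends: "):].split(",")]
    return deps

def resolve(pkg, deps, seen=None):
    """Return pkg plus its transitive dependencies, dependencies first."""
    if seen is None:
        seen = []
    for d in deps.get(pkg, []):
        if d not in seen:
            resolve(d, deps, seen)
    if pkg not in seen:
        seen.append(pkg)
    return seen

# Toy index illustrating the shape of a real Packages file
index = """\
Package: python3-pip
Depends: python3, python3-setuptools

Package: python3-setuptools
Depends: python3

Package: python3
Depends: libc
"""
order = resolve("python3-pip", parse_packages_index(index))
print(order)  # dependencies first: ['libc', 'python3', 'python3-setuptools', 'python3-pip']
```

Emitting `opkg install` lines in this order is what lets the generated shell script install everything offline without hitting missing-dependency errors.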

#### Automated Python and tflite_runtime Environment Integration Script

For integrating Python and tflite_runtime, an automated Python script is also provided, `tflite_runtime_py_env_prepare.py`:

```python
import argparse
import os
import subprocess

OPKGS_NEEDED = [
                "python3-pip",
                "python3-numpy",
                "python3-pillow",
                # "gcc",
                ]
TFTP_SERVER_IP = "192.168.1.100" 
OPKG_MIRROR_URL = "https://mirrors.aliyun.com/openwrt/releases/packages-23.05/aarch64_generic/" # Precise to version and architecture
INSTALL_IN_RAM = True # Whether to install in RAM

def shell_cmd_pkg(url_prefix, pkg_name):
    global INSTALL_IN_RAM
    shell_cmd = ""
    if url_prefix:
        os.system(f"wget --no-check-certificate --no-clobber {url_prefix}{pkg_name}")
    shell_cmd += f"tftp -gr {pkg_name} {TFTP_SERVER_IP} && "
    shell_cmd += f"chmod 777 {pkg_name} && "
    if pkg_name.endswith(".ipk"):
        shell_cmd += f"opkg install {pkg_name}{' -d ram' if INSTALL_IN_RAM else ''} && "
    elif pkg_name.endswith(".sh"):
        shell_cmd += f"./{pkg_name} && "
    elif pkg_name.endswith(".whl"):
        shell_cmd += f"python -m pip install {pkg_name} && "
    
    shell_cmd += f"rm {pkg_name}\n"

    return shell_cmd

if __name__ == '__main__':
    shell_cmd_content = ""

    # Pre-preparation
    shell_cmd_content += "cd /tmp\n"
    shell_cmd_content += "export LD_LIBRARY_PATH=/tmp/usr/lib:$LD_LIBRARY_PATH\n"


    # for gcc, aarch64
    # shell_cmd_content += shell_cmd_pkg("https://downloads.openwrt.org/snapshots/targets/ipq807x/generic/packages/", "libstdcpp6_12.3.0-4_aarch64_cortex-a53.ipk")

    # for py311, armhf
    # shell_cmd_content += shell_cmd_pkg("https://mirrors.aliyun.com/openwrt/releases/23.05.5/targets/mediatek/mt7629/packages/", "libatomic1_12.3.0-4_arm_cortex-a7.ipk")

    # Install Python and pip
    subprocess.run(["python3", "opkg_prepare.py", 
                    "--tftp-server-ip", TFTP_SERVER_IP,
                    "--mirror-url", OPKG_MIRROR_URL,] 
                    + (["--ram"] if INSTALL_IN_RAM else []) 
                    + OPKGS_NEEDED)
    shell_cmd_content += shell_cmd_pkg(None, "opkg_install_from_tftp.sh")
    
    # Install pip
    # os.system("wget --no-check-certificate -nc https://bootstrap.pypa.io/pip/get-pip.py")
    # shell_cmd_content += f"tftp -gr get-pip.py {TFTP_SERVER_IP} && python3 get-pip.py && rm get-pip.py\n"

    # Install pkginfo
    # NOTE: the URL below was truncated in the original post; restore the full
    # files.pythonhosted.org path and the pkginfo wheel filename before uncommenting.
    # shell_cmd_content += shell_cmd_pkg("https://files.pythonhosted.org/packages/c0/38/d617739840a2f576e400f03fea0a...", "pkginfo-<version>-py3-none-any.whl")
    # Install tflite_runtime
    shell_cmd_content += shell_cmd_pkg("xxx", "tflite_runtime-2.18.0-cp311-cp311-linux_aarch64.whl") 
    
    with open("DUT_env_prepare.sh", "w") as file:
        file.write(shell_cmd_content)
```

(WARNING) tflite-runtime currently only pairs with Python 3.11 (i.e., the packages-23.05 feed): because of tensorflow-2.18.0's numpy version requirement, packages-23.05 is the only feed in the official and OpenWrt sources that is compatible.

Modify **OPKGS_NEEDED**, **TFTP_SERVER_IP**, **OPKG_MIRROR_URL**, and **INSTALL_IN_RAM** in tflite_runtime_py_env_prepare.py to match your environment and needs.

After running tflite_runtime_py_env_prepare.py on the PC, you will have all the dependencies plus a file named **DUT_env_prepare.sh**. Transfer this script to the DUT and run it to install tflite_runtime offline.

If you only need to integrate python, simply comment out the part containing `"tflite_runtime-2.18.0-cp311-cp311-linux_aarch64.whl"` in tflite_runtime_py_env_prepare.py.

### Verify tflite Integration Success
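The check itself is a minimal label-image style script (a sketch: the model, label, and image file names are placeholders, not fixed by this post; only `tflite_runtime`, numpy, and Pillow come from the environment prepared above):

```python
def top_label(scores, labels):
    """Return the label with the highest score (pure-Python helper)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best]

if __name__ == "__main__":
    # Runs on the DUT only: needs tflite_runtime, numpy, and Pillow (all
    # installed above), plus a model / labels / test image of your choosing.
    import numpy as np
    from PIL import Image
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Resize the test image to the model's expected input shape and run it.
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    img = np.expand_dims(np.asarray(Image.open("cat.jpg").resize((w, h))), 0)
    interpreter.set_tensor(inp["index"], img.astype(inp["dtype"]))
    interpreter.invoke()

    scores = interpreter.get_tensor(out["index"])[0]
    labels = open("labels.txt").read().splitlines()
    print("Prediction: ", top_label(scores, labels))
```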

Run a small test inference on the DUT; if the output looks like the following, the integration succeeded.

```
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Prediction:  tabby, tabby cat
```

# Locally Build tflite-runtime Wheel Package

Official wheel packages are built against glibc only, but OpenWrt's C standard library is musl, so we need to rebuild the wheel ourselves.
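A quick way to check which libc your Python links against (a stdlib-only sketch; on an OpenWrt/musl device `platform.libc_ver()` usually cannot detect glibc, while the official manylinux wheels require it):

```python
import platform
import sysconfig

# libc_ver() looks for glibc markers in the interpreter binary; on musl
# systems such as OpenWrt it typically finds none and returns ('', '').
libc, version = platform.libc_ver()
print("libc:", libc if libc else "not glibc (possibly musl)", version)

# The wheel's platform tag must be compatible too: official tflite-runtime
# wheels are manylinux (glibc) builds, so a musl target needs its own build.
print("platform tag:", sysconfig.get_platform())
```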

These repositories are worth keeping at hand during the build:

[kasitoru/tflite_runtime-musl: Build the TensorFlow Lite tflite_runtime Python library for ARM devices using musl libc (github.com)](https://github.com/kasitoru/tflite_runtime-musl/tree/main)

[feranick/TFlite-builds: TFlite cross-platform builds (github.com)](https://github.com/feranick/TFlite-builds)

Here is a record of the build process and the pitfalls encountered:

## tensorflow Download

First, download TensorFlow from GitHub. A direct `git clone` almost never succeeds here, so pick a fixed version from the tags and download the zip archive instead.
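The tag archive URL follows GitHub's standard pattern; a small sketch that builds it (substitute the tag you need, then fetch the printed URL however your network allows, e.g. with `wget --no-check-certificate`):

```python
# GitHub serves a zip snapshot of any tag at a predictable path.
tf_tag = "v2.18.0-rc1"
zip_url = f"https://github.com/tensorflow/tensorflow/archive/refs/tags/{tf_tag}.zip"
print(zip_url)
```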

## Cross-compilation Based on musl libc

Quick build:

1. cd to your tensorflow-2.18-rc1 folder.
2. Download the patch file from T417309#10451504.
3. `patch -p1 < tflite-2-18-rc1-musl.patch`
4. `make -C tensorflow/lite/tools/pip_package docker-build TENSORFLOW_TARGET=aarch64 PYTHON_VERSION=3.11`
5. After the Docker image builds, execute inside the container: `/tensorflow/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh aarch64` (substitute your `TENSORFLOW_TARGET`).
6. Wait for the build to complete.

### Docker Network Unblocking

Most Docker build failures are network-related, involving the ubuntu source (amd64), the ubuntu-ports source (arm64 and armhf), and the PPA source. There are two causes: the intranet gateway, and the inability to reach foreign mirrors.

That description is brief, but this pitfall actually cost me a lot of time.

Things to note:

1. Dockerfile.py3
    1. Certificate verification must be disabled for both the wget and curl https downloads.
    2. Adding the deadsnakes PPA directly causes errors, so switch to adding a mirror source instead.
    3. Set the timezone noninteractively during `apt-get install`, otherwise the build hangs on the tzdata prompt.
    4. Update the cmake version; refer to https://github.com/feranick/TFlite-builds
2. update_sources.sh and update_ppa_deadsnakes.sh: switch the sources
    1. http sources cannot be used, since they trigger `hash sum mismatch` errors; only https sources work.
    2. Use the USTC mirror for the PPA: https://launchpad.proxy.ustclug.org/
3. Makefile: refer to https://github.com/feranick/TFlite-builds

All modifications are commented, see the diff file below for details:

```diff
diff --git a/tensorflow/lite/tools/pip_package/Dockerfile.py3 b/tensorflow/lite/tools/pip_package/Dockerfile.py3
index 63373905..d9035768 100644
--- a/tensorflow/lite/tools/pip_package/Dockerfile.py3
+++ b/tensorflow/lite/tools/pip_package/Dockerfile.py3
@@ -33,33 +33,40 @@ RUN apt-get update && \
     apt-get clean
 
 # Install Bazel.
-RUN wget https://github.com/bazelbuild/bazelisk/releases/download/v1.15.0/bazelisk-linux-amd64 \
+# wget and curl https links need to cancel authentication.
+RUN wget --no-check-certificate https://github.com/bazelbuild/bazelisk/releases/download/v1.15.0/bazelisk-linux-amd64 \
   -O /usr/local/bin/bazel && chmod +x /usr/local/bin/bazel
 
 # Install Python packages.
 RUN dpkg --add-architecture armhf
 RUN dpkg --add-architecture arm64
-RUN yes | add-apt-repository ppa:deadsnakes/ppa
-RUN apt-get update && \
-    apt-get install -y \
+## Adding deadsnakes' ppa will cause errors, so switch to adding mirror sources.
+COPY update_ppa_deadsnakes.sh /
+RUN /update_ppa_deadsnakes.sh
+# RUN yes | add-apt-repository ppa:deadsnakes/ppa
+RUN apt-get update
+# Add timezone when apt-get install, otherwise it will get stuck.
+RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC \
+      apt-get install -y \
       python$PYTHON_VERSION \
       python$PYTHON_VERSION-dev \
       python$PYTHON_VERSION-venv \
       python$PYTHON_VERSION-distutils \
       libpython$PYTHON_VERSION-dev \
       libpython$PYTHON_VERSION-dev:armhf \
-      libpython$PYTHON_VERSION-dev:arm64
+      libpython$PYTHON_VERSION-dev:arm64  
 RUN ln -sf /usr/bin/python$PYTHON_VERSION /usr/bin/python3
-RUN curl -OL https://bootstrap.pypa.io/get-pip.py
+RUN curl -k -OL https://bootstrap.pypa.io/get-pip.py
 RUN python3 get-pip.py
 RUN rm get-pip.py
 RUN pip3 install --upgrade pip
 RUN pip3 install numpy~=$NUMPY_VERSION setuptools pybind11
 RUN ln -sf /usr/include/python$PYTHON_VERSION /usr/include/python3
 RUN ln -sf /usr/local/lib/python$PYTHON_VERSION/dist-packages/numpy/core/include/numpy /usr/include/python3/numpy
-RUN curl -OL https://github.com/Kitware/CMake/releases/download/v3.16.8/cmake-3.16.8-Linux-x86_64.sh
+# Modify cmake version, refer to https://github.com/feranick/TFlite-builds
+RUN curl -k -OL https://cmake.org/files/v3.29/cmake-3.29.6-linux-x86_64.sh
 RUN mkdir /opt/cmake
-RUN sh cmake-3.16.8-Linux-x86_64.sh --prefix=/opt/cmake --skip-license
+RUN sh cmake-3.29.6-linux-x86_64.sh --prefix=/opt/cmake --skip-license
 RUN ln -s /opt/cmake/bin/cmake /usr/local/bin/cmake
 
 ENV CI_BUILD_PYTHON=python$PYTHON_VERSION
diff --git a/tensorflow/lite/tools/pip_package/Makefile b/tensorflow/lite/tools/pip_package/Makefile
index 24bc4970..ff85b8da 100644
--- a/tensorflow/lite/tools/pip_package/Makefile
+++ b/tensorflow/lite/tools/pip_package/Makefile
@@ -13,9 +13,10 @@
 # limitations under the License.
 
 # Values: debian:<version>, ubuntu:<version>
 BASE_IMAGE ?= ubuntu:20.04
 PYTHON_VERSION ?= 3.11
-NUMPY_VERSION ?= 1.23.2
+NUMPY_VERSION ?= 1.24.4
 
 # Values: rpi, aarch64, native
 TENSORFLOW_TARGET ?= native
@@ -73,4 +74,7 @@ docker-build: docker-image
 		--rm --interactive $(shell tty -s && echo --tty) \
 		$(DOCKER_PARAMS) \
 		$(TAG_IMAGE) \
-		/with_the_same_user /bin/bash -C /tensorflow/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh $(TENSORFLOW_TARGET)
+		/with_the_same_user /bin/bash
+		# /with_the_same_user /bin/bash -C /tensorflow/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh $(TENSORFLOW_TARGET)
+# Slightly modified above to avoid reloading docker every time there is a compilation error
+# Correspondingly, after entering the container, manually execute the command after the commented-out bash -C
diff --git a/tensorflow/lite/tools/pip_package/update_ppa_deadsnakes.sh b/tensorflow/lite/tools/pip_package/update_ppa_deadsnakes.sh
new file mode 100755
index 00000000..0332e665
--- /dev/null
+++ b/tensorflow/lite/tools/pip_package/update_ppa_deadsnakes.sh
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+set -ex
+
+. /etc/os-release
+
+[[ "${NAME}" == "Ubuntu" ]] || exit 0
+
+yes | apt-get install gnupg
+apt-key adv --keyserver keyserver.ubuntu.com --recv-keys BA6932366A755776 # USTC PPA source public key
+
+cat <<EOT >> /etc/apt/sources.list
+
+## python deadsnakes ppa
+deb https://launchpad.proxy.ustclug.org/deadsnakes/ppa/ubuntu/ ${UBUNTU_CODENAME} main
+# deb-src https://launchpad.proxy.ustclug.org/deadsnakes/ppa/ubuntu/ ${UBUNTU_CODENAME} main
+
+EOT
\ No newline at end of file
diff --git a/tensorflow/lite/tools/pip_package/update_sources.sh b/tensorflow/lite/tools/pip_package/update_sources.sh
index 40e3213c..bf4898e5 100755
--- a/tensorflow/lite/tools/pip_package/update_sources.sh
+++ b/tensorflow/lite/tools/pip_package/update_sources.sh
@@ -15,14 +15,41 @@
 # ==============================================================================
 
 #!/bin/bash
+
+set -ex
+
 . /etc/os-release
 
 [[ "${NAME}" == "Ubuntu" ]] || exit 0
 
 sed -i "s/deb\ /deb \[arch=amd64\]\ /g" /etc/apt/sources.list
 
+
+apt-get update
+
+# Without installing ca-certificates, https cannot be downloaded
+yes | apt-get install ca-certificates
+
 cat <<EOT >> /etc/apt/sources.list
-deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports ${UBUNTU_CODENAME} main universe
-deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports ${UBUNTU_CODENAME}-updates main universe
-deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports ${UBUNTU_CODENAME}-security main universe
+
+## aarch64 and armhf sources
+
+deb [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME} main restricted universe multiverse
+# deb-src [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME} main restricted universe multiverse
+
+deb [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-security main restricted universe multiverse
+# deb-src [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-security main restricted universe multiverse
+
+deb [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-updates main restricted universe multiverse
+# deb-src [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-updates main restricted universe multiverse
+
+deb [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-backports main restricted universe multiverse
+# deb-src [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-backports main restricted universe multiverse
+
+## Not recommended
+# deb [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-proposed main restricted universe multiverse
+# deb-src [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-proposed main restricted universe multiverse
+
 EOT

```

### Toolchain and Build Script Modifications

Reference: [kasitoru/tflite_runtime-musl: Build the TensorFlow Lite tflite_runtime Python library for ARM devices using musl libc (github.com)](https://github.com/kasitoru/tflite_runtime-musl/tree/main)

[Build TensorFlow Lite Python Wheel Package (google.cn)](https://tensorflow.google.cn/lite/guide/build_cmake_pip?hl=zh-cn)

Things to note (all modifications are commented; see the diff below for details):

1. download_toolchains.sh
    1. Switch the toolchain from glibc to musl libc.
    2. The toolchains could not be downloaded from the external URLs, so I replaced them; the toolchain files are also available on the intranet pha.
    3. If this script is not modified, or the modification fails, the build errors with `error: conflicting types for 'cpuinfo_isa'; have 'struct cpuinfo_arm_isa'`.
2. build_pip_package_with_cmake.sh
    1. Disable certificate verification and enlarge the buffer so that `git clone` succeeds.
    2. [Apply a patch, otherwise compilation against musl will fail](https://github.com/sartura/flatbuffers/commit/92bd62407329caacd66e92e5bfd2949f2f137bfe#diff-cbd47ef1c2023c38eaa5f2ae941fb09e74c38cafdfbbf968ea68b6cc96a7d257R268)

```diff
diff --git a/tensorflow/lite/tools/cmake/download_toolchains.sh b/tensorflow/lite/tools/cmake/download_toolchains.sh
index 02ff70c7..9ea3ad6c 100755
--- a/tensorflow/lite/tools/cmake/download_toolchains.sh
+++ b/tensorflow/lite/tools/cmake/download_toolchains.sh
@@ -14,13 +14,9 @@
 # limitations under the License.
 # ==============================================================================
 
-# Download GCC 8.3 based toolchains.
-# Using up-to-date toolchain introduces compatibility issues.
-# https://github.com/tensorflow/tensorflow/issues/59631
-#
-# In Bazel build, we uses GCC 11.3 based toolchains to support FP16 better
-# with XNNPACK. https://github.com/tensorflow/tensorflow/pull/57585
-
+# If this script is not modified or fails after modification, the error
+# error: conflicting types for 'cpuinfo_isa'; have 'struct cpuinfo_arm_isa'
+# will occur
 set -e
 
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
@@ -30,55 +26,88 @@ TOOLCHAINS_DIR=$(realpath tensorflow/lite/tools/cmake/toolchains)
 mkdir -p ${TOOLCHAINS_DIR}
 
 case $1 in
-  armhf)
-    if [[ ! -d "${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf" ]]; then
-      curl -LO https://storage.googleapis.com/mirror.tensorflow.org/developer.arm.com/media/Files/downloads/gnu-a/8.3-2019.03/bin rel/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz >&2
-      tar xvf gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz -C ${TOOLCHAINS_DIR} >&2
+	armhf)
+    ARMCC_ROOT=${TOOLCHAINS_DIR}/armv7l-linux-musleabihf-cross
+    if [[ ! -d ${ARMCC_ROOT} ]]; then
+      curl -LO https://more.musl.cc/10/x86_64-linux-musl/armv7l-linux-musleabihf-cross.tgz >&2
+      tar zxvf armv7l-linux-musleabihf-cross.tgz -C ${TOOLCHAINS_DIR} >&2
+      rm armv7l-linux-musleabihf-cross.tgz     
+      echo '#define __BEGIN_DECLS extern "C" {' >> "${ARMCC_ROOT}/armv7l-linux-musleabihf/include/features.h"
+      echo '#define __END_DECLS }' >> "${ARMCC_ROOT}/armv7l-linux-musleabihf/include/features.h"
+      echo '#define __THROW' >> "${ARMCC_ROOT}/armv7l-linux-musleabihf/include/features.h"
+      echo '#define __nonnull(params)' >> "${ARMCC_ROOT}/armv7l-linux-musleabihf/include/features.h"
     fi
-    ARMCC_ROOT=${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf
     echo "ARMCC_FLAGS=\"-march=armv7-a -mfpu=neon-vfpv4 -funsafe-math-optimizations \
-      -isystem ${ARMCC_ROOT}/lib/gcc/arm-linux-gnueabihf/8.3.0/include \
-      -isystem ${ARMCC_ROOT}/lib/gcc/arm-linux-gnueabihf/8.3.0/include-fixed \
-      -isystem ${ARMCC_ROOT}/arm-linux-gnueabihf/include/c++/8.3.0 \
-      -isystem ${ARMCC_ROOT}/arm-linux-gnueabihf/libc/usr/include \
+      -isystem ${ARMCC_ROOT}/armv7l-linux-musleabihf/include/c++/10.2.1 \
+      -isystem ${ARMCC_ROOT}/armv7l-linux-musleabihf/include \
+      -isystem ${ARMCC_ROOT}/lib/gcc/armv7l-linux-musleabihf/10.2.1/include \
+      -isystem ${ARMCC_ROOT}/lib/gcc/armv7l-linux-musleabihf/10.2.1/include-fixed \
       -isystem \"\${CROSSTOOL_PYTHON_INCLUDE_PATH}\" \
       -isystem /usr/include\""
-    echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/arm-linux-gnueabihf-"
-    ;;
-  aarch64)
-    if [[ ! -d "${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu" ]]; then
-      curl -LO https://storage.googleapis.com/mirror.tensorflow.org/developer.arm.com/media/Files/downloads/gnu-a/8.3-2019.03/binrel/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz >&2
-      tar xvf gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz -C ${TOOLCHAINS_DIR} >&2
+    echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/armv7l-linux-musleabihf-"
+		;;
+	aarch64)
+    ARMCC_ROOT=${TOOLCHAINS_DIR}/aarch64-linux-musl-cross
+    if [[ ! -d ${ARMCC_ROOT} ]]; then
+      curl -LO https://more.musl.cc/10/x86_64-linux-musl/aarch64-linux-musl-cross.tgz >&2
+      tar zxvf aarch64-linux-musl-cross.tgz -C ${TOOLCHAINS_DIR} >&2
+      rm aarch64-linux-musl-cross.tgz
     fi
-    ARMCC_ROOT=${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu
     echo "ARMCC_FLAGS=\"-funsafe-math-optimizations \
-      -isystem ${ARMCC_ROOT}/lib/gcc/aarch64-linux-gnu/8.3.0/include \
-      -isystem ${ARMCC_ROOT}/lib/gcc/aarch64-linux-gnu/8.3.0/include-fixed \
-      -isystem ${ARMCC_ROOT}/aarch64-linux-gnu/include/c++/8.3.0 \
-      -isystem ${ARMCC_ROOT}/aarch64-linux-gnu/libc/usr/include \
+      -isystem ${ARMCC_ROOT}/aarch64-linux-musl/include/c++/10.2.1 \
+      -isystem ${ARMCC_ROOT}/aarch64-linux-musl/include \
+      -isystem ${ARMCC_ROOT}/lib/gcc/aarch64-linux-musl/10.2.1/include \
+      -isystem ${ARMCC_ROOT}/lib/gcc/aarch64-linux-musl/10.2.1/include-fixed \
       -isystem \"\${CROSSTOOL_PYTHON_INCLUDE_PATH}\" \
       -isystem /usr/include\""
-    echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/aarch64-linux-gnu-"
-    ;;
-  rpi0)
-    if [[ ! -d "${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf" ]]; then
-      curl -LO https://storage.googleapis.com/mirror.tensorflow.org/developer.arm.com/media/Files/downloads/gnu-a/8.3-2019.03/binrel/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz >&2
-      tar xvf gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz -C ${TOOLCHAINS_DIR} >&2
+    echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/aarch64-linux-musl-"
+		;;
+	rpi0)
+    ARMCC_ROOT=${TOOLCHAINS_DIR}/armv6-linux-musleabihf-cross
+    if [[ ! -d ${ARMCC_ROOT} ]]; then
+      curl -LO https://more.musl.cc/10/x86_64-linux-musl/armv6-linux-musleabihf-cross.tgz >&2
+      tar zxvf armv6-linux-musleabihf-cross.tgz -C ${TOOLCHAINS_DIR} >&2
+      rm armv6-linux-musleabihf-cross.tgz
+      echo '#define __BEGIN_DECLS extern "C" {' >> "${ARMCC_ROOT}/armv6-linux-musleabihf/include/features.h"
+      echo '#define __END_DECLS }' >> "${ARMCC_ROOT}/armv6-linux-musleabihf/include/features.h"
+      echo '#define __THROW' >> "${ARMCC_ROOT}/armv6-linux-musleabihf/include/features.h"
+      echo '#define __nonnull(params)' >> "${ARMCC_ROOT}/armv6-linux-musleabihf/include/features.h"
     fi
-    ARMCC_ROOT=${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf
-    echo "ARMCC_FLAGS=\"-march=armv6 -mfpu=vfp -mfloat-abi=hard -funsafe-math-optimizations \
-      -isystem ${ARMCC_ROOT}/lib/gcc/arm-linux-gnueabihf/8.3.0/include \
-      -isystem ${ARMCC_ROOT}/lib/gcc/arm-linux-gnueabihf/8.3.0/include-fixed \
-      -isystem ${ARMCC_ROOT}/arm-linux-gnueabihf/include/c++/8.3.0 \
-      -isystem ${ARMCC_ROOT}/arm-linux-gnueabihf/libc/usr/include \
+    echo "ARMCC_FLAGS=\"-march=armv6 -mfpu=vfp -funsafe-math-optimizations \
+      -isystem ${ARMCC_ROOT}/armv6-linux-musleabihf/include/c++/10.2.1 \
+      -isystem ${ARMCC_ROOT}/armv6-linux-musleabihf/include \
+      -isystem ${ARMCC_ROOT}/lib/gcc/armv6-linux-musleabihf/10.2.1/include \
+      -isystem ${ARMCC_ROOT}/lib/gcc/armv6-linux-musleabihf/10.2.1/include-fixed \
       -isystem \"\${CROSSTOOL_PYTHON_INCLUDE_PATH}\" \
       -isystem /usr/include\""
-    echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/arm-linux-gnueabihf-"
-    ;;
-  *)
-    echo "Usage: download_toolchains.sh [armhf|aarch64|rpi0]" >&2
+    echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/armv6-linux-musleabihf-"
+		;;
+	*)
+		echo "Usage: download_toolchains.sh [armhf|aarch64|rpi0]" >&2
     exit
-    ;;
+		;;
   esac
 
 echo "download_toolchains.sh completed successfully." >&2
diff --git a/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh b/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
index aa5b9eb7..6b453575 100755
--- a/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
+++ b/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
@@ -15,6 +15,12 @@
 # ==============================================================================
 set -ex
 
+# For the subsequent git clone to succeed
+git config --global http.sslverify false
+git config --global https.sslverify false
+git config --global http.postBuffer 52428000
+git config --global https.postBuffer 52428000
+
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 PYTHON="${CI_BUILD_PYTHON:-python3}"
 VERSION_SUFFIX=${VERSION_SUFFIX:-}
@@ -50,7 +56,9 @@ if [ ! -z "${CI_BUILD_HOME}" ] && [ `pwd` = "/workspace" ]; then
 fi
 
 # Build source tree.
-rm -rf "${BUILD_DIR}" && mkdir -p "${BUILD_DIR}/tflite_runtime"
+# Do not delete it every time you compile, change it to manual deletion. Otherwise, you have to re-download the dependency packages every time you compile, which is very slow.
+# rm -rf "${BUILD_DIR}" && mkdir -p "${BUILD_DIR}/tflite_runtime"
+mkdir -p "${BUILD_DIR}/tflite_runtime"
 cp -r "${TENSORFLOW_LITE_DIR}/tools/pip_package/debian" \
       "${TENSORFLOW_LITE_DIR}/tools/pip_package/MANIFEST.in" \
       "${TENSORFLOW_LITE_DIR}/python/interpreter_wrapper" \
@@ -120,7 +128,7 @@ case "${TENSORFLOW_TARGET}" in
       -DCMAKE_SYSTEM_PROCESSOR=aarch64 \
       -DXNNPACK_ENABLE_ARM_I8MM=OFF \
       -DTFLITE_HOST_TOOLS_DIR="${HOST_BUILD_DIR}" \
-      "${TENSORFLOW_LITE_DIR}"
+      "${TENSORFLOW_LITE_DIR}" --debug-output # If the compilation process is not detailed, often one of the packages gets stuck due to network issues, and it remains stuck for a long time without knowing.
     ;;
   native)
     BUILD_FLAGS=${BUILD_FLAGS:-"-march=native -I${PYTHON_INCLUDE} -I${PYBIND11_INCLUDE} -I${NUMPY_INCLUDE}"}
@@ -138,6 +146,13 @@ case "${TENSORFLOW_TARGET}" in
     ;;
 esac
 
+# refer https://github.com/versatica/mediasoup/issues/1223
+# refer https://github.com/sartura/flatbuffers/commit/92bd62407329caacd66e92e5bfd2949f2f137bfe#diff-cbd47ef1c2023c38eaa5f2ae941fb09e74c38cafdfbbf968ea68b6cc96a7d257R268
+cd "${BUILD_DIR}/cmake_build/flatbuffers/include/flatbuffers"
+patch base.h < flatbuffers_base.h.diff
+cd -
+
 cmake --build . --verbose -j ${BUILD_NUM_JOBS} -t _pywrap_tensorflow_interpreter_wrapper
 cd "${BUILD_DIR}"
```
]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
        <item>
            <title><![CDATA[2024 May Day Holiday Trip to Hong Kong]]></title>
            <link>https://blog.hekmon.com/blogs/travel-to-HongKong-2024</link>
            <guid>https://blog.hekmon.com/blogs/travel-to-HongKong-2024</guid>
            <pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[Travel to HongKong 2024]]></description>
            <content:encoded><![CDATA[
# 📍 Hong Kong Travel Notes & Gripes

I went to Hong Kong over the May Day holiday, my first trip outside the mainland, and came back with plenty of impressions.

## Impressions
1️⃣ Fast pace 🚶
It shows in the details: the urgent beeping of short green lights, buses tailing each other closely, and so on.
2️⃣ A stronghold of world finance 💰
Banks everywhere, all crowded (including backpackers like me coming from the mainland just to open an account), plus many virtual banks and crypto-friendly banks, such as ZA Bank, billed as Hong Kong's first virtual bank.
3️⃣ Steep prices and luxury on every corner 💍
A 1.25 km taxi ride cost 52 HKD; golden Buddha statues and handbags priced in the hundreds of thousands, and million-dollar shoebox flats 🏠 are everywhere. Hong Kong's minimum wage is four times Hangzhou's, so incomes are high to begin with, and the territory-wide tax-free policy built a thriving trade port. For mainlanders, though, earning RMB while spending HKD is not a good deal.
4️⃣ A developed upper economy on top of an unmissable base of labor 👨🏿‍💼
The mainland has cross-province migrant workers; Hong Kong has cross-border ones. Workers from Southeast Asia and Africa keep food delivery, couriers, and other everyday services running, and fill the streets with sidewalk picnics ⛺️ on their days off.

## Tips

A few practical notes for traveling in 🇭🇰:
1️⃣ The Octopus card is excellent: nearly every merchant accepts it, and paired with Apple Pay NFC, payments are seamless; I never used cash. The UnionPay app (云闪付) also ran a 15-off-300 top-up promotion over May Day. PS: UnionPay offline payments may soon get a push on the mainland too.
2️⃣ Don't trust the trending food posts on Xiaohongshu: the queues are huge and the food is disappointing. Hyped spots like 金华冰室 (Kam Wah Cafe) and 新记车仔面 are not worth it; browse openRice, the local equivalent of Dianping, instead.
3️⃣ Grab the deals in moderation: KeeTa gives new users a 50-off-80 coupon (and another 50-off-80 for inviting a friend who places an order), and ZA Bank offered a 300 HKD cashback.
4️⃣ For bank accounts, book an appointment rather than walking in. HSBC was packed so I skipped the queue; with an appointment in Sha Tin I opened a Hang Seng account in person and got the card on the spot (mainland credit card billing address ➕ proof of funds). Online I applied for Bank of China (Hong Kong) (approved the next day, card mailed out, still in transit) ➕ ZA Bank. Sadly, no credit card 💳.

## Highlights

🌟 As a sightseeing tourist, my takeaway from Hong Kong is that I won't come again: the oppressive urban grid, run-down street blocks, and sparse greenery are far less comfortable than Hangzhou with its flower-covered 🌸 elevated roads.
🌟 As a working professional, I would come back. Having a financial identity here is the first step toward the wider world.

Those are my quick notes on 🇭🇰. Next stop 🚌 🇲🇴]]></content:encoded>
            <author>wishhself@gmail.com (Hekmon)</author>
        </item>
    </channel>
</rss>