Run TensorFlow on ARM
To add image recognition and user-behavior recognition to our routers, we set out to bring AI onto the device. Research showed that TensorFlow can run on ARM, so we started working in this direction.
Python Environment Preparation
Prerequisites
- A router device.
- PC: Ubuntu, with access to GitHub, GitLab, and Gitee.
- Compatible TensorFlow version: 2.18.0-rc1
Modify Configuration Files
First, determine the architecture of your DUT:
# Execute on DUT
opkg info | grep Architecture | sort | uniq
This shows the architecture of the installed packages. Here it is aarch64_cortex-a73_neon-vfpv4, but the software source download URL does not provide aarch64_cortex-a73_neon-vfpv4 packages, so we download aarch64_generic instead.
OpenWrt software source version: this is determined mainly by the Python 3 version you want to install. As long as the architecture is correct, the release itself matters little; note, for example, that 19.07 only has Python 3.7 packages and 21.02 only has Python 3.9 packages.
Since the software source download URL does not provide aarch64_cortex-a73_neon-vfpv4, we add aarch64_generic to the supported package architectures:
# Execute on DUT
echo arch all 100 >> /etc/opkg.conf
echo arch aarch64_generic 200 >> /etc/opkg.conf
echo arch aarch64_cortex-a73_neon-vfpv4 300 >> /etc/opkg.conf
This number refers to the priority of the software source.
Install Python and pip (into RAM)
First, point the dynamic-library search path at /tmp and define an alias for pip:
# Execute on DUT
export LD_LIBRARY_PATH=/tmp/usr/lib:$LD_LIBRARY_PATH
alias pip='python -m pip'
The alias is needed because, when pip is installed this way, it can otherwise only be run as python -m pip.
OPKG Automated Offline Installation
Since the DUT does not have wget
, and my previous attempt to install wget
failed, we cannot use opkg install
for online installation on the DUT.
Initially, my idea was to directly download the required packages from the mirror source and then install them offline. However, this approach has many disadvantages:
- You have to download dependency packages one by one, and you only discover what is missing after an installation fails.
- Changing versions or architectures requires starting over, which is very troublesome.
My current solution is a script that resolves opkg dependencies on the PC, downloads the required packages there, generates a shell script that encodes the dependency order, and transfers it to the DUT for offline installation. (Dependencies are resolved by parsing the Packages index file of each feed.)
The whole process is automated: given the required opkg names, a TFTP server address, a mirror URL, and a few configuration options, the Python script downloads the packages and their dependencies, works out the install order, and even generates the DUT installation script.
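The opkg_prepare.py helper itself is not included in this post, but the core of the dependency resolution is straightforward. The sketch below only illustrates the idea; the function names and the simplified handling of version constraints and cross-feed packages are my own assumptions, not the actual script:

# Illustrative sketch only (not the actual opkg_prepare.py): resolve ipk
# dependencies by parsing a feed's Packages index. Version constraints and
# packages provided by other feeds are ignored for simplicity.
import re
import urllib.request


def parse_packages_index(mirror_url):
    """Return {package name: (ipk filename, [dependency names])} from the Packages file."""
    text = urllib.request.urlopen(mirror_url + "Packages").read().decode()
    index = {}
    for block in text.split("\n\n"):
        fields = dict(re.findall(r"^([A-Za-z0-9-]+): (.*)$", block, re.M))
        if "Package" in fields:
            # "Depends: libc, python3-base (= 3.11.7-1)" -> ["libc", "python3-base"]
            deps = [d.strip().split(" ")[0] for d in fields.get("Depends", "").split(",") if d.strip()]
            index[fields["Package"]] = (fields.get("Filename", ""), deps)
    return index


def resolve_install_order(pkg, index, ordered=None, seen=None):
    """Depth-first walk so that dependencies are listed before the package itself."""
    if ordered is None:
        ordered, seen = [], set()
    if pkg in seen or pkg not in index:  # already handled, or not in this feed
        return ordered
    seen.add(pkg)
    filename, deps = index[pkg]
    for dep in deps:
        resolve_install_order(dep, index, ordered, seen)
    ordered.append(filename)
    return ordered

Each filename in the resulting order can then be downloaded from the mirror on the PC and turned into a tftp + opkg install line of the generated installation script.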
Automated Python and tflite_runtime Environment Integration Script
For integrating Python and tflite_runtime, an automated Python script has also been written: tflite_runtime_py_env_prepare.py, shown below:
import argparse
import os
import subprocess

OPKGS_NEEDED = [
    "python3-pip",
    "python3-numpy",
    "python3-pillow",
    # "gcc",
]
TFTP_SERVER_IP = "192.168.1.100"
OPKG_MIRROR_URL = "https://mirrors.aliyun.com/openwrt/releases/packages-23.05/aarch64_generic/"  # Precise to version and architecture
INSTALL_IN_RAM = True  # Whether to install in RAM


def shell_cmd_pkg(url_prefix, pkg_name):
    global INSTALL_IN_RAM
    shell_cmd = ""
    if url_prefix:
        os.system(f"wget --no-check-certificate --no-clobber {url_prefix}{pkg_name}")
    shell_cmd += f"tftp -gr {pkg_name} {TFTP_SERVER_IP} && "
    shell_cmd += f"chmod 777 {pkg_name} && "
    if pkg_name.endswith(".ipk"):
        shell_cmd += f"opkg install {pkg_name}{' -d ram' if INSTALL_IN_RAM else ''} && "
    elif pkg_name.endswith(".sh"):
        shell_cmd += f"./{pkg_name} && "
    elif pkg_name.endswith(".whl"):
        shell_cmd += f"python -m pip install {pkg_name} && "
    shell_cmd += f"rm {pkg_name}\n"
    return shell_cmd


if __name__ == '__main__':
    shell_cmd_content = ""
    # Pre-preparation
    shell_cmd_content += "cd /tmp\n"
    shell_cmd_content += "export LD_LIBRARY_PATH=/tmp/usr/lib:$LD_LIBRARY_PATH\n"
    # for gcc, aarch64
    # shell_cmd_content += shell_cmd_pkg("https://downloads.openwrt.org/snapshots/targets/ipq807x/generic/packages/", "libstdcpp6_12.3.0-4_aarch64_cortex-a53.ipk")
    # for py311, armhf
    # shell_cmd_content += shell_cmd_pkg("https://mirrors.aliyun.com/openwrt/releases/23.05.5/targets/mediatek/mt7629/packages/", "libatomic1_12.3.0-4_arm_cortex-a7.ipk")
    # Install Python and pip
    subprocess.run(["python3", "opkg_prepare.py",
                    "--tftp-server-ip", TFTP_SERVER_IP,
                    "--mirror-url", OPKG_MIRROR_URL]
                   + (["--ram"] if INSTALL_IN_RAM else [])
                   + OPKGS_NEEDED)
    shell_cmd_content += shell_cmd_pkg(None, "opkg_install_from_tftp.sh")
    # Install pip
    # os.system("wget --no-check-certificate -nc https://bootstrap.pypa.io/pip/get-pip.py")
    # shell_cmd_content += f"tftp -gr get-pip.py {TFTP_SERVER_IP} && python3 get-pip.py && rm get-pip.py\n"
    # Install pkginfo
    shell_cmd_content += shell_cmd_pkg("https://files.pythonhosted.org/packages/c0/38/d617739840a2f576e400f03fea0a
    # Install tflite_runtime
    shell_cmd_content += shell_cmd_pkg("xxx", "tflite_runtime-2.18.0-cp311-cp311-linux_aarch64.whl")
    with open("DUT_env_prepare.sh", "w") as file:
        file.write(shell_cmd_content)
(WARNING) tflite-runtime currently only works when paired with Python 3.11 (i.e., the packages-23.05 feed): because of the NumPy version required by tensorflow-2.18.0, packages-23.05 is the only compatible feed in the official and OpenWrt sources.
Based on your own environment and needs, modify OPKGS_NEEDED, TFTP_SERVER_IP, OPKG_MIRROR_URL, and INSTALL_IN_RAM in tflite_runtime_py_env_prepare.py.
After executing tflite_runtime_py_env_prepare.py on the PC, you will get all the dependencies plus a file named DUT_env_prepare.sh. Transfer this sh file to the DUT and run it to install tflite_runtime offline.
If you only need to integrate Python, simply comment out the line containing "tflite_runtime-2.18.0-cp311-cp311-linux_aarch64.whl" in tflite_runtime_py_env_prepare.py.
Verify tflite Integration Success
Run a simple image-classification test on the DUT; if the output looks like the following, the integration was successful.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Prediction: tabby, tabby cat
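A minimal sketch of such a test script follows. The model, label file, and image names are placeholders of my own (any image classifier, e.g. a MobileNet .tflite model, will do), not files provided earlier in this post:

# Minimal verification sketch: one inference with tflite_runtime.
# "mobilenet_v2.tflite", "labels.txt" and "cat.jpg" are placeholder names.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_v2.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the test image to the model's expected NHWC input shape.
_, height, width, _ = inp["shape"]
image = Image.open("cat.jpg").convert("RGB").resize((width, height))
data = np.expand_dims(np.array(image, dtype=np.uint8), axis=0)
if inp["dtype"] == np.float32:  # float models expect normalized input
    data = (data.astype(np.float32) - 127.5) / 127.5

interpreter.set_tensor(inp["index"], data)
interpreter.invoke()
scores = np.squeeze(interpreter.get_tensor(out["index"]))

with open("labels.txt") as f:
    labels = [line.strip() for line in f]
print("Prediction:", labels[int(np.argmax(scores))])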
Locally Build tflite-runtime Wheel Package
The official wheels are built against glibc, but OpenWrt's C standard library is musl, so we need to rebuild the wheel ourselves.
During the build process, this repository is a useful reference:
feranick/TFlite-builds: TFlite cross-platform builds (github.com)
Here is a record of the build process and the pitfalls encountered:
TensorFlow Download
First, download the TensorFlow source from GitHub. A full git clone is almost impossible to complete successfully, so select a fixed version from the tags and download the zip archive instead.
Cross-compilation Based on musl libc
Quick build:
- cd into your tensorflow-2.18-rc1 folder.
- Download the patch file from T417309#10451504, apply it, and start the Docker build:
patch -p1 < tflite-2-18-rc1-musl.patch
make -C tensorflow/lite/tools/pip_package docker-build TENSORFLOW_TARGET=aarch64 PYTHON_VERSION=3.11
- After the Docker image is built, execute inside the container (with $(TENSORFLOW_TARGET) replaced by the actual target, aarch64 here):
/tensorflow/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh $(TENSORFLOW_TARGET)
- Wait for the build to complete.
Docker Network Unblocking
Most Docker build failures are caused by network problems with the ubuntu source (amd64), the ubuntu-ports source (arm64 and armhf), and the PPA source. One cause is the intranet gateway; the other is being unable to reach foreign sources.
The description here is brief, but this pitfall actually cost me a lot of time.
Things to note:
- Dockerfile.py3
  - Both the wget and curl https downloads need certificate verification disabled.
  - Adding the deadsnakes PPA directly causes errors, so add its mirror source instead.
  - Set the timezone for apt-get install, otherwise it gets stuck waiting for input.
  - Update the cmake version; refer to https://github.com/feranick/TFlite-builds
- update_sources.sh and update_ppa_deadsnakes.sh: change the sources
  - http sources cannot be used, otherwise a hash sum mismatch error occurs; only https sources work.
  - Use the USTC mirror for the PPA: https://launchpad.proxy.ustclug.org/
- Makefile: refer to https://github.com/feranick/TFlite-builds
All modifications are commented, see the diff file below for details:
diff --git a/tensorflow/lite/tools/pip_package/Dockerfile.py3 b/tensorflow/lite/tools/pip_package/Dockerfile.py3
index 63373905..d9035768 100644
--- a/tensorflow/lite/tools/pip_package/Dockerfile.py3
+++ b/tensorflow/lite/tools/pip_package/Dockerfile.py3
@@ -33,33 +33,40 @@ RUN apt-get update && \
apt-get clean
# Install Bazel.
-RUN wget https://github.com/bazelbuild/bazelisk/releases/download/v1.15.0/bazelisk-linux-amd64 \
+# wget and curl https links need to cancel authentication.
+RUN wget --no-check-certificate https://github.com/bazelbuild/bazelisk/releases/download/v1.15.0/bazelisk-linux-amd64 \
-O /usr/local/bin/bazel && chmod +x /usr/local/bin/bazel
# Install Python packages.
RUN dpkg --add-architecture armhf
RUN dpkg --add-architecture arm64
-RUN yes | add-apt-repository ppa:deadsnakes/ppa
-RUN apt-get update && \
- apt-get install -y \
+## Adding deadsnakes' ppa will cause errors, so switch to adding mirror sources.
+COPY update_ppa_deadsnakes.sh /
+RUN /update_ppa_deadsnakes.sh
+# RUN yes | add-apt-repository ppa:deadsnakes/ppa
+RUN apt-get update
+# Add timezone when apt-get install, otherwise it will get stuck.
+RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC \
+ apt-get install -y \
python$PYTHON_VERSION \
python$PYTHON_VERSION-dev \
python$PYTHON_VERSION-venv \
python$PYTHON_VERSION-distutils \
libpython$PYTHON_VERSION-dev \
libpython$PYTHON_VERSION-dev:armhf \
- libpython$PYTHON_VERSION-dev:arm64
+ libpython$PYTHON_VERSION-dev:arm64
RUN ln -sf /usr/bin/python$PYTHON_VERSION /usr/bin/python3
-RUN curl -OL https://bootstrap.pypa.io/get-pip.py
+RUN curl -k -OL https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
RUN rm get-pip.py
RUN pip3 install --upgrade pip
RUN pip3 install numpy~=$NUMPY_VERSION setuptools pybind11
RUN ln -sf /usr/include/python$PYTHON_VERSION /usr/include/python3
RUN ln -sf /usr/local/lib/python$PYTHON_VERSION/dist-packages/numpy/core/include/numpy /usr/include/python3/numpy
-RUN curl -OL https://github.com/Kitware/CMake/releases/download/v3.16.8/cmake-3.16.8-Linux-x86_64.sh
+# Modify cmake version, refer to https://github.com/feranick/TFlite-builds
+RUN curl -k -OL https://cmake.org/files/v3.29/cmake-3.29.6-linux-x86_64.sh
RUN mkdir /opt/cmake
-RUN sh cmake-3.16.8-Linux-x86_64.sh --prefix=/opt/cmake --skip-license
+RUN sh cmake-3.29.6-linux-x86_64.sh --prefix=/opt/cmake --skip-license
RUN ln -s /opt/cmake/bin/cmake /usr/local/bin/cmake
ENV CI_BUILD_PYTHON=python$PYTHON_VERSION
diff --git a/tensorflow/lite/tools/pip_package/Makefile b/tensorflow/lite/tools/pip_package/Makefile
index 24bc4970..ff85b8da 100644
--- a/tensorflow/lite/tools/pip_package/Makefile
+++ b/tensorflow/lite/tools/pip_package/Makefile
@@ -13,9 +13,10 @@
# limitations under the License.
# Values: debian:<version>, ubuntu:<version>
BASE_IMAGE ?= ubuntu:20.04
PYTHON_VERSION ?= 3.11
-NUMPY_VERSION ?= 1.23.2
+NUMPY_VERSION ?= 1.24.4
# Values: rpi, aarch64, native
TENSORFLOW_TARGET ?= native
@@ -73,4 +74,7 @@ docker-build: docker-image
--rm --interactive $(shell tty -s && echo --tty) \
$(DOCKER_PARAMS) \
$(TAG_IMAGE) \
- /with_the_same_user /bin/bash -C /tensorflow/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh $(TENSORFLOW_TARGET)
+ /with_the_same_user /bin/bash
+ # /with_the_same_user /bin/bash -C /tensorflow/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh $(TENSORFLOW_TARGET)
+# Slightly modified above to avoid reloading docker every time there is a compilation error
+# Correspondingly, after entering the container, manually execute the command after the commented-out bash -C
diff --git a/tensorflow/lite/tools/pip_package/update_ppa_deadsnakes.sh b/tensorflow/lite/tools/pip_package/update_ppa_deadsnakes.sh
new file mode 100755
index 00000000..0332e665
--- /dev/null
+++ b/tensorflow/lite/tools/pip_package/update_ppa_deadsnakes.sh
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+set -ex
+
+. /etc/os-release
+
+[[ "${NAME}" == "Ubuntu" ]] || exit 0
+
+yes | apt-get install gnupg
+apt-key adv --keyserver keyserver.ubuntu.com --recv-keys BA6932366A755776 # USTC PPA source public key
+
+cat <<EOT >> /etc/apt/sources.list
+
+## python deadsnakes ppa
+deb https://launchpad.proxy.ustclug.org/deadsnakes/ppa/ubuntu/ ${UBUNTU_CODENAME} main
+# deb-src https://launchpad.proxy.ustclug.org/deadsnakes/ppa/ubuntu/ ${UBUNTU_CODENAME} main
+
+EOT
\ No newline at end of file
diff --git a/tensorflow/lite/tools/pip_package/update_sources.sh b/tensorflow/lite/tools/pip_package/update_sources.sh
index 40e3213c..bf4898e5 100755
--- a/tensorflow/lite/tools/pip_package/update_sources.sh
+++ b/tensorflow/lite/tools/pip_package/update_sources.sh
@@ -15,14 +15,41 @@
# ==============================================================================
#!/bin/bash
+
+set -ex
+
. /etc/os-release
[[ "${NAME}" == "Ubuntu" ]] || exit 0
sed -i "s/deb\ /deb \[arch=amd64\]\ /g" /etc/apt/sources.list
+
+apt-get update
+
+# Without installing ca-certificates, https cannot be downloaded
+yes | apt-get install ca-certificates
+
cat <<EOT >> /etc/apt/sources.list
-deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports ${UBUNTU_CODENAME} main universe
-deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports ${UBUNTU_CODENAME}-updates main universe
-deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports ${UBUNTU_CODENAME}-security main universe
+
+## aarch64 and armhf sources
+
+deb [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME} main restricted universe multiverse
+# deb-src [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME} main restricted universe multiverse
+
+deb [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-security main restricted universe multiverse
+# deb-src [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-security main restricted universe multiverse
+
+deb [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-updates main restricted universe multiverse
+# deb-src [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-updates main restricted universe multiverse
+
+deb [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-backports main restricted universe multiverse
+# deb-src [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-backports main restricted universe multiverse
+
+## Not recommended
+# deb [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-proposed main restricted universe multiverse
+# deb-src [arch=arm64,armhf] https://repo.huaweicloud.com/ubuntu-ports/ ${UBUNTU_CODENAME}-proposed main restricted universe multiverse
+
EOT
Toolchain and Build Script Changes for musl libc
Build TensorFlow Lite Python Wheel Package (google.cn)
Things to note:
All modifications are commented, see the diff file below for details:
- download_toolchains.sh
  - Change the toolchain from glibc to musl libc.
  - The original toolchains cannot be downloaded from the external URLs, so I replaced them; the toolchain files are also available on the intranet pha.
  - If this script is not modified, or the modification fails, the build stops with: error: conflicting types for 'cpuinfo_isa'; have 'struct cpuinfo_arm_isa'
- build_pip_package_with_cmake.sh
  - Disable SSL verification and increase the buffer size so that the git clone steps succeed.
  - Apply a patch, otherwise compilation against musl will fail.
diff --git a/tensorflow/lite/tools/cmake/download_toolchains.sh b/tensorflow/lite/tools/cmake/download_toolchains.sh
index 02ff70c7..9ea3ad6c 100755
--- a/tensorflow/lite/tools/cmake/download_toolchains.sh
+++ b/tensorflow/lite/tools/cmake/download_toolchains.sh
@@ -14,13 +14,9 @@
# limitations under the License.
# ==============================================================================
-# Download GCC 8.3 based toolchains.
-# Using up-to-date toolchain introduces compatibility issues.
-# https://github.com/tensorflow/tensorflow/issues/59631
-#
-# In Bazel build, we uses GCC 11.3 based toolchains to support FP16 better
-# with XNNPACK. https://github.com/tensorflow/tensorflow/pull/57585
-
+# If this script is not modified or fails after modification, the error
+# error: conflicting types for 'cpuinfo_isa'; have 'struct cpuinfo_arm_isa'
+# will occur
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
@@ -30,55 +26,88 @@ TOOLCHAINS_DIR=$(realpath tensorflow/lite/tools/cmake/toolchains)
mkdir -p ${TOOLCHAINS_DIR}
case $1 in
- armhf)
- if [[ ! -d "${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf" ]]; then
- curl -LO https://storage.googleapis.com/mirror.tensorflow.org/developer.arm.com/media/Files/downloads/gnu-a/8.3-2019.03/binrel/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz >&2
- tar xvf gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz -C ${TOOLCHAINS_DIR} >&2
+ armhf)
+ ARMCC_ROOT=${TOOLCHAINS_DIR}/armv7l-linux-musleabihf-cross
+ if [[ ! -d ${ARMCC_ROOT} ]]; then
+ curl -LO https://more.musl.cc/10/x86_64-linux-musl/armv7l-linux-musleabihf-cross.tgz >&2
+ tar zxvf armv7l-linux-musleabihf-cross.tgz -C ${TOOLCHAINS_DIR} >&2
+ rm armv7l-linux-musleabihf-cross.tgz
+ echo '#define __BEGIN_DECLS extern "C" {' >> "${ARMCC_ROOT}/armv7l-linux-musleabihf/include/features.h"
+ echo '#define __END_DECLS }' >> "${ARMCC_ROOT}/armv7l-linux-musleabihf/include/features.h"
+ echo '#define __THROW' >> "${ARMCC_ROOT}/armv7l-linux-musleabihf/include/features.h"
+ echo '#define __nonnull(params)' >> "${ARMCC_ROOT}/armv7l-linux-musleabihf/include/features.h"
fi
- ARMCC_ROOT=${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf
echo "ARMCC_FLAGS=\"-march=armv7-a -mfpu=neon-vfpv4 -funsafe-math-optimizations \
- -isystem ${ARMCC_ROOT}/lib/gcc/arm-linux-gnueabihf/8.3.0/include \
- -isystem ${ARMCC_ROOT}/lib/gcc/arm-linux-gnueabihf/8.3.0/include-fixed \
- -isystem ${ARMCC_ROOT}/arm-linux-gnueabihf/include/c++/8.3.0 \
- -isystem ${ARMCC_ROOT}/arm-linux-gnueabihf/libc/usr/include \
+ -isystem ${ARMCC_ROOT}/armv7l-linux-musleabihf/include/c++/10.2.1 \
+ -isystem ${ARMCC_ROOT}/armv7l-linux-musleabihf/include \
+ -isystem ${ARMCC_ROOT}/lib/gcc/armv7l-linux-musleabihf/10.2.1/include \
+ -isystem ${ARMCC_ROOT}/lib/gcc/armv7l-linux-musleabihf/10.2.1/include-fixed \
-isystem \"\${CROSSTOOL_PYTHON_INCLUDE_PATH}\" \
-isystem /usr/include\""
- echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/arm-linux-gnueabihf-"
- ;;
- aarch64)
- if [[ ! -d "${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu" ]]; then
- curl -LO https://storage.googleapis.com/mirror.tensorflow.org/developer.arm.com/media/Files/downloads/gnu-a/8.3-2019.03/binrel/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz >&2
- tar xvf gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz -C ${TOOLCHAINS_DIR} >&2
+ echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/armv7l-linux-musleabihf-"
+ ;;
+ aarch64)
+ ARMCC_ROOT=${TOOLCHAINS_DIR}/aarch64-linux-musl-cross
+ if [[ ! -d ${ARMCC_ROOT} ]]; then
+ curl -LO https://more.musl.cc/10/x86_64-linux-musl/aarch64-linux-musl-cross.tgz >&2
+ tar zxvf aarch64-linux-musl-cross.tgz -C ${TOOLCHAINS_DIR} >&2
+ rm aarch64-linux-musl-cross.tgz
fi
- ARMCC_ROOT=${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu
echo "ARMCC_FLAGS=\"-funsafe-math-optimizations \
- -isystem ${ARMCC_ROOT}/lib/gcc/aarch64-linux-gnu/8.3.0/include \
- -isystem ${ARMCC_ROOT}/lib/gcc/aarch64-linux-gnu/8.3.0/include-fixed \
- -isystem ${ARMCC_ROOT}/aarch64-linux-gnu/include/c++/8.3.0 \
- -isystem ${ARMCC_ROOT}/aarch64-linux-gnu/libc/usr/include \
+ -isystem ${ARMCC_ROOT}/aarch64-linux-musl/include/c++/10.2.1 \
+ -isystem ${ARMCC_ROOT}/aarch64-linux-musl/include \
+ -isystem ${ARMCC_ROOT}/lib/gcc/aarch64-linux-musl/10.2.1/include \
+ -isystem ${ARMCC_ROOT}/lib/gcc/aarch64-linux-musl/10.2.1/include-fixed \
-isystem \"\${CROSSTOOL_PYTHON_INCLUDE_PATH}\" \
-isystem /usr/include\""
- echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/aarch64-linux-gnu-"
- ;;
- rpi0)
- if [[ ! -d "${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf" ]]; then
- curl -LO https://storage.googleapis.com/mirror.tensorflow.org/developer.arm.com/media/Files/downloads/gnu-a/8.3-2019.03/binrel/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz >&2
- tar xvf gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz -C ${TOOLCHAINS_DIR} >&2
+ echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/aarch64-linux-musl-"
+ ;;
+ rpi0)
+ ARMCC_ROOT=${TOOLCHAINS_DIR}/armv6-linux-musleabihf-cross
+ if [[ ! -d ${ARMCC_ROOT} ]]; then
+ curl -LO https://more.musl.cc/10/x86_64-linux-musl/armv6-linux-musleabihf-cross.tgz >&2
+ tar zxvf armv6-linux-musleabihf-cross.tgz -C ${TOOLCHAINS_DIR} >&2
+ rm armv6-linux-musleabihf-cross.tgz
+ echo '#define __BEGIN_DECLS extern "C" {' >> "${ARMCC_ROOT}/armv6-linux-musleabihf/include/features.h"
+ echo '#define __END_DECLS }' >> "${ARMCC_ROOT}/armv6-linux-musleabihf/include/features.h"
+ echo '#define __THROW' >> "${ARMCC_ROOT}/armv6-linux-musleabihf/include/features.h"
+ echo '#define __nonnull(params)' >> "${ARMCC_ROOT}/armv6-linux-musleabihf/include/features.h"
fi
- ARMCC_ROOT=${TOOLCHAINS_DIR}/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf
- echo "ARMCC_FLAGS=\"-march=armv6 -mfpu=vfp -mfloat-abi=hard -funsafe-math-optimizations \
- -isystem ${ARMCC_ROOT}/lib/gcc/arm-linux-gnueabihf/8.3.0/include \
- -isystem ${ARMCC_ROOT}/lib/gcc/arm-linux-gnueabihf/8.3.0/include-fixed \
- -isystem ${ARMCC_ROOT}/arm-linux-gnueabihf/include/c++/8.3.0 \
- -isystem ${ARMCC_ROOT}/arm-linux-gnueabihf/libc/usr/include \
+ echo "ARMCC_FLAGS=\"-march=armv6 -mfpu=vfp -funsafe-math-optimizations \
+ -isystem ${ARMCC_ROOT}/armv6-linux-musleabihf/include/c++/10.2.1 \
+ -isystem ${ARMCC_ROOT}/armv6-linux-musleabihf/include \
+ -isystem ${ARMCC_ROOT}/lib/gcc/armv6-linux-musleabihf/10.2.1/include \
+ -isystem ${ARMCC_ROOT}/lib/gcc/armv6-linux-musleabihf/10.2.1/include-fixed \
-isystem \"\${CROSSTOOL_PYTHON_INCLUDE_PATH}\" \
-isystem /usr/include\""
- echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/arm-linux-gnueabihf-"
- ;;
- *)
- echo "Usage: download_toolchains.sh [armhf|aarch64|rpi0]" >&2
+ echo "ARMCC_PREFIX=${ARMCC_ROOT}/bin/armv6-linux-musleabihf-"
+ ;;
+ *)
+ echo "Usage: download_toolchains.sh [armhf|aarch64|rpi0]" >&2
exit
- ;;
+ ;;
esac
echo "download_toolchains.sh completed successfully." >&2
diff --git a/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh b/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
index aa5b9eb7..6b453575 100755
--- a/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
+++ b/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
@@ -15,6 +15,12 @@
# ==============================================================================
set -ex
+# For the subsequent git clone to succeed
+git config --global http.sslverify false
+git config --global https.sslverify false
+git config --global http.postBuffer 52428000
+git config --global https.postBuffer 52428000
+
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PYTHON="${CI_BUILD_PYTHON:-python3}"
VERSION_SUFFIX=${VERSION_SUFFIX:-}
@@ -50,7 +56,9 @@ if [ ! -z "${CI_BUILD_HOME}" ] && [ `pwd` = "/workspace" ]; then
fi
# Build source tree.
-rm -rf "${BUILD_DIR}" && mkdir -p "${BUILD_DIR}/tflite_runtime"
+# Do not delete it every time you compile, change it to manual deletion. Otherwise, you have to re-download the dependency packages every time you compile, which is very slow.
+# rm -rf "${BUILD_DIR}" && mkdir -p "${BUILD_DIR}/tflite_runtime"
+mkdir -p "${BUILD_DIR}/tflite_runtime"
cp -r "${TENSORFLOW_LITE_DIR}/tools/pip_package/debian" \
"${TENSORFLOW_LITE_DIR}/tools/pip_package/MANIFEST.in" \
"${TENSORFLOW_LITE_DIR}/python/interpreter_wrapper" \
@@ -120,7 +128,7 @@ case "${TENSORFLOW_TARGET}" in
-DCMAKE_SYSTEM_PROCESSOR=aarch64 \
-DXNNPACK_ENABLE_ARM_I8MM=OFF \
-DTFLITE_HOST_TOOLS_DIR="${HOST_BUILD_DIR}" \
- "${TENSORFLOW_LITE_DIR}"
+ "${TENSORFLOW_LITE_DIR}" --debug-output # If the compilation process is not detailed, often one of the packages gets stuck due to network issues, and it remains stuck for a long time without knowing.
;;
native)
BUILD_FLAGS=${BUILD_FLAGS:-"-march=native -I${PYTHON_INCLUDE} -I${PYBIND11_INCLUDE} -I${NUMPY_INCLUDE}"}
@@ -138,6 +146,13 @@ case "${TENSORFLOW_TARGET}" in
;;
esac
+# refer https://github.com/versatica/mediasoup/issues/1223
+# refer https://github.com/sartura/flatbuffers/commit/92bd62407329caacd66e92e5bfd2949f2f137bfe#diff-cbd47ef1c2023c38eaa5f2ae941fb09e74c38cafdfbbf968ea68b6cc96a7d257R268
+cd "${BUILD_DIR}/cmake_build/flatbuffers/include/flatbuffers"
+patch base.h < flatbuffers_base.h.diff
+cd -
+
cmake --build . --verbose -j ${BUILD_NUM_JOBS} -t _pywrap_tensorflow_interpreter_wrapper
cd "${BUILD_DIR}"