Transitioning to Mac M1 (ARM64)
Problem statement
Recently, I was forced to change laptops and use a new Apple MacBook Pro M1 Max (ARM64) machine. I must admit the hardware is great and extremely silent compared with my former AMD Ryzen 3950X.
Everything started well, because onboarding my IDEs and Visual Studio Code was trivial. The surprise arrived when I tried to port one of my native C++ projects to the new hardware. In this article, I focus on that experience and the changes I had to make to be able to run and evolve the project on the new laptop.
A little bit about the C++ project:
I first tried to start the docker-compose stack on my laptop. Initially it seemed to work, but extremely slowly (around 5 minutes startup time compared with 1 minute on the former amd64 architecture). In the end, the stack ended up with containers failing with SIGABRT (signal 6) errors reported by qemu.
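A quick way to confirm that images are running under emulation is to check the architecture reported inside a container. This is a minimal sketch using a generic Alpine image rather than anything from my stack:
# prints x86_64 when the amd64 image runs under qemu emulation on an M1
docker run --rm --platform linux/amd64 alpine uname -m
# prints aarch64 when running natively
docker run --rm --platform linux/arm64 alpine uname -m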
Solution and Continuous Integration improvements
Hmmm, this was strange. Now, the fun begins... Basically, somewhere in the compiled code there are instructions which cannot be emulated out of the box.
In order to solve this issue, cross compilation was required so that we also have container images for arm64. While there are multiple ways of achieving this, I opted for buildx support for the linux/amd64 and linux/arm64 platforms. With this approach it was relatively easy to do a local build; here, linux/arm64 builds were approximately 3x faster than linux/amd64 ones. Still, one obvious problem remained: there was no way to compile the project natively for macOS because of its explicit dependency on epoll, which is Linux-only.
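For reference, this is roughly what the local multi-platform build looks like; the builder name is arbitrary, and the dockerfile path and image tag are reused from the CI snippet later in the article:
# create and select a buildx builder backed by the docker-container driver
docker buildx create --name multiarch --driver docker-container --use
# build both platforms in one go; without --push or --load the result stays in the build cache
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -f cpp/deployment/platform/runtime.dockerfile \
  --tag masterplanner/themis-runtime-compiled-large:latest \
  .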
To be able to do incremental changes and local development, I opted for the Docker-based remote development functionality of CLion (https://blog.jetbrains.com/clion/2020/01/using-docker-with-clion/). This is easy to set up and was a perfect match for my project, as I was already building a base image with the toolchain required to compile it. This solved the local development mode, and the actual experience is extremely good. Compared with my former AMD Ryzen 3950X I noticed a 1.5x improvement when compiling the overall codebase (and without any fan noise as an additional benefit).
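The JetBrains post boils down to running the toolchain image with an SSH server and pointing CLion at it as a remote host. A hedged sketch of that setup is below; the toolchain dockerfile name, container name, port, and the exact CLion settings path are assumptions, not the precise ones used in my project:
# build the toolchain image (compiler, cmake, debugger, ssh server)
docker build -t themis-toolchain -f cpp/deployment/platform/toolchain.dockerfile .
# run it in the background, exposing SSH only on localhost
docker run -d --name clion-remote-env -p 127.0.0.1:2222:22 themis-toolchain
# in CLion: Settings > Build, Execution, Deployment > Toolchains > add a Remote Host toolchain pointing at localhost:2222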
Ok, now I was almost done with the migration, except for continuous integration. This topic was quite tricky because the initial implementation of CI was done on top of GitHub Actions with hosted runners. While this is fine, there are no ARM hosted runners available. Moreover, because the base toolchain image grew a little in size after adding some additional packages, the hosted solution seemed impractical.
This is why I automated an infrastructure of self-hosted runners in AWS using c6gd spot instances. On top of this, there is an additional issue. The CI process was using a multi-stage approach: first the monorepo was built, then various tests were executed starting from that image, then static checks (clang-format, clang-tidy), and only after that were the final release images built and published.
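The runner bootstrap itself is the standard GitHub self-hosted runner installation for arm64 Linux, which can live in the instance user data. Below is a rough sketch; the runner version, repository URL, and registration token are placeholders you would fill in (or fetch via the GitHub API):
# download and unpack the arm64 runner package (pick a current release version)
mkdir actions-runner && cd actions-runner
curl -L -o runner.tar.gz \
  https://github.com/actions/runner/releases/download/v<RUNNER_VERSION>/actions-runner-linux-arm64-<RUNNER_VERSION>.tar.gz
tar xzf runner.tar.gz
# register the runner; on arm64 Linux it receives the self-hosted, linux and ARM64 labels used by runs-on
./config.sh --url https://github.com/<org>/<repo> --token <REGISTRATION_TOKEN> --unattended
./run.sh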
When introducing the two platforms in Docker, I quickly realised that buildx supports this easily with the docker-container driver, but the default Docker engine cannot load multi-arch (manifest list) images.
In order to solve this, I built and loaded each platform separately under its own -amd64 / -arm64 tag, ran the tests for each architecture (amd64 under qemu emulation), pushed the per-architecture images, and kept the multi-arch buildx build only for the final release images.
Here is a snippet of how to achieve this in GitHub Actions:
jobs:
  build-runtime:
    runs-on: [self-hosted, linux, ARM64]
    steps:
      - name: Checkout code base.
        uses: actions/checkout@v2
        with:
          submodules: true
      - name: Load x64 docker image.
        run: |
          docker buildx build --platform linux/amd64 \
            --tag masterplanner/themis-runtime-compiled-large:latest-amd64 \
            -f cpp/deployment/platform/runtime.dockerfile \
            --load .
      - name: Build and run common layer x64.
        run: docker run --platform linux/amd64 --rm masterplanner/themis-runtime-compiled-large:latest-amd64 -- "cd build && make -j16 common_tests && ./extensions/common/common_tests"
      - name: Load arm64 docker image.
        run: |
          docker buildx build --platform linux/arm64 \
            --tag masterplanner/themis-runtime-compiled-large:latest-arm64 \
            -f cpp/deployment/platform/runtime.dockerfile \
            --load .
      - name: Build and run common layer arm64.
        run: docker run --platform linux/arm64 --rm masterplanner/themis-runtime-compiled-large:latest-arm64 -- "cd build && make -j16 common_tests && ./extensions/common/common_tests"
      - name: Build runtime compact image.
        run: |
          docker tag masterplanner/themis-runtime-compiled-large:latest-arm64 masterplanner/themis-runtime-compiled:<your version>-arm64
          docker push masterplanner/themis-runtime-compiled:<your version>-arm64
          docker tag masterplanner/themis-runtime-compiled-large:latest-amd64 masterplanner/themis-runtime-compiled:<your version>-amd64
          docker push masterplanner/themis-runtime-compiled:<your version>-amd64
The last step was to migrate all the release images to multi-arch buildx builds and pass --push so that the resulting multi-arch manifest list is correctly pushed to the Docker registry.
  build-router:
    needs: build-runtime
    runs-on: [self-hosted, linux, ARM64]
    steps:
      - name: Checkout code base.
        uses: actions/checkout@v2
        with:
          submodules: true
      - name: Generate semver
        uses: ./.github/actions/semantic-version
        id: svc-version
      - name: Build and publish.
        run: |
          docker buildx build --platform linux/arm64,linux/amd64 \
            --build-arg RUNTIME_IMAGE_VERSION=${{ steps.svc-version.outputs.semver }} \
            --build-arg ARTEFACT_PATH=router/nginx/src/nginx-build \
            --build-arg ARTEFACT_NAME=router \
            -f cpp/deployment/platform/build-runtime-router.dockerfile \
            --tag masterplanner/themis-runtime-router:${{ steps.svc-version.outputs.semver }} \
            --push .
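To double-check that the pushed release tag really contains both platforms, the manifest can be inspected from any machine; the image name below is the one from the snippet above and the version is whatever the semver step produced:
# lists the manifest entries; both linux/amd64 and linux/arm64 should appear
docker buildx imagetools inspect masterplanner/themis-runtime-router:<your version>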
Now, our problem is solved. In another article I will explain how to optimise monorepo builds on top of GitHub Actions and how data locality matters in that process.
Conclusion
While the overall migration took longer than expected (approximately one week), the benefits are great: arm64 container builds that are roughly 3x faster, a 1.5x faster compilation of the overall codebase, and a completely silent machine while doing it.
Moreover, I think the new generation of MacBook Pro laptops are simply amazing machines.