Transitioning to Mac M1 (ARM64)
(Image source: https://www.macrumors.com/guide/m1/)

Problem statement

Recently, I was forced to change laptops and move to a new Apple MacBook Pro M1 Max (ARM64 architecture). I must admit the hardware is great and extremely quiet in comparison with my former AMD Ryzen 3950X.

Everything started well because onboarding my IDEs and Visual Studio Code was trivial. The surprise arrived when I tried to port one of my native C++ projects to the new hardware. In this article, I focus on the experience and the changes I had to make to be able to run and evolve my project on the new laptop.

A little bit about the C++ project:

  • uses CMake
  • integrates vcpkg
  • uses Apache Ignite
  • builds on Linux x64
  • contains many microservices
  • compiles with GCC 10.3
  • is packaged as container images
  • contains a Docker Compose definition for the local development setup

I first tried to start the docker-compose stack on my laptop. Initially it seemed to work, but it was extremely slow (roughly 5 minutes of startup time compared with 1 minute on the former amd64 machine). Worse, the stack ended up with containers failing with SIGABRT (signal 6) errors reported by QEMU. A quick way to confirm that emulation was involved is sketched below.
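
For context, you can confirm that an image will run under emulation by inspecting its target platform; the image name below is just a placeholder, not one of the project's real images:

docker image inspect my-service:latest --format '{{.Os}}/{{.Architecture}}'
# "linux/amd64" on an arm64 host means the container runs under QEMU emulation,
# which explains the slow startup and points to the emulator as the source of the crashes.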

Solution and Continuous Integration improvements

Hmmm, this was strange. Now the fun begins... Basically, somewhere in the compiled code there are things which cannot be emulated out of the box.

In order to solve this issue, cross compilation was required so that we also have arm64 container images. While there are multiple ways of achieving this, I opted for buildx support for the linux/amd64 and linux/arm64 platforms; a minimal sketch of the invocation follows below. With this approach it was relatively easy to do a local build, and linux/arm64 builds were approximately 3x faster than linux/amd64 ones. Still, one obvious problem remained: there was no way to compile the project natively for macOS because of the explicit epoll dependency.
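
Here is roughly what the per-platform builds look like; the builder, image, and Dockerfile names below are placeholders rather than the project's real ones:

# One-time setup of a buildx builder able to target multiple platforms.
docker buildx create --name multiarch --use

# Build each platform separately; --load imports the result into the local
# Docker engine and only works with a single platform per invocation.
docker buildx build --platform linux/arm64 \
  --tag myorg/my-service:latest-arm64 \
  -f deployment/service.dockerfile --load .

docker buildx build --platform linux/amd64 \
  --tag myorg/my-service:latest-amd64 \
  -f deployment/service.dockerfile --load .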

To be able to do incremental changes and local development I opted for the Docker-based remote development functionality of CLion (https://blog.jetbrains.com/clion/2020/01/using-docker-with-clion/). This is easy to set up and was a perfect match for my project, since I was already building a base image with the toolchain required to compile it; a rough sketch of the setup follows below. This solved local development, and the actual experience is extremely good. In comparison with my former AMD Ryzen 3950X I noticed a 1.5x compilation speed-up over the whole codebase (and without any fan noise, as an additional benefit).
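
For illustration, this is roughly how such a remote environment is started, following the JetBrains guide; the image name and port are placeholders, and I assume the toolchain image also runs an SSH server as described in that guide:

# Start the toolchain container with its SSH daemon exposed on localhost:2222.
# --cap-add sys_ptrace lets the debugger attach to processes inside the container.
docker run -d --cap-add sys_ptrace \
  -p 127.0.0.1:2222:22 \
  --name clion-remote-env \
  myorg/cpp-toolchain:latest

# In CLion: Settings | Build, Execution, Deployment | Toolchains,
# add a "Remote Host" toolchain pointing at user@localhost:2222.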

OK, now I was almost done with the migration, except for continuous integration. This topic was quite tricky because the initial CI implementation was done on top of GitHub Actions with hosted runners. While this is fine, there are no ARM-based hosted runners available for it. Moreover, because the base toolchain image grew a little after adding some extra packages, the hosted solution seemed impractical.

This is why I automated an infrastructure of self-hosted runners in AWS using c6gd spot instances (the registration steps are sketched below). On top of this, there was an additional issue: the CI process used a multi-stage approach in which the monorepo was built first, then various tests were executed starting from that image, then static checks (clang-format, clang-tidy) ran, and only after that were the final release images built and published.
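
For completeness, registering a runner on such an arm64 instance follows the standard GitHub Actions self-hosted runner recipe; the runner version, repository URL, and token below are placeholders:

# Download and unpack the arm64 runner (replace <version> with the current release).
curl -o actions-runner-linux-arm64.tar.gz -L \
  https://github.com/actions/runner/releases/download/v<version>/actions-runner-linux-arm64-<version>.tar.gz
tar xzf actions-runner-linux-arm64.tar.gz

# Register it against the repository; the default labels (self-hosted, Linux, ARM64)
# match the runs-on lists used in the workflows below.
./config.sh --url https://github.com/<owner>/<repo> --token <registration-token>

# Run it as a service so it survives restarts of the spot instance.
sudo ./svc.sh install && sudo ./svc.sh start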

When introducing the two platforms in Docker, I quickly realised that buildx easily supports this with the docker-container driver, but the default Docker engine does not support importing multi-arch / manifest-list images.

In order to solve this I used the following approach:

  • do a full compilation using docker buildx for each platform you want (linux/amd64, linux/arm64) and make sure you pass --load. Loading the result is important: without it, a FROM clause referencing these images will return "not found" in subsequent docker builds / publishes.
  • tag the imported image and suffix it with the architecture (e.g. -arm64).
  • repeat the steps for all images.

Here is a snippet of how to achieve this in GitHub Actions:

jobs:
  build-runtime:
    runs-on: [self-hosted, linux, ARM64]
    steps:
      - name: Checkout code base.
        uses: actions/checkout@v2
        with:
          submodules: true
      - name: Load x64 docker image.
        run: |
          docker buildx build --platform linux/amd64 \
            --tag masterplanner/themis-runtime-compiled-large:latest-amd64 \
            -f cpp/deployment/platform/runtime.dockerfile \
            --load .
      - name: Build and run common layer x64.
        run: docker run --platform linux/amd64 --rm masterplanner/themis-runtime-compiled-large:latest-amd64 -- "cd build && make -j16 common_tests && ./extensions/common/common_tests"
      - name: Load arm64 docker image.
        run: |
          docker buildx build --platform linux/arm64 \
            --tag masterplanner/themis-runtime-compiled-large:latest-arm64 \
            -f cpp/deployment/platform/runtime.dockerfile \
            --load .
      - name: Build and run common layer arm64.
        run: docker run --platform linux/arm64 --rm masterplanner/themis-runtime-compiled-large:latest-arm64 -- "cd build && make -j16 common_tests && ./extensions/common/common_tests"
      - name: Build runtime compact image.
        run: |
          docker tag masterplanner/themis-runtime-compiled-large:latest-arm64 masterplanner/themis-runtime-compiled:<your version>-arm64
          docker push masterplanner/themis-runtime-compiled:<your version>-arm64
          docker tag masterplanner/themis-runtime-compiled-large:latest-amd64 masterplanner/themis-runtime-compiled:<your version>-amd64
          docker push masterplanner/themis-runtime-compiled:<your version>-amd64

The last step was to migrate all the release images to buildx multi-arch builds and pass --push so that the resulting manifest list is correctly pushed to the Docker registry.

  build-router:
    needs: build-runtime
    runs-on: [self-hosted, linux, ARM64]
    steps:
      - name: Checkout code base.
        uses: actions/checkout@v2
        with:
          submodules: true
      - name: Generate semver
        uses: ./.github/actions/semantic-version
        id: svc-version
      - name: Build and publish.
        run: |
          docker buildx build --platform linux/arm64,linux/amd64 \
            --build-arg RUNTIME_IMAGE_VERSION=${{ steps.svc-version.outputs.semver }} \
            --build-arg ARTEFACT_PATH=router/nginx/src/nginx-build \
            --build-arg ARTEFACT_NAME=router \
            -f cpp/deployment/platform/build-runtime-router.dockerfile \
            --tag masterplanner/themis-runtime-router:${{ steps.svc-version.outputs.semver }} \
            --push .        

Now the problem is solved. In another article I will explain how to optimise monorepo builds on top of GitHub Actions and how data locality matters in that process.

Conclusion

While the overall migration took longer than expected (approximately one week), the benefits are great:

  • approximately 40% lower costs for the runtime infrastructure in the cloud.
  • the solution is easy to build on the local machine.
  • multiple architectures are supported.

Moreover, I think the new generation of MacBook Pro laptops are amazing machines.

