Migrating from Bitbucket to GitLab: Our Journey to a Faster, More Reliable CI/CD Pipeline

Introduction

In the fast-paced world of software development, having a reliable and efficient CI/CD platform is crucial for maintaining productivity and ensuring smooth deployments. After experiencing persistent issues with Bitbucket, including sluggish performance and frequent downtime, we decided it was time for a change. Our solution? Migrating to GitLab, a more robust and scalable platform that could meet our growing needs.

This blog will walk you through our migration journey, highlighting the challenges we faced with Bitbucket and how we successfully set up GitLab on EC2 instances, with all our runners and Sidekiq processes running on Amazon EKS. Additionally, we’ll discuss how we optimized our GitLab setup by storing artifacts on Amazon S3 and mounting EFS for repository data.

Why We Chose to Migrate from Bitbucket

Bitbucket served us well for a time, but as our team and project demands grew, we started noticing some critical drawbacks:

  1. Performance Issues: Bitbucket became increasingly sluggish, particularly with larger repositories and complex CI/CD pipelines. The slow response times delayed our development workflow and dragged down overall productivity.
  2. Frequent Downtime: We also experienced frequent downtime with Bitbucket, which led to frustration and inefficiency. Having our CI/CD platform go down during critical deployments was simply unacceptable.
  3. Scaling Limitations: As our projects grew, we found that Bitbucket's scaling capabilities were not meeting our needs, especially when it came to managing large numbers of runners and handling parallel jobs efficiently.

These challenges prompted us to seek a more reliable and scalable solution, and after careful consideration, we chose GitLab.

Setting Up GitLab on EC2 Instances with ALB Access

Our GitLab instance is hosted on multiple EC2 instances to ensure high availability and scalability. To manage traffic effectively, we placed an Application Load Balancer (ALB) in front of them to direct incoming requests to a healthy EC2 instance. This setup improves fault tolerance and keeps GitLab accessible even during peak usage.

Key steps in our setup:

  • EC2 Instance Configuration: We configured our EC2 instances with the necessary resources to handle GitLab's operations, including CPU, memory, and storage.
  • ALB Setup: The ALB distributes traffic evenly across the EC2 instances for reliability and also provides SSL termination, improving the security of our GitLab instance (a minimal sketch of this setup follows the list).
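
To make the ALB piece concrete, here is a minimal sketch of how the target group, instance registration, and HTTPS listener could be wired together with boto3. The VPC, subnet, security-group, instance, and certificate identifiers are placeholders; treat this as an illustration of the approach rather than our exact provisioning code. The /-/health path is GitLab's built-in health-check endpoint.

import boto3

elbv2 = boto3.client("elbv2")  # region and credentials come from the environment

# Target group the ALB forwards GitLab web traffic to; the ALB polls
# GitLab's /-/health endpoint to decide which instances are healthy.
tg = elbv2.create_target_group(
    Name="gitlab-web",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",          # placeholder
    HealthCheckPath="/-/health",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the EC2 instances running GitLab (placeholder instance IDs).
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaaaaaaaaaaaaaaa"}, {"Id": "i-0bbbbbbbbbbbbbbbb"}],
)

# Internet-facing ALB spanning the subnets that host the instances.
lb = elbv2.create_load_balancer(
    Name="gitlab-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholders
    SecurityGroups=["sg-cccc3333"],                   # placeholder
    Scheme="internet-facing",
    Type="application",
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# HTTPS listener terminating TLS with an ACM certificate, then forwarding
# plain HTTP to the GitLab target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:region:account:certificate/placeholder"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

One detail worth noting on the EC2 side: GitLab's external_url (set in gitlab.rb) should match the hostname the ALB serves, so that redirects and clone URLs resolve correctly behind the load balancer.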

Running GitLab Runners on EKS

To leverage the power of Kubernetes, we decided to run all our GitLab runners on Amazon EKS. This decision allowed us to:

  • Scale Efficiently: EKS makes it easy to scale our runners up or down based on demand, ensuring that our CI/CD pipeline can handle any workload.
  • Isolate Workloads: Runners on EKS are isolated from the main GitLab application servers, so a heavy or misbehaving CI job cannot drag down the rest of the platform.
  • Automate Management: With EKS, we can automate the management of our runners, including updates, scaling, and monitoring (a small example follows this list).
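
As a rough illustration of the kind of automation this enables, the sketch below uses the official Kubernetes Python client to count live CI job pods and bump the runner manager's replica count when the backlog grows. The namespace, deployment name, label selector, and threshold are placeholder assumptions rather than our real values; in practice much of the day-to-day scaling is handled by the runner's own concurrency settings and cluster autoscaling on EKS.

from kubernetes import client, config

config.load_kube_config()      # or config.load_incluster_config() when run inside EKS
core = client.CoreV1Api()
apps = client.AppsV1Api()

NAMESPACE = "gitlab-runners"   # placeholder namespace for the runner Helm release

# CI job pods spawned by the Kubernetes executor; the label selector here is
# a placeholder and should match whatever labels your runner applies to job pods.
job_pods = core.list_namespaced_pod(NAMESPACE, label_selector="app=gitlab-runner-job")
running = sum(1 for pod in job_pods.items if pod.status.phase == "Running")
print(f"{running} CI job pods currently running")

# If the backlog is large, scale out the runner manager deployment.
if running > 20:
    apps.patch_namespaced_deployment_scale(
        name="gitlab-runner",          # placeholder deployment name
        namespace=NAMESPACE,
        body={"spec": {"replicas": 3}},
    )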

Separating GitLab Sidekiq on EKS

We further optimized our GitLab setup by running Sidekiq, the background job processor for GitLab, separately on EKS. This separation allows us to:

  • Improve Performance: By isolating Sidekiq, we prevent background jobs from affecting the performance of our main GitLab instance, ensuring that both can run smoothly.
  • Scale Independently: Sidekiq can be scaled independently of the GitLab web and API services, letting us absorb large volumes of background jobs without impacting the rest of the system (a sketch follows this list).
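
As an illustration of independent scaling, here is a minimal sketch that attaches a CPU-based HorizontalPodAutoscaler to the Sidekiq deployment using the Kubernetes Python client. The namespace, deployment name, and replica/CPU numbers are placeholder assumptions, not our production values; the same thing could equally be expressed as a plain autoscaling/v1 manifest.

from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

# CPU-based HPA targeting only the Sidekiq deployment, so background-job
# capacity grows and shrinks without touching the web/API pods.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="gitlab-sidekiq-hpa", namespace="gitlab"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1",
            kind="Deployment",
            name="gitlab-sidekiq",    # placeholder deployment name
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="gitlab", body=hpa)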

Storing Artifacts on Amazon S3

To ensure that our artifacts are stored securely and can be accessed quickly, we decided to use Amazon S3 for artifact storage. The benefits include:

  • Durability and Availability: S3 provides high durability and availability, ensuring that our artifacts are safe and accessible whenever needed.
  • Cost-Effective: S3's pay-as-you-go pricing lets us store large volumes of artifact data at low cost (a sketch of the bucket setup follows this list).
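
As a sketch of the storage side, the snippet below uses boto3 to create an artifacts bucket, block public access, and attach a lifecycle rule that expires old objects. The bucket name, region, and 90-day window are placeholder assumptions. GitLab itself is pointed at the bucket through its object-storage settings (gitlab.rb on an Omnibus install), which isn't shown here.

import boto3

REGION = "ap-south-1"                     # placeholder region
BUCKET = "gitlab-artifacts-prod"          # placeholder bucket name

s3 = boto3.client("s3", region_name=REGION)

# Create the artifacts bucket in the chosen region.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Artifacts should never be publicly readable.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Expire artifacts after 90 days (placeholder window) to keep storage costs down.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-artifacts",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Expiration": {"Days": 90},
            }
        ]
    },
)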

Mounting EFS for Repository Data

All of our repository data is stored on Amazon EFS, which is mounted across all our EC2 instances. This setup provides several advantages:

  • Shared Storage: EFS allows all EC2 instances to access the same repository data, ensuring consistency and reliability across our GitLab instance.
  • Scalability: EFS automatically scales as our repository data grows, eliminating the need for manual intervention or additional storage management.
  • High I/O Performance: By mounting EFS with I/O optimized settings, we ensure that our GitLab instance can handle high volumes of data access with minimal latency (a provisioning sketch follows this list).
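
To make the EFS piece concrete, here is a minimal boto3 sketch that provisions a file system and creates a mount target in each subnet hosting a GitLab EC2 instance. The subnet and security-group IDs are placeholders, and Max I/O performance mode with elastic throughput is one reasonable reading of "I/O optimized settings" rather than necessarily the exact modes we run; the instances then mount the file system (for example at GitLab's git-data directory) over NFS.

import boto3

efs = boto3.client("efs")

# File system for repository data; Max I/O trades a little per-operation
# latency for higher aggregate throughput across many clients.
fs = efs.create_file_system(
    CreationToken="gitlab-repo-data",   # idempotency token, placeholder
    PerformanceMode="maxIO",
    ThroughputMode="elastic",
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# One mount target per subnet/AZ that hosts a GitLab EC2 instance, using a
# security group that allows NFS (TCP 2049) from those instances (placeholders).
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-cccc3333"],
    )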

Conclusion

Migrating from Bitbucket to GitLab has been a game-changer for our team. The improved performance, reliability, and scalability of GitLab have enabled us to optimize our CI/CD pipeline and ensure that our development processes run smoothly. With GitLab running on EC2, supported by EKS for runners and Sidekiq, and backed by S3 and EFS for storage, we have built a robust infrastructure that can scale with our needs and support our team in delivering high-quality software efficiently.

Thanks to my teammates Milan Sharma and Vibhor Malhotra for helping me achieve this migration.
