Deploying your application from a Bitbucket repository to production in Google Kubernetes Engine (GKE) typically involves setting up a Continuous Integration/Continuous Deployment (CI/CD) pipeline. Google Cloud Build is a popular choice for this, as it integrates well with Bitbucket and GKE.
Here’s a breakdown of the common steps and concepts involved:
1. Prerequisites:
- Google Cloud Project: You need an active Google Cloud project.
- Google Kubernetes Engine (GKE) Cluster: A running GKE cluster where you’ll deploy your application.
- Google Cloud APIs Enabled:
- Cloud Build API
- Kubernetes Engine API
- Artifact Registry API – for storing your Docker images.
- Secret Manager API
- Bitbucket Repository: Your application code should be in a Bitbucket repository.
- Service Account: A Google Cloud service account with the necessary permissions for Cloud Build to:
- Build and push Docker images to Artifact Registry.
- Deploy to your GKE cluster (e.g., Kubernetes Engine Developer role).
- Access any other Google Cloud resources your build process needs.
- Kubernetes Manifests: You’ll need Kubernetes YAML files (Deployment, Service, Ingress, etc.) that define how your application should run on GKE.
2. Core Steps for the CI/CD Pipeline:
A. Connect Bitbucket to Google Cloud Build:
- Google Cloud Console:
- Go to the Cloud Build Triggers page.
- Click “Create trigger”.
- Select Bitbucket Cloud (or Bitbucket Server if you’re using a self-hosted instance) as your source.
- Follow the prompts to authorize Cloud Build to access your Bitbucket repositories.
- Select the specific Bitbucket repository you want to connect.
- Trigger Configuration:
- Event: Choose the event that triggers your build (e.g., “Push to a branch” for master or main branch for production deployments, or “Push new tag” for versioned releases).
- Source: Select your connected Bitbucket repository and specify the branch or tag pattern.
- Configuration: Choose “Cloud Build configuration file (yaml or json)”.
- Location: Specify the path to your cloudbuild.yaml file in your repository (e.g., cloudbuild.yaml).
B. Define Your cloudbuild.yaml (Build Configuration File):
This file, located in your Bitbucket repository, instructs Cloud Build on how to build your application, containerize it, push the image, and deploy it to GKE.
A typical cloudbuild.yaml for GKE deployment would include the following steps:
- Build Docker Image:
- Use the docker builder to build your application’s Docker image.
- Tag the image with a unique identifier (e.g., Git commit SHA, branch name, or a version tag).
- Push Docker Image to Artifact Registry:
- Use the docker builder to push the newly built image to Artifact Registry so the cluster can pull it during deployment.
- Get GKE Cluster Credentials:
- Use the gcloud builder to authenticate and fetch credentials for your GKE cluster (gcloud container clusters get-credentials). This allows subsequent kubectl commands to interact with your cluster.
- Deploy to GKE and Expose the Service:
- Use the kubectl builder to apply your Kubernetes manifests (e.g., kubectl apply -f). Running kubectl directly in a build step gives you fine-grained control over exactly what is applied to the cluster.
C. Manage Service Account Permissions:
- Ensure the Cloud Build service account (which is [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com by default) has the necessary IAM roles:
- Cloud Build Editor (or Cloud Build Service Account)
- Storage Admin (for pushing to GCS if needed for intermediate artifacts)
- Artifact Registry Writer or Storage Object Admin (for pushing Docker images)
- Kubernetes Engine Developer (to deploy to GKE)
- Service Account User (if you’re using other service accounts during the build)
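For illustration, the relevant bindings in the project's IAM policy (roughly what gcloud projects get-iam-policy PROJECT_ID --format=yaml would show) might look like the fragment below. The project number 123456789012 is a placeholder, and the exact set of roles depends on what your build actually does.

```yaml
# Illustrative fragment of the project IAM policy.
# 123456789012 is a placeholder project number; trim or extend the
# roles to match what your build actually needs.
bindings:
  - members:
      - serviceAccount:123456789012@cloudbuild.gserviceaccount.com
    role: roles/artifactregistry.writer   # push images to Artifact Registry
  - members:
      - serviceAccount:123456789012@cloudbuild.gserviceaccount.com
    role: roles/container.developer       # deploy workloads to GKE
```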
3. Example cloudbuild.yaml:
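The following is a minimal sketch that maps one-to-one onto the steps described above. The region (us-central1), Artifact Registry repository (my-repo), image name (my-app), cluster name (my-cluster), zone, and the k8s/ manifest directory are placeholder assumptions; substitute your own values. $PROJECT_ID and $COMMIT_SHA are default substitutions provided by Cloud Build.

```yaml
# Minimal sketch of a cloudbuild.yaml following the steps above.
# Region, repository, image, cluster, and zone names are placeholders.
steps:
  # 1. Build the Docker image, tagged with the commit SHA.
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '-t'
      - 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA'
      - '.'

  # 2. Push the image to Artifact Registry so GKE can pull it.
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'push'
      - 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA'

  # 3. Fetch credentials for the GKE cluster.
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'container'
      - 'clusters'
      - 'get-credentials'
      - 'my-cluster'
      - '--zone=us-central1-a'

  # 4. Apply the Kubernetes manifests in the k8s/ directory.
  #    The kubectl builder uses these env vars to locate the cluster.
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'k8s/']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```

In this sketch the kubectl builder is also given the CLOUDSDK_* variables, which it can use to resolve the cluster on its own, so the explicit get-credentials step mainly mirrors the step list above.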
4. Example deployment.yaml:
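A minimal Deployment sketch for the image built above. The name (my-app), replica count, port 8080, and resource values are assumptions; the image tag is a placeholder that you would normally pin per build (for example, to the same commit SHA used in cloudbuild.yaml, via sed, kubectl set image, or Kustomize).

```yaml
# Minimal sketch of a Deployment for the image built above.
# Name, labels, replica count, port, and resource values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # Placeholder tag; in a real pipeline pin this per build,
          # e.g. to the commit SHA used in cloudbuild.yaml.
          image: us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-app:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
```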
5. Example service.yaml:
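A matching Service sketch. Exposing the application with type: LoadBalancer is an assumption; a ClusterIP Service behind an Ingress is an equally valid choice if you prefer to terminate traffic there.

```yaml
# Minimal sketch of a Service exposing the Deployment above.
# type: LoadBalancer requests an external IP; use ClusterIP behind an
# Ingress instead if you terminate traffic there.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8080  # containerPort in the Deployment
```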
Key Considerations:
- Environments (Dev, Staging, Production): It’s highly recommended to have separate GKE clusters or namespaces for different environments. Your Bitbucket pipeline can be configured to deploy to different environments based on branches (e.g., develop to dev, main to production) or tags; see the substitution sketch after this list.
- Rollback Strategy: Understand how to roll back deployments in GKE in case of issues. kubectl rollout undo is your friend.
- Monitoring and Logging: Integrate with Google Cloud’s monitoring (Cloud Monitoring) and logging (Cloud Logging) to observe your application’s health and troubleshoot problems.
- Testing: Implement automated tests (unit, integration, end-to-end) in your CI/CD pipeline before deployment to ensure the quality of your application.
- Helm or Kustomize: For more complex Kubernetes deployments, consider using Helm charts or Kustomize to manage your Kubernetes manifests, making them more modular and configurable.
- Network Access: If your GKE cluster is private, ensure Cloud Build has network access to the cluster’s control plane. This might involve setting up a private pool for Cloud Build.
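Building on the environments point above, one lightweight pattern (a sketch, assuming each trigger supplies its own user-defined _NAMESPACE substitution, and reusing the cluster, zone, and k8s/ directory from the earlier example) is to share a single cloudbuild.yaml and let the trigger choose the target namespace:

```yaml
# Sketch: one cloudbuild.yaml, different target namespace per trigger.
# A trigger on `develop` could set _NAMESPACE=dev and a trigger on `main`
# _NAMESPACE=production; the value below is only a default.
substitutions:
  _NAMESPACE: dev

steps:
  # ...build, push, and get-credentials steps as in the earlier example...
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'k8s/', '--namespace=$_NAMESPACE']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```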
By following these steps, you can establish a robust CI/CD pipeline to automatically deploy your application from Bitbucket to Google Kubernetes Engine.
Arya Gairola is a seasoned professional with expertise in Frontend Development and managing CI/CD pipelines. He has a comprehensive background in designing responsive user interfaces, optimizing web performance, and streamlining deployment workflows through modern DevOps practices. Arya is dedicated to building efficient, scalable applications and automating delivery processes to enhance productivity and reduce time-to-market. Alongside his frontend expertise, he has a keen interest in cloud infrastructure and continuous integration tools, staying current with the latest advancements in web technologies. His diverse skill set and passion for continuous learning make him a valuable asset.