About The Client
The client is an award-winning education technology company building innovative platforms for online admissions, online assessments, onscreen evaluation, and remote proctoring. The client also works with many multinational companies, corporates, educational institutes, and universities.
The Business Challenge
The client wanted to move its legacy applications to the cloud, as the existing setup was running up high bills on outdated infrastructure. The engagement started with migrating the legacy database from on-premises to the cloud, setting up a continuous integration and deployment pipeline, and optimizing management of the virtual machines created on the cloud. The solution also had to auto-scale to serve increased end-customer demand on specific events or days.
The client was already using Azure for its infrastructure and had started moving applications to .NET Core for Linux deployments. Another requirement was to make deployments available on demand, planned both as multi-tenant SaaS and as dedicated deployments.
With limited in-house knowledge of Azure DevOps and .NET Core, the client also needed guidance on making the solution scalable.
How CloudHedge Helped
The overall solution was delivered in three phases.
Phase 1: Migrating the database to the cloud
CloudHedge analyzed the legacy database and identified the changes required to migrate it to the cloud. After this analysis, the SQL Server database was successfully migrated, and all the points in the application code that required modification were highlighted. The client team performed the required code changes and tested the application after connecting it to the cloud database instance.
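The code changes in this phase mostly amounted to repointing the application at the cloud database instance. A minimal sketch of that kind of change, with placeholder server and database names (not the client's actual values):

```python
# Sketch of the connection-string change made after migration.
# Server, database, and credential values below are illustrative
# placeholders, not the client's real configuration.

def build_connection_string(server: str, database: str,
                            user: str, password: str) -> str:
    """Build a SQL Server connection string for a cloud-hosted instance."""
    return (
        f"Server=tcp:{server},1433;"
        f"Database={database};"
        f"User ID={user};Password={password};"
        "Encrypt=true;TrustServerCertificate=false;"
    )

# Before migration the app pointed at an on-premises host; after
# migration it points at the managed cloud SQL endpoint instead.
conn_str = build_connection_string(
    "example-server.database.windows.net", "assessments_db",
    "app_user", "change-me")
```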
Phase 2: Isolating code from configuration data
The client team used Azure DevOps for development, but the artifacts generated by the continuous integration cycle had to be modified during deployments at each customer's end. This mix of code and configuration made managing multiple deployments complicated. To resolve this, the CloudHedge team trained the client's staff on the best practices for writing cloud-native applications and helped the client define configuration files and environment variables, so that configuration data was isolated from the source code and the service was container-ready. These pipelines are now used to create new cloud-native container images with version management.
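The core idea of this phase is that one build artifact serves every deployment, with per-environment values supplied at runtime. A minimal sketch of that pattern, using illustrative variable names rather than the client's actual keys:

```python
import os

# Sketch of reading deployment-specific settings from environment
# variables instead of baking them into the build artifact.
# The variable names (APP_DB_HOST, etc.) are illustrative only.

def load_config() -> dict:
    """Read per-deployment configuration from the environment, with defaults."""
    return {
        "db_host": os.environ.get("APP_DB_HOST", "localhost"),
        "db_name": os.environ.get("APP_DB_NAME", "app"),
        "log_level": os.environ.get("APP_LOG_LEVEL", "INFO"),
    }
```

Each tenant's deployment sets these variables in its container spec, so the same container image can be promoted unchanged through development, UAT, and production.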
Phase 3: Automating target deployment with infrastructure as code
CloudHedge was used to create an Azure Kubernetes Service (AKS) cluster with sufficient nodes to support fault tolerance, high availability, and scale for the services. Prometheus monitoring was also enabled along with the AKS deployment for infrastructure and application monitoring. Using AKS container orchestration, new and updated containers can be deployed onto the cluster. These deployments are configured with autoscale parameters based on CPU thresholds; as demand increases, pods are scaled automatically to handle the load on the system.
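CPU-threshold autoscaling in Kubernetes follows a simple proportional rule: the desired replica count scales with the ratio of observed to target CPU utilization. A rough sketch of that calculation (the Horizontal Pod Autoscaler's documented formula, not CloudHedge's implementation):

```python
import math

def desired_replicas(current_replicas: int,
                     current_cpu_pct: float,
                     target_cpu_pct: float) -> int:
    """Kubernetes HPA-style proportional scaling on CPU utilization:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)

# At 3 replicas averaging 90% CPU against a 60% target,
# the autoscaler grows the deployment to 5 pods.
```

The real autoscaler also clamps the result between configured minimum and maximum replica counts and applies stabilization windows to avoid flapping.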
CloudHedge Cruize was used to set up the CI/CD workflow along with Azure DevOps pipelines, deploying updated application releases on an ongoing basis and providing monitoring capabilities on the go. The CloudHedge team set up similar automated pipelines for the client's UAT and demo environments, allowing the client to deploy client-specific versions on demand.
As the platform solution was completely automated and integrated with the existing build/release pipelines, developers no longer had to worry about deployments and could focus on code.
- The solution can be deployed through automation, eliminating the need for highly skilled administrators.
- The ease of deploying across environments accelerated DevOps process automation.
- Eliminated the need to copy and publish files via FileZilla during deployments.
- The client could perform easy, fast, and production-ready deployments and easily monitor the entire infrastructure.
- Simplified access to logs from the UI reduced the need to log into multiple systems.
- The solution was able to scale horizontally based on requirements.
By containerizing the application, the following benefits were derived:
- 1/4th reduction in the infrastructure costs with the introduction of containers and Kubernetes
Ability to spin up new environments (QA/Staging, etc.)
- Seamless integration with the existing CI/CD automation
- Ability to spin up new environments in minutes vs days in the earlier process
Maintenance and Upgrades
- Zero downtime and maintenance costs for platform upgrades and patching due to managed Kubernetes services from the cloud provider
Ability to autoscale application workloads
- Manual, vertical scaling that used to take days and cause downtime is now replaced by zero-downtime autoscaling that detects increases in system load.
- Ability to distribute different services across containers instead of increasing the number of VMs, thereby reducing the infrastructure by 1/4th.
Enhanced monitoring and alerting systems
- Native integrations with cloud service provider monitoring and alerting systems