Tuesday, January 21, 2020

DevOps workshop at NMIT Bangalore - Day 1

As part of Campus Connect, my colleague Aditya and I conducted a DevOps workshop at NMIT Bangalore. As covering all the topics related to DevOps was not possible in a single day, we decided to break the workshop into multiple smaller sessions spanning multiple days, based on the students' availability and our work commitments.

Day 1

We reached NMIT college at 8:30 AM and met our point of contact, Impana (Assistant Professor at NMIT), who was kind enough to make all the logistical arrangements for us. We were guided to their new seminar hall and started setting up for our presentation.
By 9:30 the students had reached the seminar hall (most of them were from the 4th and 6th semesters).
We started the session by asking the students what they understood by the term DevOps. To our pleasant surprise, many of the students had a good sense of DevOps and why it has become indispensable in the software development industry.

As most of the students were familiar with the Waterfall model of software development, we began with a comparison between the Waterfall and Agile methodologies to help them understand the importance of quick release cycles, continuous feedback, and the minimum viable product (MVP).
Once they were comfortable with the basic concepts of Agile, we described how Agile and DevOps are interrelated and how DevOps helps teams practice the Agile methodology successfully.

We showed them a build cycle using the traditional approach, where it took weeks or months, if not years, to release a new version of the product. Once the students could feel the pain points of that approach, we introduced the DevOps release cycle, which leverages Continuous Integration, Continuous Testing, and Continuous Deployment/Delivery to shorten the release cycle to hours or even minutes.

As terms like CI and CD are fairly new to college students, we explained the meaning and significance of each. We used a DevOps pipeline diagram as an aid to explain each stage in the DevOps cycle, along with some industry-leading tools like Jenkins, SonarQube, Terraform, Ansible, etc.
We also emphasized that students should focus on learning the underlying technologies rather than specific tools, as there is a plethora of tools available in the market and the choice of one over another depends on various factors.

In the end, we showed them a quick demo of a Jenkins pipeline that auto-triggered a build on a code push to GitHub and ran a Python script containing a sample function.

It was a very interactive session, with good questions from the students, and we thoroughly enjoyed explaining to them the nuances of DevOps and the SDLC.

I personally felt a sense of nostalgia, as if life had come full circle for me.
Just a few years ago, I was in the same place as these intelligent and curious students, on the path to becoming a professional developer.

I have been blessed to work with some of the finest engineers at Unisys, like Sanket and many others, and I feel that workshops like these will help me pass on that knowledge to the next generation of developers and engineers.

What is DevOps? An Engineer's perspective


What is DevOps?

DevOps is a culture of active collaboration between development, operations, and business teams. It's not all about tools: DevOps in an organization exists to create value for end customers, and tools are only aids in building this culture. DevOps increases an organization's capability to deliver high-quality products or services at a swift pace. It automates processes all the way from the build phase to the deployment phase of an application.

Why is DevOps needed?


DevOps helps remove silos in organizations and enables the creation of cross-functional teams, thus reducing reliance on any one person or team during the delivery process. Frequent communication between teams improves the confidence and efficiency of team members. Through automation, DevOps teams increase their productivity, which results in more satisfied customers.

DEVS ARE FROM VENUS,
OPS ARE FROM MARS
                              - Steven Haines

How to implement DevOps?

The “DevOps Handbook” defines the “Three Ways: The Principles Underpinning DevOps” as a way to implement DevOps in large enterprises.

The First Way: Systems Thinking


The First Way emphasizes the need for global optimization as opposed to local optimization; hence the focus is on optimizing all business value streams enabled by IT.


The Second Way: Amplify feedback loops


The Second Way is about discovering and injecting the right feedback loops so that necessary corrections can be made before it’s too late.

The Third Way: Culture of Experimentation and learning


The Third Way is all about creating the right culture, one that fosters two things: continual experimentation and learning from failure. It emphasizes the understanding that repetition and practice make teams perfect.

While the Three Ways focus on the key principles, we also have three pillars which are key to any successful DevOps adoption.

The three pillars of any DevOps adoption are:

  1. Culture and People
  2. Tools and Technology
  3. Processes and Practices


Some common DevOps terms:


Continuous Integration

Continuous integration is a software engineering practice in which software development team members frequently merge and build their code changes. The key benefit is detecting and fixing merge conflicts and integration bugs in the early stages of software development, thereby reducing the cost of detecting and fixing issues.
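
As a rough illustration, the kind of script a CI server might run on every push or merge could look like the sketch below. This assumes a hypothetical Python project tested with pytest; the repository URL, branch, and commands are placeholders, not a prescription:

#!/bin/bash
# Illustrative CI build script; repo URL and commands are assumptions.
set -e                                   # stop on the first failure
git clone https://github.com/example/sample-app.git
cd sample-app
pip install -r requirements.txt          # install the project's dependencies
pytest                                   # run the test suite; any failure fails the build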


Continuous Delivery

Continuous delivery is a software engineering practice in which changes are automatically built, tested, and made release-ready for production. In order to reach a continuous delivery state, it is crucial to define a test strategy. The main goal is to identify functional and non-functional defects at a much earlier stage, thus reducing the cost of fixing them. It also enables teams to produce working software as defined in the Agile manifesto. Continuous delivery as a practice depends on continuous integration and test automation, so teams need to practice both religiously in order to practice continuous delivery effectively.


Continuous Deployment

Continuous deployment is a software engineering practice in which code committed by developers is automatically built, tested, and deployed to production. Continuous deployment as a practice requires that teams have already adopted continuous integration and continuous delivery. The primary advantages of this practice are a shorter time-to-market and earlier feedback from users.

Continuous Testing

Continuous Testing is a software testing practice that involves testing early, testing often, and automating tests. The primary goal of Continuous Testing is to shift the test phase left as much as possible, in order to identify defects earlier and reduce the cost of fixing them.



Some popular DevOps tools:


  • Git – Source code management and version control system
  • Selenium – Automation testing tool
  • Jenkins – Automation server with plenty of plugins to develop CI/CD pipelines
  • SonarQube – Static code analysis
  • Ansible – Configuration management and deployment tool
  • Docker – Containerization platform
  • Terraform – Infrastructure automation tool
  • Kubernetes – Container orchestration tool
  • Nagios – Continuous monitoring tool

The following factors improve as a result of a DevOps implementation:


Predictability: DevOps decreases the failure rate of new product releases.
Maintainability: Recovery is easier and faster in the event of a failed release.
Improved Quality: DevOps improves the quality of product development by incorporating infrastructure concerns early.
Lower Risk: Security aspects are incorporated into the SDLC, and the number of defects across the product decreases.
Cost Efficiency: DevOps improves cost efficiency, which is always an aspiration of every business organization.
Stability: A DevOps implementation offers a stable and secure operational state.
Streamlined Delivery Process: As DevOps streamlines software delivery, the effort needed to take releases to market can be reduced by up to 50%, particularly for mobile applications and digital platforms.


Friday, January 10, 2020

Setting up your first Blockchain network using Hyperledger Fabric

Pre-requisites 

  • An Ubuntu 16 virtual machine
  • Basic understanding of Linux commands

Steps:

  • Log in to your Ubuntu machine. Please use a NON-ROOT user to log in. If you don't have a non-root user, create one:
  • sudo adduser <username>
  • Open a new terminal by pressing CTRL + ALT + T
  • Run the setup commands one by one (a representative sketch is given after this list)
  • Log out of the terminal and open a new one using CTRL + ALT + T
  • Run the remaining commands in the new terminal
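
As a rough guide only, a typical prerequisite setup for Fabric 1.0.x on Ubuntu 16.04 looks something like the following; the package names here are indicative rather than exact, and the bootstrap URL is the same one used in the troubleshooting section below:

$ sudo apt-get update
$ sudo apt-get install -y docker.io docker-compose golang-go nodejs npm
$ sudo usermod -aG docker $USER   # let the non-root user run docker (log out and back in afterwards)
$ mkdir -p ~/fabric && cd ~/fabric
$ curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/v1.0.5/scripts/bootstrap.sh | bash -s 1.0.5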


Basic Troubleshooting:


Hyperledger: byfn.sh up: ERROR !!! FAILED to execute


Run the following commands to remove old Docker containers, images, and volumes:

$ docker rm -f $(docker ps -aq)            # removes all your containers
$ docker rmi -f $(docker image list -aq)   # removes all your images
$ docker volume prune                      # removes all unused volumes

Cryptogen not Found:

$ cd fabric-samples
$ curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/v1.0.5/scripts/bootstrap.sh | bash -s 1.0.5
$ cd first-network
$ ./byfn.sh -m generate

You will see a brief description as to what will occur, along with a yes/no command-line prompt. Respond with a y or hit the return key to execute the described action.
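
If the artifacts are generated successfully, the network can then be brought up again with the same helper script:

$ ./byfn.sh -m up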

Terraform and Ansible to set up K8s clusters

Quickly set up dev/QA/pre-prod k8s clusters for teams

The Challenge

In my current project, we run our infrastructure on ERL, AWS, and Azure. Our application is made up of multiple microservices, and we deploy each of them to different Kubernetes clusters/environments (dev/stage/QA/prod) on ERL, AWS, and Azure.
Given the rapid development cycles and PoCs that our team works on, we need to quickly create environments to test new services without affecting our existing clusters/environments.
Earlier it was manageable to create these clusters using the ERL console, but lately we realized that it took a long time to get the VMs, configure them, and set up Kubernetes clusters that our dev team could use. We explored AWS to quickly provision resources (EC2, S3, etc.), but we were not satisfied with the time spent navigating the AWS console to set up VPCs, subnets, security groups, keys, etc.
The DevOps team was not able to focus on other initiatives, like evaluating better CD tools, as most of its time was spent setting up multiple environments for the dev/QA teams.
Once we decided to address this pain point, we started exploring infrastructure provisioning and configuration management tools to automate the creation of our dev/QA environments. That is when we discovered Terraform and Ansible.

Terraform and Ansible

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. It supports the major cloud vendors, so it is agnostic to which cloud you run on, and it helps with automation.

Ansible is an open-source automation platform. It is very simple to set up, yet powerful. Ansible can help you with configuration management, application deployment, and task automation. It can also do IT orchestration, where you have to run tasks in sequence and create a chain of events that must happen on several different servers or devices.
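
As a rough illustration of how the two tools split the work, the commands below show the usual flow; the variable file and inventory names are placeholders, not our actual project files:

$ terraform init                                      # download provider plugins
$ terraform plan  -var-file=cluster.vars              # preview the infrastructure changes
$ terraform apply -var-file=cluster.vars              # provision the VMs, networking, etc.
$ ansible-playbook -i inventory.ini k8s-cluster.yml   # configure the hosts and set up Kubernetes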

So without going into the theory, let's see what we did to make our lives easier. 

The tools used:

  • Terraform 
  • Ansible 
  • Jenkins
  • Bash Scripting/Python
  • Bitbucket

We integrated Terraform and Ansible with Jenkins so that we can fill in our requirements (such as Instance_Name, Instance_type, Subnet, VPC, Key_pair, etc.) as Jenkins job parameters and let Jenkins variables do the work for us.
We use a bash script in the backend which takes the input from Jenkins and writes it to a .vars file; the Jenkins input variables get appended to the Terraform user-input variable file (the .vars file).

This is just a one-time effort; afterwards we can create unlimited Kubernetes clusters in any region within minutes.
Make sure to take the variables as input in the Jenkins job and run a background script (be it Bash, Python, etc.) that appends the values of these variables to the .vars file and then runs terraform plan and terraform apply. Make sure to delete terraform.tfstate in every build, otherwise you will end up modifying your existing infrastructure and land yourself in trouble.
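
As a minimal sketch, such a wrapper script could look something like the one below. It assumes the Jenkins job exposes parameters named INSTANCE_NAME, INSTANCE_TYPE, SUBNET, VPC, and KEY_PAIR; every parameter, variable, and file name here is a placeholder rather than our actual code:

#!/bin/bash
# Hypothetical Jenkins wrapper script; all names are illustrative.
set -e

VARS_FILE="terraform-user-input.vars"

# Append the Jenkins build parameters to the Terraform variable file.
cat >> "$VARS_FILE" <<EOF
instance_name = "${INSTANCE_NAME}"
instance_type = "${INSTANCE_TYPE}"
subnet_id     = "${SUBNET}"
vpc_id        = "${VPC}"
key_pair      = "${KEY_PAIR}"
EOF

# Delete stale state so each build creates a fresh cluster instead of
# modifying existing infrastructure.
rm -f terraform.tfstate terraform.tfstate.backup

terraform init
terraform plan  -var-file="$VARS_FILE"
terraform apply -auto-approve -var-file="$VARS_FILE"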

Now all you need to do is click "Build Now" in the Jenkins job and you will have your Kubernetes cluster up and running in no time.
Feel free to use, modify and hack around the code to suit your needs.

