Friday, June 5, 2020

Updating the Channel Configs in Hyperledger Fabric


Find the details under the "Adding an Org to a Channel" section of the tutorials.
This step is important because you need to switch to the orderer organization's MSP before sending the channel update.
Reference: https://hyperledger-fabric.readthedocs.io/en/release-1.4/channel_update_tutorial.html

Static Leader Election in Hyperledger Fabric Peer


How to enable static leader election in HLF?

Static leader election allows you to manually define one or more peers within an organization as leader peers. Please note, however, that having too many peers connect to the ordering service may result in inefficient use of bandwidth. To enable static leader election mode, configure the following parameters within the gossip section of core.yaml:


 vi /fabric-samples/config/core.yaml


peer:
    # Gossip related configuration
    gossip:
        useLeaderElection: false
        orgLeader: true

Alternatively, these parameters can be configured and overridden with environment variables:

export CORE_PEER_GOSSIP_USELEADERELECTION=false
export CORE_PEER_GOSSIP_ORGLEADER=true

Note:
The following configuration will keep the peer in standby mode, i.e. the peer will not try to become a leader:

export CORE_PEER_GOSSIP_USELEADERELECTION=false
export CORE_PEER_GOSSIP_ORGLEADER=false


  • Setting both CORE_PEER_GOSSIP_USELEADERELECTION and CORE_PEER_GOSSIP_ORGLEADER to true is ambiguous and will lead to an error.
  • In a static configuration, the organization admin is responsible for ensuring high availability of the leader node in case of failures or crashes.

Chaincode operations on Hyperledger Fabric


Objective:

  • Install chaincode on peers
  • Instantiate chaincode on the channel
  • Upgrade chaincode
  • Query chaincode
  • Invoke chaincode
  • List chaincodes installed and instantiated 
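These objectives map onto the peer CLI roughly as below. This is a sketch assuming the Fabric 1.4 CLI run from the cli container, a channel named mychannel, and the sample Go chaincode mycc; substitute your own names and paths.

```shell
# Install the chaincode on the peer the CLI currently targets
peer chaincode install -n mycc -v 1.0 \
  -p github.com/chaincode/chaincode_example02/go/

# Instantiate it on the channel (runs the Init function)
peer chaincode instantiate -o orderer.example.com:7050 -C mychannel \
  -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}'

# Upgrade to a new version after installing v2.0 on the required peers
peer chaincode upgrade -o orderer.example.com:7050 -C mychannel \
  -n mycc -v 2.0 -c '{"Args":["init","a","100","b","200"]}'

# Query (read-only) and invoke (writes to the ledger)
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
peer chaincode invoke -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'

# List what is installed on this peer and instantiated on the channel
peer chaincode list --installed
peer chaincode list --instantiated -C mychannel
```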


Thursday, May 21, 2020

Hyperledger Fabric Certificate Authority (CA) Operations

Objectives

  • Initialize and start the Fabric CA server.
  • Enroll an admin
  • Register an admin
  • Register a peer
  • Enroll a peer
  • Register a user
  • Enroll a user
  • Re-enroll a client
  • List the identities
  • Revoke a user
  • Generate a Certificate Revocation List
  • Verify crl.pem
  • Copy crl.pem file from the docker container to host

cd fabric-samples/first-network
vi docker-compose-ca.yaml

In the docker-compose-ca.yaml
Add these lines to both the ca0 and ca1 sections, after the container name:

container_name: ca_peerOrg1
extra_hosts:
      - ca.org1.example.com:127.0.0.1

container_name: ca_peerOrg2
extra_hosts:
      - ca.org2.example.com:127.0.0.1

 Start the CA container.

cd fabric-samples/first-network
./byfn.sh up -a -o etcdraft
docker exec -it ca_peerOrg1 bash

Please make sure that you run these commands on a single line, or add a backslash (\) line continuation if you split them across multiple lines.
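Inside the container, the objectives above map roughly onto fabric-ca-client commands like the following sketch. The admin:adminpw bootstrap credentials and the peer1/user1 names are assumptions; substitute the identities your CA was started with.

```shell
export FABRIC_CA_CLIENT_HOME=/root/ca-client

# Enroll the CA bootstrap admin (credentials are an assumption)
fabric-ca-client enroll -u http://admin:adminpw@ca.org1.example.com:7054

# Register, then enroll, a peer identity
fabric-ca-client register --id.name peer1 --id.secret peer1pw --id.type peer
fabric-ca-client enroll -u http://peer1:peer1pw@ca.org1.example.com:7054 \
  -M "$FABRIC_CA_CLIENT_HOME/peer1/msp"

# Register and enroll a user
fabric-ca-client register --id.name user1 --id.secret user1pw --id.type client
fabric-ca-client enroll -u http://user1:user1pw@ca.org1.example.com:7054 \
  -M "$FABRIC_CA_CLIENT_HOME/user1/msp"

# List identities, revoke the user, and generate a CRL (written to msp/crls/crl.pem)
fabric-ca-client identity list
fabric-ca-client revoke -e user1
fabric-ca-client gencrl
```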






Sunday, February 9, 2020

Adding a New Member to a Hyperledger Fabric Organization

Objectives:

  • Create the peer definition
  • Deploy the peer
  • Add the peer to a channel


Creating the peer

cd fabric-samples/first-network


Edit the docker-compose.yml File


Please open docker-compose.yml in VS Code.
The first thing we will do inside the docker-compose.yml file is create a new container
definition (in the services section). Our container name will be peer1.org1.example.com.
We can keep much of the container configuration very similar to peer0's, since the two peers are from the same org and share similar details. We can add the following information under the services section:
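As a rough sketch, the new service definition might look like the following. The image tag, environment values, port mapping, and volume paths here are assumptions modeled on a typical peer0 entry; copy the real values from your own peer0 definition.

```yaml
peer1.org1.example.com:
  container_name: peer1.org1.example.com
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ID=peer1.org1.example.com
    - CORE_PEER_ADDRESS=peer1.org1.example.com:7051
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - FABRIC_LOGGING_SPEC=info
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/msp/peer
  ports:
    - 8051:7051
```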


After you have added the peer1.org1.example.com details in the docker-compose.yml file, it should look like this:



One of the most important things to pay attention to here is the environment variables, which set values like the MSP identity, logging levels, etc. The other is the volumes section, which is how we expose/map our local folders, such as the crypto-config or network-artifacts configuration directories, to our Docker container for access.


Edit the crypto-config File

Since we are adding a new peer to Org1, we must reconfigure the template for Org1. We need to edit the generation template for Org1 so that it accommodates the
creation of two peer identities (in the form of x509 certificates).
Open crypto-config.yaml in your preferred code editor.
What we are looking to edit is the value of Template.Count, which can be found under the Org1 section. We will be changing that value from 1 to 2, so that section should read:

Template:
  Count: 2


Next, we need to regenerate the crypto certificates from our updated crypto-config using cryptogen. This time, we can use a command that creates the new certificates without removing the old ones.
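Assuming the fabric-samples layout used elsewhere in this post, that command is cryptogen's extend mode, which only generates the material missing under the updated template:

```shell
# Extend the existing crypto material instead of regenerating it from scratch
../bin/cryptogen extend --config=./crypto-config.yaml
```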







Generating the Initial Configuration


First, we need to generate our genesis block.

../bin/configtxgen -profile OneOrgOrdererGenesis \
-outputBlock ./config/genesis.block

We can look at this newly created genesis block by using the inspectBlock command.


../bin/configtxgen -inspectBlock ./config/genesis.block

Start the Peer Container


Now, let’s use our docker-compose definition to quickly bring up peer1.org1


docker-compose -f docker-compose.yml up \
-d peer0.org1.example.com peer1.org1.example.com cli

Let’s check our peer containers to confirm they were started with no issues.


docker ps --filter name=peer

Since we joined peer0 in bootstrap, we must also do the same for peer1. First, we must get a shell inside our peer container.

docker exec -it peer1.org1.example.com bash

We will set the MSP configuration path to the Admin identity for Org1, so that we are authorized to pull and edit the configuration. (Note: the path is /etc/hyperledger/msp because we mapped it that way at container startup using volumes.)

export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp

Next, we need to pull the genesis block of the channel we would like to join our peer to.

peer channel fetch oldest allarewelcome.block -c allarewelcome \
--orderer orderer.example.com:7050

Join the Peer to the Current Channel

peer channel join -b allarewelcome.block

Now, we can check and see if our peer is actually joined.

peer channel list

Tuesday, January 21, 2020

DevOps workshop at NMIT Bangalore - Day 1

As part of the Campus Connect program, my colleague Aditya and I conducted a DevOps workshop at NMIT Bangalore. Since covering all the topics related to DevOps was not possible in a single day, we decided to break the entire workshop into multiple smaller sessions spanning several days, based on the availability of the students and our work commitments.

Day 1

We reached NMIT college at 8:30 AM and met our point of contact, Impana (Assistant Professor at NMIT), who was kind enough to make all the logistical arrangements for us. We were guided to their new seminar hall and started setting up for our presentation.
By 9:30 the students had reached the seminar hall (most of them were from the 4th and 6th semesters).
We started the session by asking the students what they understood by the term DevOps. To our pleasant surprise, many of the students had a good sense of DevOps and why it has become indispensable in the software development industry.

As most of the students were familiar with the Waterfall model of software development, we began a comparison between waterfall and Agile methodology to make them understand the importance of quick release cycles, continuous feedback and minimum viable product(MVP).
Once they were comfortable with the basic concepts of Agile, we started describing how Agile and DevOps are interrelated and how DevOps helps in achieving the agile development methodology successfully.

We showed them a build cycle using the traditional approach, where it took weeks or months, if not years, to release a new version of the product. Once the students could feel the pain points of that approach, we introduced the DevOps release cycle, which leverages Continuous Integration, Continuous Testing, and Continuous Deployment/Delivery, and showed how it shortens the release cycle to hours or minutes.

As DevOps terms like CI and CD are fairly new to college students, we explained the meaning and significance of each term. We used a DevOps pipeline diagram as an aid to explain each stage of the DevOps cycle, along with some industry-leading tools like Jenkins, SonarQube, Terraform, Ansible, etc.
We also emphasized that students should focus on learning the underlying technologies rather than specific tools, as there is a plethora of tools available in the market and the choice of one over another depends on various factors.

In the end, we showed them a quick demo of a Jenkins pipeline that auto-triggered a build on a code push to GitHub and ran a Python script containing a sample function.

It was a very interactive session, with good questions from the students, and we thoroughly enjoyed explaining to them the nuances of DevOps and the SDLC.

I personally felt a sense of nostalgia, as if life had come full circle for me.
Just a few years ago, I was in the same place as these intelligent and curious students, on the path to becoming a professional developer.

I have been blessed to work with some of the finest engineers in Unisys like Sanket and many others, and I feel that workshops like these will help me pass on the knowledge to the next generation of developers and engineers.

What is DevOps? An Engineer's Perspective


What is DevOps?

DevOps is a culture in which active collaboration between development, operations, and business teams is achieved. It's not all about tools: DevOps is required in an organization to create value for end customers, and tools are only aids in building this culture. DevOps increases an organization's capability to deliver high-quality products or services at a swift pace. It automates all processes, from the build phase to the deployment phase of an application.

Why is DevOps needed?


DevOps helps remove silos in organizations and enables the creation of cross-functional teams, thus reducing reliance on any one person or team during the delivery process. Frequent communication between teams improves the confidence and efficiency of team members. Through automation, the DevOps team increases its productivity, leading to satisfied customers.

DEVS ARE FROM VENUS,
OPS ARE FROM MARS
                              - Steven Haines

How to implement DevOps?

The “DevOps Handbook” defines the “Three Ways: The Principles Underpinning DevOps” as a way to implement DevOps in large enterprises.

The First Way: Systems Thinking


The First Way emphasizes the need for global optimization as opposed to local optimization, hence the focus is on optimizing all business value streams enabled by IT.


The Second Way: Amplify feedback loops


The Second Way is about discovering and injecting the right feedback loops so that necessary corrections can be made before it’s too late.

The Third Way: Culture of Experimentation and learning


The third way is all about creating the right culture that fosters two things, continual experimentation and learning from failures. It emphasizes the understanding that repetition and practice make teams perfect.

While the three ways focus on the key principles, we also have three pillars which are keys to any successful DevOps adoption.

The three pillars of any DevOps adoption are,

  1. Culture and People
  2. Tools and Technology
  3. Processes and practices


Some common DevOps terms:


Continuous Integration

Continuous integration is a software engineering practice in which software development team members frequently merge and build their code changes. The key benefit is detecting and fixing merge conflicts and integration bugs in the early stages of software development, thereby reducing the cost of finding and fixing issues.
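The fail-fast idea behind CI can be sketched with a toy pipeline runner. The stage names and the build/test commands here are placeholders, not any real project's build; the point is simply that every change runs through the same automated stages and the pipeline stops at the first failure.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run a named pipeline stage; abort the whole pipeline on the first failure
run_stage() {
  local name="$1"; shift
  echo "== ${name} =="
  "$@" || { echo "${name} FAILED"; exit 1; }
}

# Placeholder stages; in a real job these would be e.g. `mvn -B package`
run_stage "build" true
run_stage "test"  true
echo "CI pipeline PASSED"
```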


Continuous Delivery

Continuous delivery is a software engineering practice in which changes are automatically built, tested, and made release-ready for production. To get into a continuous delivery state, it is crucial to define a test strategy. The main goal is to identify functional and non-functional defects much earlier, thus reducing the cost of fixing them. It also enables teams to produce working software, as defined in the Agile Manifesto. Continuous delivery as a practice depends on continuous integration and test automation, so teams must practice both religiously to practice continuous delivery effectively.


Continuous Deployment

Continuous deployment is a software engineering practice in which code committed by developers is automatically built, tested, and deployed to production. Continuous deployment as a practice requires that teams have already adopted continuous integration and continuous delivery. The primary advantages of this practice are reduced time-to-market and early feedback from users.

Continuous Testing

Continuous testing is a software testing practice that involves testing early, testing often, and test automation. The primary goal of continuous testing is to shift the test phase as far left as possible, identifying defects early and reducing the cost of fixing them.



Some popular DevOps tools:


  • Git – Source code management and version control system
  • Selenium – Automation testing tool
  • Jenkins – Automation server with plenty of plugins for developing CI/CD pipelines
  • SonarQube – Static code analysis
  • Ansible – Configuration management and deployment tool
  • Docker – Containerization platform
  • Terraform – Infrastructure automation tool
  • Kubernetes – Container orchestration tool
  • Nagios – Continuous monitoring tool

Following are the factors that will improve as a result of DevOps implementation:


Predictability: DevOps decreases the failure rate of new product releases.
Maintainability: The process improves the overall recovery rate when a release fails.
Improved Quality: DevOps improves the quality of product development by incorporating infrastructure concerns early.
Lower Risk: Security aspects are incorporated into the SDLC, and the number of defects decreases across the product.
Cost Efficiency: DevOps improves cost efficiency, which is always an aspiration of every business organization.
Stability: DevOps implementation offers a stable and secure operational state.
Streamlined Delivery Process: DevOps streamlines software delivery, significantly reducing the time and effort needed to bring releases to market.


Friday, January 10, 2020

Setting up your first Blockchain network using Hyperledger Fabric

Pre-requisites 

  • An Ubuntu 16 virtual machine
  • Basic understanding of Linux commands

Steps:

  • Log in to your Ubuntu machine. Please use a NON-ROOT user to log in. If you don't have a non-root user, please create one:
  • sudo adduser <username>
  • Open a new terminal by pressing CTRL + ALT + T
  • Run the following commands one by one.
  • Logout from the terminal and open a new one using CTRL + ALT + T
  • Run these commands in the terminal


Basic Troubleshooting:


Hyperledger: byfn.sh up: ERROR !!! FAILED to execute


Run the following commands to remove old docker containers and images:
$ docker rm -f $(docker ps -aq)

This will remove all your containers
$ docker rmi -f $(docker image list -aq)

This will remove all your images
$ docker volume prune

Cryptogen not Found:

$ cd fabric-samples
$ curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/v1.0.5/scripts/bootstrap.sh | bash -s 1.0.5
$ cd fabric-samples/first-network
$ ./byfn.sh -m generate

You will see a brief description as to what will occur, along with a yes/no command-line prompt. Respond with a y or hit the return key to execute the described action.

Terraform and Ansible to setup K8s clusters

Quickly set up dev/QA/pre-prod k8s cluster for teams

The Challenge

In my current project, we run our infrastructure on ERL, AWS, and Azure. We have multiple microservices that make up our application, and we deploy each of them to different Kubernetes clusters/environments (dev/stage/QA/prod) on ERL, AWS, and Azure.
Given the rapid development cycle and the PoCs our team does, we need to quickly create environments to test new services without affecting our existing clusters/environments.
Earlier, it was manageable to create these clusters using the ERL console, but lately we realized that it took a long time to get the VMs, configure them, and set up the Kubernetes clusters our dev team could use. We explored AWS to quickly provision resources (EC2, S3, etc.) but were not satisfied with the time spent navigating the AWS console to set up VPCs, subnets, security groups, keys, etc.
The DevOps team was not able to focus on other initiatives like evaluating better CD tools etc. as most of the time was spent on setting up multiple environments for dev/QA team.
Once we decided that we need to address this pain point, we started exploring Infrastructure and configuration management tools to automate the creation of our dev/QA environments. That is when we discovered Terraform and Ansible. 

Terraform and Ansible

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Because it supports all the major cloud vendors, it is agnostic to which cloud you run on, and it helps with automation.

Ansible is an open-source automation platform. It is very simple to set up, yet powerful. Ansible can help you with configuration management, application deployment, and task automation. It can also do IT orchestration, where you run tasks in sequence and create a chain of events that must happen across several different servers or devices.

So without going into the theory, let's see what we did to make our lives easier. 

The tools used:

  • Terraform 
  • Ansible 
  • Jenkins
  • Bash Scripting/Python
  • Bitbucket

We integrated Terraform and Ansible with Jenkins so that we can fill in our requirements (such as Instance_Name, Instance_type, Subnet, VPC, Key_pair, etc.) as Jenkins fields and let Jenkins variables do the work for us.
We use a bash script in the backend that takes the input from Jenkins and writes it to a .vars file; the Jenkins input variables get appended to the Terraform user-input variable file (the .vars file). Check out the image below for a better understanding.

This is just a one-time effort; afterwards, we can create unlimited Kubernetes clusters in any region within minutes.
Make sure to take the variables as input in the Jenkins job and run a background script (in Bash, Python, etc.) that appends the values of these variables to the .vars file and then performs terraform plan and terraform apply. Make sure to delete terraform.tfstate on every build, or else you will end up modifying your existing infrastructure and land yourself in trouble.
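A minimal sketch of such a backend script is below. The parameter names (INSTANCE_NAME, INSTANCE_TYPE, SUBNET_ID) and the variable-file layout are illustrative assumptions; map them to your own Jenkins job's fields, and the terraform calls at the end are left commented since they only make sense against a real provider.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Jenkins exports the build parameters as environment variables;
# the defaults here only make the sketch runnable standalone.
INSTANCE_NAME="${INSTANCE_NAME:-k8s-dev}"
INSTANCE_TYPE="${INSTANCE_TYPE:-t3.medium}"
SUBNET_ID="${SUBNET_ID:-subnet-0abc1234}"

VARS_FILE="cluster.auto.tfvars"

# Write the Jenkins inputs to the Terraform variable file
cat > "$VARS_FILE" <<EOF
instance_name = "${INSTANCE_NAME}"
instance_type = "${INSTANCE_TYPE}"
subnet_id     = "${SUBNET_ID}"
EOF

# Start from a clean slate so a new build provisions a new cluster rather
# than mutating infrastructure tracked by a stale state file
rm -f terraform.tfstate

# terraform init && terraform plan -out=tfplan && terraform apply tfplan
```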

Now all you need to do is click "Build Now" in the Jenkins job, and you will have your Kubernetes cluster up and running in no time.
Feel free to use, modify and hack around the code to suit your needs.


 YouTube has evolved from a platform for educational and entertaining content into a space filled with ads, distractions, and the ever-growi...