All posts by chevaladmin

DevOps – Infrastructure as Code on Azure Platform with Hashicorp Terraform Part 1

At Cheval Partners, we believe that infrastructure as code is essential to get the most out of your cloud platforms. It is also required to implement immutable infrastructure. HashiCorp has a suite of products that make it easy to implement infrastructure as code on numerous public and private cloud platforms. We have been leveraging HashiCorp tools for the past year for AWS implementations. We also have many clients deploying applications on the Azure platform, but we were unable to use HashiCorp tools with Azure because Azure Resource Manager support was missing. We were ecstatic to read this announcement from HashiCorp that Packer and Terraform now support Azure Resource Manager: https://www.hashicorp.com/blog/azure-packer-terraform.html I will use this blog post to introduce you to using Terraform to provision Azure Resource Manager resources.

Imperative or Declarative

There are two different ways in which you can implement infrastructure as code.

Imperative

The imperative method uses scripts or code to provision your services. Here are a few examples:
  • AWS: bash scripts that leverage the AWS CLI, or the excellent AWS Python SDK, boto3
  • Azure: Azure PowerShell cmdlets that provision the services
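
For example, here is a minimal sketch of the imperative style using Azure PowerShell cmdlets. The storage account name and location are placeholders, and the cmdlet usage mirrors the scripts that appear later in these posts.

  # Imperative provisioning: each resource is created by an explicit command, and
  # ordering and error handling are the script author's responsibility.
  New-AzureStorageAccount -StorageAccountName "demostorage01" -Location "West US" -Type "Standard_LRS"
  if (!($?))
  {
      throw "Could not create storage account [demostorage01]"
  }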

Declarative

Here you define templates for your infrastructure. Some examples of this are:
  • AWS: CloudFormation
  • Azure: Azure Resource Manager
  • Terraform
Both the imperative and declarative methods of implementing infrastructure as code are better than manual, error-prone resource provisioning. However, with the imperative method you are responsible for deploying services in the correct order and for recovering from failures. As the number of services grows, the scripts become brittle.

Terraform

Terraform provides a common configuration to launch infrastructure, from physical and virtual servers to email and DNS providers. Once launched, Terraform safely and efficiently changes infrastructure as the configuration evolves. Simple file-based configuration gives you a single view of your entire infrastructure. Terraform is a declarative method of resource provisioning. It can be used to provision resources in Azure, AWS, Google, OpenStack, DigitalOcean and many other providers and services. You can see the complete list of supported providers here: https://www.terraform.io/docs/providers/index.html Terraform uses the HashiCorp Configuration Language (HCL) to define the infrastructure. Once you learn HCL, you will be able to use many different providers for public and private clouds to provision infrastructure. I want to take this opportunity to make a case for Terraform, because I have heard several arguments against using it:
  1. "We are not multi-cloud": Even if you are not provisioning infrastructure in multiple clouds, you will find HCL easy and intuitive to learn. It also gives you the ability to provision infrastructure in other clouds in the future.
  2. "It can't keep up with the cloud providers": Cloud providers (AWS, Azure) are constantly adding new services, so the concern is that tools like Terraform will fall behind. However, the open source community and HashiCorp have done an excellent job of keeping up, and in some cases staying ahead of the cloud provider. This is especially true for the AWS provider. In mid-December AWS released the NAT Gateway. By the time I went to the Terraform Google group (https://groups.google.com/forum/#!forum/terraform-tool), somebody had already asked about Terraform AWS NAT Gateway support and was told that support was coming. NAT Gateway was supported by Terraform even before AWS CloudFormation.
  3. "What if a certain feature is not supported in Terraform?": Terraform allows you to call AWS CloudFormation or ARM templates, and you can also use a custom script that invokes the CLI to provision resources that are not yet supported. You can also implement the feature and contribute it back to the community. If you still have concerns, check out the change log and release schedule here: https://github.com/hashicorp/terraform/blob/master/CHANGELOG.md. They have been releasing new features and services on a monthly basis.
  4. "What happens if HashiCorp ceases to exist?": HashiCorp is a startup, but most of their tools are open source. Vagrant was one of their first tools and it has been around for a long time. No one can predict the future, but the community is active, engaged, and supporting Terraform. Containers, microservices, and multi-cloud deployments are here to stay, and HashiCorp tools can help.

Documentation

Azure Provider for Hashicorp is located here: https://www.terraform.io/docs/providers/azurerm/index.html

Use cases: https://www.terraform.io/intro/use-cases.html

Terraform vs Other Software: https://www.terraform.io/intro/vs/index.html

Installation

Terraform is an open source tool written in Go. Because it is written in Go, it can run on six different operating systems, including Windows and Mac OS. The same is true for the other HashiCorp tools as well; we will cover those in future blog posts. Installation is as simple as downloading the version suitable for your operating system from here: https://www.terraform.io/downloads.html It is a single executable, so you will just have to add it to your path to make it convenient to run.
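
As a minimal sketch on Windows (the download and install paths below are hypothetical, and Expand-Archive requires PowerShell 5.0 or later), you can extract the binary and add its folder to your PATH from PowerShell:

  # Extract the downloaded Terraform zip and put its folder on the user PATH
  $installDir = "C:\Tools\Terraform"
  Expand-Archive -Path "$env:USERPROFILE\Downloads\terraform_windows_amd64.zip" -DestinationPath $installDir
  [Environment]::SetEnvironmentVariable("Path", "$env:Path;$installDir", "User")
  # Open a new prompt afterwards so the updated PATH is picked up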

Verify installation

I installed Terraform on Windows, opened a cmd window, and ran terraform --version to verify the installed version.
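
For reference, the check is simply the following; the exact version string you see will depend on the release you downloaded.

  # Verify terraform is on the PATH; prints the installed version, e.g. "Terraform v0.6.x"
  terraform --version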

Azure Credentials for Terraform

The process to get Azure credentials for Terraform is a bit convoluted. You need the subscription_id, client_id, client_secret, and tenant_id, which the provider block expects:
# Configure the Azure Resource Manager Provider
provider "azurerm" {
  subscription_id = "..."
  client_id       = "..."
  client_secret   = "..."
  tenant_id       = "..."
}
  You can use the new Azure portal (http://portal.azure.com) to find the subscription_id: browse the list of subscriptions and copy the id. You can get the values of client_id, client_secret and tenant_id using these steps:
  1. Create an Azure Active Directory application
  2. Create an authentication key
  3. Set delegated permissions
  4. Assign the application to a role that has appropriate permissions to provision resources
You will not have to perform this setup frequently, so it may be easiest to use the portal: https://azure.microsoft.com/en-us/documentation/articles/resource-group-create-service-principal-portal/ If you prefer PowerShell, here are the instructions for performing these steps: https://azure.microsoft.com/en-us/documentation/articles/resource-group-authenticate-service-principal/
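
If you do go the PowerShell route, the flow looks roughly like the sketch below. This is from memory of the AzureRM cmdlets of that era; every name, URL, and secret shown is a placeholder, so treat the linked articles as the authoritative reference.

  # Create an AAD application, turn it into a service principal, and grant it a role
  $app = New-AzureRmADApplication -DisplayName "terraform" -HomePage "https://example.org/terraform" `
             -IdentifierUris "https://example.org/terraform" -Password "<client-secret-you-choose>"
  New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId
  New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $app.ApplicationId
  # client_id is the ApplicationId above; tenant_id is shown by Login-AzureRmAccount / Get-AzureRmSubscription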

Show me HCL

The following script creates a new resource group and provisions a virtual network. Lines 10-13 specify the name of the new resource group and its location. Lines 15-36 define an ARM virtual network named “productionNetwork” with three subnets.
1:  # Configure the Azure Resource Manager Provider  
2:  provider "azurerm" {  
3:   subscription_id = "..."  
4:   client_id    = "..."  
5:   client_secret  = "..."  
6:   tenant_id    = "..."  
7:  }  
8:    
9:  # Create a resource group  
10:  resource "azurerm_resource_group" "production" {  
11:    name   = "production"  
12:    location = "West US"  
13:  }  
14:    
15:  # Create a virtual network in the production resource group  
16:  resource "azurerm_virtual_network" "network" {  
17:   name        = "productionNetwork"  
18:   address_space    = ["10.0.0.0/16"]  
19:   location      = "West US"  
20:   resource_group_name = "${azurerm_resource_group.production.name}"  
21:    
22:   subnet {  
23:    name      = "subnet1"  
24:    address_prefix = "10.0.1.0/24"  
25:   }  
26:    
27:   subnet {  
28:    name      = "subnet2"  
29:    address_prefix = "10.0.2.0/24"  
30:   }  
31:    
32:   subnet {  
33:    name      = "subnet3"  
34:    address_prefix = "10.0.3.0/24"  
35:   }  
36:  }  
37:    
  After saving the script, you can first execute the command:

terraform plan

You can review the full help for the terraform plan command here: https://www.terraform.io/docs/commands/plan.html This command examines your configuration and your current state, and lists the resources it will create, destroy or modify. It is one of the best features of Terraform. Terraform allows you to organize your configuration into modules; my current example does not use modules, but if you were using them you would need to run: terraform plan -module-depth=-1 In my case, the plan output showed that a new resource group would be provisioned, along with a new virtual network with three subnets.
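
A related habit worth adopting (the plan file name below is arbitrary): save the reviewed plan to a file so that the subsequent apply executes exactly the changes you inspected.

  # Write the plan to a file, review it, then apply that exact plan
  terraform plan -out=production.tfplan
  terraform apply production.tfplan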

terraform apply

If the plan shows the services you wanted to provision, the next step is to provision the infrastructure. You use the apply command to provision or update your infrastructure. terraform apply -help shows the available command line options; you can also see the detailed help here: https://www.terraform.io/docs/commands/apply.html Running terraform apply showed that a new resource group and virtual network were created; you can use the portal to verify that these resources were created successfully. It also showed that the state of your infrastructure was saved to the terraform.tfstate file. This file needs to be saved and checked into source control. There are other options to store this file in a central location; they will be covered in a future blog post.

Remove the infrastructure

You can delete the entire infrastructure provisioned by Terraform with the terraform destroy command. You are prompted to confirm before the actual destruction takes place.

Closing Thoughts

I have introduced Terraform and shown how you can use it to provision resources on the Azure platform. The Terraform Azure Resource Manager provider is less than a week old, so it does not cover all the services yet. With monthly releases, more services will be supported in the near future. If you find issues or have feature requests you can log them here: https://github.com/hashicorp/terraform/issues If you want to contribute, you can make a pull request here: https://github.com/hashicorp/terraform In part 2 of this blog post series I will add virtual machines and public IPs, and manage DNS using Azure DNS.

Steps to becoming Self Employed

Over 15 years ago I decided to become a self-employed IT consultant. I had already been a consultant for 5 years before that, so becoming self-employed seemed like the next logical step. At that time, I wrote a Word document that described the steps I took to set up my consulting business. Over the years, I shared this document with many people and asked them to update it and share it back with me. If you are considering self-employment, I want to share these resources with you. I live in Minnesota, so some of the information may not apply to you.

Legal Entity

The very first decision you need to make is what type of legal entity to set up for your business. LLCs have fewer paperwork requirements and are easier to manage than an S Corporation, but your circumstances may vary, so this is one of the first decisions you need to make. You may want to consult a Certified Public Accountant to determine which option is best for you.

Name of your Legal Entity

Once you have decided the type of legal entity you need to create, you will need to select its name. Most businesses have a website, so you may want to start by finding out whether a domain name is still available.

Domain Name

Your website address is also known as your domain name. You can check whether a domain name is still available from any domain registrar, such as GoDaddy: https://www.godaddy.com/

Legal Entity Name

Before you purchase your domain name, you need to make sure that the name is available in the state where you are forming the legal entity. Most states have a business information lookup website you can use. In Minnesota, you can do a name search at https://mblsportal.sos.state.mn.us/

Federal Tax Identification Number

Every legal entity is required to have a Federal Tax Identification Number. You can apply for it online: https://www.irs.gov/Businesses/Small-Businesses-&-Self-Employed/Apply-for-an-Employer-Identification-Number-(EIN)-Online

State Tax Identification Number

Most states require you to get a state tax identification number. For the state of Minnesota, you can get a tax identification number here: http://www.revenue.state.mn.us/businesses/withholding/Pages/HowdoIgetaMinnesotawithholdingtaxIDnumber.aspx

State Unemployment ID

You will also need to set up an account with your state unemployment office. In Minnesota you can open your account here: http://www.uimn.org/uimn/employers/employer-account/new-account/

Banking

You will need to open a dedicated business checking account for your business, and you will need a Federal TIN before you can open one. You may also want to open a business savings account. You can look at http://www.bankrate.com to find a bank that will meet your needs.

Credit/Debit Cards

When opening your bank account, you may also want to apply for credit and debit cards. Always charge your business-related expenses on your business credit card; this makes it easier to do your bookkeeping.

Business Checks

I found Costco to be one of the least expensive places to print business checks.  

Accounting

Certified Public Accountant

Business taxes are different than personal taxes, so it may be worthwhile to find a reputable Certified Public Accountant (CPA) to help you with your business taxes.

Accounting Software

In addition to finding a CPA, you also need to determine how you will keep track of your business-related transactions. The most commonly used business accounting software is QuickBooks, which I used for many years. For the past year and a half, I have been using a service called Xero (http://www.xero.com). They have an iPhone application along with a website. I also evaluated FreshBooks, but I chose Xero because they had double-entry accounting, which is helpful for business accounting. Xero offers a payroll service in some US states, but they currently don't support payroll in Minnesota. Xero was an easy choice because they were inexpensive, automatically downloaded banking and credit card transactions, allowed me to give my accountant and partners access to the books, and their mobile application allowed uploading receipts. They also had a large number of integrations with other services, and their service levels could be adjusted in the middle of the month to save money on their service charges.

Payroll

I have always used a third-party payroll provider to do my payroll and all the associated filings. The cost of payroll can vary a lot: the majority of payroll services provide similar types of services, but they reserve the right to charge you exorbitant fees to prey upon your ignorance. I currently use a service called Gusto (http://www.gusto.com). They had lower fees than most other payroll providers, allowed me to run an unlimited number of payrolls every month, did all the quarterly filings, and provided W2s every year for no additional cost. I have seen others use these types of services for payroll:
  • QuickBooks
  • Wells Fargo Payroll
  • Sure Payroll
  • Use an accountant
  • Do your own payroll and business filings

Taxes

When you are self-employed you will pay self-employment taxes. If you have a payroll provider you will not have to do anything else, but you may still want to read about self-employment taxes here: https://www.irs.gov/Businesses/Small-Businesses-&-Self-Employed/Self-Employment-Tax-Social-Security-and-Medicare-Taxes

Time Management Software

Over the years, I have used a variety of methods to keep track of my time to bill my clients. If you have more than one client, I highly recommend the Harvest app. They have web and mobile applications, they are super easy to use, and they are inexpensive. You can learn more about the Harvest app here: https://www.getharvest.com

Integration

In addition to looking at ease of use, cost, and features, one other criterion you should use while selecting an application is integration. In my case, I chose Xero, Harvest, and Gusto, and all of them integrate well. I can keep track of my time in Harvest, and the invoices from Harvest are automatically transferred into Xero. I use Gusto for payroll and the transactions automatically flow into Xero. Xero automatically downloads the transactions from my bank accounts and credit cards.

Insurance

Workers Comp Insurance

If you are self-employed, your state will most likely require you to purchase workers comp insurance. Since you are working for yourself you are not allowed to file a workers comp claim against yourself, but you will still be required to purchase the insurance. I have been purchasing my workers comp insurance from State Farm; their prices seemed reasonable to me.

Commercial General Liability Insurance

If you are signing contracts with your clients, you will be required to purchase Commercial General Liability Insurance. Most of the time my clients require a 1 to 2 million dollar general liability insurance policy. I currently use State Farm for my general liability insurance; they were cheaper than most other providers.

Professional Liability Insurance

There have been times when I was asked to purchase Professional Liability Insurance. This is typically a lot more expensive than General Liability Insurance, and I don't have a recommendation for a specific provider. You should try to avoid an annual contract for this insurance; pay your premium monthly so you can cancel the policy if it is no longer necessary.

Automobile Expenses

If you use your car for business-related activity, you will be able to charge some of these expenses to your business. You should talk with your CPA to determine the option that works best for you. Here are a few options:
  1. Purchase or lease the car for your business
  2. Use your personal car and keep track of the miles you drive for business

Mileage Tracking App

I recommend the MileIQ application (https://dashboard.mileiq.com) as it makes it super easy to keep track of business-related driving.

Health Insurance

If your spouse is employed, your best bet is to get your health insurance benefits from their employer; this may be less expensive than purchasing health insurance on the open market. Even if your spouse does not work, you will be able to purchase quality health insurance as a result of Obamacare. You can shop for plans here: https://www.healthcare.gov/

Disability Insurance

I highly recommend purchasing quality disability insurance. There are many types of disability insurance policies and options. Disability insurance can be expensive, so you need to do your research, and the cost of insurance goes up as you get older. I recommend working with an independent agent and purchasing a disability insurance policy that will meet your needs. I purchased my disability insurance from Guardian; they are used by physicians and IT consultants I know. I purchased my disability insurance more than 10 years ago, so I don't remember all the features I looked for, but here are a few things to keep in mind:
  • Select a policy that allows you to define your “own occupation”
  • A Cost of Living Adjustment rider is worth paying for
  • Make sure your disability insurance kicks in after your employer-based short-term disability stops
  • Since you are paying your disability insurance premiums with after-tax dollars, any benefit payments will not be taxable

Retirement Plans

One of the best parts of being self-employed is being able to contribute more to your retirement. If your self-employment income varies greatly, you have the option not to make any payments into your retirement plan until the 4th quarter of the calendar year. A few different types of retirement plans are available for self-employed folks. Two of the best choices are:
  • Solo 401K Plan
  • SEP Plan
Neither requires a lot of paperwork. I like the Solo 401K more because it allows me to contribute more per year than a SEP plan. There are two types of contributions in a Solo 401K plan:
  • Employee 401K contribution: For 2015, an employee can defer $18,000 of their income into their Solo 401K plan
  • Employer 401K contribution: The employer can contribute up to 25% of your annual income to your 401K
The total annual contribution into a Solo 401K plan for participants under age 50 cannot exceed $53,000 in 2015. For example, on $100,000 of W-2 wages you could defer $18,000 as the employee and your business could contribute another $25,000 as the employer, for a total of $43,000. The IRS website provides a good overview of these plans: https://www.irs.gov/Retirement-Plans/One-Participant-401(k)-Plans A large number of financial institutions offer Solo 401K plans. Here are a few good options:
  • Vanguard
  • Fidelity
  • Schwab
I have used all three but my favorite is Vanguard because of their low costs.

Business Stationery

You will most likely need business cards and letterheads. There are a large number of inexpensive choices available. Here are two options worth considering:
  • moo.com
  • vistaprint.com
 

Messaging and Collaboration

Cloud Software as a Service (SaaS) offerings will be your best bet: they are inexpensive, don't require a large upfront investment, are easy to set up, and can grow with your needs. Office 365 is by far my favorite SaaS service for messaging and collaboration. They have plans starting as low as $5 per employee per month, which includes:
  • Email
  • SharePoint Online for collaboration
  • Skype for communication
  • OneDrive for secure file exchange
https://products.office.com/en-us/business/office-365-business

Website Hosting

Office 365 allows you to host a public website, but they are deprecating this feature. There are many great options for hosting your public-facing website.

Content Management System

There are many great options here as well. WordPress is by far one of the most popular content management systems, and there are dedicated WordPress hosting providers. More technical folks may install WordPress and MySQL in a VM, but most of you may be better off selecting a provider that spins up a website for you and allows you to manage your content.

Customer Relationship Management

This software helps businesses manage customer data and customer interactions, access business information, automate sales, marketing and customer support, and manage employee, vendor and partner relationships. In the past, CRM software was primarily used by larger enterprises; with the advent of Software as a Service applications, any size of business can now use this type of software. Two of the best CRM products are:
  • Salesforce
  • Microsoft CRM Online
Both of them are easy to set up, require no upfront investment and will grow as your business grows.

Networking/Branding

Once you are self-employed, your continued employment will depend on your brand and your network. You will find numerous resources on the web that cover this topic. I will share a few suggestions:
  • Leverage Social Media: Twitter, LinkedIn, and SlideShare
  • Join professional organizations to connect with your peers
  • Participate in Meetups and local events
  • Plan to spend 20% of your time on training and business development
  • Participate in the relevant communities like StackExchange for developers
  • Blog consistently and publish on LinkedIn and your personal/business website
  • Find a Mentor
  • Give back and help others
I hope you found some of these resources helpful. I wish you the very best in your self-employment journey.

How to upgrade Azure DS Series VM to GS Series VM

Azure GS Series VMs were released on September 2nd, 2015. G Series VMs were released earlier this year.

The GS Series adds the ability to use SSD-backed premium storage to the largest/fastest virtual machines on the Azure platform.

You can read more about them here:

https://azure.microsoft.com/en-us/blog/azure-has-the-most-powerful-vms-in-the-public-cloud/

GS Series VMs are not available in all regions yet.

I had a DS Series VM running in the “West US” region. This VM was in an availability set.

We needed to upgrade this VM to a GS Series VM. When I looked at http://portal.azure.com and tried to resize the VM, I did not see GS Series VMs in the list of resize options. I knew that the GS Series was available in “West US”, so I needed to find another way to resize my VM.

Azure Resource Explorer

https://resources.azure.com

If you are writing Azure Resource Manager templates, you will find Azure Resource Explorer invaluable. Documentation for services is often incomplete, so I create a resource using the Azure management portal and then use the resource explorer to understand the properties of the resource. The majority of the time I use the resource explorer to read information; however, in this particular case I used it to upgrade the VM. I have only tried this in dev, and it worked.

Here is my DS1 instance running in West US.

image

Here is how this VM looks in the resource explorer.

image

My VM was running. I tried updating the VM to Standard_GS1 using these steps:

1. You have to have appropriate access to the Azure subscription/resource group where the VM is running.

2. Log into the resource explorer using “ReadWrite” mode and select the subscription, resource group, compute, and the virtual machine.

image

3. Select the virtual machine and press the “Edit” button as shown.

image

4. Update the value of vmSize to Standard_GS1 as shown below and press “PUT”

image

5. The operation failed with the error below. I found that I had to stop/deallocate my VM; this is required whether or not it is in an availability set. The error message was very descriptive.

{
  "error": {
    "code": "OperationNotAllowed",
    "target": "vmSize",
    "message": "Unable to update the VM. The requested VM size 'Standard_GS1' may not be available in the resources supporting the existing allocation. Please try again later, try with a different VM size or create a VM with a new availability set or no availability set binding."
  }
}

6. I went to the preview portal and stopped the VM. The status of the VM changed to stopped (deallocated).

7. I refreshed the resource explorer to make sure it had the latest settings for the VM.

8. I repeated the steps 2, 3 and 4 again. This time there were no errors.

9. I verified in the portal that the size had changed to Standard_GS1 as shown below.

image

10. Don’t forget to shut down the VM after your experiment is over.

Summary

Upgrading a DS Series VM to a GS Series VM is possible; however, a reboot is required whether the VM is standalone or in an availability set. When new services are launched on the Azure platform, they may not yet have PowerShell or Azure CLI support available or documented. Azure Resource Explorer allows us to interact directly with the Azure platform, and it can be used to manage resources.

Adventures with Azure Resource Manager Part I

Overview

In this series of blog posts I will create ARM templates used to provision Azure resources. I will kick things off with a template that shows you how to create multiple storage accounts. It also shows:

  1. How to use parameters of type array
  2. How to use the length function to iterate over the elements of an array
  3. How to use the copy element
  4. How to use the outputs section of the template to display information about newly created resources
  5. How to use parameter files to provision resources in your dev, test and production environments

Show me my template

Parameters: Lines 4-11

Like any other ARM template, this template starts with a parameters section. Line 5 declares a parameter named storageAccountList of type array. This parameter passes in an array of objects that provide all the details required to provision each storage account.

Resources: Lines 12-26

This is the section where we iterate through the list of objects in the storageAccountList and provision storage accounts in a resource loop.

Line 14: Sets the name property of the storage account being provisioned

Line 17: Sets the Location property of the storage account being provisioned

Line 20: Uses length function to determine the number of elements in the storageAccountList

Line 23: Sets the accountType property of the storage account being provisioned

Outputs: Lines 27-40

This section displays details about the storage accounts that were provisioned.

Lines 30, 34 and 38 reference the storage accounts that were provisioned.

1:  {  
2:    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",  
3:    "contentVersion": "1.0.0.0",  
4:    "parameters": {  
5:      "storageAccountList": {  
6:        "type": "array",  
7:        "metadata": {  
8:          "description": "List of storage accounts that need to be created"  
9:        }  
10:      }  
11:    },  
12:    "resources": [  
13:      {  
14:        "name": "[parameters('storageAccountList')[copyIndex()].name]",  
15:        "type": "Microsoft.Storage/storageAccounts",  
16:        "apiVersion": "2015-05-01-preview",  
17:        "location": "[parameters('storageAccountList')[copyIndex()].location]",  
18:        "copy": {  
19:          "name": "storageAccountLoop",  
20:          "count": "[length(parameters('storageAccountList'))]"  
21:        },  
22:        "properties": {  
23:          "accountType": "[parameters('storageAccountList')[copyIndex()].storageAccountType]"  
24:        }  
25:      }  
26:    ],  
27:    "outputs": {  
28:      "stgobject1": {  
29:        "type": "object",  
30:        "value": "[reference(concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountList')[0].name),providers('Microsoft.Storage', 'storageAccounts').apiVersions[0])]"  
31:      },  
32:      "stgobject2": {  
33:        "type": "object",  
34:        "value": "[reference(concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountList')[1].name),providers('Microsoft.Storage', 'storageAccounts').apiVersions[0])]"  
35:      },  
36:      "stgobject3": {  
37:        "type": "object",  
38:        "value": "[reference(concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountList')[2].name),providers('Microsoft.Storage', 'storageAccounts').apiVersions[0])]"  
39:      }  
40:    }  
41:  }  

Parameter Files

We created one parameterized template. After the template has been tested, you can use different parameter files with the same template to provision resources in different environments.

Dev Parameters File

This parameter file defines a storage account list with name, location and storageAccountType properties for each storage account. It can be used to provision storage accounts in a dev environment.

{
  "storageAccountList": {
    "value": [
      { "name": "rajappdev", "location": "Central US", "storageAccountType": "Standard_LRS" },
      { "name": "rajdbdev", "location": "Central US", "storageAccountType": "Standard_GRS" },
      { "name": "rajwebdev", "location": "Central US", "storageAccountType": "Standard_ZRS" },
      { "name": "rajarchdev", "location": "West US", "storageAccountType": "Premium_LRS" }
    ]
  }
}

Test Parameters File

This parameter file defines a storage account list with name, location and storageAccountType properties for each storage account. It can be used to provision storage accounts in a test environment.

{
  "storageAccountList": {
    "value": [
      { "name": "rajapptest", "location": "Central US", "storageAccountType": "Standard_LRS" },
      { "name": "rajdbtest", "location": "Central US", "storageAccountType": "Standard_GRS" },
      { "name": "rajwebtest", "location": "Central US", "storageAccountType": "Standard_ZRS" },
      { "name": "rajarchtest", "location": "West US", "storageAccountType": "Premium_LRS" }
    ]
  }
}

Prod Parameters File

This parameter file defines a storage account list with name, location and storageAccountType properties for each storage account. It can be used to provision storage accounts in a prod environment.

{
  "storageAccountList": {
    "value": [
      { "name": "rajappprod", "location": "Central US", "storageAccountType": "Standard_LRS" },
      { "name": "rajdbprod", "location": "Central US", "storageAccountType": "Standard_GRS" },
      { "name": "rajwebprod", "location": "Central US", "storageAccountType": "Standard_ZRS" },
      { "name": "rajarchprod", "location": "West US", "storageAccountType": "Premium_LRS" }
    ]
  }
}

 

Ship It (Make it so Number 2)

Now that our templates are ready, we can execute them to provision resources.

Here is a short script that is used to provision resources.

Lines 16-30: Create the resource group if it does not already exist

Line 40: Uses the template and a parameters file to provision storage accounts.

1:  Param  
2:  (  
3:    [Parameter (Mandatory = $true)]  
4:    [string] $ResourceGroupName,  
5:    
6:    [Parameter (Mandatory = $true)]  
7:    [string] $Location,  
8:    
9:    [Parameter (Mandatory = $true)]  
10:    [string] $ParametersFile  
11:  )  
12:    
13:  # Print the version of the PowerShell cmdlets we are using  
14:  (Get-Module Azure).Version  
15:    
16:  $rg = Get-AzureResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue  
17:    
18:  if (!$rg)  
19:  {  
20:    # Create a new resource group  
21:    Write-Output "";  
22:    Write-Output "Creating Resource Group [$ResourceGroupName] in location [$Location]"  
23:    
24:    
25:    New-AzureResourceGroup -Name "$ResourceGroupName" -Force -Location $Location -ErrorVariable errorVariable -ErrorAction SilentlyContinue | Out-Null  
26:    
27:    if (!($?))   
28:    {   
29:      throw "Cannot create new Resource Group [$ResourceGroupName] in region [$Location]. Error Detail: $errorVariable"   
30:    }  
31:       
32:    Write-Output "Resource Group [$ResourceGroupName] was created"   
33:      
34:  }  
35:  else  
36:  {  
37:    Write-Output "Resource Group [$ResourceGroupName] already exists"  
38:  }  
39:    
40:  New-AzureResourceGroupDeployment -Name stgdeployment -ResourceGroupName $ResourceGroupName -TemplateFile .\createstorageaccts.json -TemplateParameterFile $ParametersFile  

 

Trust but Verify

Line 1: Calls the deploy.ps1 script and passes in the resource group name, location, and parameters file.

Lines 54-92: Show the details of the storage accounts that were provisioned.

1:  PS C:\git\ArmExamples\CreateStorageAccounts> .\deploy.ps1 -ResourceGroupName ARM-Dev -Location "West US" -ParametersFile  
2:   .\storageaccts-dev.json  
3:    
4:  Creating Resource Group [ARM-Dev] in location [West US]  
5:  VERBOSE: 3:54:11 PM - Created resource group 'ARM-Dev' in location 'westus'  
6:  Resource Group [ARM-Dev] was created  
7:  VERBOSE: 3:54:13 PM - Template is valid.  
8:  VERBOSE: 3:54:14 PM - Create template deployment 'stgdeployment'.  
9:  VERBOSE: 3:54:22 PM - Resource Microsoft.Storage/storageAccounts 'rajarchdev' provisioning status is running  
10:  VERBOSE: 3:54:22 PM - Resource Microsoft.Storage/storageAccounts 'rajwebdev' provisioning status is running  
11:  VERBOSE: 3:54:24 PM - Resource Microsoft.Storage/storageAccounts 'rajappdev' provisioning status is running  
12:  VERBOSE: 3:54:24 PM - Resource Microsoft.Storage/storageAccounts 'rajdbdev' provisioning status is running  
13:  VERBOSE: 4:04:03 PM - Resource Microsoft.Storage/storageAccounts 'rajappdev' provisioning status is succeeded  
14:  VERBOSE: 4:04:03 PM - Resource Microsoft.Storage/storageAccounts 'rajarchdev' provisioning status is succeeded  
15:  VERBOSE: 4:04:05 PM - Resource Microsoft.Storage/storageAccounts 'rajappdev' provisioning status is succeeded  
16:  VERBOSE: 4:04:13 PM - Resource Microsoft.Storage/storageAccounts 'rajdbdev' provisioning status is succeeded  
17:  VERBOSE: 4:04:13 PM - Resource Microsoft.Storage/storageAccounts 'rajwebdev' provisioning status is succeeded  
18:  VERBOSE: 4:04:13 PM - Resource Microsoft.Storage/storageAccounts 'rajdbdev' provisioning status is succeeded  
19:  VERBOSE: 4:04:13 PM - Resource Microsoft.Storage/storageAccounts 'rajwebdev' provisioning status is succeeded  
20:    
21:    
22:  DeploymentName  : stgdeployment  
23:  ResourceGroupName : ARM-Dev  
24:  ProvisioningState : Succeeded  
25:  Timestamp     : 8/14/2015 9:04:25 PM  
26:  Mode       : Incremental  
27:  TemplateLink   :  
28:  Parameters    :  
29:            Name       Type            Value  
30:            =============== ========================= ==========  
31:            storageAccountList Array           [  
32:             {  
33:              "name": "rajappdev",  
34:              "location": "Central US",  
35:              "storageAccountType": "Standard_LRS"  
36:             },  
37:             {  
38:              "name": "rajdbdev",  
39:              "location": "Central US",  
40:              "storageAccountType": "Standard_GRS"  
41:             },  
42:             {  
43:              "name": "rajwebdev",  
44:              "location": "Central US",  
45:              "storageAccountType": "Standard_ZRS"  
46:             },  
47:             {  
48:              "name": "rajarchdev",  
49:              "location": "West US",  
50:              "storageAccountType": "Premium_LRS"  
51:             }  
52:            ]  
53:    
54:  Outputs      :  
55:            Name       Type            Value  
56:            =============== ========================= ==========  
57:            stgobject1    Object           {  
58:             "provisioningState": "Succeeded",  
59:             "accountType": "Standard_LRS",  
60:             "primaryEndpoints": {  
61:              "blob": "https://rajappdev.blob.core.windows.net/",  
62:              "queue": "https://rajappdev.queue.core.windows.net/",  
63:              "table": "https://rajappdev.table.core.windows.net/"  
64:             },  
65:             "primaryLocation": "Central US",  
66:             "statusOfPrimary": "Available",  
67:             "creationTime": "2015-08-14T20:54:32.9062387Z"  
68:            }  
69:            stgobject2    Object           {  
70:             "provisioningState": "Succeeded",  
71:             "accountType": "Standard_GRS",  
72:             "primaryEndpoints": {  
73:              "blob": "https://rajdbdev.blob.core.windows.net/",  
74:              "queue": "https://rajdbdev.queue.core.windows.net/",  
75:              "table": "https://rajdbdev.table.core.windows.net/"  
76:             },  
77:             "primaryLocation": "Central US",  
78:             "statusOfPrimary": "Available",  
79:             "secondaryLocation": "East US 2",  
80:             "statusOfSecondary": "Available",  
81:             "creationTime": "2015-08-14T20:54:32.0468124Z"  
82:            }  
83:            stgobject3    Object           {  
84:             "provisioningState": "Succeeded",  
85:             "accountType": "Standard_ZRS",  
86:             "primaryEndpoints": {  
87:              "blob": "https://rajwebdev.blob.core.windows.net/"  
88:             },  
89:             "primaryLocation": "Central US",  
90:             "statusOfPrimary": "Available",  
91:             "creationTime": "2015-08-14T20:54:29.9062389Z"  
92:            }  

Cleanup

To remove all the resources you provisioned, you can use the Remove-AzureResourceGroup cmdlet as shown below:

  Remove-AzureResourceGroup -Name ARM-Dev  

Doggy Bag Please

You can access all the samples from my GitHub Repository here: https://github.com/rajinders/ArmExamples

Summary

I hope you found this sample helpful. I will post more samples on a regular basis.

Resources to learn Azure Resource Manager (ARM) Language

Azure Resource Manager (ARM) was announced in Spring 2014. It is a completely different way of deploying services on the Azure platform. It matters because before the release of ARM it was only possible to deploy one service at a time: when you deployed applications using PowerShell or the Azure CLI, you had to deploy all the services via a script, and as the number of services increased the scripts became increasingly complex and brittle. Over the past year ARM capabilities have evolved rapidly. All future services will be deployed via ARM cmdlets or templates, and the current Azure Service Management APIs will eventually be deprecated. Even when using ARM you have two choices:

  • Imperative: This is very similar to how you were using the Service Management APIs to provision services.
  • Declarative: Here you define the application configuration with a JSON template. This template can be parameterized. Once this is done, a single PowerShell cmdlet, New-AzureResourceGroupDeployment, deploys your entire application (a minimal invocation is sketched just after this list). The deployment can span regions as well. You can define dependencies between resources, and the deployment process deploys them in the order necessary to make the deployment successful; if there are no dependencies, it parallelizes the deployment. You can repeatedly deploy the same template, and the deployment process is smart enough to determine what changed and only deploy/update the services that changed. ARM templates can not only provision the infrastructure, they can also execute tasks inside the provisioned VMs to fully configure your application. On Windows VMs you can use either DSC or PowerShell scripts for this customization; on Linux you can use bash scripts to customize the VM after it has been created.
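
As a minimal sketch of that single-cmdlet deployment (the resource group, template, and parameter file names below are placeholders; the parameters mirror the deployment script shown in my earlier storage account post):

  # Deploy (or incrementally update) everything described in the template in one call
  New-AzureResourceGroupDeployment -Name "appdeployment" -ResourceGroupName "my-rg" `
      -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json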

AWS has had a similar capability for many years, called CloudFormation. While ARM and CloudFormation are similar and are trying to achieve similar goals, there are some differences between them as well.

Resources

If you believe in DevOps and work with the Microsoft Azure platform, understanding ARM will be beneficial. Another thing worth mentioning is that ARM templates will also allow you to deploy services in your private cloud when Azure Stack is released. I want to share some helpful resources to make it easier for you to learn ARM.

  1. “Treat your Azure Infrastructure as code” is an excellent overview of ARM and its benefits: https://www.linkedin.com/pulse/treat-your-azure-infrastructure-code-krishna-venkataraman?trk=prof-post
  2. ARM Language Reference: https://msdn.microsoft.com/en-us/library/azure/Dn835138.aspx?f=255&MSPPError=-2147217396
  3. Azure Quick Start Templates at Github: If you are like me you learn from examples. Here is a large repository of ARM templates. https://github.com/Azure/azure-quickstart-templates
  4. Ryan Jones from Microsoft posted many simple ARM samples here: https://github.com/rjmax/ArmExamples
  5. Full Scale 180 blog is another excellent resource to learn how to write ARM templates.  http://blog.fullscale180.com/building-azure-resource-manager-templates/   I especially like the Couchbase Sample: https://github.com/Azure/azure-quickstart-templates/tree/master/couchbase-on-ubuntu
  6. If you still want to use the imperative method of deploying Azure resources, check out this sample from Joe Davies that walks you through the process of provisioning a VM: https://azure.microsoft.com/blog/2015/06/11/step-through-creating-resource-manager-virtual-machine-powershell/
  7. Here is a sample showing how to lock down your resources with Resource Manager Lock. http://blogs.msdn.com/b/cloud_solution_architect/archive/2015/06/18/lock-down-your-azure-resources.aspx
  8. Neil Mackenzie posted a sample for creating a VM with an instance IP address here: https://gist.github.com/nmackenzie/db9a4b7abdee2760dba8
  9. Alexandre Brisebois posted a sample showing how to provision a CentOS VM using an ARM template. In this example he shows how to customize the VM after its creation using a bash script. https://alexandrebrisebois.wordpress.com/2015/05/25/create-a-centos-virtual-machine-using-azure-resource-manager-arm/
  10. Kloud Blog has a nice overview of how to get started with ARM and many samples: http://blog.kloud.com.au/tag/azure-resource-manager/
  11. If you want to learn about best practices for writing ARM templates, this is a must-read document: https://azure.microsoft.com/en-us/documentation/articles/best-practices-resource-manager-design-templates/
  12. This blog post shows how you can use the outputs section of the template to publish information about newly created resources: http://blogs.msdn.com/b/girishp/archive/2015/06/16/azure-arm-templates-tips-on-using-outputs.aspx
  13. Check out this list of resources for ARM by Hans Vredevoort. It is very comprehensive. https://onedrive.live.com/view.aspx?resid=96BA3346350A5309!318670&app=OneNote&authkey=!APNWE3DZp1C-RjY
  14. This blog post shows how you can use arrays, the length function, resource loops, and outputs to provision multiple storage accounts: http://www.rajinders.com/2015/08/14/adventures-with-azure-resource-manager-part-i/

 

Samples

As I work with ARM templates I am constantly developing or looking for samples that can help me. These sample templates were created by product teams in Microsoft but have not been integrated into Quick Start templates yet. I will use this section to document some of the helpful samples I have found.

  1. Azure Web Site with a Web Job Template: This template was created by David Ebbo. This is the only ARM template sample that shows you how to publish webjobs with an ARM template. https://github.com/davidebbo/AzureWebsitesSamples/blob/master/ARMTemplates/WebAppWithWebJobs.json
  2. Length function: As I began learning the template language, I found it annoying that I had to pass in an array and its length as separate parameters. I then found a sample created by Ryan Jones which shows how to calculate the length of an array: https://github.com/rjmax/ArmExamples/blob/master/copySampleWithLength.json

Tools

ARM documentation is still evolving and sometimes it is difficult to find the samples you are looking for. If you are trying to create a new template and you cannot find any documentation, here are a few things that may be helpful:

  1. Azure Resource Explorer: This is an essential tool for anybody writing ARM templates. You can deploy a resource using the portal and then use the resource explorer to see the JSON schema for the resource you just created. You can also make changes to the resources: https://resources.azure.com/
  2. ARM Schemas: This is the location where MSFT ARM teams are posting their schemas. https://github.com/Azure/azure-resource-manager-schemas

Debugging

You can view the logs using these PowerShell cmdlets.

  1. Get-AzureResourceLog: Gets logs for a specific Azure resource
  2. Get-AzureResourceGroupLog: Gets logs for an Azure resource group
  3. Get-AzureResourceProviderLog: Gets logs for an Azure resource provider
  4. Get-AzureResourceGroupDeploymentOperation: Gets logs for a deployment operation

When your template deployment operation fails, the error message may not have enough detail to tell you the reason for the failure. You can go to the preview Azure portal and examine the audit logs; you can filter by resource group, resource type, and time range. I was able to get a detailed error message from the portal.

Surprises

In addition to running the cmdlet Switch-AzureMode -Name AzureResourceManager, I also had to enable my subscription for specific Azure resource providers. When I was using the Service Management APIs this was not necessary. As an example, to be able to provision virtual networks with ARM I had to run the following cmdlet:

Register-AzureProvider -ProviderNamespace Microsoft.Network

Update (08/04/2015): I previously noted here that the template language could not determine the number of elements in a JSON array, so the count had to be passed in separately. I have removed that statement because the length function is now available.

I hope these resources are helpful. If you are aware of other helpful ARM resources feel free to mention them in comments on this  blog post and I can add them to my list.

I will be posting ARM samples on my blog as well.

Installing Java Runtime in Azure Cloud Services with Chocolatey

I recently wrote a blog post about installing Splunk on Azure web/worker roles with the help of a startup task; you can see that blog post here. In this blog post I will show you how to install the Java runtime in web/worker roles. Azure web/worker roles are stateless, so the only way to install third-party software or tweak Windows features on them is via startup tasks.

Linux users have long had the benefit of tools like apt and yum to download and install software via the command line. Chocolatey provides similar functionality on the Windows platform. If you are into DevOps and automation on the Windows platform, you should check out Chocolatey here. It has nearly 15000 packages already available.

Once you have Chocolatey installed, installing Java is a breeze. It is as simple as:

 choco install javaruntime -y  

The statement above is self-explanatory. The -y option answers yes to all questions, including accepting the license, so you are not prompted for anything.
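
If you want to try the same commands interactively (for example, on a throwaway dev VM) before baking them into a startup task, a quick sketch from an elevated PowerShell prompt looks like this:

  # Install Chocolatey (same one-liner the startup task uses), then install the Java runtime
  Set-ExecutionPolicy Bypass -Scope Process -Force
  iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
  choco install javaruntime -y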

I already provided detailed steps to define startup tasks in my previous blog post, so I will just share the startup script along with the service definition file that shows how to deploy the Java runtime in an Azure web/worker role with a startup task.

Step 1

Create a startup.cmd file and add it to your worker/web role implementation. It should be saved as “Unicode (UTF-8 without signature) – Codepage 65001”.

Set the “copy to output directory” property of startup.cmd to “copy if newer”

Line 9 checks whether the startup task has already run successfully and exits if it has

Line 16 installs Chocolatey

Line 22 installs the Java runtime

Line 26 executes only if Java was installed successfully; it creates the StartupComplete.txt file in the %RoleRoot% directory.

1:  SET LogPath=%LogFileDirectory%%LogFileName%  
2:     
3:  ECHO Current Role: %RoleName% >> "%LogPath%" 2>&1  
4:  ECHO Current Role Instance: %InstanceId% >> "%LogPath%" 2>&1  
5:  ECHO Current Directory: %CD% >> "%LogPath%" 2>&1  
6:     
7:  ECHO We will first verify if startup has been executed before by checking %RoleRoot%\StartupComplete.txt. >> "%LogPath%" 2>&1  
8:     
9:  IF EXIST "%RoleRoot%\StartupComplete.txt" (  
10:    ECHO Startup has already run, skipping. >> "%LogPath%" 2>&1  
11:    EXIT /B 0  
12:  )  
13:    
14:  Echo Installing Chocolatey >> "%LogPath%" 2>&1  
15:    
16:  @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin  >> "%LogPath%" 2>&1  
17:    
18:  IF %ERRORLEVEL% EQU 0 (  
19:    
20:       Echo Installing Java runtime >> "%LogPath%" 2>&1  
21:    
22:       %ALLUSERSPROFILE%\chocolatey\bin\choco install javaruntime -y >> "%LogPath%" 2>&1  
23:    
24:       IF NOT ERRORLEVEL 1 (            
25:                 ECHO Java installed. Startup completed. >> "%LogPath%" 2>&1  
26:                 ECHO Startup completed. >> "%RoleRoot%\StartupComplete.txt" 2>&1  
27:                 EXIT /B 0  
28:       ) ELSE (  
29:            ECHO An error occurred. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1  
30:            EXIT %ERRORLEVEL%  
31:       )  
32:  ) ELSE (  
33:    ECHO An error occurred while installing Chocolatey. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1  
34:    EXIT %ERRORLEVEL%  
35:  )  
36:    

 

Step 2

Update the service definition file to define the startup task.

Lines 5 through 19 define the startup task.

1:  <?xml version="1.0" encoding="utf-8"?>  
2:  <ServiceDefinition name="AzureJavaPaaS" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">  
3:   <WorkerRole name="MyWorkerRole" vmsize="Small">  
4:    <Startup>  
5:     <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple">  
6:      <Environment>  
7:       <Variable name="LogFileName" value="Startup.log" />  
8:       <Variable name="LogFileDirectory">  
9:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='LogsPath']/@path" />  
10:       </Variable>  
11:       <Variable name="InstanceId">  
12:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@id" />  
13:       </Variable>  
14:       <Variable name="RoleName">  
15:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@roleName" />  
16:       </Variable>  
17:      </Environment>  
18:     </Task>  
19:    </Startup>  
20:    <ConfigurationSettings>  
21:     <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />  
22:    </ConfigurationSettings>  
23:    <LocalResources>  
24:     <LocalStorage name="LogsPath" cleanOnRoleRecycle="false" sizeInMB="1024" />  
25:    </LocalResources>  
26:    <Imports>  
27:     <Import moduleName="RemoteAccess" />  
28:     <Import moduleName="RemoteForwarder" />  
29:    </Imports>  
30:   </WorkerRole>  
31:  </ServiceDefinition>  

 

Step 3

Publish the cloud service to Azure. I enabled Remote Desktop to be able to verify that the worker role was configured successfully.

Verification

I used Remote Desktop to log into the worker role. I looked in

C:\Resources\Directory\d063631e14c1485cb6c838c8f92cd7c3.MyWorkerRole.LogsPath and found Startup.log.

It had the following content. As you can see below, Java was installed successfully.

Current Role: MyWorkerRole
Current Role Instance: MyWorkerRole_IN_0
Current Directory: E:\approot
We will first verify if startup has been executed before by checking E:\StartupComplete.txt.
Installing Chocolatey
Installing Java runtime
Chocolatey v0.9.9.8
Installing the following packages:
javaruntime
By installing you accept licenses for the packages.

jre8 v8.0.45
 Downloading jre8 32 bit
  from 'http://javadl.sun.com/webapps/download/AutoDL?BundleId=106246'
 Installing jre8...
 jre8 has been installed.
 Downloading jre8 64 bit
  from 'http://javadl.sun.com/webapps/download/AutoDL?BundleId=106248'
 Installing jre8...
 jre8 has been installed.
 PATH environment variable does not have D:\Program Files\Java\jre1.8.0_45\bin in it. Adding...
 The install of jre8 was successful.

javaruntime v8.0.40
 The install of javaruntime was successful.

Chocolatey installed 2/2 package(s). 0 package(s) failed.
 See the log for details (D:\ProgramData\chocolatey\logs\chocolatey.log).
Java installed. Startup completed.

I also verified that the E:\StartupComplete.txt file was created.

I verified that Java was installed in the D:\Sun\Java directory.

You can get the source code for this entire project from my GitHub Repository https://github.com/rajinders/azure-java-paas.

How to migrate from Standard Azure Virtual Machines to DS Series Storage Optimized VM’s

Background

We are implementing Azure solutions for a few clients. Most of our clients use cloud services and virtual machines to implement their solutions on the Azure platform. For many years the Azure platform offered just one performance tier for storage. You can see the sizes of virtual machines and cloud services and the disk performance they offer here:

https://msdn.microsoft.com/en-us/library/azure/dn197896.aspx

For standard Azure virtual machines, each disk is limited to 500 IOPS. If you needed better performance, you had to use disk striping across multiple disks. The number of disks you can add to an Azure virtual machine is constrained by the size of the VM: one core allows you to add 2 VHDs, and each VHD is a page blob with a maximum size of 1 TB. When we were deploying packaged software or custom applications with high IOPS requirements, it was challenging to meet the needs of our customers. All this changed with the following announcement by Mark Russinovich, where he announced the general availability of Azure Premium Storage.

http://azure.microsoft.com/blog/2015/04/16/azure-premium-storage-now-generally-available-2/

Azure Premium Storage offers durable SSD storage. Along with premium storage, Microsoft also released storage-optimized virtual machines called DS Series VMs. These are capable of achieving up to 64,000 IOPS and 524 MB/sec, which enables scenarios like NoSQL or even large SQL databases that need higher IOPS than the standard Azure virtual machines offer. You can read about the specifications for DS Series VMs in the link posted above. If you are using a standard Azure VM, you can easily scale up or down to another standard size using the portal, PowerShell, or the Azure CLI. Unfortunately, it is currently not possible to simply resize a standard Azure virtual machine into a DS Series virtual machine with premium storage. In this blog post I will show you how you can migrate an existing virtual machine to a DS Series virtual machine with premium (durable SSD) storage, and I will provide a PowerShell script you can leverage for the migration.

Details

Creating Premium Storage Account

A premium storage account is different from a standard storage account. If you want to leverage premium storage you need to create a new storage account in the Azure preview portal. The account type you need to select is "Premium Locally Redundant".

It is not possible to use the existing Azure management portal to provision a premium storage account.

New Storage Account

Here is how you can use the PowerShell cmdlets to create a premium storage account. As you can see, it is similar to how you create standard storage accounts. I was unable to find what value I had to specify for Type, and I had to read the actual source code to determine that it was 'Premium_LRS'.

$StorageAccountTypePremium = 'Premium_LRS'

$DestStorageAccount = New-AzureStorageAccount -StorageAccountName $DestStorageAccountName -Location $Location -Type $StorageAccountTypePremium -ErrorVariable errorVariable -ErrorAction SilentlyContinue | Out-Null

if (!($?)) 
{ 
    throw "Cannot create the Storage Account [$DestStorageAccountName] on $Location. Error Detail: $errorVariable" 
}

 

Premium storage and DS Series virtual machines are not available in all regions. The complete script I will provide validates your location preference and fails if you specify a location where premium storage and DS Series VMs are not available.

Creating a DS Series virtual machine is identical to creating a standard virtual machine; only the instance size changes. A minimal sketch follows.
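This is a hedged example rather than part of the migration script: the image family, names, credentials and service are placeholders, and the subscription's current storage account is assumed to already be a premium storage account so that the OS disk lands on premium storage.

# Hypothetical example: provisioning a DS Series VM with the classic cmdlets.
# Only the -InstanceSize (and the storage account backing the disks) differs from a standard VM.
# Assumes Set-AzureSubscription -CurrentStorageAccountName points at the premium storage account.
$image = (Get-AzureVMImage |
          Where-Object { $_.ImageFamily -eq "Windows Server 2012 R2 Datacenter" } |
          Sort-Object PublishedDate -Descending |
          Select-Object -First 1).ImageName

New-AzureVMConfig -Name "mydsvm" -InstanceSize "Standard_DS2" -ImageName $image |
    Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "<your password>" |
    New-AzureVM -ServiceName "mydsvmsvc" -Location "West US"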

Here are a few things I learned about DS Series virtual machines and premium storage:

  • Premium storage does not allow us to add disks smaller than 10 GB. If your VM has a disk smaller than 10 GB the script skips that disk with a warning.
  • The default host caching option for premium storage data disks is "Read Only", compared with "None" for standard data disks (see the host caching sketch after this list).
  • The default host caching option for the premium storage OS disk is "Read Write", which is the same as for standard OS disks.
  • Currently this script only migrates virtual machines within the same subscription. It can be easily extended to support migration to a different subscription.
  • It can migrate VMs to a different region as long as premium storage is available in that region.
  • It shuts down the existing source VM before making a copy of the VHDs for the virtual machine.
  • It validates that the virtual network for the destination VM exists, but does not validate that the subnet exists.
  • It gives new names to the disks in the destination virtual machine.
  • Currently I am only copying disks, endpoints, and VM extensions. I am not copying ACLs or other types of extensions such as the antimalware extension.
  • I only tested the script with Azure PowerShell version 0.9.2.
  • I tested migrating a standard VM in West US to a DS Series VM in West US only. I logged into the newly created VM and verified that all disks were present. This is the extent of my testing. My VM with 3 disks copied in 10 minutes.
  • If your destination storage account already exists it has to be of type "Premium_LRS". If you have an existing account of a different type the script will fail. If the storage account does not exist it will be created.
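Here is the host caching sketch referenced above; it is illustrative only (service name, VM name and LUN are placeholders) and shows how the defaults can be overridden on an existing VM with the classic cmdlets.

# Hypothetical example: override host caching on a DS Series VM's disks.
$vm = Get-AzureVM -ServiceName "mydsvmsvc" -Name "mydsvm"

$vm | Set-AzureOSDisk -HostCaching "ReadWrite" |
      Set-AzureDataDisk -HostCaching "ReadOnly" -LUN 0 |
      Update-AzureVM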

Sample Script

You can access the entire source code from my public GitHub repository

https://github.com/rajinders/migrate-to-azuredsvm

I have also pasted the entire source code here for your convenience.

<#
Copyright 2015 Rajinder Singh
 
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
 
    http://www.apache.org/licenses/LICENSE-2.0
 
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
#>

<#
.SYNOPSIS
Migrates an existing VM into a DS Series VM which uses Premium storage.
 
.DESCRIPTION
This script migrates an existing VM into a DS Series VM which uses Premium Storage. At this time DS Series VMs are not available in all regions.
It currently expects the VM to be migrated in the same subscription. It supports migrating VM to the same region or a different region.
It can be easily extended to support migrating to a different subscription as well
 
.PARAMETER SourceVMName
The name of the VM that needs to be migrated
 
.PARAMETER SourceServiceName
The name of service for the old VM
 
.PARAMETER DestVMName
The name of New DS Series VM that will be created.
 
.PARAMETER DestServiceName
The name of the Service for the new VM
 
.PARAMETER Location
Region where new VM will be created
 
.PARAMETER VMSize
Size of the new VM
 
.PARAMETER DestStorageAccountName
Name of the storage account where the VM will be created. It has to be a premium storage account
 
.PARAMETER DestStorageAccountContainer
Name of the container in the destination storage account where the VHDs will be copied
 
.EXAMPLE
 
# Migrate a standalone virtual machine to a DS Series virtual machine with Premium storage. Both VMs are in the same subscription
.\MigrateVMToPremiumStorage.ps1 -SourceVMName "rajsourcevm2" -SourceServiceName "rajsourcevm2" -DestVMName "rajdsvm12" -DestServiceName "rajdsvm12svc" -Location "West US" -VMSize Standard_DS2 -DestStorageAccountName 'rajwestpremstg18' -DestStorageAccountContainer 'vhds'
 
 
# Migrate a standalone virtual machine to a DS Series virtual machine with Premium storage and attach it to a virtual network. Both VMs are in the same subscription
.\MigrateVMToPremiumStorage.ps1 -SourceVMName "rajsourcevm2" -SourceServiceName "rajsourcevm2" -DestVMName "rajdsvm16" -DestServiceName "rajdsvm16svc" -Location "West US" -VMSize Standard_DS2 -DestStorageAccountName 'rajwestpremstg19' -DestStorageAccountContainer 'vhds' -VNetName rajvnettest3 -SubnetName FrontEndSubnet
 
#>

[CmdletBinding(DefaultParameterSetName="Default")]
Param
(
    [Parameter (Mandatory = $true)]
    [string] $SourceVMName,

    [Parameter (Mandatory = $true)]
    [string] $SourceServiceName,

    [Parameter (Mandatory = $true)]
    [string] $DestVMName,

    [Parameter (Mandatory = $true)]
    [string] $DestServiceName,

    [Parameter (Mandatory = $true)]
    [ValidateSet('West US','East US 2','West Europe','East China','Southeast Asia','West Japan', ignorecase=$true)]
    [string] $Location,

    [Parameter (Mandatory = $true)]
    [ValidateSet('Standard_DS1','Standard_DS2','Standard_DS3','Standard_DS4','Standard_DS11','Standard_DS12','Standard_DS13','Standard_DS14', ignorecase=$true)]
    [string] $VMSize,

    [Parameter (Mandatory = $true)]
    [string] $DestStorageAccountName,

    [Parameter (Mandatory = $true)]
    [string] $DestStorageAccountContainer,

    [Parameter (Mandatory = $false)]
    [string] $VNetName,

    [Parameter (Mandatory = $false)]
    [string] $SubnetName
)

#print the version of the Azure PowerShell cmdlets we are using
(Get-Module Azure).Version

#$VerbosePreference = "Continue"
$StorageAccountTypePremium = 'Premium_LRS'

#############################################################################################################
#validation section
#Perform as much upfront validation as possible
#############################################################################################################

#validate upfront that the destination service does not already exist
if((Get-AzureService -ServiceName $DestServiceName -ErrorAction SilentlyContinue) -ne $null)
{
    Write-Error "Service [$DestServiceName] already exists"
    return
}

#Determine whether we are migrating the VM to a virtual network. If so, verify that the VNet exists
if( !$VNetName -and !$SubnetName )
{
    $DeployToVNet = $false
}
else
{
    $DeployToVNet = $true
    $vnetSite = Get-AzureVNetSite -VNetName $VNetName -ErrorAction SilentlyContinue

    if (!$vnetSite)
    {
        Write-Error "Virtual Network [$VNetName] does not exist"
        return
    }
}

Write-Host "DeployToVNet is set to [$DeployToVnet]"

#TODO: add validation to make sure the destination VM size can accommodate the number of disks in the source VM

$DestStorageAccount = Get-AzureStorageAccount -StorageAccountName $DestStorageAccountName -ErrorAction SilentlyContinue

#check to see if the storage account exists and create a premium storage account if it does not exist
if(!$DestStorageAccount)
{
    # Create a new storage account
    Write-Output "";
    Write-Output ("Configuring Destination Storage Account {0} in location {1}" -f $DestStorageAccountName, $Location);

    $DestStorageAccount = New-AzureStorageAccount -StorageAccountName $DestStorageAccountName -Location $Location -Type $StorageAccountTypePremium -ErrorVariable errorVariable -ErrorAction SilentlyContinue | Out-Null

    if (!($?)) 
    { 
        throw "Cannot create the Storage Account [$DestStorageAccountName] on $Location. Error Detail: $errorVariable" 
    } 
   
    Write-Verbose "Created Destination Storage Account [$DestStorageAccountName] with AccountType of [$($DestStorageAccount.AccountType)]"
}
else
{
    Write-Host "Destination Storage account [$DestStorageAccountName] already exists. Storage account type is [$($DestStorageAccount.AccountType)]"

    #if the account already exists, make sure it is a premium storage account
    if( $DestStorageAccount.AccountType -ne $StorageAccountTypePremium )
    {
        Write-Error "Storage account [$DestStorageAccountName] account type of [$($DestStorageAccount.AccountType)] is invalid"
        return
    }
}

Write-Host "Source VM Name is [$SourceVMName] and Service Name is [$SourceServiceName]"

#Get VM Details
$SourceVM = Get-AzureVM -Name $SourceVMName -ServiceName $SourceServiceName -ErrorAction SilentlyContinue

if($SourceVM -eq $null)
{
    Write-Error "Unable to find Virtual Machine [$SourceVMName] in Service Name [$SourceServiceName]"
    return
}

Write-Host "vm name is [$($SourceVM.Name)] and vm status is [$($SourceVM.Status)]"

#need to shutdown the existing VM before copying its disks.
if($SourceVM.Status -eq "ReadyRole")
{
    Write-Host "Shutting down virtual machine [$SourceVMName]"
    #Shutdown the VM
    Stop-AzureVM -ServiceName $SourceServiceName -Name $SourceVMName -Force
}

$osdisk = $SourceVM | Get-AzureOSDisk

Write-Host "OS Disk name is $($osdisk.DiskName) and disk location is $($osdisk.MediaLink)"

$disk_configs = @{}

# Used to track disk copy status
$diskCopyStates = @()

##################################################################################################################
# Kicks off the async copy of VHDs
##################################################################################################################

# Copies to remote storage account
# Returns blob copy state to poll against
function StartCopyVHD($sourceDiskUri, $diskName, $OS, $destStorageAccountName, $destContainer)
{
    Write-Host "Destination Storage Account is [$destStorageAccountName], Destination Container is [$destContainer]"

    #extract the name of the source storage account from the URI of the VHD
    $sourceStorageAccountName = $sourceDiskUri.Host.Replace(".blob.core.windows.net", "")


    $vhdName = $sourceDiskUri.Segments[$sourceDiskUri.Segments.Length - 1].Replace("%20", " ") 
    $sourceContainer = $sourceDiskUri.Segments[$sourceDiskUri.Segments.Length - 2].Replace("/", "")

    $sourceStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $sourceStorageAccountName).Primary
    $sourceContext = New-AzureStorageContext -StorageAccountName $sourceStorageAccountName -StorageAccountKey $sourceStorageAccountKey

    $destStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $destStorageAccountName).Primary
    $destContext = New-AzureStorageContext -StorageAccountName $destStorageAccountName -StorageAccountKey $destStorageAccountKey
    if((Get-AzureStorageContainer -Name $destContainer -Context $destContext -ErrorAction SilentlyContinue) -eq $null)
    {
        New-AzureStorageContainer -Name $destContainer -Context $destContext | Out-Null

        while((Get-AzureStorageContainer -Name $destContainer -Context $destContext -ErrorAction SilentlyContinue) -eq $null)
        {
            Write-Host "Pausing to ensure container $destContainer is created.." -ForegroundColor Green
            Start-Sleep 15
        }
    }

    # Save for later disk registration
    $destinationUri = "https://$destStorageAccountName.blob.core.windows.net/$destContainer/$vhdName"
   
    if($OS -eq $null)
    {
        $disk_configs.Add($diskName, "$destinationUri")
    }
    else
    {
       $disk_configs.Add($diskName, "$destinationUri;$OS")
    }

    #start async copy of the VHD. It will overwrite any existing VHD
    $copyState = Start-AzureStorageBlobCopy -SrcBlob $vhdName -SrcContainer $sourceContainer -SrcContext $sourceContext -DestContainer $destContainer -DestBlob $vhdName -DestContext $destContext -Force

    return $copyState
}

##################################################################################################################
# Tracks status of each blob copy and waits until all the blobs have been copied
##################################################################################################################

function TrackBlobCopyStatus()
{
    param($diskCopyStates)
    do
    {
        $copyComplete = $true
        Write-Host "Checking Disk Copy Status for VM Copy" -ForegroundColor Green
        foreach($diskCopy in $diskCopyStates)
        {
            $state = ($diskCopy | Get-AzureStorageBlobCopyState).Status
            if($state -ne "Success")
            {
                $copyComplete = $true
                Write-Host "Current Status" -ForegroundColor Green
                $hideHeader = $false
                $inprogress = 0
                $complete = 0
                foreach($diskCopyTmp in $diskCopyStates)
                { 
                    $stateTmp = $diskCopyTmp | Get-AzureStorageBlobCopyState
                    $source = $stateTmp.Source
                    if($stateTmp.Status -eq "Success")
                    {
                        Write-Host (($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)) -ForegroundColor Green
                        $complete++
                    }
                    elseif(($stateTmp.Status -like "*failed*") -or ($stateTmp.Status -like "*aborted*"))
                    {
                        Write-Error ($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)
                        return $false
                    }
                    else
                    {
                        Write-Host (($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)) -ForegroundColor DarkYellow
                        $copyComplete = $false
                        $inprogress++
                    }
                    $hideHeader = $true
                }
                if($copyComplete -eq $false)
                {
                    Write-Host "$complete blob copies are complete, with $inprogress still in progress." -ForegroundColor Magenta
                    Write-Host "Pausing 60 seconds before the next status check." -ForegroundColor Green 
                    Start-Sleep 60
                }
                else
                {
                    Write-Host "Disk Copy Complete" -ForegroundColor Green
                    break 
                }
            }
        }
    } while($copyComplete -ne $true) 
    Write-Host "Successfully copied all disks" -ForegroundColor Green
}

# Mark the start time of the script execution
$startTime = Get-Date 

Write-Host "Destination storage account name is [$DestStorageAccountName]"

# Copy disks using the async API from the source URL to the destination storage account
$diskCopyStates += StartCopyVHD -sourceDiskUri $osdisk.MediaLink -destStorageAccount $DestStorageAccountName -destContainer $DestStorageAccountContainer -diskName $osdisk.DiskName -OS $osdisk.OS

# copy all the data disks
$SourceVM | Get-AzureDataDisk | foreach {

    Write-Host "Disk Name [$($_.DiskName)], Size is [$($_.LogicalDiskSizeInGB)]"

    #Premium storage does not allow disks smaller than 10 GB
    if( $_.LogicalDiskSizeInGB -lt 10 )
    {
        Write-Warning "Data Disk [$($_.DiskName)] with size [$($_.LogicalDiskSizeInGB)] is less than 10GB so it cannot be added" 
    }
    else
    {
        Write-Host "Destination storage account name is [$DestStorageAccountName]"
        $diskCopyStates += StartCopyVHD -sourceDiskUri $_.MediaLink -destStorageAccount $DestStorageAccountName -destContainer $DestStorageAccountContainer -diskName $_.DiskName
    }
}

#check the status of the blob copies. This may take a while if you are doing cross region copies.
#even in the same region a 127 GB disk takes nearly 10 minutes
TrackBlobCopyStatus -diskCopyStates $diskCopyStates

# Mark the finish time of the script execution
$finishTime = Get-Date 
 
# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds 
Write-Host "The disk copies completed in $TotalTime seconds." -ForegroundColor Green

Write-Host "Registering Copied Disk" -ForegroundColor Green

$luncount = 0   # used to generate unique lun value for data disks
$index = 0  # used to generate unique disk names
$OSDisk = $null

$datadisk_details = @{}

foreach($diskName in $disk_configs.Keys)
{
    $index = $index + 1

    $diskConfig = $disk_configs[$diskName].Split(";")

    #since we are using the same subscription we need to update the diskName for it to be unique
    $newDiskName = "$DestVMName" + "-disk-" + $index

    Write-Host "Adding disk [$newDiskName]"

    #check to see if this disk already exists
    $azureDisk = Get-AzureDisk -DiskName $newDiskName -ErrorAction SilentlyContinue

    if(!$azureDisk)
    {

        if($diskConfig.Length -gt 1)
        {
           Write-Host "Adding OS disk [$newDiskName] -OS [$($diskConfig[1])] -MediaLocation [$($diskConfig[0])]"

           #Expect OS Disk to be the first disk in the array
           $OSDisk = Add-AzureDisk -DiskName $newDiskName -OS $diskConfig[1] -MediaLocation $diskConfig[0]

           $vmconfig = New-AzureVMConfig -Name $DestVMName -InstanceSize $VMSize -DiskName $OSDisk.DiskName 

        }
        else
        {
            Write-Host "Adding Data disk [$newDiskName] -MediaLocation [$($diskConfig[0])]"

            Add-AzureDisk -DiskName $newDiskName -MediaLocation $diskConfig[0]

            $datadisk_details[$luncount] = $newDiskName

            $luncount = $luncount + 1  
        }
    }
    else
    {
        Write-Error "Unable to add Azure Disk [$newDiskName] as it already exists"
        Write-Error "You can use Remove-AzureDisk -DiskName $newDiskName to remove the old disk"
        return
    }
}

#add all the data disks to the VM configuration
foreach($lun in $datadisk_details.Keys)
{
    $datadisk_name = $datadisk_details[$lun]

    Write-Host "Adding data disk [$datadisk_name] to the VM configuration"

    $vmconfig | Add-AzureDataDisk -Import -DiskName $datadisk_name  -LUN $lun
}

#read all the end points in the source VM and create them in the destination VM
#NOTE: I don't copy ACLs yet. I need to add this.
$SourceVM | get-azureendpoint | foreach {

    if($_.LBSetName -eq $null)
    {
        write-Host "Name is [$($_.Name)], Port is [$($_.Port)], LocalPort is [$($_.LocalPort)], Protocol is [$($_.Protocol)], EnableDirectServerReturn is [$($_.EnableDirectServerReturn)]"
        $vmconfig | Add-AzureEndpoint -Name $_.Name -LocalPort $_.LocalPort -PublicPort $_.Port -Protocol $_.Protocol -DirectServerReturn $_.EnableDirectServerReturn
    }
    else
    {
        write-Host "Name is [$($_.Name)], Port is [$($_.Port)], LocalPort is [$($_.LocalPort)], Protocol is [$($_.Protocol)], EnableDirectServerReturn is [$($_.EnableDirectServerReturn)], LBSetName is [$($_.LBSetName)]"
        $vmconfig | Add-AzureEndpoint -Name $_.Name -LocalPort $_.LocalPort -PublicPort $_.Port -Protocol $_.Protocol -DirectServerReturn $_.EnableDirectServerReturn -LBSetName $_.LBSetName -DefaultProbe
    }
}

#
if( $DeployToVnet )
{
    Write-Host "Virtual Network Name is [$VNetName] and Subnet Name is [$SubnetName]" 

    $vmconfig | Set-AzureSubnet -SubnetNames $SubnetName
    $vmconfig | New-AzureVM -ServiceName $DestServiceName -VNetName $VNetName -Location $Location
}
else
{
    #Creating the virtual machine
    $vmconfig | New-AzureVM -ServiceName $DestServiceName -Location $Location
}

#get any vm extensions
#there may be other types of extensions that may be in the source VM. I don't copy them yet
$SourceVM | get-azurevmextension | foreach {
    Write-Host "ExtensionName [$($_.ExtensionName)] Publisher [$($_.Publisher)] Version [$($_.Version)] ReferenceName [$($_.ReferenceName)] State [$($_.State)] RoleName [$($_.RoleName)]"
    get-azurevm -ServiceName $DestServiceName -Name $DestVMName -Verbose | set-azurevmextension -ExtensionName $_.ExtensionName -Publisher $_.Publisher -Version $_.Version -ReferenceName $_.ReferenceName -Verbose | Update-azurevm -Verbose
}

 

Conclusion

I had to look at many different code samples as well as MSDN documentation to create this script. I am grateful for all the open source samples folks are contributing, and this is my way of giving back to the Azure community. If you have questions and/or feature requests, drop me a line and I will do what I can to help.

Azure SDK 2.6 Diagnostics Improvements for Cloud Services

I haven't blogged for a while because I have been very busy at work. Things are slowing down a bit, so I will try to write more frequently.

History

Azure SDK 2.5 made big changes to Azure diagnostics. It introduced the Azure PaaS diagnostics extension. Even though this was a good long-term strategy, the implementation was less than perfect. Here are a few issues that were introduced as a result of Azure SDK 2.5:

  1. The local emulator did not support diagnostics
  2. No support for using different diagnostics storage accounts for different environments
  3. Manual editing was required to create the XML configuration file needed by Set-AzureServiceDiagnosticsExtension, the PowerShell cmdlet required to deploy the diagnostics extension
  4. To make matters worse, there was a bug in the PowerShell cmdlet which surfaced when you had a . in the name of a role

All these factors made it impossible to do continuous integration/deployment for Cloud service projects.

A few days ago Azure SDK 2.6 was released. I went through the release notes and read up on the documentation. I ran tests to see if sanity had been restored. I am glad to report that all the issues introduced by SDK 2.5 have been fixed. Here is a summary of the improvements.

  1. The local emulator now supports diagnostics.
  2. You can specify a different diagnostics storage account for each service configuration.
  3. To simplify configuration of the PaaS diagnostics extension, the package output from Visual Studio contains the public configuration XML for the diagnostics extension for each role.
  4. Azure PowerShell version 0.9.0, which was released along with Azure SDK 2.6, also fixed the pesky bug that surfaced when you had a . in the name of a role.

Here is a document that provides all the gory details for Azure SDK 2.6 diagnostics changes.

https://msdn.microsoft.com/en-us/library/azure/dn186185.aspx

Overview

If you are developing applications and still not using continuous integration and continuous deployment, you should learn more about them. I will use the rest of this blog post to show how you can use PowerShell cmdlets to automate the installation and updating of the PaaS diagnostics extension for cloud services built using Azure SDK 2.6.

Details

I installed Azure SDK 2.6 on my development machine. I installed the Azure PowerShell cmdlets (version 0.9.0) and the Azure CLI as well.

I created a simple Cloud Service Project. I added a web role and a worker role to it.

I added one more service configuration called "Test" to this project.


I examined the properties of WebRole1 to see what had changed with SDK 2.6.

If you select "All Configurations" you can still enable/disable diagnostics like you used to do in SDK 2.5.


When I clicked the "Configure" button to configure the diagnostics, I found that we no longer have to select the diagnostics storage account in the "General" tab like we used to. The rest of the configuration is the same.


Returning to the configuration of WebRole1, I changed the service configuration to "Cloud".

In the past there was no way to configure a diagnostics storage account per configuration type.

But now we can define a different diagnostics storage account for each configuration type.


A quick examination of ServiceConfiguration.Cloud.cscfg confirmed that the diagnostics connection string was defined in it.

This makes a lot of sense because the rest of the environment-specific configuration settings are also defined in the same file.


I did not want to deploy this project directly from Visual Studio because most build servers do not use Visual Studio to deploy applications.

First I created a deployment package by selecting the cloud project and choosing Package.


I selected the "Cloud" service configuration and pressed the "Package" button.


The project was built and packaged successfully. It opened up the location where the package and related files were created.

It created a directory called app.publish in the bin\debug directory under the cloud service project.

This is no different from the past. However, there is a new directory called Extensions.


The Extensions directory has a PubConfig.xml file for each role type. In the past you had to create this file manually from diagnostics.wadcfg. These files are needed by the PowerShell cmdlets that deploy the diagnostics extension.


We use AppVeyor for continuous integration and deployment. It uses msbuild to build the projects.

I ran the "Developer Command Prompt for Visual Studio 2013" and used the following command to build and package the cloud project.

msbuild <ccproj_file> /t:Publish /p:PublishDir=<temp_path>

I verified that msbuild also created the package and all the related files.
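As a hedged illustration for a build server, the same command can be invoked from PowerShell; the MSBuild path, the project file name and the output directory below are hypothetical (the project file name is assumed from the paths shown later in this post) and should be adjusted to your own layout.

# Hypothetical example: package the cloud service project from a CI build script.
$msbuild = "C:\Program Files (x86)\MSBuild\12.0\Bin\msbuild.exe"   # MSBuild that ships with VS 2013
$ccproj  = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\DiagnosticsSDK26.ccproj"

& $msbuild $ccproj /t:Publish /p:PublishDir="C:\Temp\CloudPackage\"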

PowerShell Cmdlets for Azure Diagnostics


For new cloud services there are two ways to apply the diagnostics extension:

  1. You can pass the extension configuration to New-AzureDeployment via the -ExtensionConfiguration parameter.
  2. You can create the Cloud Service first and use Set-AzureServiceDiagnosticsExtension to apply the PaaS diagnostics extension.

You can learn about it here.

https://msdn.microsoft.com/en-us/library/azure/dn495270.aspx

I chose method one because it was faster than applying the extension in a separate call.

Deploying PaaS Diagnostics Extension for the first time

The following script creates a new cloud service, creates the diagnostics configuration, and deploys the package, which also deploys the PaaS diagnostics extension.

I am setting the diagnostics extension for each role separately.

At the end of this script I use Get-AzureServiceDiagnosticsExtension to verify that the diagnostics extension has been installed.

You can also use Visual Studio Server Explorer to view the diagnostics.

<#
.SYNOPSIS
Provisions a new cloud service with web/worker role built with SDK 2.6 and applies diagnostics extension
 
.DESCRIPTION
This script will create a new cloud service, deploy the package, and apply the Azure diagnostics extension to each role type.
This cloud service has a WebRole1 and a WorkerRole1
#>

$VerbosePreference = "Continue" 
$ErrorActionPreference = "Stop"

$SubscriptionName = "Your Subscription Name"
$VMStorageAccount = "storage account used during deployment"
$service_name = 'cloud service name'
$location = "Central US"
$package = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\DiagnosticsSDK26.cspkg"
$configuration = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\ServiceConfiguration.Cloud.cscfg"
$slot = "Production"
#diagnostics storage account
$storage_name = 'diagnostics storage account name'
#diagnostics storage account key
$key = 'storage account key'


# SDK 2.6 tooling generates these pubconfig files for each role type
$webrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WebRole1.PubConfig.xml"
$workerrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WorkerRole1.PubConfig.xml"

#Print the version of the PowerShell Cmdlets you are currently using
(Get-Module Azure).Version

# Mark the start time of the script execution
$startTime = Get-Date 

#set the default storage account for the subscription
Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccountName $VMStorageAccount

if(Test-AzureName -Service $service_name)
{
    Write-Host "Service [$service_name] already exists"
}
else
{ 
    #Create new cloud service
    New-AzureService -ServiceName $service_name -Label "Raj SDK 2.6 Diagnostics Demo" -Location $location
}

#create storage context
$storageContext = New-AzureStorageContext -StorageAccountName $storage_name -StorageAccountKey $key

$workerconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $workerrolediagconfig -role "WorkerRole1"
$webroleconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $webrolediagconfig -role "WebRole1"

#deploy the package to the new cloud service and apply the diagnostics extension
New-AzureDeployment -ServiceName $service_name -Package $package -Configuration $configuration -Slot $slot -ExtensionConfiguration @($workerconfig,$webroleconfig)

# Mark the finish time of the script execution
$finishTime = Get-Date 

#Display the details of the extension
Get-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production

 
# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds 
Write-Output "The script completed in $TotalTime seconds."

 

 

Update PaaS Diagnostics Extension

I wanted to see how we can update the diagnostics extension, so I made these changes to my project.

I added a new worker role to the same project. I also changed the diagnostics configuration.

Typically an extension is only deployed once. To deploy the extension again you have two options:

  1. You can either change the name of the extension
  2. You can remove the extension and install it again

I chose the second option.

Here is what this script does:

It removes the PaaS Diagnostics extension from the cloud service

It creates the PaaS diagnostics configuration for each role.

It updates the cloud service and applies the PaaS diagnostics extension to each role, including the new worker role Hard.WorkerRole.

Having a . in the name used to break Set-AzureServiceDiagnosticsExtension. It is nice to see that it works now.

<#
.SYNOPSIS
Updates an existing Cloud service and applies azure diagnostics extension as well
 
.DESCRIPTION
This script removes the diagnostics extension, updates the cloud service, and applies the Azure diagnostics extension to each role type.
This cloud service had a WebRole1 and a WorkerRole1 initially. I added a new role called Hard.WorkerRole.
I put a . in the name because the SDK 2.5 Set-AzureServiceDiagnosticsExtension had a bug where a . in the name broke it.
#>

# Set the output level to verbose and make the script stop on error
$VerbosePreference = "Continue" 
$ErrorActionPreference = "Stop" 

$service_name = 'Cloud service name'
$storage_name = 'diagnostics storage account'
$key = 'storage account key'
$package = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\DiagnosticsSDK26.cspkg"
$configuration = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\ServiceConfiguration.Cloud.cscfg"

#Print the version of the PowerShell Cmdlets you are currently using
(Get-Module Azure).Version

# Mark the start time of the script execution
$startTime = Get-Date 

#remove the old diagnostics extension
Remove-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production -ErrorAction SilentlyContinue -ErrorVariable errorVariable
if (!($?)) 
{ 
        Write-Error "Unable to remove diagnostics extension from Service [$service_name]. Error Detail: $errorVariable" 
        Exit
}

$storageContext = New-AzureStorageContext -StorageAccountName $storage_name -StorageAccountKey $key
$webrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WebRole1.PubConfig.xml"
$workerrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WorkerRole1.PubConfig.xml"
$hardwrkdiagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.Hard.WorkerRole.PubConfig.xml"
 

#create extension config
$workerconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $workerrolediagconfig -role "WorkerRole1"
$webroleconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $webrolediagconfig -role "WebRole1"
$hardwrkconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $hardwrkdiagconfig -role "Hard.WorkerRole"

#upgrade the existing code and apply diagnostic extension at the same time
Set-AzureDeployment -Upgrade -ServiceName $service_name -Mode Auto -Package $package -Configuration $configuration  -Slot Production -ErrorAction SilentlyContinue -ErrorVariable errorVariable -ExtensionConfiguration @($workerconfig,$webroleconfig,$hardwrkconfig)
if (!($?)) 
{ 
        Write-Error "Unable to upgrade Service [$service_name]. Error Detail: $errorVariable" 
        Exit
}

# Mark the finish time of the script execution
$finishTime = Get-Date 

#Display the details of the extension
Get-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production

 
# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds 
Write-Output "The script completed in $TotalTime seconds."

 

Summary

Azure SDK 2.6 has addressed most of the issues related to deploying diagnostics to cloud services that were introduced by SDK 2.5. The cleanest way to update the diagnostics extension is to remove the existing extension and set it again during deployment. When I tested deploying the diagnostics extension individually on each role, it took 3-4 minutes per extension, so if you have a large number of roles your deployment times may increase. In my case, with 3 role types it was taking 12 minutes for the script to run. When I used the -ExtensionConfiguration parameter of New-AzureDeployment and Set-AzureDeployment, it took only 5 minutes for the entire script to run.

NLog Target for Azure ServiceBus Event Hub

NLog is a popular open source logging framework for .Net applications. It writes to various destinations via targets, and it has a large number of targets available. I created an NLog target that can send messages to Azure ServiceBus EventHub. You can get the source code and documentation here: https://github.com/rajinders/nlog-targets-azureeventhub

I also created a NuGet package which you can download from here: https://www.nuget.org/packages/NLog.Targets.AzureEventHub/

If you already know how to use NLog it will take you a few minutes to start using the target.

Feel free to use it and let me know if you have any suggestions for improvements.

You may be wondering why anyone would want to send logs to Azure Event Hub. Most applications use logging frameworks to write application logs. These logs are not only helpful in debugging issues, they are also a source for business intelligence. There are already successful companies like Splunk, Logentries and Loggly who provide cloud based log aggregation services. If you want to create your own log aggregation service without writing a lot of code, you can do so on the Azure platform. You can send your log messages to EventHub with the NLog or Serilog targets for EventHub. You can leverage the Azure Stream Analytics service to process your log streams. You can even send these logs to Power BI to create dashboards. Both Azure Event Hub and Stream Analytics are highly scalable. Scaling up can be achieved by simple configuration changes.

Bloggers Guide to Azure Event Hub

I love the integration/middleware space. In the spring of 2004 I was working on a large implementation for a client. We had to integrate externally and internally with a large number of systems, and we also had a need for long running processes. We were already using BizTalk 2002. We came to know about a radical new version of BizTalk Server called BizTalk 2004. It was based on .Net and was re-written from scratch. As soon as we learned about its capabilities we knew that it was a far better product for what we were implementing. We made a decision to use BizTalk Server 2004 Beta during our development. Since the product we were building was releasing in fall/winter, we knew that it would become generally available before we went live. Making the decision to switch to BizTalk 2004 was easy. The hard part came when I had to design 30 plus long running processes using BizTalk. There wasn't any documentation. There were no BizTalk experts we could reach out to. At that time somebody began publishing a guide called "Bloggers Guide to BizTalk". It was a compiled help file which included blog posts from authors all over the world. Without this guide we would have failed to implement our solution using BizTalk 2004.

I still like the middleware space, but I have added cloud, IoT, and DevOps to the list of technologies I use every day. Azure Event Hub is a relatively new PaaS service that was announced at the last Build conference. It became generally available at TechEd Barcelona in October 2014. I will use this blog post to document various resources about the Azure ServiceBus EventHub service. I named it "Bloggers Guide to Azure Event Hub" as an ode to "Bloggers Guide to BizTalk". I want to make it easier for anybody learning about Azure Event Hub to find helpful resources that will quickly get them started. I will make weekly updates to keep it current.

 

Videos

Introduction to EventHub from TechEd Barcelona: http://channel9.msdn.com/Events/TechEd/Europe/2014/CDP-B307

Cloud Cover Show about Event Hub: http://search.channel9.msdn.com/content/result?sid=b8411351-e4b2-4fff-bb3c-a64b566c7d99&rid=85437dcd-37ee-4965-ab09-a3d4013c30d7

 

MSDN Documentation

Event Hub Overview: https://msdn.microsoft.com/en-us/library/azure/dn836025.aspx

Event Hubs Programming Guide: https://msdn.microsoft.com/en-us/library/azure/dn789972.aspx

Event Hub API Overview: https://msdn.microsoft.com/en-us/library/azure/dn790190.aspx

 

Event Processor Host

EventProcessorHost class: https://msdn.microsoft.com/en-us/library/azure/microsoft.servicebus.messaging.eventprocessorhost.aspx

EventProcessorHost is covered in the API overview, but I want to call it out once again as it is the easiest way to process messages out of Event Hub. It may meet the needs of 90-95% of scenarios. To get an in-depth understanding of EventProcessorHost you should read this series of blog posts from Dan Rosanova.

Event Processor Host Best Practices Part I : http://blogs.msdn.com/b/servicebus/archive/2015/01/16/event-processor-host-best-practices-part-1.aspx

Event Processor Host Best Practices Part II: http://blogs.msdn.com/b/servicebus/archive/2015/01/21/event-processor-host-best-practices-part-2.aspx

 

Code Samples

ServiceBus Event Hubs Getting Started : https://code.msdn.microsoft.com/windowsapps/Service-Bus-Event-Hub-286fd097

Scale Out Event Processing with Event Hub: https://code.msdn.microsoft.com/windowsapps/Service-Bus-Event-Hub-45f43fc3

ServiceBus Event Hub Direct Receiver: https://code.msdn.microsoft.com/windowsapps/Event-Hub-Direct-Receivers-13fa95c6

 

Reference Architecture

data-pipeline

https://github.com/mspnp/data-pipeline

If you are looking for a reference architecture and code sample for building a scalable real-world application, data-pipeline will be helpful to you.

Real-Time Event Processing with Microsoft Azure Stream Analytics

http://azure.microsoft.com/en-us/documentation/articles/stream-analytics-real-time-event-processing-reference-architecture/

This reference architecture is about Stream Analytics but it shows how Event Hub is a core part of the real-time event processing architecture.

 

Tools

ServiceBus Explorer

https://code.msdn.microsoft.com/windowsapps/Service-Bus-Explorer-f2abca5a

Anybody developing Service Bus applications should be using this tool. It supports Queues, Topics, and Event Hubs.

 

Provisioning

If you want to provision an Event Hub in Azure, your options are:

1. Use the Azure Management Portal

2. Use the SDK to provision it in code

3. Use the REST API

4. Paolo Salvatori created a PowerShell script that invokes the REST API to create a Service Bus namespace and an Event Hub. This is the script I am using in my current project. http://blogs.msdn.com/b/paolos/archive/2014/12/01/how-to-create-a-service-bus-namespace-and-an-event-hub-using-a-powershell-script.aspx
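As a hedged sketch related to options 1 and 4 above: the namespace itself can be created with the classic PowerShell cmdlets, while the Event Hub inside it still has to be created via the REST API or the Service Bus SDK (for example with Paolo's script); the namespace name and location below are placeholders.

# Hypothetical example: create the Service Bus namespace that will host the Event Hub.
New-AzureSBNamespace -Name "myeventhubns" -Location "West US" -NamespaceType Messaging -CreateACSNamespace $false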

 

Logging Framework

EventHub makes an excellent target for ingesting logs at scale.

Serilog

Serilog is an easy-to-use .NET structured logging framework. It already has an Event Hub sink. You can check it out here:

https://github.com/serilog/serilog/tree/dev/src/Serilog.Sinks.AzureEventHub

 

Miscellaneous Blog Posts

Azure Event Hubs – All my thoughts by Nino Crudele: http://ninocrudele.me/2014/12/12/azure-event-hub-all-my-thoughts/

Getting Started with Azure Event Hub by Fabric Controller: http://fabriccontroller.net/blog/posts/getting-started-azure-service-bus-event-hubs-building-a-real-time-log-stream/

Azure’s New Event Hub – Brent’s Notepad: https://brentdacodemonkey.wordpress.com/2014/11/18/azures-new-event-hub/

Sending Raspberry Pi data to Event Hub and many blog posts about Azure Event Hub on Faister’s blog: http://blog.faister.com/

Sending Kinect data to Azure Event Hub at Alejandro’s blog: http://blogs.southworks.net/ajezierski/2014/11/10/azure-event-hubs-the-thing-and-the-internet/

Azure Stream Analytics, Scenarios and Introduction by Sam Vanhoutte. This blog post is about Azure Stream Analytics but both of these services will work together in many scenarios. http://www.codit.eu/blog/2015/01/azure-stream-analytics-getting-started/

Azure Event Hub Updates from a NetMF Device on Dev Mobiles blog: http://blog.devmobile.co.nz/2014/11/30/azure-event-hub-updates-from-a-netmf-device/