Category Archives: Uncategorized

How to upgrade Azure DS Series VM to GS Series VM

Azure GS Series VMs were released on September 2nd, 2015. G Series VMs were released earlier this year.

The GS Series adds the ability to use SSD-backed Premium Storage to the largest and fastest virtual machines on the Azure platform.

You can read more about them here:

https://azure.microsoft.com/en-us/blog/azure-has-the-most-powerful-vms-in-the-public-cloud/

GS Series VMs are not available in all regions yet.

I had a DS Series VM running in the “West US” region. This VM was in an availability set.

We needed to upgrade this VM to a GS Series VM. When I went to http://portal.azure.com and tried to resize the VM, I did not see any GS Series sizes in the list of upgrade options. I knew that the GS Series was available in “West US”, so I needed to find another way to resize my VM.

Azure Resource Explorer

https://resources.azure.com

If you are writing Azure Resource Manager templates you will find Azure Resource Explorer invaluable. Documentation for services is often incomplete, so I create a resource using the Azure management portal and then use Resource Explorer to understand the properties of that resource. The majority of the time I use Resource Explorer just to read information; in this particular case, however, I used it to upgrade the VM. I have only tried this in a dev environment, where it worked.

Here is my DS1 instance running in West US.


Here is what this VM looks like in Resource Explorer.


My VM was running. I tried updating the VM to Standard_GS1 using these steps:

1. You must have appropriate access to the Azure subscription/resource group where the VM is running.

2. Log into Resource Explorer in “ReadWrite” mode and navigate to the subscription, resource group, Microsoft.Compute provider, and the virtual machine.


3. Select the virtual machine and press the “Edit” button.


4. Update the value of vmSize to Standard_GS1 and press “PUT”.


5. The operation failed with the error below. I found that I had to stop/deallocate the VM; this is required whether or not the VM is in an availability set. The error message was very descriptive.

  {
    "error": {
      "code": "OperationNotAllowed",
      "target": "vmSize",
      "message": "Unable to update the VM. The requested VM size 'Standard_GS1' may not be available in the resources supporting the existing allocation. Please try again later, try with a different VM size or create a VM with a new availability set or no availability set binding."
    }
  }

6. I went to the preview portal and stopped the VM. The status of the VM changed to Stopped (deallocated).

7. I refreshed the resource explorer to make sure it had the latest settings for the VM.

8. I repeated steps 2, 3 and 4. This time there were no errors.

9. I verified in the portal that the size had changed to Standard_GS1.


10. Don’t forget to shut down the VM after your experiment is over.
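
For completeness, here is a rough sketch of the same resize done from PowerShell instead of Resource Explorer. It assumes the Azure PowerShell 0.9.x ARM-mode cmdlets; the resource group and VM names are placeholders, and the exact property name on the VM object may differ between SDK versions, so treat this as a sketch rather than a recipe.

  # Sketch only: resize an ARM VM to Standard_GS1 (names are placeholders)
  Switch-AzureMode AzureResourceManager
  Stop-AzureVM -ResourceGroupName "MyResourceGroup" -Name "MyVM" -Force   # stop (deallocate) first
  $vm = Get-AzureVM -ResourceGroupName "MyResourceGroup" -Name "MyVM"
  $vm.HardwareProfile.VirtualMachineSize = "Standard_GS1"                 # property name may vary by SDK version
  Update-AzureVM -ResourceGroupName "MyResourceGroup" -VM $vm
  Start-AzureVM -ResourceGroupName "MyResourceGroup" -Name "MyVM"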

Summary

Upgrading a DS Series VM to a GS Series VM is possible, but a reboot is required, whether the VM is standalone or in an availability set. When new services are launched on the Azure platform they may not yet have PowerShell or Azure CLI support available or documented. Azure Resource Explorer lets us interact directly with the Azure platform and can be used to manage resources.

Adventures with Azure Resource Manager Part I

Overview

In this series of blog posts I will create ARM templates used to provision Azure resources. I will kick things off with a template that shows you how to create multiple storage accounts. It also shows:

  1. How to use parameters of type array
  2. How to use the length function to iterate over the elements of an array
  3. How to use copy to create a resource loop
  4. How to use the outputs section of the template to display information about newly created resources
  5. How to use parameter files to provision resources in your dev, test and production environments

Show me my template

Parameters: Lines 4-11

Like any other ARM template, this template starts with a parameters section. Line 5 declares a parameter named storageAccountList of type array. This parameter passes in an array of objects, each of which provides the details required to provision one storage account.

Resources: Lines 12-26

This is the section where we iterate over the objects in storageAccountList and provision the storage accounts in a resource loop.

Line 14: Sets the name property of the storage account being provisioned

Line 17: Sets the Location property of the storage account being provisioned

Line 20: Uses length function to determine the number of elements in the storageAccountList

Line 23: Sets the accountType property of the storage account being provisioned

Outputs: Lines 27-40

This section displays details about the storage accounts that were provisioned.

Lines 30, 34 and 38 reference the storage accounts that were provisioned.

1:  {  
2:    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",  
3:    "contentVersion": "1.0.0.0",  
4:    "parameters": {  
5:      "storageAccountList": {  
6:        "type": "array",  
7:        "metadata": {  
8:          "description": "List of storage accounts that need to be created"  
9:        }  
10:      }  
11:    },  
12:    "resources": [  
13:      {  
14:        "name": "[parameters('storageAccountList')[copyIndex()].name]",  
15:        "type": "Microsoft.Storage/storageAccounts",  
16:        "apiVersion": "2015-05-01-preview",  
17:        "location": "[parameters('storageAccountList')[copyIndex()].location]",  
18:        "copy": {  
19:          "name": "storageAccountLoop",  
20:          "count": "[length(parameters('storageAccountList'))]"  
21:        },  
22:        "properties": {  
23:          "accountType": "[parameters('storageAccountList')[copyIndex()].storageAccountType]"  
24:        }  
25:      }  
26:    ],  
27:    "outputs": {  
28:      "stgobject1": {  
29:        "type": "object",  
30:        "value": "[reference(concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountList')[0].name),providers('Microsoft.Storage', 'storageAccounts').apiVersions[0])]"  
31:      },  
32:      "stgobject2": {  
33:        "type": "object",  
34:        "value": "[reference(concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountList')[1].name),providers('Microsoft.Storage', 'storageAccounts').apiVersions[0])]"  
35:      },  
36:      "stgobject3": {  
37:        "type": "object",  
38:        "value": "[reference(concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountList')[2].name),providers('Microsoft.Storage', 'storageAccounts').apiVersions[0])]"  
39:      }  
40:    }  
41:  }  

Parameter Files

We have created a single parameterized template. After the template has been tested you can use different parameter files with the same template to provision resources in different environments.

Dev Parameters File

This parameter file defines a storage account list with the name, location and storageAccountType properties for each storage account. It can be used to provision storage accounts in a dev environment.

  {
    "storageAccountList": {
      "value": [
        { "name": "rajappdev", "location": "Central US", "storageAccountType": "Standard_LRS" },
        { "name": "rajdbdev", "location": "Central US", "storageAccountType": "Standard_GRS" },
        { "name": "rajwebdev", "location": "Central US", "storageAccountType": "Standard_ZRS" },
        { "name": "rajarchdev", "location": "West US", "storageAccountType": "Premium_LRS" }
      ]
    }
  }

Test Parameters File

This parameter file defines a storage account list with the name, location and storageAccountType properties for each storage account. It can be used to provision storage accounts in a test environment.

  {
    "storageAccountList": {
      "value": [
        { "name": "rajapptest", "location": "Central US", "storageAccountType": "Standard_LRS" },
        { "name": "rajdbtest", "location": "Central US", "storageAccountType": "Standard_GRS" },
        { "name": "rajwebtest", "location": "Central US", "storageAccountType": "Standard_ZRS" },
        { "name": "rajarchtest", "location": "West US", "storageAccountType": "Premium_LRS" }
      ]
    }
  }

Prod Parameters File

This parameter file defines a storage account list with the name, location and storageAccountType properties for each storage account. It can be used to provision storage accounts in a prod environment.

  {
    "storageAccountList": {
      "value": [
        { "name": "rajappprod", "location": "Central US", "storageAccountType": "Standard_LRS" },
        { "name": "rajdbprod", "location": "Central US", "storageAccountType": "Standard_GRS" },
        { "name": "rajwebprod", "location": "Central US", "storageAccountType": "Standard_ZRS" },
        { "name": "rajarchprod", "location": "West US", "storageAccountType": "Premium_LRS" }
      ]
    }
  }

 

Ship It (Make it so Number 2)

Now that our template and parameter files are ready, we can execute them to provision resources.

Here is a short script that is used to provision resources.

Lines 16-30: Create the resource group if it does not already exist

Line 40: Uses the template and a parameters file to provision storage accounts.

1:  Param  
2:  (  
3:    [Parameter (Mandatory = $true)]  
4:    [string] $ResourceGroupName,  
5:    
6:    [Parameter (Mandatory = $true)]  
7:    [string] $Location,  
8:    
9:    [Parameter (Mandatory = $true)]  
10:    [string] $ParametersFile  
11:  )  
12:    
13:  #publish version of the powershell cmdlets we are using  
14:  (Get-Module Azure).Version  
15:    
16:  $rg = Get-AzureResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue  
17:    
18:  if (!$rg)  
19:  {  
20:    # Create a new resource group  
21:    Write-Output "";  
22:    Write-Output "Creating Resource Group [$ResourceGroupName] in location [$Location]"  
23:    
24:    
25:    New-AzureResourceGroup -Name "$ResourceGroupName" -Force -Location $Location -ErrorVariable errorVariable -ErrorAction SilentlyContinue | Out-Null  
26:    
27:    if (!($?))   
28:    {   
29:      throw "Cannot create new Resource Group [$ResourceGroupName] in region [$Location]. Error Detail: $errorVariable"   
30:    }  
31:       
32:    Write-Output "Resource Group [$ResourceGroupName] was created"   
33:      
34:  }  
35:  else  
36:  {  
37:    Write-Output "Resource Group [$ResourceGroupName] already exists"  
38:  }  
39:    
40:  New-AzureResourceGroupDeployment -Name stgdeployment -ResourceGroupName $ResourceGroupName -TemplateFile .\createstorageaccts.json -TemplateParameterFile $ParametersFile  

 

Trust but Verify

Line 1 calls the deploy.ps1 script and passes in the resource group name, location and parameters file.

Lines 54-92 show the details of the storage accounts that were provisioned.

1:  PS C:\git\ArmExamples\CreateStorageAccounts> .\deploy.ps1 -ResourceGroupName ARM-Dev -Location "West US" -ParametersFile  
2:   .\storageaccts-dev.json  
3:    
4:  Creating Resource Group [ARM-Dev] in location [West US]  
5:  VERBOSE: 3:54:11 PM - Created resource group 'ARM-Dev' in location 'westus'  
6:  Resource Group [ARM-Dev] was created  
7:  VERBOSE: 3:54:13 PM - Template is valid.  
8:  VERBOSE: 3:54:14 PM - Create template deployment 'stgdeployment'.  
9:  VERBOSE: 3:54:22 PM - Resource Microsoft.Storage/storageAccounts 'rajarchdev' provisioning status is running  
10:  VERBOSE: 3:54:22 PM - Resource Microsoft.Storage/storageAccounts 'rajwebdev' provisioning status is running  
11:  VERBOSE: 3:54:24 PM - Resource Microsoft.Storage/storageAccounts 'rajappdev' provisioning status is running  
12:  VERBOSE: 3:54:24 PM - Resource Microsoft.Storage/storageAccounts 'rajdbdev' provisioning status is running  
13:  VERBOSE: 4:04:03 PM - Resource Microsoft.Storage/storageAccounts 'rajappdev' provisioning status is succeeded  
14:  VERBOSE: 4:04:03 PM - Resource Microsoft.Storage/storageAccounts 'rajarchdev' provisioning status is succeeded  
15:  VERBOSE: 4:04:05 PM - Resource Microsoft.Storage/storageAccounts 'rajappdev' provisioning status is succeeded  
16:  VERBOSE: 4:04:13 PM - Resource Microsoft.Storage/storageAccounts 'rajdbdev' provisioning status is succeeded  
17:  VERBOSE: 4:04:13 PM - Resource Microsoft.Storage/storageAccounts 'rajwebdev' provisioning status is succeeded  
18:  VERBOSE: 4:04:13 PM - Resource Microsoft.Storage/storageAccounts 'rajdbdev' provisioning status is succeeded  
19:  VERBOSE: 4:04:13 PM - Resource Microsoft.Storage/storageAccounts 'rajwebdev' provisioning status is succeeded  
20:    
21:    
22:  DeploymentName  : stgdeployment  
23:  ResourceGroupName : ARM-Dev  
24:  ProvisioningState : Succeeded  
25:  Timestamp     : 8/14/2015 9:04:25 PM  
26:  Mode       : Incremental  
27:  TemplateLink   :  
28:  Parameters    :  
29:            Name       Type            Value  
30:            =============== ========================= ==========  
31:            storageAccountList Array           [  
32:             {  
33:              "name": "rajappdev",  
34:              "location": "Central US",  
35:              "storageAccountType": "Standard_LRS"  
36:             },  
37:             {  
38:              "name": "rajdbdev",  
39:              "location": "Central US",  
40:              "storageAccountType": "Standard_GRS"  
41:             },  
42:             {  
43:              "name": "rajwebdev",  
44:              "location": "Central US",  
45:              "storageAccountType": "Standard_ZRS"  
46:             },  
47:             {  
48:              "name": "rajarchdev",  
49:              "location": "West US",  
50:              "storageAccountType": "Premium_LRS"  
51:             }  
52:            ]  
53:    
54:  Outputs      :  
55:            Name       Type            Value  
56:            =============== ========================= ==========  
57:            stgobject1    Object           {  
58:             "provisioningState": "Succeeded",  
59:             "accountType": "Standard_LRS",  
60:             "primaryEndpoints": {  
61:              "blob": "https://rajappdev.blob.core.windows.net/",  
62:              "queue": "https://rajappdev.queue.core.windows.net/",  
63:              "table": "https://rajappdev.table.core.windows.net/"  
64:             },  
65:             "primaryLocation": "Central US",  
66:             "statusOfPrimary": "Available",  
67:             "creationTime": "2015-08-14T20:54:32.9062387Z"  
68:            }  
69:            stgobject2    Object           {  
70:             "provisioningState": "Succeeded",  
71:             "accountType": "Standard_GRS",  
72:             "primaryEndpoints": {  
73:              "blob": "https://rajdbdev.blob.core.windows.net/",  
74:              "queue": "https://rajdbdev.queue.core.windows.net/",  
75:              "table": "https://rajdbdev.table.core.windows.net/"  
76:             },  
77:             "primaryLocation": "Central US",  
78:             "statusOfPrimary": "Available",  
79:             "secondaryLocation": "East US 2",  
80:             "statusOfSecondary": "Available",  
81:             "creationTime": "2015-08-14T20:54:32.0468124Z"  
82:            }  
83:            stgobject3    Object           {  
84:             "provisioningState": "Succeeded",  
85:             "accountType": "Standard_ZRS",  
86:             "primaryEndpoints": {  
87:              "blob": "https://rajwebdev.blob.core.windows.net/"  
88:             },  
89:             "primaryLocation": "Central US",  
90:             "statusOfPrimary": "Available",  
91:             "creationTime": "2015-08-14T20:54:29.9062389Z"  
92:            }  

Cleanup

To remove all the resources you provisioned you can use the Remove-AzureResourceGroup cmdlet as shown below:

  Remove-AzureResourceGroup -Name ARM-Dev  

Doggy Bag Please

You can access all the samples from my GitHub Repository here: https://github.com/rajinders/ArmExamples

Summary

I hope you found this sample helpful. I will post more samples on a regular basis.

Resources to learn Azure Resource Manager (ARM) Language

Azure Resource Manager (ARM) was announced in Spring 2014. It is a completely different way of deploying services on the Azure platform. It matters because before the release of ARM it was only possible to deploy one service at a time: when you were deploying applications using PowerShell or the Azure CLI you had to deploy all the services via a script, and as the number of services increased the scripts became increasingly complex and brittle. Over the past year ARM capabilities have evolved rapidly. All future services will be deployed via ARM cmdlets or templates, and the current Azure Service Management APIs will eventually be deprecated. Even when using ARM you have two choices:

  • Imperative: This is very similar to how you were using the Service Management APIs to provision services.
  • Declarative: Here you define the application configuration in a JSON template. The template can be parameterized. Once that is done, a single PowerShell cmdlet, New-AzureResourceGroupDeployment, deploys your entire application (a minimal example follows below). The deployment can span regions as well. You can define dependencies between resources, and the deployment process deploys them in the order necessary for the deployment to succeed; if there are no dependencies it parallelizes the deployment. You can repeatedly deploy the same template, and the deployment process is smart enough to determine what changed and only deploy or update the services that changed. ARM templates can not only provision the infrastructure, they can also execute tasks inside the provisioned VMs to fully configure your application. On Windows VMs you can use either DSC or PowerShell scripts for customization; on Linux you can use bash scripts to customize the VM after it has been created.
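
To make the declarative flow concrete, here is a minimal sketch of such a deployment; the resource group, template and parameter file names are just examples.

  # Minimal declarative deployment: one cmdlet deploys everything described in the template
  Switch-AzureMode AzureResourceManager
  New-AzureResourceGroup -Name "demo-rg" -Location "West US" -Force
  New-AzureResourceGroupDeployment -Name "demo" -ResourceGroupName "demo-rg" `
      -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json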

AWS has had a similar capability, called CloudFormation, for many years. While ARM and CloudFormation are similar and are trying to achieve similar goals, there are some differences between them as well.

Resources

If you believe in DevOps and work with the Microsoft Azure platform, understanding ARM will be beneficial. Another thing worth mentioning is that ARM templates will also allow you to deploy services to your private cloud when Azure Stack is released. I want to share some helpful resources to make it easier for you to learn ARM.

  1. Treat your Azure Infrastructure as code is an excellent overview of ARM and its benefits: https://www.linkedin.com/pulse/treat-your-azure-infrastructure-code-krishna-venkataraman?trk=prof-post
  2. ARM Language Reference: https://msdn.microsoft.com/en-us/library/azure/Dn835138.aspx?f=255&MSPPError=-2147217396
  3. Azure Quick Start Templates at Github: If you are like me you learn from examples. Here is a large repository of ARM templates. https://github.com/Azure/azure-quickstart-templates
  4. Ryan Jones from Microsoft posted many simple ARM samples here: https://github.com/rjmax/ArmExamples
  5. Full Scale 180 blog is another excellent resource to learn how to write ARM templates.  http://blog.fullscale180.com/building-azure-resource-manager-templates/   I especially like the Couchbase Sample: https://github.com/Azure/azure-quickstart-templates/tree/master/couchbase-on-ubuntu
  6. If you still want to use the imperative method of deploying Azure resources, check out this sample from Joe Davies that walks you through the process of provisioning a VM: https://azure.microsoft.com/blog/2015/06/11/step-through-creating-resource-manager-virtual-machine-powershell/
  7. Here is a sample showing how to lock down your resources with a Resource Manager lock: http://blogs.msdn.com/b/cloud_solution_architect/archive/2015/06/18/lock-down-your-azure-resources.aspx
  8. Neil Mackenzie posted a sample for creating a VM with an instance IP address here: https://gist.github.com/nmackenzie/db9a4b7abdee2760dba8
  9. Alexandre Brisebois posted a sample showing how to provision a CentOS VM using an ARM template. In this example he shows how to customize the VM after its creation using a bash script. https://alexandrebrisebois.wordpress.com/2015/05/25/create-a-centos-virtual-machine-using-azure-resource-manager-arm/
  10. Kloud Blog has a nice overview of how to get started with ARM and many samples: http://blog.kloud.com.au/tag/azure-resource-manager/
  11. If you want to learn about best practices for writing ARM templates, this is a must-read document: https://azure.microsoft.com/en-us/documentation/articles/best-practices-resource-manager-design-templates/
  12. This blog post shows how you can use the output section of the template to publish information about newly created resources: http://blogs.msdn.com/b/girishp/archive/2015/06/16/azure-arm-templates-tips-on-using-outputs.aspx
  13. Check out this very comprehensive list of ARM resources compiled by Hans Vredevoort: https://onedrive.live.com/view.aspx?resid=96BA3346350A5309!318670&app=OneNote&authkey=!APNWE3DZp1C-RjY
  14. This blog post shows how you can use arrays, the length function, resource loops and outputs to provision multiple storage accounts: http://www.rajinders.com/2015/08/14/adventures-with-azure-resource-manager-part-i/

 

Samples

As I work with ARM templates I am constantly developing or looking for samples that can help me. The sample templates below were created by product teams at Microsoft but have not been integrated into the Quick Start templates yet. I will use this section to document some of the helpful samples I have found.

  1. Azure Web Site with a Web Job Template: This template was created by David Ebbo. It is the only ARM template sample I have seen that shows how to publish WebJobs with an ARM template. https://github.com/davidebbo/AzureWebsitesSamples/blob/master/ARMTemplates/WebAppWithWebJobs.json
  2. Length Function: As I began learning the template language I found it annoying that I had to pass in an array and its length as separate parameters. I then found a sample created by Ryan Jones which shows how to calculate the length of an array. https://github.com/rjmax/ArmExamples/blob/master/copySampleWithLength.json

Tools

ARM documentation is still evolving and sometimes it is difficult to find the samples you are looking for. If you are trying to create a new template and you cannot find any documentation, here are a few things that may be helpful:

  1. Azure Resource Explorer: This is an essential tool for anybody writing ARM templates. You can deploy a resource using the portal and then use Resource Explorer to see the JSON schema for the resource you just created. You can also make changes to the resources: https://resources.azure.com/
  2. ARM Schemas: This is the location where MSFT ARM teams are posting their schemas. https://github.com/Azure/azure-resource-manager-schemas

Debugging

You can view the logs using these PowerShell cmdlets; a usage sketch follows the list.

  1. Get-AzureResourceLog: Gets logs for a specific Azure resource
  2. Get-AzureResourceGroupLog: Gets logs for an Azure resource group
  3. Get-AzureResourceProviderLog: Gets logs for an Azure resource provider
  4. Get-AzureResourceGroupDeploymentOperation: Gets logs for a deployment operation
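
As a quick usage sketch (the parameter name below is my assumption for the 0.9.x cmdlets, so verify it with Get-Help):

  # Sketch: pull the log for a resource group; the -ResourceGroup parameter name is an assumption
  Get-AzureResourceGroupLog -ResourceGroup "MyResourceGroup"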

When your template deployment operation fails, the error message may not have enough detail to tell you the reason for the failure. You can go to the preview Azure portal and examine the audit logs, filtering by resource group, resource type, and time range. I was able to get a detailed error message from the portal.

Surprises

In addition to running the cmdlet Switch-AzureMode -Name AzureResourceManager, I also had to enable my subscription for specific Azure resource providers. This was not necessary when I was using the Service Management APIs. For example, to be able to provision virtual networks with ARM I had to run the following cmdlet:

  Register-AzureProvider -ProviderNamespace Microsoft.Network
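
If you are not sure which providers your subscription already has enabled, something like the following should help; Get-AzureProvider is my assumption for the matching list cmdlet in that release, so treat this as a sketch.

  # Sketch: list provider namespaces and their registration state, then register the ones you need
  Get-AzureProvider
  Register-AzureProvider -ProviderNamespace Microsoft.Compute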

Update (08/04/2015): I previously noted here that even though the template language can work with JSON arrays, it could not determine the number of elements in an array, so you had to pass the count in separately. I removed that statement because the length function is now available.

I hope these resources are helpful. If you are aware of other helpful ARM resources feel free to mention them in the comments on this blog post and I can add them to my list.

I will be posting ARM samples on my blog as well.

Installing Java Runtime in Azure Cloud Services with Chocolatey

I recently wrote a blog post about installing Splunk on Azure web/worker roles with the help of a startup task; you can see that blog post here. In this blog post I will show you how to install the Java runtime in web/worker roles. Azure web/worker roles are stateless, so the only way to install third-party software or tweak Windows features on them is via startup tasks.

Linux users have long had the benefit of tools like apt and yum to download and install software from the command line. Chocolatey provides similar functionality on the Windows platform. If you are into DevOps and automation on Windows you should check out Chocolatey here. It has nearly 15000 packages already available.

Once you have Chocolatey installed, installing Java is a breeze. It is as simple as:

 choco install javaruntime -y  

The statement above is self-explanatory. The -y option answers yes to all the questions, including accepting the license, so you are not prompted for input.
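
If you want to try this interactively on a dev box before baking it into a startup task, the same two steps can be run from an elevated PowerShell prompt; the Chocolatey bootstrap line below is the same one used in the startup script later in this post.

  # Bootstrap Chocolatey, then install the Java runtime package (run from an elevated prompt)
  iex ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
  choco install javaruntime -y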

I already provided detailed steps for defining startup tasks in my previous blog post, so here I will just share the startup script along with the service definition file that shows how to deploy the Java runtime in an Azure web/worker role with a startup task.

Step 1

Create a startup.cmd file and add it to your worker/web role implementation. It should be saved as “Unicode (UTF-8 without signature) – Codepage 65001”.

Set the “copy to output directory” property of startup.cmd to “copy if newer”

Line 9 checks whether the startup task has already run successfully and exits if it has

Line 16 installs Chocolatey

Line 22 installs the Java runtime

Line 26 executes only if Java was installed successfully; it creates the StartupComplete.txt file in the %RoleRoot% directory.

1:  SET LogPath=%LogFileDirectory%%LogFileName%  
2:     
3:  ECHO Current Role: %RoleName% >> "%LogPath%" 2>&1  
4:  ECHO Current Role Instance: %InstanceId% >> "%LogPath%" 2>&1  
5:  ECHO Current Directory: %CD% >> "%LogPath%" 2>&1  
6:     
7:  ECHO We will first verify if startup has been executed before by checking %RoleRoot%\StartupComplete.txt. >> "%LogPath%" 2>&1  
8:     
9:  IF EXIST "%RoleRoot%\StartupComplete.txt" (  
10:    ECHO Startup has already run, skipping. >> "%LogPath%" 2>&1  
11:    EXIT /B 0  
12:  )  
13:    
14:  Echo Installing Chocolatey >> "%LogPath%" 2>&1  
15:    
16:  @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin  >> "%LogPath%" 2>&1  
17:    
18:  IF %ERRORLEVEL% EQU 0 (  
19:    
20:       Echo Installing Java runtime >> "%LogPath%" 2>&1  
21:    
22:       %ALLUSERSPROFILE%\chocolatey\bin\choco install javaruntime -y >> "%LogPath%" 2>&1  
23:    
24:       IF %ERRORLEVEL% EQU 0 (            
25:                 ECHO Java installed. Startup completed. >> "%LogPath%" 2>&1  
26:                 ECHO Startup completed. >> "%RoleRoot%\StartupComplete.txt" 2>&1  
27:                 EXIT /B 0  
28:       ) ELSE (  
29:            ECHO An error occurred. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1  
30:            EXIT %ERRORLEVEL%  
31:       )  
32:  ) ELSE (  
33:    ECHO An error occurred while installing Chocolatey. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1  
34:    EXIT %ERRORLEVEL%  
35:  )  
36:    

 

Step 2

Update the service definition file to define the startup task.

Lines 5 through 19 define the startup task.

1:  <?xml version="1.0" encoding="utf-8"?>  
2:  <ServiceDefinition name="AzureJavaPaaS" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">  
3:   <WorkerRole name="MyWorkerRole" vmsize="Small">  
4:    <Startup>  
5:     <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple">  
6:      <Environment>  
7:       <Variable name="LogFileName" value="Startup.log" />  
8:       <Variable name="LogFileDirectory">  
9:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='LogsPath']/@path" />  
10:       </Variable>  
11:       <Variable name="InstanceId">  
12:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@id" />  
13:       </Variable>  
14:       <Variable name="RoleName">  
15:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@roleName" />  
16:       </Variable>  
17:      </Environment>  
18:     </Task>  
19:    </Startup>  
20:    <ConfigurationSettings>  
21:     <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />  
22:    </ConfigurationSettings>  
23:    <LocalResources>  
24:     <LocalStorage name="LogsPath" cleanOnRoleRecycle="false" sizeInMB="1024" />  
25:    </LocalResources>  
26:    <Imports>  
27:     <Import moduleName="RemoteAccess" />  
28:     <Import moduleName="RemoteForwarder" />  
29:    </Imports>  
30:   </WorkerRole>  
31:  </ServiceDefinition>  

 

Step 3

Publish the cloud service to Azure. I enabled Remote Desktop so I could verify that the worker role was configured successfully.

Verification

I used Remote Desktop to log into the worker role. I looked in

C:\Resources\Directory\d063631e14c1485cb6c838c8f92cd7c3.MyWorkerRole.LogsPath and found startup.txt

It had the following content. As you can see below, Java was installed successfully.

  Current Role: MyWorkerRole
  Current Role Instance: MyWorkerRole_IN_0
  Current Directory: E:\approot
  We will first verify if startup has been executed before by checking E:\StartupComplete.txt.
  Installing Chocolatey
  Installing Java runtime
  Chocolatey v0.9.9.8
  Installing the following packages:
  javaruntime
  By installing you accept licenses for the packages.
  
  jre8 v8.0.45
   Downloading jre8 32 bit
    from 'http://javadl.sun.com/webapps/download/AutoDL?BundleId=106246'
   Installing jre8...
   jre8 has been installed.
   Downloading jre8 64 bit
    from 'http://javadl.sun.com/webapps/download/AutoDL?BundleId=106248'
   Installing jre8...
   jre8 has been installed.
   PATH environment variable does not have D:\Program Files\Java\jre1.8.0_45\bin in it. Adding...
   The install of jre8 was successful.
  
  javaruntime v8.0.40
   The install of javaruntime was successful.
  
  Chocolatey installed 2/2 package(s). 0 package(s) failed.
   See the log for details (D:\ProgramData\chocolatey\logs\chocolatey.log).
  Java installed. Startup completed.

I also verified that the e:\startupcomplete.txt file was created.

I verified that Java was installed in the D:\Sun\Java directory.

You can get the source code for this entire project from my GitHub Repository https://github.com/rajinders/azure-java-paas.

How to migrate from Standard Azure Virtual Machines to DS Series Storage Optimized VM’s

Background

We are implementing Azure solutions for a few clients. Most of our clients use cloud services and virtual machines to implement their solutions on the Azure platform. For many years the Azure platform offered just one performance tier for storage. You can see the sizes of virtual machines and cloud services and the disk performance they offer here:

https://msdn.microsoft.com/en-us/library/azure/dn197896.aspx

For standard Azure virtual machines each disk is limited to 500 IOPS. If you needed better performance you had to stripe across multiple disks. The number of disks you can add to an Azure virtual machine is constrained by the size of the VM: each core allows you to add 2 VHDs, and each VHD is a page blob with a maximum size of 1 TB. For example, a 4-core VM can attach 8 data disks, so striping across all of them tops out at roughly 8 × 500 = 4,000 IOPS. When we were deploying packaged software or custom applications with high IOPS requirements it was challenging to meet the needs of our customers. All this changed with the following announcement by Mark Russinovich, where he announced the general availability of Azure Premium Storage.

http://azure.microsoft.com/blog/2015/04/16/azure-premium-storage-now-generally-available-2/

Azure Premium Storage offers durable SSD storage. Along with Premium Storage, Microsoft also released storage-optimized virtual machines called DS Series VMs. These are capable of achieving up to 64000 IOPS and 524 MB/sec. This enables many scenarios, like NoSQL stores or even large SQL databases, that need higher IOPS than standard Azure virtual machines offer. You can read about the specifications for DS Series VMs in the link posted above. If you are using a standard Azure VM you can easily scale up or down to another standard size using the portal, PowerShell or the Azure CLI. Unfortunately it is currently not possible to upgrade/migrate a standard Azure virtual machine to a DS Series virtual machine with Premium Storage. In this blog post I will show you how you can migrate an existing virtual machine to a DS Series virtual machine with Premium (durable SSD) storage, and I will provide a PowerShell script you can use to perform the migration.

Details

Creating Premium Storage Account

A premium storage account is different from a standard storage account. If you want to leverage Premium Storage you need to create a new storage account in the Azure preview portal. The account type you need to select is “Premium Locally Redundant”.

It is not possible to use the existing Azure management portal to provision a premium storage account.

New Storage Account

Here is how you can use a PowerShell cmdlet to create a premium storage account. As you can see, it is similar to how you create standard storage accounts. I was unable to find what value I had to specify for Type, and I had to read the actual source code to determine that it was ‘Premium_LRS’.

$StorageAccountTypePremium = 'Premium_LRS'

$DestStorageAccount = New-AzureStorageAccount -StorageAccountName $DestStorageAccountName -Location $Location -Type $StorageAccountTypePremium -ErrorVariable errorVariable -ErrorAction SilentlyContinue | Out-Null

if (!($?))
{
    throw "Cannot create the Storage Account [$DestStorageAccountName] on $Location. Error Detail: $errorVariable"
}

 

Premium Storage and DS Series virtual machines are not available in all regions. The complete script provided below validates your location preference and fails if you specify a location where Premium Storage and DS Series VMs are not available.

Creating a DS Series virtual machine is identical to creating a standard virtual machine.
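
For reference, here is a minimal sketch of provisioning a DS Series VM with the service management cmdlets that the script below also uses; the subscription, image and credential values are placeholders.

  # Sketch: point the subscription at a premium storage account, then create the VM with a DS size
  Set-AzureSubscription -SubscriptionName "MySubscription" -CurrentStorageAccountName "rajwestpremstg18"
  New-AzureVMConfig -Name "rajdsvm12" -InstanceSize "Standard_DS2" -ImageName $imageName |
      Add-AzureProvisioningConfig -Windows -AdminUsername $adminUser -Password $adminPassword |
      New-AzureVM -ServiceName "rajdsvm12svc" -Location "West US"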

Here are a few things I learned about DS Series Virtual machines and Premium storage.

  • Premium storage does not allow disks smaller than 10 GB. If your VM has a data disk smaller than 10 GB the script skips it and logs a warning
  • The default host caching option for Premium storage data disks is “Read Only”, compared with “None” for standard data disks.
  • The default host caching option for the Premium storage OS disk is “Read Write”, which is the same as for standard OS disks
  • Currently this script only migrates virtual machines within the same subscription. It can be easily extended to support migration to a different subscription.
  • It can migrate VMs to a different region as long as premium storage is available in that region
  • It shuts down the existing source VM before making a copy of the VHDs for the virtual machine.
  • It validates that the virtual network for the destination VM exists but does not validate that the subnet also exists
  • It gives new names to the disks in the destination virtual machine
  • Currently I only copy disks, endpoints and VM extensions. I do not copy ACLs or other types of extensions, such as the malware extension
  • I have only tested the script with PowerShell SDK version 0.9.2
  • I tested migrating a standard VM in West US to a DS Series VM in West US only. I logged into the newly created VM and verified that all disks were present; this is the extent of my testing. My VM with 3 disks copied in 10 minutes.
  • If your destination storage account already exists it has to be of type “Premium_LRS”; if it is of a different type the script will fail. If the storage account does not exist it will be created.

Sample Script

You can access the entire source code from my public GitHub repository

https://github.com/rajinders/migrate-to-azuredsvm

I have also pasted the entire source code here for your convenience.

<#
Copyright 2015 Rajinder Singh
 
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
 
    http://www.apache.org/licenses/LICENSE-2.0
 
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
#>

<#
.SYNOPSIS
Migrates an existing VM into a DS Series VM which uses Premium storage.
 
.DESCRIPTION
This script migrates an existing VM into a DS Series VM which uses Premium Storage. At this time DS Series VMs are not available in all regions.
It currently expects the VM to be migrated in the same subscription. It supports migrating VM to the same region or a different region.
It can be easily extended to support migrating to a different subscription as well
 
.PARAMETER SourceVMName
The name of the VM that needs to be migrated
 
.PARAMETER SourceServiceName
The name of the service for the old VM
 
.PARAMETER DestVMName
The name of the new DS Series VM that will be created.
 
.PARAMETER DestServiceName
The name of the service for the new VM
 
.PARAMETER Location
Region where the new VM will be created
 
.PARAMETER VMSize
Size of the new VM
 
.PARAMETER DestStorageAccountName
Name of the storage account where the VM will be created. It has to be a premium storage account
 
.PARAMETER ResourceGroupName
Resource group where the cache will be created
 
.EXAMPLE
 
# Migrate a standalone virtual machine to a DS Series virtual machine with Premium storage. Both VMs are in the same subscription
.\MigrateVMToPremiumStorage.ps1 -SourceVMName "rajsourcevm2" -SourceServiceName "rajsourcevm2" -DestVMName "rajdsvm12" -DestServiceName "rajdsvm12svc" -Location "West US" -VMSize Standard_DS2 -DestStorageAccountName 'rajwestpremstg18' -DestStorageAccountContainer 'vhds'
 
 
# Migrate a standalone virtual machine to a DS Series virtual machine with Premium storage and place it in a virtual network subnet. Both VMs are in the same subscription
.\MigrateVMToPremiumStorage.ps1 -SourceVMName "rajsourcevm2" -SourceServiceName "rajsourcevm2" -DestVMName "rajdsvm16" -DestServiceName "rajdsvm16svc" -Location "West US" -VMSize Standard_DS2 -DestStorageAccountName 'rajwestpremstg19' -DestStorageAccountContainer 'vhds' -VNetName rajvnettest3 -SubnetName FrontEndSubnet
 
#>

[CmdletBinding(DefaultParameterSetName="Default")]
Param
(
    [Parameter (Mandatory = $true)]
    [string] $SourceVMName,

    [Parameter (Mandatory = $true)]
    [string] $SourceServiceName,

    [Parameter (Mandatory = $true)]
    [string] $DestVMName,

    [Parameter (Mandatory = $true)]
    [string] $DestServiceName,

    [Parameter (Mandatory = $true)]
    [ValidateSet('West US','East US 2','West Europe','East China','Southeast Asia','West Japan', ignorecase=$true)]
    [string] $Location,

    [Parameter (Mandatory = $true)]
    [ValidateSet('Standard_DS1','Standard_DS2','Standard_DS3','Standard_DS4','Standard_DS11','Standard_DS12','Standard_DS13','Standard_DS14', ignorecase=$true)]
    [string] $VMSize,

    [Parameter (Mandatory = $true)]
    [string] $DestStorageAccountName,

    [Parameter (Mandatory = $true)]
    [string] $DestStorageAccountContainer,

    [Parameter (Mandatory = $false)]
    [string] $VNetName,

    [Parameter (Mandatory = $false)]
    [string] $SubnetName
)

#publish version of the powershell cmdlets we are using
(Get-Module Azure).Version

#$VerbosePreference = "Continue"
$StorageAccountTypePremium = 'Premium_LRS'

#############################################################################################################
#validation section
#Perform as much upfront validation as possible
#############################################################################################################

#validate upfront that this service we are trying to create already exists
if((Get-AzureService -ServiceName $DestServiceName -ErrorAction SilentlyContinue) -ne $null)
{
    Write-Error “Service [$DestServiceName] already exists”
    return
}

#Determine we are migrating the VM to a Virtual network. If it is then verify that VNET exists
if( !$VNetName -and !$SubnetName )
{
    $DeployToVNet = $false
}
else
{
    $DeployToVNet = $true
    $vnetSite = Get-AzureVNetSite -VNetName $VNetName -ErrorAction SilentlyContinue

    if (!$vnetSite)
    {
        Write-Error “Virtual Network [$VNetName] does not exist”
        return
    }
}

Write-Host "DeployToVNet is set to [$DeployToVnet]"

#TODO: add validation to make sure the destination VM size can accomodate the number of disk in the source VM

$DestStorageAccount = Get-AzureStorageAccount -StorageAccountName $DestStorageAccountName -ErrorAction SilentlyContinue

#check to see if the storage account exists and create a premium storage account if it does not exist
if(!$DestStorageAccount)
{
    # Create a new storage account
    Write-Output “”;
    Write-Output (“Configuring Destination Storage Account {0} in location {1}” -f $DestStorageAccountName, $Location);

    $DestStorageAccount = New-AzureStorageAccount -StorageAccountName $DestStorageAccountName -Location $Location -Type $StorageAccountTypePremium -ErrorVariable errorVariable -ErrorAction SilentlyContinue | Out-Null

    if (!($?)) 
    { 
        throw “Cannot create the Storage Account [$DestStorageAccountName] on $Location. Error Detail: $errorVariable” 
    } 
   
    Write-Verbose “Created Destination Storage Account [$DestStorageAccountName] with AccountType of [$($DestStorageAccount.AccountType)]”    
}
else
{
    Write-Host “Destination Storage account [$DestStorageAccountName] already exists. Storage account type is [$($DestStorageAccount.AccountType)]”

    #make sure if the account already exists it is of type premium storage
    if( $DestStorageAccount.AccountType -ne $StorageAccountTypePremium )
    {
        Write-Error “Storage account [$DestStorageAccountName] account type of [$($DestStorageAccount.AccountType)] is invalid”
        return
    }
}

Write-Host “Source VM Name is [$SourceVMName] and Service Name is [$SourceServiceName]”

#Get VM Details
$SourceVM = Get-AzureVM -Name $SourceVMName -ServiceName $SourceServiceName -ErrorAction SilentlyContinue

if($SourceVM -eq $null)
{
    Write-Error "Unable to find Virtual Machine [$SourceVMName] in Service Name [$SourceServiceName]"
    return
}

Write-Host “vm name is [$($SourceVM.Name)] and vm status is [$($SourceVM.Status)]”

#need to shutdown the existing VM before copying its disks.
if($SourceVM.Status -eq “ReadyRole”)
{
    Write-Host “Shutting down virtual machine [$SourceVMName]”
    #Shutdown the VM
    Stop-AzureVM -ServiceName $SourceServiceName -Name $SourceVMName -Force
}

$osdisk = $SourceVM | Get-AzureOSDisk

Write-Host “OS Disk name is $($osdisk.DiskName) and disk location is $($osdisk.MediaLink)”

$disk_configs = @{}

# Used to track disk copy status
$diskCopyStates = @()

##################################################################################################################
# Kicks off the async copy of VHDs
##################################################################################################################

# Copies to remote storage account
# Returns blob copy state to poll against
function StartCopyVHD($sourceDiskUri, $diskName, $OS, $destStorageAccountName, $destContainer)
{
    Write-Host “Destination Storage Account is [$destStorageAccountName], Destination Container is [$destContainer]”

    #extract the name of the source storage account from the URI of the VHD
    $sourceStorageAccountName = $sourceDiskUri.Host.Replace(“.blob.core.windows.net”, “”)
   

    $vhdName = $sourceDiskUri.Segments[$sourceDiskUri.Segments.Length - 1].Replace("%20", " ")
    $sourceContainer = $sourceDiskUri.Segments[$sourceDiskUri.Segments.Length - 2].Replace("/", "")

    $sourceStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $sourceStorageAccountName).Primary
    $sourceContext = New-AzureStorageContext -StorageAccountName $sourceStorageAccountName -StorageAccountKey $sourceStorageAccountKey

    $destStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $destStorageAccountName).Primary
    $destContext = New-AzureStorageContext -StorageAccountName $destStorageAccountName -StorageAccountKey $destStorageAccountKey
    if((Get-AzureStorageContainer -Name $destContainer -Context $destContext -ErrorAction SilentlyContinue) -eq $null)
    {
        New-AzureStorageContainer -Name $destContainer -Context $destContext | Out-Null

        while((Get-AzureStorageContainer -Name $destContainer -Context $destContext -ErrorAction SilentlyContinue) -eq $null)
        {
            Write-Host “Pausing to ensure container $destContainer is created..” -ForegroundColor Green
            Start-Sleep 15
        }
    }

    # Save for later disk registration
    $destinationUri = “https://$destStorageAccountName.blob.core.windows.net/$destContainer/$vhdName”
   
    if($OS -eq $null)
    {
        $disk_configs.Add($diskName, “$destinationUri”)
    }
    else
    {
       $disk_configs.Add($diskName, “$destinationUri;$OS”)
    }

    #start async copy of the VHD. It will overwrite any existing VHD
    $copyState = Start-AzureStorageBlobCopy -SrcBlob $vhdName -SrcContainer $sourceContainer -SrcContext $sourceContext -DestContainer $destContainer -DestBlob $vhdName -DestContext $destContext -Force

    return $copyState
}

##################################################################################################################
# Tracks status of each blob copy and waits until all the blobs have been copied
##################################################################################################################

function TrackBlobCopyStatus()
{
    param($diskCopyStates)
    do
    {
        $copyComplete = $true
        Write-Host “Checking Disk Copy Status for VM Copy” -ForegroundColor Green
        foreach($diskCopy in $diskCopyStates)
        {
            $state = $diskCopy | Get-AzureStorageBlobCopyState | Format-Table -AutoSize -Property Status,BytesCopied,TotalBytes,Source
            if($state -ne “Success”)
            {
                $copyComplete = $true
                Write-Host “Current Status” -ForegroundColor Green
                $hideHeader = $false
                $inprogress = 0
                $complete = 0
                foreach($diskCopyTmp in $diskCopyStates)
                { 
                    $stateTmp = $diskCopyTmp | Get-AzureStorageBlobCopyState
                    $source = $stateTmp.Source
                    if($stateTmp.Status -eq “Success”)
                    {
                        Write-Host (($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)) -ForegroundColor Green
                        $complete++
                    }
                    elseif(($stateTmp.Status -like “*failed*”) -or ($stateTmp.Status -like “*aborted*”))
                    {
                        Write-Error ($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)
                        return $false
                    }
                    else
                    {
                        Write-Host (($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)) -ForegroundColor DarkYellow
                        $copyComplete = $false
                        $inprogress++
                    }
                    $hideHeader = $true
                }
                if($copyComplete -eq $false)
                {
                    Write-Host “$complete Blob Copies are completed with $inprogress that are still in progress.” -ForegroundColor Magenta
                    Write-Host “Pausing 60 seconds before next status check.” -ForegroundColor Green 
                    Start-Sleep 60
                }
                else
                {
                    Write-Host “Disk Copy Complete” -ForegroundColor Green
                    break 
                }
            }
        }
    } while($copyComplete -ne $true) 
    Write-Host “Successfully Copied up all Disks” -ForegroundColor Green
}

# Mark the start time of the script execution
$startTime = Get-Date 

Write-Host “Destination storage account name is [$DestStorageAccountName]”

# Copy disks using the async API from the source URL to the destination storage account
$diskCopyStates += StartCopyVHD -sourceDiskUri $osdisk.MediaLink -destStorageAccount $DestStorageAccountName -destContainer $DestStorageAccountContainer -diskName $osdisk.DiskName -OS $osdisk.OS

# copy all the data disks
$SourceVM | Get-AzureDataDisk | foreach {

    Write-Host “Disk Name [$($_.DiskName)], Size is [$($_.LogicalDiskSizeInGB)]”

    #Premium storage does not allow disks smaller than 10 GB
    if( $_.LogicalDiskSizeInGB -lt 10 )
    {
        Write-Warning "Data Disk [$($_.DiskName)] with size [$($_.LogicalDiskSizeInGB) GB] is less than 10 GB so it cannot be added"
    }
    else
    {
        Write-Host “Destination storage account name is [$DestStorageAccountName]”
        $diskCopyStates += StartCopyVHD -sourceDiskUri $_.MediaLink -destStorageAccount $DestStorageAccountName -destContainer $DestStorageAccountContainer -diskName $_.DiskName
    }
}

#check that status of blob copy. This may take a while if you are doing cross region copies.
#even in the same region a 127 GB takes nearly 10 minutes
TrackBlobCopyStatus -diskCopyStates $diskCopyStates

# Mark the finish time of the script execution
$finishTime = Get-Date 
 
# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds
Write-Host “The disk copies completed in $TotalTime seconds.” -ForegroundColor Green

Write-Host “Registering Copied Disk” -ForegroundColor Green

$luncount = 0   # used to generate unique lun value for data disks
$index = 0  # used to generate unique disk names
$OSDisk = $null

$datadisk_details = @{}

foreach($diskName in $disk_configs.Keys)
{
    $index = $index + 1

    $diskConfig = $disk_configs[$diskName].Split(“;”)

    #since we are using the same subscription we need to update the diskName for it to be unique
    $newDiskName = “$DestVMName” + “-disk-“ + $index

    Write-Host “Adding disk [$newDiskName]”

    #check to see if this disk already exists
    $azureDisk = Get-AzureDisk -DiskName $newDiskName -ErrorAction SilentlyContinue

    if(!$azureDisk)
    {

        if($diskConfig.Length -gt 1)
        {
           Write-Host "Adding OS disk [$newDiskName] -OS [$($diskConfig[1])] -MediaLocation [$($diskConfig[0])]"

           #Expect OS Disk to be the first disk in the array
           $OSDisk = Add-AzureDisk -DiskName $newDiskName -OS $diskConfig[1] -MediaLocation $diskConfig[0]

           $vmconfig = New-AzureVMConfig -Name $DestVMName -InstanceSize $VMSize -DiskName $OSDisk.DiskName 

        }
        else
        {
            Write-Host "Adding Data disk [$newDiskName] -MediaLocation [$($diskConfig[0])]"

            Add-AzureDisk -DiskName $newDiskName -MediaLocation $diskConfig[0]

            $datadisk_details[$luncount] = $newDiskName

            $luncount = $luncount + 1  
        }
    }
    else
    {
        Write-Error “Unable to add Azure Disk [$newDiskName] as it already exists”
        Write-Error “You can use Remove-AzureDisk -DiskName $newDiskName to remove the old disk”
        return
    }
}

#add all the data disks to the VM configuration
foreach($lun in $datadisk_details.Keys)
{
    $datadisk_name = $datadisk_details[$lun]

    Write-Host “Adding data disk [$datadisk_name] to the VM configuration”

    $vmconfig | Add-AzureDataDisk -Import -DiskName $datadisk_name  -LUN $lun
}

#read all the end points in the source VM and create them in the destination VM
#NOTE: I don’t copy ACL’s yet. I need to add this.
$SourceVM | get-azureendpoint | foreach {

    if($_.LBSetName -eq $null)
    {
        write-Host “Name is [$($_.Name)], Port is [$($_.Port)], LocalPort is [$($_.LocalPort)], Protocol is [$($_.Protocol)], EnableDirectServerReturn is [$($_.EnableDirectServerReturn)]]”
        $vmconfig | Add-AzureEndpoint -Name $_.Name -LocalPort $_.LocalPort -PublicPort $_.Port -Protocol $_.Protocol -DirectServerReturn $_.EnableDirectServerReturn
    }
    else
    {
        write-Host “Name is [$($_.Name)], Port is [$($_.Port)], LocalPort is [$($_.LocalPort)], Protocol is [$($_.Protocol)], EnableDirectServerReturn is [$($_.EnableDirectServerReturn)], LBSetName is [$($_.LBSetName)]”       
        $vmconfig | Add-AzureEndpoint -Name $_.Name -LocalPort $_.LocalPort -PublicPort $_.Port -Protocol $_.Protocol -DirectServerReturn $_.EnableDirectServerReturn -LBSetName $_.LBSetName -DefaultProbe
    }
}

#
if( $DeployToVnet )
{
    Write-Host “Virtual Network Name is [$VNetName] and Subnet Name is [$SubnetName]” 

    $vmconfig | Set-AzureSubnet -SubnetNames $SubnetName
    $vmconfig | New-AzureVM -ServiceName $DestServiceName -VNetName $VNetName -Location $Location
}
else
{
    #Creating the virtual machine
    $vmconfig | New-AzureVM -ServiceName $DestServiceName -Location $Location
}

#get any vm extensions
#there may be other types of extensions that be in the source vm. I don’t copy them yet
$SourceVM | get-azurevmextension | foreach {
    Write-Host “ExtensionName [$($_.ExtensionName)] Publisher [$($_.Publisher)] Version [$($_.Version)] ReferenceName [$($_.ReferenceName)] State [$($_.State)] RoleName [$($_.RoleName)]”
    get-azurevm -ServiceName $DestServiceName -Name $DestVMName -Verbose | set-azurevmextension -ExtensionName $_.ExtensionName -Publisher $_.Publisher -Version $_.Version -ReferenceName $_.ReferenceName -Verbose | Update-azurevm -Verbose
}

 

Conclusion

I had to look at many different code samples as well as MSDN documentation to create this script. I am grateful for all the open source samples folks are contributing, and this is my way of giving back to the Azure community. If you have questions and/or feature requests, drop me a line and I will do what I can to help.

Azure SDK 2.6 Diagnostics Improvements for Cloud Services

I haven’t blogged for a while because of being very busy at work. Things are slowing down a bit so I will try to write more frequently.

History

Azure SDK 2.5 made big changes to Azure diagnostics. It introduced the Azure PaaS Diagnostics extension. Even though this was a good long-term strategy, the implementation was less than perfect. Here are a few issues introduced by Azure SDK 2.5:

  1. Local emulator did not support diagnostics
  2. No support for using different diagnostics storage account for different environments
  3. Manual editing was required to create the XML configuration file needed by Set-AzureServiceDiagnosticsExtension, the PowerShell cmdlet required to deploy the diagnostics extension
  4. To make matters worse there was a bug in the PowerShell cmdlet which surfaced when you had a . in the name of a role.

All these factors made it impossible to do continuous integration/deployment for Cloud service projects.

A few days ago Azure SDK 2.6 was released. I went through the release notes and read the documentation. I ran tests to see if sanity had been restored. I am glad to report that all the issues introduced by SDK 2.5 have been fixed. Here is a summary of the improvements.

  1. Local emulator now supports diagnostics.
  2. Ability to specify a different diagnostics storage account for each service configuration
  3. To simplify configuration of the PaaS diagnostics extension, the package output from Visual Studio contains the public configuration XML for the diagnostics extension for each role.
  4. PowerShell version 0.9.0 which was released along with the Azure SDK 2.6 also fixed the pesky bug that was happening when you had a . in the name of the role.

Here is a document that provides all the gory details for Azure SDK 2.6 diagnostics changes.

https://msdn.microsoft.com/en-us/library/azure/dn186185.aspx

Overview

If you are developing applications and still not using continuous integration and continuous deployment, you should learn more about them. I will use the rest of this blog post to show how you can use PowerShell cmdlets to automate the installation and updating of the PaaS diagnostics extension for Cloud Services built using Azure SDK 2.6.

Details

I installed Azure SDK 2.6 on my development machine. I installed the PowerShell cmdlets (version 0.9.0) and the Azure CLI as well.

I created a simple Cloud Service Project. I added a web role and a worker role to it.

I added one more Service Configuration called "Test" to this project.

image 

I examined the properties of WebRole1 to see what had changed with SDK 2.6.

If you select "All Configurations" you can still enable/disable the diagnostics like you used to do in SDK 2.5.

image

When I selected the "Configure" button to configure the diagnostics, I found that we no longer have to select the diagnostics storage account in the "General" tab like we used to. The rest of the configuration is the same.

image

Returning to the configuration of WebRole1, I changed the Service Configuration to "Cloud".

In the past there was no way to configure a diagnostics storage account per configuration type.

But now we can define a different diagnostics storage account for each configuration type.

image

A quick examination of ServiceConfiguration.Cloud.cscfg confirmed that the diagnostics connection string was defined in it.

This makes a lot of sense because the rest of the environment-specific configuration settings are also defined in the same file.

image 

I did not want to deploy this project directly from Visual Studio because most build servers do not use Visual Studio to deploy applications.

First I created a deployment package by selecting the cloud project and selecting Package.

image

I selected the "Cloud" Service Configuration and pressed the "Package" button.

image

The project was built and packaged successfully. It opened up the location where the package and related files were created.

It created a directory called app.publish in the bin\debug directory under the cloud service project.

This is not any different from the past. However there is a new directory called Extensions.

image

The Extensions directory has a PubConfig.xml file for each role type. In the past you had to create this file manually from diagnostics.wadcfg. These files are needed by the PowerShell cmdlets that are used to deploy the diagnostics extension.

image

We use AppVeyor for continuous integration and deployment. It uses msbuild to build the projects.

I ran “Developer Command Prompt for Visual Studio 2013” and used the following command to build and package the cloud project.

msbuild <ccproj_file> /t:Publish /p:PublishDir=<temp_path>

I verified that msbuild also created the package and all the related files.
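For reference, here is a minimal sketch of what that step could look like in a build script. It assumes msbuild.exe is on the PATH (as it is in a Developer Command Prompt); the project and output paths are placeholders, not the actual paths from my project.

# Placeholder paths; adjust to your build server layout
$ccproj     = "C:\Build\DiagnosticsSDK26\DiagnosticsSDK26.ccproj"
$publishDir = "C:\Build\Output\"

# Build and package the cloud service project
& msbuild.exe $ccproj /t:Publish /p:PublishDir=$publishDir

# The Extensions folder with the PubConfig.xml files should be part of the packaged output
if (-not (Test-Path (Join-Path $publishDir "Extensions"))) {
    throw "Packaging did not produce the Extensions directory"
}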

PowerShell Cmdlets for Azure Diagnostics

image

For new Cloud Services there are two ways to apply diagnostics extensions.

  1. You can pass the extension configuration to New-AzureDeployment via –ExtensionConfiguration parameter.
  2. You can create the Cloud Service first and use Set-AzureServiceDiagnosticsExtension to apply the PaaS diagnostics extension.

You can learn about it here.

https://msdn.microsoft.com/en-us/library/azure/dn495270.aspx

I chose method one because it was faster than applying the extension in a separate call.
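For completeness, here is a minimal sketch of what method two might look like; the service name, role name, storage account and path are placeholders, and the parameters follow the cmdlet documentation linked above.

# Sketch only: apply the PaaS diagnostics extension to one role of an existing cloud service
$storageContext = New-AzureStorageContext -StorageAccountName "diagstorageaccount" -StorageAccountKey "<key>"

Set-AzureServiceDiagnosticsExtension -ServiceName "mycloudservice" -Slot Production `
    -Role "WebRole1" -StorageContext $storageContext `
    -DiagnosticsConfigurationPath "<path to PaaSDiagnostics.WebRole1.PubConfig.xml>"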

Deploying PaaS Diagnostics Extension for the first time

The following script creates a new Cloud Service, creates the diagnostics configuration and deploys the package, which also deploys the PaaS diagnostics extension.

I am setting the diagnostics extension for each Role separately.

At the end of this script I use Get-AzureServiceDiagnosticsExtension to verify if the diagnostics has been installed.

You can also use Visual Studio Server Explorer to view the diagnostics.

<#
.SYNOPSIS
Provisions a new cloud service with web/worker role built with SDK 2.6 and applies diagnostics extension

.DESCRIPTION
This script will create a new cloud service, deploy the cloud service and apply the azure diagnostics extension to each role type.
This cloud service has a WebRole1 and WorkerRole1
#>

$VerbosePreference = "Continue"
$ErrorActionPreference = "Stop"

$SubscriptionName = "Your Subscription Name"
$VMStorageAccount = "storage account used during deployment"
$service_name = 'cloud service name'
$location = "Central US"
$package = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\DiagnosticsSDK26.cspkg"
$configuration = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\ServiceConfiguration.Cloud.cscfg"
$slot = "Production"
#diagnostics storage account
$storage_name = 'diagnostics storage account name'
#diagnostics storage account key
$key= 'storage account key'


# SDK 2.6 tooling generates these pubconfig files for each role type
$webrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WebRole1.PubConfig.xml"
$workerrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WorkerRole1.PubConfig.xml"

#Print the version of the PowerShell Cmdlets you are currently using
(Get-Module Azure).Version

# Mark the start time of the script execution
$startTime = Get-Date

#set the default storage account for the subscription
Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccountName $VMStorageAccount

if(Test-AzureName -Service $service_name)
{
    Write-Host "Service [$service_name] already exists"
}
else
{
    #Create new cloud service
    New-AzureService -ServiceName $service_name -Label "Raj SDK 2.6 Diagnostics Demo" -Location $location
}

#create storage context
$storageContext = New-AzureStorageContext -StorageAccountName $storage_name -StorageAccountKey $key

$workerconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $workerrolediagconfig -role "WorkerRole1"
$webroleconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $webrolediagconfig -role "WebRole1"

#deploy to the new cloud service and apply the diagnostics extension
New-AzureDeployment -ServiceName $service_name -Package $package -Configuration $configuration -Slot $slot -ExtensionConfiguration @($workerconfig,$webroleconfig)

# Mark the finish time of the script execution
$finishTime = Get-Date

#Display the details of the extension
Get-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production

# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds
Write-Output "The script completed in $TotalTime seconds."

 

 

Update PaaS Diagnostics Extension

I wanted to see how we can update diagnostics extension so I made these changes to my project.

I added a new worker role to the same project. I also changed the configuration of diagnostics.

Typically an extension is only deployed once. To deploy the extension again you have two options:

  1. You can either change the name of the extension
  2. You can remove the extension and install it again

I chose the second option.

Here is what this script does:

It removes the PaaS Diagnostics extension from the cloud service

It creates PaaS diagnostics configuration for each role.

It updates the Cloud Service and applies PaaS diagnostics extension to each role including the new worker role Hard.WorkerRole.

Having a . in the name used to break Set-AzureServiceDiagnosticsExtension. It is nice to see it working now.

<#
.SYNOPSIS
Updates an existing Cloud service and applies azure diagnostics extension as well

.DESCRIPTION
This script removes the diagnostics extension, updates the cloud service, and applies the azure diagnostics extension to each role type.
This cloud service had a WebRole1 and WorkerRole1 initially. I added a new role called Hard.WorkerRole
I put . in the name because the SDK 2.5 Set-AzureServiceDiagnosticsExtension had a bug where a . in the name broke it.
#>

# Set the output level to verbose and make the script stop on error
$VerbosePreference = "Continue"
$ErrorActionPreference = "Stop"

$service_name = 'Cloud service name'
$storage_name = 'diagnostics storage account'
$key= 'storage account key'
$package = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\DiagnosticsSDK26.cspkg"
$configuration = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\ServiceConfiguration.Cloud.cscfg"

#Print the version of the PowerShell Cmdlets you are currently using
(Get-Module Azure).Version

# Mark the start time of the script execution
$startTime = Get-Date

#remove the old diagnostics extension
Remove-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production -ErrorAction SilentlyContinue -ErrorVariable errorVariable
if (!($?))
{
        Write-Error "Unable to remove diagnostics extension from Service [$service_name]. Error Detail: $errorVariable"
        Exit
}

$storageContext = New-AzureStorageContext -StorageAccountName $storage_name -StorageAccountKey $key
$webrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WebRole1.PubConfig.xml"
$workerrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WorkerRole1.PubConfig.xml"
$hardwrkdiagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.Hard.WorkerRole.PubConfig.xml"

#create extension config
$workerconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $workerrolediagconfig -role "WorkerRole1"
$webroleconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $webrolediagconfig -role "WebRole1"
$hardwrkconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $hardwrkdiagconfig -role "Hard.WorkerRole"

#upgrade the existing code and apply the diagnostics extension at the same time
Set-AzureDeployment -Upgrade -ServiceName $service_name -Mode Auto -Package $package -Configuration $configuration -Slot Production -ErrorAction SilentlyContinue -ErrorVariable errorVariable -ExtensionConfiguration @($workerconfig,$webroleconfig,$hardwrkconfig)
if (!($?))
{
        Write-Error "Unable to upgrade Service [$service_name]. Error Detail: $errorVariable"
        Exit
}

# Mark the finish time of the script execution
$finishTime = Get-Date

#Display the details of the extension
Get-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production

# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds
Write-Output "The script completed in $TotalTime seconds."

 

Summary

Azure SDK 2.6 has addressed most of the issues related to deploying diagnostics to Cloud Services that were introduced by SDK 2.5. The cleanest way to update the diagnostics extension is to remove the existing extension and set it again during deployment. I tested deploying the diagnostics extension individually on each role; it took 3-4 minutes per extension, so if you have a large number of roles your deployment times may increase. In my case, with 3 role types, the script took 12 minutes to run. When I used the -ExtensionConfiguration parameter of New-AzureDeployment and Set-AzureDeployment, the entire script took only 5 minutes.

NLog Target for Azure ServiceBus Event Hub

NLog is a popular open source logging framework for .Net applications. It writes to various destinations via Targets. It has a large number of Targets available. I created an NLog Target that can send messages to Azure ServiceBus EventHub. You can get the source code and documentation here: https://github.com/rajinders/nlog-targets-azureeventhub

I also created a NuGet package which you can download from here: https://www.nuget.org/packages/NLog.Targets.AzureEventHub/

If you already know how to use NLog it will take you a few minutes to start using the target.

Feel free to use it and let me know if you have any suggestions for improvements.

You may be wondering why anyone would want to send logs to Azure Event Hub. Most applications use logging frameworks to write application logs. These logs are not only helpful in debugging issues, they are also a source for business intelligence. There are already successful companies like Splunk, Logentries and Loggly who provide cloud based log aggregation services. If you wanted to create your own log aggregation service without writing a lot of code, you can do so on the Azure platform. You can send your log messages to EventHub with the NLog or Serilog targets for EventHub. You can leverage the Azure Stream Analytics service to process your log streams. You can even send these logs to Power BI to create dashboards. Both Azure Event Hub and Stream Analytics are highly scalable. Scaling up can be achieved by simple configuration changes.

Bloggers Guide to Azure Event Hub

I love the Integration/Middleware space. In the spring of 2004 I was working on a large implementation for a client. We had to integrate externally and internally with a large number of systems, and we also had a need for long running processes. We were already using BizTalk 2002. We came to know about a radical new version of BizTalk Server called BizTalk 2004. It was based on .Net and was re-written from scratch. As soon as we learned about its capabilities we knew that it was a far better product for what we were implementing. We made a decision to use BizTalk Server 2004 Beta during our development. Since the product we were building was releasing in fall/winter, we knew that it would become generally available before we went live. Making the decision to switch to BizTalk 2004 was easy. The hard part came when I had to design 30 plus long running processes using BizTalk. There wasn't any documentation. There were no BizTalk experts we could reach out to. At that time somebody began publishing a guide called "Bloggers Guide to BizTalk". It was a compiled help file which included blog posts from authors all over the world. Without this guide we would have failed to implement our solution using BizTalk 2004.

I still like the middleware space, but I have added Cloud, IoT and DevOps to my list of technologies I use every day. Azure Event Hub is a relatively new PaaS service that was announced at the last Build conference. It became generally available at TechEd Barcelona in October 2014. I will use this blog post to document various resources about the Azure ServiceBus EventHub service. I named it "Bloggers Guide to Azure Event Hub" as an ode to "Bloggers Guide to BizTalk". I want to make it easier for anybody learning about Azure Event Hub to find helpful resources that will quickly get them started. I will make weekly updates to keep it current.

 

Videos

Introduction to EventHub from TechEd Barcelona: http://channel9.msdn.com/Events/TechEd/Europe/2014/CDP-B307

Cloud Cover Show about Event Hub: http://search.channel9.msdn.com/content/result?sid=b8411351-e4b2-4fff-bb3c-a64b566c7d99&rid=85437dcd-37ee-4965-ab09-a3d4013c30d7

 

MSDN Documentation

Event Hub Overview: https://msdn.microsoft.com/en-us/library/azure/dn836025.aspx

Event Hubs Programming Guide: https://msdn.microsoft.com/en-us/library/azure/dn789972.aspx

Event Hub API Overview: https://msdn.microsoft.com/en-us/library/azure/dn790190.aspx

 

Event Processor Host

EventProcessorHost class: https://msdn.microsoft.com/en-us/library/azure/microsoft.servicebus.messaging.eventprocessorhost.aspx

EventProcessorHost is covered in the API overview, but I want to call it out once again as it is the easiest way to process messages out of Event Hub. It may meet the needs of more than 90-95% of scenarios. To get an in-depth understanding of EventProcessorHost you should read this series of blog posts from Dan Rosanova.

Event Processor Host Best Practices Part I : http://blogs.msdn.com/b/servicebus/archive/2015/01/16/event-processor-host-best-practices-part-1.aspx

Event Processor Host Best Practices Part II: http://blogs.msdn.com/b/servicebus/archive/2015/01/21/event-processor-host-best-practices-part-2.aspx

 

Code Samples

ServiceBus Event Hubs Getting Started : https://code.msdn.microsoft.com/windowsapps/Service-Bus-Event-Hub-286fd097

Scale Out Event Processing with Event Hub: https://code.msdn.microsoft.com/windowsapps/Service-Bus-Event-Hub-45f43fc3

ServiceBus Event Hub Direct Receiver: https://code.msdn.microsoft.com/windowsapps/Event-Hub-Direct-Receivers-13fa95c6

 

Reference Architecture

data-pipeline

https://github.com/mspnp/data-pipeline

If you are looking for a reference architecture and code sample showing how to build a scalable real-world application, data-pipeline will be helpful to you.

Real-Time Event Processing with Microsoft Azure Stream Analytics

http://azure.microsoft.com/en-us/documentation/articles/stream-analytics-real-time-event-processing-reference-architecture/

This reference architecture is about Stream Analytics but it shows how Event Hub is a core part of the real-time event processing architecture.

 

Tools

ServiceBus Explorer

https://code.msdn.microsoft.com/windowsapps/Service-Bus-Explorer-f2abca5a

Anybody developing ServiceBus applications should be using this tool. It has support for Queues, Topics and EventHub as well.

 

Provisioning

If you want to provision an EventHub in Azure, your options are:

1. Use the Azure Management Portal

2. Use the SDK to provision it in code (see the sketch after this list)

3. Use the REST API

4. Paolo Salvatori created a PowerShell script that invokes the REST API to create a Service Bus namespace and an EventHub. This is the script I am using in my current project. http://blogs.msdn.com/b/paolos/archive/2014/12/01/how-to-create-a-service-bus-namespace-and-an-event-hub-using-a-powershell-script.aspx
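As an illustration of option 2, here is a rough sketch that provisions an Event Hub from PowerShell using the Service Bus SDK. The assembly path, connection string and hub name are placeholders, and it assumes the WindowsAzure.ServiceBus NuGet package has been restored locally.

# Load the Service Bus SDK assembly (placeholder path to the restored NuGet package)
Add-Type -Path "C:\packages\WindowsAzure.ServiceBus\lib\net45-full\Microsoft.ServiceBus.dll"

# Placeholder namespace-level connection string
$connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>"
$namespaceManager = [Microsoft.ServiceBus.NamespaceManager]::CreateFromConnectionString($connectionString)

# Describe and create the Event Hub if it does not already exist
$description = New-Object Microsoft.ServiceBus.Messaging.EventHubDescription("myeventhub")
$description.PartitionCount = 16
$namespaceManager.CreateEventHubIfNotExists($description) | Out-Null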

 

Logging Framework

EventHub makes an excellent target for ingesting logs at scale.

Serilog

Serilog is an easy to use .Net structured logging framework. It already has an EventHub sink. You can check it out here:

https://github.com/serilog/serilog/tree/dev/src/Serilog.Sinks.AzureEventHub

 

Miscellaneous Blog Posts

Azure Event Hubs – All my thoughts by Nino Crudele: http://ninocrudele.me/2014/12/12/azure-event-hub-all-my-thoughts/

Getting Started with Azure Event Hub by Fabric Controller: http://fabriccontroller.net/blog/posts/getting-started-azure-service-bus-event-hubs-building-a-real-time-log-stream/

Azure’s New Event Hub – Brent’s Notepad: https://brentdacodemonkey.wordpress.com/2014/11/18/azures-new-event-hub/

Sending Raspberry Pi data to Event Hub and many blog posts about Azure Event Hub on Faister’s blog: http://blog.faister.com/

Sending Kinect data to Azure Event Hub at Alejandro’s blog: http://blogs.southworks.net/ajezierski/2014/11/10/azure-event-hubs-the-thing-and-the-internet/

Azure Stream Analytics, Scenarios and Introduction by Sam Vanhoutte. This blog post is about Azure Stream Analytics but both of these services will work together in many scenarios. http://www.codit.eu/blog/2015/01/azure-stream-analytics-getting-started/

Azure Event Hub Updates from a NetMF Device on Dev Mobiles blog: http://blog.devmobile.co.nz/2014/11/30/azure-event-hub-updates-from-a-netmf-device/

Azure Usage monitoring with Azure Automation

When you purchase an Azure subscription it comes with usage caps for various resources. As an example, the usage cap for the number of cores is 20. You can contact Azure Support and open a free billing support case to increase this core limit.

In the past few years I have had many clients ask for a basic alerting capability when they are about to exceed their resource limits. They have Azure subscriptions that are being used by various teams and they want to know if they are reaching their Azure usage limit. They can install the Azure PowerShell cmdlets and easily find the answer to this question. However, they are looking for an automated alerting service. I heard this request last week, so I thought I would use Azure Automation to implement this solution.

There are two use case scenarios for this script:

1. It can be used by an Azure subscription owner to understand if they are about to exceed the resource (compute cores) quota for an Azure subscription.

2. There have been times when you keep Azure services running longer than you need them. This script will run on a schedule and inform you about the compute cores you are currently using. This could have helped me last year when I left an HDInsight cluster with 32 cores running for a month.

Azure Automation recently became generally available and it can be used to automate error-prone, time-consuming cloud management tasks. It leverages PowerShell based workflow scripts to automate tasks. You can learn more about it here:

http://azure.microsoft.com/en-us/services/automation/

I also highly recommend this course in virtual academy.

http://www.microsoftvirtualacademy.com/training-courses/automating-the-cloud-with-azure-automation

Here are the high level steps to implement this script.

  1. Create Azure automation account
  2. Create Credential Asset for Azure Administration
  3. Create Credential Asset for Office 365 user that will be used to send emails
  4. Create the runbook
  5. Test the runbook
  6. Publish the runbook
  7. Link it to a schedule
  8. View Job history

Create Azure Automation Account using Azure Portal

You can do so by selecting Automation and “+ Create” button.

image

Right now you can create an Azure Automation account in "East US", "Southeast Asia" and "West Europe" only.

When you create an account in a region it stores its assets in that region. However, this account can automate tasks in any other region.

image

Creating Credentials

Azure Active Directory for Azure Credentials

Create a new user in Azure Active Directory

Use Azure Portal and select “Active Directory”

Select your Active Directory instance and navigate to “User” section and use “Add User” button in the bottom toolbar.

image

Select “new user in your organization”

Enter the user name.

image

Enter user information in the User Profile section.

image

Press “Create” button and it will show you the temporary password.

Sign into the Azure Active Directory as this newly created user and change the temporary password.

Sign in to Windows Azure Active Directory

CoAdmin Access

Make this new user a Co-administrator for the Azure subscription you want to monitor.

You do this by selecting "Settings" -> Administrators and pressing the "Add" button in the bottom toolbar.

image

 

On the “Add A CO-Administrator” screen specify the Azure AD user you just created and select the appropriate subscription from the list below.

image 

Create an asset of type Credentials in your automation account

Automation accounts have assets that can be used by runbooks. These are a convenient place to securely store user names, passwords and connection strings.

We need to create Credentials to get access to the Azure subscription. Select your newly created Azure Automation account and select "Assets". Press the "Add Setting" button.

image

Select “add credential”

image

There are two options for credentials:

1. Windows PowerShell Credential

2. Certificate

You need to select "Windows PowerShell Credential".

image

Enter the name and password of the Azure AD user that is also a Co-Administrator to the Azure subscription you are monitoring.

image
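If you would rather script this step than use the portal, a minimal sketch using the Azure Automation cmdlets from the 0.9.0 module might look like the following. The account name, asset name and user name are placeholders, and the exact parameter names are worth verifying against the cmdlet help.

# Sketch only: create a PSCredential and store it as a credential asset in the automation account
$userName = "autoadmin@yourtenant.onmicrosoft.com"   # placeholder UPN
$password = Read-Host -Prompt "Password for $userName" -AsSecureString
$psCred   = New-Object System.Management.Automation.PSCredential($userName, $password)

New-AzureAutomationCredential -AutomationAccountName "MyAutomationAccount" `
    -Name "AzureAdminCred" -Value $psCred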

Create Office 365 Credentials to send out emails

I have an Office 365 small business account. I have a separate Azure subscription. Until now I never had a need to use the Active Directory associated with my Office 365 account. Here are the steps to set up credentials for Office 365 as assets in Azure Automation.

Use the Azure Management Portal: New -> App Services -> Active Directory -> Directory -> Custom Create

On the Add Directory popup you need to select “Use Existing directory”

image

You will be asked to sign in as administrator for your Office 365 account.

Once Office 365 Directory has been added to the Portal you can see the list of existing users or add a new user that will be used to send out emails about Azure  resource usage.

You need to create an asset of type Credentials in your Azure Automation account next.

The steps to create the Credentials are identical to the steps used to create the Azure administration credential. I named the credential object O365Cred.

Create Runbook

Select your Azure Automation Account and select "New -> Automation -> Runbook -> Quick Create" to create your new runbook.

You can use the Author tab to create the runbook. Authoring in the portal worked OK for me, but I had trouble navigating through the script as it grew longer. I tried IE and Chrome and got the same results. In the future I may create the runbook in the PowerShell ISE first and unit test it in the Azure portal.

Here is the script for the runbook. It looks like a normal PowerShell script with a few differences:

You declare Parameters for the runbook in lines 3 through 9.

You retrieve the credentials for the Azure administration account in line 12.

You determine the current resources consumed in line 16

I want you to look at line 21 carefully, as this is where I get the list of services that are not in "StoppedDeallocated" status. These are the services that are incurring compute charges. Automation runbooks do not support positional parameters. I had to add -FilterScript after the Where-Object to make this expression work. Without the -FilterScript I was getting the following error:

azure automation parameter set cannot be resolved using the specified named parameters.
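To illustrate the change, here is a small sketch; the positional form is what fails inside a runbook workflow.

# Fails inside a runbook workflow with the "parameter set cannot be resolved" error:
#   Get-AzureVM | Where-Object { $_.Status -ne 'StoppedDeallocated' }

# Works, because the script block is passed as a named parameter:
Get-AzureVM | Where-Object -FilterScript { $_.Status -ne 'StoppedDeallocated' }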

You retrieve the office 365 credentials in line 30

You send email with Send-MailMessage cmdlet in line 36

workflow Get-CurrentAzureResourceUsage
{
    param (
       [Parameter(Mandatory=$False)]
       [string] $AzureAdmin = "autoadmin@xxxxxxxxxx.onmicrosoft.com",
       [Parameter(Mandatory=$False)]
       [string] $SubName = "Your sub name",
       [Parameter(Mandatory=$False)]
       [string] $MessageTo = "Your email address"
    )

    $cred = Get-AutomationPSCredential -Name $AzureAdmin

    Add-AzureAccount -Credential $cred

    $details = Get-AzureSubscription -Name $SubName -ExtendedDetails

    $MaxCoreCount = $details.MaxCoreCount
    $CurrentCoreCount = $details.CurrentCoreCount

    $VMSNotDeallocated = get-azurevm | Where-Object -FilterScript { $_.Status -ne 'StoppedDeallocated' } | Select-Object ServiceName

    $MessageBody =  [string]::Format("You are using {0:N0} of {1:N0} cores.",$CurrentCoreCount, $MaxCoreCount)

    if($VMSNotDeallocated)
    {
        $MessageBody =  $MessageBody + [string]::Format("The following services are still incurring compute charges:{0}", $VMSNotDeallocated)
    }

    $AzureO365Credential = Get-AutomationPSCredential -Name "O365Cred"

    if ($AzureO365Credential)
    {
        $MessageFrom = $AzureO365Credential.Username
        $MessageSubject = "Azure Subscription Resource Usage"
        Send-MailMessage -To $MessageTo -Subject $MessageSubject -Body $MessageBody -UseSsl -Port 587 -SmtpServer 'smtp.office365.com' -From $MessageFrom -BodyAsHtml -Credential $AzureO365Credential
    }
    else
    {
      throw "AzureO365Credential not found"
    }

    Write-Output "Finished running script"
}

Testing

You can test the runbook in the portal by pressing the “Test” button in the bottom toolbar. When you run your tests you will see a window to enter the parameters. If the script runs successfully you will see the output.

Publishing

Once your testing is complete you can press the "Publish" button to publish this runbook.

Here is an email received from the runbook.

You are using 2 of 20 cores.The following services are still incurring compute charges:@{ServiceName=sansoroprovtest; PSComputerName=localhost; PSShowComputerName=True; PSSourceJobInstanceId=5d402195-f0a1-4a72-8b72-c27f0633ab58}

You can schedule this runbook to run on a daily or hourly basis.

You can create a new schedule by selecting “Schedule” and “Link to New Schedule”

Adding a Schedule

image

image

image
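If you prefer to link the schedule from PowerShell instead of the portal, a rough sketch using the Azure Automation cmdlets could look like this. The account, runbook and schedule names are placeholders, and parameter names may differ slightly across module versions, so verify them against the cmdlet help.

# Sketch: create a daily schedule and link it to the published runbook
$account = "MyAutomationAccount"   # placeholder

New-AzureAutomationSchedule -AutomationAccountName $account -Name "DailyUsageCheck" `
    -StartTime (Get-Date).AddDays(1).Date.AddHours(7) -DayInterval 1

Register-AzureAutomationScheduledRunbook -AutomationAccountName $account `
    -Name "Get-CurrentAzureResourceUsage" -ScheduleName "DailyUsageCheck"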

You can view the Job History by looking at the Job section of the runbook.

 

image

You can drill down and view the details of the last run.

Summary section of the history shows job summary, input parameters and script output.

image

image

There is also a history section that shows information about previous executions of the runbook.

image

With this simple example I hoped to demonstrate how you can automate cloud management tasks using Azure automation runbooks. Here are a few things about Azure automation worth mentioning:

  • Runbooks can call other runbooks inline or invoke them asynchronously.
  • You can leverage integration modules as well. As an example, I wanted to use Azure Resource Manager with Azure Automation but it is currently not supported. All I had to do was zip the Azure Resource Manager directory, upload it and start using it. It is still not officially supported.
  • I was surprised to learn that we can call runbooks from on-premises PowerShell cmdlets.
  • You can run parallel activities in these runbooks (see the sketch after this list).
  • Since they are based on workflows you can save the state of a running runbook and roll back if needed.
  • Runbooks don't support positional parameters.
  • Certain cmdlets like Write-Host are not supported. I replaced Write-Host with Write-Output.
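Here is a minimal sketch of the parallel activities mentioned above. The workflow name is hypothetical and the two cmdlet calls are only there to show the syntax.

workflow Get-InventoryInParallel
{
    # Both activities run at the same time; the workflow continues
    # once the parallel block has finished.
    parallel
    {
        Get-AzureVM      | Write-Output
        Get-AzureService | Write-Output
    }
}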

Azure Automation is an easy, secure, flexible, extensible and scalable way to automate cloud management tasks. Most of your existing PowerShell scripts can be easily converted into runbooks. There is already a gallery of runbooks available in the Azure portal. You can import these runbooks and use them to automate tasks. You can find many sample runbooks here:

https://social.technet.microsoft.com/Search/en-US/scriptcenter?query=azure%20automation&beta=0&ac=5#refinementChanges=&pageNumber=2&showMore=false

I highly recommend reading “Authoring Automation Runbooks” guide

http://technet.microsoft.com/en-us/library/dn469262.aspx

As a future enhancement you can pass in subscriptions via a JSON/XML file stored in blob storage. You can send push notifications in addition to emails.

We have barely scratched the surface of Azure Automation. It can and will play a pivotal role in implementing continuous deployment and other DevOps-related tasks. I have started using Azure Automation in a few of my projects. I will share my learnings in future blog posts. How are you using Azure Automation?