Forecast Cloudy – Profiling Azure Cloud Service

We can get an in-depth analysis of the computational aspects of how an Azure application runs by using the Visual Studio Profiler. Below I will show you how to use the Sampling performance-gathering method in the Visual Studio Profiler to profile a Cloud Service in Azure. If you need basic information on the Visual Studio Profiler, start here –

https://docs.microsoft.com/en-us/visualstudio/profiling/beginners-guide-to-performance-profiling
Sampling is a statistical profiling method that shows you the functions that are doing most of the user mode work in the application. Sampling is a good place to start to look for areas to speed up your application.

At specified intervals, the Sampling method collects information about the functions that are executing in your application. After you finish a profiling run, the Summary view of the profiling data shows the most active function call tree, called the Hot Path, where most of the work in the application was performed. The view also lists the functions that were performing the most individual work, and provides a timeline graph you can use to focus on specific segments of the sampling session.

Sampling is the most common method of application profiling. There are other methods as well, but they may come with more performance overhead or require more application instrumentation.
Make sure you enable the appropriate settings when publishing your application to Azure:

  • In Solution Explorer, open the shortcut menu for your Azure project, and then choose Publish.
  • In the Advanced Settings tab, select the Enable profiling check box.

  • Choose Settings, select Sampling and Enable Tier Interaction Profiling, and click OK.
  • Click Next and publish the application.

Once your profiling run has completed, you can view the profiling reports. To get them, do the following:

  • Using Visual Studio Server Explorer, expand Azure -> Cloud Services -> Your_Cloud_Role -> Production (Profiling), right-click the instance, and click View Profiling Report.
  • It may take a couple of minutes for the profiling report to show up. Click the Save icon to save a local copy for analysis.

  • Once you are done with profiling, you may republish the application without that option checked.

Happy performance hunting. For more see – http://msdn.microsoft.com/library/azure/hh369930.aspx.

Purgamentum Init Venari – Analyzing Java GC Using IBM Pattern Modeling Tool

Recently, while again looking at some GC issues on the IBM WebSphere platform for a buddy of mine, I got to learn a new tool – Pattern Modeling and Analysis Tool for IBM Java Garbage Collector (PMAT). This is the second post-mortem analysis tool from IBM I have had the privilege to work with, as I previously profiled JCA – Javacore Dump Analysis Tool – here: https://gennadny.wordpress.com/2016/03/28/javacore-dump-analysis-using-jca-ibm-thread-and-monitor-dump-analyzer-for-java/

The PMAT tool parses verbose GC traces, analyzes Java heap usage, and recommends key configurations based on pattern modeling of Java heap usage.

Why do we need it?

When the JVM (Java virtual machine) cannot allocate an object from the current heap because of lack of space, a memory allocation fault occurs, and the Garbage Collector is invoked. The first task of the Garbage Collector is to collect all the garbage that is in the heap. This process starts when any thread calls the Garbage Collector either indirectly as a result of allocation failure or directly by a specific call to System.gc(). The first step is to get all the locks needed by the garbage collection process. This step ensures that other threads are not suspended while they are holding critical locks. All other threads are then suspended. Garbage collection can then begin. It occurs in three phases: Mark, Sweep, and Compaction (optional).

Sometimes you run into issues. The most common are either performance problems in applications due to especially long-running, aka “violent”, GCs, or rooted objects on the heap not getting cleaned up and the application crashing with the dreaded OutOfMemory error. Either way, you end up having to analyze garbage collection with verbose GC on.

Verbose GC is a command-line option that one can supply to the JVM at start-up time. The format is: -verbose:gc or -verbosegc. This option switches on a substantial trace of every garbage collection cycle. The format of the generated information is not standardized and therefore varies among platforms and releases.

This trace should allow one to see the gross heap usage in every garbage collection cycle. For example, one could monitor the output to see the changes in the free heap space and the total heap space. This information can be used to determine whether garbage collections are taking too long to run; whether too many garbage collections are occurring; and whether the JVM crashed during garbage collection.
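
For illustration, on IBM JDKs the trace can also be redirected to a file with the -Xverbosegclog option – a minimal sketch, where the application name is just a placeholder:

java -verbose:gc -Xverbosegclog:verbosegc.log MyApp

The resulting verbosegc.log is exactly the kind of trace PMAT consumes.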

How does it work?

PMAT analyzes verbose GC traces by parsing the traces and building pattern models. PMAT recommends key configurations by executing a diagnosis engine and pattern modeling algorithm. If there are any errors related with Java heap exhaustion or fragmentation in the verbose GC trace, PMAT can diagnose the root cause of failures. PMAT provides rich chart features that graphically display Java heap usage.

Where do I get it?

You can download this tool from – ftp://public.dhe.ibm.com/software/websphere/appserv/support/tools/pmat/ga456.jar

Running the tool

      Run gaNNN.jar with the Java Runtime Environment (NNN is the version number).
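
      For example, for the version linked above:

      java -jar ga456.jar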

      You will see the following initial screen:

[Screenshot: PMAT initial screen]

Select and open the verbose GC log:

Process the log and view Summary/Reports:

More detailed information on the tool can be found in the presentation here – http://www-01.ibm.com/support/docview.wss?uid=swg27007240&aid=1

Jinwoo Hwang, a technical leader at IBM WebSphere Application Server Technical Support, created this tool as well as JCA. I recommend that everyone read his articles on JVM internals – http://websphere.sys-con.com/node/1229281, http://www-01.ibm.com/support/docview.wss?uid=swg27011855, http://www-01.ibm.com/support/docview.wss?uid=swg27018423

Hope this short note helps

Carpe Noctem – Using SQL Server Diagnostics (Preview) to analyze a SQL Server minidump

Microsoft just released the SQL Server Diagnostics (Preview) extension within SQL Server Management Studio, along with Developer APIs, to empower SQL Server customers to achieve more through a variety of offerings to self-resolve SQL Server issues.

So what can it do?

    • Analyze SQL Server dumps.

Customers should be able to debug and self-resolve memory dump issues from their SQL Server instances and receive recommended Knowledge Base (KB) article(s) from Microsoft, which may be applicable for the fix.

  • Review recommendations to keep SQL Server instances up to date.

 

Customers will be able to keep their SQL Server instances up-to-date by easily reviewing the recommendations for their SQL Server instances. Customers can filter by product version or by feature area (e.g. Always On, Backup/Restore, Column Store, etc.) and view the latest Cumulative Updates (CU) and the underlying hotfixes addressed in the CU.

  • Developers who want to discover and learn about Microsoft APIs can view the developer portal and then use the APIs in their custom applications. Developers can log and discuss issues and even submit their applications to the application gallery.

 

So I installed this extension and decided to give it a go with one of the SQL Server minidumps I have.

It actually correctly identified an issue and issued suggestions. So, while this may not work on every dump, it should be worth trying before you start up WinDbg.

Give it a try. Hope this helps.

Forecast Cloudy – NGINX On Microsoft Azure

Nginx (pronounced “engine x”) is web server software. It can act as a reverse proxy server for TCP, UDP, HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer and an HTTP cache. Nginx comes in two flavors: a free and open source basic version and a more advanced Plus version. Here is a comparison of both in terms of features from the Nginx docs:

https://www.nginx.com/products/feature-matrix/

As you may know, the NGINX Plus flavor is already available as a VM in the Azure Marketplace. So to set it up in Azure, all I had to do was the following:

  • Go to the Azure Marketplace, find NGINX Inc
  • Pick Resource Group or Classic deployment (you will want to pick Azure Resource Group – see below)
  • Pick your VM size (I picked the smallest, A1 Standard, for my experiment)

This is all done via the portal, but what if you really want to use Azure Resource Manager templates to automate this?

Azure originally provided only the classic deployment model. In this model, each resource existed independently; there was no way to group related resources together. Instead, you had to manually track which resources made up your solution or application, and remember to manage them in a coordinated approach. To deploy a solution, you had to either create each resource individually through the classic portal or create a script that deployed all of the resources in the correct order. To delete a solution, you had to delete each resource individually. You could not easily apply and update access control policies for related resources. Finally, you could not apply tags to resources to label them with terms that help you monitor your resources and manage billing.

With the introduction of the new Azure Portal it is possible to view resources (such as websites, virtual machines and databases) as a single logical unit. This logical unit is called a resource group. The following screenshot shows a resource group called ecommerce-westus which contains a website, a SQL database and an Application Insights instance.

[Screenshot: the ecommerce-westus resource group]

Resource groups aren’t visible in the old portal, but almost everything within your Azure subscription exists within a set of resource groups that were created by default when the preview portal opened for business. If you access the new portal, press the Browse button on the jump bar (the icons down the left hand side of the screen) and navigate to resource groups you should see a whole bunch listed.

The benefit of resource groups is that they allow an Azure administrator to roll-up billing and monitoring information for resources within a resource group and manage access to those resources as a set. This can be extremely useful when you have a single subscription but you need to do cost recovery on the resources used by a customer or internal department.

For more on Azure Resource Manager see  docs – https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/ , https://channel9.msdn.com/Shows/Edge/Edge-Show-121-Azure-Resource-Manager , http://www.codeproject.com/Articles/854592/Using-Azure-Resource-Manager

Azure Resource Manager allows for a new deployment paradigm via ARM Templates – https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-deployment-model

Below is a sample template to deploy the NGINX VM:

"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "location": {
            "type": "String"
        },
        "virtualMachineName": {
            "type": "String"
        },
        "virtualMachineSize": {
            "type": "String"
        },
        "adminUsername": {
            "type": "String"
        },
        "storageAccountName": {
            "type": "String"
        },
        "virtualNetworkName": {
            "type": "String"
        },
        "networkInterfaceName": {
            "type": "String"
        },
        "networkSecurityGroupName": {
            "type": "String"
        },
        "adminPublicKey": {
            "type": "String"
        },
        "availabilitySetName": {
            "type": "String"
        },
        "availabilitySetPlatformFaultDomainCount": {
            "type": "String"
        },
        "availabilitySetPlatformUpdateDomainCount": {
            "type": "String"
        },
        "storageAccountType": {
            "type": "String"
        },
        "diagnosticsStorageAccountName": {
            "type": "String"
        },
        "diagnosticsStorageAccountId": {
            "type": "String"
        },
        "diagnosticsStorageAccountType": {
            "type": "String"
        },
        "addressPrefix": {
            "type": "String"
        },
        "subnetName": {
            "type": "String"
        },
        "subnetPrefix": {
            "type": "String"
        },
        "publicIpAddressName": {
            "type": "String"
        },
        "publicIpAddressType": {
            "type": "String"
        }
    },
    "variables": {
        "vnetId": "[resourceId('gennadykngnix','Microsoft.Network/virtualNetworks', parameters('virtualNetworkName'))]",
        "subnetRef": "[concat(variables('vnetId'), '/subnets/', parameters('subnetName'))]",
        "metricsresourceid": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/', 'Microsoft.Compute/virtualMachines/', parameters('virtualMachineName'))]",
        "metricsclosing": "[concat('')]",
        "metricscounters": "",
        "metricsstart": "",
        "wadcfgx": "[concat(variables('metricsstart'), variables('metricscounters'), variables('metricsclosing'))]",
        "diagnosticsExtensionName": "Microsoft.Insights.VMDiagnosticsSettings"
    },
    "resources": [
        {
            "type": "Microsoft.Compute/virtualMachines",
            "name": "[parameters('virtualMachineName')]",
            "apiVersion": "2015-06-15",
            "location": "[parameters('location')]",
            "plan": {
                "name": "nginx-plus",
                "publisher": "nginxinc",
                "product": "nginx-plus-v1"
            },
            "properties": {
                "osProfile": {
                    "computerName": "[parameters('virtualMachineName')]",
                    "adminUsername": "[parameters('adminUsername')]",
                    "linuxConfiguration": {
                        "disablePasswordAuthentication": "true",
                        "ssh": {
                            "publicKeys": [
                                {
                                    "path": "[concat('/home/', parameters('adminUsername'), '/.ssh/authorized_keys')]",
                                    "keyData": "[parameters('adminPublicKey')]"
                                }
                            ]
                        }
                    }
                },
                "hardwareProfile": {
                    "vmSize": "[parameters('virtualMachineSize')]"
                },
                "storageProfile": {
                    "imageReference": {
                        "publisher": "nginxinc",
                        "offer": "nginx-plus-v1",
                        "sku": "nginx-plus",
                        "version": "latest"
                    },
                    "osDisk": {
                        "name": "[parameters('virtualMachineName')]",
                        "vhd": {
                            "uri": "[concat(concat(reference(resourceId('gennadykngnix', 'Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2015-06-15').primaryEndpoints['blob'], 'vhds/'), parameters('virtualMachineName'), '20161130115146.vhd')]"
                        },
                        "createOption": "fromImage"
                    },
                    "dataDisks": []
                },
                "networkProfile": {
                    "networkInterfaces": [
                        {
                            "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('networkInterfaceName'))]"
                        }
                    ]
                },
                "diagnosticsProfile": {
                    "bootDiagnostics": {
                        "enabled": true,
                        "storageUri": "[reference(resourceId('gennadykngnix', 'Microsoft.Storage/storageAccounts', parameters('diagnosticsStorageAccountName')), '2015-06-15').primaryEndpoints['blob']]"
                    }
                },
                "availabilitySet": {
                    "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
                }
            },
            "dependsOn": [
                "[concat('Microsoft.Network/networkInterfaces/', parameters('networkInterfaceName'))]",
                "[concat('Microsoft.Compute/availabilitySets/', parameters('availabilitySetName'))]",
                "[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
                "[concat('Microsoft.Storage/storageAccounts/', parameters('diagnosticsStorageAccountName'))]"
            ]
        },
        {
            "type": "Microsoft.Compute/virtualMachines/extensions",
            "name": "[concat(parameters('virtualMachineName'),'/', variables('diagnosticsExtensionName'))]",
            "apiVersion": "2015-06-15",
            "location": "[parameters('location')]",
            "properties": {
                "publisher": "Microsoft.OSTCExtensions",
                "type": "LinuxDiagnostic",
                "typeHandlerVersion": "2.3",
                "autoUpgradeMinorVersion": true,
                "settings": {
                    "StorageAccount": "[parameters('diagnosticsStorageAccountName')]",
                    "xmlCfg": "[base64(variables('wadcfgx'))]"
                },
                "protectedSettings": {
                    "storageAccountName": "[parameters('diagnosticsStorageAccountName')]",
                    "storageAccountKey": "[listKeys(parameters('diagnosticsStorageAccountId'),'2015-06-15').key1]",
                    "storageAccountEndPoint": "https://core.windows.net/"
                }
            },
            "dependsOn": [
                "[concat('Microsoft.Compute/virtualMachines/', parameters('virtualMachineName'))]"
            ]
        },
        {
            "type": "Microsoft.Compute/availabilitySets",
            "name": "[parameters('availabilitySetName')]",
            "apiVersion": "2015-06-15",
            "location": "[parameters('location')]",
            "properties": {
                "platformFaultDomainCount": "[parameters('availabilitySetPlatformFaultDomainCount')]",
                "platformUpdateDomainCount": "[parameters('availabilitySetPlatformUpdateDomainCount')]"
            }
        },
        {
            "type": "Microsoft.Storage/storageAccounts",
            "name": "[parameters('storageAccountName')]",
            "apiVersion": "2015-06-15",
            "location": "[parameters('location')]",
            "properties": {
                "accountType": "[parameters('storageAccountType')]"
            }
        },
        {
            "type": "Microsoft.Storage/storageAccounts",
            "name": "[parameters('diagnosticsStorageAccountName')]",
            "apiVersion": "2015-06-15",
            "location": "[parameters('location')]",
            "properties": {
                "accountType": "[parameters('diagnosticsStorageAccountType')]"
            }
        },
        {
            "type": "Microsoft.Network/virtualNetworks",
            "name": "[parameters('virtualNetworkName')]",
            "apiVersion": "2016-09-01",
            "location": "[parameters('location')]",
            "properties": {
                "addressSpace": {
                    "addressPrefixes": [
                        "[parameters('addressPrefix')]"
                    ]
                },
                "subnets": [
                    {
                        "name": "[parameters('subnetName')]",
                        "properties": {
                            "addressPrefix": "[parameters('subnetPrefix')]"
                        }
                    }
                ]
            }
        },
        {
            "type": "Microsoft.Network/networkInterfaces",
            "name": "[parameters('networkInterfaceName')]",
            "apiVersion": "2016-09-01",
            "location": "[parameters('location')]",
            "properties": {
                "ipConfigurations": [
                    {
                        "name": "ipconfig1",
                        "properties": {
                            "subnet": {
                                "id": "[variables('subnetRef')]"
                            },
                            "privateIPAllocationMethod": "Dynamic",
                            "publicIpAddress": {
                                "id": "[resourceId('gennadykngnix','Microsoft.Network/publicIpAddresses', parameters('publicIpAddressName'))]"
                            }
                        }
                    }
                ],
                "networkSecurityGroup": {
                    "id": "[resourceId('gennadykngnix', 'Microsoft.Network/networkSecurityGroups', parameters('networkSecurityGroupName'))]"
                }
            },
            "dependsOn": [
                "[concat('Microsoft.Network/virtualNetworks/', parameters('virtualNetworkName'))]",
                "[concat('Microsoft.Network/publicIpAddresses/', parameters('publicIpAddressName'))]",
                "[concat('Microsoft.Network/networkSecurityGroups/', parameters('networkSecurityGroupName'))]"
            ]
        },
        {
            "type": "Microsoft.Network/publicIpAddresses",
            "name": "[parameters('publicIpAddressName')]",
            "apiVersion": "2016-09-01",
            "location": "[parameters('location')]",
            "properties": {
                "publicIpAllocationMethod": "[parameters('publicIpAddressType')]"
            }
        },
        {
            "type": "Microsoft.Network/networkSecurityGroups",
            "name": "[parameters('networkSecurityGroupName')]",
            "apiVersion": "2016-09-01",
            "location": "[parameters('location')]",
            "properties": {
                "securityRules": [
                    {
                        "name": "default-allow-ssh",
                        "properties": {
                            "priority": 1000,
                            "sourceAddressPrefix": "*",
                            "protocol": "TCP",
                            "destinationPortRange": "22",
                            "access": "Allow",
                            "direction": "Inbound",
                            "sourcePortRange": "*",
                            "destinationAddressPrefix": "*"
                        }
                    }
                ]
            }
        }
    ],
    "outputs": {
        "adminUsername": {
            "type": "String",
            "value": "[parameters('adminUsername')]"
        }
    }
}

Obviously parameters.json has my values like storage account and resource group names, VM sizes, network names and finally the SSH key. All of those parameters would be different for you.
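
For reference, such a template plus parameters file can be deployed with the AzureRM PowerShell module – a minimal sketch, assuming the files sit in the current directory; the location is a placeholder, and the resource group name matches the one referenced in the template’s resourceId calls:

# Sign in and create the target resource group if it does not exist yet
Add-AzureRmAccount
New-AzureRmResourceGroup -Name "gennadykngnix" -Location "East US 2"

# Deploy the template together with its parameter file
New-AzureRmResourceGroupDeployment -ResourceGroupName "gennadykngnix" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json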

More on NGINX on Azure as a load balancer or reverse proxy here – https://www.nginx.com/products/nginx-plus-microsoft-azure/

Hope this helps

Forecast Cloudy – Using Azure Blob Storage with Apache Hive on HDInsight

The beauty of working with Big Data in Azure is that you can manage (create/delete) compute resources with your HDInsight cluster independent of your data stored either in Azure Data Lake or Azure Blob storage. In this case I will concentrate on using Azure Blob storage/WASB as the data store for HDInsight, the Azure PaaS Hadoop service.

With a typical Hadoop installation you load your data to a staging location, then you import it into the Hadoop Distributed File System (HDFS) within a single Hadoop cluster. That data is manipulated, massaged, and transformed. Then you may export some or all of the data back as a result set for consumption by other systems (think Power BI, Tableau, etc.).

Windows Azure Storage Blob (WASB) is an extension built on top of the HDFS APIs. The WASBS variation uses SSL certificates for improved security. In many ways it “is” HDFS. However, WASB creates a layer of abstraction that enables separation of storage. This separation is what enables your data to persist even when no clusters currently exist and enables multiple clusters plus other applications to access a single piece of data all at the same time. This increases functionality and flexibility while reducing costs and reducing the time from question to insight.

In Azure you store blobs in containers within Azure storage accounts. You grant access to a storage account, you create collections at the container level, and you place blobs (files of any format) inside the containers. This illustration from Microsoft’s documentation helps to show the structure:

[Illustration: storage accounts, containers, and blobs]

Hold on, isn’t the whole selling point of Hadoop the proximity of data to compute? Yes, and just like with any other Hadoop system, data is loaded into memory on the individual nodes at compute time. With the Azure data infrastructure and data center backbone built for performance, your job performance is generally the same as or better than if you used disks locally attached to the VMs.

Below is a diagram of the HDInsight data storage architecture:

[Diagram: HDInsight storage architecture]

HDInsight provides access to the distributed file system that is locally attached to the compute nodes. This file system can be accessed by using the fully qualified URI, for example:

hdfs://<namenodehost>/<path>

More important is the ability to access data that is stored in Azure Storage. The syntax is:

wasb[s]://<containername>@<accountname>.blob.core.windows.net/<path>

As per https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-use-blob-storage you need to be aware of the following:

  • Container security for WASB storage. For containers in storage accounts that are connected to a cluster, because the account name and key are associated with the cluster during creation, you have full access to the blobs in those containers. For public containers that are not connected to a cluster, you have read-only permission to the blobs in the containers. For private containers in storage accounts that are not connected to a cluster, you can’t access the blobs in the containers unless you define the storage account when you submit the WebHCat jobs.
  • The storage accounts that are defined in the creation process and their keys are stored in %HADOOP_HOME%/conf/core-site.xml on the cluster nodes. The default behavior of HDInsight is to use the storage accounts defined in the core-site.xml file. It is not recommended to edit the core-site.xml file directly, because the cluster head node (master) may be reimaged or migrated at any time, and any changes to this file are not persisted.

You can create a new storage account or point an existing one to your HDInsight cluster easily via the portal, as I show below:

[Screenshot: cluster storage settings in the Azure portal]

You can point your HDInsight cluster to multiple storage accounts as well, as explained here – https://blogs.msdn.microsoft.com/mostlytrue/2014/09/03/hdinsight-working-with-different-storage-accounts/

You can also create a storage account and container via Azure PowerShell, as in this sample:

$SubscriptionID = "<Your Azure Subscription ID>"
$ResourceGroupName = "<New Azure Resource Group Name>"
$Location = "EAST US 2"

$StorageAccountName = "<New Azure Storage Account Name>"
$containerName = "<New Azure Blob Container Name>"

Add-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId $SubscriptionID

# Create resource group
New-AzureRmResourceGroup -Name $ResourceGroupName -Location $Location

# Create default storage account
New-AzureRmStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -Location $Location -Type Standard_LRS

# Create default blob container
$storageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName)[0].Value
$destContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $storageAccountKey
New-AzureStorageContainer -Name $containerName -Context $destContext

 

The URI scheme for accessing files in Azure storage from HDInsight is:

wasb[s]://<BlobStorageContainerName>@<StorageAccountName>.blob.core.windows.net/<path>

The URI scheme provides unencrypted access (with the wasb: prefix) and SSL encrypted access (with wasbs). Microsoft recommends using wasbs wherever possible, even when accessing data that lives inside the same region in Azure.

The <BlobStorageContainerName> identifies the name of the blob container in Azure storage. The <StorageAccountName> identifies the Azure Storage account name. A fully qualified domain name (FQDN) is required.
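
To make that concrete, here is a minimal sketch of running a Hive query over data at such a URI using the AzureRM HDInsight cmdlets – the cluster name, credentials, container, and account below are all hypothetical placeholders:

$clusterName = "mycluster"
$creds = Get-Credential    # cluster HTTP (Ambari) credentials

# External Hive table over blobs under a wasbs:// path
$query = @"
CREATE EXTERNAL TABLE logs (line string)
LOCATION 'wasbs://mycontainer@myaccount.blob.core.windows.net/example/data/';
SELECT COUNT(*) FROM logs;
"@

$job = New-AzureRmHDInsightHiveJobDefinition -Query $query
$jobId = (Start-AzureRmHDInsightJob -ClusterName $clusterName -JobDefinition $job -HttpCredential $creds).JobId
Wait-AzureRmHDInsightJob -ClusterName $clusterName -JobId $jobId -HttpCredential $creds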

I ran into a rather crazy little limitation when working with WASB and HDInsight. Hadoop and Hive look for, and expect, a valid folder hierarchy to import data files, whereas WASB does not support a folder hierarchy: all blobs are listed flat under a container. The workaround is to use an SSH session to log into the cluster head node and use the mkdir command to manually create such a directory.

The SSH procedure for HDInsight can be found here – https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-linux-use-ssh-unix

Another workaround recommended to me was that the “/” character can be used within the key name to make it appear as if a file is stored within a directory structure. HDInsight sees these as if they are actual directories. For example, a blob’s key may be input/log1.txt. No actual “input” directory exists, but due to the presence of the “/” character in the key name, it has the appearance of a file path.
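
A minimal sketch of that trick with the Azure PowerShell storage cmdlets, reusing $containerName and $destContext from the sample above (the local file name is a placeholder):

# Upload a local file as a blob whose key contains "/" –
# to HDInsight, "input" then looks like a directory
Set-AzureStorageBlobContent -File .\log1.txt -Container $containerName -Blob "input/log1.txt" -Context $destContext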

For more see – https://social.technet.microsoft.com/wiki/contents/articles/6621.azure-hdinsight-faq.aspx, https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-use-blob-storage, https://www.codeproject.com/articles/597940/understandingpluswindowsplusazureplusblobplusstora

Hope this helps.

Forecast Cloudy – Azure Resource Manager Templates

Azure originally provided only the classic deployment model. In this model, each resource existed independently; there was no way to group related resources together. Instead, you had to manually track which resources made up your solution or application, and remember to manage them in a coordinated approach. To deploy a solution, you had to either create each resource individually through the classic portal or create a script that deployed all of the resources in the correct order. To delete a solution, you had to delete each resource individually. You could not easily apply and update access control policies for related resources. Finally, you could not apply tags to resources to label them with terms that help you monitor your resources and manage billing.

With the introduction of the new Azure Portal it is possible to view resources (such as websites, virtual machines and databases) as a single logical unit. This logical unit is called a resource group. The following screenshot shows a resource group called ecommerce-westus which contains a website, a SQL database and an Application Insights instance.

[Screenshot: the ecommerce-westus resource group]

 

Resource groups aren’t visible in the old portal, but almost everything within your Azure subscription exists within a set of resource groups that were created by default when the preview portal opened for business. If you access the new portal, press the Browse button on the jump bar (the icons down the left hand side of the screen) and navigate to resource groups you should see a whole bunch listed.

The benefit of resource groups is that they allow an Azure administrator to roll-up billing and monitoring information for resources within a resource group and manage access to those resources as a set. This can be extremely useful when you have a single subscription but you need to do cost recovery on the resources used by a customer or internal department.

For more on Azure Resource Manager see  docs – https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/ , https://channel9.msdn.com/Shows/Edge/Edge-Show-121-Azure-Resource-Manager , http://www.codeproject.com/Articles/854592/Using-Azure-Resource-Manager

As an example I created a template in Visual Studio 2015 – this sample, with an ARM template and parameters file, creates two Windows Server 2012 R2 VMs with IIS configured using DSC. It also installs one SQL Server 2014 Standard edition VM, a VNET with two subnets, an NSG, a load balancer, and NAT and probing rules.

To start I used one of the open source templates available on GitHub and just added/modified resources.

So how does one author these templates in VS 2015?

Prerequisites: you will need Visual Studio 2015 installed. In my case I have VS 2015 Update 1 and Azure Tools SDK v2.8.2.

Create an Azure Resource Group project.

When you first create an Azure Resource Group project in Visual Studio, you are presented with a list of common templates you can use to start defining your desired infrastructure – for example, a set of virtual machines that could be used to run a scalable web server workload.

Project structure for Resource Group Project.

Using the blank template project results in a project that contains everything you need to begin authoring and eventually deploy your template. As shown below, the project structure contains three folders: Scripts, Templates, and Tools.

[Screenshot: Resource Group project structure]

The Scripts folder contains the PowerShell script that is used to deploy the environment you describe in the project. This is a fully functional script that essentially requires no editing – it just works! However, if you have a need to modify the script to support some advanced deployment scenarios, you can do so.

The Templates folder contains the JSON files that describe the environment you want to create. The azuredeploy.json file is where you describe the resources you want in your environment, such as the storage account, virtual machine, and other required resources. The azuredeploy.parameters.json file is where you can provide parameter values that you want passed into the template during deployment, such as the type of storage account, size of the virtual machine, credentials to use to sign into the virtual machine, and so on.

The Tools folder contains AzCopy.exe and is used in scenarios where your deployment template needs to copy files to a storage account used for deploying application and configuration files into a resource. For a virtual machine, this typically would be Desired State Configuration (DSC) artifacts and DSC resources that you want pushed into the virtual machine after it is provisioned, to do things like configure Active Directory or IIS Web Server roles. The script in the Scripts folder uses AzCopy in the Tools folder to copy these kinds of artifacts to a storage account so that ARM can then copy them from storage into the virtual machine after the instance is fully provisioned.
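
For a feel of what that artifact copy looks like, here is a sketch using the AzCopy command-line syntax of that era – the storage account, container, and key are hypothetical:

AzCopy /Source:.\DSC /Dest:https://mystorageaccount.blob.core.windows.net/artifacts /DestKey:<storage account key> /S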

More information available here – https://azure.microsoft.com/en-us/documentation/articles/vs-azure-tools-resource-groups-deployment-projects-create-deploy/ , https://azure.microsoft.com/en-us/blog/azure-resource-manager-2-5-for-visual-studio/

In my case the finished VS project structure looks like this:

[Screenshot: finished Visual Studio solution]

Note that AzureDeploy.json is a JSON template that describes all of the above-mentioned infrastructure that I want to deploy in Azure, and Deploy-AzureResourceGroup.ps1 is a PowerShell script to connect and deploy it to Azure. There is also a parameters file with passwords, subscription names, etc., and finally a test project to test such a deployment in VS – all part of my test solution in Visual Studio.
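
Before actually deploying, the template and parameters can be validated from PowerShell – a quick sketch, using the resource group name from below and hypothetical relative paths:

# Validate the template and parameter file without deploying anything
Test-AzureRmResourceGroupDeployment -ResourceGroupName "test" `
    -TemplateFile .\Templates\azuredeploy.json `
    -TemplateParameterFile .\Templates\azuredeploy.parameters.json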

So using my test template project I created all of the above resources in resource group test in the Central US data center:

[Screenshot: deployed resources in the test resource group]

Obviously these are just first baby steps, but they should show you what is possible with this technology. For more on Azure ARM Templates see – https://blogs.perficient.com/microsoft/2016/03/azure-arm-template-define-web-app-application-settings/, https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-create-first-template, https://blogs.msdn.microsoft.com/kaevans/2015/11/22/creating-arm-templates-with-azure-resource-explorer/, https://azure.microsoft.com/en-us/resources/templates/

Hope this helps.

Iter in ignotus – Installing SQL Server vNext on Ubuntu Linux 16.10

Over the holidays I had a chance to install and run SQL Server vNext on my Ubuntu Linux machine. I did run into some issues during the install, but was able to work around all of those and got the SQL Server engine successfully running on Ubuntu 16.10 x64.

To start out, make sure that you are installing SQL Server vNext on 64-bit Ubuntu Linux version 16 or higher. The first issue I ran into was attempting to install on 32-bit Ubuntu 14.x. A quick check is shown below.
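
A quick way to verify both from the terminal:

lsb_release -rs    # Ubuntu release – should be 16.04 or higher
uname -m           # architecture – should print x86_64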

Here are the steps to follow in Bash:

  1. Import the public repository GPG keys:
    curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
  2. Register the Microsoft SQL Server Ubuntu repository:
    curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list
    
  3. Run update through apt-get. If you didn't update, you may run into an error later, as I did:
    sudo apt-get update
    
  4. Run actual install via apt-get again
    sudo apt-get install -y mssql-server
    

    If you are on 32 bit or didn't update the sources, you can see this error:

    Unable to locate package mssql 
    

     

  5. Now, if you are on the latest 64-bit Ubuntu with updated components, you should see the install occurring in your terminal window. After the package installation finishes, run the configuration script and follow the prompts.
    sudo /opt/mssql/bin/sqlservr-setup
    

    Once the configuration is done, verify that the service is running:

    systemctl status mssql-server
    

Now that the install is done in 5 easy steps and SQL Server services are running on Ubuntu Linux, you can use SSMS on Windows to connect to your SQL Server on Linux. But in order to connect from Linux to this instance, I will need to install the SQL client tools connectivity stack.

 

The following steps install the command-line tools, Microsoft ODBC drivers, and their dependencies. The mssql-tools package contains:

  1. sqlcmd: Command-line query utility
  2. bcp: Bulk import-export utility.

Install on Ubuntu in 4 easy steps:

  1. Import the public repository GPG keys:
    curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
    
  2. Register the Microsoft Ubuntu Repository
    curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list
    
  3. Update again via apt-get
    sudo apt-get update
    
  4. Now run actual install
    sudo apt-get install mssql-tools
    

    If you are on 32 bit or did not update the sources, you can see this error:

    Unable to locate package mssql-tools
    

Now let's connect to our local SQL instance and run a quick query, again all via the terminal:

sqlcmd -S localhost -U SA -P ''
SELECT @@version;
GO

Well, that answers what I was doing over the holidays. Happy bashing to you, SQL folks.

For more see – https://www.microsoft.com/en-us/sql-server/sql-server-vnext-including-Linux, https://blogs.microsoft.com/blog/2016/03/07/announcing-sql-server-on-linux/#sm.001on55ayi1dduo11m02qmercn8t3, https://docs.microsoft.com/en-us/sql/linux/