Collecting Trash Revisited – Garbage Collection Essentials in .NET

Microsoft_net_4

In my recent past as a Field Engineer working with customers, I had to deal with many managed applications that had performance and stability issues due to memory usage. .NET 4 definitely introduced big, interesting additions to memory management, but talking to customers I believe they may have been drowned out in a sea of tech announcements, and many developers in the field simply do not remember them. Therefore, here is some basic information on the Garbage Collection flavors in .NET and some ways to troubleshoot GC issues in existing applications.

Workstation vs. Server GC 

There are actually two different modes of garbage collection, which you can control using the gcServer element in the runtime section of your config file. Workstation Garbage Collection is the default. The gcServer configuration flag was introduced long ago, with .NET 2.0, and you can use it to control whether Workstation or Server GC is used on multiprocessor machines. With Server GC, a thread is created for every core just for doing GC. There is also a small object heap and a large object heap created for each GC thread, and all of the program’s allocations are spread among these heaps (more on large object heaps later). When no GC is happening, these threads are blocked and do nothing. When a GC is triggered, all of the user threads get paused, and all the GC threads wake up at highest priority and do the collection in parallel. All of these optimizations lead to Server GC usually being much faster than Workstation GC.
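For reference, here is a minimal sketch of what that looks like in an app.config or web.config (the gcServer element lives under runtime):

<configuration>
  <runtime>
    <!-- true = Server GC (a GC heap and a dedicated GC thread per logical CPU); false = Workstation GC (default) -->
    <gcServer enabled="true"/>
  </runtime>
</configuration>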

So Server GC:

  • Multiprocessor (MP) Scalable, Parallel
  • One GC thread per CPU
  • User threads paused while the collection runs

Workstation GC:

  • Great for UI-driven apps, as it minimizes pausing during full GC collections
  • Runs slower

Concurrent vs. Background

Now, besides Server vs. Workstation, there are Concurrent and Background operation modes. Both of them allow generation 2 to be collected by a dedicated thread without pausing all user threads. Generation 0 and 1 collections still require pausing all user threads, but they are always the fastest. This, of course, increases the level of responsiveness an application can deliver.

Concurrent Mode

Default mode for Workstation GC; it provides a dedicated thread performing GC, even on multiprocessor machines. You can turn it off using the gcConcurrent element. This mode offers much shorter user-thread pauses, as the most time-consuming Gen2 collection is done concurrently, at the cost of limited allocation capabilities during a concurrent GC. While a Gen2 collection takes place, other threads can only allocate up to the limit of the current ephemeral segment, as it is impossible to allocate a new memory segment. If your process runs out of space in the current segment, all threads will have to be paused anyway and wait for the concurrent collection to finish. This is because Gen0 and Gen1 collections cannot be performed while a concurrent GC is still in progress. Concurrent GC also has slightly higher memory requirements.
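For completeness, a minimal sketch of that switch in a config file (the element sits under runtime, alongside gcServer):

<configuration>
  <runtime>
    <!-- false turns off concurrent GC (and background GC on .NET 4.5 and later) -->
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>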

Background Mode

Starting with .NET 4.5 this is actually the default mode, replacing Concurrent. It is also available for both Workstation and Server GC, while Concurrent was available only for Workstation GC. The big improvement is that Background mode can perform Gen0 and Gen1 collections while a Gen2 collection is in progress.

Workstation garbage collection can be:

  • Concurrent
  • Background
  • Non-concurrent

while options for Server garbage collection are as follows:

  • Background
  • Non-concurrent

For more see – https://blogs.msdn.microsoft.com/tess/2009/05/29/background-garbage-collection-in-clr-4-0/, https://blogs.msdn.microsoft.com/dotnet/2012/07/20/the-net-framework-4-5-includes-new-garbage-collector-enhancements-for-client-and-server-apps/

So how do I collect my GC pause information to decide which mode is best for me?

The first thing to do is take a look at the properties of the process using the excellent Process Explorer tool from Sysinternals (imagine Task Manager on steroids). It will give you a summary like the one below; the number of Gen 0/1/2 Collections and % Time in GC are the most interesting values to look at.

time-in-gc
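If you prefer raw numbers to a screenshot, the same values are exposed through the .NET CLR Memory performance counters; a minimal PowerShell sketch (the _Global_ instance is shown only as an example – substitute the instance for your process):

# Sample '% Time in GC' and the Gen 0/1/2 collection counts every 5 seconds, 12 times
$counters = '\.NET CLR Memory(_Global_)\% Time in GC',
            '\.NET CLR Memory(_Global_)\# Gen 0 Collections',
            '\.NET CLR Memory(_Global_)\# Gen 1 Collections',
            '\.NET CLR Memory(_Global_)\# Gen 2 Collections'
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12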

The other way to get very deep insight is to grab an ETW trace, as explained by Maoni Stephens here – https://blogs.msdn.microsoft.com/maoni/2014/12/22/gc-etw-events-1/. This is the method of looking into GC stats that I use, utilizing the pretty popular PerfView tool, which works on top of Event Tracing for Windows (ETW). Another good tutorial on the same topic can be found here – http://www.philosophicalgeek.com/2012/07/16/how-to-debug-gc-issues-using-perfview/
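If all you want are GC statistics with minimal overhead, PerfView has a lightweight collection mode for exactly that; a sketch of a typical command line (run from an elevated prompt; the output file name is arbitrary):

PerfView.exe /GCCollectOnly /AcceptEULA /nogui collect gcOnly.etl

After collection stops, open the resulting file in PerfView and look at the GCStats view to see pause times and % time in GC per process.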

Hope this little note is useful.

 


Data Cargo In The Clouds – Running SQL Server 2017 in Docker

sqlserver_and_docker

In my previous post I attempted to explain my new interest in Docker and running data-centric applications in containers. Starting here, I will show how to install SQL Server for Linux in a container on Azure IaaS.

Let's start by creating a Linux VM in Azure that will run Docker for us. I don't want to spend time and space going through the steps in the portal or PowerShell, since I already went through these in this post; a good video tutorial can also be found here.

Assuming you have successfully created the Linux VM and can log in like this:

azlinux1

Next we will install Docker on that VM. We will get the latest and greatest version of Docker from the official Docker repository.

  • First, add the GPG key for the official Docker repository to the system:

    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    
  • Add the Docker repository to APT sources:

    $ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  • Next, update the package database with the Docker packages from the newly added repo:

    $ sudo apt-get update
  • Make sure you are about to install from the Docker repo instead of the default Ubuntu 16.04 repo:
    $ apt-cache policy docker-ce

    You should see the following output:

    docker-ce:
      Installed: (none)
      Candidate: 17.03.1~ce-0~ubuntu-xenial
      Version table:
         17.03.1~ce-0~ubuntu-xenial 500
            500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
         17.03.0~ce-0~ubuntu-xenial 500
            500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
    
    

    Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 16.04. The docker-ce version number might be different.

  • Finally, install Docker:
    $ sudo apt-get install -y docker-ce
  • Make sure it's installed and running:
    $ sudo systemctl status docker

    The output should be similar to the below, showing that the daemon is started and running:

    linuxdoc

Next we need to pull the latest SQL Server Docker image:

$ sudo docker pull microsoft/mssql-server-linux

Once the image is pulled and extracted, you should see output similar to this:

linuxdoc2

Now let's run the container image with Docker:

$ sudo docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>' -e 'MSSQL_PID=Developer' --cap-add SYS_PTRACE -p 1401:1433 -d microsoft/mssql-server-linux

Now we are running a Docker container with SQL Server 2017 in Azure. We should be able to list our containers like this:

 $ sudo docker ps -a

And see output like:

linuxdoc3

Connect to SQL Server in the container.

The following steps use the SQL Server command-line tool, sqlcmd, inside the container to connect to SQL Server. First, let's connect to bash inside the container using the docker exec command. Note that I am providing the container ID here as a parameter, fetched from the previous output of the ps command:

$ sudo docker exec -it d95734b7f9ba  "bash"

Now let's connect via sqlcmd:

/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>'

Now let's run the simplest query:

SELECT @@version
GO

You should see output like:

linuxdoc4

Now you can go ahead and start creating databases, tables, moving data, etc.
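For example, still inside the same sqlcmd session, a quick smoke test might look like this (database and table names are just illustrative):

CREATE DATABASE DemoDB;
GO
USE DemoDB;
CREATE TABLE dbo.Visitors (Id INT IDENTITY(1,1) PRIMARY KEY, Name NVARCHAR(100));
INSERT INTO dbo.Visitors (Name) VALUES (N'Gennady');
SELECT Id, Name FROM dbo.Visitors;
GO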

For more see – https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker, https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-configure-docker. SQL Server images on Docker Hub – https://hub.docker.com/r/microsoft/mssql-server-linux/. Also see – https://mathaywardhill.com/2017/05/08/how-to-attach-a-sql-server-database-in-a-linux-docker-container/

 

Hope this helps.

Forecast Cloudy – The Magic of AzCopy As Database Migration Tool

Here is a quick note that may be helpful to folks. Recently I had to migrate a database to Azure by taking a backup from an on-premises SQL Server and restoring this backup on a SQL Server running on an Azure VM. Now, there is a better, more structured method available for this using the Data Migration Assistant (DMA) from Microsoft – https://www.microsoft.com/en-us/download/details.aspx?id=53595 – however, I went the old-fashioned way, purely through backups.

But how do I move an 800 GB backup quickly to the cloud? What about network issues and the need to restart on a network hiccup? All of these issues and more can be solved via the AzCopy tool.

AzCopy is a command-line utility designed for copying data to and from Microsoft Azure Blob, File, and Table storage using simple commands with optimal performance. You can copy data from one object to another within your storage account, or between storage accounts.

There are two versions of AzCopy that you can download. AzCopy on Windows is built with the .NET Framework and offers Windows-style command-line options. AzCopy on Linux is built with .NET Core, targets Linux platforms, and offers POSIX-style command-line options. I will showcase the Windows version here.

So what I did first is create a File Share in my Azure storage account. Azure File storage is a service that offers file shares in the cloud using the standard Server Message Block (SMB) protocol; both SMB 2.1 and SMB 3.0 are supported. You can follow this tutorial to do so – https://www.petri.com/configure-a-file-share-using-azure-files . Next, I needed to upload my backup via AzCopy:

The basic syntax for AzCopy from the command line is as follows:

AzCopy /Source:<source> /Dest:<destination> [Options]

I can upload a single file, called test.bak, from C:\Temp to the demo container in the storage account (blob service) using the following command:

AzCopy /Source:C:\Temp /Dest:https://pazcopy.blob.core.windows.net/demo /DestKey:QB/asdasHJGHJGHJGHJ+QOoPu1fC/Asdjkfh48975845Mh/KlMc/Ur7Dm3485745348gbsUWGz/v0e== /Pattern:"test.bak"

Some notes:

  • Note how the URL of the destination storage account specifies the blob service.
  • I have specified the storage account access key with the /DestKey option. This key is unique to the storage account (source or destination).
  • The /Pattern option specifies the file being copied.

Once the file is copied, you should see stats in the command line as well:

azcopy

Once the file is uploaded, all I have to do from my destination Azure VM is map the File Share (example here – https://blogs.msdn.microsoft.com/windowsazurestorage/2014/05/12/introducing-microsoft-azure-file-service/) and restore the backup.
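For reference, mapping an Azure File Share from the Windows VM is a plain net use over port 445, with the storage account name and key as credentials; a sketch (storage account and share names below are placeholders):

net use Z: \\mystorageaccount.file.core.windows.net\backups /u:AZURE\mystorageaccount <storage-account-key>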

Hope this helps. More on AzCopy – https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy, https://blogs.technet.microsoft.com/canitpro/2015/12/28/step-by-step-using-azcopy-to-transfer-files-to-azure/

 

Data Cargo In The Clouds – Data and Containerization On The Cloud Platforms

container

This is a topic I have actually planned to pivot to for quite a while, but I either had no bandwidth/time with the move to Seattle or some other excuse like that. The topic is interesting to me as it is an intersection of technologies where I have spent quite a bit of time, including:

  • Data, including both traditional RDBMS databases and NoSQL designs
  • Cloud, including Microsoft Azure, AWS and GCP
  • Containers and container orchestration (an area that is new to me)

Those who have worked with me displayed various degrees of agreement, but more often annoyance, with me pushing this topic as a growing part of the data engineering discipline, but I truly believe in it. Before we start combining all these areas and doing something useful in code, I will spend some time here on the most basic concepts of the ground I am about to cover.

I will skip introductions to the basic concepts of RDBMS, NoSQL, or BigData/Hadoop technologies, as unless you have been hiding under a rock for the last five years, you should be quite aware of those. That brings us straight to containers:

As definition states – “A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.”

Containers are a solution to the problem of how to get software to run reliably when moved from one computing environment to another. This could be from a developer’s laptop to a test environment, from a staging environment into production, and perhaps from a physical machine in a data center to a virtual machine in a private or public cloud.

container.jpg

But why containers if I already have VMs? 

VMs take up a lot of system resources. Each VM runs not just a full copy of an operating system, but a virtual copy of all the hardware that the operating system needs to run. This quickly adds up to a lot of RAM and CPU cycles. In contrast, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program.
What this means in practice is that you can put two to three times as many applications on a single server with containers as you can with VMs. In addition, with containers you can create a portable, consistent operating environment for development, testing, and deployment.

On Linux, containers run on top of LXC. This is a userspace interface for the Linux kernel containment features. It includes an application programming interface (API) to enable Linux users to create and manage system or application containers.

Docker is an open platform that makes it easier to create, deploy, and run applications by using containers.

  • The heart of Docker is the Docker Engine, the part of Docker that creates and runs Docker containers. A Docker container is a live, running instance of a Docker image. Docker Engine is a client-server application with the following components:
    • A server, which is a continuously running service called a daemon process.
    • A REST API, which programs use to talk to the daemon and instruct it what to do.
    • A command line interface (CLI) client.

             docker_engine 

  • The command line interface client uses the Docker REST API to interact with the Docker daemon through CLI commands. Many other Docker applications also use the API and CLI. The daemon process creates and manages Docker images, containers, networks, and volumes.
  • The Docker client is the primary way Docker users communicate with Docker. When we run a command such as “docker run”, the client sends it to dockerd, which carries it out.

You will build Docker images using Docker and deploy these into what are known as Docker registries. When we run the docker pull and docker run commands, the required images are pulled from our configured registry. Using the docker push command, an image can be uploaded to our configured registry.

Finally, by deploying instances of images from your registry, you will run containers. We can create, run, stop, or delete a container using the Docker CLI. We can connect a container to more than one network, or even create a new image based on its current state. By default, a container is well isolated from other containers and its host machine. A container is defined by its image and the configuration options we provide when we create or run it.
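To make the image/registry/container round trip concrete, here is a minimal sketch using the Docker CLI (the repository name myrepo/myapp is a placeholder):

$ docker build -t myrepo/myapp:1.0 .            # build an image from the Dockerfile in the current directory
$ docker push myrepo/myapp:1.0                  # upload the image to the configured registry (Docker Hub by default)
$ docker pull myrepo/myapp:1.0                  # pull it back down on another host
$ docker run -d --name myapp myrepo/myapp:1.0   # start a container from the image
$ docker stop myapp && docker rm myapp          # stop and remove the container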

For more info on containers, LXC and Docker see – https://blog.scottlowe.org/2013/11/25/a-brief-introduction-to-linux-containers-with-lxc/, https://www.docker.com/what-container#/virtual_machines, http://searchservervirtualization.techtarget.com/definition/container-based-virtualization-operating-system-level-virtualization

That brings us to container orchestration engines. While the CLI meets the needs of managing one container on one host, it falls short when it comes to managing multiple containers deployed on multiple hosts. To go beyond the management of individual containers, we must turn to orchestration tools. Orchestration tools extend lifecycle management capabilities to complex, multi-container workloads deployed on a cluster of machines.

Some of the well known orchestration engines include:

Kubernetes. Almost a standard nowadays, originally developed at Google. Kubernetes’ architecture is based on a master server with multiple minions. The command line tool, kubectl, connects to the API endpoint of the master to manage and orchestrate the minions. The service definition, along with the rules and constraints, is described in a JSON or YAML file. For service discovery, Kubernetes provides a stable IP address and DNS name that corresponds to a dynamic set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called kube-proxy) running on the source machine to one of the corresponding backend containers. Kubernetes supports user-implemented application health checks. These checks are performed by the kubelet running on each minion to ensure that the application is operating correctly.
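To give a taste of what that looks like in practice, a minimal kubectl sketch that creates a deployment, scales it, and puts one of those stable service addresses in front of it (image and names are placeholders):

$ kubectl create deployment web --image=nginx                 # schedule the pods onto the cluster
$ kubectl scale deployment web --replicas=3                   # ask for three replicas
$ kubectl expose deployment web --port=80 --type=ClusterIP    # stable virtual IP and DNS name in front of the pods
$ kubectl get pods,services                                   # inspect the pods and the service address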

kub

For more see – https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#kubernetes-is

Apache Mesos. This is an open source cluster manager that simplifies the complexity of running tasks on a shared pool of servers. A typical Mesos cluster consists of one or more servers running the mesos-master and a cluster of servers running the mesos-slave component. Each slave is registered with the master to offer resources. The master interacts with deployed frameworks to delegate tasks to slaves. Unlike other tools, Mesos ensures high availability of the master nodes using Apache ZooKeeper, which replicates the masters to form a quorum. A high availability deployment requires at least three master nodes. All nodes in the system, including masters and slaves, communicate with ZooKeeper to determine which master is the current leading master. The leader performs health checks on all the slaves and proactively deactivates any that fail.

 

Over the next couple of posts I will create containers running SQL Server and other data stores, both RDBMS and NoSQL, deploy these on the cloud, and finally attempt to orchestrate them as well. So let's move away from theory and pictures into the world of practical data engine deployments.

Hope you will find this detour interesting.

Forecast Cloudy – Profiling Azure Cloud Service

We can get an in-depth analysis of the computational aspects of how an Azure application runs by using the Visual Studio Profiler. Below I will show you how to use the Sampling performance-gathering method in the Visual Studio Profiler to profile a Cloud Service in Azure. If you need basic information on the Visual Studio Profiler, start here –

https://docs.microsoft.com/en-us/visualstudio/profiling/beginners-guide-to-performance-profiling .
Sampling is a statistical profiling method that shows you the functions that are doing most of the user mode work in the application. Sampling is a good place to start to look for areas to speed up your application.

At specified intervals, the Sampling method collects information about the functions that are executing in your application. After you finish a profiling run, the Summary view of the profiling data shows the most active function call tree, called the Hot Path, where most of the work in the application was performed. The view also lists the functions that were performing the most individual work, and provides a timeline graph you can use to focus on specific segments of the sampling session.

Sampling is the most common approach to application profiling; there are other methods as well, but they may come with more performance overhead or require more application instrumentation.
Make sure you enable the appropriate settings when publishing your application to Azure:

  • In Solution Explorer, open the shortcut menu for your Azure project, and then choose Publish.
  • In the Advanced Settings tab, select the Enable profiling option.

publish

  • Choose Settings, select Sampling and Enable Tier Interaction Profiling, and click OK.
  • Click Next and publish the application.

publish2

Once you have run your profile, you can view the profiling reports. To get them, do the following:

  • Using Visual Studio Server Explorer, expand Azure -> Cloud Services -> Your_Cloud_Role -> Production (Profiling), right-click on the instance, and click View Profiling Report.

report_view

  • It may take a couple of minutes for the profiling report to show up. Click the Save icon to save a local copy for analysis.

profile

  • Once you are done with profiling, you may republish the application without that option checked.

Happy performance hunting. For more see – http://msdn.microsoft.com/library/azure/hh369930.aspx.

Purgamentum Init Venari – Analyzing Java GC Using IBM Pattern Modeling Tool

Recently, again looking at some GC issues on the IBM WebSphere platform for a buddy of mine, I got to learn a new tool – the Pattern Modeling and Analysis Tool for IBM Java Garbage Collector (PMAT). This is the second post-mortem analysis tool from IBM I have had the privilege to work with, as I previously profiled JCA – the Javacore Dump Analysis Tool – here: https://gennadny.wordpress.com/2016/03/28/javacore-dump-analysis-using-jca-ibm-thread-and-monitor-dump-analyzer-for-java/

The PMAT tool parses verbose GC traces, analyzes Java heap usage, and recommends key configurations based on pattern modeling of Java heap usage.

Why do we need it?

When the JVM (Java virtual machine) cannot allocate an object from the current heap because of lack of space, a memory allocation fault occurs, and the Garbage Collector is invoked. The first task of the Garbage Collector is to collect all the garbage that is in the heap. This process starts when any thread calls the Garbage Collector either indirectly as a result of allocation failure or directly by a specific call to System.gc(). The first step is to get all the locks needed by the garbage collection process. This step ensures that other threads are not suspended while they are holding critical locks. All other threads are then suspended. Garbage collection can then begin. It occurs in three phases: Mark, Sweep, and Compaction (optional).

Sometimes you run into issues. The most common are either performance problems in applications due to especially long-running, aka “violent”, GCs, or rooted objects on the heap not getting cleaned up and the application crashing with the dreaded OutOfMemory error, forcing you to analyze garbage collection with verbose GC on.

JRE Java.Lang.OutOfMemory

Verbose GC is a command-line option that one can supply to the JVM at start-up time. The format is -verbose:gc or -verbosegc. This option switches on a substantial trace of every garbage collection cycle. The format of the generated information is not standardized and therefore varies among platforms and releases.

This trace should allow one to see the gross heap usage in every garbage collection cycle. For example, one could monitor the output to see the changes in the free heap space and the total heap space. This information can be used to determine whether garbage collections are taking too long to run; whether too many garbage collections are occurring; and whether the JVM crashed during garbage collection.
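On the IBM JVM you would typically route the verbose GC output to a log file that PMAT can later open; a sketch of such a launch (the -Xverbosegclog option is IBM-JVM-specific, and the file and application names are placeholders):

# enable verbose GC and write it to a file for later PMAT analysis
java -verbose:gc -Xverbosegclog:verbosegc.log -jar MyApp.jar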

How does it work?

PMAT analyzes verbose GC traces by parsing the traces and building pattern models. PMAT recommends key configurations by executing a diagnosis engine and pattern modeling algorithm. If there are any errors related with Java heap exhaustion or fragmentation in the verbose GC trace, PMAT can diagnose the root cause of failures. PMAT provides rich chart features that graphically display Java heap usage.


Where do I get it?

You can download this tool from – ftp://public.dhe.ibm.com/software/websphere/appserv/support/tools/pmat/ga456.jar

Running the tool

Run gaNNN.jar with the Java Runtime Environment (NNN is the version number). You will see the following initial screen:

pmat1

Select and open the verbose GC log:

pmat2

Process the log and view the Summary/Reports:

pmat3

More detailed information on the tool can be found in the presentation here – http://www-01.ibm.com/support/docview.wss?uid=swg27007240&aid=1

Jinwoo Hwang was the technical leader at IBM WebSphere Application Server Technical Support who created this tool, as well as JCA. I recommend that everyone read his articles on JVM internals – http://websphere.sys-con.com/node/1229281, http://www-01.ibm.com/support/docview.wss?uid=swg27011855, http://www-01.ibm.com/support/docview.wss?uid=swg27018423

Hope this short note helps.

Carpe Noctem – Using SQL Server Diagnostics (Preview) to analyze a SQL Server minidump

Microsoft just released the SQL Server Diagnostics (Preview) extension within SQL Server Management Studio and Developer APIs to empower SQL Server customers to achieve more through a variety of offerings to self-resolve SQL Server issues.

diagnostic

So what can it do?

  • Analyze a SQL Server dump.

Customers should be able to debug and self-resolve memory dump issues from their SQL Server instances and receive recommended Knowledge Base (KB) article(s) from Microsoft, which may be applicable for the fix.

  • Review recommendations to keep SQL Server instances up to date.

 

Customers will be able to keep their SQL Server instances up-to-date by easily reviewing the recommendations for their SQL Server instances. Customers can filter by product version or by feature area (e.g. Always On, Backup/Restore, Column Store, etc.) and view the latest Cumulative Updates (CU) and the underlying hotfixes addressed in the CU.

  • Developers who want to discover and learn about Microsoft APIs can view the developer portal and then use the APIs in their custom applications. Developers can log and discuss issues and even submit their applications to the application gallery.

 

So I installed this extension and decided to give it a go with one of the SQL Server minidumps I have.

diagnostics2

It actually correctly identified an issue and issued suggestions. So, while this may not work on every dump, it should be worth trying before you fire up WinDbg.

Give it a try. Hope this helps.