Jan 29

Terraform + Azure Availability Zones

While learning Terraform some time back, I wanted to leverage Availability Zones in Azure. I was specifically looking at Virtual Machine Scale Sets. https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_scale_set.html Looking through Terraform’s documentation, I noticed there is no good example of using zones. So, I tried a few things to see what was really needed for that field. While doing some research, I noticed many people are in the same situation: no good examples. I figured I’d create this post to help anyone else. And, of course, it’s a good reminder for me if I ever forget the syntax.

Here’s a very simple Terraform file. I just created a new folder, then a new file called zones.tf. Here are the contents:

variable "location" {
  description = "The location where resources will be created"
  default = "centralus"
  type = string
}

locals {
  regions_with_availability_zones = ["centralus","eastus2","eastus","westus"]
  zones = contains(local.regions_with_availability_zones, var.location) ? tolist(["1","2","3"]) : null
}

output "zones" {
  value = local.zones
}

The variable ‘location’ can be changed from outside the script, but I used ‘locals’ for values I didn’t want changed from outside. I hard-coded a list of Azure regions that have Availability Zones. Right now it’s just a list of regions in the United States; of course, it’s easily modifiable to add other regions.

The ‘zones’ local uses the contains function to see if the specified region is in that list. If so, the value is a list of strings; otherwise it’s null. This is important: the zones field in Azure resources requires either a list of strings or null. An empty list didn’t work for me.

As it is right now, you can run terraform apply and you should see some output. If you change the value of the location variable to something not in the list, you may not see any output at all, simply because the value is null.
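You can exercise both paths from the command line (standard Terraform CLI usage; nothing here is specific to my setup):

terraform init
terraform apply
terraform apply -var="location=southcentralus"

The first apply uses the default location of centralus, so the zones output shows the list of strings. The second apply targets a region without Availability Zones, so the value is null.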

Now, looking at a partial example from the Terraform documentation:

resource "azurerm_virtual_machine_scale_set" "example" {
  name                = "mytestscaleset-1"
  location            = var.location
  resource_group_name = azurerm_resource_group.example.name
  upgrade_policy_mode = "Manual"
  zones               = local.zones

  # ... remaining required blocks (sku, os_profile, network_profile, etc.) omitted
}

Now the zones field can be used safely when the value is either a list of strings or null. After I ran the complete Terraform script for VM Scale Set, I went to the Azure Portal to verify it worked.

Azure Portal - VMSS - Availability Zone Allocation

I also changed the specified region to one that I know does not use Availability Zones, South Central US.

Azure Portal - VMSS - Availability Zone Allocation

This proved to me that I can use a region with and without availability zones in the same Terraform script.

For a list of Azure regions with Availability Zones, see:
https://docs.microsoft.com/en-us/azure/availability-zones/az-overview

Jan 27

Removing an Azure Application Gateway

While working with Terraform scripts I created many Azure Application Gateways. Sometime after they were created I would delete them as they were only needed to prove my scripts were working with Azure DevOps. I was using Terraform functions and special *magic* to get things just right. Then I noticed one of my App Gateways refused to delete.

I was using the Azure Portal as I have done many times: simply select the resources, choose Delete, and type ‘yes’ when prompted. After a few minutes they would all be gone as expected. One day, though, one of the App Gateways, along with its required resources like the Public IP Address, Virtual Network, etc., was still there after an attempt to delete it.

Selecting the App Gateway showed details including the IP address, version, etc. But it also showed, in a large bar: “Failed”. I have seen it show “Deleting” before, but never Failed. I selected the Delete option again and after many minutes nothing changed. So, I tried to delete it using PowerShell.

$gtw = Get-AzApplicationGateway -Name "dev-example-appgateway"

$gtw

Executing the two lines above showed the App Gateway’s Provisioning State as Failed and Operation State as Stopping.

Application Gateway State Failed

I did some research and tried several things:

Start-AzApplicationGateway -ApplicationGateway $gtw

Stop-AzApplicationGateway -ApplicationGateway $gtw

Set-AzApplicationGateway -ApplicationGateway $gtw

Each took a few minutes and either did nothing or gave an error message. One message caught my eye.

/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup1/providers/Microsoft.Network/publicIPAddresses/dev-example-public-ip used by resource

/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup1/providers/Microsoft.Network/applicationGateways/dev-example-appgateway is not in Succeeded state. Resource is in Failed state.

Looking closely, I noticed the issue may not be the App Gateway after all, but a dependent resource: the Public IP Address.

$pip = Get-AzPublicIpAddress -Name dev-example-public-ip -ResourceGroupName myResourceGroup1
$pip

The details of the PIP showed it too was in a failed state.

PIP Provisioning State Failed

All this time I thought it was the Application Gateway. So, with this extra knowledge, I tried a different approach. Some of the research suggested executing the Set command with no changes.

Set-AzPublicIpAddress -PublicIpAddress $pip

$pip

PIP Provisioning State Succeeded

It worked! Well, at least for that resource. I tried the Set again for the App Gateway, but it didn’t show a change; it gave the same error. Ok, let’s try the delete again just to see. Mind you, I had tried this command before, including with the -Force switch.

Remove-AzApplicationGateway -Force -Name dev-example-appgateway -ResourceGroupName myResourceGroup1

After a few minutes, it simply returned to a prompt. No error message this time. So, I went back to the Azure Portal and refreshed. It worked! Problem solved.

The significance here is that one resource can show a failed state when it’s really a dependent resource that is in trouble. I hope this helps someone else so you don’t have to spend the research time I did.
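If you run into something similar, a quick way to find which resource is actually stuck is to list the provisioning state of everything in the resource group. Here’s a minimal sketch using the Az module; the -ExpandProperties switch fills in the Properties bag that holds provisioningState for most resource types:

# Show each resource in the group with its provisioning state
Get-AzResource -ResourceGroupName myResourceGroup1 -ExpandProperties |
    Select-Object Name, ResourceType,
        @{Name = 'ProvisioningState'; Expression = { $_.Properties.provisioningState }}

Any resource showing Failed is a candidate for the same Set-with-no-changes trick shown above.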

Jan 25

Azure Cosmos DB Replication

While learning about Cosmos DB I had a lot of misunderstandings around consistency levels. And, it’s not surprising. Many people, certainly those coming from a SQL Server background like I did, have these misunderstandings. But, before I can jump into Cosmos DB consistency levels (covered in another post) I have to cover replication. This post is about the intricate details of replication that I had to wrap my head around for consistency levels to make sense. Although learning consistency levels does not require understanding replication first, it was helpful for me when developing use case scenarios.

With SQL Server it’s understood that data resides in databases that can be spread across File Groups. Those File Groups could simply be different files in the same folder, different folders, and even different drives. When it comes to replicated instances, the data could be spread across servers and even data centers in different states. But, Cosmos DB is very different from SQL Server. Not only is Cosmos DB not a relational database like SQL Server, but there isn’t a file structure to worry about.

Cosmos DB has, within each Azure region, 4 replicas that make up a “replica set”. One is a “Leader” and another a “Forwarder”. The other 2 are “Followers”. The Forwarder is a Follower but has the additional responsibility to send data to other regions. As data is received into that region, it is written to every replica. For a Write to be considered “committed” a quorum of replicas must agree they have the data. A quorum, as it pertains to the replicas in each region, is 3 out of the 4. This means that regardless of which region is receiving the data, the Leader replica and 2 others must signal they have received the data. This is also true regardless of the consistency level used on the Write operation.

Showing Cosmos system sending data to all four replicas at the same time.

Your code’s client connection does not have to worry about the replica count, whether quorum has been met, or which replica does not yet have the data. The Cosmos DB system manages all of that. For our code, the Insert/Update/Delete operation has either succeeded or not.
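To make that concrete, here’s a minimal sketch using the .NET SDK v3; the endpoint, key, and database/container names are placeholders. The code issues one write and awaits the result. Notice there is nothing about replicas or quorum:

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class Order
{
    public string id { get; set; }
    public string customerId { get; set; }
}

public class OrderWriter
{
    public async Task SaveAsync(Order order)
    {
        var client = new CosmosClient("https://myaccount.documents.azure.com:443/", "myAccountKey");
        Container container = client.GetContainer("mydb", "orders");

        // When this call returns, the regional write quorum (3 of the 4 replicas,
        // including the Leader) has acknowledged the data. All replica bookkeeping
        // happens inside the Cosmos DB service, not in our code.
        await container.CreateItemAsync(order, new PartitionKey(order.customerId));
    }
}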

Global Replication

Cosmos DB has a simple way of enabling global replication. Using the Azure Portal you can select 1 or more of the many data centers available all over the world. In a matter of minutes, another Cosmos DB instance is available with your data. For this post I’m only going to cover Single Master, but Multi-Master, also known as Multi-Write Enabled, is available. *Just a note on that though: once enabled, you cannot turn it back off except with an Azure Support Ticket.

Data stored in Containers is split across Replica Sets by the Partition Key you provide when creating the Container. And each Replica in the Replica Set contains only your data; the Replica is not shared with other Azure customers. As the amount of data grows, the Cosmos system also manages the partitioning and replication to other regions as needed. So, not only is Cosmos extremely fast, but the sizing and replication are handled automatically for us.

With data in 2 Replica Sets, for example, each region you enabled has an exact copy of both. A Replica Set in one region together with the matching one in another region is known as a Partition Set. The Partition Sets are what manage the replication between regions for their respective Replica Sets.

Replication latency only pertains to global replication. The time to replicate data inside a Replica Set is so short it’s not part of the latency concerns. However, from one region to another there is some latency; given the distance data must travel, there are inevitable delays. Microsoft, at least in the United States, has a private backbone for region-to-region networking. This affects your applications if you use the Strong consistency level.

Multi-Region Replication

The image above depicts that replication from the primary region to the other regions may have many miles to travel. The latency is between the regions. The only latency the client connection will notice is that of the replication to the furthest region away. This is because the replication is concurrent and not sequential.

With all consistency levels except Strong, once data has hit quorum (“committed”), the client connection is notified. At the same time, the data is replicated to the other regions as enabled. With Strong, that quorum is a little different: a “Global Majority” has to be met. With 2 regions, this means 6 of the 8 replicas must agree on the data. With 3 regions, at least 2 regions must agree. With 4 regions, at least 3 regions must agree. Basically, once you’re using 3+ regions, N – 1 regions is the quorum. Again, this only applies to the Strong consistency level.

Oct 02

Using nuget.config to control NuGet package reference sources.

Sometimes in software development you have to work around interesting obstacles.

I created proof-of-concept code working with Cosmos DB. I needed to be able to run this code from my laptop as well as from a VM inside the Azure region, so I copied all the code so I could tweak things as needed and the two client connections could behave differently. It was using the Azure Cosmos SDK v3 (3.0.0 to be precise). https://github.com/Azure/azure-cosmos-dotnet-v3

The challenge came when I noticed the Cosmos Client Options did not contain a way to change the consistency level. That version of the library retrieves the consistency from the Cosmos account, which means there is no way to choose a lower consistency level than the one defined on the account.

Thankfully, the latest code on GitHub https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos/src/CosmosClientOptions.cs shows that they have included a public property to set the consistency level. However, that code had not yet been released in a newer version of the NuGet package, and I needed to modify the consistency level on the connection without waiting for their next release.

Choices

I did clone the code to my laptop and was able to compile it. I thought I could just change the reference in my code from NuGet to an assembly reference. But the assembly has so many other dependencies, and I didn’t want to chase down every one of them.

So, going back to the Cosmos code, I had it create a NuGet package locally. This works great: with my PoC code I just added another NuGet source pointing to that folder. That worked well; however, it doesn’t work for the code on the Linux VM, which can’t reference a folder on my computer. So, I did this instead.
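For reference, producing the local package was roughly this; the project path and output folder are just what I used as illustration, not the repo’s official build steps:

dotnet pack Microsoft.Azure.Cosmos/src/Microsoft.Azure.Cosmos.csproj -c Release -o ./localpackages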

Custom Nuget Package

I copied the NuGet package to the VM; it now resides in the bin/Debug folder. But now I had to tell the code where to find the package. nuget.config to the rescue.

I created a nuget.config file. The existence of this file tells the NuGet tooling where and how to retrieve packages. I added a file source pointing to the bin/Debug folder, right after the reference to nuget.org. So, it will attempt to find packages at nuget.org first; since my locally built package isn’t there, it looks at the next source in the list. That’s where it finds my newly compiled Cosmos library that contains a way to adjust the consistency level.

<configuration>
    <packageSources>
         <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
         <add key="local" value="bin/Debug" />
    </packageSources>
</configuration>
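As for where to put the file: NuGet looks for a nuget.config starting in the project’s folder and then walks up through the parent folders, so placing it next to the project is enough for it to be picked up.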

This is the link to the Microsoft documentation on using nuget.config.
https://docs.microsoft.com/en-us/nuget/reference/nuget-config-file

Not only can you control where NuGet packages are pulled from, you can also add credentials. This is very useful when pulling from a private NuGet source like one in Azure DevOps.
In my example, I have the nuget.org source as well as one called “local”. If that “local” source were actually in Azure DevOps, I would add credentials like:

    <packageSourceCredentials>
        <local>
            <add key="username" value="some@email.com"/>
            <add key="password" value="..."/>
        </local>
    </packageSourceCredentials>

Notice that the element “local” matches the package source name above.

If using an unencrypted password:

    <packageSourceCredentials>
        <local>
            <add key="username" value="some@email.com"/>
            <add key="ClearTextPassword" value="someExamplePasswordHere!123"/>
        </local>
    </packageSourceCredentials>

Complete file example:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <packageSources>
         <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
         <add key="local" value="bin/Debug" />
         <add key="privateAzureDevOpsSource" value="https://blahblah.com/foo/bar/example" />
    </packageSources>
    <packageSourceCredentials>
        <privateAzureDevOpsSource>
            <add key="username" value="some@email.com"/>
            <add key="ClearTextPassword" value="someExamplePasswordHere!123"/>
        </privateAzureDevOpsSource>
    </packageSourceCredentials>
</configuration>

Conclusion

The point here is that you can take code, make a private NuGet package, and then make it accessible for your needs. The nuget.config file makes that possible.

Update

After seeing that Microsoft released a new version that included the consistency level option, I reverted to using their latest package version. My custom “fix” was meant to be temporary anyway.
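With the released package, lowering the consistency level on the connection looks roughly like this (Session is just an example level; you can only go weaker than the account default, not stronger):

using Microsoft.Azure.Cosmos;

var options = new CosmosClientOptions
{
    // Request a weaker consistency level than the account default.
    ConsistencyLevel = ConsistencyLevel.Session
};
var client = new CosmosClient("https://myaccount.documents.azure.com:443/", "myAccountKey", options);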

 

Sep 26

Microservices — The Easy Way is the Wrong Way

I’ve had the pleasure of giving my microservices presentation at the Kansas City Developers Conference (KCDC) https://www.kcdc.info/session/ses-84969
and also at the Tulsa .NET User Group.

On Oct 15th I’ll be presenting this again at DevUp.

The slide deck is now available on SlideShare.

Oct 12

Your First Azure Kubernetes Service Cluster – Using .NET Core MVC Website

In this post I’m going to show you the steps I do in my conference talk “Getting Started with Azure Kubernetes Service”.

Prerequisites

To start with, you need to have a few things. If you don’t already have an Azure account look at getting some free resources at https://azure.microsoft.com/en-us/free/

I’m on a Windows 10 system with Bash enabled and Ubuntu installed. I like using WSL with Ubuntu because my target OS type for my microservices and websites run on Linux. It helps me stay sharp with Linux commands, etc.

I have .NET Core installed. Make sure you have the latest version. https://www.microsoft.com/net/download

I also have Docker for Windows so I can build the images. You’ll see this later in this post.
Another thing you’ll need is the AZ CLI. Once that is installed you can execute the following az command to install the Kubernetes CLI:

az aks install-cli

Hopefully everything is installed and ready at this point. Now, from your terminal, you need to log into your Azure Subscription.
Executing the following command will give you a series of characters to enter at the site https://microsoft.com/devicelogin. If you’re not already logged in, you’ll be prompted to log into your Azure subscription. Once done, your terminal will show some details of the subscription you just logged into.

az login

Build a Cluster

In my talk I mention a script that I use to create a cluster and an Azure Container Registry. I found the script some time ago and tweaked it a little. It uses az commands, so you’ll need to make sure you’re logged into the subscription you want first. I recommend copying the commands from the script into a text editor, then modifying them to your needs, starting with the environment variables at the top. **Warning** It takes roughly 15 minutes and could be longer; most of the time is spent waiting for the VMs to be provisioned and come online. The script is located at https://gist.github.com/seanw122/e7b43b543f2a44be767739ce3866237f

Building the MVC Site

While the cluster is being created you can create the ASP.NET MVC site. On your computer create a new folder. The name of the folder will be the name of the project, so choose wisely. In my example I have the simple name of proj1. Amazing name, I know.
So, now I have my folder “D:/code/proj1”. Click once in the window’s address bar; it should highlight the whole path. Type cmd and press Enter. You should see a Command Prompt window located at “D:/code/proj1”.
Now for some .NET Core. Type in the following command. It will create an ASP.NET MVC website using a generic template.

dotnet new mvc

After the site is created you’ll see several files. In the Controllers folder, find the file HomeController.cs. Edit that file and modify the About method:

public IActionResult About()
{
    ViewData["Message"] = "My About page. " + Environment.MachineName + ": " + Environment.OSVersion + ": " + DateTime.Now.ToString();
    return View();
}

This shows the machine name, OS version, and the current date and time.
The point is that the machine name is the name of the Pod the site is running on, the OS version proves it’s running on Linux (though it will show “UNIX” with some version), and the date and time show the page is running live.
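You can sanity-check the change before containerizing it. From the project folder run:

dotnet run

Then browse to http://localhost:5000/Home/About (the port comes from the template’s launchSettings.json, so yours may differ) and you should see the new message.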

Now create a new file named “Dockerfile”, with no extension, and put in the contents:

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet restore proj1.csproj
RUN dotnet build proj1.csproj -c Release -o /app

FROM build AS publish
RUN dotnet publish proj1.csproj -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "proj1.dll"]

Place this file in your proj1 folder. It needs to be there for the next command to work.

 

Docker Image

Now we’ll use Docker to build an image, tag to a different version, then push to our new Azure Container Registry.

docker build . -t myproj:v1

docker tag myproj:v1 {ACR Name}.azurecr.io/myproj:v1

docker push {ACR Name}.azurecr.io/myproj:v1

The first command builds the image. It will pull the base images for the .NET Core SDK and ASP.NET Core runtime, then layer on our new MVC site. The second command adds a tag to the image. There is only one final image; now it has two tags. I do it this way so there is one for local use and one specific to the ACR we’re pushing to. Be sure to replace {ACR Name} with the actual ACR name you used in the script to create the cluster. The third command pushes the image to the ACR specified in the tag.

In the case where the push tries but fails stating “Authorization Required”, use the following command to login. Be sure to use the name of the ACR you’re targeting without the curly braces.

az acr login --name {ACR Name}

 

Deploying to Cluster

By now the image should be successfully pushed to the Azure Container Registry. With your text editor, create two new files with the following contents.

Save the first file as myproj-service.yml:

apiVersion: v1
kind: Service
metadata:
  name: my-project-service
spec:
  selector: 
    app: my-project-server
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 80

And save the second one as myproj-deployment.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-project-deployment
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-project-server
    spec:
      containers:
        - name: my-project
          image: {ACR Name}.azurecr.io/myproj:v1
          ports:
            - containerPort: 80

 

Using the same terminal used to create the cluster we’re going to send these two files to the cluster to create a Deployment of 3 Pods that is accessible by a Service.  First you may need to navigate to the folder where you created these files. In my example I created the files in the same location as my MVC site, “D:/code/proj1”. To navigate to that location:

cd /mnt/d/code/proj1

Now execute this command to list the files and verify the two files we’re going to send are indeed in the folder.

ls -al

Now to create the Service. Why Service first?  In our case it will obtain a public IP address and that takes a few minutes. So, we should get that started now.

kubectl create -f myproj-service.yml

With the Service creation on the way we’ll now create the Deployment and the Pods.

kubectl create -f myproj-deployment.yml

A Deployment specifies information about the Pods to be created. Behind the scenes it creates a Replica Set. In our example we have it set to 3 replicas. The Scheduler works to maintain that number of active Pods.
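You can watch the Pods come up with kubectl’s standard listing command; after a short while you should see 3 Pods in the Running state:

kubectl get pods -o wide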

To see the status of our Service execute:

kubectl get service -o wide

Look for your Service listed and the associated External IP address. It may still be Pending, in which case just wait a minute and try again.

Once you have an External IP address, copy it and put it in a browser’s address bar. But note the port number: in this example it’s set to 8080, so be sure to specify that in the browser as well.

Now your new site should appear! Click the About link at the top. In the text that appears you should see the machine name, OS version, and date & time. The machine name is the name of the Pod that served the request. The OS version will say “Unix” plus some version numbers.

 

Congrats!

You created a new Kubernetes cluster on Azure and are now hosting a new ASP.NET MVC website on it. There’s SO much more to AKS. For a list of links I found useful, see my other post at http://seanwhitesell.com/2018/06/23/resources-for-getting-started-with-azure-kubernetes-service-with-net-core-prometheus-and-grafana

Jul 23

Latest Kubectl – Older Cluster

I’m currently working with Azure Container Service (“ACS”) until Azure Kubernetes Service (“AKS”) is available in my production data center. Why? Because if I use an Azure service in one data center but have data in another data center, then I have to pay data egress charges. Anytime data leaves a data center, you pay for it. So for now I have my ACS v1.7.7 setup.

I just configured another laptop to connect to the cluster. I installed the AZ CLI, then the Kubectl CLI. After making sure things were authenticated to the cluster, I tried a simple command.

kubectl get pods

 

to which I received this error message:

No resources found. Error from server (NotAcceptable): unknown (get pods)

 

I ran the command on my other working system and things were fine. The cluster responded with the list of pods I expected to see. So, what’s the problem??

That’s when I remembered that Kubernetes 1.11 had just become public. The Kubectl CLI I just installed is version 1.11. Apparently, it has issues with a 1.7.7 cluster, which, by the way, is the latest version you can have in ACS!
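An easy way to confirm this kind of client/server skew is to ask kubectl for both versions at once:

kubectl version --short

The client and server versions come back side by side, which makes the mismatch obvious.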

*sigh* Ok, so thankfully I was able to downgrade the Kubectl CLI to a previous version.

sudo apt-get remove kubectl

 

then

sudo apt-get update -q && \
sudo apt-get install -qy kubectl=1.10.0-00
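To keep a later apt-get upgrade from moving kubectl right back to 1.11, you can pin the package using apt’s standard hold mechanism:

sudo apt-mark hold kubectl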

 

I then re-authenticated to the cluster.

az acs kubernetes get-credentials --resource-group {resource group name here} --name {name of azure container service here} --ssh-key-file {path to key file here}

 

then

kubectl get pods

 

returned the list of pods expected.

Jun 23

Resources for “Getting Started with Azure Kubernetes Service with .NET Core, Prometheus, and Grafana”

Intro

This is the best post I have found for getting started with containers and running them on Azure Container Services:
Run .NET Core 2 Docker images in Kubernetes using Azure Container Service and Azure Container Registry | Pascal Naber

When you have Kubernetes (K8s) up and running you’ll want to view the Kubernetes Dashboard. I have seen many tutorials on how to get it started, and the link they mention never worked for me. So, I simply do:

kubectl proxy

It starts a proxy connection between the cluster and your localhost.

The link to view the dashboard is http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/overview?namespace=default.

But notice the version is V1. There will be a time when that link will need to be updated for a newer version.

Script to create a new resource group, Azure Kubernetes Service, and Azure Container Registry https://gist.github.com/seanw122/e7b43b543f2a44be767739ce3866237f

.NET

I have way too many so I’ll show one for now.
Pro C# 7

Docker

Play with Docker Classroom
eBook – Docker in Action 2nd Ed.
A Developer’s Guide To Docker – Docker Swarm | Okta Developer
Dockerize a .NET Core application | Docker Documentation
50+ Useful Docker Tools | Caylent
Interactive Browser Based Labs, Courses & Playgrounds | Katacoda
Running Docker containers on Bash on Windows – Jayway
Introduction to containers and Docker | Microsoft Docs

Kubernetes

eBook – Kubernetes in Action
kubernetes/autoscaler: Autoscaling components for Kubernetes
brendandburns/k8s-playbooks: Some ansible playbooks for managing my k8s cluster(s)
Web UI (Dashboard) | Kubernetes
Watching auto-recovery
Azure: “Kubernetes the Easy Way” Managed Kubernetes on Azure AKS | E101 – YouTube
Kubernetes Co Founder Brendan Burns Orchestration is Becoming a Commodity – YouTube
Scaling Docker Containers using Kubernetes and Azure Container Service – Ben Hall – YouTube
Workloads – Kubernetes Dashboard
Ben Hall’s Blog – I don’t know darling, I’m doing my work
az acs kubernetes | Microsoft Docs
Introducing Play with Kubernetes – Docker Blog
Kubernetes Security: from Image Hygiene to Network Policies // Speaker Deck
Overview – Kubernetes Dashboard

Azure Container Services

Azure Region Availability
Azure Container Service – How to change your public key | Azure Container Service | Channel 9
Building Microservices with AKS and VSTS – Part 1 – Azure Development Community
Building Microservices with AKS and VSTS – Part 2 – Azure Development Community
Building Microservices with AKS and VSTS – Part 3 – Azure Development Community
A Closer Look at Microsoft Azure’s Managed Kubernetes Service – The New Stack
Introducing AKS (managed Kubernetes) and Azure Container Registry improvements | Blog | Microsoft Azure
SSH into Azure Container Service (AKS) cluster nodes | Microsoft Docs
Frequently asked questions for Azure Container Service | Microsoft Docs
Your very own private Docker registry for Kubernetes cluster on Azure (ACR)
Service principal for Azure Kubernetes cluster | Microsoft Docs
SSH keys on Windows for Kubernetes with Azure Container Service (ACS) | Pascal Naber
Setting up a Kubernetes cluster with Azure Container Service: Terraform, Azure Resource Manager, CLI
Manage Azure Kubernetes cluster with web UI | Microsoft Docs
Manage Azure Container Services cluster with web UI | Microsoft Docs

Prometheus

How to Setup Prometheus Monitoring On Kubernetes Cluster [Tutorial]
A monitoring solution for Docker hosts, containers and containerized services

Grafana

https://grafana.com
Monitor Azure services and applications using Grafana | Microsoft Docs

Jun 23

Dockerizing an Existing App

I have been playing with Docker for a bit now and have always started play apps with Docker enabled. This time I decided to Dockerize an existing app. Ok, so: right-click on the project, select Add, then Docker Support. Ok great, there’s my additional project in the solution for Docker-Compose. I then decided to start the debugger with the Docker environment as primary. It fails to build. That’s strange. So, I do a Clean and Rebuild. Same issue. Docker-Compose is complaining about an npm package that is not even in my project but in another app altogether! Sheesh!

When you Dockerize your app, select that project and then Show All Files. What I found is that the Docker-Compose files are placed in the parent folder of my application. It then sees and attempts to use ALL projects in the sub-folders from that point.

Simple solution: copy the solution file, the Docker-Compose files, and my application folder to a new parent folder. Now it only contains my application(s) and Docker-Compose.
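The new layout looked roughly like this; the names are illustrative, not my actual projects:

NewParentFolder/
    MySolution.sln
    docker-compose.yml
    docker-compose.override.yml
    MyApp/
        MyApp.csproj
        Dockerfile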

Aug 05

Speaker Confession

@geekygirlsarah helped start a trend called #speakerconfessions. I just submitted mine.

In 2016 at Tulsa TechFest I gave two talks back to back right after lunch. I was congested at lunch, so I took an antihistamine and drank a bottle of water. During my talks I always drink water, so during my first talk I drank another bottle of water. Then, in between sessions, I went to the bathroom. Well, 20 minutes into my 2nd talk I had to leave to pee again!

