Jul 20

Azure VM marked for deletion

Are you serious?!?!

The other day I was building out two Virtual Machines using an ARM template. I had been building many other VMs with no issues, but now I had a failure. Two other VMs also had issues being created. All four VMs had a Provisioning Status of Failed.

I checked the quota and determined that was not the issue. The failure message on each VM only said a failure occurred and provided no real details.

The problem was ultimately fixed. This post is about the few things that let me get these VMs out of a failed state.

The two VMs built with the ARM template had deployed successfully in another region. That other region had Availability Zones, while the region with the failure did not, so I modified the ARM template to use Availability Sets instead.

The other two VMs were created by hand by a coworker. The steps used had worked on other VMs in the same region. So, why the failures at all? Unfortunately, I don’t know. A fluke in the “matrix” for all I know. In those cases we were still stuck with determining the next step.

One suggestion worked on the two VMs not created by the ARM template: changing the size of the VM. They went from “Standard_F4s_v2” to “Standard_F4s”. After the size change I was able to start the VMs. A coworker then confirmed he was able to install the needed software.

I tried to change the size of the VMs created by the ARM template, but that failed. Why? Because I had already tried to delete the VMs so I could start over. That actually hurt me; I didn’t yet know about attempting to change the VM size. So, what finally fixed it for me?

This info came from a Microsoft support ticket. The first round of discussions with them resolved nothing. It wasn’t until they realized another piece of my puzzle that they gave a different suggestion.

Our VMs were in Availability Sets. That alone is not an issue. The trick was to stop each VM in the Availability Sets. Once every VM was stopped, I retried the delete of the problematic VMs. IT WORKED!! I was finally able to remove those VMs and recreate them.

Once they were recreated, I went to each of the other VMs and started them. They all came back online. Eureka!!!

Hopefully, this extra tidbit of detail will help you. If a VM fails on creation, don’t delete it. Instead, try to change the VM size. If that fails, look at what else helps dictate how a VM is allocated. For me, the Availability Set was the extra detail. Stopping all the VMs in the Availability Set allowed me to delete the problematic VMs and recreate them.

Jun 26

Using Pulumi to Create an Azure Load Balancer

This code will create an Azure load balancer with a frontend IP configuration, health probe, and a backend pool. There is also a method for associating a VM’s network interface (NIC) with a backend pool. Since a VM can have multiple NICs, and each NIC can have multiple IP configurations, the method takes both the NIC and the name of the IP configuration.

//usings assumed for the Pulumi Azure (classic) provider; depending on the
//provider version, the *Args input types may live under the .Inputs namespaces
using System;
using System.Collections.Generic;
using Pulumi.Azure.Core;
using Pulumi.Azure.Lb;
using Pulumi.Azure.Network;

public class Constants
{
        public const string SKUSTANDARD = "Standard";
        public const string SKUBASIC = "Basic";
        public const string IPVERSION4 = "IPv4";
        public const string STATIC = "Static";
        public const string DYNAMIC = "Dynamic";
 
        public const string ALL = "*";
        public const string TCP = "TCP";
        public const string UDP = "UDP";
        public const string ALLOW = "Allow";
 
        public const string VIRTUALAPPLIANCE = "VirtualAppliance";
        public const string VIRTUALNETWORKGATEWAY = "VirtualNetworkGateway";
 
        public const string PREMIUM_LRS = "Premium_LRS";
        public const string LRS = "LRS";
 
        public const string STORAGEACCOUNT_BLOB = "BlobStorage";
        public const string STORAGEACCOUNT_BLOCKBLOB = "BlockBlobStorage";
        public const string STORAGEACCOUNT_FILE = "FileStorage";
        public const string STORAGEACCOUNT_STORAGE = "Storage";
        public const string STORAGEACCOUNT_STORAGEV2 = "StorageV2";
        public const string STORAGEACCOUNT_DEFAULT = "StorageV2";
 
}
 
class LoadBalancerBuilder
{
        private readonly string _namePrefix;
        private readonly string _location;
        private readonly ResourceGroup _resourceGroup;
        private readonly Dictionary<string, Subnet> _subnets;

        //exposed so callers can attach NICs to the pool after the build
        public BackendAddressPool WebAppsBackendPool { get; private set; }
 
        public LoadBalancerBuilder(string namePrefix, string location, ResourceGroup resourceGroup, Dictionary<string, Subnet> subnets)
        {
            _namePrefix = namePrefix;
            _location = location;
            _resourceGroup = resourceGroup;
            _subnets = subnets;
        }
 
        public LoadBalancer BuildWebAppsLoadBalancer()
        {
            var frontEndConfig = new LoadBalancerFrontendIpConfigurationArgs()
            {
                Name = "LoadBalancerFrontEnd",
                PrivateIpAddress = "11.0.1.10",
                PrivateIpAddressVersion = Constants.IPVERSION4,
                PrivateIpAddressAllocation = Constants.STATIC,
                SubnetId = _subnets["DMZ"].Id,
 
            };
 
            var lb = new LoadBalancer(_namePrefix + "-lb-webapps", new LoadBalancerArgs()
            {
                Location = _location,
                ResourceGroupName = _resourceGroup.Name,
                Sku = Constants.SKUSTANDARD,
                FrontendIpConfigurations = frontEndConfig
            });
 
            var healthProbe = new Probe("Probe1", new ProbeArgs
            {
                Name = "Probe1",
                LoadbalancerId = lb.Id,
                ResourceGroupName = _resourceGroup.Name,
                IntervalInSeconds = 5,
                NumberOfProbes = 2,
                Port = 22,
                Protocol = Constants.TCP
            });
 
            //store the pool on the builder so AttachVmToLoadBalancerBackendPool
            //can reference it after the build
            WebAppsBackendPool = new BackendAddressPool("Pool1", new BackendAddressPoolArgs
            {
                ResourceGroupName = _resourceGroup.Name,
                LoadbalancerId = lb.Id,
                Name = "Pool1"
            });
 
            return lb;
        }
 
        public void AttachVmToLoadBalancerBackendPool(BackendAddressPool backendAddressPool, NetworkInterface nic, string ipConfigName)
        {
            //a random suffix keeps the association resource names unique when
            //attaching multiple NICs; a deterministic suffix (e.g. the VM name)
            //is safer since Pulumi sees a new name on every preview
            var rnd = new Random();
            var rndNumber = rnd.Next(100, 999);

            new NetworkInterfaceBackendAddressPoolAssociation("assoc" + rndNumber,
                new NetworkInterfaceBackendAddressPoolAssociationArgs
                {
                    BackendAddressPoolId = backendAddressPool.Id,
                    NetworkInterfaceId = nic.Id,
                    IpConfigurationName = ipConfigName
                });
        }
}
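
As a rough usage sketch, here is how the builder might be wired up inside a Pulumi Stack constructor. The resource group and subnets dictionary are assumed to come from the virtual network post below, and webNic is a placeholder for a NetworkInterface created elsewhere with an IP configuration named “Ipconfig1”:

//hypothetical wiring inside a Stack constructor
var lbBuilder = new LoadBalancerBuilder("prod", Location, _resourceGroup, _subnets);
var lb = lbBuilder.BuildWebAppsLoadBalancer();
lbBuilder.AttachVmToLoadBalancerBackendPool(lbBuilder.WebAppsBackendPool, webNic, "Ipconfig1");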

Jun 24

Using Pulumi to Create an Azure Linux Virtual Machine

This code relies on the constants you can find in other posts. It will create a storage account used for boot diagnostics. Then it creates a network interface, and then a VM using an Ubuntu image.

//usings assumed for the Pulumi Azure (classic) provider
using System.Collections.Generic;
using Pulumi;
using Pulumi.Azure.Compute;
using Pulumi.Azure.Core;
using Pulumi.Azure.Network;
using Pulumi.Azure.Storage;

class ExampleVm
{
    private readonly string _location;
    private readonly ResourceGroup _resourceGroup;
    private readonly string _vmSize = "Standard_DS3_v2";
    private readonly string _ipAddress = "10.20.1.10";
    private readonly string _adminUsername = "adminAccount";
    //hard coded for the example only; real code should pull this from Pulumi config/secrets
    private readonly string _adminPassword = "SuperSecretP@ssw3rd";
    private string _computerName = ""; //replaced on init
 
    public ExampleVm(string location, ResourceGroup resourceGroup)
    {
        _location = location;
        _resourceGroup = resourceGroup;
    }
 
    //example: the [Output] attribute marks a property to be exported (when used on a Stack)
    [Output] public Output<string> PrimaryBlobEndpoint { get; set; }
 
    public Account BuildBootDiagnosticStorageAccount(string storageAccountName)
    {
        // Create an Azure Storage Account
        var storageAccount = new Account(storageAccountName, new AccountArgs
        {
            Name = storageAccountName,
            Location = _location,
            ResourceGroupName = _resourceGroup.Name,
            AccountReplicationType = Constants.LRS,
            AccountTier = Constants.SKUSTANDARD,
            AccountKind = Constants.STORAGEACCOUNT_DEFAULT
        });
 
        PrimaryBlobEndpoint = storageAccount.PrimaryBlobEndpoint;
        return storageAccount;
    }
 
    public void BuildVM(string vmName, Account bootDiagStorageAccount, Dictionary<string, Subnet> subnets)
    {
        _computerName = vmName;
 
        var nic = new NetworkInterface(vmName + "-nic", new NetworkInterfaceArgs()
        {
            Location = _location,
            ResourceGroupName = _resourceGroup.Name,
            EnableAcceleratedNetworking = true,
            IpConfigurations = new NetworkInterfaceIpConfigurationArgs
            {
                Name = "Ipconfig1",
                Primary = true,
                SubnetId = subnets["Web"].Id,
                PrivateIpAddressVersion = Constants.IPVERSION4,
                PrivateIpAddress = _ipAddress,
                PrivateIpAddressAllocation = Constants.STATIC
            }
        });
 
        var vm = new VirtualMachine(vmName, new VirtualMachineArgs()
        {
            Location = _location,
            ResourceGroupName = _resourceGroup.Name,
            Name = vmName,
            VmSize = _vmSize,
            Zones = "1", //requires a region that has Availability Zones
            BootDiagnostics = new VirtualMachineBootDiagnosticsArgs()
            {
                Enabled = true,
                StorageUri = bootDiagStorageAccount.PrimaryBlobEndpoint
            },
            NetworkInterfaceIds = nic.Id,
            OsProfile = new VirtualMachineOsProfileArgs
            {
                AdminUsername = _adminUsername,
                AdminPassword = _adminPassword,
                ComputerName = _computerName
            },
            OsProfileLinuxConfig = new VirtualMachineOsProfileLinuxConfigArgs
            {
                DisablePasswordAuthentication = false
            },
            StorageImageReference = new VirtualMachineStorageImageReferenceArgs
            {
                Sku = "18.04-LTS",
                Offer = "UbuntuServer",
                Publisher = "Canonical",
                Version = "latest"
            },
            StorageOsDisk = new VirtualMachineStorageOsDiskArgs
            {
                Name = vmName + "-OSDisk",
                Caching = "ReadWrite",
                CreateOption = "FromImage",
                DiskSizeGb = 30,
                OsType = "Linux",
                ManagedDiskType = Constants.PREMIUM_LRS
            }
        });
    }
}
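
A brief usage sketch inside a Stack constructor; the storage account name is a placeholder (it must be globally unique and lowercase), and the resource group and subnets dictionary are assumed from the other posts:

//hypothetical wiring inside a Stack constructor
var exampleVm = new ExampleVm(Location, _resourceGroup);
var bootDiag = exampleVm.BuildBootDiagnosticStorageAccount("examplebootdiag001");
exampleVm.BuildVM("web-vm-01", bootDiag, _subnets);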

Jun 24

Using Pulumi to Create Azure Network Security Groups

Constants class used in the example code below.

public class Constants
{
        public const string SKUSTANDARD = "Standard";
        public const string SKUBASIC = "Basic";
        public const string IPVERSION4 = "IPv4";
        public const string STATIC = "Static";
        public const string DYNAMIC = "Dynamic";
 
        public const string ALL = "*";
        public const string TCP = "TCP";
        public const string UDP = "UDP";
        public const string ALLOW = "Allow";
 
        public const string VIRTUALAPPLIANCE = "VirtualAppliance";
        public const string VIRTUALNETWORKGATEWAY = "VirtualNetworkGateway";
 
        public const string PREMIUM_LRS = "Premium_LRS";
        public const string LRS = "LRS";
 
        public const string STORAGEACCOUNT_BLOB = "BlobStorage";
        public const string STORAGEACCOUNT_BLOCKBLOB = "BlockBlobStorage";
        public const string STORAGEACCOUNT_FILE = "FileStorage";
        public const string STORAGEACCOUNT_STORAGE = "Storage";
        public const string STORAGEACCOUNT_STORAGEV2 = "StorageV2";
        public const string STORAGEACCOUNT_DEFAULT = "StorageV2";
}

This code creates a few NSG rules, then creates a Network Security Group, and then associates it with the Web subnet.

//usings assumed for the Pulumi Azure (classic) provider
using System.Collections.Generic;
using Pulumi.Azure.Core;
using Pulumi.Azure.Network;

class ExampleNsgRules
{
        private readonly string _location;
        private readonly ResourceGroup _resourceGroup;
        private readonly Dictionary<string, Subnet> _subnets;
 
        public ExampleNsgRules(string location, ResourceGroup resourceGroup, Dictionary<string, Subnet> subnets)
        {
            _location = location;
            _resourceGroup = resourceGroup;
            _subnets = subnets;
        }
 
        public void BuildWebNSG(string nsgName)
        {
            var nsgRuleWebServers = new NetworkSecurityGroupSecurityRuleArgs
            {
                Access = Constants.ALLOW,
                //allow HTTP from anywhere to the web servers; a source is required
                SourceAddressPrefix = Constants.ALL,
                DestinationAddressPrefixes = new[] { "10.20.1.10", "10.20.1.11", "10.20.1.12", "10.20.1.13" },
                DestinationPortRanges = new[] { "80" },
                Protocol = Constants.TCP,
                SourcePortRange = Constants.ALL,
                Name = "WebAccess",
                Direction = "Inbound",
                Priority = 200
            };
 
            //only allow access to DB servers from web servers
            var nsgRuleWebToDb = new NetworkSecurityGroupSecurityRuleArgs
            {
                Access = Constants.ALLOW,
                DestinationAddressPrefixes = new[] { "10.20.2.10", "10.20.2.11" },
                DestinationPortRanges = new[] { "1433" },
                Protocol = Constants.TCP, //SQL Server listens on TCP 1433
                SourcePortRange = Constants.ALL,
                Name = "DatabaseAccess",
                SourceAddressPrefixes = new[] { "10.20.1.10", "10.20.1.11", "10.20.1.12", "10.20.1.13" },
                Direction = "Inbound",
                Priority = 210
            };
 
            //restrict SSH access to web servers to specified IP sources
            var nsgRuleWebServersSSH = new NetworkSecurityGroupSecurityRuleArgs
            {
                Access = Constants.ALLOW,
                DestinationAddressPrefixes = new[] { "10.20.1.10", "10.20.1.11", "10.20.1.12", "10.20.1.13" },
                DestinationPortRanges = new[] { "22" },
                Protocol = Constants.TCP,
                SourcePortRange = Constants.ALL,
                //these are source IPs, so they belong in SourceAddressPrefixes (not port ranges)
                SourceAddressPrefixes = new[] { "10.20.20.5", "10.20.20.6" },
                Name = "SshAccess", //rule names must be unique; "WebAccess" is already used above
                Direction = "Inbound",
                Priority = 220
            };
 
            var rules = new List<NetworkSecurityGroupSecurityRuleArgs> { nsgRuleWebServers, nsgRuleWebToDb, nsgRuleWebServersSSH };
 
            var nsg = new NetworkSecurityGroup(nsgName, new NetworkSecurityGroupArgs()
            {
                ResourceGroupName = _resourceGroup.Name,
                Location = _location,
                SecurityRules = rules
            });
 
            new SubnetNetworkSecurityGroupAssociation("webNsgAssociation", new SubnetNetworkSecurityGroupAssociationArgs
            {
                NetworkSecurityGroupId = nsg.Id,
                SubnetId = _subnets["Web"].Id
            });
        }
}
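
A short usage sketch, again assuming the subnets dictionary from the virtual network post; the NSG name is arbitrary:

//hypothetical wiring inside a Stack constructor
var nsgBuilder = new ExampleNsgRules(Location, _resourceGroup, _subnets);
nsgBuilder.BuildWebNSG("web-nsg");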

Jun 24

Using Pulumi to Create an Azure Route Table

I have a class with constants to keep a tight rein on magic strings.

public class Constants
{
        public const string SKUSTANDARD = "Standard";
        public const string SKUBASIC = "Basic";
        public const string IPVERSION4 = "IPv4";
        public const string STATIC = "Static";
        public const string DYNAMIC = "Dynamic";
 
        public const string ALL = "*";
        public const string TCP = "TCP";
        public const string UDP = "UDP";
        public const string ALLOW = "Allow";
 
        public const string VIRTUALAPPLIANCE = "VirtualAppliance";
        public const string VIRTUALNETWORKGATEWAY = "VirtualNetworkGateway";
 
        public const string PREMIUM_LRS = "Premium_LRS";
        public const string LRS = "LRS";
 
        public const string STORAGEACCOUNT_BLOB = "BlobStorage";
        public const string STORAGEACCOUNT_BLOCKBLOB = "BlockBlobStorage";
        public const string STORAGEACCOUNT_FILE = "FileStorage";
        public const string STORAGEACCOUNT_STORAGE = "Storage";
        public const string STORAGEACCOUNT_STORAGEV2 = "StorageV2";
        public const string STORAGEACCOUNT_DEFAULT = "StorageV2";
}

This code builds a few routes, then a route table. After the route table is created, it associates the table with the subnets.

//usings assumed for the Pulumi Azure (classic) provider
using System.Collections.Generic;
using Pulumi;
using Pulumi.Azure.Core;
using Pulumi.Azure.Network;

class BuildRouteTables
{
    private readonly string _location;
    private readonly ResourceGroup _resourceGroup;
    private readonly string _dmzSubnetAddress;
    private readonly string _webSubnetAddress;
    private readonly string _dataSubnetAddress;

    public Dictionary<string, Subnet> Subnets { get; private set; }

    //the vnet address space and the subnets built elsewhere (e.g. by
    //VirtualNetworkBuilder) are passed in so the routes and the subnet
    //associations line up with the actual network
    public BuildRouteTables(string location, ResourceGroup resourceGroup, string vnetAddressSpace, Dictionary<string, Subnet> subnets)
    {
        _location = location;
        _resourceGroup = resourceGroup;
        Subnets = subnets;

        var octets = vnetAddressSpace.Split('.');
        const string subnetSize = "24";
        var firstTwoOctets = octets[0] + "." + octets[1];
        _dmzSubnetAddress = firstTwoOctets + ".0.0/" + subnetSize;
        _webSubnetAddress = firstTwoOctets + ".1.0/" + subnetSize;
        _dataSubnetAddress = firstTwoOctets + ".2.0/" + subnetSize;
    }
 
    public void BuildRouteTable(string routeTableName, string firewallIpAddress)
    {
		var dmzRoute = new RouteTableRouteArgs()
		{
			Name = "DMZ",
			AddressPrefix = _dmzSubnetAddress,
			NextHopInIpAddress = firewallIpAddress,
			NextHopType = Constants.VIRTUALAPPLIANCE
		};
 
		var webRoute = new RouteTableRouteArgs()
		{
			Name = "Web",
			AddressPrefix = _webSubnetAddress,
			NextHopInIpAddress = firewallIpAddress,
			NextHopType = Constants.VIRTUALAPPLIANCE
		};
 
		var dataRoute = new RouteTableRouteArgs()
		{
			Name = "Data",
			AddressPrefix = _dataSubnetAddress,
			NextHopInIpAddress = firewallIpAddress,
			NextHopType = Constants.VIRTUALAPPLIANCE
		};
 
		var routes = new InputList<RouteTableRouteArgs> { dmzRoute, webRoute, dataRoute };
 
		var routeTable = new RouteTable(routeTableName, new RouteTableArgs()
		{
			Location = _location,
			ResourceGroupName = _resourceGroup.Name,
			Routes = routes
		});
 
		var dmzAssociation = new SubnetRouteTableAssociation("DmzAssoc", new SubnetRouteTableAssociationArgs()
		{
			RouteTableId = routeTable.Id,
			SubnetId = Subnets["DMZ"].Id
		});
 
		var webAssociation = new SubnetRouteTableAssociation("WebAssoc", new SubnetRouteTableAssociationArgs()
		{
			RouteTableId = routeTable.Id,
			SubnetId = Subnets["Web"].Id
		});
 
		var dataAssociation = new SubnetRouteTableAssociation("DataAssoc", new SubnetRouteTableAssociationArgs()
		{
			RouteTableId = routeTable.Id,
			SubnetId = Subnets["Data"].Id
		});
    }
}
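
A rough wiring sketch, assuming the VirtualNetworkBuilder from the virtual network post; the firewall IP is a placeholder:

//hypothetical wiring inside a Stack constructor
var vnet = new VirtualNetworkBuilder(Location, _resourceGroup, VnetAddressSpace);
vnet.BuildVnetAndSubnets();
var routeBuilder = new BuildRouteTables(Location, _resourceGroup, VnetAddressSpace, vnet.Subnets);
routeBuilder.BuildRouteTable("core-routetable", "10.20.0.4"); //placeholder firewall IP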

Jun 24

Using Pulumi to create Azure Virtual Network

This code segment creates a virtual network in Azure. It’s assumed you already have Pulumi installed and connected to Azure.

To start, you need to know which Resource Group will house the virtual network (vnet), what address space you want to use, and what subnets you need. You need at least one subnet for resources that need an IP address.

We’ll start with a blank MyStack class. I create other classes for the resources so I don’t have every network piece, VMs, storage accounts, etc. all in the Stack class. I like the segregation.

//usings assumed for the Pulumi Azure (classic) provider
using System.Collections.Generic;
using Pulumi;
using Pulumi.Azure.Core;
using Pulumi.Azure.Network;

class MyStack : Stack
{
    private readonly ResourceGroup _resourceGroup;
    private readonly Dictionary<string, Subnet> _subnets;
 
    private const string ResourceGroupName = "seanrsgsetuptest";
    private const string Location = "southcentralus";
    private const string VnetAddressSpace = "10.20.0.0/17";
 
    public MyStack()
    {
        _resourceGroup = new ResourceGroup(ResourceGroupName,
                new ResourceGroupArgs
                {
                    Name = ResourceGroupName,
                    Location = Location
                });
 
       var vnet = new VirtualNetworkBuilder(Location, _resourceGroup, VnetAddressSpace);
       vnet.BuildVnetAndSubnets();
       _subnets = vnet.Subnets;
    }
}

I have a class VirtualNetworkBuilder because there can easily be a lot of code for making the vnet and subnets. I make the Subnets dictionary available after creation because other resources need a subnet by name and/or ID.

internal class VirtualNetworkBuilder
{
        private readonly string _location;
        private readonly ResourceGroup _resourceGroup;
        private readonly string _vnetAddressSpace;
        private readonly string _dmzSubnetAddress;
        private readonly string _webSubnetAddress;
        private readonly string _dataSubnetAddress;
        private readonly string _gatewaySubnetAddress;
 
        public VirtualNetwork Vnet { get; private set; }
        public Dictionary<string, Subnet> Subnets { get; private set; }
 
        public VirtualNetworkBuilder(string location, ResourceGroup resourceGroup, string vnetAddressSpace)
        {
            _location = location;
            _resourceGroup = resourceGroup;
            _vnetAddressSpace = vnetAddressSpace;
            Subnets = new Dictionary<string, Subnet>();
 
            Vnet = new VirtualNetwork("vnet", new VirtualNetworkArgs()
            {
                Location = location,
                ResourceGroupName = resourceGroup.Name,
                AddressSpaces = new[] { vnetAddressSpace }
            });
 
            var octets = vnetAddressSpace.Split('.');
            const string subnetSize = "24";
            var firstTwoOctets = octets[0] + "." + octets[1];
            _dmzSubnetAddress = firstTwoOctets + ".0.0/" + subnetSize;
            _webSubnetAddress = firstTwoOctets + ".1.0/" + subnetSize;
            _dataSubnetAddress = firstTwoOctets + ".2.0/" + subnetSize;
            _gatewaySubnetAddress = firstTwoOctets + ".3.0/" + subnetSize;
        }
 
        public void BuildVnetAndSubnets()
        {
            var dmzSubnet = new Subnet("DMZ", new SubnetArgs()
            {
                ResourceGroupName = _resourceGroup.Name,
                VirtualNetworkName = Vnet.Name,
                AddressPrefixes = _dmzSubnetAddress
            });
            Subnets.Add("DMZ", dmzSubnet);
 
            var webSubnet = new Subnet("Web", new SubnetArgs()
            {
                ResourceGroupName = _resourceGroup.Name,
                VirtualNetworkName = Vnet.Name,
                AddressPrefixes = _webSubnetAddress
            });
            Subnets.Add("Web", webSubnet);
 
            var dataSubnet = new Subnet("Data", new SubnetArgs()
            {
                ResourceGroupName = _resourceGroup.Name,
                VirtualNetworkName = Vnet.Name,
                AddressPrefixes = _dataSubnetAddress
            });
            Subnets.Add("Data", dataSubnet);
 
            //Azure requires the gateway subnet to be named exactly "GatewaySubnet"
            var gatewaySubnet = new Subnet("GatewaySubnet", new SubnetArgs()
            {
                ResourceGroupName = _resourceGroup.Name,
                VirtualNetworkName = Vnet.Name,
                Name = "GatewaySubnet",
                AddressPrefixes = _gatewaySubnetAddress
            });
            Subnets.Add("GatewaySubnet", gatewaySubnet);
         }
}

Jan 29

Terraform + Azure Availability Zones

While learning Terraform some time back, I wanted to leverage Availability Zones in Azure. I was specifically looking at Virtual Machine Scale Sets: https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_scale_set.html Looking at the documentation Terraform has, I noticed there is no good example of using zones. So, I tried a few things to see what was really needed for that field. While doing some research, I noticed there are many people in the same situation. No good examples. I figured I’d create this post to help anyone else. And, of course, it’s a good reminder for me too if I forget the syntax on how I did this.

Here’s a very simple Terraform file. I just created a new folder then a new file called zones.tf. Here’s the contents:

variable "location" {
  description = "The location where resources will be created"
  default = "centralus"
  type = string
}

locals {
  regions_with_availability_zones = ["centralus","eastus2","eastus","westus"]
  zones = contains(local.regions_with_availability_zones, var.location) ? ["1","2","3"] : null
}

output "zones" {
  value = local.zones
}

The variable ‘location’ is allowed to be changed from outside the script. But, I used ‘locals’ for variables I didn’t want to be changed from outside. I hard coded a list of Azure regions that have availability zones. Right now it’s just a list of regions in the United States. Of course, this is easily modifiable to add other regions.

The ‘zones’ local variable uses the contains function to see if the specified region is in that array. If so, then the value is a list of strings. Else it’s null. This is important. The zones field in Azure resources requires either a list of strings or null. An empty list didn’t work for me.

As it is right now, you can run ‘terraform apply’ and you should see the zones output. Change the value of the location variable to something not in the list (for example, terraform apply -var='location=southcentralus') and you may see no output at all, simply because the value is null.

Now, looking at a partial example from the Terraform documentation:

resource "azurerm_virtual_machine_scale_set" "example" {
  name                = "mytestscaleset-1"
  location            = var.location
  resource_group_name = "${azurerm_resource_group.example.name}"
  upgrade_policy_mode = "Manual"
  zones               = local.zones
  # ... remaining required configuration omitted
}

Now the zones field can be used safely when the value is either a list of strings or null. After I ran the complete Terraform script for VM Scale Set, I went to the Azure Portal to verify it worked.

Azure Portal - VMSS - Availability Zone Allocation

I also changed the specified region to one that I know does not use Availability Zones, South Central US.

Azure Portal - VMSS - Availability Zone Allocation

This proved to me that I can use a region with and without availability zones in the same Terraform script.

For a list of Azure regions with Availability Zones, see:
https://docs.microsoft.com/en-us/azure/availability-zones/az-overview

Jan 27

Removing an Azure Application Gateway

While working with Terraform scripts I created many Azure Application Gateways. Sometime after they were created I would delete them as they were only needed to prove my scripts were working with Azure DevOps. I was using Terraform functions and special *magic* to get things just right. Then I noticed one of my App Gateways refused to delete.

I was using the Azure Portal as I have done many times: simply select the resources, then Delete, and type ‘yes’ when prompted. After a few minutes they were all gone, as expected. Then one day, one of the App Gateways, along with its required resources like the Public IP Address, Virtual Network, etc., was still there after an attempt to delete.

Selecting the App Gateway showed the details, including the IP address, version, etc. But it also showed, in a large bar: “Failed”. I had seen it show “Deleting” before, but never Failed. I selected the Delete option again, and after many minutes nothing changed. So, I tried to delete it using PowerShell.

$gtw = Get-AzApplicationGateway -Name "dev-example-appgateway"

$gtw

Executing the two lines above showed the App Gateway’s Provisioning State as Failed and Operation State as Stopping.

Application Gateway State Failed

I did some research and tried several things:

Start-AzApplicationGateway -ApplicationGateway $gtw

Stop-AzApplicationGateway -ApplicationGateway $gtw

Set-AzApplicationGateway -ApplicationGateway $gtw

Each took a few minutes and either did nothing or gave an error message. One message caught my eye.

/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup1/providers/Microsoft.Network/publicIPAddresses/dev-example-public-ip used by resource

/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup1/providers/Microsoft.Network/applicationGateways/dev-example-appgateway is not in Succeeded state. Resource is in Failed state.

Looking closely, I noticed the issue may not be the App Gateway after all, but a dependent resource: the Public IP Address.

$pip = Get-AzPublicIpAddress -Name dev-example-public-ip -ResourceGroupName myResourceGroup1
$pip

The details of the PIP showed it was also in a failed state.

PIP Provisioning State Failed

All this time I thought it was the Application Gateway. So, with this extra knowledge, I tried a different approach. Some of the research suggested executing the Set command with no changes.

Set-AzPublicIpAddress -PublicIpAddress $pip

$pip

PIP Provisioning State Succeeded

It worked! Well, at least for that resource. I tried the Set again for the App Gateway and it didn’t show a change; instead, it gave the same error. Ok, let’s try the delete again just to see. Mind you, I had tried this command before, including with the -Force switch.

Remove-AzApplicationGateway -Force -Name dev-example-appgateway -ResourceGroupName myResourceGroup1

After a few minutes, it simply returned to a prompt. No error message this time. So, I went back to the Azure Portal and refreshed. It worked! Problem solved.

The significance here is that one resource can show a failed state when it’s really a dependent resource that is in trouble. I hope this helps someone else so you don’t spend the research time like I did.

Jan 25

Azure Cosmos DB Replication

While learning about Cosmos DB I had a lot of misunderstandings around consistency levels. And, it’s not surprising. Many people, certainly those coming from a SQL Server background like I did, have these misunderstandings. But, before I can jump into Cosmos DB consistency levels (covered in another post) I have to cover replication. This post is about the intricate details of replication that I had to wrap my head around for consistency levels to make sense. Although learning consistency levels does not require understanding replication first, it was helpful for me when developing use case scenarios.

With SQL Server it’s understood that data resides in databases that can be spread across File Groups. Those File Groups could simply be different files in the same folder, different folders, and even different drives. When it comes to replicated instances, the data could be spread across servers and even data centers in different states. But, Cosmos DB is very different from SQL Server. Not only is Cosmos DB not a relational database like SQL Server, but there isn’t a file structure to worry about.

Cosmos DB has, within each Azure region, 4 replicas that make up a “replica set”. One is a “Leader” and another a “Forwarder”. The other 2 are “Followers”. The Forwarder is a Follower but has the additional responsibility to send data to other regions. As data is received into that region, it is written to every replica. For a Write to be considered “committed” a quorum of replicas must agree they have the data. A quorum, as it pertains to the replicas in each region, is 3 out of the 4. This means that regardless of which region is receiving the data, the Leader replica and 2 others must signal they have received the data. This is also true regardless of the consistency level used on the Write operation.

Showing Cosmos system sending data to all four replicas at the same time.

Your code’s client connection does not have to worry about replica count, whether quorum has been met, or which replica does not yet have the data. The Cosmos DB system manages all of that. For our code, the Insert/Update/Delete operation has either succeeded or not.

Global Replication

Cosmos DB has a simple way of enabling global replication. Using the Azure Portal you can select 1 or more of the many data centers available all over the world. In a matter of minutes, another Cosmos DB instance is available with your data. For the discussion in this post I’m only going to cover Single Master. But Multi-Master, also known as Multi-Write Enabled, is available. *Just a note on that though: once enabled, you cannot turn it back off except with an Azure Support Ticket.

Data stored in Containers is split across Replica Sets by the Partition Key you provide when creating the Container. And each Replica in the Replica Set contains only your data; the Replica is not shared with other Azure customers. As the amount of data grows, the Cosmos system manages the partitioning and replication to other regions as needed. So, not only is Cosmos extremely fast, but the sizing and replication are handled for us automatically.

With data in 2 Replica Sets, for example, each region you enabled has an exact copy of both. A Replica Set in one region, together with its matching Replica Set in each other region, is known as a Partition Set. The Partition Sets manage the replication between regions for their respective Replica Sets.

Replication latency only pertains to global replication. The time to replicate data inside a Replica Set is so short that it’s not part of the latency concerns. However, from one region to another there is some latency; given the distance data must travel, there are inevitable delays. Microsoft, at least in the United States, has a private backbone for region-to-region networking. This affects your applications if you use the Strong consistency level.

Multi-Region Replication

The image above depicts that replication from the primary region to the other regions may have many miles to travel. The latency is between the regions. The only latency the client connection will notice is that of the replication to the furthest region away, because the replication is concurrent rather than sequential.

With all consistency levels except Strong, once the data has hit quorum (“committed”), the client connection is notified. At the same time, the data is replicated to the other regions as enabled. With Strong, that quorum is a little different: a “Global Majority” has to be met. With 2 regions, this means 6 of the 8 replicas must agree on the data. With 3 regions, at least 2 regions must agree. With 4 regions, at least 3 regions must agree. Basically, once you are using 3+ regions, N – 1 regions is the quorum. Again, this only applies to the Strong consistency level.

Oct 02

Using nuget.config to control Nuget package reference sources.

Sometimes in software development you have to work around interesting obstacles.

I created proof-of-concept code working with Cosmos DB. I needed to be able to run this code from my laptop as well as from a VM inside the Azure region. I copied the code so I could tweak things as needed and let the two client connections behave differently. It was using the Azure Cosmos SDK v3 (3.0.0 to be precise).  https://github.com/Azure/azure-cosmos-dotnet-v3

The challenge came when I noticed the CosmosClientOptions did not contain a way to change the consistency level. That version of the library retrieves the consistency from the Cosmos account, which means there is no way to choose a lower consistency level than the one defined on the account.

Thankfully, looking at the latest GitHub code, https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos/src/CosmosClientOptions.cs, shows that they have included a public property to set the consistency level. However, that code had not yet been released in a newer version of the Nuget package. I needed to modify the consistency level on the connection and wasn’t able to wait for their next release.
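
For reference, once that property is available, setting the level per client looks roughly like this; the endpoint and key are placeholders, and Session is just an example of a level weaker than the account default:

using Microsoft.Azure.Cosmos;

//sketch: requires an SDK build where CosmosClientOptions.ConsistencyLevel exists
var client = new CosmosClient(
    "https://myaccount.documents.azure.com:443/", //placeholder endpoint
    "<account-key>",                              //placeholder key
    new CosmosClientOptions
    {
        ConsistencyLevel = ConsistencyLevel.Session //e.g. weaker than a Strong account default
    });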

Choices

I did clone the code to my laptop and was able to compile it. I thought I could just change the reference in my code from Nuget to an assembly reference. But the assembly has so many other dependencies, and I didn’t want to chase down every one of them.

So, going back to the Cosmos code, I had it create a Nuget package locally. This works great. With my PoC code I just added another Nuget source pointing to that folder. That worked well locally; however, it doesn’t work for the code on the Linux VM, which can’t reference a folder on my computer. So, I did this instead.

Custom Nuget Package

I copied the Nuget package to the VM. It now resides in the bin/Debug folder. But, now I had to tell that code where to find the package. Nuget.config to the rescue.

I created a nuget.config file. The existence of the file tells Nuget where and how to retrieve packages during restore. I added a file source pointing to the bin/Debug folder, right after the reference to nuget.org. Packages are resolved against nuget.org first; since my locally built package isn’t there, the next source in the list is checked. That’s where it found my newly compiled Cosmos library that contains a way to adjust the consistency level.

<configuration>
    <packageSources>
         <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
         <add key="local" value="bin/Debug" />
    </packageSources>
</configuration>

This is the link to the Microsoft documentation on using nuget.config.
https://docs.microsoft.com/en-us/nuget/reference/nuget-config-file

Not only can you control where Nuget packages are pulled from, but you can also add credentials. This is very useful when pulling from a private Nuget source like Azure DevOps.
In my example, I have the nuget.org source as well as one called “local”. If that “local” source was actually in Azure DevOps, I would add credentials like:

    <packageSourceCredentials>
        <local>
            <add key="username" value="some@email.com"/>
            <add key="password" value="..."/>
        </local>
    </packageSourceCredentials>

Notice that the element “local” matches the package source name above.

If using an unencrypted password:

    <packageSourceCredentials>
        <local>
            <add key="username" value="some@email.com"/>
            <add key="ClearTextPassword" value="someExamplePasswordHere!123"/>
        </local>
    </packageSourceCredentials>

Complete file example:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <packageSources>
         <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
         <add key="local" value="bin/Debug" />
         <add key="privateAzureDevOpsSource" value="https://blahblah.com/foo/bar/example" />
    </packageSources>
    <packageSourceCredentials>
        <privateAzureDevOpsSource>
            <add key="username" value="some@email.com"/>
            <add key="ClearTextPassword" value="someExamplePasswordHere!123"/>
        </privateAzureDevOpsSource>
    </packageSourceCredentials>
</configuration>

Conclusion

The point here is that you can take code, make a private Nuget package, and then make it accessible wherever you need it. The nuget.config file makes that possible.

Update

After Microsoft released a new version that included the consistency level option, I reverted to using their latest package version. My custom “fix” was meant to be temporary anyway.

 
