Azure Automanage Machine Configuration Introduction – Configuration as Code

Managing server configurations in hybrid or on-premises environments can be challenging, with outdated tools and manual processes leading to errors and inefficiencies. This post explores how to simplify configuration management with Desired State Configuration (DSC) and Azure Automanage Machine Configuration.

Ever noticed this Machine Configuration option under VMs and wondered how it works?

Me too! And Microsoft’s documentation is a bit scattered, with outdated information about using Azure Automation accounts to deploy DSC MOF files mixed in with Azure Policy guidance, and before you know it you’re knee-deep in JSON files. What?!

Let’s take it back a step and talk about Configuration-as-Code (CaC). Infrastructure-as-Code (IaC) seems to have stolen all the thunder, and oftentimes it also includes configuration, especially when talking about things such as container deployments. Typically, however, IaC refers to things such as networks, virtual machines, load balancers, and connection topologies.

In a nutshell, IaC avoids manual configuration and enforces consistency by representing desired environment states as well-documented code in declarative formats. We can apply practically the same definition to CaC, except that it targets the configurations running inside of virtual machines.

This “well-documented code” typically lives in a code repository of some kind. Often this is version controlled through Git, and it doesn’t really matter where it is hosted (GitHub, Azure DevOps, and GitLab are all common examples). Version tracking automatically records who changed what, and when, while also supporting collaboration features such as branches and pull requests, so that no one can change the configuration without proper approval and checks in place.

What if you’re rocking a bit more ‘classical’ infrastructure? Good old Active Directory Domain Services, GPOs, probably some hybrid stuff, but lots and lots of servers that run all sorts of applications! Databases, ERP systems, maybe you’ve even still got on-premises Exchange and/or SharePoint. How are these configurations managed? ‘Painfully, by hand’ is often the answer, even though we’ve had a solution from Microsoft on hand for years! It’s called Desired State Configuration (DSC), which lets your entire infrastructure be as simple as a text file (and therefore, CaC) – sort of.

Brief history

DSC has been around for a while, and if you’ve ever looked at it, you’ll probably agree it’s not the most user-friendly thing to get into. Rest assured, this blog post won’t get too deep into it, but you do need to be aware that DSC is being used in the background and that there are various versions floating around:

  • DSC 1.1 – Legacy version originally shipped in Windows PowerShell 5.1
  • DSC 2.0 – Current version shipped in PowerShell 7.1
  • DSC 3.0 – Preview version with cross-platform features, currently supported by the Machine Configuration feature of Azure Automanage.

This last point is important, as there are different implementations of DSC found within Azure. For example, Azure Automanage Machine Configuration (as shown in the introduction) is the latest and greatest, using v2 for Windows and v3 for Linux.

However, if you create an Azure Automation Account, you’ll see there is Configuration Management and State configuration (DSC) listed there too!

This is an older version of DSC, and while it’s not deprecated yet, it will be soon: Microsoft will officially retire it in September 2027 and recommends migrating to the new Machine Configuration instead.

To add even more confusion, Machine Configuration used to be called ‘Guest Configuration’ and was renamed a couple of years ago, so much of the documentation still references that name.

The problem we’re trying to solve

Configuration management: a simple enough concept, and something we’ve been able to do through the likes of GPOs for decades. But GPOs don’t exist in the cloud. Instead, we have Intune and its policies, but those really only cover end-user devices, so what about servers?

Even GPOs only get you so far (you don’t really deploy applications using GPOs, right?), which ends up leading to administrators logging into individual systems and making manual changes. Microsoft’s Configuration Manager is an alternative, but comes with its own problems (have you looked at those licensing costs?).

In some instances, these changes are made as the result of a change request, meaning at least some tracking is done, but for any system running long enough it becomes impossible to know what the exact configuration actually is – or whether it’s still compliant with security or regulatory requirements (who hasn’t quickly disabled the firewall just to test if that makes things work, only to forget to turn it back on later?).

When I come to a customer to solve a problem and ask, “What is the actual configuration of that server?”, nine times out of ten no one can describe it. At best, there is a build document used for the initial setup and a ticket history of how it has evolved from that point over time, which isn’t very helpful and fails to give a solid understanding of the server in a troubleshooting scenario.

That’s where DSC comes in, throughout its various versions and deployment models/modes. Push or pull, from a DevOps project, locally hosted or in the cloud – it all does the same thing: a configuration (MOF) file is provided to the machine defining, declaratively, what should be configured, and the Local Configuration Manager (LCM) figures out how to do that all by itself.
The real power here is that the machine can monitor itself for configuration drift and automatically change settings back (autocorrect).

Configurations as text files

DSC uses Managed Object Format (MOF) files to define configurations declaratively (for now… DSC 3.0 is moving towards JSON). But if you’ve ever looked at a MOF file, it’s not very read-friendly:
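To give a sense of it, here is a hand-written approximation of a compiled MOF instance for a single File resource (the exact metadata fields and values vary per DSC version and compiler, so treat this as illustrative only):

```
instance of MSFT_FileDirectoryConfiguration as $MSFT_FileDirectoryConfiguration1ref
{
    ResourceID = "[File]BlogPostFolder";
    DestinationPath = "C:\\BlogPost";
    Ensure = "Present";
    Type = "Directory";
    ModuleName = "PSDesiredStateConfiguration";
    ModuleVersion = "1.0";
    ConfigurationName = "DemoConfig";
};
```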

Luckily, you’ll never need to write these yourself, as we have a host of PowerShell tools at our disposal that make compiling these MOF files much easier.
It is important to understand that these are the files that are delivered to any given server (or workstation, if you want) and fed into the Local Configuration Manager (LCM) to perform the required changes.
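The authoring side is much friendlier: a DSC configuration is a PowerShell script that compiles into one MOF per node. As a minimal sketch (assuming the PSDesiredStateConfiguration module is available, and using a hypothetical folder resource):

```powershell
# Minimal DSC configuration sketch; compiling it produces .\DemoConfig\localhost.mof
Configuration DemoConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Declare the desired state: a folder that must exist on the C: drive.
        File BlogPostFolder {
            DestinationPath = 'C:\BlogPost'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Invoking the configuration function compiles the MOF into the given output path.
DemoConfig -OutputPath .\DemoConfig
```

The LCM then takes the resulting MOF and works out how to reach (and keep) that state.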

You can fairly easily begin to play with DSC by compiling basic MOF files and applying them to your local machine, or copying them to other machines to apply configurations there: How to develop a custom machine configuration package – Azure Automanage | Microsoft Learn
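Following that guide, the packaging step looks roughly like this – a sketch assuming the GuestConfiguration module from the PowerShell Gallery and the compiled localhost.mof from earlier; paths and names are placeholders:

```powershell
# Sketch only: assumes the GuestConfiguration module and a compiled MOF exist.
Install-Module -Name GuestConfiguration -Scope CurrentUser

# Wrap the compiled MOF (plus any required resource modules) into a .zip package
# that Machine Configuration can consume.
New-GuestConfigurationPackage `
    -Name 'DemoConfig' `
    -Configuration './DemoConfig/localhost.mof' `
    -Type AuditAndSet `
    -Path './package'
```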

If you follow the instructions, you’ll realize that the configuration file prior to compilation is only for a single machine. This works fine in a small environment but does not scale well to larger environments where you may have lots of servers with overlapping roles (and thus, configurations). You’ll end up doing a ton of copying/pasting and each configuration file can become increasingly long.

You’ll also soon realize that creating configuration packages by hand is not very scalable. It requires an authoring environment with all the right PowerShell modules installed, and you have to compile the configuration, upload a .zip file to a storage account, and create an Azure Policy manually – all of which is very prone to human error. Luckily, we can automate everything!
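For reference, those manual steps boil down to something like the following (a hedged sketch; the resource names are placeholders, and parameters may differ between GuestConfiguration module versions):

```powershell
# Upload the package to a storage account and capture its content URI (placeholder names).
$package = Publish-GuestConfigurationPackage `
    -Path './package/DemoConfig.zip' `
    -ResourceGroupName 'rg-machineconfig' `
    -StorageAccountName 'stmachineconfig'

# Generate an Azure Policy definition that applies and autocorrects the package.
New-GuestConfigurationPolicy `
    -PolicyId (New-Guid).Guid `
    -ContentUri $package.ContentUri `
    -DisplayName 'Demo machine configuration' `
    -Description 'Applies the DemoConfig package.' `
    -Path './policies' `
    -Platform Windows `
    -PolicyVersion '1.0.0' `
    -Mode ApplyAndAutoCorrect
```

Every one of these steps is exactly the kind of thing a pipeline can do for us.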

How can we solve the problem?

Now that we know a bit more about DSC and how it works with MOF files (which, remember, are just text), let’s begin to look at the larger solution: DevOps. To start, because everything is just text, we can put everything nicely into a Git repository. This gives us the benefit of version history and tracking, as well as a set of tried and proven collaboration tools so that you don’t have to be the only person in your team making configuration packages anymore.

Automation

Using pipelines, we can introduce automation to build the configuration packages outside of local workstations. This ensures the build environment is the same each time and that there aren’t any lingering artifacts that can affect the final build.
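As an illustration, a minimal Azure DevOps pipeline for this could look like the following – entirely hypothetical names, and the DscWorkshop blueprint discussed later ships a much fuller pipeline:

```yaml
# Hypothetical build stage: compile and test configuration packages on a clean agent.
trigger:
  branches:
    include: [main]

pool:
  vmImage: windows-latest

steps:
  - pwsh: ./Build.ps1 -Tasks build, test   # assumed build script; compiles MOFs and runs tests
    displayName: Build and test configuration packages

  - publish: output                        # hand the compiled artifacts to the deploy stage
    artifact: MOF
```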

Ability to make smaller changes, more often

Oftentimes, change requests take a long time to get through all the checks and balances within an organization, meaning that many smaller changes get bundled together. This results in larger changes being made, which makes troubleshooting more difficult. By optimizing the change process, it’s possible to rapidly iterate on smaller changes, increasing troubleshooting efficiency and traceability.

Versioning

A common enough practice in writing software (and now also in Infrastructure-as-Code) but severely lacking in the infrastructure configuration scene. Every small change can have its own version. If something unexpected happens, we know what changed and with which version, and we can easily revert to a known-good configuration.

Building trust

Automated testing and deployments leave less room for human error. The build process can preemptively test and check for configuration problems – for example, two servers with the same IP address shouldn’t be possible. It also ensures the exact same process is used for deployment every time.

Read-only Friday enforced

Using a pipeline allows setting rules for when deployments can happen. These rules can be approval flows or specific days of the week/month when the pipeline is allowed to deploy, ensuring that configurations only ever get pushed out when you decide.

Enough theory, what does it look like?

Luckily, some smart people at Microsoft have created a blueprint for deploying an entire pipeline solution, called DSCWorkshop: dsccommunity/DscWorkshop: Blueprint for a full featured DSC project for Push / Pull with or without CI/CD

Some of the documentation is a bit dated, and not much of it references Azure Automanage Machine Configuration, so I ended up stripping out quite a bit and made a few changes to get it to the state you’ll see in this blogpost, but it is still a wonderful starting block.
What is left is a Pipeline that runs automated tests and automatically deploys a Machine Configuration policy to the specified tenant.
It also features some more advanced concepts such as DSC Composite resources and Datum, both of which would require whole blog posts to themselves.

On towards the juicy stuff! While the configuration of all of this is a bit too big to get into in an already long blog post, I do want to briefly showcase a demo environment where I’ll make a configuration change to an Azure Arc-onboarded server VM.

We start in the DevOps repository, where I’ve defined a node called DC02 (reflecting the hostname of the machine), and I’m going to define a new folder to be created on the C: drive called “BlogPost”, like so:
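For context, such a node file is plain YAML. Sketched from the DscWorkshop layout, it could look something like this – the FilesAndFolders key assumes a composite resource along the lines of the community CommonTasks module, so treat the exact path and property names as assumptions:

```yaml
# Hypothetical Datum node file, e.g. source/AllNodes/Dev/DC02.yml
NodeName: DC02
Environment: Dev

FilesAndFolders:
  Items:
    - DestinationPath: C:\BlogPost
      Type: Directory
      Ensure: Present
```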

Following a commit, the pipeline kicks off immediately, as I haven’t configured any further approvals or requirements in this demo environment.

Having a look at the machine in Azure, we can see it’s an Azure Arc machine and that it’s currently fully compliant with version 1.6.5 of an earlier policy I pushed to it:

During the pipeline run, it automatically increases the version number, compiles the required files, publishes them to a storage account, and makes API calls to Azure to apply the new policy and kick off a remediation:
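Under the hood, those API calls amount to something like the following (a sketch using the Az PowerShell modules; the scope, names, and file paths are placeholders):

```powershell
# Sketch: register the generated policy, assign it, and trigger remediation.
$definition = New-AzPolicyDefinition `
    -Name 'demo-machine-config' `
    -Policy './policies/DeployIfNotExists.json'

# DeployIfNotExists policies need a managed identity to remediate with.
$assignment = New-AzPolicyAssignment `
    -Name 'demo-machine-config' `
    -PolicyDefinition $definition `
    -Scope "/subscriptions/$subscriptionId" `
    -IdentityType SystemAssigned `
    -Location 'westeurope'

# Kick off remediation for existing machines (Az.PolicyInsights module).
# Depending on the Az.Resources version, the assignment ID property may be
# .Id or .PolicyAssignmentId.
Start-AzPolicyRemediation `
    -Name 'demo-remediation' `
    -PolicyAssignmentId $assignment.Id
```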

Upon a successful build we can see DC02 now go into an ‘Audit’ state with the new version number:

Once the machine reports back with a compliance state, it will tell us which configurations are out of place and will consequently go into an ApplyAndAutoCorrect state after some time:

After some more time (typically within ~30 minutes), it will state that it is now compliant:

And we can see exactly which rules it is compliant with:

On the machine itself we can verify that the folder has now been created. If I were to delete this folder, it would just reappear again within ~10 minutes as the machine autocorrects.

Another cool thing to check is the Resultant Set of Policies file that also gets generated by the pipeline and published as an artifact:

This is the result of Datum; here we can see the folder specified to be created and exactly which configuration file it was defined in. This begins to dig into Datum a bit, but it’s super cool! This is the file you can hand to someone when they ask what the configuration of a specific server is.

The end goal

Imagine a perfect world where all your servers are centrally defined in configuration files:

  • It doesn’t matter where they’re hosted – as long as they’re Arc-onboarded, they can be on-premises or in other clouds.
  • Through Azure policy they can be audited and configured automatically.
  • No more GPOs needed!
  • Security compliance can be both enforced and guaranteed (a disabled firewall will just turn back on again as long as it’s defined in the configuration). Bring it on, NIST, CIS, or NIS!
  • Any configuration changes have to be made through the configuration files where tests are applied and changes approved prior to deployment.
  • A full history of all changes made is automatically tracked for future reference.
  • Multiple teams can collaborate on making sound configuration changes on shared infrastructure, reducing problems and troubleshooting time and increasing knowledge sharing and feedback.
  • Possibility of rolling back to an earlier revision of a known-good configuration.
  • Did I mention it is cross-platform compatible? Linux machines can also have their configurations managed through Azure Machine Configuration, but that will be another blog post.

Have questions about Azure Automanage Machine Configuration or interested in more blog posts on this topic? Reach out!
