DSC pull server with Git, multiple environments, and an automated flow

Hi Guys,

I need some advice from you about setting up our infrastructure. I have general knowledge of DSC and PowerShell, but I need your help more with the architectural approach and decisions.

I have two kinds of server groups - production and dev (we share those servers between two AWS accounts) - and each contains a few environments like prod, preview, or dev1, dev10, etc. We want one DSC pull server for everything. Inside those environments we have different node types - DB, WSUS, etc. Each configuration should be automated with PowerShell DSC and stored in a Git repository. Dev servers should be assigned to the DEV branch and prod servers to the MASTER branch of that repository.

I need to assign those configurations to:

  • PROD or DEV groups of server
  • Type of server
My questions are:
  1. What do you think about the above approach?
  2. How do I make DSC work with Git code and branches, and automate the generation of MOF files?
My first idea is:
  1. Create one folder with the DEV code and generate the configurations for dev servers from it
  2. Create a second folder with the MASTER code and generate the configurations for prod servers from it
To do this I need to write some scripts and some Git workflow glue. Everything should be automated: I just commit changes to the Git repo and the whole flow should run.
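The folder-per-branch idea could be sketched roughly like this (the repo URL, paths, configuration name, and the publish helper are all assumptions - adjust to your own setup):

```powershell
# Sketch: map a Git branch to a working folder, then compile MOFs for the
# matching environment. A CI job would run this on every commit.
param(
    [ValidateSet('master', 'dev')]
    [string]$Branch = 'dev'
)

$repo    = 'https://git.example.com/infra/dsc-configs.git'   # assumption
$workDir = "C:\DscBuild\$Branch"
$mofOut  = "C:\DscBuild\MOF\$Branch"

# Fetch the branch into its own folder (one folder per environment)
if (-not (Test-Path $workDir)) {
    git clone --branch $Branch $repo $workDir
} else {
    git -C $workDir pull origin $Branch
}

# Dot-source the configuration and compile MOFs for that environment
. (Join-Path $workDir 'Configurations\WebServer.ps1')        # assumed layout
WebServer -ConfigurationData (Join-Path $workDir "ConfigData\$Branch.psd1") `
          -OutputPath $mofOut

# Copy the compiled MOFs (plus checksums) to the pull server; the
# xPSDesiredStateConfiguration resource kit ships a helper for this step
Publish-DscModuleAndMof -Source $mofOut
```

This is just the compile-and-publish step; the replies below argue for a different branching model, but the mechanics stay similar.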

What do you think? Is there another approach that achieves a reliable environment? For now we need to build our own pull server, and we are in AWS. Maybe in the future we can think about Azure and the Azure Automation DSC feature - maybe it would be easier there.


If I understand correctly, you need to compile and apply DSC configurations from Git.
You have two server groups and several server types.

What you need is a kind of CD platform where you can trigger an action on an agent on each pull server, depending on a pull request in a branch (Dev or Prod).

Azure DevOps can do that; so can Jenkins and other tools.



I think separating PROD and DEV into branches is not great (I’ve tried that before).
The best approach is to build the artefacts for the PROD and DEV environments at the same time, and just use a different release ‘cadence’ or schedule to the pull server (i.e. use an intermediary share/repository for those artefacts).

One potential issue you will face otherwise is when you have released a version to PROD, started the next piece of work on DEV, and then realise you need to push a hotfix through DEV and PROD without releasing the new DEV work…
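The build-once, promote-later idea can be sketched very simply (the share paths, version number, and staging layout are made up for illustration):

```powershell
# Sketch: build artefacts once from a single branch, then promote the SAME
# versioned artefact through environments on different schedules.
$version  = '1.4.2'
$artefact = "\\build\drop\DscArtefacts\$version"   # built once, tested once

# DEV receives every build as soon as it is produced
Copy-Item -Path $artefact -Destination '\\pull01\DscStaging\DEV' -Recurse -Force

# PROD receives the same, already-validated artefact later, on its own
# cadence (e.g. behind an approval gate) - not from a separate PROD branch
Copy-Item -Path $artefact -Destination '\\pull01\DscStaging\PROD' -Recurse -Force
```

Because both environments consume the identical artefact, a hotfix is just an older-style build promoted quickly, with no branch gymnastics.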

Now when you say that you need to assign configurations to PROD and DEV groups of server, and Type of Servers, I suppose you’re hitting the classic Configuration Data Problem, and you’d like to start composing DSC roles (and the video at the end)…

Feel free to have a look at my Datum module to manage this into git, but if you’d like to see a more end-to-end pipeline, then have a look at the DSCWorkshop project Raimund, Jan-Hendrik and I created.

And yes, for compiling MOFs from git you’ll need a CI tool, such as Azure DevOps / Pipelines as Olivier mentioned.



I read through the linked articles (they really hit the nail on the head for many issues we are dealing with while trying to move our infra to DSC), but there is one thing I fail to grasp.

I think I get the part where you describe how to deal with roles in the ConfigurationData + Datum. That does indeed solve one of our problems, where we have to deal with websites that require specific IPs assigned to them. But for that to work, am I right in thinking that, using this method, almost the only things we should find in a configuration file are calls to DSC composite resources that wrap around DSC resources (both custom and official ones), and that we will almost never find a direct call to a DSC resource?

I have cloned your DscInfraSample to try to figure it out, but I’d like to hear whether I’m completely wrong in thinking that.


I’m not too sure I understand your question.

What do you call a configuration file there? The role YAML file?
It’s not that you have to use only DSC composite resources - you could potentially call DSC resources directly (minus a limitation on the way the resource ‘ExecutionName’ is set by the Get-DscSplattedResource command, which you could override) - but it’s not the recommended approach.
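For readers less familiar with composites: a composite resource is just a configuration shipped as a `.schema.psm1` inside a module, wrapping the underlying DSC resources behind a parameterised interface. A rough illustration (the module, site, and parameter names are all invented):

```powershell
# Sketch of a composite resource, e.g.
# MyCompositeConfigs\DSCResources\AppXWebServer\AppXWebServer.schema.psm1
Configuration AppXWebServer {
    param(
        [Parameter(Mandatory)]
        [string]$SiteName,

        [Parameter(Mandatory)]
        [string]$BindingIP          # per-node data arrives as a parameter
    )

    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName xWebAdministration

    WindowsFeature IIS {
        Name   = 'Web-Server'
        Ensure = 'Present'
    }

    xWebsite Site {
        Name         = $SiteName
        PhysicalPath = "C:\inetpub\$SiteName"
        BindingInfo  = MSFT_xWebBindingInformation {
            Protocol  = 'HTTP'
            Port      = 80
            IPAddress = $BindingIP
        }
        DependsOn    = '[WindowsFeature]IIS'
    }
}
```

The control repo then only passes data (`SiteName`, `BindingIP`) into an already-tested, versioned module.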

By building composites, you package the code up as an artefact with a pipeline of its own, where you can test and validate it with typical data (I use Test-Kitchen).
Then, when you trust that artefact, you reference it in your control repo and only provide different data (aka input), which you validate in your environment (but you already trust the code).

The idea is to have small components, loosely coupled, but with high cohesion, to reduce the scope of a change.

Don’t get too hung up on such details, experiment and try to find what works for you.

Also, I did a detailed reply on the flow in this post: https://powershell.org/forums/topic/blue-green-methodology-with-dsc/

You might find some useful stuff there.

By configuration files I meant a server profile (role), yes. Basically, what we are doing right now is: if we had three web servers with the same web app installed, we would create a Configuration named AppXWebserver and call it a “profile”. That configuration file, AppXWebserver.ps1, would then include calls to DSC resources (mostly composites) that would configure the server correctly. We were hitting many limitations with that model, mostly code that would eventually duplicate itself.

But after taking a look at the DSCWorkshop repo and your Datum documentation, I now realise this is taken to a whole new level, where the ConfigurationData is built dynamically and can go down to the node itself (another really big problem we were facing - people really love to name their cattle). From what I can understand so far, a configuration called AppXWebserver.ps1 is now irrelevant, since the complete profile configuration is also built dynamically from the roles specified in the YAML files?
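A greatly simplified sketch of that idea (this is not the actual Datum/DSCWorkshop implementation, which resolves roles from layered YAML via Get-DscSplattedResource - node names, roles, and the composite name are invented):

```powershell
# One root configuration composes each node from the roles in its data,
# instead of a hand-written AppXWebserver.ps1 per profile.
$configData = @{
    AllNodes = @(
        @{ NodeName = 'web01'; Roles = @('AppXWebServer'); BindingIP = '10.0.0.11' }
        @{ NodeName = 'web02'; Roles = @('AppXWebServer'); BindingIP = '10.0.0.12' }
    )
}

Configuration RootConfiguration {
    Import-DscResource -ModuleName MyCompositeConfigs   # assumed composite module

    Node $AllNodes.NodeName {
        # Each role name maps to a composite resource; Datum does this
        # dynamically, here it is spelled out for a single role.
        if ($Node.Roles -contains 'AppXWebServer') {
            AppXWebServer WebRole {
                SiteName  = 'AppX'
                BindingIP = $Node.BindingIP
            }
        }
    }
}

RootConfiguration -ConfigurationData $configData -OutputPath C:\DscBuild\MOF
```

Adding a node or a role then becomes a data change, not a new .ps1 file.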

I’m still digging through the code to fully grasp how everything is built and interacts, but this looks like it would solve 90% of the drawbacks we are facing right now.

One question: is there a particular reason to have the composite DSC resources (under DSC_Configurations) installed from a Chocolatey package instead of simply keeping them in the Git repo itself?

And thanks for sharing all this, it’s a real eye opener!

Yes, DSC composite resources (aka configurations) should be artefacts built in their own pipeline, in isolation, to reduce the scope of change and increase overall trust in the system. Only references to them should exist in the control repo.

It might feel faster not to do so initially, but it’s just tech debt waiting to bite you (I made that mistake myself).

Decoupling those from config data will also help you with versioning, single piece flow, release cadence, merging & branching strategies…

Like any debt, it’s not necessarily bad in the short term, but it should not be left unmanaged.

In short, it’s ok while you play on your laptop to get the hang of it, but don’t do this in production.