Is Desired State Configuration designed for the masses? How about scalability?

Hello!
Lately I have been playing heavily with Desired State Configuration, and some questions have come up.

The first question is: how about scalability?

Is Desired State Configuration designed for the masses? Or only for use with a small, humanly manageable number of servers?

In the following write-up I use the term Configuration-function for the new PowerShell keyword, to distinguish it from the term configuration, meaning the current state of a system (node).

I don’t see any scalability in DSC. At the end of the road you always have a 1:1 relationship.

I know DSC is designed to be used even outside of Active Directory, in the cloud, and with systems that have no operating system (BIOS-level devices, for example), but here I am speaking only about systems with operating systems such as Windows or Unix.

1.) Node to Pull-Server relationship

Every system (node) can have only one Pull-Server, a strict 1:1 relationship. There is no concept, as in Active Directory, of using a group of servers to serve the configurations.
If you have 100,000 nodes, do you really want to maintain which node should call which Pull-Server?
We could probably use the Netlogon share for a DSC SMB Pull-Server, but we cannot set the LCM to use the Netlogon share of the nearest domain controller.
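To illustrate that 1:1 binding: in the v1.0 Local Configuration Manager, the pull server is a single URL baked into the node's meta-configuration. This is only a sketch; the node name, GUID, and URL below are made-up placeholders.

```powershell
# Meta-configuration sketch (DSC v1.0 / WMF 4.0 syntax). Each node is bound
# to exactly one pull server URL -- there is no list or group of servers.
Configuration BindNodeToPullServer
{
    Node 'Server01'
    {
        LocalConfigurationManager
        {
            RefreshMode               = 'Pull'
            ConfigurationID           = '11111111-2222-3333-4444-555555555555'
            DownloadManagerName       = 'WebDownloadManager'
            DownloadManagerCustomData = @{
                ServerUrl = 'https://pull.contoso.com/PSDSCPullServer.svc'
            }
        }
    }
}

# Generate Server01.meta.mof and apply it to the node's LCM
BindNodeToPullServer -OutputPath C:\DSC\Meta
Set-DscLocalConfigurationManager -Path C:\DSC\Meta
```

Because `ServerUrl` is a single value, pointing many nodes at "the next available server" has to be solved outside DSC, for example with a load-balanced DNS name.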

2.) Configuration-function to .mof file to Node relationship

The Windows PowerShell Configuration-function itself scales in either direction: you can design one Configuration-function for one system (node), a 1:1 relationship, or one Configuration-function that produces .mof files for many systems (nodes), a 1:n relationship.
That is fine in principle, but I do not recommend the 1:n Configuration-function. One day you will want one of those systems to have a different config than another, and so on. You then have to change the 1:n Configuration-function every time; it grows and becomes cluttered, which is very hard for a human being to maintain.
My suggestion is to always keep a 1:1 relationship: one Configuration-function per system (node). At the end of the day a node can have only one GUID.mof file on the Pull-Server, and only one current configuration in the $env:SystemRoot\System32\Configuration folder.
So it is easy to maintain one Configuration-function for one GUID.mof file for one node: 1:1:1.
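For comparison, here is a sketch of the 1:n style using configuration data. The node names and the IIS feature are invented for illustration; the point is that one Configuration-function emits one .mof per node listed in $AllNodes.

```powershell
Configuration BaseServer
{
    # One Configuration-function, many nodes: one .mof is produced for
    # every entry in $AllNodes (the 1:n relationship described above).
    Node $AllNodes.NodeName
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

$configData = @{
    AllNodes = @(
        @{ NodeName = 'Server01' }
        @{ NodeName = 'Server02' }
    )
}

# Produces Server01.mof and Server02.mof in C:\DSC\Configs
BaseServer -ConfigurationData $configData -OutputPath C:\DSC\Configs
```

As soon as Server01 needs something Server02 does not, this single function starts accumulating per-node exceptions, which is exactly the clutter argued against above.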

3.) Current configuration to history relationship

Even if a node is in pull mode, you can create push Configuration-functions and push hotfix configurations to the nodes with the –Force parameter.
Or you have administration colleagues who do funny things and push configurations around ;-).
So you have to watch that the current configuration of the system (node) has not drifted away from the desired configuration on the Pull-Server, because such bypass configurations leave the node in a different state than the desired configuration describes.
Here we need a history mechanism that can reproduce every step a node has gone through. With that history you could set up other systems identical to the current one.
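Such a bypass push looks like this; the path and computer name are placeholders:

```powershell
# Push a hotfix configuration to a node that normally runs in pull mode.
# -Force applies the pushed config even though the LCM is set to Pull,
# which is exactly how the node can drift from its pull server's desired state.
Start-DscConfiguration -Path C:\DSC\Hotfix -ComputerName Server01 `
                       -Force -Wait -Verbose
```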

I hope my English was comprehensible to you! And my ideas too!
Greets Peter

So where do you see scalability in Desired State Configuration ?

Johan Akerstrom wrote on Facebook:

  1. Use DSC with DFS or NLB for the DSC WebService
  2. DSC has nested configurations. Use them. How you categorize your configurations with exceptions is your responsibility in DSC. There might be configuration limitations in the DSC DSL but this is just to build the MOFs. Do you see scalability issues with the LCM?
  3. Here I agree.
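A minimal sketch of the nested configurations Johan mentions in point 2 (all names invented): a reusable role configuration is called from inside a node block, so categorization and exceptions live in small composable pieces rather than one cluttered 1:n function.

```powershell
Configuration WebRole
{
    # Reusable building block shared by many node configurations
    WindowsFeature IIS
    {
        Ensure = 'Present'
        Name   = 'Web-Server'
    }
}

Configuration FullServer
{
    Node 'Server01'
    {
        # Nested configuration call: WebRole's resources are expanded
        # into this node's .mof at compile time
        WebRole CommonWeb { }
    }
}
```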

One thing to remember is that DSC is very much a 1.0 product. By that I mean that it's the initial release. I expect future releases to build and expand on what we have now.

In terms of scalability, I don’t think the number of servers is as important as the rate of change of those servers. If you build servers to perform a particular task and then leave them to get on with that task, they aren’t going to have much impact on your DSC servers.

In terms of configurations being changed by others - put them under change control! If you don’t have control of your environment it doesn’t matter what technologies you use - they will be compromised and you will get failures in your processes.

Hi Richard, thank you for your answer! Really appreciate it!

Yes, I know it is a v1.0 product. I am discussing my thoughts here before I make suggestions on MS Connect, and perhaps the PowerShell team reads along here :wink:
DSC is so powerful, and I want to use it to manage Windows 7 or 8 client OSes, not server OSes, so I am asking about scalability!
My thought was: if I have only one Pull-Server and that many Windows clients, what if the Pull-Server goes up in smoke? Where is Plan B?
But Johan Akerstrom pointed me in the right direction.

In terms of configurations being changed by others – put them under change control! If you don’t have control of your environment it doesn’t matter what technologies you use – they will be compromised and you will get failures in your processes.

Full ACK :wink:
But I think an (optional) history would not cost much disk space if you record it as a zip file or so, and with a good history you can reproduce the machine!
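A crude sketch of such a history: copy the node's live .mof aside with a timestamp before each new configuration is applied. Current.mof under System32\Configuration is the standard LCM location; the C:\DSCHistory folder is an invented example.

```powershell
# Sketch: keep a timestamped copy of the node's current configuration
# so earlier states can be reproduced later. C:\DSCHistory is a
# made-up location; adjust to taste (or zip the copies to save space).
$historyDir = 'C:\DSCHistory'
if (-not (Test-Path $historyDir)) {
    New-Item -ItemType Directory -Path $historyDir | Out-Null
}

$stamp  = Get-Date -Format 'yyyyMMdd-HHmmss'
$source = Join-Path $env:SystemRoot 'System32\Configuration\Current.mof'

if (Test-Path $source) {
    Copy-Item $source (Join-Path $historyDir "Current-$stamp.mof")
}
```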

greets Peter

Hey Peter,

Regarding scale, the pull service is very scalable, since the majority of the computational work is done out of band in generating the configs. The main scalability concern for the pull server is how many servers will check in at the same time and need to pull new modules. There you are capped by bandwidth. BranchCache could help there…

In regards to the questions directly -

1 - Use a load balancer and dns name to hide which pull server you are connecting to. You can also use subnet prioritization to direct clients to a local pull server.

2 - 1 to 1 is fine… you really shouldn’t have to care about the configuration mofs. The scripts you build to generate those mofs are more important.

3 - Yeah, I don’t like how pushing a config changes the local config manager. I’m working on a resource that helps check other nodes to make sure they are configured correctly, but that’s also another good task for a monitoring system to check.