From WMF4 to WMF5, and its implications for a deployed DSC

There seems to be a lot of (semi-)published news about new features and options in v5. Before running it in test I figured I’d read the release notes, and I ran into a few things I found … curious, in a couple of categories:

META MOF:
So the documentation mentions creating a “special” configuration type of “meta” so that you can push LCM settings to nodes before pushing the usual configuration. I found this whole section confusing, quite frankly.

Currently I can already add the LocalConfigurationManager {} block into my configuration and it will generate two MOFs: a meta.mof with the LCM data and the “expected” MOF. In 5.0, am I going to have to generate the meta.mof separately? It says that for now, due to backwards compatibility, the old approach will still work, but overall it’s a bit of a step back as far as I can tell if they sunset this. I’m guessing this is so it works better when leveraged by third-party tools, as I can’t see the advantage of forcing me to generate two separate configs compared to a single config automatically generating both.
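For reference, this is the v4 pattern I’m talking about, as a minimal sketch (node name, settings, and the resource are placeholders, not my actual config):

```powershell
# One v4-style configuration; compiling it emits BOTH MyNode.mof and
# MyNode.meta.mof into the output path
Configuration MasterConfig
{
    Node 'MyNode'
    {
        # LCM settings embedded right alongside the "real" resources
        LocalConfigurationManager
        {
            RefreshMode       = 'Pull'
            ConfigurationMode = 'ApplyAndAutoCorrect'
        }

        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

MasterConfig -OutputPath 'C:\DSC\Output'
```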

What am I missing?

MODULES, VERSIONS, etc:
Pretty big change here in how modules are zipped up and unzipped into folders, it seems. While this is one I “get” why it’s being done, the question becomes: what about all my already-deployed modules? Will they continue to be pulled properly … or should I immediately re-package all of them as new zips in the “v5 format,” since once I upgrade the nodes they may only look for zips in the new format?

Partial Configurations:
This seems … well … dangerous to me. Again, I know things like Puppet don’t support composite configurations, so I’m guessing this is the answer to that, but isn’t waiting until the MOFs reach the node to see whether they’re compatible a bit too late?

Sorry for the rant, but these things all stuck out, so I figured I’d ask for others’ thoughts on these changes.

Meta: Yeah, you’re intended to create this separately, as it’s gotten a lot more complex. But as it notes, v5 will still accept a v4-style meta for backward compatibility. The intent has always been to have the meta be separate; the fact that it worked all-in-one before was kind of an anomaly. And you always had a separate MOF before, right? You were just creating them in one configuration script. Now it just wants two configuration scripts, which could still even be in the same .PS1 file.

v5 nodes are going to want the modules in the new format. And, it’s a good idea to rebuild them as classes anyway. So you’d create a new version of your modules. It’s fine to have old/new on the pull server at the same time, because they’d have different versions. Your v5 configs would refer to the new version.
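Roughly, re-packaging looks like this; a sketch only, with placeholder paths and names, and assuming the documented ModuleName_Version.zip naming for the v5 pull server:

```powershell
# Sketch: package an existing module for a v5 pull server.
# Paths and the module/version names are placeholders.
$moduleName = 'cMyModule'
$version    = '2.0.0.0'
$source     = "C:\Program Files\WindowsPowerShell\Modules\$moduleName\*"
$zip        = "C:\Program Files\WindowsPowerShell\DscService\Modules\${moduleName}_$version.zip"

# Zip the module's contents under the ModuleName_Version.zip naming scheme,
# then drop a matching .checksum file next to it for pull clients
Compress-Archive -Path $source -DestinationPath $zip -Force
New-DscChecksum -Path $zip -Force
```

Because the version is baked into the file name, old and new packages can coexist on the pull server without fighting each other.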

Partials are, IMHO, a terrible idea. And yes, waiting until everything hits the node to see if anything explodes is a bit too late. This merging should be done before or at the pull server, but we don’t have the tooling for that. I think partials are going to turn out to bite everyone in the butt. I’m not alone in this opinion, either - several other MVPs have the same thoughts. We’re hoping MS will open-source the pull server code at some point so we can just build a smarter pull server and ship single MOFs to nodes from there.

We get into this in several sessions at PowerShellSummit.org, if you’re going to make it. You should, if you’re going to play with this stuff. It’ll save you a tonne of time and error.

As always: thanks for the feedback, it helps a lot.

To clarify my first confusion, yes: I currently run a single configuration, “masterconfig”, that then calls buckets-o-composites to build a MOF for each of my nodes. If I add the LocalConfigurationManager in, it spits out two MOFs per node. Said resource is buried somewhere in my “base” composite configuration right now; easy enough to pull out.

If I make the meta a separate configuration, do I still have to run through MOF generation twice (once for each configuration)? Or is “meta” special/reserved so it doesn’t need to be specifically called out? Guess it’s easy enough to find out. I’ll just run it and see :slight_smile:

Also thank you for the confirmation on the partials. There were lots of reasons I was going to avoid it unless the company forced my hand, but your statement pretty much confirms I’ll just pretend the option doesn’t exist.

As for the summit … I’m tentatively reserved. As in, I have made the official request … been verbally told I can do it … but am still waiting for that “official sign off” that always gets you holding your breath. For me it’s the DevOps practices; I really want to get a better handle on those, as I’m pushing hard to change our internal processes, but concepts without context can be rough.

Keep in mind Summit’s only got about two dozen seats left, so don’t let them put it off too long.

“If I make the meta a separate configuration, do I still have to run through MOF generation twice (once for each configuration)? Or is “meta” special/reserved so it doesn’t need to be specifically called out?”

What you can do is have a .PS1 with your main Configuration block (which is kinda like a function, right?), and then the MetaConfiguration block. At the very bottom, run both. But the Meta is much more of a “config” and less of a “couple settings” now that the LCM understands partials, configuration names (vs GUIDs), pull server secrets, etc. Even if you’re not using partials, the LCM needs a lot more data, and is a lot more likely to differ across machines. But for me, the meta is still a one-off thing you PUSH to the node (usually putting it in pull mode), whereas the “regular” config is something I build into a MOF and dump onto a pull server. Keep in mind that the meta.mof is also useful in bare-metal injection scenarios, which is another reason to produce it from a separate script.
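To make that concrete, here’s a sketch (the server URL, names, and settings are placeholders, and I’m leaving out the registration details):

```powershell
# One .PS1, two configuration blocks: the "regular" config and the v5-style
# meta config, both compiled at the bottom
Configuration WebServerRole
{
    Node 'localhost'
    {
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

[DSCLocalConfigurationManager()]
Configuration WebServerRoleMeta
{
    Node 'localhost'
    {
        Settings
        {
            RefreshMode       = 'Pull'
            ConfigurationMode = 'ApplyAndAutoCorrect'
        }
        ConfigurationRepositoryWeb PullServer
        {
            ServerURL          = 'https://pull.company.com:8080/PSDSCPullServer.svc'
            ConfigurationNames = @('WebServerRole')
        }
    }
}

# The regular MOF goes up to the pull server; the meta.mof gets pushed
# to the node with Set-DscLocalConfigurationManager
WebServerRole     -OutputPath 'C:\DSC\Configs'
WebServerRoleMeta -OutputPath 'C:\DSC\Meta'
```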

In terms of the underlying technology, there are some reasons why the LCM gets configured through a distinct MOF, including the fact that the LCM doesn’t like to be reconfigured when it’s processing a configuration.

I think it’s really a question of how you set up your tooling, and how you plan to initially provision servers. If you’re doing one-MOF-per-server, having a distinct script to produce the meta.mof is probably a good idea from a maintenance perspective.

Some of this also plays into whether you’re treating your servers as pets or cattle. For example, the whole GUID thing in v4 was very pet-based. Now, with configuration names, we can be a little more cattle-based, meaning your configuration names indicate a “role,” rather than (say) a host name. Whatever machine is assigned a role, pulls the MOF for that role and configures itself. Need a machine to change roles? Just reconfigure the LCM to pull a different role-based MOF, stand back, and wait. In that kind of scenario, producing the meta MOF from the same configuration script doesn’t make any sense; the meta is still going to a specific node (either through push or injection), and doesn’t really have anything to do with the role-based configuration. Combining meta+config into a single script only makes sense if your servers are pets.
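A sketch of what “re-roling” a node can look like under that model (the node name, role name, and URL are all placeholders):

```powershell
# Parameterized meta: point a given node at a different role-named MOF.
# Compiling and pushing this is all it takes to change what the node becomes.
[DSCLocalConfigurationManager()]
Configuration RoleMeta
{
    param(
        [string]$NodeName,
        [string]$Role
    )

    Node $NodeName
    {
        Settings
        {
            RefreshMode = 'Pull'
        }
        ConfigurationRepositoryWeb PullServer
        {
            ServerURL          = 'https://pull.company.com:8080/PSDSCPullServer.svc'
            ConfigurationNames = @($Role)
        }
    }
}

# Reassign NODE17 from web duty to SQL duty, then stand back and wait
RoleMeta -NodeName 'NODE17' -Role 'SqlServerRole' -OutputPath 'C:\DSC\Meta'
Set-DscLocalConfigurationManager -Path 'C:\DSC\Meta'
```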

There’s a lot of DSC that makes no sense in the context of how we’ve traditionally managed servers. Microsoft has a vision for how we should be doing it, but they’re not so great at communicating that vision, so we kind of have to infer the vision from the technology.

“What you can do is have a .PS1 with your main Configuration block (which is kinda like a function, right?), and then the MetaConfiguration block. At the very bottom, run both”.

This is what I was looking for … and unfortunately means I have to re-tool things quite a bit. I do actually keep my meta as something pulled, but mostly because of certificates and the annoying fact that they expire. By having a meta configuration that is pulled, I can update thumbprint info on the node as certs are about to expire and keep things chugging along (you helped me come up with this model last week, and it works rather effortlessly. I use a custom resource that monitors certificates, and when a new one is auto-enrolled it exports the public key to a UNC for use by the script that generates the MOF).

Unfortunately it gets a bit uglier from there. I’ve tried rather hard to keep to the “cattle not pets” mantra, and as a result the nodes are (very) simple in ConfigurationData: NodeName, Location, Role, and Service. That’s it. Everything is derived from those simple parameters, including the pull server URL (based on the Location param, in my “BASEConfig” composite resource). By making the meta essentially separate, I really need to create a unique “LCM” composite resource to be called from the ConfigurationData … one that contains only LCM-specific settings. I can no longer just drop it into one of my existing composite resources.
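For context, the node data is literally just this shape (values here are made up):

```powershell
# Roughly what my ConfigurationData looks like; everything else, including
# the pull server URL, is derived from these four values inside composites
$ConfigData = @{
    AllNodes = @(
        @{ NodeName = 'WEB01'; Location = 'NYC'; Role = 'Web'; Service = 'Ordering'  }
        @{ NodeName = 'SQL01'; Location = 'LON'; Role = 'SQL'; Service = 'Inventory' }
    )
}
```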

The upside, I guess, is I can schedule my meta generation at a different frequency from my “general MOF” generation … but I’m still not certain what the advantage is, except maybe tuning it down to a less frequent value (we regenerate MOFs weekly right now to coincide with release cycles). I still want to generate the meta semi-regularly so I can keep up with expiring/renewing certs and the updated thumbprint I need to publish, and if I DO compile the meta separately from the main MOF, I risk sending a different thumbprint from the one I used to encrypt.

EDIT: OK, maybe I don’t need to retool a ton … the more I think about it, it’s just a handful of resources that have to get moved to their own composite resource so the meta can be compiled separately. I think in my mind I’m just a fan of “one script to rule them all” as a result of my OSD days.

What about “local injection?”

Main configuration references a custom resource, and specifies new LCM config. This isn’t the meta per se. On the node, custom resource takes the desired new LCM config and spits out a local meta-MOF, dropping it in the appropriate folder for the LCM to pick it up. LCM picks it up on its next meta check.

It’s basically how you do bare metal injection, by dropping the meta.mof into the magic location.
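The Set logic of such a resource could be as small as this (a sketch only; the injection path is my assumption based on the bare-metal examples I’ve seen, so test it, and the Get/Test functions are omitted):

```powershell
# Sketch of Set-TargetResource for a hypothetical "meta injection" resource.
# Assumes the LCM picks up an injected MetaConfig.mof from the path below.
function Set-TargetResource
{
    param(
        [Parameter(Mandatory)]
        [string]$MetaMofSource   # freshly generated meta.mof for this node
    )

    # Drop the meta.mof into the LCM's injection location; it should be
    # consumed on the next meta refresh cycle
    $target = Join-Path $env:SystemRoot 'System32\Configuration\MetaConfig.mof'
    Copy-Item -Path $MetaMofSource -Destination $target -Force
}
```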

My problem with that is that because the resource will scan/correct itself at a much more aggressive interval than my MOF generation (30 minutes vs. a week), I’d leave large windows of time where the provided MOF was encrypted with a thumbprint that no longer matches the LCM. In the “old method” both configurations are updated and pushed at once, so this never became an issue (the LCM seems to always apply the meta.mof before the mof if it finds both have changed, so the worst I’ve had to deal with was a node checking the UNC path every x minutes to make sure it has provided the updated public key for the next build cycle … which generated no errors).

I like using local injection during an initial VM deployment … it’s an elegant way to “kick things off” … but once things are going I’m not sure I’m a fan.

Hurry up and get MS to open-source the pull server … we could add certificate maintenance to it like Puppet :wink:

EDIT: oh, in case you’re bored (yeah right) and want to see my simple little v4 module to keep certs updated: cLCMCertManager Github

Pretty straightforward stuff; it just accepts a few filters to help isolate the “best cert” and does a thumbprint compare against the current LCM to see if it needs to update anything … if so … copy time.
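The core of it boils down to a check like this (simplified sketch, not the actual module code; the UNC path is a placeholder):

```powershell
# Find the "best" candidate cert and compare it to what the LCM is using
$lcmThumbprint = (Get-DscLocalConfigurationManager).CertificateID

$bestCert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.HasPrivateKey -and $_.NotAfter -gt (Get-Date) } |
    Sort-Object NotAfter -Descending |
    Select-Object -First 1

if ($bestCert -and $bestCert.Thumbprint -ne $lcmThumbprint)
{
    # Out of sync: publish the new public key to the UNC share so the next
    # MOF-generation cycle encrypts against the right certificate
    Export-Certificate -Cert $bestCert -FilePath "\\server\share\$env:COMPUTERNAME.cer"
}
```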