Can AgentID be leveraged to use friendly names for single-node MOF files?

Does the new registration feature allow you to use friendly names for your node-specific MOFs, or is it reserved only for ConfigurationNames? In other words: could I drop “server-01.mof” into the configuration folder and expect the pull server to translate the AgentID and grab the right MOF? Or do I essentially have to set the server name as one of the ConfigurationNames?

You’d need to set the configuration name appropriately in the LCM configuration. There’s no intelligence in the LCM to think “ah, I’m named Server1, so I’ll look for Server1.mof.” That flies in the face of the “treat them as cattle” idea, where you wouldn’t know the name, or even care.

That explains a lot.

I was thinking more of the other end, where the server connects with its AgentID and the pull server says “ah, server1 registered that AgentID, so here’s that node-specific MOF”.

Basically, I may not care about the server’s name, but that doesn’t mean it won’t have some unique properties like machine certificates and other “things” that end up specific to that box, even if I don’t explicitly call them out and instead let some other logic inject that info into the config data.

I like the “treat them as cattle” idea … I don’t like the “let multiple things collide on the node when you can just pre-merge it all beforehand with a little logic” part … but I can see the advantage here of just having some common “role”-based MOFs and picking the appropriate configuration name instead of maintaining an ever-growing configuration data hashtable.

Hi,

The node uses the RegistrationKey value to “authenticate” to the pull server and in return gets an AgentID that is then recorded in the Devices.mdb file on the pull server.

This new ability should allow you to stop thinking of Server01 as a value in ConfigurationNames, and instead use something like this:

ConfigurationNames = @('Srv_Network','Srv_Shares','Srv_Users')
allowing you to split your configuration into multiple small parts, placing Srv_Network.mof, Srv_Shares.mof, etc. in your configuration folder for the nodes to pull.
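If it helps, here is roughly what the WMF5 LCM meta-configuration for that looks like. The server URL and registration key are placeholders, and (as I understand it) pulling several named MOFs also requires matching PartialConfiguration blocks, so this sketch sticks to a single name:

    # Rough WMF5 sketch: the LCM registers with the pull server using the shared
    # RegistrationKey and asks for a MOF by friendly name instead of a GUID.
    # The URL and key below are placeholders, not real values.
    [DSCLocalConfigurationManager()]
    configuration PullClientRegistration
    {
        Node localhost
        {
            Settings
            {
                RefreshMode        = 'Pull'
                ConfigurationMode  = 'ApplyAndAutoCorrect'
                RebootNodeIfNeeded = $true
            }

            ConfigurationRepositoryWeb PullServer
            {
                ServerURL          = 'https://pull.contoso.com:8080/PSDSCPullServer.svc'
                RegistrationKey    = '00000000-0000-0000-0000-000000000000'
                ConfigurationNames = @('Srv_Baseline')   # maps to Srv_Baseline.mof on the pull server
            }
        }
    }

    PullClientRegistration -OutputPath C:\DscMeta
    Set-DscLocalConfigurationManager -Path C:\DscMeta -Verbose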

I appreciate the suggestions … I’m just struggling to see a “point” to it.

If you’ll entertain me, I’ll play devil’s advocate a bit:

In WMF4, I could already do this very easily: I simply used composite configurations and called an attribute from my node hashtable to filter each configuration. I could have multiple departments/people/whatever each check out and work on their own composite configuration, and at the end of the day it was still effectively multiple small parts, and on top of that it has one huge advantage: I get one complete MOF that is infinitely easier to test against than multiple little MOFs that ultimately end up merging or colliding on the node before you see an error code, if you aren’t good at Pester or whatever. To me that’s a big deal: I know how the MOF looks BEFORE I push it, without giving up the multiple contributors or the small size/manageability.
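To make that concrete, here is the rough shape of it; the composite resource, module, and role names below are made up for the example:

    # Root configuration calling composite resources, filtered by a Roles
    # attribute in each node's hashtable. Module, resource and property names
    # are placeholders.
    configuration FullCompile
    {
        Import-DscResource -ModuleName MyCompositeResources

        Node $AllNodes.NodeName
        {
            Srv_Baseline Baseline
            {
                Domain = 'contoso.com'
            }

            if ($Node.Roles -contains 'FileServer')
            {
                Srv_Shares Shares
                {
                    DataRoot = 'D:\Shares'
                }
            }
        }
    }

    $configData = @{
        AllNodes = @(
            @{ NodeName = 'server-01'; Roles = @('FileServer') }
            @{ NodeName = 'server-02'; Roles = @('WebServer') }
        )
    }

    # One compile, one complete MOF per node, reviewable before anything is pushed.
    FullCompile -ConfigurationData $configData -OutputPath C:\Dsc\Mofs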

I can also get a LOT more complex and complete if everything is fully assembled before hitting the pull server. In a composite configuration, for example, I can query information about other nodes and adapt settings accordingly, because I’m running against a complete ConfigurationData hashtable. I know that if I add a node, and that node happens to be the 2nd DC in a site, not only will that node get a MOF telling it how to deploy, but every other server in that site will also get an updated MOF with updated DNS entries to account for the new DNS server on said DC. It has nothing to do with treating my other nodes as “pets vs cattle”; it has to do with the fact that I get more information by using composite configurations and a “full compile”.
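Something like this, using xNetworking’s xDnsServerAddress purely as an example; the node data and site/role properties are made up:

    # Because the whole ConfigurationData hashtable is in scope at compile time,
    # one node's MOF can be derived from facts about the other nodes - e.g. every
    # DC in the same site becomes a DNS server entry for its neighbours.
    configuration SiteDns
    {
        Import-DscResource -ModuleName xNetworking

        Node $AllNodes.NodeName
        {
            $dnsServers = ($AllNodes |
                Where-Object { $_.Site -eq $Node.Site -and $_.Roles -contains 'DC' }).IPAddress

            xDnsServerAddress Dns
            {
                Address        = $dnsServers
                InterfaceAlias = 'Ethernet'
                AddressFamily  = 'IPv4'
            }
        }
    }

Add a second DC to the AllNodes list, recompile, and every node in that site gets a fresh MOF with the new DNS entry.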

In fact the only advantage I can see is AgentID, and that apparently only works for ConfigurationNames, which seems really short-sighted if the pull server is keeping track of machines in Devices.mdb. I, on the other hand, had to create a bit of a custom function that generates GUIDs and drops them into a CSV file so the team and I can configure against friendly names (because manually maintaining GUIDs sucks).
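For what it’s worth, that custom function is nothing exotic; something along these lines (the share path and function name are made up for illustration):

    # Keeps a friendly-name -> GUID map in a CSV so nobody maintains GUIDs by hand.
    function Get-NodeGuid
    {
        param(
            [Parameter(Mandatory)] [string] $NodeName,
            [string] $MapPath = '\\fileserver\dsc\NodeGuids.csv'
        )

        $map   = if (Test-Path $MapPath) { @(Import-Csv $MapPath) } else { @() }
        $entry = $map | Where-Object { $_.NodeName -eq $NodeName }

        if (-not $entry)
        {
            # First time this name shows up: mint a GUID and persist it.
            $entry = [pscustomobject]@{ NodeName = $NodeName; Guid = [guid]::NewGuid().ToString() }
            ($map + $entry) | Export-Csv $MapPath -NoTypeInformation
        }

        $entry.Guid
    }

    # The GUID becomes the ConfigurationID on the LCM and the MOF file name on the
    # pull server, while humans only ever deal with 'server-01'.
    Get-NodeGuid -NodeName 'server-01'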

That’s, oddly enough, the only reason I’m trying to leverage AgentIDs while maintaining individual MOFs … so I can retire those custom functions. I see no reason why I would want someone to have to open a custom meta-configuration, tweak ConfigurationNames, then manually launch it. That’s too much work. I have a generic “first run” script on the golden images that includes a generic [guid]::NewGuid() which gets uploaded to a share. Our deployment/ops people never directly touch the server … they set the server name using the deployment template, then add the server to the ConfigurationNodes hashtable … the box will be polling as soon as it’s up and a configuration will be ready for it. Again, a huge advantage: if you “forgot” a ConfigurationName you have to go back to the server and push another LCM configuration; I just update my hashtable. Oh, and that update can be tracked in GitHub.
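The “first run” piece is equally simple; roughly this, with the share and pull server paths as placeholders:

    # Golden-image first-boot script: mint a GUID, record it on a share keyed by
    # the name the deployment template assigned, and point the LCM at the pull
    # server with that GUID as the ConfigurationID.
    $guid = [guid]::NewGuid().ToString()
    $name = $env:COMPUTERNAME

    "$name,$guid" | Out-File "\\fileserver\dsc\registrations\$name.csv" -Encoding ascii

    [DSCLocalConfigurationManager()]
    configuration FirstBoot
    {
        Node localhost
        {
            Settings
            {
                RefreshMode     = 'Pull'
                ConfigurationID = $guid
            }
            ConfigurationRepositoryWeb Pull
            {
                ServerURL = 'https://pull.contoso.com:8080/PSDSCPullServer.svc'
            }
        }
    }

    FirstBoot -OutputPath C:\DscMeta
    Set-DscLocalConfigurationManager -Path C:\DscMeta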

Apologies for my rant … but a lot of people keep saying “but you can have fewer, generic MOFs,” and I really don’t see an advantage to that which offsets the risk and the increase in testing complexity, especially if all those MOFs are spit out from one command.

This is one of those situations I’ve mentioned to you before: the way you want to manage your machines isn’t the way DSC wants to do it. You’re trying to force it into your worldview, and that’s why you’re struggling.

AgentID is mainly used for the reporting server function, because on the reporting end, you obviously need to keep track of each and every node, individually. And frankly, the reporting server function is exceedingly primitive in its current form.

DSC doesn’t “want” to provide individualized instruction for every machine - it wants to deal with them in role-based groups. Yes, under v4, the use of GUIDs and the lack of any kind of modular MOF capability more or less forced us to deal with machines individually. That wasn’t compliant with the vision; it was just what they could get out the door in the first rev.

You’re also making the mistake - which a lot of people are making - of looking at v4 and v5 and assuming that these somehow represent the ultimate implementation of The Vision. They don’t. What you’re seeing now remains another intermediate step, as Microsoft seeks to iterate more and more quickly.

You’re also running smack into the lack of tooling, which a lot of people do. You’re saying, “I don’t see people manually running a custom script to produce one MOF,” and you’re right. I don’t see it either. What I do see is some kind of higher-level tooling automating the production of that MOF. Us, running around firing off scripts, is NOT The Vision. We’re only doing that because we don’t have any tooling atop DSC to help with the management of all this. Your custom functions and whatnot? Those are home-grown tooling. And for many businesses, that kind of home-grown tooling is going to be all they need or want, and they’re going to go nuts with them.

The piece you’re missing is some kind of higher-layer tooling - like an eventual System Center product, or Puppet, or Chef. In the meantime, you’re struggling to make DSC work by retiring your own tooling, and you’re not going to achieve happiness that way.

I’ve also mentioned before that the existing Pull Server functionality is fairly primitive, and that Microsoft is presently - in my opinion, with partial MOFs - not heading in a good direction with it. Until we get that code, or until someone deciphers the protocol documentation so we can write a better pull server, this is what we’ve got. Nobody’s pretending it’s perfect.

Right now, you lack any kind of central configuration database that designates roles for nodes. Something where some administrator can push a button that says, “we need a new DC,” and everything is automated from there. That database would be responsible for going out and picking up the various “partial configurations” that lead to a DC, merging them together, and then delivering them to a new VM it was preparing to spin up. Until we have that tooling, you’re either going to make it yourself, or live with the fact that there are a lot of rough edges, still.

But at the moment, AgentID is essentially intended only for use by the Reporting Server function. We don’t have to like that, and we could wish it was otherwise, but it ain’t. There’s kind of no sense wishing it was otherwise until we can gain some ability to rewrite the Pull Server function.

So that’s my rant for this morning ;).

I think you’ll find I’m a rather benevolent dictator :slight_smile:

First off, I appreciate your response; I can come off a bit stand-offish at times. My “world view,” as it were, is more of a “I think this is the best way to use it right now,” largely because of the other things you’ve mentioned: lack of tooling and limitations in the current iterations. I’ll grow and adapt … we all have to in this career.

That said, I’m merely trying to balance homegrown tooling against that perceived vision … and sometimes I read the tea leaves wrong: for example, the intent of AgentID. I see the feature sneak into a blog post and think “cool, don’t need that database of server names and GUIDs anymore.” Oops, maybe I do. Ditto on deploying by groups (I’d argue I’m still deploying by groups, just groups of composite configurations rather than groups of MOFs, but at this point it’s neither here nor there). I get it, I see the advantage, then the lack of tooling says maybe not yet. MS doesn’t exactly have a clear road map for those of us gleaning all we know from powershell.org and TechNet. We just gobble up what trickles out and speculate.

I oh so wish the pull server were open source. I made the suggestion on the feedback forum and the answer I got back was “we put a lot of effort into AA DSC, please explain the shortcomings and why you don’t just use that.” Maybe when Azure Stack gets released it will be a better solution for me (I already see credential storage on the road map), and honestly I wonder if the pull server’s future will be short-lived with that on the horizon. The feedback certainly implies it.

Appreciate your rant in response to my rant. At the end of the day they are a collision of ideas, which to me is a good thing.

One thing to bear in mind is that Azure has only recently started embracing PowerShell at the infrastructure level, and that is going to drive a lot of good improvement. So yes, Azure Stack is the beginning of the vision starting to come to fruition.

Also, I’ve a strong feeling you’ll see an open-sourced pull server at some point. There are some internal technical (not political, which is good) things that need to happen first, but several people on the team that I’ve spoken to feel positively about taking that step.

And FWIW, the DSC pull protocol is documented. So you could technically build your own web service right now.

I’m surprised Azure only recently started embracing PowerShell at that level. I would have figured with Snover at the helm of the server team this was more of a “plan all along,” but that reveals a lot. I’ve read in multiple places that the Azure team are big Puppet fans; I wonder if that kind of opened their eyes…

I have unrealistically high hopes for Azure Stack. In a LOT of ways, most of my push and sometimes misguided dive into DSC is because I’m viewing WAS as the payoff: I think once ARM is available in a more “trusted” location (perception matters) it will be the breaking of the dam, and those of us who took the time to prepare will be very happy we did. It’s going to be a very divisive time in the Windows world: the GUI admins vs. those who embrace the new model of document-based configuration.

Yeah, I could build my own web service … but if I’m already looking to retire some of my existing functions … you can probably guess I’m not eager to start that project if open-sourcing is on the horizon.

I’ll just add something “small” from my POV.

Though I understand there’s a learning curve which might pose a barrier for newcomers, you need to master Pester. We should not go down the path devs went in the past. TDD is crucial to getting the quality of a “build” to the level where you can actually say with almost 100% certainty that your DSC scripts will run.

In that sense it’s easier to test smaller parts than larger parts, plus you get to write the tests for the orchestration.
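As a trivial example of what I mean (configuration and file names are placeholders, Pester v3 syntax):

    # Compile one small configuration and assert on the resulting MOF,
    # instead of waiting for parts to collide on the node.
    . "$PSScriptRoot\Srv_Shares.ps1"   # dot-source the configuration under test

    Describe 'Srv_Shares configuration' {

        It 'compiles without errors' {
            { Srv_Shares -OutputPath TestDrive:\Mof } | Should Not Throw
        }

        It 'declares the data share in the generated MOF' {
            Srv_Shares -OutputPath TestDrive:\Mof | Out-Null
            Get-Content TestDrive:\Mof\localhost.mof -Raw | Should Match 'DataShare'
        }
    }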

Remember, this is the DevOps vision, not just “let’s provision servers”; this is the entire ALM and DevOps cycle. Primarily, for me, it’s “let’s provision servers with applications, using the same methodologies and concepts.” I’m aware DSC is lacking in certain areas and requires some homemade tools, but I’m trying to aim for the minimum needed to make it tool-agnostic or language-agnostic, so it will be easier in the future to connect to newer versions of DSC and third-party software, should the organization choose to do so.

Some of the units in my environment want to write their DSC scripts themselves, or at least have control over them, which means I’m the one having to make sure each of their scripts passes unit tests before I allow them to commit to my orchestration. And I treat the LCM script just as I do DSC scripts: everything goes into source control, nothing is hand-written, everything is done via a GUI or entered into a configuration DB/CMDB. So what you’re referring to with “I don’t see people changing LCM parameters to change configuration names” is, to me, a matter of tooling and UI. None of the IT staff should change that by hand; they should use an appropriate UI, and the creation of config data and LCM scripts is done by a central service you write, thus conforming to the idea of microservices.

To me the idea of configuration names is extremely beneficial: I have IT people authoring DSC scripts, combined with configuration data coming from a DB, all tested before being allowed into the system. Then a self-service website lets non-IT people like product managers enter their requests, picking from the pre-entered templates, and all the DSC script actually needs to do is change the LCM of the target node to pull the desired templates.
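In practice that last step can be as small as something like this; the function name, URL, and key are placeholders, and pulling several template names would also need matching partial configurations:

    # The central service turns a template choice into an LCM change on the node.
    function Set-NodeTemplates
    {
        param(
            [Parameter(Mandatory)] [string]   $ComputerName,
            [Parameter(Mandatory)] [string[]] $TemplateNames
        )

        [DSCLocalConfigurationManager()]
        configuration NodeMeta
        {
            Node $ComputerName
            {
                Settings
                {
                    RefreshMode = 'Pull'
                }

                ConfigurationRepositoryWeb Pull
                {
                    ServerURL          = 'https://pull.contoso.com:8080/PSDSCPullServer.svc'
                    RegistrationKey    = '00000000-0000-0000-0000-000000000000'
                    ConfigurationNames = $TemplateNames
                }
            }
        }

        NodeMeta -OutputPath "$env:TEMP\NodeMeta" | Out-Null
        Set-DscLocalConfigurationManager -Path "$env:TEMP\NodeMeta" -Verbose
    }

    # A product manager's request for a standard web front end becomes:
    Set-NodeTemplates -ComputerName 'server-07' -TemplateNames 'Srv_Baseline', 'Srv_Web'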

If that makes sense :wink: