TL;DR: AFAIK there’s no easy, out-of-the-box solution that actually solves the problem.
Just layering a GUI on top of DSC is not that easy, and arguably either defeats the purpose, or already exists (git/text files/VCS & CI tools).
I agree with this point (and have used/tested it in prod):
Choco does a good job of managing the package installs, all that DSC has to do is provide a place to keep track of software versions that Choco will manage.
Bear in mind it’s always harder than it looks, and you’re handing a lot over to Choco, which is not idempotent by design (it depends on the packages, mostly). There’s also the boundary between Software Management and Configuration Management, which is not always easy to maintain on Windows. But it sounds like you’re familiar with this already.
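To make the “DSC tracks versions, Choco installs” split concrete, here’s a minimal sketch using the community cChoco DSC resource (from the PowerShell Gallery); the package name, version, and install dir are illustrative, not a recommendation:

```powershell
# Assumes the cChoco resource module is available on the authoring machine.
Configuration SoftwareBaseline
{
    Import-DscResource -ModuleName cChoco

    Node 'localhost'
    {
        # Makes sure Chocolatey itself is present
        cChocoInstaller InstallChoco
        {
            InstallDir = 'C:\choco'
        }

        # DSC only pins the version; Choco does the actual install/upgrade
        cChocoPackageInstaller NotepadPlusPlus
        {
            Name      = 'notepadplusplus'
            Version   = '8.6.2'
            DependsOn = '[cChocoInstaller]InstallChoco'
        }
    }
}
```

Changing a software version then boils down to editing that `Version` string, which is exactly the kind of change you want to make cheap.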
You’re actually defining two problems here:
The question is how do we bring it all together so it’s easy for clients not experienced with DSC to use?
My biggest headache at the moment is how to manage updates to configuration files. At the moment, when you need to update a software version, you have to manually make a change to a configuration file, upload it to Azure, and then apply it to however many servers you have running.
Here are the two problems:
- How to make a change easy for multiple users/consultants not necessarily familiar with DSC (interface)
- How to deliver those changes to the nodes with no friction (delivery)
I’m no expert in Azure DSC, but I think you’ve reached one of the limitations of letting Azure Automation DSC do most of the work for you. AADSC currently has limited support for what it can use when compiling the MOFs, so you are bound to ‘basic’ DSC Configuration documents (no external tooling) and a plain hashtable / PSD1 document.
You could change the way you use AADSC so that you build your MOFs in your own build pipeline (i.e. using VSTS or something else) instead of letting the service do it for you, and then assign those MOFs to target nodes (that can be automated, IIRC).
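As a rough sketch of that approach: compile the MOF yourself, then push it to Azure Automation DSC as a ready-made node configuration using the Az.Automation module. The resource group, account, and configuration names below are placeholders:

```powershell
# Compile the MOF in your own pipeline (assumes SoftwareBaseline.ps1
# contains a Configuration named SoftwareBaseline)
. .\SoftwareBaseline.ps1
SoftwareBaseline -OutputPath .\output   # produces .\output\localhost.mof

# Upload the pre-compiled MOF as a node configuration in AADSC
Import-AzAutomationDscNodeConfiguration `
    -ResourceGroupName     'rg-automation' `
    -AutomationAccountName 'aa-dsc' `
    -ConfigurationName     'SoftwareBaseline' `
    -Path                  .\output\localhost.mof `
    -Force
```

Your CI tool runs those two steps on every merged change, so nobody uploads anything to Azure by hand.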
The idea is that a change request (a change to the ‘configuration document’) goes through your pipeline, which deploys that change for you. That leaves you with only the interface problem: something easy to use and declarative enough for non-DSC-literate users.
The key principle of Infrastructure as Code is to improve the capacity to change. It does not matter how well your infrastructure is written in code: if it’s difficult or impossible to change, its usefulness is… limited.
This is why DSC separates imperative code (resources) from declarative documents defining state (Configurations): to provide an abstraction layer that is easier to change, while hiding the details of the change (the imperative code).
Yet DSC Configurations may contain logic to bridge the gap between the provided configuration data, meaningful to the business, and something meaningful to the DSC platform and eventually the LCM (each layer has a contract with the one below).
Putting both ‘DSC resource logic’ and ‘business-level configuration data’ into the DSC Configuration document gives too detailed a view for those only interested in high-level changes, and ultimately does not scale: someone who wants to change a software version doesn’t need to see all the configuration logic (two problems, actually: abstraction and scope of change).
This is a problem of the interface not providing enough abstraction, and you may need to separate the DSC logic (and its DSL) from the data it uses.
The data now lives outside the Configuration, so you need a way to ‘inject’ it into your DSC Configuration, and this is where the DSC tooling problem comes from: there’s nothing out-of-the-box that allows you to do that.
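The closest built-in mechanism is DSC’s `-ConfigurationData` parameter: the Configuration holds only logic, and the data (here, package names and versions) lives in a separate PSD1. All names below are illustrative:

```powershell
# Environment.psd1 -- the only file a consultant would need to touch
@{
    AllNodes = @(
        @{
            NodeName = 'web01'
            Packages = @(
                @{ Name = 'notepadplusplus'; Version = '8.6.2' }
                @{ Name = '7zip';            Version = '23.1.0' }
            )
        }
    )
}
```

```powershell
# The Configuration is pure logic: it loops over whatever data it is given
Configuration SoftwareFromData
{
    Import-DscResource -ModuleName cChoco

    Node $AllNodes.NodeName
    {
        foreach ($pkg in $Node.Packages)
        {
            cChocoPackageInstaller $pkg.Name
            {
                Name    = $pkg.Name
                Version = $pkg.Version
            }
        }
    }
}

# Compile with the external data injected
SoftwareFromData -ConfigurationData .\Environment.psd1
```

That works, but PSD1 is still PowerShell-shaped, which is what motivates the next step.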
Others before me identified this and worked towards a solution for DSC; I extended and improved on the principles with my Datum module.
The format in which the data is stored is no longer DSC-specific (as long as we can inject it in a way compatible with DSC, via hashtables), so we can focus on whatever format makes the most sense for the user making changes to it.
The most widely used interface for managing configurations is YAML files stored in source control (i.e. git) and managed in Version Control Systems (which provide added benefits, such as a web interface), because they’re terse, easy for the human eye to read, and easy for computers to parse. Source control provides the collaboration workflow and the tools to manage (audit, review, version) changes, and usually a CI tool is ‘plugged’ into the VCS to form a pipeline, so that changes initiated by humans are automatically tested and trusted, and eventually released/deployed. This is what Chef (Roles, Data Bags), Puppet (Hiera), and Ansible (Roles and Playbooks) use.
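In that model, the file a non-DSC-literate consultant edits could look like the snippet below (a Datum-style role data sketch; the keys and layout are illustrative, not a fixed schema):

```yaml
# Roles/WebServer.yml -- bumping a Version here is the whole change request
Packages:
  - Name: notepadplusplus
    Version: 8.6.2
  - Name: 7zip
    Version: 23.1.0
```

A version bump becomes a one-line pull request; the pipeline resolves the data into hashtables, compiles the MOFs, and delivers them to the nodes.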
In the end, the solution you implement depends on how painful your current problem is and the investment you can make to solve it. Hope that helps you see the bigger picture.