Forcing a server to re-apply its configuration

I have my servers configured to use Pull mode with ConfigurationNames. I have them set to “ApplyAndMonitor”, because I don’t want any server changing its config without my telling it to. I’m setting up audits so I know if any machine gets out of line. On occasion, something on a server may change, and I will want to force it to conform to the designated configuration again.

Using Update-DscConfiguration sounds like it would do the trick, but apparently this only works if the config on the pull server has changed.
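For reference, the call in question looks like this, run locally on the node:

    # Asks the LCM to check the pull server for an updated configuration;
    # if the checksum there hasn't changed, nothing is re-applied
    Update-DscConfiguration -Wait -Verbose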

What’s the best method of forcing a server to re-configure itself?

Thanks,
Eric

Well… let me answer the technical question first. Create a new configuration on the pull server, and a new checksum file to go with it. That might mean nothing more than adding an extra blank line at the end of the existing file, so it checksums differently and “tricks” the node into re-pulling it. Ought to work.
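A rough sketch of that trick, run on the pull server (the MOF path here is an assumption - substitute wherever your ConfigurationNames MOFs actually live):

    # Assumed pull server MOF location - adjust for your environment
    $mof = 'C:\Program Files\WindowsPowerShell\DscService\Configuration\MyConfig.mof'

    # Append a blank line so the file checksums differently
    Add-Content -Path $mof -Value ''

    # Regenerate the .checksum file next to the MOF; on its next pull,
    # the node sees a "new" configuration and re-applies it
    New-DscChecksum -Path $mof -Force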

There presently isn’t a way to set the LCM to ApplyAndMonitor and then manually “poke” it to force it into Apply mode.

Potentially, you could also reconfigure the LCM to ApplyAndAutoCorrect and, once the node reports back in as compliant, switch it back to ApplyAndMonitor. Sounds awkward. Might work.
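If you go that route, the switch itself is just a meta-configuration push - a minimal v5-style sketch, with the node name and output path as placeholders:

    [DSCLocalConfigurationManager()]
    configuration LcmAutoCorrect {
        Node 'Server01' {
            Settings {
                ConfigurationMode = 'ApplyAndAutoCorrect'
            }
        }
    }

    LcmAutoCorrect -OutputPath C:\Temp\Lcm
    Set-DscLocalConfigurationManager -Path C:\Temp\Lcm -Verbose
    # ...once the node reports compliant, push the same meta-configuration
    # again with ConfigurationMode = 'ApplyAndMonitor' to switch back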

But… the bigger point, now. I completely understand your desire not to have machines reconfigure themselves randomly throughout the day, but you’re working contrary to what DSC wants to do. That’s completely your privilege, and you very likely have excellent reasons for it, but you’re going to find it difficult, because that isn’t how the technology is meant to work. It’s kind of like saying, “I really want to drive this old 1968 Chevy, but I’m opposed to using gas.” You’ve picked the wrong tool for the job. Something like Chef or Puppet, which offer more granular control, might be more suitable.

I’d argue that DSC making changes without you knowing isn’t the problem; the problem is why the node isn’t configured the way it was supposed to be in the first place. If nothing changes on the node, DSC won’t do anything. If something changes, DSC is just putting it back. Presently, lacking any awareness of maintenance windows, DSC making changes can definitely be disruptive - but that’s what it does and where it is, technically, today. You’re finding yourself at cross-purposes with it, and it’s going to be awkward to work with, because your processes don’t line up with what DSC wants to do. So I’d say the real answer is, “either change your processes to align with what DSC does, or use something that aligns with your processes.”

Very much not trying to be snarky here, but I run into a lot of people banging their heads against DSC because it isn’t the right tool to align with their processes, and they’re either not willing or not able to change those processes. Just want you to be aware that’s where you’ve gotten to :).

So: DSC isn’t designed, at present, to do what you want in a “clean” fashion. I’m not saying what you want is wrong, or trying to justify DSC’s perspective - just that you and DSC are reading from different playbooks.

Need to pin this reply :)

This would save me explaining the same ideas to the other IT professionals I’ve been talking to - emphasizing what DSC can and can’t do, and what it should and shouldn’t be used for.

It’s similar with setting the RequiredBoot property.

Don,

Thanks for your very thorough answer. What you are saying makes sense, and it may be that we change our processes to match DSC. After all, GPOs always change a system without asking, and we’ve used those extensively.

It gets blurry for us because parts of our configuration are pretty static, but other parts (namely how the networking is set up) are drawn from a configuration database that may change independently of our DSC config. Right now our custom DSC modules check the machine’s network config against the db and can detect and rectify a mismatch. Hence my desire to manually force a reconfiguration apart from a change in the DSC configuration itself.
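Schematically, the Test side of that check amounts to something like this - a simplified illustration, not our actual module, and Get-ConfigDbNetwork is a made-up stand-in for the database lookup:

    function Test-TargetResource {
        param(
            [Parameter(Mandatory)]
            [string]$InterfaceAlias
        )

        # Hypothetical helper: desired settings for this machine, per the config db
        $desired = Get-ConfigDbNetwork -ComputerName $env:COMPUTERNAME

        # What the machine actually has right now
        $actual = Get-NetIPAddress -InterfaceAlias $InterfaceAlias -AddressFamily IPv4

        # Compliant only if the live address matches what the database says
        return ($actual.IPAddress -eq $desired.IPAddress)
    }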

So probably in that way we are bending DSC into a shape not intended by its designers. Perhaps we should take a snapshot of the desired network config from the db and make it part of the DSC config. If the db changes, we would have to update our DSC config (which could be machine-generated from the db). Or we could excise the network configuration from DSC altogether and set up a separate configuration/verification system for that.

Okay, I’ll stop rambling now. Thanks again for your help.

“Right now our custom DSC modules check the machine’s network config against the db and can detect and rectify a mismatch”

I love that approach. This is totally the right thing to do. But it’s definitely worth using ApplyAndAutoCorrect so your custom Test/Set can run and do just that. Since it’s a custom resource, you could certainly build in “maintenance window awareness,” too - in the Test, just return $True if you’re not in a defined maintenance window for changes.
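Something like this at the top of the resource’s Test function, for example - the 02:00-04:00 window is just an illustration, and the real compliance check is a stand-in:

    function Test-TargetResource {
        param(
            [Parameter(Mandatory)]
            [string]$Path
        )

        $now = Get-Date
        if ($now.Hour -lt 2 -or $now.Hour -ge 4) {
            # Outside the window: claim compliance so the LCM never calls Set
            Write-Verbose 'Outside maintenance window; reporting compliant.'
            return $true
        }

        # Inside the window: run the real compliance check (stand-in shown here)
        return (Test-Path -Path $Path)
    }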

Another approach I’ve monkeyed with: make a custom resource that does nothing but check whether you’re in a maintenance window when changes can be made. If not, its Test returns $false, and its Set does nothing. Make every other configuration setting dependent on that, so every consistency check will essentially stop hard if it’s not in a maintenance window. I haven’t played with it enough to see whether it puts the LCM into any kind of weird loop, but it could be a tactic for stopping changes while still leaving AutoCorrect turned on. And “maintenance window” could be any external flag, really.

Hey Don

Unsure if I’m doing anything wrong, but regarding your monkeying: I’ve tried that before, and DependsOn only appears to handle ordering - it does nothing to stop subsequent resources from testing or setting.

Try running the configuration below. At least in my test environment, myfile.txt is always removed, regardless of whether the test returns true or false.

configuration test {

    Script test {
        GetScript  = { @{ Result = $null } }
        # Always report non-compliant, so the LCM runs Set on every consistency check
        TestScript = { return $false }
        SetScript  = { Write-Verbose "fsfs" }
    }

    File tstfile {
        # Depends on the Script resource above; the question is whether
        # that resource's failing Test stops this one from running
        DependsOn       = "[Script]test"
        DestinationPath = "C:\temp\myfile.txt"
        Ensure          = "Absent"
    }
}

Huh. I’ll have to fuss with it a bit - it’s unfortunately going to be a few weeks before I can, but I wanna run it in trace mode.

@eric-hodges:

“So probably in that way we are bending DSC into a shape not intended by its designers. Perhaps we should take a snapshot of the desired network config from the db and make it part of the DSC config. If the db changes, we would have to update our DSC config (which could be machine-generated from the db). Or we could excise the network configuration from DSC altogether and set up a separate configuration/verification system for that.”

This is exactly the purpose of building a pipeline with a DevOps mindset. All your DSC data should come from a DB, like the network config does - that’s usually the use case for a CMDB.
All data is inserted into that DB via whatever UI you want, and all your configuration scripts are stored in source control.
This is where you build your pipeline: a Build process assembles the configuration scripts and the data (per node, per role, per whatever), runs tests (unit and integration - a.k.a. Pester), and then creates the MOF files. Then the next process in the pipeline - the Deployment - occurs: a command is run to push the MOFs, or they’re just copied to the pull server if you use that method. Last but not least, you can then run the PowerShell Operation Validation Framework (OVF) to test that you’re indeed at the state you wanted.
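A bare-bones sketch of that build-and-deploy flow - every path here is an assumption, and Get-CmdbNodeData is a hypothetical helper standing in for your CMDB query:

    # Hypothetical helper: returns per-node hashtables (NodeName, network settings, roles)
    $configData = @{ AllNodes = Get-CmdbNodeData }

    # Run the Pester suite first; abort the build on any failure
    $results = Invoke-Pester -Path .\Tests -PassThru
    if ($results.FailedCount -gt 0) { throw 'Tests failed; aborting build.' }

    # Compile MOFs from source-controlled configuration scripts plus CMDB data
    . .\Configurations\ServerBaseline.ps1
    ServerBaseline -ConfigurationData $configData -OutputPath .\BuildOutput

    # Deploy: checksum the MOFs and copy them to the pull server
    New-DscChecksum -Path .\BuildOutput -Force
    Copy-Item .\BuildOutput\* '\\pullserver\DscService\Configuration\'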

@Don & Luke: It doesn’t have to be a custom resource - you can use a registry key, an environment variable, or, as Luke did, a text file. But I think what’s missing in that example is that
TestScript = {return $false}
should be changed to actually test whether the file exists, rather than relying on a second resource with DependsOn.
The removal of the file should then be handled by a separate process, not by a configuration script.

Hey Arie,

The demonstration config was purely to show that DependsOn does not stop the execution of DSC - it merely orders it. In this case, the Script resource’s Test runs first, then the File resource runs afterwards; the fact that the Test is failing on the Script resource has no impact on DependsOn. It will still execute “tstfile”, not caring that the configuration it depends on is failing.

I’d be interested in how you’ve configured maintenance-window functionality; we’ve been using custom resources with a defined workflow that utilizes external modules to determine whether the Set should run.

Following up on my previous comment: it looks like DependsOn does not respect test chains. If the Test is still false after the Set, dependent resources will still execute; but if an error is thrown, dependent resources will not execute, yet anything outside the dependency chain still will. At least, this is what I see from my experiments.


configuration test {

    Script test {
        GetScript  = { @{ Result = $null } }
        # Still non-compliant, but this time the Set writes an error
        TestScript = { return $false }
        SetScript  = { Write-Error "Hello" }
    }

    File tstfile {
        # In the dependency chain: does NOT execute when the Set above errors
        DependsOn       = "[Script]test"
        DestinationPath = "C:\temp\myfile.txt"
        Ensure          = "Absent"
    }

    File tstfile1 {
        # Outside the dependency chain: still executes
        DestinationPath = "C:\temp\myfile111.txt"
        Ensure          = "Present"
        Contents        = "MYContent"
    }
}


Yeah, it’s something we’re gonna be discussing in DSC Camp this week.

So, we’ve been discussing this.

Here’s what we believe is the LCM’s thinking - which seems to square with what you’ve observed. First, the LCM kind of assumes you’re in “apply” mode - anything else is just monitoring, which means DependsOn logically doesn’t apply, right?

So:

Resource One: Test.
If False, Resource One: Set.

  • If Set doesn’t throw an error, assume all’s well - don’t re-test.
  • It’s now safe to move on to Resource Two.
  • If Set throws an error, stop execution.

So the assumption is that Set is a high-quality operation, to use Steve’s phrase, and that it will toss an error if anything other than total success is achieved. But it doesn’t re-run Test after Set; Set is assumed to be doing a good job.
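Which suggests a tweak to the maintenance-window gate from earlier in the thread: since a Set that throws is the one thing that does halt dependent resources, the gate can report non-compliant outside the window and throw in its Set. A sketch, untested, building on Luke’s experiments above (window and paths are illustrative):

    configuration GatedConfig {

        Script MaintenanceGate {
            GetScript  = { @{ Result = 'MaintenanceGate' } }
            # Compliant only inside the (example) 02:00-04:00 window;
            # outside it, the LCM calls Set...
            TestScript = {
                $h = (Get-Date).Hour
                return ($h -ge 2 -and $h -lt 4)
            }
            # ...and the Set throws, which halts every resource that
            # declares DependsOn on this gate
            SetScript  = { throw 'Outside maintenance window; halting this run.' }
        }

        File Example {
            DependsOn       = '[Script]MaintenanceGate'
            DestinationPath = 'C:\Temp\example.txt'
            Contents        = 'managed content'
            Ensure          = 'Present'
        }
    }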