DSC Custom Compliance Database

Hello Guys,

I am using an http pull server and my goal is to configure around 300 computers with DSC.
I find the Compliance server limited and I would like to have my own devices.mdb file (customs-devices.mdb) without modifying the LCM.

I would like to add a script to the configuration that would connect to the mdb and write some information (serial / hostname / etc.).

The thing is I have no idea how to proceed. I would like to use HTTP because I have multiple networks and don’t have SMB between them.

Since all the clients are reporting correctly to the HTTP pull server, I am wondering how the LCM updates devices.mdb, and whether I can use the same technique with my custom DB.

Any help would be greatly appreciated. Who knows, maybe this could also help other techs :slight_smile:

See you

The current reporting database is a Windows Internal Database - it doesn’t accept external connections at all. It also doesn’t have a schema that supports additional information, and you really can’t modify the schema.

You could certainly stand up a SQL Server machine and have nodes connect to it and write whatever you want to whatever table(s) you create. That traffic isn’t HTTP(S), though; it’s SQL Server’s own protocol over TCP/IP. It also isn’t SMB - SMB is for file and print sharing.

The LCM sends its data to the Pull Server web service via HTTP. The web service then makes a local connection to the Windows Internal Database to make updates.

What you’re asking for is essentially a mini-SCCM of sorts, which is fine on the scale you’re proposing. I’d just do it with SQL Server. If HTTP is mandatory, add IIS to the SQL Server, and create a small web service that can accept data over HTTP and then write it to SQL Server. For 300 machines, even the free Express edition should be fine.
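To sketch the idea (everything here - endpoint, database, table, and connection string - is a placeholder, not a finished design), the receiving side could be as simple as a self-hosted listener that writes each POST to SQL Server:

```powershell
# Minimal sketch only. Assumes a database 'DscInventory' with a table
# NodeInfo(HostName, Serial) already exists - both names are hypothetical.
Add-Type -AssemblyName System.Data

$listener = [System.Net.HttpListener]::new()
$listener.Prefixes.Add('http://+:8080/inventory/')   # placeholder endpoint
$listener.Start()

while ($listener.IsListening) {
    $context = $listener.GetContext()                # blocks until a node posts
    $reader  = [System.IO.StreamReader]::new($context.Request.InputStream)
    $body    = $reader.ReadToEnd() | ConvertFrom-Json

    # Write the posted fields to SQL Server (connection string is an assumption)
    $conn = [System.Data.SqlClient.SqlConnection]::new(
        'Server=localhost;Database=DscInventory;Integrated Security=True')
    $conn.Open()
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = 'INSERT INTO NodeInfo (HostName, Serial) VALUES (@h, @s)'
    [void]$cmd.Parameters.AddWithValue('@h', [string]$body.HostName)
    [void]$cmd.Parameters.AddWithValue('@s', [string]$body.Serial)
    [void]$cmd.ExecuteNonQuery()
    $conn.Close()

    $context.Response.StatusCode = 200
    $context.Response.Close()
}
```

In practice you’d host this behind IIS with HTTPS and authentication rather than a bare HttpListener, but the shape is the same: accept a POST, parse it, insert a row.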

Hello,

Thanks for your suggestion.
I finally managed to create a web service like you suggested. Overall, it is working great :slight_smile: Now I can collect the computer names, serial numbers, run duplicate checks, etc.

However, I have an issue with the most important data which is the IsDesiredState status.
I wanted to use Test-DscConfiguration.

I am using a script in the configuration like script UpdateDB {Get/Set/TestScript inside}.

I tried to put my code in the TestScript, but got an error saying Test-DscConfiguration is already running. Then I tried to put the code in the SetScript (returning $false from the TestScript so it always runs). The error is gone, but I still don’t get the status (neither True nor False).
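For reference, here is the rough shape of what I have (the URI and payload are simplified):

```powershell
# (inside a Configuration block)
Script UpdateDB {
    GetScript  = { @{ Result = 'Inventory upload' } }

    # Returning $false forces SetScript to run on every consistency check
    TestScript = { $false }

    SetScript  = {
        $theData = @{
            HostName = $env:COMPUTERNAME
            Serial   = (Get-CimInstance Win32_BIOS).SerialNumber
        } | ConvertTo-Json
        Invoke-RestMethod -Method Post -Uri 'https://mywebservice' `
            -Body $theData -ContentType 'application/json' -TimeoutSec 10
    }
}
```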

Any idea how I could solve that, or what I am doing wrong?

BR,
Jeremie

Hi,

My initial idea was the same as Don’s, but I think it will actually not be the best idea.

As long as you limit what you’re writing to SQL and when you’re doing it, that might be OK - say, for example, a set of variables only at the start, and not after each action as a sort of action log.

BUT

You’re basically getting into workflow management. To be able to write to an external database, you need the prior action to be fully done; logically, you wouldn’t write to a DB that an action was done until it actually was. That’s mostly handled either via DependsOn or via the WaitFor family of DSC resources.

The reason I don’t think it’s the best idea is that it turns you into an active actor, when what you need to be is a passive one.

If you have a few resources running and your DB is, for some reason, experiencing a power outage, downtime, or simply execution delays on the INSERT commands, what do you think will happen to the running DSC script? Or in cases where the web server is having “hiccups” for whatever reason?

It will stall. Hopefully, based on the workflow you have with DependsOn or WaitFor, other resources will not wait and will continue, but I think some will, and then you’re risking the entire process failing - NOT because something was wrong with the script itself, but because of “external influence”.

A more passive approach might be to use the Log resource to write whatever action results you want to a log on the node, and then have a log-collecting mechanism. You could even write information to a local file via the File resource or similar.
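Something like this inside the configuration, for example (the resource it depends on is just an illustration):

```powershell
# (inside a Configuration block)
# Writes a message to the DSC event log channel on the node itself -
# no external dependency, so nothing can stall the run.
Log AfterWebConfig {
    Message   = "Web server configuration applied on $env:COMPUTERNAME"
    DependsOn = '[WindowsFeature]IIS'   # hypothetical resource name
}
```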

As much as I want to extend my LCM with tons of ideas, we’re on the boundary of a mini SCCM, as Don said, and maybe that’s not exactly what the great makers of DSC intended it to be. Though things change very rapidly :slight_smile:

As DSC is still young but evolving, scenarios like yours are exactly the kind of information the PowerShell team is eager to read to make better decisions on features and updates. And maybe, as Don said some time ago, they will open source it or give us an API to extend it to our needs.

Hey Arie,

Thanks for your reply.

While I do agree with you that I shouldn’t send too much data too frequently, I don’t agree on some other points, such as:

“If you have a few resources running and your DB is, for some reason, experiencing a power outage, downtime, or simply execution delays on the INSERT commands, what do you think will happen to the running DSC script?”

Well, here is the line that sends the data to the web service:

```powershell
Invoke-RestMethod -Method Post -Uri "https://mywebservice" -Body $theData -TimeoutSec 10
```

As you can see, there’s a timeout, so if the website is unavailable, there is no impact on the LCM.

“As much as I want to extend my LCM with tons of ideas, we’re on the boundary of a mini SCCM, as Don said, and maybe that’s not exactly what the great makers of DSC intended it to be. Though things change very rapidly :)”

Well, the great makers of PowerShell did create a compliance database, and that is all I want :slight_smile:
In its current state, the compliance report is just not usable for me, because it is IP-address based, whereas I want to be able to know which system is not compliant and act on it if necessary.
With only the IP available, I have multiple issues:

  • I have duplicates whenever a user uses VPN or travels to another location
  • When I look at the report, I cannot identify which computer is not compliant (I obviously don’t know every IP :slight_smile: and we use DHCP).

So what I did was create my own compliance server.
It is working great! I am more than happy with what I did.

The only issue is for me to be able to test the compliance.
As you said, I could use DependsOn, but even though I know the script runs last, I wouldn’t know whether the previous tasks were successful.

Any thoughts ?

Hi,

From your description, you’re not using DependsOn or WaitFor, which is what gives you the uncertainty about whether the previous action actually completed.

If you’re using a timeout and that Invoke command does time out, then your compliance server isn’t exactly accurate about the state the node is in, which is kind of counter to the reason for having it :wink:

“Use the Log resource, Luke!” :wink:

Remember that DependsOn is an array, so whether you use the Invoke command a couple of times during the script or just once at the end, you can have it depend on all the actions before it. Basically, it will not act until all the resources before it have ended successfully.
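For example (resource names and the payload here are just placeholders):

```powershell
# (inside a Configuration block)
# The reporting resource runs only after BOTH listed resources succeed.
Script UpdateDB {
    GetScript  = { @{} }
    TestScript = { $false }
    SetScript  = {
        # $theData would be built here from whatever you want to report
        Invoke-RestMethod -Method Post -Uri 'https://mywebservice' `
            -Body $theData -TimeoutSec 10
    }
    DependsOn  = @('[File]AppFiles', '[Service]AppService')
}
```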

You can always track the status of a node via Get-DscConfigurationStatus, or directly on the node’s LCM by looking at LCMState and LCMStateDetail.
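E.g., on the node itself:

```powershell
# Result of the most recent consistency run: Status, ResourcesNotInDesiredState, etc.
Get-DscConfigurationStatus

# Current LCM state (Idle, Busy, PendingReboot, ...)
Get-DscLocalConfigurationManager | Select-Object LCMState, LCMStateDetail
```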

Read my post on the Compliance Server; at the end there are links on how to query it. The only thing it requires as input is the AgentID\ConfigurationID, which is something you can collect from each node and store in a DB. The AgentID is a unique, per-node GUID that is created once, during the node’s first-ever registration with the pull server.
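Querying the reporting endpoint looks roughly like this (the pull server URL and $agentId are placeholders you fill in yourself):

```powershell
# $agentId: the node's AgentId GUID (visible in Get-DscLocalConfigurationManager on the node)
$serviceUrl = 'https://pullserver:8080/PSDSCPullServer.svc'   # placeholder URL
$requestUri = "$serviceUrl/Nodes(AgentId='$agentId')/Reports"

$response = Invoke-WebRequest -Uri $requestUri -UseBasicParsing `
    -ContentType 'application/json;odata=minimalmetadata;streaming=true;charset=utf-8' `
    -Headers @{ Accept = 'application/json'; ProtocolVersion = '2.0' }

# Each report entry carries a StatusData blob with the node's compliance details
($response.Content | ConvertFrom-Json).value
```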

Even if you’re not using pull as a delivery method, you can still create a pull server just for reporting and point your nodes at it for that part alone.

Last but not least, when you tie yourself into the process like this in a production environment, you become a potential point of failure, so safeguards should always be applied. The most important one is testing your services in a test environment on every PowerShell version change, especially when it involves changes to DSC and the LCM engine - and doing it fast, so you’re not the one blocking, say, a PowerShell upgrade needed for security reasons just because you haven’t yet fully tested the compliance server :wink:

I agree that it’s extremely important that you’re happy with the results and that you got what you expected; I’m just reminding you that what systems and IT people usually look for is stability and the least amount of intrusiveness, to keep the uptime up.
As a DBA, among other things, I endlessly try to teach devs to avoid opening ‘back doors’ for apps to query data outside their own DB. If they need the info, there should be an interface written, not direct table manipulation - which is kind of why I’m also a DevOps enthusiast.

If it’s a must, try keeping it to small amounts of data, and not constantly.
Put the code in a GitHub repo and share it with the community; you might get nice input back on ways to extend and grow it :blush:

Now back to my plan to conquer the world with DSC :wink:

My only food for thought:

“As much as I want to extend my LCM with tons of ideas, we’re on the boundary of a mini SCCM, as Don said, and maybe that’s not exactly what the great makers of DSC intended it to be. Though things change very rapidly :)”

IMO it’s a framework, and frameworks are meant to be built upon. There’s a reason you’ll see Snover speak very highly of Chef and Puppet and of his hopes to see them leverage DSC more. If you don’t want to pay for a “prebuilt” solution, by all means: build it.

Then you can end up on here often like me asking crazy questions :slight_smile: