PowerShell inventory run as a service


I wanted to see if the PowerShell collective had any opinions about this task I have.

Our company provides local IT support for many mid-size to large businesses, all with different network topologies.
We need to keep ‘accurate’ IT inventory for each client, which we currently do, to some degree, with PowerShell inventory scripts running from their domain controllers (if they have them) or run manually on individual workgroup workstations.

The problem is that this means a lot of scripts spread out among different servers, which is a challenge to manage. There are also intermittent failures when remoting to workstations, for various reasons: the laptop in question is no longer on the network, an update has impeded remote connectivity… the reasons vary.

We are looking for an automated way to do our inventory and have looked at examples like Nagios, which is more of a monitoring tool: it runs as a service on servers and sends an email notification when a threshold is met. Still, the underlying concept is a service that runs on the machine and reports back home.

I am wondering if it is possible to create an inventory PowerShell script, convert it to a service on each workstation/server, and have it report that machine’s specs, as either a JSON or HTML file, to some sort of repository for reporting.
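
For concreteness, the collection part would be something along these lines (a minimal sketch only; the property names and output path are illustrative, not a finished design):

    # Minimal inventory collection; the service/task would push the JSON to whatever
    # central repository is chosen (file share, HTTPS endpoint, etc.) - writing it
    # locally here is just a stand-in.
    New-Item -ItemType Directory -Path "$env:ProgramData\Inventory" -Force | Out-Null

    $inventory = [PSCustomObject]@{
        Hostname     = $env:COMPUTERNAME
        CollectedUtc = (Get-Date).ToUniversalTime().ToString('o')
        OS           = (Get-CimInstance Win32_OperatingSystem).Caption
        CPU          = (Get-CimInstance Win32_Processor | Select-Object -First 1).Name
        MemoryGB     = [math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB, 1)
        Disks        = @(Get-CimInstance Win32_DiskDrive |
                         Select-Object Model, @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB) } })
        BIOSVersion  = (Get-CimInstance Win32_BIOS).SMBIOSBIOSVersion
    }

    $inventory | ConvertTo-Json -Depth 3 |
        Set-Content -Path "$env:ProgramData\Inventory\$($env:COMPUTERNAME).json"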

I wanted to see if anyone has had to do something like this or knows of a better way to accomplish what I am trying to do.

I’m a firm believer that anything is possible with PowerShell. Have you seen projects such as FormatPX, PSHTML, Polaris, Universal Dashboard, etc.? There is a New-Service command, so there must be a way. But you should probably ask yourself whether PowerShell is the right tool to run as a service. Don’t get me wrong, I have plenty of scheduled tasks, GPO scripts, and other PowerShell automation; I love PowerShell. If you haven’t looked at C#, I recommend you at least consider it, and if you are familiar with PowerShell you’ll feel pretty at home with it. Something to think about. That said, with PowerShell being cross-platform and under continued development, it may be the more versatile option, depending on your needs. To answer your question, I would still use PowerShell to deploy the .NET/C# service.
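
If you do go the compiled-service route, PowerShell can still handle the rollout. A rough sketch of registering the agent with New-Service (the service name, binary path, and description are placeholders for whatever you actually build):

    # Hypothetical registration of a compiled collector service on an endpoint.
    $params = @{
        Name           = 'InventoryAgent'
        BinaryPathName = '"C:\Program Files\InventoryAgent\InventoryAgent.exe"'
        DisplayName    = 'Inventory Agent'
        Description    = 'Collects machine specs and reports them to the central repository.'
        StartupType    = 'Automatic'
    }
    New-Service @params
    Start-Service -Name 'InventoryAgent'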

Edit: we also use many tools such as Network Detective, IT Glue, etc. to try to tackle the same problem you describe. With this huge shift to work from home, a central scanning utility just ain’t gonna cut it.

I really hope you find this helpful.

IMHO, it’s a bit of a can of worms. There is CMDB and systems management software whose primary purpose is to collect data for inventory. Since your purview spans many businesses, each with different topologies, software, etc., you want a single inventory collection your organization can report from. Most organizations already have multiple agents deployed collecting the same information, just used by different teams with different reporting structures. If they already have the data and have solved some of the items below, it would be ideal to just leverage their data:

  • Discovery and Deployment - How are you going to deploy the agent? Are the devices all Windows devices? Are all devices domain joined?
  • Permissions - Are any elevated permissions required to get the inventory data?
  • Check-in and Decommission - If you've ever managed an enterprise solution like SCCM, you know you have to keep on top of agent health. Once you have things deployed, you're going to have to create processes to see why an agent hasn't reported in X days and look at decommissioning that device so it doesn't skew reporting.
  • Data Collection - Getting data isn't an issue, but getting the data to your database raises many questions. Enterprises doing data collection like this normally take a hub-and-spoke approach: Endpoint Node > Collection Node > Database. Not sure of the size of these orgs, but let's say 2,500 nodes. If each agent writes to a database directly, that is 2,500 nodes making connections to the database. Most solutions have collectors and primary collectors to reduce the number of connections writing to the database. Think of a solution like Splunk, whose purpose is gathering information and rolling it up. Even if these are multiple small companies, say 50, 150, 50, 300, 1,500, and 50 nodes, you still end up with 2,100 nodes reporting.
  • Security - How are you going to secure this data, both in transit and wherever you are collecting it?

Even if you deploy this as a simple scheduled task, it could work. Do you want to run it as a service using an exe to obfuscate the code?

I know this is a late post, but I’ll add my two cents.

We had a need for this at our company, mainly on our team. The task was to inventory roughly 100 servers.

To complete the task we had a few hurdles to overcome.

  1. Connect to multiple computers.
  2. Collect data.
  3. Keep scripts/functions in a centralized location.
  4. Centralize collection of the data.

To keep the code for the scripts/functions in a centralized location, we created a repository in Azure DevOps.

1 repo = 1 module

We then created a NuGet feed, which in effect gives us a private PowerShell repository.

We created a personal access token so that user credentials weren’t being used; the PAT only has access to pull modules.
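
Wiring an endpoint to the feed looks roughly like this (a sketch only; the <org>/<feed> URL, repository name, and module name are placeholders, and it assumes PowerShellGet v2, which wants the feed’s NuGet v2 endpoint):

    # Register the private feed with the PAT and pull the inventory module from it.
    $pat  = Get-Content 'C:\ProgramData\Inventory\pat.txt'   # however you choose to store the PAT
    $cred = [pscredential]::new('AzureDevOps', (ConvertTo-SecureString $pat -AsPlainText -Force))

    Register-PSRepository -Name 'CompanyPS' `
        -SourceLocation 'https://pkgs.dev.azure.com/<org>/_packaging/<feed>/nuget/v2' `
        -InstallationPolicy Trusted `
        -Credential $cred

    Install-Module -Name 'CompanyInventory' -Repository 'CompanyPS' -Credential $cred -Scope AllUsers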

A PowerShell script is executed at build time that creates two scheduled tasks: one to update the modules nightly, the other to run the inventory functions at a scheduled time.

The scheduled tasks run as NT AUTHORITY\SYSTEM, and the database connection uses NT AUTHORITY\NETWORK, so no credentials are needed there either.
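
The task-creation part of the build script is conceptually along these lines (a sketch; the times, task names, and the Invoke-ServerInventory function are illustrative, not our actual names):

    # Both tasks run as SYSTEM.
    $principal = New-ScheduledTaskPrincipal -UserId 'NT AUTHORITY\SYSTEM' -LogonType ServiceAccount -RunLevel Highest

    # Task 1: refresh the module from the private feed every night.
    $updateAction  = New-ScheduledTaskAction -Execute 'powershell.exe' `
        -Argument '-NoProfile -Command "Update-Module -Name CompanyInventory -Force"'
    $updateTrigger = New-ScheduledTaskTrigger -Daily -At 2am
    Register-ScheduledTask -TaskName 'Inventory - Update Module' -Action $updateAction -Trigger $updateTrigger -Principal $principal

    # Task 2: run the inventory collection after the module refresh.
    $runAction  = New-ScheduledTaskAction -Execute 'powershell.exe' `
        -Argument '-NoProfile -Command "Invoke-ServerInventory"'
    $runTrigger = New-ScheduledTaskTrigger -Daily -At 3am
    Register-ScheduledTask -TaskName 'Inventory - Collect' -Action $runAction -Trigger $runTrigger -Principal $principal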

The inventory script executes a handful of functions that collect data and write a copy of it to a CSV file. Once that’s done, another function scans the folder, converts the CSV files to a DataTable, connects to a SQL database, creates a temporary staging table (not a #temp table, because we didn’t want to lose data if the connection dropped), uploads the data to that table, merges the data into the main table, then drops the staging table.
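
Condensed, that upload step looks roughly like this (a sketch assuming Windows PowerShell with System.Data.SqlClient; the server, database, table, and column names are placeholders, and the error handling our functions do is omitted):

    # Load the dropped CSV files and build a DataTable (all columns treated as strings
    # for simplicity; assumes the CSV columns line up with dbo.Inventory's columns).
    $rows  = Get-ChildItem 'C:\InventoryDrop\*.csv' | ForEach-Object { Import-Csv $_.FullName }
    $table = New-Object System.Data.DataTable
    $rows[0].PSObject.Properties.Name | ForEach-Object { [void]$table.Columns.Add($_) }
    foreach ($row in $rows) {
        [object[]]$values = $row.PSObject.Properties.Value
        [void]$table.Rows.Add($values)
    }

    $conn = New-Object System.Data.SqlClient.SqlConnection('Server=SQL01;Database=Inventory;Integrated Security=True')
    $conn.Open()
    try {
        # Stage into a persistent work table (not a #temp table), bulk copy the rows,
        # merge into the main table, then drop the work table.
        $cmd = $conn.CreateCommand()
        $cmd.CommandText = 'SELECT TOP 0 * INTO dbo.Inventory_Staging FROM dbo.Inventory'
        [void]$cmd.ExecuteNonQuery()

        $bulk = New-Object System.Data.SqlClient.SqlBulkCopy($conn)
        $bulk.DestinationTableName = 'dbo.Inventory_Staging'
        $bulk.WriteToServer($table)

        $cmd.CommandText = @'
    MERGE dbo.Inventory AS t
    USING dbo.Inventory_Staging AS s ON t.Hostname = s.Hostname
    WHEN MATCHED THEN UPDATE SET t.CPU = s.CPU, t.MemoryGB = s.MemoryGB, t.CollectedUtc = s.CollectedUtc
    WHEN NOT MATCHED THEN INSERT (Hostname, CPU, MemoryGB, CollectedUtc) VALUES (s.Hostname, s.CPU, s.MemoryGB, s.CollectedUtc);
    DROP TABLE dbo.Inventory_Staging;
'@
        [void]$cmd.ExecuteNonQuery()
    }
    finally {
        $conn.Close()
    }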

If we change or update our functions, the endpoints pick up the changes nightly, because the first task pulls the latest module.

 

I know this is a little much, but it seems to work well for us.

I am not married to the idea of a service. I took that idea from Nagios, and it seems that deploying and activating a service might be easier than installing and configuring a scheduled task on a large number of machines.

We are talking about maybe 500 clients spread across the US, and the number of machines per client ranges from 2 to 50. So installing a service seems ideal to me, but I am open to other options.

The ‘data’ would just be basic computer specs: hostname, CPU, memory, HD (make and model), BIOS, etc.