We have a few internal-only PowerShell modules that have been growing over the years. There are several commands that I suspect are no longer in use and could be removed. However, I need some metrics to prove to myself that they truly are no longer used.
Does anyone know of a good solution for capturing telemetry from a module? I only need basic data on functions used, like the function name and a timestamp, sent somewhere that I could use to report from. If we need to build something custom, that’s definitely on the table. But why reinvent the wheel?
Thanks for the info, Olaf. I took a quick look, and based on what I saw, you should be able to easily get the function and timestamp info with a bit of parsing on the Message results. In my case, all I wanted was the scripts/modules called/opened, and I was able to get that easily without using the module, since I can't use external components. This is the "meat" of the module code that gets the events:
Get-WinEvent -FilterHashtable @{ProviderName="Microsoft-Windows-PowerShell"; Id = 4104}
Once you have this data, you should be able to parse it for what you want.
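As a rough sketch of that parsing step (assuming script block logging is enabled and using a made-up module name, 'MyInternalModule', as a stand-in), you could match the 4104 message text against your module's command names:

```powershell
# Hypothetical sketch: find known function names inside logged script blocks.
# 'MyInternalModule' is a placeholder; substitute your own module name.
$functions = (Get-Command -Module MyInternalModule).Name

Get-WinEvent -FilterHashtable @{ ProviderName = 'Microsoft-Windows-PowerShell'; Id = 4104 } |
    ForEach-Object {
        $event = $_
        foreach ($fn in $functions) {
            # Message holds the logged script block text for 4104 events
            if ($event.Message -match "\b$([regex]::Escape($fn))\b") {
                [pscustomobject]@{
                    Function  = $fn
                    TimeStamp = $event.TimeCreated
                    Computer  = $event.MachineName
                }
            }
        }
    }
```

Regex matching on message bodies like this is crude (it will count a function name appearing in a comment, for example), but it may be good enough for a rough usage census.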
I didn’t know that existed, but it’s not quite what I’m looking for. I’m looking to find function executions from our internal modules (where I can control the code) across potentially hundreds or even thousands of computers spanning on-prem and AWS resources. Trying to accumulate log data from all of those sources and parse through code in message bodies to find individual commands sounds like quite a bit of work. Plus, there are certainly security barriers to accessing those logs that might be difficult to overcome (politically speaking).
I was thinking of something that would centralize the data collection, like adding an API call into each function that would send function metadata to an API endpoint and accumulate that data into a database.
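To make that idea concrete, here's a minimal sketch of what I have in mind. The endpoint URL and payload shape are made up, not a real service; the point is just a tiny fire-and-forget call at the top of each function:

```powershell
function Send-FunctionTelemetry {
    # Hypothetical helper: posts the calling function's name and a timestamp
    # to an internal collection endpoint. The URL and body schema below are
    # placeholders to illustrate the idea.
    param(
        [Parameter(Mandatory)]
        [string]$FunctionName
    )

    $body = @{
        Function  = $FunctionName
        TimeStamp = (Get-Date).ToUniversalTime().ToString('o')
        Computer  = $env:COMPUTERNAME
    } | ConvertTo-Json

    try {
        Invoke-RestMethod -Uri 'https://telemetry.example.internal/api/usage' `
            -Method Post -Body $body -ContentType 'application/json' -TimeoutSec 2
    }
    catch {
        # Telemetry should never break the function being measured; swallow errors.
    }
}

# Then each module function would start with:
function Get-Widget {
    Send-FunctionTelemetry -FunctionName $MyInvocation.MyCommand.Name
    # ... existing function body ...
}
```

The short timeout and empty catch are deliberate: if the endpoint is down, the wrapped function should behave exactly as it did before.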
Again, just trying not to reinvent the wheel, knowing it’s possible that this wheel hasn’t been invented yet.
That’s why I think it might be a good idea to base your solution on already existing functions.
You know you can forward events from one server to another, don’t you? If you use one server to collect all the events from the target servers you don’t have to collect them individually.
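For reference, the collector-side setup is roughly this (run on the designated collector; each source also needs WinRM enabled, and the subscription itself is usually defined through Event Viewer or an XML file, with details varying by environment):

```powershell
# On the collector server: configure the Windows Event Collector service
wecutil qc

# On each source computer: enable WinRM so the collector can subscribe
winrm quickconfig

# Then create the subscription, e.g. from an XML definition:
# wecutil cs .\subscription.xml
```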
And collecting telemetry data from hundreds or thousands of computers is not an easy or simple task anyway.
I couldn’t agree more. Sounds like the “juice ain’t worth the squeeze” to me, just to clean up unused functions in modules. Olaf does have a good point on forwarding, though. I know that when we tried it, over time it became a big mess to manage, but that was in our use case.
Windows log forwarding really isn’t an option - I’m not an admin on all those servers, and many are ephemeral EC2 instances. So pulling that data together that way would end up being a major undertaking involving other teams and security. Lots of hurdles there.
There are other ways to collect the logs - we are running Elastic, so that could be used. But the parsing is the killer of any log collection method. Writing a parser to look for the commands in a module would be a major challenge, especially in Elastic.