CIM Sessions and Jobs

Is it possible to pass a CIM session to a job, or to call it from a job somehow? The big benefit to using CIM over WMI is being able to open a session, allowing you to complete multiple calls while requiring only a single handshake.
Here is what I tried, but of course it did not work…

$Session = New-CimSession -ComputerName $Computer -SessionOption $Option

$sb = {
    Param ($Session)
    Get-CimInstance -Query "SELECT * FROM Win32_UserProfile" -CimSession $Session | Sort-Object -Property LastUseTime -Descending | Select-Object -First 1
}

$j = Start-Job -ScriptBlock $sb -ArgumentList $Session
Wait-Job $j -Timeout 15 | Out-Null

$LastUser = $j | Receive-Job
$j | Remove-Job

$clientHealth.LastUser = $($LastUser.LocalPath -replace [regex]::escape('C:\Users\'),'')

I also tried passing the computer name and then calling the session inline, like "-CimSession (Get-CimSession $Computer)", but that didn't work either.
A quick Google search came back with nothing, so I am pretty confident this is a dead end, but I figured it couldn't hurt to pose the question to you guys. I'm pretty sure I am not the first to try this.

Yeah, the problem is that CIM sessions don’t support pooling, which is essentially what you’re trying to get them to do. CimSession objects are also a little complex, and they don’t necessarily serialize/deserialize well. However, I think the specific problem in your case is that $session isn’t being unwound. I think you’d want to pass that in via -Argument… although I don’t think you’re taking the approach PowerShell really wants.

I think the PowerShell team's intent is that you wouldn't have one machine firing off CIM queries. Rather, you'd use Invoke-Command and tell the remote machines to query themselves and send the results back. That'd use Remoting's built-in connection management. That's why Get-CimInstance doesn't have an -AsJob parameter.

Thanks for your response Don!

The reason I wanted to wrap the queries in jobs is simply so I could easily set a timeout for each call. I have run into issues in the past with WMI queries that simply wouldn't return anything (not even an error), essentially freezing the script indefinitely. I run some of my larger scripts as scheduled tasks, which are completely unattended, so I need to be able to rely on the fact that one unhealthy system won't take down the entire process when polling 2000 +/- workstations for information. Since it is my understanding that CIM utilizes WMI libraries on Windows, I am concerned that I will run into similar issues with CIM as I have with WMI. I wrapped my WMI calls in jobs with a timeout in the past to address the issue, which worked quite well. I was hoping to do something similar with CIM, but there is no performance benefit to CIM if I can't utilize a CIM session.
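
For reference, the job-with-timeout pattern I used with WMI looked roughly like this (a minimal sketch; Win32_OperatingSystem and the 15-second timeout are just example placeholders):

$sb = {
    Param ($Computer)
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName $Computer
}

$j = Start-Job -ScriptBlock $sb -ArgumentList $Computer
# Wait-Job returns nothing if the timeout elapses first, so a hung query gets killed here
if (-not (Wait-Job -Job $j -Timeout 15)) {
    Stop-Job -Job $j
}
$result = Receive-Job -Job $j -ErrorAction SilentlyContinue
Remove-Job -Job $j -Force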

You wouldn’t happen to know of an alternative way to address this issue?

So, to be clear, I'm not suggesting you switch to WMI.

Invoke-Command -ScriptBlock { Get-CimInstance -Query "whatever" } -Computer $list -AsJob

That's not using WMI. It uses Remoting to send the Get-CimInstance request to the computers, which execute it against themselves and return the results. This will run in parallel rather than sequentially, so it'll get done faster. It's still a job; the queries are just running in a different spot. Notice that Get-CimInstance doesn't specify a computer name, so it runs against the local computer. Invoke-Command takes care of distributing the commands to the remote computer(s).
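
A rough end-to-end sketch of that pattern, assuming $list is an array of computer names (Win32_OperatingSystem is just an example class):

$job = Invoke-Command -ScriptBlock { Get-CimInstance -ClassName Win32_OperatingSystem } -ComputerName $list -AsJob
Wait-Job -Job $job | Out-Null
# Each result carries a PSComputerName property showing which machine produced it
$results = Receive-Job -Job $job
Remove-Job -Job $job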

CIM does not utilize WMI. It talks to the same underlying repository, yes, but it uses its own set of protocols (WS-MAN, not RPC) and executables. It tends to be more performant in some areas.

It is my understanding that CIM can only use WSMan on clients with PowerShell 3 or newer, though CIM does support DCOM as an alternate protocol for older clients. For instance, most of our workstations are Win7 and only have PowerShell 2 installed. I guess I could convince our leadership to push WMF 3 or 4 to all of them if there were a significant advantage to doing so. We currently do not utilize DSC, so there really hasn't been much of a push for this sort of thing.

I do see what you are saying, though, and it makes perfect sense to use Invoke-Command in that way to run the queries locally, avoiding the handshake entirely. I guess I could simply create a PSSession, then run each query as a job via Invoke-Command, which should theoretically achieve the same result as what I was trying to do to begin with.
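
Something along these lines is what I have in mind (a sketch; the query and the 15-second timeout are just examples):

$ps = New-PSSession -ComputerName $Computer
$j = Invoke-Command -Session $ps -ScriptBlock { Get-CimInstance -Query "SELECT * FROM Win32_UserProfile" } -AsJob
# Same hung-query guard as before: stop the job if it outlives the timeout
if (-not (Wait-Job -Job $j -Timeout 15)) { Stop-Job -Job $j }
$LastUser = Receive-Job -Job $j | Sort-Object -Property LastUseTime -Descending | Select-Object -First 1
Remove-Job -Job $j
Remove-PSSession -Session $ps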

Thanks for your help.

Ah, correct. CIM as a technology is on PowerShell v3 and later. I misunderstood; when you said “CIM” I thought that’s what you were referring to.

CIM does not support DCOM; the Cim cmdlets (notice CIM vs Cim) in PowerShell can fall back to DCOM/RPC and talk to the WMI service. I know that’s a really nitpicky thing to write, but it’s an important distinction. Using Get-CimInstance over DCOM is exactly the same as using Get-WmiObject, from a performance and protocol perspective.
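
For concreteness, that DCOM fallback in cmdlet form looks like this ($Computer and the query are placeholders):

# Force the Cim cmdlets to use DCOM/RPC against the remote WMI service
$opt = New-CimSessionOption -Protocol Dcom
$session = New-CimSession -ComputerName $Computer -SessionOption $opt
Get-CimInstance -Query "SELECT * FROM Win32_BIOS" -CimSession $session
Remove-CimSession -CimSession $session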

There are big advantages to using WMF 3+, CIM being a huge one and the Remoting improvements being another. However, Remoting is also in v2. If you were to enable Remoting on those clients:

Invoke-Command -Script { Get-WmiObject -Query "whatever" } -Computer $list -AsJob

Could be used to have each computer query itself via WMI, in parallel, as a set of independent jobs, and you wouldn't be getting messed up by the timeout. There's no need to create sessions in advance, either; it'll handle it all for you, much more cleanly and with less memory overhead.

You could also, without using Remoting:

Get-WmiObject -query "whatever" -computer $list -AsJob

And it’d run over DCOM, as a set of independent parallel jobs (one for each computer), and the timeout wouldn’t be nearly so annoying. These examples assume $list is an array of computer names.

I do have PSRemoting enabled on all systems across the enterprise. While we have mostly Windows 7 devices, we do have a number of Windows 8.x devices as well, so I will be using a mixture of protocols. Basically, using Test-WSMan, I check the stack version: if it's 3 or higher, I use WSMan; otherwise I use DCOM with the CIM cmdlets. If all else fails, I fall back to the WMI cmdlets. I'm not sure I need that last part, though, if using the CIM cmdlets over DCOM is the same as using the WMI cmdlets.
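
The detection logic I'm describing looks roughly like this (a sketch; the parsing assumes Test-WSMan's usual "OS: ... SP: ... Stack: 3.0" ProductVersion format):

try {
    $wsman = Test-WSMan -ComputerName $Computer -ErrorAction Stop
    # A stack version of 3 or higher means WSMan-based CIM is available
    if ($wsman.ProductVersion -match 'Stack: (\d+)' -and [int]$Matches[1] -ge 3) {
        $opt = New-CimSessionOption -Protocol Wsman
    } else {
        $opt = New-CimSessionOption -Protocol Dcom
    }
    $session = New-CimSession -ComputerName $Computer -SessionOption $opt
} catch {
    # WSMan not responding at all; last resort is the WMI cmdlets over DCOM
    $session = $null
}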

I still don’t really like the idea of throwing a list of 2000 (give or take) computers at Invoke-Command for parallel processing as jobs, since there is no way to limit the number of jobs that are being processed concurrently. Also, I can’t forget about timeouts. Jobs that take longer than n seconds need to be squashed in order to keep the ball rolling.

The way I currently have things set up, I am utilizing a runspace pool with a limited number of threads, so my equivalent of $list will never have more than one computer name per thread. I am pulling a ton of information from each system, which is why I want to establish a connection and keep it open until I am done.
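
For context, the scaffolding looks roughly like this (a minimal sketch; 8 threads and the query are just examples):

$pool = [runspacefactory]::CreateRunspacePool(1, 8)
$pool.Open()

$workers = foreach ($Computer in $list) {
    $ps = [powershell]::Create()
    $ps.RunspacePool = $pool
    [void]$ps.AddScript({
        Param ($Computer)
        Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName $Computer
    }).AddArgument($Computer)
    [pscustomobject]@{ Shell = $ps; Handle = $ps.BeginInvoke() }
}

foreach ($w in $workers) {
    # EndInvoke blocks until that runspace finishes and returns its output
    $w.Shell.EndInvoke($w.Handle)
    $w.Shell.Dispose()
}
$pool.Close()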

I wish I could show you exactly what I have been doing, and what I am working on at present, to give you a better idea of the scale of the script and what the end goal is.

Basically, I created a large PowerShell script that runs on a schedule to pull various information from each workstation, such as factors affecting SCCM client health (low disk space, expired certs, subject mismatch on certs, broken WMI, etc.), HBSS agent health (HIPS state, virus definitions, etc.), various other troubleshooting information, and much more. The script dumps this info to various tables in a SQL database. Then I built a web-based interface allowing our technicians to easily consume this data. We use this in a variety of ways: ensuring our systems are healthy and receiving patches is one, troubleshooting user issues is another, and we also use it for things like hardware inventory and tracking software licensing. I am currently tracking information for ~19k workstations, and we intend to start doing something similar for servers soon, which will push it over the 20k mark.

Invoke-Command automatically throttles to 32 connections; -ThrottleLimit lets you change that. And it times out almost immediately when it can’t connect; because you’re running Get-WmiObject locally on each target machine, there shouldn’t be a timeout. And who cares if there is? It’ll only tie up one thread, and you’ve got - by default - 31 others still running. Microsoft has done this with 25k computers; 2k is well within the command’s capabilities.
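
For example, raising the throttle is a one-parameter change:

Invoke-Command -ScriptBlock { Get-WmiObject -Query "whatever" } -ComputerName $list -AsJob -ThrottleLimit 64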

So considering that each job has to complete before I can receive the result, wouldn’t I need to ensure that each job is stopped at some point?

You can use Wait-Job to wait until they all complete or fail. "Failed" would mainly be those that Invoke-Command couldn't reach, but you could certainly enumerate the jobs to find out. Keep in mind that you really only need to wait for the top-level parent job to no longer be "Running"; its status will be the worst case of the child jobs (e.g., if one child fails, the parent will have a status of Failed).
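
A sketch of that, assuming $job came from Invoke-Command -AsJob:

Wait-Job -Job $job | Out-Null
# Each child job's Location is the computer it targeted
$failedComputers = $job.ChildJobs | Where-Object State -eq 'Failed' | Select-Object -ExpandProperty Location
$results = Receive-Job -Job $job -ErrorAction SilentlyContinue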

That brings me back to my original point. I have had the experience that some WMI calls don't return anything and just freeze the script indefinitely. While this is rare, it is a problem that I need to address, given that I run my scripts unattended against large pools of systems, so I do need to look at each job individually. It is safe to assume that once a certain amount of time has passed, the thread may be locked up and might not stop on its own. I have been using Wait-Job $j -Timeout [n] to accomplish this.

I try to put fail safes in place for just about any conceivable possibility that could cause the script to fail or stop prematurely. I just want it to log the error, and move on.

So, let’s review that.

  1. The WMI calls are going to be running locally, which makes them far less likely to just "hang" than with a remote WMI call.

  2. If Invoke-Command doesn’t get a result back in a certain amount of time, it’ll fail the associated child job anyway.

  3. The “moving on” happens already, unless all 32 threads get “hung.” If that’s the case, you don’t need failsafes, you need to re-engineer your environment :).

I’m not sure you need to look at each job individually. But, if you’re just hyper-paranoid and really really really really really want to, sure, go ahead. Wait-Job can take all the child jobs as input and just wait for all of them. I think it might be worth, you know, testing it a bit before you assume it won’t work and need to resort to that, but totally your call.
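
If you do go that route, something like this would cover it (15 seconds is just an example value):

foreach ($child in $job.ChildJobs) {
    if (-not (Wait-Job -Job $child -Timeout 15)) {
        # Treat it as hung: stop the child so the loop keeps moving
        Stop-Job -Job $child
    }
}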

Fair enough. :)

It is something that occurs once in a blue moon, but it has been a problem in the past, so perhaps I am a bit paranoid. I am testing this in a few different ways; I just want to ensure I have all the angles covered, rather than having to figure it out later after it's been pushed to production.