Everything works like a charm when running on a local machine.
I want to run this script across several machines, but what takes 2-5 seconds locally takes 1-2 minutes when running it remotely.
My guess is that Get-ChildItem is returning all of the objects back over the network to the calling console, and that round-tripping is what hurts the performance.
I understand that I could start some background jobs and have them running in parallel and just check when they are done.
But what I would really like to know and understand is whether there is a way to just execute a script on a remote machine? An old-school fire and forget, but with local execution performance. I could easily push the same script onto the servers, into the same location, and then execute it.
Is it possible to execute a locally stored script via PowerShell Remoting, but make sure that the execution actually happens locally on the remote machine? Should I start a powershell.exe with parameters to achieve the expected results?
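To make the intent concrete, here's a rough sketch of the fire-and-forget pattern I'm imagining, using Invoke-Command's -AsJob switch (the server names and script path are placeholders, and the script is assumed to already exist on each server):

```
# Hypothetical sketch: run a script that already exists on each server,
# without waiting for the results. Names and paths are placeholders.
$servers = 'SERVER01', 'SERVER02'

Invoke-Command -ComputerName $servers -AsJob -ScriptBlock {
    # This path refers to a location on the REMOTE machine
    & 'C:\Scripts\CleanUp.ps1'
}

# Later, check on the jobs without blocking:
Get-Job | Receive-Job -Keep
```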
Any other ideas to achieve what I'm looking for would be of incredible help!
So, you don’t mention how you’re running this remotely. I suspect you’re using a UNC path or a mapped drive.
Were you to use Invoke-Command to send that exact command (referencing a drive letter that is local to the remote machine) over the network, it does execute locally. Now, the Remoting provider process is a background process, so it definitely can be given less priority than an interactive process; as an admin, you can obviously modify process priority. But it would definitely execute locally, without doing the multiple round-trips your current command is doing.
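For example (the computer name and path here are placeholders), the command below runs entirely on the remote machine; only the resulting objects come back over the wire:

```
# The scriptblock is sent to SERVER01 and executed there.
# D:\Logs is a drive local to the REMOTE machine, so Get-ChildItem
# enumerates the files locally; only the filtered results are returned.
Invoke-Command -ComputerName SERVER01 -ScriptBlock {
    Get-ChildItem -Path 'D:\Logs' -Recurse -File |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) }
}
```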
So if we were to use Invoke-Command but reference a script that is stored locally on the console host, will it take the entire content of the script over to the remote computer, execute it locally there, and only bring back the results? Am I understanding this correctly?
Hello Mr. Jones, I just finished reading your book “PowerShell in a month of lunches” and I first and foremost have to thank you for co-authoring such a wonderful book. And because of this book, instead of googling scripts, I was able to write my scripts from scratch. So a big thanks to you and Mr. Hicks!!
So after reading chapter 13 of the book, which explains the benefits of using Invoke-Command, I incorporated it into my script instead of just using plain SMO for querying SQL Server. But I have noticed that when I explicitly write the entire code in the -ScriptBlock, Invoke-Command returns the results in 5-8 seconds, but if I use Invoke-Command with -FilePath, the same query takes, on average, 25-35 seconds longer. I am querying the same SQL Server over the WAN from the same console.
Also, in general, does parameterization of a script degrade its performance?
Below is the code for your reference:
There’s a bit more overhead using -FilePath because PowerShell has to open the local file, package it, and transmit it, before the remote end can start executing. It’s usually pretty minimal in my own experience; I’ve not run into a 3x penalty like you’re seeing. If I was hell-bent on figuring out where the difference was, I’d probably run some very detailed traces with Trace-Command to see if there’s a particular part of the process that’s taking notably longer.
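A rough sketch of that kind of investigation (the computer name and script path are placeholders; ParameterBinding is just one common trace source to start with, and Measure-Command gives a quick side-by-side timing):

```
# Trace one of the invocation styles to see where time is going.
# SERVER01 and the script path are placeholders; adjust trace sources as needed.
Trace-Command -Name ParameterBinding -PSHost -Expression {
    Invoke-Command -ComputerName SERVER01 -FilePath 'C:\Scripts\Query-Sql.ps1'
}

# Or simply time both forms for comparison:
Measure-Command {
    Invoke-Command -ComputerName SERVER01 -FilePath 'C:\Scripts\Query-Sql.ps1'
}
```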
I’ll give Trace-Command a shot to figure it out, because these are simple scripts querying SQL Server for basic info, and if they take 40-45 seconds, that’s a deterrent for my colleagues; they’re all Bash guys, and I want to make PowerShell work and look good.
Also, my other question was about variables. I have noticed that when the same query is scripted using variables, there is a 2x-2.5x penalty. Is that something that is expected?