I’ve been trying to think of a good way to speed up a relatively lengthy, single-threaded process: scanning every file share on our NetApp appliance (they’re just CIFS shares, i.e. \\NetApp\Share1, \\NetApp\Share2, \\NetApp\Share3, etc.). I can already run Robocopy and parse its results quite nicely, thanks to some example scripts I’ve found, but what I want to do is run more than one Robocopy at a time.
So what I have together is two PowerShell scripts. The first uses the NetApp PowerShell cmdlets to return a list of shares. I then do a ‘foreach’ over that list and call the second script, which does all of the Robocopy (to NULL) parsing to retrieve the total size, file count, etc., and writes an output file per share. This all works great, except that the first script is a fire-and-forget process launcher. I ran it on a 16-core blade with 32 GB of RAM and it handled it OK… but I spawned close to 600 PowerShell windows all at once, each starting Robocopy, and went almost immediately from zero CPU/memory consumption to 100% CPU across all cores and about 17 GB of RAM. I knew that would happen; I was just curious.
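For context, the launcher boils down to something like this sketch (the script path, parameter names, and UNC prefix are illustrative; Get-NaCifsShare is from the NetApp DataONTAP PowerShell Toolkit):

```powershell
# Get the list of CIFS shares from the NetApp appliance.
$shares = Get-NaCifsShare | Select-Object -ExpandProperty ShareName

foreach ($share in $shares) {
    # Fire-and-forget: every share gets its own powershell.exe immediately,
    # so ~600 Robocopy scans all start at once.
    Start-Process powershell.exe -ArgumentList @(
        '-File', 'C:\Scripts\Scan-Share.ps1',   # hypothetical 2nd script
        '-SharePath', "\\NetApp\$share"
    )
}
```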
So what I want to know is: if I have a list of shares, is there a way to specify a number (let’s say 8) and have PowerShell chunk through the list, spawning up to 8 instances at a time, starting another whenever one finishes, so there are always 8 running until it reaches the end of the list? That should cut the 12-hour runtime of a single script scanning the whole appliance down to mere hours. I want to schedule this daily for trending, and when a run takes a variable timeframe of nearly 12 hours, the results are skewed for the trending I want to do.
To recap: basically, I have 680 ‘things’ and I want to work on 8 of them at once, not single-threaded, until all 680 are done.
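The pattern I’m imagining is roughly this (a sketch using background jobs; the share list file, UNC prefix, and Robocopy switches are illustrative, not my actual script):

```powershell
# Throttled launcher: keep at most $MaxJobs scans running at any time.
$MaxJobs = 8
$shares  = Get-Content 'C:\Scripts\shares.txt'   # one share name per line

foreach ($share in $shares) {
    # Block until a slot frees up before starting the next scan.
    while ((Get-Job -State Running).Count -ge $MaxJobs) {
        Start-Sleep -Seconds 5
    }

    Start-Job -Name $share -ScriptBlock {
        param($path)
        # List-only pass: /L lists without copying, /E recurses into subdirs,
        # /NJH suppresses the job header, /NFL /NDL suppress per-file output,
        # /BYTES reports sizes in bytes for easier parsing.
        robocopy $path 'NULL' /L /E /NJH /BYTES /NFL /NDL
    } -ArgumentList "\\NetApp\$share"
}

# Let the final batch finish, collect its output, then clean up.
Get-Job | Wait-Job | Receive-Job
Get-Job | Remove-Job
```

The `Get-Job -State Running` check is the throttle: a new job only starts once the running count drops below 8, so there are always up to 8 Robocopy scans in flight until the list is exhausted.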