copy-item vs robocopy

I’m currently testing calling robocopy from a PowerShell script in a GPO (see below) to copy/sync changes to our company-published favorites. Today we do this with User Configuration > Preferences > Windows Settings > Files, copying the folders and files (URL shortcuts) into each user’s profile\Favorites, but when we update or remove those URLs it causes extreme slowness at a few of our remote offices. What I have below seems to work, but my boss thinks using Copy-Item would be a better way. I don’t see how Copy-Item can update changed files or remove files the way the /MIR switch does with robocopy.

C:\WINDOWS\SYSTEM32\ROBOCOPY "\\aredomain\NETLOGON\Favorites-New\OurCompany Published Favorites" "c:\Documents and Settings\$env:username\Favorites\OurCompany Published Favorites" /MIR /R:1 /W:1 /Z

IMO, Robocopy will tend to be better than Copy-Item, as it cuts down on the overall amount of data that needs to be transferred. (It checks each file’s size and last-modified timestamp first, so it only copies files that have actually changed.)
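If you want to sanity-check what a /MIR run would copy or delete before it touches anything, robocopy has a list-only switch. A quick sketch reusing the paths from your command above (only /L and /NP are added; nothing is transferred or deleted, the output just lists what would change):

C:\WINDOWS\SYSTEM32\ROBOCOPY "\\aredomain\NETLOGON\Favorites-New\OurCompany Published Favorites" "c:\Documents and Settings\$env:username\Favorites\OurCompany Published Favorites" /MIR /R:1 /W:1 /Z /L /NP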

If you’re having challenges with remote offices doing this on-demand, you might be better off setting up a branch file server and something like a BITS transfer or DFS replication out to there. When you update something at the home office, the slower transfer out to the remote office can take as long as it takes, while the people inside the remote office are still quickly accessing the old version. Once the home office transfer is done, you trigger a job to overwrite the live location on the branch file server. This will be slower in terms of convergence time, but as a trade-off, the users in those remote offices should never see that extreme slowness again.
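If you go that route, here’s a rough sketch of the staged pull, run on the branch file server itself. Every server name and path is a placeholder, and note that BITS handles files but not subfolders, so this assumes the published favorites live in a flat folder:

Import-Module BitsTransfer

# Placeholder paths: HQ source share, local staging folder, and the live folder users hit
$HQSource    = '\\hqserver\NETLOGON\Favorites-New\OurCompany Published Favorites'
$BranchStage = 'D:\Staging\Favorites'
$BranchLive  = 'D:\Shares\Favorites'
New-Item -ItemType Directory -Path $BranchStage, $BranchLive -Force | Out-Null

# The slow WAN copy runs at low priority; branch users keep reading the old live copy meanwhile
Start-BitsTransfer -Source "$HQSource\*" -Destination $BranchStage -Priority Low

# Start-BitsTransfer is synchronous here, so this only runs once the WAN copy is done:
# cut over with a fast local copy on the branch server
Copy-Item -Path "$BranchStage\*" -Destination $BranchLive -Recurse -Force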

But to be fair, you can use PowerShell to review files and copy over only the changed or missing ones as an alternative to Robocopy. You just have to compare the source and destination files. I use Get-FileHash and Test-Path to do this along with Copy-Item. For example:

$Source = Get-ChildItem -Path C:\Scripts\TestDir -File

ForEach ($File in $Source) {

    # Hash the source file once
    $Orig = $File | Get-FileHash | Select-Object Path, Hash
    $DestFile = "\\server01\share\TestDir\$($File.Name)"

    If (-not (Test-Path $DestFile)) {
        # Destination doesn't exist yet, so just copy it
        Write-Output "Copying missing file $DestFile"
        Copy-Item -Path $Orig.Path -Destination $DestFile
        Continue
    }

    # Destination exists: compare hashes and copy only when they differ
    $Dest = Get-FileHash -Path $DestFile | Select-Object Path, Hash
    If ($Orig.Hash -ne $Dest.Hash) {
        Copy-Item -Path $Orig.Path -Destination $Dest.Path -Verbose
    }
    Else {
        Write-Output "Files are equal"
    }

} #ForEach

Then you copy only the files that are different or missing.

If you’re calling Get-FileHash on a network location, you’re already transferring the file over the network just to compute the hash. Then, if it’s different, you’d be transferring the data again. If you wanted to implement an efficient solution based on hashes, you’d need to compute the hashes on the server side and make them available to the client in some way. (Perhaps by having a .hash file on the share alongside every data file, etc.) Then the client could download the hash file first when checking for differences. The larger the average data file size, the more benefit you’d see from that approach.
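Here’s a rough sketch of that idea, flipped around to match the favorites scenario (the paths and the .hash naming are just assumptions): a scheduled job on the file server writes a tiny sidecar hash next to each source file, and the client reads only those sidecars before deciding which full files to pull.

# --- On the file server (scheduled task): write a small .hash sidecar next to each source file ---
Get-ChildItem -Path 'D:\NETLOGON\Favorites-New' -File |
    Where-Object { $_.Extension -ne '.hash' } |
    ForEach-Object {
        (Get-FileHash -Path $_.FullName).Hash | Set-Content -Path "$($_.FullName).hash"
    }

# --- On the client: read the tiny sidecar, hash the local copy, pull the full file only when needed ---
$Share = '\\server01\share\Favorites'
$Local = "$env:USERPROFILE\Favorites"
ForEach ($Remote in (Get-ChildItem -Path $Share -File | Where-Object { $_.Extension -ne '.hash' })) {
    $RemoteHash = Get-Content -Path "$($Remote.FullName).hash"
    $LocalFile  = Join-Path $Local $Remote.Name
    $LocalHash  = If (Test-Path $LocalFile) { (Get-FileHash -Path $LocalFile).Hash } Else { $null }
    If ($LocalHash -ne $RemoteHash) {
        # Only the changed or missing files cross the wire
        Copy-Item -Path $Remote.FullName -Destination $LocalFile
    }
}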

While you technically could reimplement some or all of robocopy’s functionality in PowerShell, I’m not sure I see the benefit. Aside from the potentially wasted effort, it would very likely run slower than robocopy.

Does your company use SCCM? That would be my first port of call; it’s perfect for this type of thing.