Running Scripts via SCCM

Hello to all

I have been asked to create a script to copy and replace files and folders from a UNC path to any path on a local machine.

The powers that be want it deployed via SCCM 2012 so we can monitor the deployment etc. The script contains all the error checking and environment variables it needs and works perfectly, except for copying files to $env:ProgramFiles or ${env:ProgramFiles(x86)}, where it fails with "access is denied".

I have tested the script by elevating PowerShell to administrator locally on one of the machines and running it, and the files copy exactly as I would like.
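For context, this is a minimal sketch of the kind of copy that hits the problem; the source share and target folder names are placeholders, not the actual paths from my script:

```powershell
# Hypothetical paths - substitute your own UNC source and local target.
$source      = '\\fileserver\deploy\MyApp'
$destination = Join-Path $env:ProgramFiles 'MyApp'

try {
    # -Force overwrites existing files, but it does NOT bypass NTFS permissions,
    # so this still throws "access is denied" unless the process is elevated.
    Copy-Item -Path $source -Destination $destination -Recurse -Force -ErrorAction Stop
}
catch [System.UnauthorizedAccessException] {
    Write-Warning "Access denied writing to $destination - the process is not elevated."
}
```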

The bottom line is this (if at all possible):

I need to be able to elevate a script to administrator mode without the user having to click Yes on the UAC prompt.

If you can solve this, or at least point me in the direction of an answer that isn't hidden in a forum minefield, it would be greatly appreciated by an awful lot of people in my organisation!


Certa Cito

Your approach is not correct from my point of view. The SCCM/ConfigMgr agent runs as the Local System account, which means it connects to network shares under the computer account DOMAIN\MyComputer$, and granting share access to the Domain Computers group or to specific computer accounts is not good practice.

From my experience the correct approach is to point the content path of the application's deployment type to a UNC path (usually a dedicated repository) containing all necessary files, including your PowerShell script, to which the Primary Site or CAS services have read access. You then distribute the application's content to all distribution points. Once the content is distributed, you deploy the application to the necessary clients and let the SCCM/ConfigMgr client fetch the content from its closest distribution point via HTTP or HTTPS. The client downloads the content into a new folder under C:\Windows\ccmcache and invokes the script it downloaded together with the rest of the content.
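Because the client runs the script from wherever it lands under ccmcache, the script should resolve its payload relative to its own location rather than hard-coding a UNC path. A rough sketch (the "Files" subfolder and "MyApp" target are hypothetical names):

```powershell
# The deployment-type content (script + files) is downloaded as one unit, so
# $PSScriptRoot points at the ccmcache folder holding this script.
$payload     = Join-Path $PSScriptRoot 'Files'            # hypothetical subfolder in the content
$destination = Join-Path $env:ProgramFiles 'MyApp'        # hypothetical target

New-Item -Path $destination -ItemType Directory -Force | Out-Null
Copy-Item -Path (Join-Path $payload '*') -Destination $destination -Recurse -Force
```

Since the client agent runs as Local System, the copy into Program Files succeeds without any UAC prompt.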

No need to open additional firewalls or play with share and NTFS permissions.

I hope that helps. I might be wrong, but that is how we do it where I work. This approach scales to large networks with multiple firewalls and slow links between sites, given correct placement of distribution points.


This is more of an SCCM question than a PowerShell one, and I feel like we're getting half the story. SCCM programs run either as the administrator (SYSTEM) or as the user. It sounds like you are running the program as the user, updating some folders under their profile, and also trying to copy to Program Files. You can approach it a couple of ways. The SYSTEM account has access to all profiles, so you can loop over each profile, copy the files there, and then do a single copy to your Program Files directory. The second option is to separate the script into what should run as the user and what should run as SYSTEM: create one program to run as the user and another to run as SYSTEM, then set one as a prerequisite of the other.
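The first option (everything as SYSTEM) might look roughly like this; all folder names here are hypothetical placeholders:

```powershell
# Running as SYSTEM: copy per-user files into every real profile,
# then do a single copy into Program Files.
$perUserFiles = Join-Path $PSScriptRoot 'UserFiles'       # hypothetical payload folders
$machineFiles = Join-Path $PSScriptRoot 'MachineFiles'

Get-ChildItem 'C:\Users' -Directory |
    Where-Object { Test-Path (Join-Path $_.FullName 'NTUSER.DAT') } |  # skip Public, Default etc. without a hive
    ForEach-Object {
        $target = Join-Path $_.FullName 'AppData\Roaming\MyApp'        # hypothetical per-user target
        New-Item -Path $target -ItemType Directory -Force | Out-Null
        Copy-Item -Path "$perUserFiles\*" -Destination $target -Recurse -Force
    }

$machineTarget = Join-Path $env:ProgramFiles 'MyApp'                   # hypothetical machine target
New-Item -Path $machineTarget -ItemType Directory -Force | Out-Null
Copy-Item -Path "$machineFiles\*" -Destination $machineTarget -Recurse -Force
```

Note this only touches profiles that already exist on the machine; new users would still need the user-context program or an Active Setup-style mechanism.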

I agree with Daniel and Rob.
You can accomplish this, but the issue is not with the PowerShell script; it is with the security context in which SCCM executes installs. If you run the install with administrative rights, it runs as the Local System account. To test your script within that context, you would need to run it from a scheduled task under the Local System account. That being said, there are better ways to accomplish the task as described via SCCM. I would suggest creating an SCCM package and using the UNC path as the package source. If you need to refresh the files regularly, that can be scheduled, or done manually if updates are infrequent. You can then deploy the files without running the process under an account that needs additional security privileges. This method lets you target users and machines as needed via SCCM and gives you the control and reporting you are looking for.