-confirm keeps asking to confirm after selecting "All"


Excerpt from my script:

   # Process each file
   foreach ($file in $Files) {
      # Create output filename from the first line of the file
      $newFile = (Get-Content -LiteralPath "$file" -TotalCount 1).Trim()

      # Remove any special characters that are invalid in Windows file names
      $newFile = [regex]::Replace($newFile, '[\\\/\:\*\?\"\<\>\|]', '')

      # Does the new file already exist?  If so, add a filename suffix (1,2,3,etc)
      # We can derive the suffix by getting the count from a dir command
      $count = @(Get-Item (($Output,$newFile -join "\"),"*",$Extension -join "")).Length

      # If the file(s) already exist add the filename suffix
      if ($count -eq 0) {
         $newFile = (($Output,$newFile -join "\"),$Extension -join "")
      } else {
         $newFile = (($Output,$newFile -join "\"),"_",$count,$Extension -join "")
      }

      if ($PSCmdlet.ShouldProcess(
         ("{0,-32} to {1}" -f $file.Name, $newFile), "Rename HCP File")) {
         Try {
            Copy-Item -LiteralPath "$file" -Destination "$newFile" -Force -ErrorAction Stop
         } Catch {
            Throw "Cannot copy '$file' to '$newFile'"
         }

         # Copying the file preserves the original file date/times,
         # so reset the CreationTime and LastWriteTime of the new file
         (Get-Item "$newFile").CreationTime = $date
         (Get-Item "$newFile").LastWriteTime = $date

         # Rename source file to "processed" file extension
         Try {
            Move-Item -LiteralPath "$file" -Destination "$file.processed" -Force -ErrorAction Stop
         } Catch {
            Throw "Cannot rename '$file' to '$file.processed'"
         }
      }
   }

I run this with -confirm, say Yes a few times to make sure all is well, then say A (All). It keeps asking for confirmation on each iteration.

What am I doing wrong?


You’ll have to save scripts with a TXT filename extension to attach them.

The “problem” here is that you’re in a foreach loop. So you’re calling the cmdlet multiple times, each time with only one input object. Each unique time the cmdlet gets called, it’ll confirm.

So, with this:

 Get-Stuff | Do-Something 

The Do-Something cmdlet gets initiated ONCE with a bunch of input, so it’ll take "A"ll on the confirm. With this:

ForEach ($thing in (Get-Stuff)) { Do-Something -InputObject $thing }

Do-Something is being repeatedly initialized and executed.
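A minimal sketch of the difference (Do-Something is a stand-in name, not a real cmdlet):

```powershell
function Do-Something {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param(
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        $InputObject
    )
    process {
        # One ShouldProcess call per input object, but all within the
        # same invocation of the function.
        if ($PSCmdlet.ShouldProcess("$InputObject")) {
            "Processed $InputObject"
        }
    }
}

# One invocation, many input objects: answering 'A' at the first prompt
# suppresses every later prompt in the same run:
#   1..3 | Do-Something -Confirm

# One invocation per object: the confirm state resets on each call,
# so 'A' only covers the single object of that call:
#   foreach ($thing in 1..3) { Do-Something -InputObject $thing -Confirm }
```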

You should read this blog post:

It highlights things you should consider when adding should process support.

Thanks Don and Posh, both for your replies and your fantastic support to the community.

Yeah, TBH I thought that might be the case. However, I need to be in the loop to create the message I want $PSCmdlet.ShouldProcess() to display. The current code works splendidly for -whatif or -verbose (and silently if neither are specified). And it works as desired when I confirm one operation at a time. It’s only “All” that’s a pain. And the script may be processing hundreds of files.

IMO, it would be nice if Microsoft had provided the functionality to “remember” the All choice (either Yes or No). If the end user chooses [A] Yes to All, then $PSCmdlet.ShouldProcess() returns “true” for any subsequent calls, and executes the codeblock for the remainder of the script. If [L] No to All, then $PSCmdlet.ShouldProcess() returns "false for any subsequent calls, and no-ops for the remainder of the script. And perhaps make this functionality configurable, something like $ShouldProcessActionPreference=“AllContinue”, or some such.

If there is a way to capture the choice the end user made to the confirm question, I may be able to wrap the call to $PSCmdlet.ShouldProcess() with some logic condition (maybe). But I don’t know how to do that, or if it’s possible.

If instead I have to roll my own ShouldProcess functionality, then I’ll just tell the end user not to use -confirm.


After further thought, I decided to attach my entire script. It’s possible that I’m doing processing in the END block that would be better done in the PROCESS block???

Brief description:

  • It takes a collection of files (obtained from end customers, with no real naming convention), reads the header line from each file (which does have an enforced convention), makes a copy of that file with the new name, and appends a “.processed” extension to the source file.

  • If multiple files have the same header line, add a filename suffix to the renamed file so they don’t get overwritten.

  • (This is key): >>> I want the script to support both command line parameter input and pipeline input. <<<

This 3rd point comes up in many of my scripts. I want to find a good paradigm to support this. IIRC, the current paradigm is a slight modification of a blog post created by Don re: processing pipeline input (the URL escapes me at the moment).

So, 1) I want to create a solid, repeatable paradigm to gracefully support both command line and pipeline input, and 2) if that changes where code ends up (PROCESS vs. END), perhaps it will affect the SupportsShouldProcess functionality.


Hi Scott,

I would usually offer a longer reply, but I’m quite busy today so I can only offer a short response at the moment.

I would definitely move the processing into the process block for several reasons:
a) it allows you to properly leverage the pipeline by processing objects and then releasing them (on the flipside, using the end block means you have to collect all objects in memory, which may be less desirable, especially for very large collections)
b) it allows you to properly support ShouldProcess in your command so that (A)ll actually works as you would expect.
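The buffering difference in (a) can be sketched with two hypothetical functions:

```powershell
# End-block style: every pipeline object is buffered before any work starts.
function Invoke-Buffered {
    [CmdletBinding()]
    param([Parameter(ValueFromPipeline=$true)]$InputObject)
    begin   { $items = @() }
    process { $items += $InputObject }
    end     { foreach ($i in $items) { "Handled $i" } }
}

# Process-block style: each object is handled (and released) as it arrives,
# and per-invocation state such as a 'Yes to All' answer carries across items.
function Invoke-Streaming {
    [CmdletBinding()]
    param([Parameter(ValueFromPipeline=$true)]$InputObject)
    process { "Handled $InputObject" }
}
```

Both produce the same output here, but the streaming form starts emitting results immediately and never holds the whole collection in memory.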

Also, here is another great blog post about SupportsShouldProcess that specifically talks about the problem when supporting (A)ll might not work: http://becomelotr.wordpress.com/2013/05/01/supports-should-process-oh-really/
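Separately, if plain ShouldProcess ever proves insufficient, there is a documented way to “remember” an All answer yourself: the four-argument overload of `$PSCmdlet.ShouldContinue` takes `[ref]` yesToAll/noToAll flags, which it sets when the user answers Yes to All or No to All. A rough sketch (the Invoke-WithMemory name and its -Force switch are made up for illustration; note that ShouldContinue ignores -WhatIf and always prompts unless you gate it yourself):

```powershell
function Invoke-WithMemory {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        [string]$Name,

        # Skip prompting entirely (also makes the sketch testable).
        [switch]$Force
    )
    begin {
        # ShouldContinue records 'Yes to All'/'No to All' in these [ref] flags,
        # so an 'A' or 'L' answer is remembered for later pipeline objects.
        $yesToAll = $false
        $noToAll  = $false
    }
    process {
        if ($Force -or
            $PSCmdlet.ShouldContinue("Process '$Name'?", 'Confirm',
                                     [ref]$yesToAll, [ref]$noToAll)) {
            "Processed $Name"
        }
    }
}
```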

Hopefully that will give you the answers you are looking for. If not, reply again to this thread and I’ll look again later when I have more time.

Hi Posh,

I certainly understand being busy, so again thanks for the reply.

I want my script to support these invocations:

.\script.ps1 foo, ..\bar, foo\bar -confirm: process all files in the directories .\foo, ..\bar, and .\foo\bar

gci foo, ..\bar, foo\bar | .\script.ps1 -confirm: same as above

I went searching for the post by Don I saw previously but couldn’t find it. However, I did find this one: http://blogs.technet.com/b/heyscriptingguy/archive/2011/05/10/use-the-pipeline-to-create-robust-powershell-functions.aspx.

Using that as a guide, I created the attached test script. However, -confirm A[ll] still isn’t working. :(

I guess I need to move the $PSCmdlet.ShouldProcess() method call outside all the foreach loops, but that won’t give me my desired functionality either.

I’m curious: using Don’s example in the URL above:

  Foreach ($computer in $computername) {

        # do stuff here with $computer

  }


If he wanted to confirm each iteration of # do stuff here with $computer, i.e. ask the user “Do stuff with Computer1?” (Yes), “Do stuff with Computer2?” (No), “Do stuff with Computer3?” (Yes to All), how would one accomplish that (with no more prompting)?


Below I have shared a simplified example that demonstrates SupportsShouldProcess in action. It invokes ShouldProcess inside of a foreach loop in the process block, and PowerShell handles the rest when it comes to All or None options on Confirm requests. Note: you can run this freely; it does nothing harmful, since it just calls Rename-Item with a hardcoded -WhatIf parameter. Here are the scenarios you should use to see how this works:

gci *.txt | Rename-MyFile -Confirm

$x = gci *.txt | select -expand fullname
Rename-MyFile -Path $x -Confirm

When I run this in a folder with about a half-dozen text files, I can do exactly what you requested: say yes to one, no to one, then yes to all, or some combination like that, and I don’t receive any undesirable prompting.

You should be able to convert your existing function to leverage this model and have it work in the scenarios you are targeting (pipeline input and parameter based input).

Some things to note:

  1. It uses a singular noun. Your command has a pluralized noun. It should be singular to follow PowerShell’s consistency guidelines.
  2. It uses ValidateNotNullOrEmpty on parameter input. That’s just a PowerShell freebie that stops incorrect input in its tracks.
  3. It uses string as the type for the Path parameter. Since Path accepts pipeline input by property name, there is no need to have this be a generic object and do any fancy type-checking internally. Let PowerShell figure out that you want a string and that you’re willing to accept it by property name if the object has a Path or a FullName property on it. Of course since you’re also accepting pipeline input by value, you can just pass in a set of strings too and that will work.

I think if you follow this model and convert your function to it, your code will be easier to follow and it will support the invocation use cases you want to support.

Here’s the simple function that demonstrates how this works:

function Rename-MyFile {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param(
        [Parameter(Position=0, Mandatory=$true, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
        [ValidateNotNullOrEmpty()]
        [Alias('FullName')]
        [string[]]$Path
    )
    begin {
        Set-StrictMode -Version Latest
        $ErrorActionPreference = 'Stop'
    }
    process {
        foreach ($item in $Path) {
            if ($PSCmdlet.ShouldProcess($item)) {
                Rename-Item -LiteralPath $item -NewName "$($item | Split-Path -Leaf).MyFile" -WhatIf
            }
        }
    }
}

Kirk out.