Writing script modules vs. scripts with CmdletBinding

Hi. I’m wondering how I should write my scripts. What I want is a file/script that behaves like a cmdlet. I am wondering if I should wrap my code in a function or not.

E.g.

function Write-Stuff {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory=$true)] [string] $Stuff1,
        [Parameter(Mandatory=$false)] [string] $Stuff2 = $null
    )

    Process {
        if ($Stuff2) {
            Write-Host "You gave me $Stuff1 and $Stuff2"
        } else {
            Write-Host "You gave me $Stuff1"
        }
    }
}

VS

[CmdletBinding()]
Param (
    [Parameter(Mandatory=$true)] [string] $Stuff1,
    [Parameter(Mandatory=$false)] [string] $Stuff2 = $null
)

Process {
    if ($Stuff2) {
        Write-Host "You gave me $Stuff1 and $Stuff2"
    } else {
        Write-Host "You gave me $Stuff1"
    }
}

I believe that the first one actually is a module and should have the .psm1 extension and the second one is a .ps1 script. The first block would not behave like I want since it can’t be run like a script.

I’ve written a lot of Python earlier and in Python you could write files that behave both like modules and like scripts. Is this possible in PowerShell?

Are any of the code blocks above considered wrong or bad practice?

Continued:

Let us pretend the above blocks are saved as Block1.psm1 and Block2.ps1.

The first block has to be run like this:
PS C:\> . .\Block1.psm1
PS C:\> Write-Stuff -Stuff1 asdf -Stuff2 qwert

And the second:
PS C:\> .\Block2.ps1 -Stuff1 asdf -Stuff2 qwert

Both methods are correct; whether you go for a script with parameters or an advanced function that can be loaded into PowerShell really comes down to personal preference. I usually use advanced functions for things I run repeatedly, so that I can put them in the modules directory and have them load when PowerShell starts, making the function available whenever it is needed. I use scripts for one-time requirements, with the parameters hard-coded into the script itself.
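As a rough sketch of that "put it in the modules directory" workflow, assuming Windows PowerShell and its standard per-user module path (the StuffTools name is made up; the folder name must match the .psm1 file name for auto-loading to work):

```powershell
# Install Block1.psm1 as a per-user module so the function is always available.
# "StuffTools" is a hypothetical module name.
$moduleDir = Join-Path ([Environment]::GetFolderPath('MyDocuments')) `
    'WindowsPowerShell\Modules\StuffTools'
New-Item -ItemType Directory -Path $moduleDir -Force | Out-Null
Copy-Item .\Block1.psm1 (Join-Path $moduleDir 'StuffTools.psm1')

# From PowerShell 3.0 on, the module auto-loads the first time its command is used:
Write-Stuff -Stuff1 asdf -Stuff2 qwert
```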

Please also note that an advanced function can also be saved as a .ps1 and loaded into PowerShell using the dot-sourcing method you used in your example.
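If you want something closer to the Python behaviour asked about above (one file that both defines a function and, when run directly, executes it), one sketch relies on $MyInvocation.InvocationName being '.' when a file is dot-sourced. BlockBoth.ps1 is a made-up name, and invoking the file via the call operator (&) would also trigger execution:

```powershell
# BlockBoth.ps1 - the PowerShell analogue of Python's "if __name__ == '__main__'".
# Script-level parameters are left optional so that dot-sourcing does not
# prompt for values; the function itself still enforces Mandatory.
[CmdletBinding()]
Param (
    [string] $Stuff1,
    [string] $Stuff2
)

function Write-Stuff {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory=$true)] [string] $Stuff1,
        [Parameter(Mandatory=$false)] [string] $Stuff2
    )
    Process {
        if ($Stuff2) {
            Write-Host "You gave me $Stuff1 and $Stuff2"
        } else {
            Write-Host "You gave me $Stuff1"
        }
    }
}

# $MyInvocation.InvocationName is '.' when the file is dot-sourced.
# Run directly (.\BlockBoth.ps1 -Stuff1 asdf), the function is called with
# the script's own bound parameters; dot-sourced, it is only defined.
if ($MyInvocation.InvocationName -ne '.') {
    Write-Stuff @PSBoundParameters
}
```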

#Example
PS [10:00:03] D:> gc .\block1.ps1
function Write-Stuff {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory=$true)] [string] $Stuff1,
        [Parameter(Mandatory=$false)] [string] $Stuff2 = $null
    )

    Process {
        if ($Stuff2) {
            Write-Host "You gave me $Stuff1 and $Stuff2"
        } else {
            Write-Host "You gave me $Stuff1"
        }
    }
}

PS [10:00:12] D:> .\block1.ps1

PS [10:00:23] D:> .\Block1.ps1 -Stuff1 asdf -Stuff2 qwert

PS [10:00:33] D:> Write-Stuff -Stuff1 asdf -Stuff2 qwert
Write-Stuff : The term 'Write-Stuff' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Write-Stuff -Stuff1 asdf -Stuff2 qwert
+ ~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (Write-Stuff:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

PS [10:00:44] D:> . .\block1.ps1

PS [10:00:56] D:> Write-Stuff -Stuff1 asdf -Stuff2 qwert
You gave me asdf and qwert

PS [10:00:58] D:> gc .\block2.ps1
[CmdletBinding()]
Param (
    [Parameter(Mandatory=$true)] [string] $Stuff1,
    [Parameter(Mandatory=$false)] [string] $Stuff2 = $null
)

Process {
    if ($Stuff2) {
        Write-Host "You gave me $Stuff1 and $Stuff2"
    } else {
        Write-Host "You gave me $Stuff1"
    }
}

PS [10:01:07] D:> .\Block2.ps1 -Stuff1 asdf -Stuff2 qwert
You gave me asdf and qwert

I tend to write advanced functions and put them in modules where I have a number of actions to perform - some or all of which - share components. I’ll also use modules for small pieces of functionality that I want to auto-load.

Scripts I tend to reserve for long-running actions that I’m going to call from some kind of scheduler, orchestration engine or whatever. Having said that, the scripts often load modules…
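A sketch of that shape - a scheduled script that is just glue around module functions. The Maintenance module and Invoke-NightlyCleanup are hypothetical names, not real commands:

```powershell
# Nightly.ps1 - a thin scheduler-facing wrapper; the real logic lives
# in a module. Fail fast if the module is not available.
Import-Module Maintenance -ErrorAction Stop

# Invoke-NightlyCleanup is a made-up function from the hypothetical module.
Invoke-NightlyCleanup -LogPath 'D:\Logs\cleanup.log' -Verbose
```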

It really comes down to what works best for what you are trying to achieve.

One habit I’ve developed, being more a system admin than a developer, is if my script is going to make changes, I’ll wrap it in a function, but if it’s just getting info, I’ll leave it as a script. This way people have to dot-source it before they go and make changes. It’s a nice safety net.

But I too wondered if there is any kind of general opinion that is accepted for what is better most of the time. I like the responses so far about it really coming down to preference and intentions for the script itself.