We create a module, or a series of modules with related functions grouped together, and then use the Import-Module cmdlet to bring those into the scripts (or commands) we run on the servers (in some cases as scheduled tasks)?
Correct. The difference between a module and dot-sourcing involves a couple of things. One, PowerShell knows where to look for modules. In v3 and later, it can provide help for commands that aren’t yet loaded, and implicitly load commands that you try to execute. So it’s as if all modules are loaded all the time, which helps with command discovery. Unlike dot-sourcing, which pollutes the global scope, modules can also be cleanly unloaded. Modules also support the creation of private module-level variables, aliases, and functions, meaning you can have “private” utility functions that aren’t seen by someone using the module. Modules support module-level documentation (comment-based help). They are the correct way to package sets of commands for consumption by others.
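As a rough sketch of the "private members" point, a module file can define helpers internally and export only the public surface. The names here (MyTools, Get-ServerReport, Get-InternalConfig) are hypothetical, not from the original discussion:

```powershell
# MyTools.psm1 -- illustrative module showing private vs exported members
function Get-InternalConfig {
    # Private helper: not exported, so module consumers never see it
    @{ LogPath = 'C:\Logs' }
}

function Get-ServerReport {
    <#
    .SYNOPSIS
    Returns a simple report object. Comment-based help like this
    travels with the module and is available via Get-Help.
    #>
    $config = Get-InternalConfig
    [pscustomobject]@{
        LogPath = $config.LogPath
        Time    = Get-Date
    }
}

# Only Get-ServerReport is visible after Import-Module MyTools
Export-ModuleMember -Function Get-ServerReport
```

If a module contains no Export-ModuleMember call, all functions are exported by default, so declaring exports explicitly is what makes helpers private.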
Modules can also evolve from a single-script file into a multi-file module with a manifest. That might include your script, supporting scripts, formatting views, type extensions, and so on. They all get loaded and unloaded as a single piece.
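That multi-file evolution is typically described by a manifest (.psd1). A minimal sketch, assuming a hypothetical MyTools module with formatting and type-extension files:

```powershell
# Generate a manifest tying the module's files together.
# File names here are assumptions for illustration.
New-ModuleManifest -Path .\MyTools\MyTools.psd1 `
    -RootModule 'MyTools.psm1' `
    -FormatsToProcess 'MyTools.Format.ps1xml' `
    -TypesToProcess  'MyTools.Types.ps1xml' `
    -ModuleVersion '1.0.0'
```

With the manifest in place, Import-Module MyTools loads the script, formatting views, and type extensions together, and Remove-Module unloads them as a single unit.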
What are your thoughts on adding these modules to the default PowerShell profiles on the target servers? Would it be too much of a waste to load things that might never be used? Would the default PowerShell profile location cover things like service accounts which don’t interactively log on to the machines, but run tasks in the Task Scheduler and when using the “Run as a different user” option?
In v3, there’s no reason to import modules in a profile, provided the module is located in one of the paths referenced in the PSModulePath environment variable (which you can add to). That environment variable tells the shell where modules live; it will auto-load the modules on-demand provided they live in that path. That way you get the convenience of having them loaded all the time, without the downside of loading something you won’t use.
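To see where the shell looks, and to add a location of your own, you can work with that environment variable directly. The C:\CorpModules path below is a made-up example:

```powershell
# List the directories PowerShell searches for modules
$env:PSModulePath -split ';'

# Add a custom location for the current session (hypothetical path);
# any module folder under it becomes discoverable and will auto-load
# the first time one of its commands is run (v3+)
$env:PSModulePath += ';C:\CorpModules'

# Confirm the shell can now see modules in every listed location
Get-Module -ListAvailable
```

Note this change lasts only for the session; to make it permanent you'd set the variable at the machine or user level (for example via System Properties or a Group Policy), which also covers service accounts and scheduled tasks, since they read the same machine-level environment.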
That said, your scripts should always document the modules they require - and the ability to do so, using the #Requires statement, is another advantage of modules over dot-sourcing. PowerShell reads those statements and can fail the script with a useful error message if the required module(s) aren’t available.
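A short sketch of what that looks like at the top of a script (the MyTools module name is a placeholder):

```powershell
#Requires -Version 3.0
#Requires -Modules MyTools

# If MyTools isn't available on this machine, PowerShell refuses to
# run the script and reports which requirement failed, instead of
# letting the script break partway through.
Get-ServerReport
```

The statements must appear on their own lines and are evaluated before any script code runs, which is what makes the failure message clean and immediate.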
Dot-sourcing was a quick-and-dirty means of including one script within another in v1, when the team was frankly a bit strapped for time on certain features they’d originally wanted to add. When v2 introduced modules, that was clearly the direction they’d wanted to go. Dot-sourcing is fine for quick-and-dirty, but it lacks the structure and support of modules. You can’t “discover” a dot-sourced script; you have to know about it in advance. The shell can’t tell you what commands the script implements without you loading it, and even then the shell won’t know which commands came from that script - there’s no connection back to it. A dot-sourced script can’t declaratively refer to other files (like views or type extensions); it must explicitly load them as part of its code. Dot-sourced scripts can’t be un-dot-sourced to remove them.
With modules, you get all of those things.
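The discovery and unloading points can be demonstrated in a couple of lines, again using the hypothetical MyTools module:

```powershell
# Every command reports which module it came from
Get-Command -Module MyTools

# Cleanly unload the module; its commands, aliases, and variables
# are removed, with no global-scope residue left behind
Remove-Module MyTools
```

None of this has an equivalent for a dot-sourced script: there is no record connecting the functions back to the file, and no way to take them out of the session short of removing each one by hand.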
And frankly, from a technical perspective, a module is just the script you already have, with a .psm1 filename extension instead of .ps1, located in the correct path for auto-discovery and auto-loading. There’s no extra “overhead” in making a script module.
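A minimal sketch of that conversion, assuming a hypothetical MyTools.ps1 script and the default per-user module location: the folder name must match the module name for auto-loading to work.

```powershell
# Create a module folder named after the module itself
$dest = "$HOME\Documents\WindowsPowerShell\Modules\MyTools"
New-Item -ItemType Directory -Path $dest -Force | Out-Null

# The "conversion" is just a copy with a .psm1 extension
Copy-Item .\MyTools.ps1 -Destination "$dest\MyTools.psm1"

# In v3+, simply calling an exported command auto-loads the module
Get-ServerReport
```

No code changes are required; the same functions that worked when dot-sourced now behave as module commands.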