modules vs function libraries

My first script got so big that I started using functions. Then the functions became so numerous that I put them all in a “function library”, BulkUserFuncLib.ps1, and dot-sourced it at the top of my main script like this:

. "J:\Scripts and Programs\In development\User Creation Scripts\BulkUserFuncLib.ps1"

Now I’m building scripts that use GUI forms, and the functions that build those forms are quite large. I’m wondering: is this the best way to do it? Also, what’s the difference between a module and the “function library” I’ve been using? I’m searching for the best blend of readability, ease of maintenance, and efficiency in running.

There are some differences between Script Modules and a dot-sourced script (what you’ve referred to as a “function library”), but they basically serve the same purpose of loading functions into the PowerShell session.

If you find yourself making multiple calls to dot-source that library script in the same PowerShell session for some reason (maybe multiple scripts run, and each one runs it “just in case”), then using a module and the Import-Module command would give you a bit of a performance gain. Multiple calls to Import-Module won’t keep executing the same code again and again, but repeatedly dot-sourcing does. Aside from that, there’s nothing really wrong with what you’re doing now (though I prefer modules, personally.)
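One way to see that difference for yourself (the file and module names here are made up): put a Write-Host 'loading...' line at the top of both the .ps1 and the .psm1, then run each loader twice.

. 'C:\Scripts\MyFuncLib.ps1'     # prints "loading..."
. 'C:\Scripts\MyFuncLib.ps1'     # prints "loading..." again - the file re-executes every time

Import-Module MyFuncLib          # prints "loading..."
Import-Module MyFuncLib          # prints nothing - the module is already in the session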

Modules are declarative: they exist as a unit that can be discovered. Among other things:

- PowerShell can auto-load the commands in a module, and offers discoverability for them.
- Modules are a formal unit of distribution.
- Modules can include multiple files, and load and unload as a single unit.
- Modules can define aliases and variables that also unload along with the module.
- Modules that have a manifest can include type extensions and default output views, which all load and unload with the module.
- Once a module is loaded, you can more easily list the commands it contains.
- Modules don’t permanently pollute the global scope.
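A few of those points in action (the module name here is made up):

Get-Module -ListAvailable        # discover modules installed on the PSModulePath
Import-Module BulkUser
Get-Command -Module BulkUser     # list only that module's commands
Remove-Module BulkUser           # functions, aliases, and variables unload as one unit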

There are a lot of advantages. Modules are the “official” way of extending the shell and adding functionality to it. They didn’t exist in v1, and what did exist in v1 - snap-ins - required .NET programming. So dot-sourcing was originally our only method of adding functions to the shell, and that’s a big part of why the technique still exists.

hi,

Dot-sourcing is the quick and slightly dirty way of doing it. Nothing wrong with it and I use it all the time. Some nice tips for you:

I always include help information in my functions, which gives me the opportunity to do this:

Function Get-DotsourceLibFunctionsOrSomeNameYouChose {
    Get-Command | Where-Object { $_.Definition -like "*keywordUsedInAllYourFunctions*" }
}

This function will list all the functions that have been tagged with the keyword, much like listing the commands in a module.
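The “tag” is just a keyword that appears somewhere in the body of every function in the library (typically inside the comment-based help), which is what the Definition match above picks up. A sketch, with the keyword and function name made up:

Function Get-LabUser {
    <#
    .SYNOPSIS
    Returns lab user accounts. Tag: keywordUsedInAllYourFunctions
    #>
    # ...function body here...
}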

To check whether everything has been dot-sourced, I use something along the lines of:

# $cmd is assumed to hold the invocation line, e.g. $cmd = $MyInvocation.Line
if ($cmd.IndexOf(". ") -eq 0) {
    # yep, we were dot sourced
}
else {
    Write-Error "Please dot source script by appending a . (period) and a space, example: '. .\script.ps1'"
    break
}

You can also create an alias for the function that lists all the functions in your dot-source library:

Set-Alias -Name dotcmds -Value Get-DotsourceLibFunctionsOrSomeNameYouChose

Cheers

Tore

Dot sourcing is fine for testing functions, but for a more formal approach, modules are definitely the recommended way. The other advantage of modules is that you can control which functions are exposed and which you keep in the background as “helper” functions. This is especially useful if the module is going to be distributed for other people to use.

I’d recommend that anything you find yourself re-using is turned into a module.
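As a rough sketch of that public/helper split (the module, function, and parameter names here are invented for illustration):

# BulkUser.psm1
function New-BulkUser {
    [CmdletBinding()]
    param([Parameter(Mandatory)][string]$CsvPath)

    # Public function: visible to anyone who imports the module
    foreach ($row in (Import-Csv -Path $CsvPath)) {
        ConvertTo-UserName -Row $row
    }
}

function ConvertTo-UserName {
    # Helper: never exported, so callers can't see or call it directly
    param($Row)
    '{0}.{1}' -f $Row.First, $Row.Last
}

# Only the public function is exposed to whoever imports the module
Export-ModuleMember -Function New-BulkUser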

I only dot-source the script once, at the top of the main program. I thought it was like an include statement in C++: it just adds code onto the end of your script so you can use it as necessary. I use it both to make the script more readable and to accumulate similar functions in one file. So every function that I call from the main script is in that library. Is that what you mean, Dave, by making repeated calls to that file? Are you saying that every time I use a function in that file, PowerShell has to go load the file again? If it’s doing this, I’m even more impressed with PowerShell now, as my large user-creation script runs very, very fast.

Thanks for the detailed explanation, Don. That is very helpful. I don’t need a lot of what modules do. But I’ll be looking into using them at some point in the future.

Once the script has been dot-sourced, the functions remain in memory, waiting to be called later. There’s no need to go read the file again.

It really doesn’t seem like the question has been answered (here and on several other similar sites/threads).

What is the technical difference between importing a module and dot sourcing a file? (I added the word ‘technical’.)

I don’t know the answer and was hoping to find it.

Partial answer might be:

In modules, by default, all (top-level) functions in your script will be accessible to users who import your .psm1 file, but ‘properties’ will not. (The default export excludes any variables or aliases defined there.)

You can also export explicitly to restrict/change this list (e.g., only selected functions) or to export ‘properties’ (variables and aliases at the top level of the module).

Dot sourcing is ‘just like’ having the code ‘inline’: everything (top-level) gets imported, and any variable that is set ends up in the scope of the caller.

Modules also have the capability to include module-level help, and can have a manifest for including more pieces.

Export-ModuleMember -Function Test-MyModule
Export-ModuleMember -Variable TMM
Export-ModuleMember -Variable moduleCounter
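To watch that scoping difference directly, here is a minimal experiment (the file names are made up, and both files contain nothing but the one assignment):

# Scope.ps1 and Scope.psm1 each contain only:  $counter = 42

. .\Scope.ps1                  # dot-sourcing runs in the caller's scope...
$counter                       # ...so this outputs 42

Remove-Variable counter
Import-Module .\Scope.psm1     # module state is isolated by default...
$counter                       # ...so this outputs nothing (the variable was not exported)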

I don’t see any significant differences in how things WORK once imported (if made available), but wonder if there are more differences not mentioned above.

Knowing the technical differences helps to wisely select the proper method to use.

I too have seen this subject around the net on many sites.
Many folks going through their learning curve, or adapting a pile of scripts, get to this point as I have.
The major difference I see between just dot-sourcing files and using modules is the ability to easily load and unload a module, which includes all its functions.
Anyone could write their own PoSH code to do much the same, I suppose, but in my case I figure why reinvent when I can use what is already there, since it’s clean and well understood.
I use about 26 module directories to contain all my .ps1 script files by purpose. I have one .psd1 and one .psm1 per directory, which were a one-time creation cost.
In my manifest .psd1 I simply wildcard the items to export; the rest of the manifest is essentially blank.
i.e., ‘common.psd1’:

RootModule = 'common.psm1'

# Functions to export from this module
FunctionsToExport = '*'

# Cmdlets to export from this module
CmdletsToExport = '*'

# Variables to export from this module
VariablesToExport = '*'

# Aliases to export from this module
AliasesToExport = '*'

Then in my ‘common.psm1’ file I dot-source the files I want to export, as the filename and function name are one and the same:
$myModuleFiles = @(
    "Find-InFiles"
    "Find-InPowershellFiles"
    "Get-DirectoryFileCount"
)

ForEach ($moduleFile in $myModuleFiles) {
    . "$PSScriptRoot\$moduleFile.ps1"
}

Then in my profile I have aliases that point to small functions that let me load/reload a module easily and quickly as I make changes.
New-Alias -Name imc -Value Install-Common -Description "import or re-import my common/base module set"

Function Install-Common {
    # INSTALL this module - makes working on it easier. Alias in profile.ps1.

    $loadedModules = Get-Module

    If ($loadedModules.Name.Contains("Common")) {
        Remove-Module Common
    }
    Import-Module Common
}

So anytime I make a change or add a new function/file to my ‘common’ library (module), all I type is ‘imc’ and I am good.
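For what it’s worth, Import-Module’s -Force switch does the same remove-and-reload in one step:

Import-Module Common -Force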