I’m a bit stumped by the behavior of this simple script:

$y = 0
while($y -lt 1)
{
$y += .1
}
$y

I expected that the final result would be 1. Instead, the final result is 1.1. I know I can add code to force the result to 1, but I’d still like to know why the while condition “$y -lt 1” is true when $y is equal to 1.

Do you need to increment by .1? If so, try forcing $y to be a [decimal]:

[decimal]$y = 0
while($y -lt 1)
{
$y += .1
}
$y

I think it has to do with the data type. I don’t work with System.Double values much, but when you let PowerShell do its thing it makes the $y var a System.Double, and because of that it follows the IEEE 754 standard for binary floating-point arithmetic. Since .1 has no exact binary representation, the running total after ten additions is actually very slightly smaller than 1, so the ‘expected’ last test of the loop is still true instead of false and it iterates once more.

I’m sure someone else could explain it better than I can, but as for a fix, the above should work since it forces $y to the decimal type instead. Pretty sure this is a .NET thing, and there are probably some articles about it online.
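One way to actually see this (a quick sketch, not the original poster’s code) is to print $y each pass using the “R” round-trip format specifier, which shows the full value stored in the double rather than the rounded display value:

```powershell
$y = 0
while ($y -lt 1) {
    $y += .1
    # "R" (round-trip) format reveals the full binary value
    # hiding behind PowerShell's default display rounding
    "{0:R}" -f $y
}
```

On the tenth pass the running total comes out just below 1 (something like 0.9999999999999999), so the condition is still true and the loop adds .1 one more time, ending just below 1.1, which displays as 1.1.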

I don’t often use PowerShell, but I thought it might be fun (really!) to see if I could fiddle with the Euler method in a differential equations class. I’ll pay more attention to data types since I’ll be using multiple decimal places.

To add to what @dotnVo mentioned, the data type must allow for decimal points. Such data types include [double], [float], [single], and [decimal], though this is not an all-inclusive list. Note that [float] and [single] are both aliases for System.Single, while [double] maps to System.Double; of these, only [decimal] (System.Decimal) is a base-10 type that can store a value like .1 exactly.
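For anyone curious, here is a small sketch of how each cast maps to an underlying .NET type (the variable names are just for illustration):

```powershell
[double]$a  = 0.1   # 64-bit binary floating point
[float]$b   = 0.1   # [float] is an alias for [single]: 32-bit System.Single
[decimal]$c = 0.1   # base-10 System.Decimal; represents .1 exactly

$a.GetType().FullName   # System.Double
$b.GetType().FullName   # System.Single
$c.GetType().FullName   # System.Decimal
```

Because [decimal] stores the fraction in base 10, adding .1 ten times lands on exactly 1, which is why the earlier [decimal] suggestion fixes the loop.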