Why isn't Remoting Disabled by Default on Windows Server?

I was recently involved in a brief and quite lively Twitter discussion with Don Jones and Jeffrey Snover about PowerShell Remoting and why it is enabled by default. I have been involved in a number of discussions about this topic, but never with such a distinguished crowd as this one. My opinion, and my original comments, were along the lines of “I believe Remoting should be off by default”, “Well, RDP is disabled by default, why not Remoting?” and “SSH has been off by default for years”, whilst the counter arguments were of the form “Because Nano Server” or “You could always customise your environment to be off by default”.

Don Jones posted a follow up to this discussion on PowerShell.org titled “Why is Remoting Enabled by Default on Windows Server?” and asked me to put together a post on why I feel it should be off by default. It was a difficult post to write, so here goes!

It has long been an industry practice to disable or stop services which are not in use on your clients and servers. The argument is quite simple: enabled services are vulnerable services, and they expose your devices to potential risks. Simply having Remoting off, unless explicitly required, reduces the attack surface and increases the security of our systems.
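For context, turning Remoting off on a box where it is not required is a short exercise. A minimal sketch (assuming Windows Server 2012 or later for the firewall cmdlet):

    # Remove the listener while the WinRM service is still running, then stop it.
    Disable-PSRemoting -Force
    Remove-Item -Path WSMan:\localhost\Listener\* -Recurse -Force
    Stop-Service -Name WinRM
    Set-Service -Name WinRM -StartupType Disabled
    Disable-NetFirewallRule -DisplayGroup 'Windows Remote Management'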

Even Microsoft has followed an off by default methodology for the past 10 to 15 years. Services like POP3 and IMAP are off by default in Exchange, SQL Server does not listen on the network by default, and we need to install roles and features individually. Microsoft learnt from a number of major security blunders in the early days (Code Red, Slammer and even Blaster), and focused on a more secure development and deployment model. Why should there be an exception to this posture, which has worked extremely well since Windows Server 2003?

Linux administrators, and the developers of Linux distributions, have been in a similar situation in the past. For a long time, SSHD has been off by default, and administrators have still been able to manage their server fleets. One of the early reasons for an off by default approach in Linux was that it ensured administrators were aware of the risks prior to enabling SSHD. Now it can be argued that this has been a failure, and I think most would agree. I do, however, believe that the failure is not in the off by default configuration, but in the lack of documentation covering the secure configuration of SSHD. People in glass houses shouldn’t throw stones, as Remoting can be just as poorly deployed.

Remote Desktop is a great example of Microsoft following these methodologies. RDP is off by default for a number of reasons, with security being only one of them. Ironically, one of the obvious reasons to have RDP off by default is to encourage the move from on-server management to remote management. Whilst adoption has not been as high as was expected (due to issues with third party vendors, administrators and, to a large extent, Microsoft itself), it is clearly a sign of how ahead of the curve Microsoft has been.

It has become increasingly dangerous to expose management services, be they SSH or RDP, on the Internet. If you have ever been responsible for auditing the log files of a server where SSH or RDP is exposed to the Internet, you will be well aware of the automated scan attempts that are performed. Brian Krebs has posted on Internet criminals selling access to Linux and Windows servers whose credentials they have brute forced. What happens when the criminals discover Remoting? Brute forcing credentials via Remoting should be even easier, and I have written about just such a thing on previous occasions. Should we be enabling these criminals and providing them with even more machines that they can take over?
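To illustrate just how straightforward credential guessing over Remoting is, here is a minimal sketch; the target name and password list are hypothetical:

    # Hypothetical illustration: each successful New-PSSession means a valid guess.
    $target   = 'server01.example.com'
    $userName = 'administrator'
    foreach ($password in Get-Content -Path .\passwords.txt) {
        $secure  = ConvertTo-SecureString -String $password -AsPlainText -Force
        $cred    = New-Object System.Management.Automation.PSCredential ($userName, $secure)
        $session = New-PSSession -ComputerName $target -Credential $cred -ErrorAction SilentlyContinue
        if ($session) {
            Write-Output "Valid credential: $userName / $password"
            Remove-PSSession -Session $session
            break
        }
    }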

Well, we are doing this to an extent right now. Users, administrators and developers have all been busy provisioning virtual machines on platforms like Azure and AWS, and whilst in many cases RDP endpoints are on random high ports, the same cannot be said for Remoting. Those who deployed and manage these systems may well be unaware of the risks that they have introduced to their networks. Moving to an off by default model could protect these environments from this sort of configuration error.

As a side note, it is still interesting to me how Microsoft changed Remoting from off to on by default in Windows Server 2012, with very little fanfare. In 2014 when I presented on Lateral Movement with PowerShell, audiences typically responded with a significant amount of surprise, be they from an administration or security background.

In Don’s post, he talks about the fact that we could easily create an off by default environment if we so wanted. I really have to disagree with him, and say that he has missed the point to a degree. Whilst it is true that we could use a customised gold/master image, Group Policy or some other tool to create an environment where Remoting is off by default, it must be highlighted that the inverse, an on by default environment, would be just as simple to create with these tools. If you want it on, then turn it on; it isn’t that hard.
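And turning it on really is that simple; on the box itself it is a one-liner, and (to my knowledge) the equivalent Group Policy setting, “Allow remote server management through WinRM”, makes it just as easy fleet-wide:

    # Configures the WinRM service, creates the HTTP listener and opens the firewall rule.
    Enable-PSRemoting -Force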

Don also talks about the fact that Remoting is an incredibly controllable, HTTP-based protocol. This introduces the other issue I have with Remoting. Unless you are deploying an Azure Virtual Machine, post install you will be exposing Remoting over HTTP and not HTTPS. Is this 2015 or 2001? Do we really still need to talk about the virtues of HTTPS? It would be trivial for Microsoft to change the default from HTTP to HTTPS in a manner similar to RDP.
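Administrators need not wait for Microsoft, though. A minimal sketch of standing up an HTTPS listener yourself, assuming a server authentication certificate matching the host name is already installed in the machine store:

    # Find the certificate and bind an HTTPS listener to it.
    $cert = Get-ChildItem -Path Cert:\LocalMachine\My |
        Where-Object { $_.Subject -eq "CN=$env:COMPUTERNAME" } |
        Select-Object -First 1
    New-Item -Path WSMan:\localhost\Listener -Transport HTTPS -Address * `
        -CertificateThumbPrint $cert.Thumbprint -Force
    # Open the HTTPS port (5986) rather than relying on the HTTP rule (5985).
    New-NetFirewallRule -DisplayName 'WinRM HTTPS' -Direction Inbound `
        -Protocol TCP -LocalPort 5986 -Action Allow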

Now let’s talk about the big elephant in the room, or should I say the Nano elephant in the room? What about Nano Server?!? Nano Server, whilst it is a new concept for some of us, isn’t completely new in our industry. Whilst I agree it is probably easier to have Remoting (and WMI) enabled by default, it isn’t as if the deployment of a Nano Server is currently a simple process. Currently Nano Server comes as a standalone WIM image, we need to manually add packages providing roles, and we need to join a domain during installation. How hard would it be to add a step enabling Remoting? It is trivial.
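To put that “one more step” in perspective, the preview deployment flow already involves servicing the WIM with DISM from PowerShell; a rough sketch (the package name and paths are hypothetical):

    # Mount the Nano Server image, add a role package, commit the change.
    Dism /Mount-Image /ImageFile:NanoServer.wim /Index:1 /MountDir:C:\Mount
    Dism /Image:C:\Mount /Add-Package /PackagePath:C:\Packages\Example-Role-Package.cab
    Dism /Unmount-Image /MountDir:C:\Mount /Commit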

Having said all of that, perhaps the best middle ground would be to have Remoting enabled on Nano Server, and off for Core and Full installs? Administrators have more options on the latter two than the former. Perhaps a compromise is in order?

Another side note: why doesn’t Microsoft want to enable Remoting on clients? If Remoting is safe for Internet-exposed servers, shouldn’t it be OK for Windows clients?

So in summary, why should Remoting be off by default?
- Off by default is an industry standard practice.
- Off by default has been Microsoft practice for over 10 years.
- Linux administrators deal with SSHD off by default; so can we!
- RDP has been off by default, and we lived with that.
- RDP and SSH are actively brute forced; why open up another attack vector?
- Off by default reduces administrative misconfiguration and insecure configuration.
- It is just as easy to switch it on as it is to switch it off.
- Nano Server isn’t as much of a challenge as we think, but it could be the exception.
As Don said, whether you agree or not is entirely up to you, and you are welcome to add your polite, professional comments to this post. Like Don, I wanted to explain and attempt to justify why I think Microsoft’s approach is not correct. I often think the discussion is more important than the outcome, and that is definitely the case here.

I have cross posted this on my blog at PoshSecurity.com

All good points. There is security, and then there is having a useless server brick sitting in a datacenter. Where do you draw the line between productivity and security? To your point, servers are locked down with firewalls on and RDP disabled, and traditionally many “hackable” services are disabled or locked down on Exchange, SQL, etc. and must be explicitly configured.

We’re in an era of “Cloud Computing”, which to me doesn’t even mean going over the Internet to connect to a service; it applies internally too. At many of the places I have worked, our data centers are managed by a 3rd party and we have no physical access to a server. If a step is missed building a server, you’re twiddling your thumbs until it’s resolved. It’s not, “oh darn, forgot to enable X, let me walk back to the KVM and turn it on”.

You also have to look at Nano and Server Core: as we make the OS footprint smaller, with fewer hacking points, we remove administration points as well. In addition, Server 2012 is slowly pushing the use of the Server Management console for central administration, and (to my knowledge) Remoting is required to administer a server from that centralized console.

In most scenarios, you are going to build a server with a template (VM), a service (e.g. SCCM OSD), a script or some other “automated” fashion through which you can configure the server to permit administrative access as you deem fit. Additionally, you are going to have the computer added to AD and have a plethora of GPOs applied to configure security on the server. If something fails during the build, are you going to be stuck rebuilding the server?

So, where is the line for security over productivity?

Kieran,

Thanks for bringing this discussion up; it is a great one to have. I am sure there was a lot of discussion within MS on this subject, and I’m guessing we cannot all see that. So at least we can discuss it in this forum.

I think that Nano Server is a good justification. MS is trying to build a server that there is no way to manage locally.

Regarding Server and Server Core, I can tell you a reason I like it on by default, one I’m fighting right now: security. Changing any setting in a large corporate environment requires a lot of analysis and approval.
One key piece your post is missing in your comparison to SSH is that Linux has been remotely managed by CLI forever. I’m sure some used VNC, or maybe remote X sessions, but CLI into Linux has been commonplace forever. How many Windows admins do you know, prior to PowerShell Remoting, who ever telnetted into a Windows server? If MS made RDP and PS Remoting both off by default, at most companies RDP would already be “approved” because it’s legacy, whereas PS Remoting is “new”. So people would keep using RDP. I think MS is correctly trying to encourage and push Windows server administration towards GUI-less remoting. And PS Remoting, versus a GUI over RDP, greatly reduces the security risk of a server.

Regarding WinRM over HTTP vs. HTTPS by default: this is obviously a big deal if you’re managing remotely, but less so within an internal network. The con of requiring SSL on an internal network is that it requires a certificate authority process, which should be there but may not be. I think I agree with you that it should probably be HTTPS, using the same justification as above: to push people into utilizing more secure methodologies. This also has the added benefit of aiding a correct execution policy configuration.

I think a firewall or IDS should counter the brute force issue. Should you really have SSH or PS Remoting exposed on the Internet if you don’t have controls in place to detect intrusion or brute force attacks?
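One such control is simply scoping the built-in WinRM firewall rule to a management subnet; a minimal sketch (the rule name is the built-in one on Windows Server 2012 and later, the subnet is hypothetical):

    # Only the management subnet may reach the WinRM HTTP listener.
    Set-NetFirewallRule -Name 'WINRM-HTTP-In-TCP' -RemoteAddress '10.0.10.0/24'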

Why it is enabled on servers and not clients is pretty easy. Generally, servers should be used and deployed by people that have some expertise, and in situations where you need to manage them remotely because direct access isn’t available, i.e. cloud or data center.
MS can’t expect every Windows client user to even know what a firewall or PS Remoting is, let alone make sure they’re configured correctly. I think in most homes, PS Remoting isn’t needed. How many people are remotely managing all of the PCs in their household? So for client operating systems, it makes sense to leave it to companies that wish to manage clients remotely to enable it. I’m guessing few households have PS Remoting enabled the way I do, and most households will never run Windows Server.

This is one of those debates with no right answer, but it’s great to discuss the pros, the cons, and our guesses at the reasons.

Interesting perspective and well written – thanks for the discussion!

Thanks for starting this discussion.

@Kieran, the point you have not touched on is that WinRM connections, even over HTTP, are encrypted by default. I do not know how good the encryption is, but I assume Microsoft has done their due diligence.
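For what it is worth, whether unencrypted payloads are even permitted is itself a WSMan setting that can be inspected on the service:

    # The service rejects unencrypted message payloads unless this is set to true.
    Get-Item -Path WSMan:\localhost\Service\AllowUnencrypted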

Question for you: if you do not have some kind of 3rd party management agent installed on a server, and Remoting is turned off, how are you going to manage the box remotely? Most 3rd party management agents do not like to be cloned without some tricks before taking a Sysprep.

With Remoting on, you can just use a vanilla OS image from a recent ISO with a simple answer file to join the domain. Therefore I think having Remoting enabled by default is a blessing for system builds and administration.
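With Remoting on out of the box, post-build configuration needs nothing more than Invoke-Command; a minimal sketch with hypothetical host names and an example role:

    # Configure a batch of freshly imaged servers with no agent installed on them.
    $newServers = 'web01', 'web02', 'web03'
    Invoke-Command -ComputerName $newServers -ScriptBlock {
        Install-WindowsFeature -Name Web-Server
    }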

http://blogs.technet.com/b/ilvancri/archive/2010/03/31/techdays-follow-up-remote-powershell-what-s-encrypted.aspx

http://digital-forensics.sans.org/blog/2013/09/03/the-power-of-powershell-remoting

http://blogs.technet.com/b/jonjor/archive/2009/01/09/winrm-windows-remote-management-troubleshooting.aspx

https://channel9.msdn.com/Series/MCP-Insider-Series-AMA-with-Jeffrey-Snover/02#time=00m36s

Some responses, in no particular order:

I don’t think the “off by default” comparison with things like POP3 / IMAP is really apples to apples. Things like WMI and various DCOM / RPC endpoints are on by default, and PS Remoting / WinRM falls more into that category.

If I remember correctly, the default Windows Firewall rules only allow WinRM HTTP connections from the local subnet, whereas HTTPS listeners can be accessed from anywhere, so I don’t think “exposing remoting endpoints on the Internet by default” should be too much of a problem.
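That scoping is easy to verify on a given box; a minimal sketch, assuming the rule name as shipped on recent versions of Windows Server:

    # Inspect the address scope of the default WinRM HTTP firewall rule.
    Get-NetFirewallRule -Name 'WINRM-HTTP-In-TCP' |
        Get-NetFirewallAddressFilter |
        Select-Object -Property RemoteAddress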

I don’t think this makes it any easier for attackers to brute-force a server than any other protocol. Failed remoting logon attempts are still audited, and they trigger account lockouts, do they not?
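Those failed attempts do land in the Security log; a quick way to check, given that event ID 4625 is the standard failed-logon event:

    # List recent failed logons, which failed WinRM authentication also generates.
    Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 20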

Using HTTPS by default is an idea I like, but it would need to be based on self-signed certificates (just like RDP). In order for a client to connect to an HTTPS remoting endpoint that’s using a self-signed cert, either the client-side remoting cmdlets would need to present the user with a “do you want to trust the certificate with this thumbprint?” type of confirmation prompt (most ideal), or there would need to be a way to retrieve the server’s certificate so that it could be stored as a trusted CA certificate (not so great, comparatively, but that’s how I access HTTPS endpoints with self-signed certs today.)
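The second approach can be approximated today with session options; a minimal sketch (hypothetical host name) of connecting to an HTTPS endpoint backed by a self-signed certificate:

    # Skip the CA and CN checks, explicitly accepting the risk (much as RDP users do).
    $options = New-PSSessionOption -SkipCACheck -SkipCNCheck
    Enter-PSSession -ComputerName 'server01.example.com' -UseSSL `
        -SessionOption $options -Credential (Get-Credential)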

Hi Rod,

I work with a lot of infrastructure in Azure, and the rest sits in a locked-down, lights-out DC. In both cases, there is that “think before you type” mentality, where you are often left wondering if what you have just done has killed the box and will result in a call to the DC support team (something I have thankfully not had to do).

If something happens in an automated build, who is to say Remoting will even be there for you?

I will agree that it is a hard line to draw.

Thanks Jason!

Hi Daniel,

Yes, they are protected by a limited amount of Kerberos encryption, but is that enough? Why not use industry standards?

SSL is a simple win for Microsoft. It would also make it easier to get through management approval, in my opinion.

Hi Dave,

I usually lock down where DCOM and RPC can be connected to; I see RPC as one of those extremely risky services in my environment.

You’re right, it does, but not in Azure; the customisations open that up to the world.

Account lockout is something that is off by default, or set to a high threshold, these days. In a lot of respects, we lost the account lockout battle.

It would be simple to write a cmdlet, say Trust-RemotingThumbprint, which could pop off, get the SSL certs and place them in the trusted list, and I am kicking myself for not writing one sooner!
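As a rough illustration of the idea, here is a sketch rather than a finished cmdlet; the name and approach are hypothetical, and placing an endpoint's certificate straight into the trusted root store is a decision to weigh carefully:

    function Trust-RemotingThumbprint {
        # Hypothetical sketch: fetch the certificate presented by a WinRM HTTPS
        # endpoint and add it to the machine's trusted root store.
        param (
            [Parameter(Mandatory)] [string] $ComputerName,
            [int] $Port = 5986
        )
        $tcpClient = New-Object System.Net.Sockets.TcpClient($ComputerName, $Port)
        try {
            # Accept whatever certificate is presented so it can be read;
            # the trust decision is made by adding it to the store below.
            $sslStream = New-Object System.Net.Security.SslStream(
                $tcpClient.GetStream(), $false,
                [System.Net.Security.RemoteCertificateValidationCallback] { $true })
            $sslStream.AuthenticateAsClient($ComputerName)
            $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2(
                $sslStream.RemoteCertificate)
            Write-Verbose "Retrieved certificate with thumbprint $($cert.Thumbprint)"
            # Place the certificate in the LocalMachine trusted root store.
            $store = New-Object System.Security.Cryptography.X509Certificates.X509Store(
                'Root', 'LocalMachine')
            $store.Open('ReadWrite')
            $store.Add($cert)
            $store.Close()
        }
        finally {
            $tcpClient.Close()
        }
    }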