Securing your Web server with SSL/TLS


rael9

Seniorius Lurkius
4
Great articles, Lee. I wish they had been around when I was first starting out with NGINX.

There is a typo in one of the code blocks; there seems to be some extraneous HTML in the code block that lists the commands for getting the intermediate certs.

Also, a couple of suggestions:

1) On newer Ubuntu installs (starting somewhere in the 10.x series, I think), you can type:
Code:
service service-name command
instead of invoking the init script directly. So for instance:
Code:
service nginx restart

2) You should really check the configuration files' syntax before restarting NGINX. I find it's a good practice, since it lets you catch any errors *before* you take down your site because of a typo or something. You can check it easily with:
Code:
nginx -t
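For example, chaining the check to the restart means nginx only bounces when the config parses cleanly (assuming the service wrapper from suggestion 1 is available):
Code:
nginx -t && service nginx restart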

Keep up the great work, Lee!
 
Upvote
2 (2 / 0)
Pokrface":ks2970v5 said:
Signaling that encryption is in place, without verifying that things are going to the intended recipient [...]
What a ludicrous idea. That's not in any way what I'm suggesting.

I'm proposing that a self-signed cert would appear exactly the same way as a NON-SSL site. No lock icon. No https:// in the URL bar. No nothing. Nothing to suggest that the site has been authenticated in any way, since that would be misleading and incorrect. (Maybe if you right-clicked on it, there would be an indication that the site was unknown and the certificate was untrusted, but the transport was SSL.)

I just want to do away with all the big, red, end-of-the-world warnings, at least for self-signed certs.

(I would keep the red warnings for wrong-domain certs, revoked certs, or expired certs.)

I can sign a cert saying that I'm http://www.google.com if I wanted to.
I'm guessing that was a typo, but it illustrates my point exactly: you don't have to. If I wanted to, say, phish Google+ users, I wouldn't sign my own Google cert. I'd just set up an HTTP server, put a regular ol' GIF lock icon on the page, and many people would fall for it. And the browser wouldn't raise any fuss.
 
Upvote
1 (2 / -1)
pancakesandbeyond said:
doornail said:
As Owdi mentioned, we're collapsing two separate problem domains. We have encryption based on open standards. Your browser supports it, my server supports it. The quality of this encryption does not suffer because I generated the certificate myself. Heck, I could argue it's higher, because only *I* have a copy of it and it's never transmitted over any medium.

Authentication definitely has value -- but, IMHO, it makes an inelegant conjoined twin.

Hypothetically, let's say I am a competitor, and I want to divert your customers underhandedly. I will set up a similar-looking domain name and provide a self-signed cert. I will also send out suspect e-mails trying to trick people into coming to my site.

This is dumb. Why would you use a self-signed certificate in your hypothetical situation? You could get a certificate from a CA.
 
Upvote
1 (3 / -2)

pokrface

Senior Technology Editor
21,531
Ars Staff
@portent - yes, meant https://www.google.com, sorry--trying to do too many things this afternoon.

I'm proposing that a self-signed cert would appear exactly the same way as a NON-SSL site.
On one hand, that would certainly do away with browser warnings, and at the same time wouldn't have the annoying side effect the current warning system has of conditioning users to blindly dismiss SSL errors like they do every other dialog box. I can also totally see the utility, especially for personal and/or internal web sites.

On the other hand, it's a pretty significant divergence from the intent of SSL/TLS (which is authentication and encryption), and if you ran it by an actual security researcher, I think s/he would say that such a move would weaken SSL/TLS overall.

Still, you're right--that would potentially make things easier. It's just a question of whether or not that level of "easier" is within the goal scope of SSL/TLS.
 
Upvote
1 (1 / 0)
brodie":3rvlx02u said:
portent":3rvlx02u said:
brodie":3rvlx02u said:
Even my 86 year-old grandma knows how to look for the little lock in the corner.

So yes, there is a certain expectation of privacy if a connection is encrypted. If the connection is not verified, that expectation is not necessarily true, so it bears calling attention to it.

With HTTP, there is no expectation of privacy (or there shouldn't be), so there's no reason to call attention to it.
I disagree with you (and cite every phishing site in my spambox as my evidence) but thank you for at least challenging my position without assuming I don't understand what CAs do.
How do a phishing site's attempts have any bearing on people's awareness of SSL?

If they started sending out gopher:// links, it wouldn't mean that people would suddenly gain knowledge of what that protocol was.

How many hundreds of thousands, or millions, of those emails get sent out, and how many people buy into it? I would bet that at least 70% of people who even got fooled enough to click the link would check for the lock in the corner once the page loaded. Of course, that's entirely anecdotal, so take that for what it's worth. (In that vein, my dad can't help but click links... it's almost a compulsion; but I've taught him to at least look up and see if it's protected before entering a password or anything sensitive.)
It demonstrates that the weakest link is unencrypted sites. Assuming your 70% guess is correct, 30% of users do have a wrongheaded "expectation of privacy."

Yet current browsers don't kick up any fuss when users visit unencrypted sites, and scream bloody murder whenever you visit a site with a self-signed cert. Neither is safe, but one generates no warning and the other does. That's just wrong.
 
Upvote
2 (3 / -1)

OldManBrodie

Ars Legatus Legionis
44,318
Subscriptor
Yet current browsers don't kick up any fuss when users visit unencrypted sites
Because there is nothing to raise a fuss about. The HTTP page is loading over an insecure channel, so there is nothing to check/validate/raise a fuss about.

An SSL connection, in practice if not by design, is meant to ensure both security and identity validation. When either fails, the browser raises an alert.

I'm honestly not sure where you're confused, or what your beef is.
 
Upvote
1 (3 / -2)
My biggest issue with self-signed certs is that browsers seem to treat them all the same. For example, if I visit a site with a self-signed cert, I want the browser to add the cert to the list of trusted certs (trusted for me alone, obviously). Then, the next time I visit, if the cert does not match the one I previously said matches the host, it should scream bloody murder about a man-in-the-middle attack. Instead, the most a modern browser will do is give the same warning about the cert not being trusted, as if it were the first time I'd visited the site. Well, that's just not right: it's not merely untrusted, it's fraudulent.

This seems to be the way most mobile apps handle these connections as well. They either accept nothing or everything. With the growth of home NAS boxes being exposed to the outside world, why does no one seem to care about self-signed certs, which are being used by a huge number of DIYers?
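In the meantime, the closest thing I've found is pinning by hand: grab the cert's fingerprint the first time you connect and compare it on later visits. A sketch with openssl (the hostname and port are examples):
Code:
openssl s_client -connect nas.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha1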
 
Upvote
1 (1 / 0)

Daedalus207

Wise, Aged Ars Veteran
197
@portent-
I believe I'm in agreement with what you are suggesting. It makes no sense to me either that a self-signed certificate should make a browser go nuts with end-of-the-world warnings, but unencrypted stuff is fine.

I have a VPS that I run Nginx and WordPress on. I host a small personal blog for myself and a few friends. WordPress, by default, serves nothing over HTTPS, meaning that when logging in (for any reason), your username and password are sent in cleartext. Chrome has no issues at all with me sending my login info in cleartext, despite that being an incredibly bad idea.

Since I host 4 different domains on my VPS, I decided to self-sign certificates for each domain and enable HTTPS logins for WordPress. Now, when I try to log in via the encrypted page, Chrome goes nuts with the hyperbolic warnings, despite this configuration being much more secure than the previous unencrypted one.
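(For anyone wanting to replicate this, generating a self-signed pair per domain is a one-liner; the filenames and CN below are illustrative, not the exact ones I used:)
Code:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=blog.example.com" \
  -keyout blog.example.com.key -out blog.example.com.crt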

I personally see a great amount of value in self-signed certificates. Would I trust it for an e-commerce site or a bank? Absolutely not; in those cases, the third-party verification would be useful.

I see a third-party certificate as telling me, "A company who sells certificates verifies that the server at 192.168.1.1 is really foobar.com."

I see a self-signed certificate as telling me "The server at 192.168.1.1 claims to be foobar.com, and that matches your DNS records, so there's a good chance that it's legit."

No certificate at all says "I dunno. Good luck with that."
 
Upvote
2 (2 / 0)
Pokrface":ymxjpfnh said:
A wildcard cert is actually crazy-useful--in addition to web sites, I'm also using my *.bigdinosaur.org cert for my firewall's web interface (so it doesn't throw SSL errors when I access it) and also for my domain's Openfire instant messaging server. Once you get a wildcard certificate, it starts being useful in all sorts of places.

That seems terrible.

So, you have (unencrypted copies of) your private key on your web server, and also on your IM server, and also on your firewall, and also on any other server on your domain. If any one of those programs or computers gets compromised, then you'd have to revoke the certificate and install new ones on all of those servers.

I'm especially wary of the firewall. I haven't seen a firewall device with an OS that I truly trusted, yet.

Also, it seems to me that your Nginx configuration is missing something. What I want is for the Nginx server to have read-only access to the certificate file, with no ability to change it. In traditional Unix, that might mean having the certificate and its directory owned by root and giving read-only access to a group of which Nginx is the only member. Or maybe there's a better solution involving ACLs. I haven't played with this, so I'm not sure about the best way.
 
Upvote
0 (1 / -1)

pokrface

Senior Technology Editor
21,531
Ars Staff
Decade":zn3f8ixj said:
That seems terrible.

So, you have (unencrypted copies of) your private key on your web server, and also on your IM server, and also on your firewall, and also on any other server on your domain. If any one of those programs or computers gets compromised, then you'd have to revoke the certificate and install new ones on all of those servers.

I'm especially wary of the firewall. I haven't seen a firewall device with an OS that I truly trusted, yet.
Risk versus convenience. For the personal domain, it's three servers altogether, including the firewall (which is a box running Smoothwall Linux, not a dinky SOHO NAT router). The chroniclesofgeorge cert is only on the web server.

Also, it seems to me that your Nginx configuration is missing something. What I want is for the Nginx server to have read-only access to the certificate file, with no ability to change it. In traditional Unix, that might mean having the certificate and its directory owned by root and giving read-only access to a group of which Nginx is the only member. Or maybe there's a better solution involving ACLs. I haven't played with this, so I'm not sure about the best way.
Right--you could, for example, set the ownership of the private key to root:certgroup, add the nginx user to the certgroup group, and then chmod the file to 640.
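In shell terms, that scheme looks something like this (the group name and path are illustrative; on Debian/Ubuntu the nginx workers typically run as www-data):
Code:
groupadd certgroup
usermod -a -G certgroup www-data
chown root:certgroup /etc/nginx/ssl/example.com.key
chmod 640 /etc/nginx/ssl/example.com.key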
 
Upvote
1 (1 / 0)

raxx7

Ars Legatus Legionis
17,104
Subscriptor++
Daedalus207":1i5w7r9r said:
@portent-
I believe I'm in agreement with what you are suggesting. It makes no sense to me either that a self-signed certificate should make a browser go nuts with end-of-the-world warnings, but unencrypted stuff is fine.

(...)

I personally see a great amount of value in self-signed certificates. Would I trust it for an e-commerce site or a bank? Absolutely not; in those cases, the third-party verification would be useful.
I see a third-party certificate as telling me, "A company who sells certificates verifies that the server at 192.168.1.1 is really foobar.com."

I see a self-signed certificate as telling me "The server at 192.168.1.1 claims to be foobar.com, and that matches your DNS records, so there's a good chance that it's legit."

No certificate at all says "I dunno. Good luck with that."

One of the goals of SSL is to avoid man-in-the-middle attacks.
A 3rd party certificate tells you "the server you're talking to is controlled by whoever controls foobar.com", because the SSL certificate issuer verified that.

A self-signed certificate tells you nothing more than plain HTTP does. If someone can set up a MITM on foobar.com, they'll surely be smart enough to present their own certificate claiming to be foobar.com as well.

If browsers were to silently accept self-signed certificates, it would negate the usefulness of SSL as MITM protection.

Fortunately, there are CAs like StartSSL, so we can get free certificates.
And, eventually, we'll have DNSSEC-based certificate validation, eliminating the need for CAs.
 
Upvote
1 (1 / 0)

AkaTG

Seniorius Lurkius
45
bartfat":2wcu66tg said:
Aw, there doesn't seem to be an option for multiple subdomains for free SSL certificates. Unless I pony up some $50. As it is, this seems to be pretty limited, since you can't use the same SSL certificate with a home server and say, Bluehost together. They'd require separate sub-domains... and the Bluehost would take up the standard domain and the www one too.

EDIT: I guess the way around it is to use multiple certificates for separate sub-domains. I can see this getting unwieldy though with more than a few sub-domains...
If you only need two, you can do what I do. Get a certificate for sub.domain.com and use it for both domain.com and sub.domain.com. Of course www. won't work, but I dislike www. anyway.
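On the nginx side, that just means listing both names in one server block against the same cert (a sketch; it assumes, as with StartSSL's free certs, that the certificate's names cover both):
Code:
server {
    listen 443 ssl;
    server_name domain.com sub.domain.com;
    ssl_certificate     /etc/nginx/ssl/sub.domain.com.crt;
    ssl_certificate_key /etc/nginx/ssl/sub.domain.com.key;
}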

rael9":2wcu66tg said:
2) You should really check the configuration files' syntax before restarting NGINX. I find it's a good practice so that you can catch that error *before* you take down your site because of a typo or something. You can check it easily with:
Code:
nginx -t
If you reload instead of restart it'll test the conf files for you and instead of failing to load it'll fail, tell you, and continue to run with the current working conf files.

service nginx reload
 
Upvote
0 (0 / 0)
gozar":5ojb88z1 said:
What about being your own Certificate Authority?
That's one of my problems with SSL. There's no network of trust. There's no Web of Trust like PGP, and there's no hierarchy of trust like DNS. There's only avaricious certificate authorities. You can't get a certificate that says, "I'm the authority for my domain; now here's the signing key for the rest of the hosts in this domain." Likewise, you can't say, "Comodo, you have failed me for the last time; be banished from my computer."

Oh, setting up a certificate authority is not that hard, especially if you use the easy-rsa scripts that come with OpenVPN, but no browser will consider your signatures valid unless you go through the effort of installing your personal CA's certificate on every computer that connects to your server. I don't think that's practical.
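For the record, even without easy-rsa, a bare-bones personal CA is only a few openssl commands (all of the names here are examples):
Code:
# create the CA key and a self-signed CA cert
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -subj "/CN=Example Personal CA" -keyout ca.key -out ca.crt
# create a host key plus signing request, then sign it with the CA
openssl req -nodes -newkey rsa:2048 \
  -subj "/CN=host.example.com" -keyout host.key -out host.csr
openssl x509 -req -days 365 -in host.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out host.crt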
 
Upvote
1 (1 / 0)
Pokrface":4031qo5u said:
Risk versus convenience.
Indeed. In my personal life, I don't follow the strictest security policies.
Pokrface":4031qo5u said:
The firewall (a box running smoothwall linux, not a dinky soho nat router).
Eh. I've read The Unix-Haters Handbook. I don't really think of Linux as being terribly secure. The whole Unix scene is so much better than when The Unix-Haters Handbook was written, but hardly a week goes by when LWN isn't posting a security advisory about the kernel. And it seems that Smoothwall gets updated only like once a year. Of course, many systems are much worse.
 
Upvote
0 (1 / -1)

elkoraco

Seniorius Lurkius
45
Decade":14xk5qzv said:
Eh. I've read The Unix-Haters Handbook. I don't really think of Linux as being terribly secure. The whole Unix scene is so much better than when The Unix-Haters Handbook was written, but hardly a week goes by when LWN isn't posting a security advisory about the kernel. And it seems that Smoothwall gets updated only like once a year. Of course, many systems are much worse.

I seriously doubt they get regular system upgrades once a year. They're stable kernel branches, used in most enterprise distributions, and all the security fixes arrive there very fast. It's up to the user (I guess administrator is the better term in this case) to apply the updates. The yearly updates are directed at features and bug fixes, not security fixes. In this sense Debian gets "updated" only two or three times a year, Red Hat and CentOS only two times in a millennium and so on.
 
Upvote
0 (0 / 0)

pokrface

Senior Technology Editor
21,531
Ars Staff
I wish we had a super-brilliant network specialist on staff, because this could be a fascinating side article--what makes a good stateful firewall?

I'm pretty comfortable with a GNU/Linux firewall, primarily because I've got remote logon and remote web administration disabled (and by "disabled" I mean "constrained to the internal interface and unavailable on the external one"). I've got eight (I think, I'd have to log on to check) TCP or UDP ports forwarded to other boxes on my LAN, but there's no real way to interact with the firewall itself from the public interface.
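(In raw iptables terms, "constrained to the internal interface" boils down to rules along these lines; the interface names and port are examples, not my actual config:)
Code:
iptables -A INPUT -i eth1 -p tcp --dport 443 -j ACCEPT  # web admin, LAN side only
iptables -A INPUT -i eth0 -p tcp --dport 443 -j DROP    # never from the outside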

Smoothwall is essentially a web front-end over iptables (though it'll also do DHCP and DNS for your LAN if you want, and it also supports Snort as an IDS). Though vulnerabilities in iptables do show up from time to time, they're not super-common, and they almost always tend to be crash or denial-of-service vulnerabilities, versus "run code remotely" type vulnerabilities.

The way I see it, there are only two things a person could do to be more secure than using a Linux-based firewall:

1) Switch to a BSD-based firewall like m0n0wall, or its bigger & more configurable cousin, pfsense. I haven't done this because I like the interface simplicity of smoothwall, and I like how it makes setting packet filter rules and DNAT/SNAT rules into a single step instead of two separate operations.

2) Fork out a few hundred bucks for a small Cisco ASA5505 or its equivalent. It's questionable that the cost will balance the benefit for a home user, though, and every time I think of going down this road I always come up with something better to do with the cash.
 
Upvote
1 (1 / 0)

pokrface

Senior Technology Editor
21,531
Ars Staff
PietjePuk75":1quhkw5w said:
Great series :)

There is a (consistent) error in this article (not in the previous one) wrt the /etc/nginx/conf directory, since it should be /etc/nginx/conf.d
Let me double-check and make sure I didn't mean for that to go in a separate directory, since conf.d should probably only be used for included config files.

Edited to add - Thanks for pointing this out. I've altered the guide so that it includes the creation of an "ssl" directory beneath /etc/nginx, and the cert & key files get moved there. That keeps them out of conf.d, which is good practice.

Probably the "best" way to do this would be to leverage /etc/ssl and /etc/ssl/private, but that would be a more substantial edit to the article, so I'll likely just leave it as-is.
 
Upvote
1 (1 / 0)

PietjePuk75

Smack-Fu Master, in training
92
Pokrface":2r4547n7 said:
I've altered the guide so that it includes the creation of an "ssl" directory beneath /etc/nginx, and the cert & key files get moved there. That keeps them out of conf.d, which is good practice.
Debian (and probably Ubuntu too) uses conf.d directories a lot, therefore I jumped to the conclusion that the conf dir was a typo.
But you're right, conf.d isn't an appropriate place for the certs.

Probably the "best" way to do this would be to leverage /etc/ssl and /etc/ssl/private
On my system, I'm using /etc/ssl/cert and /etc/ssl/private
 
Upvote
0 (0 / 0)

rael9

Seniorius Lurkius
4
Pokrface":2utoi4we said:
Maybe I'm blind, but I'm not seeing it--is it in the code block with the two wget commands, or the next one with cat? I'm looking at both in the Ars CMS and I'm not seeing any extra characters or anything.

It turns out it was a Chrome extension that I had forgotten about that turns non-clickable links into clickable ones. Sorry for the confusion.
 
Upvote
1 (1 / 0)

jgrmnprz

Seniorius Lurkius
2
Thank you for this article.
I think you missed something about permissions on the private key file. This file shouldn't be readable by the nginx user but only by root.

Code:
# ps -ef | grep nginx
root 11597 1 0 11:58 ? 00:00:00 nginx: master process /usr/sbin/nginx
www-data 11598 11597 0 11:58 ? 00:00:00 nginx: worker process
www-data 11599 11597 0 11:58 ? 00:00:00 nginx: worker process

If someone can execute something as the worker processes' user, they might be able to read the file.
 
Upvote
0 (0 / 0)

pokrface

Senior Technology Editor
21,531
Ars Staff
jgrmnprz":192bhsfz said:
Thank you for this article.
I think you missed something about permissions on the private key file. This file shouldn't be readable by the nginx user but only by root.
You are exactly right! My impression was that the Nginx worker processes needed to be able to read the key, but after chowning the key to root and chmod'ing it to 400 to restrict access to all but the owner, SSL continues to work without issue.
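For reference, the tightened permissions are just the following (using the example path from the guide):
Code:
chown root:root /etc/nginx/ssl/example.com.key
chmod 400 /etc/nginx/ssl/example.com.key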

I'm going to ask the nginx mailing list why this works, though. I suppose it's because the nginx master process is running at root privilege and it retrieves the key for the workers, but I don't know for certain.
 
Upvote
0 (0 / 0)
Pokrface":1nvyyw3n said:
1) Switch to a BSD-based firewall like m0n0wall, or its bigger & more configurable cousin, pfsense. I haven't done this because I like the interface simplicity of smoothwall, and I like how it makes setting packet filter rules and DNAT/SNAT rules into a single step instead of two separate operations.

Just to be clear, that's not necessary on pfsense. You can opt to have it *not* create a matching firewall rule when adding a NAT translation, and you can also opt to have it *not* make a turnaround rule that makes the mapping accessible from inside as well as outside, but the defaults have been to combine the rules for a few years now. As far as simplicity goes, I'd say pfsense is pretty simple: it's all web-based, and the layout of the GUI is fairly logical. The QoS stuff can be complicated if you forgo the setup wizard. The Layer 7 stuff I've not touched.

Pokrface":1nvyyw3n said:
2) Fork out a few hundred bucks for a small Cisco ASA5505 or its equivalent. It's questionable that the cost will balance the benefit for a home user, though, and every time I think of going down this road I always come up with something better to do with the cash.

Don't forget to pay for licensing! :)

I've recently been snookered into dealing with SonicWall boxes, and they totally changed my viewpoint about what's "easy" and what's not as far as firewall GUIs. I used to shy away from recommending pfsense to average people and just push them to SonicWall, assuming that a huge commercial offering like that would be easy to use. Now that I've set up a dozen or so of those things, I think they are way more convoluted than pfsense (they force you to visit three or more menus to set up a simple port forward). As for Cisco, while I'm generally OK with their routing and switching stuff, if the ASA is anything like their other gear for setting up QoS, well, god help you. It's decent gear, but not something I'd touch unless a customer absolutely needed it for the name and the ICSA cert.

I noticed we also had a few FreeBSD folks slapped down in the comments. Let's not forget that on the server side (as opposed to the desktop, where you would be fiddling with Wi-Fi, video cards, and oddball peripherals), FreeBSD is going to support all the same hardware, and that out of the box you're not having to trim a bunch of fat to keep your web server populated with only what's needed for the intended task. And of course nginx was developed on FreeBSD, and as far as I can tell it's still their main development platform. Not sure if you're covering Varnish later on, but that's another project that's led by one of the FreeBSD core team members. The OS obviously doesn't have the PR and nerd momentum that Linux does, but it does quietly power a crapload of stuff, including Netflix's new in-house "CDN in a box": https://signup.netflix.com/openconnect/software

I'll gladly entertain any questions from my fellow Arsians that would like to try following this series on FreeBSD.
 
Upvote
0 (0 / 0)

pokrface

Senior Technology Editor
21,531
Ars Staff
jgrmnprz":yi52gg1n said:
I'm going to ask the nginx mailing list why this works, though. I suppose it's because the nginx master process is running at root privilege and it retrieves the key for the workers, but I don't know for certain.

Yep, I think this is the reason.
Got my answer, from one of the core nginx devs on the mailing list:
Worker processes doesn't read keys, but use keys already in memory
(read by the master process during reading/parsing the
configuration file, and inherited via fork() syscall, much like
all other configuration data).
 
Upvote
0 (0 / 0)

skapegoat

Ars Scholae Palatinae
884
Subscriptor++
The shutdown of the free Google Apps prompted me to have similar wonderings, not that $5/mo is insane or anything; I'd just rather not spend it.

I actually already have some domains registered with a standard webhost, and many of them are hosted there as well (naturally).

Could I potentially get a wildcard certificate for one of the domains I already own, and use it on both the webhost AND my webserver I'm throwing together at home based on this article? E.g.: register mydomain.com with my hosting company. Have it do its thing. Buy a *.mydomain.com certificate. Have hosted.mydomain.com (and mydomain.com) use the cert, and also have home.mydomain.com use the same cert on my personal box?

That is far too wordy for what in my head is a simple question and scenario. Hopefully there is a simple answer. =)
 
Upvote
1 (1 / 0)
skapegoat":2j4d3whg said:
Could I potentially get a wildcard certificate for one of the domains I already own, and use it on both the webhost AND my webserver I'm throwing together at home based on this article? Eg. Register mydomain.com with my hosting company. Have it do its thing. Buy a *.mydomain.com certificate. have hosted.mydomain.com (and mydomain.com) use the cert, and also have home.mydomain.com use the same cert on my personal box?

Absolutely! That's the whole point of a wildcard cert: it doesn't matter whether the subdomains are on the same host or not; it's still doing the job of certifying that the server your browser is connecting to possesses a cert that matches the hostname the browser is requesting a page from.

I use wildcards all over like that - customer-facing stuff as well as internal sites, all hosted on different hardware in different locations.
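Each box just points nginx at its own copy of the same wildcard pair, e.g. (paths and names are illustrative):
Code:
server {
    listen 443 ssl;
    server_name home.mydomain.com;
    ssl_certificate     /etc/nginx/ssl/wildcard.mydomain.com.crt;
    ssl_certificate_key /etc/nginx/ssl/wildcard.mydomain.com.key;
}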
 
Upvote
0 (0 / 0)
First things first, thanks for the article series, and kudos for using Nginx; it's really a nice web server.
Sorry for the late reply; that day job thing is sucking time like there is no tomorrow :D

Pokrface":2w9pklen said:
portent":2w9pklen said:
I just think that it should not be treated as somehow more dangerous than an unencrypted, unauthenticated one. It's certainly not secure, but it's not any more dangerous than a "normal" non-SSL connection.
That turns into a sticky issue, though. The ultimate purpose of encryption isn't to scramble your data, but rather to ensure that it's not going to be read by anyone other than the intended recipient. Encryption without end-to-end authentication means that you're sending your scrambled bits to...someone. The connection may or may not have an eavesdropper; there might or might not be a man-in-the-middle.

Ensuring you know who you're talking to is just as important as the actual scrambling of the data, and some kind of warning really is necessary.

I'll have to disagree. (TL;DR at the bottom)
Encryption without authentication is perfectly ok.

The SSL + CA system is broken beyond repair: it does not protect against the major real-world threats and is only effective at stopping some minor ones.
A self-signed SSL certificate + P2P validation (e.g. the Convergence plugin for Firefox) is much more effective against real-world threats and much more resilient.

Let's look at the three major threats faced by web users today:

1) Small-scale "man in the middle"
The user is on a shared medium that can be intercepted (Wi-Fi, cable modems; more difficult for DSL).
She visits a website with a CA-certified SSL key. If an attacker tries to intercept the communication, a more-or-less explicit message tells her the communication is not secure.
In that case, SSL + CA works.
In that case, self-signed + P2P verification also works (someone accessing the server from another network would get a different certificate from the same server -> warning).

Anyway, that doesn't really matter, because the hacking of critical accounts (banking, webmail, social networking sites, etc.) happens by compromising the user's terminal (e.g. a key-logger), NOT by intercepting communications (much better risk vs. reward).

In case the server is compromised, the SSL + CA system fails badly, because key revocation simply doesn't work: the server owner knows there is a valid certificate "in the wild" allowing anyone to be authenticated as him, even if he gets a new valid one. With self-signed + P2P, just generate a new certificate, and after some time that one will become the most trusted certificate (shit happens, trust is fluid not absolute, resilience is massively improved).

2) Large-scale "man in the middle" => massive real-world threat
The user uses a network controlled by a large entity (corporation, government). By putting pressure on a CA, or by forcing the user to trust a rogue CA, that large entity can get fake certificates to impersonate any SSL-protected website.
In that case, SSL + CA fails while giving the user a false sense of security, which is undoubtedly worse than having no security at all.
In that case, self-signed + P2P also works: the user is aware a MITM is going on, even if there is probably no way to avoid it.

3) Large-scale passive interception => massive real-world threat
The user visits a website that is not secured by SSL at all. Unknowingly, she gives the list of everything she reads, the places she went, the places she'll go, who she is, who she talks to, and what her opinions are to her ISP, her ISP's commercial partners, her government (thanks to "data retention laws"), and shittons of technical third parties that may or may not log everything for an unknown duration.
In that case, SSL + CA fails indirectly: because it is cumbersome (it requires registration, sometimes paperwork and money), most servers still transmit everything in clear text.
If self-signed were the norm, every server install procedure could include the automated generation of a self-signed certificate and the redirection of any request on port 80 to port 443. Having an unsecured server would actually require some additional work.
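(On the nginx side, that blanket redirect is only a few lines; a sketch using the return syntax of recent versions:)
Code:
server {
    listen 80 default_server;
    return 301 https://$host$request_uri;
}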

As a matter of fact, even without a working P2P validation system, that third threat is avoided entirely by having self-signed certificates everywhere: if anyone wants to log all the URLs some user visits, or do some DPI voodoo, they'll have to mount an active MITM attack, which tends to be highly illegal in most countries and very, very unsubtle: anyone technically inclined can detect the attack, collect solid proof of it, and expose the culprit; none of that is possible with passive snooping and subtle DPI stuff.



TL;DR aka "my conclusion": it's ok to have CA-signed certificates, but we know that the system is fundamentally broken, and that the most promising way out is self-signed certificates + P2P validation, so please, Mr Hutchinson, do not discourage the use of self-signed certificates.
A self-signed certificate is better than no certificate at all. Hopefully, the web browser designers will update their user interface to acknowledge that reality.
 
Upvote
0 (0 / 0)

pokrface

Senior Technology Editor
21,531
Ars Staff
WaggishWombat":z1uq7h3y said:
TL;DR aka "my conclusion": it's ok to have CA-signed certificates, but we know that the system is fundamentally broken, and that the most promising way out is self-signed certificates + P2P validation, so please, Mr Hutchinson, do not discourage the use of self-signed certificates.
A self-signed certificate is better than no certificate at all. Hopefully, the web browser designers will update their user interface to acknowledge that reality.
I agree in principle--you're still saying that authentication is a fundamental component of encryption, which is my overriding point. It's the last sentence, though, that is the problem. Web browsers throw up the same scary warnings for self-signed certificates as they do for actual invalid or revoked certificates, and encouraging self-signed certs right now encourages user click-through of those warnings. A self-signed cert is likely legitimate, but making users blind to browser cert warnings leaves them unprotected against actual MITM attacks or other true maliciousness.

So, as you say, to make self-signed certs a valid option, browsers need to stop treating self-signed certs as invalid and bad.
 
Upvote
0 (0 / 0)
Pokrface":1ukq224t said:
WaggishWombat":1ukq224t said:
TL;DR aka "my conclusion": it's ok to have CA-signed certificates, but we know that the system is fundamentally broken, and that the most promising way out is self-signed certificates + P2P validation, so please, Mr Hutchinson, do not discourage the use of self-signed certificates.
A self-signed certificate is better than no certificate at all. Hopefully, the web browser designers will update their user interface to acknowledge that reality.
I agree in principle--you're still saying that authentication is a fundamental component of encryption, which is my overriding point.
They're separate things.
Having one without the other is perfectly ok in certain scenarios. As I said, encryption alone forces a passive listener to become an active attacker and expose itself. That is not trivial; that is serious infosec jujutsu :)

Pokrface":1ukq224t said:
It's the last sentence, though, that is the problem. Web browsers throw up the same scary warnings for self-signed certificates as they do for actual invalid or revoked certificates, and encouraging self-signed certs right now encourages user click-through of those warnings. A self-signed cert is likely legitimate, but making users blind to browser cert warnings leaves them unprotected about actual MITM attacks or other true maliciousness.

So, as you say, to make self signed certs a valid option, browsers need to stop treating self signed certs as invalid and bad.
Agreed, the solution to scary self-signed certificate warnings is not to abandon self-signed certificates (that would leave us with no chance of getting out of the CA mess) but to demand that web browser designers fix their shit.

How difficult would it be to come up with an icon that is very different from the key/padlock skeuomorph (so as not to confuse the user) and means "the communication with this website is encrypted, but there is no way to ensure authenticity"?
How about a big eye, 1984-style? :D
That way, self-signed certificates could become more widespread, and then, when P2P authentication is mature enough, a website with a secure and authenticated self-signed certificate could get a well-deserved padlock, getting us out of that CA clusterfuck for good.
DNS is centralized, so CAs are ok for DNSSEC, but web stuff must be as decentralized as possible, and CAs are not.
 
Upvote
0 (0 / 0)