Critical crypto bug exposes Yahoo Mail passwords Russian-roulette style

Infidel (http://meincmagazine.com/civis/viewtopic.php?p=26613467#p26613467) said:
rakkuuna (http://meincmagazine.com/civis/viewtopic.php?p=26611303#p26611303) said:
How effective is revoking certificates? Don't the client apps need to check it by themselves? I wonder if they do it very often...
In Chrome, there's a setting for "Check for server certificate revocation"... it's off by default.

Which makes you wonder why we have certificates in the first place, if Google is so idiotic as to turn a setting like that off by default.

If we cannot send users worldwide a list of certificates that are stolen, invalid, or simply not to be trusted, there is no need for a repository either.
 
Upvote
5 (5 / 0)
Aurich (http://meincmagazine.com/civis/viewtopic.php?p=26610491#p26610491) said:
Fblue (http://meincmagazine.com/civis/viewtopic.php?p=26610469#p26610469) said:
Solidstate89 (http://meincmagazine.com/civis/viewtopic.php?p=26610385#p26610385) said:
You don't have to keep logging in under the throwaway account. They updated OpenSSL this morning.

I saw this. I wonder if they have swapped their SSL cert yet? I would imagine Ars' private key was compromised, as everything else appeared to be.
Yes, we've updated all our certs.

I don't think users see the severity of this issue. Even if you have already patched the system, and even if the certificate was replaced, you can't be sure the server was not completely compromised before these changes. If attackers somehow gained access to admin accounts, they could have modified something and even replaced logs. On Linux, a compromised server means a full rebuild and a fresh, clean start. Any security policy requires this: if there is suspicion of a compromised server, an OS wipe is required.

And since this could affect any user/login to the server via any SSL connection, someone could have had access before this was even reported. Maybe the server is compromised and the admin is not aware of it.

This shows how extreme this issue is.
 
Upvote
0 (2 / -2)
No, there has been no need for C for nearly 20 years.
I was not talking about Java or .NET.
I was talking about C++, with std::vector and std::string and C++ interfaces and serialization.
And I was talking about designing interfaces/APIs in a way that only a small number of different states are possible.
Things like buffer overflows, truncation, unsafe C functions, multiple-step initialization, or "goto fail" have not happened to me in nearly 20 years -- because of using C++ as such.

The problem is that everybody thinks he can write or even design software.
That there are different levels of software quality has not entered most people's thinking.
 
Upvote
-7 (8 / -15)

vigeelebrun

Ars Scholae Palatinae
855
kliu0x52 (http://meincmagazine.com/civis/viewtopic.php?p=26613301#p26613301) said:
ExcessPhase (http://meincmagazine.com/civis/viewtopic.php?p=26612879#p26612879) said:
ArrayBoundscheck?
Why does this sound like C programming?
Why have I not been coding in C since 1996?
And what would you like to use to write low-level security libraries that will be used in a wide variety of scenarios and processes?
Ada 2012

Ada helps churn out less-buggy code - Ada is good at detecting errors in programs early in the programming lifecycle. You submit your code to an Ada compiler, and it will inform you right away that you made mistakes that a lot of languages would let go right by. For some people, that's the bottom line. Errors cost money. The sooner you catch them, the less they cost. If the compiler misses an error, Ada has another line of checking when you run the program. People who use Ada generally find that if the program makes it through the compile-time and run-time checks, it's remarkably close to doing what it should. That gives them a sense of productivity and pride in the quality of their work.

Error Cost Escalation Through The Project Life Cycle
 
Upvote
12 (12 / 0)

Otus

Ars Tribunus Militum
2,125
Subscriptor++
dangoodin (http://meincmagazine.com/civis/viewtopic.php?p=26612323#p26612323) said:
Otus (http://meincmagazine.com/civis/viewtopic.php?p=26611775#p26611775) said:
Why are they POSTing a plain-text password when they could hash it on the client side and avoid ever leaking it?
To work with a hash, the user must enter the corresponding plain-text password and it must be passed through PBKDF2 or another hashing algorithm. During this process, the plain-text password is temporarily entered into memory. There's no way around this. To verify a password, it must be processed by the computer and pass through its memory. The same process is what gives rise to "RAM scrapers" that scour the memory of point-of-sale terminals for credit card numbers before they are encrypted and transmitted to payment processors.
Now that it's the morning and I'm no longer drunk I can see where I went wrong in my simple idea – although even avoiding leaking the plaintext password is a plus, since the clueless user may have reused it and the salted hash won't help the attacker with other sites. However, you could use public key cryptography and end up with sign in tokens that don't help the attacker whether they read the database or a memory dump.

Have a signing key pair derived from the password on the client's side, use it to sign a unique message, then verify that on the server using the public key. The server never has access to the password or the private key, which would be required to sign in. They never leave the user's browser.

e.g.
in the client:
password + salt -> PBKDF2 -> crypto_sign_keypair_from_seed

Store the public key in the database on account creation or password update.

On sign-in use the secret/private key to sign the current universal time in the client and send it over. The server checks the signature and the time signed, and stores the latest sign-in time in the account. The next sign-in must use a higher time value so that an attacker can't reuse a token.

An attacker can still use normal brute force or dictionary attacks on the public key if they get it from the server database or memory, but they are never able to just sign in.
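A runnable sketch of the scheme above, using only the Python stdlib. The stdlib has no Ed25519 / crypto_sign_keypair_from_seed, so an HMAC tag over the PBKDF2-derived key stands in for the signature here -- which means this toy server holds the secret, unlike the real proposal, where it would store only the public verification key. What the sketch does illustrate faithfully is the deterministic derivation from password + salt and the strictly-increasing timestamp check that blocks token replay; all names are made up.

```python
import hashlib
import hmac
import time

def derive_key(password: str, salt: bytes) -> bytes:
    # password + salt -> PBKDF2 (stand-in for the seed of a signing keypair)
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class Server:
    def __init__(self, verify_key: bytes):
        self.verify_key = verify_key   # with Ed25519 this would be the *public* key
        self.last_seen = 0             # highest timestamp accepted so far

    def sign_in(self, timestamp: int, tag: bytes) -> bool:
        expected = hmac.new(self.verify_key, str(timestamp).encode(), "sha256").digest()
        # reject bad tags, and reject any timestamp not strictly newer than the
        # last accepted one, so a captured token cannot be replayed
        if not hmac.compare_digest(tag, expected) or timestamp <= self.last_seen:
            return False
        self.last_seen = timestamp
        return True

# client side
salt = b"per-user-salt"
key = derive_key("correct horse", salt)
server = Server(key)  # key material stored at account creation

t = int(time.time())
tag = hmac.new(key, str(t).encode(), "sha256").digest()
assert server.sign_in(t, tag)        # fresh token accepted
assert not server.sign_in(t, tag)    # replayed token rejected
```

With a real signature scheme the tag would be produced client-side from the private key, and a database or memory dump would yield only the public key, leaving brute force as the attacker's sole option, exactly as described above.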
 
Upvote
0 (0 / 0)

kliu0x52

Ars Scholae Palatinae
757
I wonder if it would be worthwhile for browsers to do something proactive about this.

1) Browser is instructed to visit https://foo.example.com

2) Browser runs a test for this bug.

3a) Browser sees no problem, and adds foo.example.com to a list of safe domains (so that it won't check this every time)

3b) Browser sees a problem, warns user.
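The steps above can be sketched roughly as follows. Everything here is hypothetical: `is_vulnerable` stands in for a real probe (which would send a malformed TLS heartbeat and see whether extra memory comes back), and here it just consults a stub table.

```python
KNOWN_LEAKY = {"bad.example.com"}   # stub data for the sketch

safe_domains = set()   # domains that already passed the check (step 3a cache)

def is_vulnerable(domain: str) -> bool:
    # stand-in for a real heartbeat probe against the server
    return domain in KNOWN_LEAKY

def visit(domain: str) -> str:
    if domain in safe_domains:
        return "load"                  # step 3a: cached as safe, skip the test
    if is_vulnerable(domain):
        return "warn user"             # step 3b: problem found
    safe_domains.add(domain)           # step 3a: remember so we don't re-check
    return "load"

assert visit("foo.example.com") == "load"
assert "foo.example.com" in safe_domains     # won't be probed again
assert visit("bad.example.com") == "warn user"
```

A real implementation would also need cache expiry, since a site that was safe yesterday may have been downgraded or newly misconfigured.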
 
Upvote
1 (1 / 0)

traveller

Seniorius Lurkius
17
What is bad is not the severity of this issue, but who made it public before vendors like Red Hat could even release patches.

Honestly, this issue was published before people could even patch their servers, which makes me wonder why they did not give at least 7 days until the majority of servers were patched. In particular for a bug of this severity, and one so easy to exploit by just visiting a web server and sending some code to it.

Was it not enough for the security researchers to say there is a security hole and patch it first, instead of publicly explaining the exploit vector? Ars is just covering it after it's already public, but the people who originally discovered and leaked this are amazingly irresponsible and have done huge damage to the Internet.

I don't think you understand how this works. The exploits that were floating around within hours of the announcement were created from the patch itself, not because the researchers revealed too much information. They released the minimum information necessary, which by itself was by no means enough to do anything. The OpenSSL repositories are, by their very nature, public, so the moment the bug was fixed everybody was able to see exactly what it is and people were writing exploits based on that.
 
Upvote
13 (13 / 0)

kliu0x52

Ars Scholae Palatinae
757
traveller (http://meincmagazine.com/civis/viewtopic.php?p=26615119#p26615119) said:
What is bad is not the severity of this issue, but who made it public before vendors like Red Hat could even release patches.

Honestly, this issue was published before people could even patch their servers, which makes me wonder why they did not give at least 7 days until the majority of servers were patched. In particular for a bug of this severity, and one so easy to exploit by just visiting a web server and sending some code to it.

Was it not enough for the security researchers to say there is a security hole and patch it first, instead of publicly explaining the exploit vector? Ars is just covering it after it's already public, but the people who originally discovered and leaked this are amazingly irresponsible and have done huge damage to the Internet.

I don't think you understand how this works. The exploits that were floating around within hours of the announcement were created from the patch itself, not because the researchers revealed too much information. They released the minimum information necessary, which by itself was by no means enough to do anything. The OpenSSL repositories are, by their very nature, public, so the moment the bug was fixed everybody was able to see exactly what it is and people were writing exploits based on that.
While that's true for code, they could've privately contacted the security teams at places like Yahoo that serve millions of people. AFAICT, Google's services were all secured before the disclosure, probably because one of the people who found the problem works for Google's security team. While Yahoo is Google's competitor, the problem of password cross-pollination means that giving big places like Yahoo an early heads-up through private channels would've been good for Google's users, too.
 
Upvote
1 (5 / -4)

UnWeave

Smack-Fu Master, in training
59
nibb (http://meincmagazine.com/civis/viewtopic.php?p=26614707#p26614707) said:
Infidel (http://meincmagazine.com/civis/viewtopic.php?p=26613467#p26613467) said:
rakkuuna (http://meincmagazine.com/civis/viewtopic.php?p=26611303#p26611303) said:
How effective is revoking certificates? Don't the client apps need to check it by themselves? I wonder if they do it very often...
In Chrome, there's a setting for "Check for server certificate revocation"... it's off by default.

Which makes you wonder why we have certificates in the first place, if Google is so idiotic as to turn a setting like that off by default.

If we cannot send users worldwide a list of certificates that are stolen, invalid, or simply not to be trusted, there is no need for a repository either.
It doesn't ignore certificates completely; it has its own list, which is maintained through updates. See this Ars article. Though, particularly at present, I'm still keeping that box checked.
 
Upvote
1 (1 / 0)
myforwik (http://meincmagazine.com/civis/viewtopic.php?p=26612193#p26612193) said:

IMO SSL/TLS is now completely broken. The number of certificates that have potentially been exploited and could now be used for man-in-the-middle attacks could be in the millions... the blacklist of certificates will be in the millions, and/or the number of blacklisted sub-certificate authorities is probably going to be 10,000+. Vendors already hate including just one or two items on the blacklist, let alone this many...

I've had my suspicions that the NSA/other TLA has had something like this up their sleeve for some time. If it wasn't something like this, they've probably just bought one of the major CAs.

IMHO, the whole idea of a CA is broken anyway - you are trusting a third party who may or may not be trustworthy. The only way to verify certs properly IMHO is to do it out of band via snail mail or some other method.
 
Upvote
0 (0 / 0)
kliu0x52 (http://meincmagazine.com/civis/viewtopic.php?p=26613301#p26613301) said:
ExcessPhase (http://meincmagazine.com/civis/viewtopic.php?p=26612879#p26612879) said:
ArrayBoundscheck?
Why does this sound like C programming?
Why have I not been coding in C since 1996?
And what would you like to use to write low-level security libraries that will be used in a wide variety of scenarios and processes? Write this in a managed language, and now you have the problem of every program requiring TLS also requiring whatever runtimes, libraries, etc. that your managed language uses.

There is a reason people still use C, just as there is still a reason for people to understand assembly. Just because C isn't what you'd use to write most software doesn't mean that there aren't places where C is by far the best option available.

I dunno. Maybe Ada? Pascal? One of the myriad other languages that isn't quite as sucky for performance as managed code, but at least does proper type checking and array bounds checking?
 
Upvote
6 (7 / -1)

UnWeave

Smack-Fu Master, in training
59
phil_s (http://meincmagazine.com/civis/viewtopic.php?p=26615601#p26615601) said:
I am glad I use a password manager (http://www.stickypassword.com) so changing all my passwords will be quick and easy. Or do you think it is not necessary?
Change them if the website has fixed the flaw. Otherwise your new password will still be visible in potential attacks.
 
Upvote
4 (4 / 0)

Tridus

Ars Tribunus Militum
2,507
Subscriptor
bombardier (http://meincmagazine.com/civis/viewtopic.php?p=26611081#p26611081) said:
Solomonoff's Secret (http://meincmagazine.com/civis/viewtopic.php?p=26610043#p26610043) said:
Bugs like this don't happen in memory-managed languages like Java. If we insist on writing our security software in C, perhaps it should be written in a variant that enforces the validity of memory accesses at runtime. Performance would suffer negligibly compared to the security benefit. Unfortunately certain operations would have to be disallowed but the resulting inconvenience is a small price to pay.

The performance difference is not that negligible. My company moved parts of the project from C++ to C# and all of our beta customers are complaining about performance problems. The difference is really big.

This is not a tool selection problem. This is a developer problem. It doesn't make sense to switch to a different tool (programming language) just because the individual who used the tool was not very proficient with it. Everyone with substantial experience in this type of programming would be aware of possible buffer-overflow problems and would take care to sanitize input data that comes from an untrusted source. This is not something really tricky and hard to see that somehow surprised the developer. This is really basic stuff when you have experience working in this field.

Not every developer is created equal, and there is no universal tool you can use to level the playing field. The current tendency to use inappropriate tools just to minimize the impact of bad developers is slowly coming to an end. We hit the 4GHz limit with CPUs, and there is no more "let's wait for next year's hardware to improve our performance".

The problem with blaming this type of thing on "bad developers" is that it handwaves the problem away without solving anything. "Oh, bad developers did that! So it's not really a problem."

It's a huge problem. The fact that the same list of problems keeps coming up over and over again points to some fundamental issues that go beyond "a bad developer did it." Mistakes in this environment are too easy to make, and very costly when they happen.
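The "basic bounds check" being argued about can be made concrete with a toy model (this is not OpenSSL's actual code, just the pattern): the buggy path trusts the attacker-supplied length field, while the fixed path checks it against the real payload before echoing anything back.

```python
# Toy model of the Heartbleed-style overread. The server's buffer holds the
# 4-byte heartbeat payload followed by unrelated secrets in adjacent memory.
SERVER_MEMORY = b"PING" + b"secret-password-123"

def heartbeat_buggy(claimed_len: int) -> bytes:
    # echoes claimed_len bytes starting at the payload -- no bounds check,
    # so a length larger than the payload leaks the adjacent secrets
    return SERVER_MEMORY[:claimed_len]

def heartbeat_fixed(claimed_len: int, actual_payload: bytes) -> bytes:
    # the fix: silently drop requests whose claimed length exceeds the payload
    if claimed_len > len(actual_payload):
        return b""
    return actual_payload[:claimed_len]

leak = heartbeat_buggy(23)          # attacker claims 23 bytes for a 4-byte ping
assert b"secret" in leak            # adjacent memory leaks out
assert heartbeat_fixed(23, b"PING") == b""     # fixed version refuses
assert heartbeat_fixed(4, b"PING") == b"PING"  # honest requests still work
```

A memory-safe language would raise an error on the overread instead of returning neighbouring bytes, which is the crux of the thread's disagreement: whether the discipline should live in the developer or in the language.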
 
Upvote
12 (12 / 0)

zuviel

Wise, Aged Ars Veteran
177
kliu0x52 (http://meincmagazine.com/civis/viewtopic.php?p=26615079#p26615079) said:
I wonder if it would be worthwhile for browsers to do something proactive about this.

1) Browser is instructed to visit https://foo.example.com

2) Browser runs a test for this bug.

3a) Browser sees no problem, and adds foo.example.com to a list of safe domains (so that it won't check this every time)

3b) Browser sees a problem, warns user.

There's already at least one Chrome extension that offers to do this. The trouble is that probing a website this way can run afoul of local laws in some countries, so it can't be pushed universally to all installations.
 
Upvote
0 (0 / 0)

BajaPaul

Ars Tribunus Militum
2,883
Oops! This has probably been one of the NSA's biggest back doors. Look how much time and money this is going to cost to fix.

It is really getting to the point where you don't want to trust the internet for anything anymore. Often it's not even safe to read an article without getting backdoored by some malware on a compromised server.
 
Upvote
-6 (1 / -7)

Luridis

Seniorius Lurkius
43
ExcessPhase (http://meincmagazine.com/civis/viewtopic.php?p=26614785#p26614785) said:
No, there has been no need for C for nearly 20 years.
I was not talking about Java or .NET.
I was talking about C++, with std::vector and std::string and C++ interfaces and serialization.
And I was talking about designing interfaces/APIs in a way that only a small number of different states are possible.
Things like buffer overflows, truncation, unsafe C functions, multiple-step initialization, or "goto fail" have not happened to me in nearly 20 years -- because of using C++ as such.

The problem is that everybody thinks he can write or even design software.
That there are different levels of software quality has not entered most people's thinking.

Yeah, and there is another problem: people who can write or design one "kind" of software think the rules and principles they adhere to apply to every "layer" of software. Hardware does not understand interfaces, objects, strings, or integers. Hardware only understands two types, instructions and data, both represented by binary numbers. Those "weak" types in C exist to allow the programmer to address the hardware in a meaningful way. The reason Java has those types is that it's built on top of these things called standard libraries, kernels, and device drivers, which abstract character encoding, process scheduling, and the like away from the upper layers. Even assembly is still needed for things like ISRs.

Just because you didn't take an OS or embedded programming class doesn't mean there was nothing new for you to learn there. C produces small, fast, efficient programs that run well on bare metal, but using it requires extra care. Computer scientists have tried using managed languages for kernel writing, and few attempts have worked; those that did brought new problems that need to be addressed. More research is needed, and we may need additional support in the hardware if things like garbage collectors are ever going to work without causing timing issues.

jrose (http://meincmagazine.com/civis/viewtopic.php?p=26615497#p26615497) said:
kliu0x52 (http://meincmagazine.com/civis/viewtopic.php?p=26613301#p26613301) said:
ExcessPhase (http://meincmagazine.com/civis/viewtopic.php?p=26612879#p26612879) said:
ArrayBoundscheck?
Why does this sound like C programming?
Why have I not been coding in C since 1996?
And what would you like to use to write low-level security libraries that will be used in a wide variety of scenarios and processes? Write this in a managed language, and now you have the problem of every program requiring TLS also requiring whatever runtimes, libraries, etc. that your managed language uses.

There is a reason people still use C, just as there is still a reason for people to understand assembly. Just because C isn't what you'd use to write most software doesn't mean that there aren't places where C is by far the best option available.

I dunno. Maybe Ada? Pascal? One of the myriad other languages that isn't quite as sucky for performance as managed code, but at least does proper type checking and array bounds checking?

Pascal is two years older than C, and Ada appeared 8 years later. Neither of them is managed code in its original form, which tells me you're not entirely familiar with what that term actually means.

Pascal got some traction as an OS language, and Ada got very little. C took over the world of kernel, driver, and low-level API programming and enables everything you do on electronics today. There is a reason for that. Why don't you try finding out why C beat out the other languages in OS development, instead of throwing disparaging statements at it before learning the when and why? I'll even give you a hint: #ifdef.

The bottom line is that Computer Science is still not at a point where the highest level of languages can fully address the hardware layer without jumping through all sorts of hoops. (Not to mention resorting to C & Assembly connectors to get the job done.) Research is ongoing... But for the time being we still need C and C++ to do jobs that Java & C# cannot.
 
Upvote
-1 (6 / -7)
chromal (http://meincmagazine.com/civis/viewtopic.php?p=26611575#p26611575) said:
I should like to know who submitted the code change, and how much the NSA may have paid them.

When I read the buggy code, I thought "this is the all-time winner for the underhanded C contest" (http://underhanded.xcott.com/). At the very least, it's instantly become the canonical buffer overflow example.

There's going to be a fun blame game to find out who actually submitted the code; I understand that the OpenSSL team mostly integrates submissions from third parties, and the commit logs may not be enough to identify who originally wrote the code (as opposed to who merged it into the code base).
 
Upvote
1 (2 / -1)
cpragman (http://meincmagazine.com/civis/viewtopic.php?p=26616923#p26616923) said:
So is this one of those times where certificates should be proactively revoked by the CAs?
I'd say so.

Further, this is a moment in time where users should crank up the checking of CRLs in their clients.
 
Upvote
0 (0 / 0)
Luridis (http://meincmagazine.com/civis/viewtopic.php?p=26617017#p26617017) said:
C took over the world of kernel, driver, and low-level API programming and enables everything you do on electronics today. There is a reason for that. Why don't you try finding out why C beat out the other languages in OS development, instead of throwing disparaging statements at it before learning the when and why? I'll even give you a hint: #ifdef.
"C combines the power and performance of assembly language with the portability and ease-of-use of assembly language."

(It was a lot more true back before ANSI, especially back when I learned it around 1981 or so, but it's still somewhat true today. Note that C is actually still my own favorite language, but that doesn't mean I'm blind to its flaws, or that I consider it the appropriate language to use in every situation.)
 
Upvote
2 (2 / 0)
ExcessPhase (http://meincmagazine.com/civis/viewtopic.php?p=26617525#p26617525) said:
Luridis (http://meincmagazine.com/civis/viewtopic.php?p=26617017#p26617017) said:
Yeah, and there is another problem: people who can write or design one "kind" of software think the rules and principles they adhere to apply to every "layer" of software. Hardware does not understand interfaces, objects, strings, or integers. Hardware only understands two types, instructions and data, both represented by binary numbers. Those "weak" types in C exist to allow the programmer to address the hardware in a meaningful way.

You are saying that there is no need for arrays or strings when writing device drivers?
And C++ is a superset of C -- but I'm not going to tell you!
 
Upvote
3 (5 / -2)

Luridis

Seniorius Lurkius
43
ExcessPhase (http://meincmagazine.com/civis/viewtopic.php?p=26617525#p26617525) said:
Luridis (http://meincmagazine.com/civis/viewtopic.php?p=26617017#p26617017) said:
Yeah, and there is another problem: people who can write or design one "kind" of software think the rules and principles they adhere to apply to every "layer" of software. Hardware does not understand interfaces, objects, strings, or integers. Hardware only understands two types, instructions and data, both represented by binary numbers. Those "weak" types in C exist to allow the programmer to address the hardware in a meaningful way.

You are saying that there is no need for arrays or strings when writing device drivers?
And C++ is a superset of C -- but I'm not going to tell you!

I know what C++ is...

How do binary numbers become strings? Or, more precisely, how do your strongly typed Java strings become binary numbers in the hardware? Or, to be even more precise, how do your strongly typed Java strings become binary numbers with the proper endian byte order on a specific platform? I'd bet that involves something called libc.

From the links:

Endianness is important as a low-level attribute of a particular data format. Failure to account for varying endianness across architectures when writing software code for mixed platforms and when exchanging certain types of data might lead to failures and bugs, though these issues have been understood and properly handled for many decades.

There's a reason you, as a Java programmer, don't have to worry about such details... That reason is called "C".
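The byte-order point can be seen directly with Python's struct module: the same 32-bit integer serializes to different bytes depending on which endianness you ask for, and code that assumes the host's order breaks when the bytes cross platforms.

```python
import struct

value = 0x01020304

big    = struct.pack(">I", value)   # network / big-endian byte order
little = struct.pack("<I", value)   # little-endian (e.g. x86)

assert big == b"\x01\x02\x03\x04"
assert little == b"\x04\x03\x02\x01"

# round-trip only works if both sides agree on the order:
assert struct.unpack(">I", big)[0] == value
assert struct.unpack("<I", big)[0] == 0x04030201  # misread on the other convention
```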
 
Upvote
-3 (4 / -7)
blissfulight (http://meincmagazine.com/civis/viewtopic.php?p=26611005#p26611005) said:
And yet here we are, still using passwords.

The more secure a system is, the less usable it is.

Passwords are a reasonable compromise between usability and security. And frankly, it really doesn't matter if you're using passwords or not; this sort of thing could potentially undermine any security method you used.
 
Upvote
5 (5 / 0)
myforwik (http://meincmagazine.com/civis/viewtopic.php?p=26612193#p26612193) said:
Even 11 is an understatement. Remember the servers involved have potentially been leaking their private key for their certificate! This means anyone can 'fake' being them.

It is not enough to issue new certificates. All of the old certificates could now be used for man-in-the-middle attacks! Two-thirds of the Internet's certificates potentially need to be blacklisted! This is a MAJOR disaster.

It is infeasible to blacklist such a large number of certificates, as every device requires a list of all blacklisted certificates. This means all of the major CAs are going to have to blacklist their intermediate certificate authorities and start issuing all new certificates under new CAs. This means even people who weren't affected will probably have to have their certificates blacklisted.

I know the reality is a little different, but instead of blacklists why not rely on revocation providers? OCSP is pretty widely supported. Only a small percentage of servers supported the TLS heartbeat extension, and pragmatically speaking only a subset of those could have leaked their private keys, so somewhere in the six figures is my guess (and Netcraft's).

I don't think any CA/B member CA is going to revoke an issuing CA, but they sure stand to make a lot of money from the misery now...
 
Upvote
0 (0 / 0)

Tyler X. Durden

Ars Tribunus Angusticlavius
9,166
Brian6String (http://meincmagazine.com/civis/viewtopic.php?p=26615405#p26615405) said:
It would've been good to redact the example user's Yahoo account name too. Sure glad they didn't use mine. Look at the Notepad screenshot: &login=xxxxxxxxxx.
Especially since, if I'm reading that correctly, they have also given the exact length of the password.
 
Upvote
1 (1 / 0)
Titanium Dragon (http://meincmagazine.com/civis/viewtopic.php?p=26617839#p26617839) said:
blissfulight (http://meincmagazine.com/civis/viewtopic.php?p=26611005#p26611005) said:
And yet here we are, still using passwords.

The more secure a system is, the less usable it is.

Passwords are a reasonable compromise between usability and security. And frankly, it really doesn't matter if you're using passwords or not; this sort of thing could potentially undermine any security method you used.
Some more than others, though. If you're using client certs and the client is not compromised, it's not as bad. If you're using something like SPNEGO with Kerberos, it's not as bad. If you're using something like the Blizzard authenticator, it's not as bad.

(In many of those sorts of cases, you could generally hijack a current session, but you couldn't use what you snagged to create a new session later.)
 
Upvote
1 (1 / 0)
kliu0x52 (http://meincmagazine.com/civis/viewtopic.php?p=26615153#p26615153) said:
traveller (http://meincmagazine.com/civis/viewtopic.php?p=26615119#p26615119) said:
What is bad is not the severity of this issue, but who made it public before vendors like Red Hat could even release patches.

Honestly, this issue was published before people could even patch their servers, which makes me wonder why they did not give at least 7 days until the majority of servers were patched. In particular for a bug of this severity, and one so easy to exploit by just visiting a web server and sending some code to it.

Was it not enough for the security researchers to say there is a security hole and patch it first, instead of publicly explaining the exploit vector? Ars is just covering it after it's already public, but the people who originally discovered and leaked this are amazingly irresponsible and have done huge damage to the Internet.

I don't think you understand how this works. The exploits that were floating around within hours of the announcement were created from the patch itself, not because the researchers revealed too much information. They released the minimum information necessary, which by itself was by no means enough to do anything. The OpenSSL repositories are, by their very nature, public, so the moment the bug was fixed everybody was able to see exactly what it is and people were writing exploits based on that.
While that's true for code, they could've privately contacted the security teams at places like Yahoo that serve millions of people. AFAICT, Google's services were all secured before the disclosure, probably because one of the people who found the problem works for Google's security team. While Yahoo is Google's competitor, the problem of password cross-pollination means that giving big places like Yahoo an early heads-up through private channels would've been good for Google's users, too.

This is why you don't reuse passwords (or, at least, don't reuse passwords on things like email/bank accounts).

Thing is, the more people know about something, the more likely it is to leak into the wild before the patch is out. Seeing as this attack leaves no trace, you wouldn't even know you'd been hit even after the fact, and the incentive to be evil is pretty high here - this was a really powerful exploit.

And yes, this HAS happened before; the risk of such early warnings leaking isn't theoretical -- they have been taken advantage of in the past.

Personally, I'd rather everyone be screwed at the same time than some people be safe and everyone else be exposed for a long period of time; the shorter the window of vulnerability, the fewer users would be affected.
 
Upvote
0 (0 / 0)
Your average joe writing...

I read this in the mainstream press somewhere; the article says I have to change all my passwords. Heck, I don't even remember most of them, as they are stored in the "auto login" or in cookies (heard that name somewhere and found it funny, that's why I remember it) or in my email program (god (administrator) help me if I ever have to touch anything in there besides "reply" or "compose").

I have an average of 10 websites that I use often and about 10 more I forgot I ever created an account in.
To be safe, all of my passwords are something along the lines of "mypassword1" "Mypassword1" (the site requested a darn capital letter) and on and on.
When I want to browse a site and my browser says "security certificate not trusted" or something, and I know the site, then I just click "yes" or "go on" or whatever to get into the site.

But I am scared, for a day; then the next figure about jobless people in my country comes out, a deadline on a project is tight, or I wonder if they will ever find that Malaysian black box (and if it was all a conspiracy of the illuminati (heard that somewhere too)).

But whenever a site asks me for a phone number for the "double security," I know what to do: NEVER release a phone number on the internet. You never know what those hackers might be into. Heck, once they use my number to send me a 6-digit code, god knows what they ("they" always in a negative sense) are going to do with my phone next.

/average joe


This little and caricatured story goes to show what I personally think:
1) the password system is broken
2) the proposed fix is still obscure to all average users
3) IT people like you Ars readers have a great responsibility and opportunity to come up with a different system
4) two-way authentication with a calculator-style device separated from the internet is still the best
5) any authentication scheme will have to take into account the number of devices the average user possesses, and hence not require a device that might get lost
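The calculator-style token in point 4 is, in essence, TOTP (RFC 6238): both sides hold a shared secret and independently derive a short-lived code from the current time, so the secret itself never crosses the network. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6):
    # Both the server and the device compute this independently from
    # the shared secret and the current 30-second time window.
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: SHA-1, time = 59 seconds.
assert totp(b"12345678901234567890", t=59) == "287082"
```

An attacker sniffing the wire only ever sees codes that expire within seconds, never the secret they are derived from.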


I see the "security" side of the internet as one of our generation's greatest challenges (after the previous one brought us the internet and the devices we use to browse it, as well as the endless possibilities of what we can do with it: payments, mail, the internet of things, and so on).
The internet of things, besides, won't become a reality unless these issues are solved in a practical and "consumer friendly" way.


Back in 2000 we had lessons about internet security (I studied communication sciences, so nothing too technical or detailed), and a professor clearly said that two-way authentication is the only currently viable approach, as no key gets sent over the interwebs.

Might be true or not. But it struck me how much the internet is not secure. And yet we get asked to do everything on it (payments, taxation, even voting).

A big dilemma. Security that works "like magic", or a licence to use the internet?

I am exaggerating here, but I believe that's a challenge for all of you, skilled and knowledgeable as you are. I am only a tiny bit more technologically advanced than the average Joe I mentioned here (sorry to everyone named Joe, btw).
 
Upvote
1 (1 / 0)
[url=http://meincmagazine.com/civis/viewtopic.php?p=26609787#p26609787:3ous7bxq said:
issor[/url]":3ous7bxq]Probably TLS-supporting mail servers and OpenVPN clients as well. Atlassian JIRA is having issues as well, and they don't seem to be dynamically linked, either, so we have to wait on them.

Gmail had TLS heartbeat enabled as of an hour ago.

This is quite the nightmare.

Edit: I'm actually seeing conflicting reports, some places report TLS heartbeat support, but exploit scripts don't seem to recognize or be able to use it.

Google applied a patch yesterday morning.
 
Upvote
0 (0 / 0)
iMat: There is no such thing as perfect security. Anything which can be compromised, will be compromised, eventually. Your goal with crypto is not to be perfect, but to make it sufficiently ridiculously hard to compromise your security that it cannot be done within a reasonable time span. But when things go wrong, you still want as much security as possible.

The problem with every system is that every system has its own flaws.

Passwords have the advantage that they are device independent, and, if used properly, are reasonably secure. The downside of passwords is that they are very easy to misuse, good passwords are difficult to remember unless you're extremely good at memorizing random series of characters (or unless you're allowed to use very long passwords, which many sites do not allow), and you cannot reuse them because it means if one is compromised, all are compromised.
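The "very long passwords" point is worth quantifying, since length beats character-set complexity. A back-of-the-envelope comparison (assuming 94 printable ASCII characters and a 7,776-word diceware-style list):

```python
import math

# Entropy of a password = length * log2(alphabet size), assuming each
# element is chosen uniformly at random.
random_8_chars = 8 * math.log2(94)        # ~52 bits, hard to memorize
passphrase_6_words = 6 * math.log2(7776)  # ~78 bits, much easier to recall
assert passphrase_6_words > random_8_chars
```

This is exactly why length caps at many sites are so frustrating: they rule out the one kind of strong password people can actually remember.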

Every other system has its own flaws.

Biometrics have the problem that they have to be digitized to be used, and as a result, if your digitized biometric data is ever compromised, you are permanently hosed: you can't change your fingerprints or iris patterns.

Any sort of system involving physical devices means that the physical device can be stolen and then you are hosed, and depending on the setup of the system, it may be possible to compromise it in some other way, and end users would understand even less about how it was compromised. It also means that if you get robbed, it can be very difficult to fix things, because now you've effectively lost your ability to log in to anything.

This is coupled with the fact that the more secure the system is, the harder it is to use, which is a bad thing - the entire point of systems is to be usable.

Any system you use needs to be as lightweight as possible.
 
Upvote
1 (1 / 0)

wrkg_onit

Wise, Aged Ars Veteran
139
[url=http://meincmagazine.com/civis/viewtopic.php?p=26610189#p26610189:34ucitbq said:
tiagojn[/url]":34ucitbq]You can check whether a particular website is vulnerable using this link:
http://filippo.io/Heartbleed/

It looks like yahoo.com is still vulnerable

You have to scan the same website several times with this tool to reliably detect the vulnerability; a single clean result does not prove the site is patched.
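Single-shot checkers of the era gave false negatives (timeouts, or load balancers rotating between patched and unpatched backends), so a probe is only meaningful in aggregate. A hedged sketch, where `check` stands in for a hypothetical single-attempt Heartbleed test:

```python
import random

def probe_until_detected(check, attempts=10):
    # Report "vulnerable" if ANY attempt observes a leak; only a run of
    # clean attempts suggests (but does not prove) the host is patched.
    return any(check() for _ in range(attempts))

# Simulate a flaky checker that detects the leak only ~40% of the time.
random.seed(1)
flaky = lambda: random.random() < 0.4
assert probe_until_detected(flaky) is True
```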
 
Upvote
0 (0 / 0)

armwt

Ars Legatus Legionis
18,215
Moderator
I know a few people who were crowing yesterday about Microsoft not being affected… while IIS wasn't, I learned of a potentially major issue for large "Enterprise" customers this AM. The version of OpenSSL used by McAfee in their ePolicy Orchestrator was vulnerable… I expect it is in use in some *LARGE* corporate environments.

I think I'd be more worried about certificate issues in this case… an attacker could redirect traffic to a fake ePO instance, update their antivirus DATs and any other information being deployed, leaving client systems vulnerable to just about anything the attacker wanted.
 
Upvote
0 (1 / -1)

rtechie

Ars Scholae Palatinae
1,171
[url=http://meincmagazine.com/civis/viewtopic.php?p=26615153#p26615153:2a11bg61 said:
kliu0x52[/url]":2a11bg61]While that's true for code, they could've privately contacted the security teams at places like Yahoo that serve millions of people.
I've had to report vulnerabilities before and it's a lot harder to track down the right person privately than you think. You're "random guy off the internet" and you want to talk to the head of security at a major corporation, most companies don't make their org charts public. So you have to go through the public PR channel which has no idea what you're talking about.

And if you do contact privately, how long do you wait for a fix? Days? Months? Years? Until they fix it?

In practice this just doesn't work. There are too many people working on too many projects to track down the right person to report to. Even if you report vulnerabilities publicly a lot of stuff STILL doesn't get patched because nobody's paying attention to the security sites.
 
Upvote
2 (2 / 0)
[url=http://meincmagazine.com/civis/viewtopic.php?p=26615119#p26615119:371ymoeb said:
traveller[/url]":371ymoeb]
What is bad is not the severity of this issue but the disclosure: who was the idiot that made this public before vendors like Red Hat could even release patches?

Honestly, this issue was published before people could even patch their servers, which makes me wonder why they did not give at least seven days until the majority of servers were patched. In particular for a flaw of this severity, one so easy to exploit by just connecting to a web server and sending some crafted data to it.

Was it not enough for security researchers to say there was a security hole and patch it first, instead of publicly explaining the exploit vector? Ars is only covering it after it was already public, but the original people who discovered this and leaked it were amazingly irresponsible and have done huge damage to the Internet.

I don't think you understand how this works. The exploits that were floating around within hours of the announcement were created from the patch itself, not because the researchers revealed too much information. They released the minimum information necessary, which by itself was by no means enough to do anything. The OpenSSL repositories are, by their very nature, public, so the moment the bug was fixed everybody was able to see exactly what it is and people were writing exploits based on that.

Similar to how some of the worst attacks on Windows systems happen right after patch updates:
MS fixes a big security hole and pushes the fix, but attackers then read those fixes and design exploits based on those holes.
Why does that work?
Because people are not vigilant about updating...
 
Upvote
0 (0 / 0)