Cybersecurity as we know it will radically change when quantum computing takes over

BY VIVEK WADHWA -

“Spooky action at a distance” is how Albert Einstein described one of the key principles of quantum mechanics: entanglement.  Entanglement occurs when two particles become related such that they can coordinate their properties instantly even across a galaxy. Think of wormholes in space or Star Trek transporters that beam atoms to distant locations.

READ MORE ON SINGULARITY HUB

Week 4, Day 3: How Password Storage Actually Works

User authentication is a big, big deal. It seems like every other day there’s some data breach at an internet giant. 

I wanted to talk in this blog post a little bit about how password storage and authentication actually works on the Internet. It’s an interesting topic, and one that I didn’t know much about before starting at App Academy.


How Password Storage Actually Works

Most people think that when they log into their favorite website, it goes something like this:

  1. They send the server their password
  2. Server receives password and compares it to their recorded password
  3. If it’s a match, the server lets them in

That’s not at all what happens on any remotely competent website. You might guess: maybe the passwords are encrypted, and then later decrypted when the server checks them? I used to think that, but it’s wrong as well.

The passwords are actually hashed. What does it mean for a password to be hashed? It means that it’s run through a one-way, irreversible filter which can turn any piece of data into a random-looking string. 

That’s essentially what a hash function is: for any input data, it produces a random-looking string. But two points are important. 1) A good hash function will, for all practical purposes, never produce the same string for two different inputs, and 2) it’ll produce a completely different string if the input is changed even slightly. So the hashed version of “password” might be “aslmdp20824” and the hashed version of “passworD” might be “64t09u02g.”

Note: this is not a code. It’s not transforming each letter by shifting it a certain amount or anything like that. A code is reversible; by definition, if you know how a code works, you can decode it. A hashed password is irreversible. The output is effectively random. “password” will always produce “aslmdp20824” if you run it through this function, but there’s no realistic way to get from the hashed password back to the original.
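To make this concrete, here is a minimal sketch in Python using SHA-256 from the standard library. (The hashes in the prose, like “aslmdp20824,” are made-up illustrations, and for real password storage you’d want a deliberately slow hash like bcrypt or scrypt rather than plain SHA-256.)

```python
import hashlib

def hash_password(password):
    # Hash the password with SHA-256; the same input always yields the
    # same 64-character hex string, and the function is one-way.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# Same input, same output, every time:
print(hash_password("password") == hash_password("password"))  # True

# Change one letter and the output is completely different:
print(hash_password("password") == hash_password("passworD"))  # False
```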

This is important. In our database, we’re never actually storing any passwords, nor are we storing encrypted passwords (i.e., password “codes”). We’re storing hashed passwords. That means there’s no way to get from a hashed password back to the original password. 

So how do we check if the typed password matches the stored one? When someone sends our website their password, we run the password they give us through that exact same hash function, and then compare the output to the hash we saved in our database. Remember, hash functions always produce the same output for the same input. So if those hashes match, we let them in.
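That signup-and-login flow can be sketched with a toy in-memory store. (The `users` dict and the function names here are hypothetical, not any particular framework’s API, and SHA-256 again stands in for a proper slow password hash.)

```python
import hashlib

users = {}  # hypothetical "database": username -> hashed password

def hash_password(password):
    # SHA-256 for illustration; real systems should use bcrypt/scrypt/argon2.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def register(username, password):
    # Only the hash is stored; the plaintext is discarded immediately.
    users[username] = hash_password(password)

def login(username, password):
    # Hash the submitted password and compare it to the stored hash.
    return users.get(username) == hash_password(password)

register("alice", "hunter2")
print(login("alice", "hunter2"))  # True
print(login("alice", "wrong"))    # False
```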

Their real password isn’t saved anywhere in the database; it’s simply used in the business logic of our app and then immediately discarded. And here’s why this matters:


Let’s say our database gets stolen.

A team of tireless Chinese hackers has exploited a vulnerability in our hardware and stolen a copy of our database. As we speak, they’re hard at work trying to break the passwords of all of our users so that they can wreak havoc.

In our mind’s eye, we imagine the moment when they finally crack our database encryption. There are “oohs” and “ahhs” (or their Chinese equivalent) as they crowd around a dimly-lit screen and stare at a giant table. A giant table of… hashed passwords.

What can they do with hashed passwords?

Not a lot. Remember, good hashing algorithms are computationally infeasible to reverse. Since the hackers can’t go from a hashed password back to the original password, they’re forced to go the other way. They have to guess a password, hash it, and see if the output matches what’s in the database. With a good (i.e., computationally expensive) hashing algorithm, this will take them a lot of time, a lot of guesses, and hundreds of thousands of dollars in computing power to crack even a moderate number of passwords.

Or, at least, it should.

The problem is that it won’t. At least, not if that’s the only precaution we’ve taken. In fact, within minutes of opening our hashed database, this team of Chinese hackers will have already broken hundreds of passwords. Our security will have failed.

Why?


Rainbow Tables are why.

I thought we hashed all of our passwords? Well, we did. The problem is not with our hashing algorithms, but with our users. Most of our users’ passwords are among the most common passwords. Hundreds of them use passwords like “qwerty123” or “12345678.” In fact, 91% of users use passwords that are within the 1,000 most common passwords, and 99.8% of users use ones within the 10,000 most common (source).

Much like us, our users are incompetent and can’t be trusted to get anything right.

So here’s the thing: these hackers know that many people use passwords like “qwerty123.” If they were smart, they and their hacker buddies would join forces and compute, in advance, a table of the hashes of the most common passwords. Then anytime anyone cracks a database, the hackers can just check for hash values that show up anywhere in that giant pre-computed table, and they instantly know which commonly used password is in use in that database.
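In miniature, that pre-computed table is just a reverse lookup from hash back to password. (Real rainbow tables use hash chains to trade computation for storage, but a plain dictionary conveys the idea; the password list here is illustrative.)

```python
import hashlib

def hash_password(password):
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# Pre-compute hashes of common passwords, mapping hash -> password.
common_passwords = ["qwerty123", "12345678", "password", "letmein"]
rainbow_table = {hash_password(p): p for p in common_passwords}

# Any stolen, unsalted hash of a common password now cracks instantly:
stolen_hash = hash_password("qwerty123")
print(rainbow_table.get(stolen_hash))  # qwerty123
```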

But where would a rogue team of Chinese hackers find something as amazing as that? Answer: by Googling it. They’re available on the Internet for free, and are fairly well-maintained at that. They’re known as rainbow tables, and they are the bane of poorly secured websites. And right now, that’s ours. If anyone stole our database, their rainbow tables would grind us into the dust.

Here’s where salting comes in.


Salting Passwords

Salting is simply taking an arbitrary (ideally random) string and adding it to each password before running it through our hashing function. So let’s say that our simplistic salt is a number based on the seconds hand of the clock when our user signs up (0–59). So if you sign up at 2:52:42 PM, your salt is 42. We prepend that 42 to your password, and then we run your (salt + password) through our hashing algorithm.

Remember, slightly different inputs create totally different outputs. That’s what saves us here. Your “42qwerty123” will produce a completely different hash from “qwerty123.” And your dumb buddy who also uses “qwerty123” gets a different salt, so his input becomes “16qwerty123,” which also produces a completely different hash.

The salt is still stored in the database. So when our hackers get access to the database, won’t they see the salts and just break our passwords then?

No. The rainbow table is what gave our hackers the ability to break passwords in bulk, and that table has no entries for “12qwerty123” or “45qwerty123,” or any of the other salted permutations of the most popular passwords. (And of course, in a real database, we’d use a much more complex salt than a two-digit number.)
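A minimal sketch of salted storage and verification, using a random 16-byte salt rather than the two-digit clock example (function names are illustrative, and again a slow hash like bcrypt would replace SHA-256 in practice):

```python
import hashlib
import os

def hash_password(salt, password):
    # Prepend the salt to the password before hashing, like "42qwerty123".
    return hashlib.sha256((salt + password).encode("utf-8")).hexdigest()

def register(password):
    # A random per-user salt, stored in the database alongside the hash.
    salt = os.urandom(16).hex()
    return salt, hash_password(salt, password)

def verify(salt, stored_hash, attempt):
    # Re-apply the stored salt to the attempt and compare hashes.
    return hash_password(salt, attempt) == stored_hash

salt_a, hash_a = register("qwerty123")
salt_b, hash_b = register("qwerty123")
print(hash_a != hash_b)                     # same password, different hashes
print(verify(salt_a, hash_a, "qwerty123"))  # True
```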

So their rainbow tables are useless. They’re back to guessing passwords, adding them to the salt, and trying to hash them to see if it matches what’s saved in the database. So for each user, they’d take the salt, and go down the list of most common passwords, adding the salt to the password to try to see if it matches.

And then, even if they’re finally able to crack one, they have to do the same thing for every single password in the database, even if the users’ passwords were exactly the same.

This is what’s known in expert circles as “winning.”

And that, in a nutshell, is how modern password authentication works. TL;DR: People are dumb.

Hope you learned something!


Haseeb Qureshi

Whether you call them ‘front doors’ or ‘back doors,’ introducing intentional vulnerabilities into secure products for the government’s use will make those products less secure against other attackers. Every computer security expert that has spoken publicly on this issue agrees on this point, including the government’s own experts.

The Atlantic explores the weird end of the NSA’s phone dragnet in a very enlightening essay:

Earlier this month, a federal appeals court ruled that while the surveillance agency has long claimed to be acting in accordance with Section 215 of the Patriot Act, the text of that law in fact authorizes no such program. The Obama Administration has been executing a policy that the legislature never passed into being.

But the law that doesn’t even authorize the program is set to expire at the end of the month. And so the court reasoned that Congress could let it expire or vote to change it. For this reason, the court declined to issue an order shutting the program down.

President Obama didn’t shut the program down either. One might think the illegality of its ongoing operations would bother him, but he’s effectively punted to Congress too.

Days ago, the House of Representatives acted: they voted overwhelmingly, 338 to 88, “to end the National Security Agency’s mass collection of phone records from millions of Americans with no ties to terrorism,” passing the USA Freedom Act, an effort “to rein in NSA surveillance while renewing key sections of the… Patriot Act.” The bill divided civil libertarians, some of whom thought it didn’t go far enough because the government could still access bulk data held by phone companies.

That brings us to the wee hours of Saturday morning. “After vigorous debate and intense last-minute pressure by Republican leaders, the Senate on Saturday rejected legislation that would end the federal government’s bulk collection of phone records,” The New York Times reports. “With the death of that measure — passed overwhelmingly in the House — senators then scrambled to hastily pass a short-term measure to keep the program from going dark when it expires June 1 but failed.”

Read the entire article.

Will our future Internet be paradise or dystopia?

By Sara Sorcher, CS Monitor, May 21, 2015

What does the perfect Internet look like?

The paradisiacal vision of its future–a scenario Atlantic Council senior fellow Jason Healey calls “Cyber Shangri La”–is one in which the dreams of Silicon Valley come true: New technologies are born and implemented quickly; secure online access is a human right.

There’s also what Mr. Healey, a Passcode columnist, dubs “Clockwork Orange Internet.” In this dystopian future, criminals and nation-states knock down attempts to secure networks and devices; people are afraid of shopping online or communicating freely with friends.

Passcode was the exclusive media partner for an event hosted by the Atlantic Council’s Cyber Statecraft Initiative on Wednesday focusing on alternate realities for the future of the Digital Age. Here are three things we learned from some of the country’s leading thinkers.

1. The future Internet could be fragmented. In one vision of the Internet, the world’s dominant powers take jurisdiction over users in their own countries. If more countries begin shutting down the Internet to prevent activists from organizing, for instance, or mandating that encrypted technology have back doors to enable government access, people’s Internet experiences will be largely driven by the country they’re accessing it from, Healey said.

This could slow down global processes that rely on the Internet–and potentially spark massive economic and trade impacts. “That could really slow down every packet, which stops it from being a little bit of friction to a significant barrier to cross-border trade.”

2. The global paradigm is already shifting. The paradigm spanning back to the earliest days of cyberintelligence has always been that those who have the capability to conduct a destructive cyberattack didn’t have the intent to do so, Healey said. It’s also common thinking that those who had the intent to do so–such as terrorists, for instance–didn’t have the capabilities.

Now, for the first time, Healey said, there are “competent cyber adversaries” in Iran, or even Russia, who might have both the capabilities and willingness to launch a digital attack if political relationships worsen or their economic status weakens. And if that happens, Healey said, the US president may shoot back in cyberspace. “The gloves are going to come off.”

3. The definition of ‘cybersecurity’ could change over time. When everything is connected to the Internet, what’s known today as cybersecurity may be considered simply “security” for everyone in their daily life.

But what does it mean to be secure, asked Steve Weber, a professor at the University of California at Berkeley’s School of Information, when everything’s connected to the Internet? During the cold war, the narrow definition of security was “territorial autonomy, and decisional autonomy for nation-states,” he said. Once the freeze thawed, definitions of security expanded to include, for instance, environmental security and economic security.

So a few years from today, the fact that human life is so dependent on machines may have consequences for the human race beyond just breaches or vulnerabilities. It could, for instance, put at risk large numbers of jobs in developed countries, Weber said. “I think in a few years, we’re going to call that a cybersecurity issue,” he said. Solving it will require a very different set of people and models than those applied to what people currently see as today’s cybersecurity issues.

America’s schizophrenic anti-encryption cybersecurity strategy

If US officials want to improve cybersecurity, why do they attack strong encryption practices?

If America tackled property protection the way that officials approach cybersecurity, we’d see agency heads decrying strong door locks for securing the safety of terrorists, while their counterparts in Congress showcased grand schemes for sharing home invasion information with those same anti-lock federal agencies.

Officials would alternatively scold and sweet-talk the nation’s lockmakers into developing standardized keys tailored to law enforcement’s liking and access.

Locking your own doors with a more secure key — while not yet “illegal,” per se — would be contemptuously frowned upon by the self-styled patriots who insistently squawk that only the guilty should have something to hide.

The poor souls whose ill-defended homes would be inevitably ransacked in the process — unfortunate victims in the critical push for national security, to be sure — would be gently pressured to submit detailed invasion reports to the agencies that insisted on weakening their home defenses in the first place, their naked personal data shuttled away for indeterminate storage in a government database — for “analysis” and “sharing” and whatever else is determined to be useful.

It is completely crazy, but this is the dissonant strategy that our best and brightest in the federal government have for some reason chosen to pursue.


With the new model, access depends solely on the device and user credentials, regardless of the employee’s network location. That means employee access is treated the same whether the user is at a corporate office, at home or in a coffee shop. This setup does away with the conventional virtual private network connection into the corporate network. It also encrypts employee connections to corporate applications, even when an employee is connecting from a Google building.

With this approach, trust is moved from the network level to the device level.

From now on...

I just got rid of the feature where people can write me anonymously. If someone has something to say to me, then they should show their face, so I can hold them accountable for their actions, if necessary.

People are welcome to disagree with me. Since I am putting my opinions on a public forum, people are welcome to comment on them, so long as they do not imply or directly voice intentions of physically harming me, my friends, family or my cats. 

Going forward, all threats will be forwarded to law enforcement. Telling me that I need to “worry about my top lip” is inappropriate and is implying violence. Behavior like that will not be tolerated in the future.
http://tinyurl.com/lz7ry5v

People are allowed to say what they want, even if I do not like it. I respect the First Amendment. The First Amendment is a two-way street. I also have the right to speak my mind, as long as I do not threaten violence, which, as of now, I have never done. I refuse to allow others to use threats of violence to silence me.

People may dislike what I say, but I still have the right to say it. They have two options: Ignore it. Or dispute my comments with me. (Ideally in a respectful manner) 

This is Tumblr. I don’t take things too seriously on this website. (And in my opinion, nor should anyone else.) If people call me terrible names and say I’m ugly, I don’t care. When I start to care is when people start implying violence against me because I said something that they do not like.

US Govt proposal to classify Security Tools as Weapons of War w/ Export Regulations.

This would be devastating to US business and security products.

In 2013, the Wassenaar Arrangement (WA) agreed to add the following to its list of dual-use goods: systems, equipment or components specially designed for the generation, operation or delivery of, or communication with, intrusion software; software specially designed or modified for the development or production of such systems, equipment or components;…


The Federal Reserve Bank of St. Louis confirmed on Tuesday that hackers had successfully attacked the bank, redirecting users of its online research services to fake websites set up by the attackers…

On Tuesday, security experts said that the hackers could have gained valuable personal information from the attack. Attackers may have been able to glean email addresses and passwords from bankers and currency traders in the attack, for example, information that could be used for a more sophisticated attack on more valuable websites.

Aerial robots weren’t expected to become part of the Internet of Things, but now the Navy needs to protect them from cyber threats

The Navy says it’s not sure what kind of cyber threats its drones, sensors and missiles are up against. That’s because aerial weapons systems were not expected to become part of the so-called Internet of Things, the present-day entanglement of networked appliances, transportation systems and other data-infused objects.

So, the Navy has kicked off a project to collaborate with outside scientists on research and development that will help protect the branch’s flying munitions from hackers, according to the agency. A key aim is to ensure assets can bounce back in the event of a cyber strike.

“There is a paucity of cyber R&D and threat information for weapon systems and supporting systems that directly or indirectly ‘connect’ to weapon systems,” Naval Air Systems Command contracting documents state. Such tools include infection-prone devices such as laptops.

The effort runs parallel to the Navy’s five-year cybersecurity strategic plan issued earlier this month by Navy Fleet Cyber Command, the branch’s central cybersecurity division.

On Monday evening, a Fleet Cyber Command official told Nextgov the effort aligns with the plan’s first goal: to reduce the Navy’s attack surface, partly by building security into systems before they go to production.

The attack surface imperiling the military includes known, unknown and potential vulnerabilities across all network infrastructure.

“A weapons system or warfighting platform cannot be susceptible to a cyber-intrusion or attack, because that obviously risks mission outcome and much more,” Fleet Cyber Command spokesman Lt. Cmdr. Joseph R. Holstead said in an email. The Air Systems Command initiative “highlights the Navy’s warfighting missions’ dependence on cyberspace and cybersecurity with respect to mission assurance.”

There have been few studies on threats introduced by linking industrial control systems and aerial vehicles, such as launch and recovery equipment, the Navy says. The command’s cyber vulnerabilities likely even run as deep as the software and configurations of weapons.

Through prototyping, officials expect to learn how to block intruders from compromising airborne systems and enable the equipment “to survive and continue to operate during close quarters battle,” states the notice for interested researchers released Friday.

Entrants have until May of next year to submit a research abstract. The competition for awards will consist of two stages: an evaluation of all submitted abstracts, and then an evaluation of full proposals from selected abstracts. Participants will learn whether they are eligible for the second stage within three months after entering. The announcement does not include a contract ceiling or time period for the work.

The project will start with scientific research and end with a deployment of operational technology in real-world mission conditions.  

A few years ago, the branch’s widely used Navy Marine Corps Intranet fell victim to hackers, who were reportedly linked to Iran. That attack spurred the launch of a series of cyber defense game plans, including the new Task Force Cyber Awakening, a year-long effort to shore up computer hardware and software. The Fleet Cyber Command, Naval Air Systems effort and the task force are moving forward in coordination, Holstead said.
