📰 🔐 Complexity, Assurance, and Airplanes

Recent tweets from the President have brought the issue of complexity to the front of the news cycle. In response to the second crash of a Boeing 737 Max 8 Jet, the President tweeted:

Airplanes are becoming far too complex to fly. Pilots are no longer needed, but rather computer scientists from MIT. I see it all the time in many products. Always seeking to go one unnecessary step further, when often old and simpler is far better. Split second decisions are needed, and the complexity creates danger. All of this for great cost yet very little gain. I don’t know about you, but I don’t want Albert Einstein to be my pilot. I want great flying professionals that are allowed to easily and quickly take control of a plane!

So is the President right or wrong? Before I answer that, let’s explore the question of complexity and the risk that it brings. Any cybersecurity expert worth their salt can tell you the three characteristics of a reference monitor:

  1. Always Invoked / Non-Bypassable.
  2. Tamper-proof.
  3. Never eat at a place called “Mom’s.” Small enough to be easily understood and evaluated.

Why is that last point there? Simply, because complexity is the enemy of assurance. We’ve all heard of “feeping creaturism” — the way that software vendors keep adding features to sell a product while not fixing known problems or making the product more reliable. This is because adding features sells products, while adding assurance does not. But the more features and capabilities you put into the code, the less assurance you have in its correctness. Logically, this makes a lot of sense: each feature has multiple inputs and options, each creating a new path through the code, and very quickly it becomes impossible to test all code paths. Simpler code means fewer code paths, meaning more reliability. Complex code means code that wasn’t completely tested in every possible situation, and as Hoare pointed out, once you find the first bug, you have an infinite number.
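The path-explosion argument can be made concrete with a quick, hypothetical sketch: even if every feature were just a single on/off option, the number of distinct configurations a tester would need to cover doubles with each one.

```python
from itertools import product

# Hypothetical sketch: treat each feature as a single boolean option and
# enumerate every distinct configuration a tester would need to cover.
def count_configurations(num_features: int) -> int:
    return sum(1 for _ in product([False, True], repeat=num_features))

print(count_configurations(3))   # 8
print(count_configurations(20))  # 1048576
```

Twenty tiny features already mean over a million configurations — and real features have far more than two settings each — so exhaustive testing becomes impossible almost immediately.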

We are adding more and more complexity to the software we use every day. Remember the Toyota unintended acceleration problem? That turned out to be a software bug (which Toyota claimed was a floor mat problem, but they updated the software at the same time) arising from a rare, complex interaction. Cars today have even more complex software, what with all the sensors monitoring things for safety. Most of the time these work, but there have been cases where problems have been identified due to software errors. Subaru, in fact, just had a recall to fix the software on the head unit related to the rear camera.

Airplane software is equally complex. When the Airbus Jets first came out, they were revolutionary in that they were “fly-by-wire”. In other words, instead of multiple physical hydraulic lines to control the rudders and wing surfaces, there was an electrical signal that went to the other end of the plane. Many people didn’t trust fly-by-wire and only flew the Boeing. It took multiple flights to convince the public of the safety of the systems, and now all modern jets use fly-by-wire.

So, are airplanes too complex to fly? Airplanes are controlled by software, and that software is very complex. But statistically, airplanes are safer than they were in the days when there were only simple physical controls. Similarly, cars are more complex, but they are statistically safer than vehicles from the 1950s and 1960s.

But that doesn’t mean the complexity doesn’t cause problems. In fact, it looks like Boeing is already adjusting the systems in the Max series: instead of just using one sensor to control nose down, they are using multiple sensors.

Now, let’s go to the second part of Trump’s statement: do you need a computer scientist from MIT to fly a plane? Flying a jet — even an older one like a Boeing 707 — is very different from flying a private two-seater Cessna. The number of systems that must be monitored is immense, and you need a strong understanding of the physics of flight. You don’t need to be a computer scientist — after all, you’re not programming the systems — but you do need to be comfortable with technology and have a strong understanding of physics. Given the choice, you want a pilot with lots of experience (and no mental problems) flying the plane, not a rookie MIT computer scientist. However, you might want that scientist writing the software.

Lastly, there is one other assertion in Trump’s tweet we need to address: “old and simpler is far better.” No, it isn’t. Old and simpler — both in technology and people — cannot grasp the complexity of today’s split second world. You want someone nimble, who truly has a deep understanding of the system. You want someone with years of experience with that technology at the helm.

Yes, those last two sentences were an allusion. As was the point that you need a pilot with no mental problems.


📰 🔐 Cybersecurity: News and Sausage to Chew Upon

I haven’t done a news chum post in a while, and the articles of interest are accumulating. So here’s a collection of articles that caught my eye, all dealing with cybersecurity:

  • Password Managers. Recently, there was an article about vulnerabilities related to common password managers, the gist of which was: All password managers are vulnerable to attack. Many people took that as an excuse to trigger their risk aversion, and to run away from password managers. Bad thing to do. The attacks in question all required physical access to the machine in question. Vaults in the cloud were safe. Further, if you had physical access to the machine, then a complicated attack to look at a residual password in a buffer is the least of your worries. This is a clear example of people not understanding the risks. The upshot: Use password managers. They make it so that you have longer, more complex, passwords in use; they also encourage the use of one password, unpredictable, per site. They are much more secure than algorithmic generation by humans, or writing things down.
  • Choosing Good Passwords. Another password related article looked at the surprisingly common password “ji32k7au4a83”. This is a good example of why a password that looks strong might not be. In this case, the password turned out to be the ASCII representation of the characters you get when you type the Chinese for “My Password” on a specific Taiwanese keyboard. I could imagine similar problems for Hangul, or possibly other representations. This is yet another argument for using password generators (I recommend Lastpass, but other good tools are the XKpasswd generator and the nonsense word generator… and for good measure, the username generator from Lastpass, if you don’t want to have the same username everywhere).
  • I Am Not A Robot. Some of us remember the days when everyone used a CAPTCHA that required you to recognize letters and enter them in order to prove that you were not a bot. But you don’t see those very much anymore. You may see tests that require you to recognize what is in images, but even those are getting fewer. That’s because it is getting harder and harder to prove you are not a robot, and CAPTCHAs are having trouble catching up. Some days, it seems that the only thing computers can’t reliably recognize is porn (but then again, neither can humans, and imagine the CAPTCHAs). What you do see is a simple checkbox that says “I am Not a Robot”. But why does something so simple work? There’s actually a great explanation, which involves all the information your browser collects, and all those tracking cookies you don’t think about, that a bot does not have. Who knew?
  • Forgetting the Past. Recently, Gene Spafford (a grey-beard I know well from the days of USENET) visited the RSA conference. His reaction was very interesting, and reflected the feeling that many of us grey-beards and CBGs and other professional old-codger terms have: the youth of the cyber industry have forgotten what was done in the past. I’ll note that luckily, the people behind the Annual Computer Security Applications Conference haven’t, and we are starting to plan the 2019 Conference (web pages should be updated soon) that will include both new research, and reach-back into the relevant history. We’ll be doing our 2nd year in San Juan PR in December; mark your calendars now.
  • Listening and Privacy. We often use our computers thinking we’re the only ones who see what we are typing, just as we talk out in public as if we are the only one listening. Both are pretty far from the truth. Hopefully, you know that most public wireless access is not secure, and the best way to secure it is through the use of a VPN. Virtual Private Networks make sure that communication between your computer and a trusted endpoint are secured, and claim to provide security from that endpoint to your ultimate destination on the web. How much can you trust them? It depends on the VPN you choose, as some are better for privacy than others. But what about the real world? When you discuss things on the bus or the subway, how secure are you? Not very. One instructor gave their students an interesting assignment: find out as much as you can about that stranger sitting next to you on the bus, using only public information. They found out quite a bit by listening to the public side of phone conversations, looking at visible screens, and noticing other aspects of the person. Sherlock Holmes in the wild. But that’s not the only risk. It turns out that your hard disk might be eavesdropping as well. Sound waves create movement in disk heads, which can be monitored by sensors in the disk. So when will those concerned about eavesdropping move to SSDs to get rid of that risk?
  • AntiVaxxers and Cybersecurity. A meme has been going around asking why we are willing to inoculate our computers against viruses and malware, but not our children. As memes go, it makes an interesting point — but misses some of the differences between computers and the human immune system. Vaccines are a great example of how we train our immune system to work for us by exposing it to the potential malware — in a neutered form — to train it to recognize the real thing. Traditionally, humans have been great at this: that’s why babies crawl around and put things into their mouths — the exposure makes their immune systems stronger. In fact, our current antiseptic and germaphobic environment has both weakened our immune response, and trained it to overreact. So yes, pick your nose and eat it, but not in public where anyone can see you. But I digress. Think about this in terms of computers. We install an anti-virus or anti-malware program; this is the equivalent of installing an immune system in our computer. But the success of that system depends on the collection of malware signatures that it downloads regularly. These signatures are benign snippets of code DNA that allow for safe identification of dangerous code. Exposure to those benign snippets is vital if our computer immune systems are to keep working. Similarly, vaccines allow our natural anti-virus mechanisms to recognize the malware that tries to invade us — and more importantly, they protect those systems that — due to specialized wetware — cannot install the anti-virus. In short: Vaccinate your kids and yourself to protect those around you, as well as yourself.
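To tie the two password items above together: the whole job of a password manager’s generator is to draw characters from a cryptographically secure random source rather than from human habits or keyboard layouts. Here is a minimal sketch of that idea in Python — an illustration of the principle, not any particular product’s algorithm:

```python
import secrets
import string

# Minimal sketch of a password generator: characters are drawn with a
# cryptographically secure RNG, so the result has no keyboard-layout or
# language structure (like "ji32k7au4a83") for an attacker to guess.
def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different, unpredictable output every run
```

The `secrets` module exists precisely for this kind of security-sensitive randomness; the ordinary `random` module is predictable and should never be used for passwords.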


📰 Securing the Future

And the cleaning out of the accumulated news chum links continue. Here’s a collection of links related to cybersecurity, but the concern here is not where you think it might be:

  • When Identity Thieves Hack Your Accountant. We are all concerned about the online services that we used, and what might happen when they are attacked. But have you thought about the human service providers you use? Your accountant. Your auto repair shop. Your financial advisor. They use services too, and these services have your information. Hint: The adversaries have thought about it.
  • Why Cities Are So Bad at Cybersecurity. Many folks are aware of the US government’s efforts in cybersecurity, and at least the awareness is growing. But what about your state and local governments? How cyberaware are they? The answer, unfortunately, may not be as good as you might like. Now think about this: most of our critical systems are at the local level: power, elections, traffic control, ….
  • Transportation is now the third most vulnerable sector exposed to cyberattacks. The previous item connects directly to this. When we think about cybersecurity, we think about our banks, our national security systems. But one of our most vulnerable sectors is transportation — from automated traffic systems to air traffic control to automated trains to the computers in our cars. Just imagine attacks on all those black Ford SUVs carrying government officials. There’s a lot of risk there.
  • 4 Mistakes Security Pros Make and how a Wellness Model can Help. When we think security, we think certification and border protection. But a holistic wellness model is a great way to think about the subject. According to the National Wellness Institute, “wellness is multidimensional and holistic, encompassing lifestyle, mental and spiritual well-being, and the environment.” The model surrounding wellness is essentially a conscious effort to help an individual become self-directed to achieve their healthiest state, based on awareness and choice.   Wellness also understand that you don’t get well at once; it is incremental improvement.
  • Don’t Give Away Historic Details About Yourself. One way to start getting well is to stop answering those quizzes about yourself. Giving away historical data helps adversaries in so many ways: from giving hints on passwords (if you’re not using a random password generator) to giving answers to security questions for password recovery. Think before you answer a quiz.

Risk and the Theatre

Recently, after one of the numerous Fringe shows we’ve seen, I was talking to my wife. I opined that if I ever put on a Fringe show, it would likely be me getting up and doing a short tutorial on the NIST Risk Management Framework using PowerPoint slides, and it would probably land with a thud. My wife, however, thought that with the right director, it could work…

This started me thinking. What if I were more than just an audience member? What if?

The idea has been floating around and taking up space in my head, so I want to get it down so I can move forward. The notion is this: There have actually been very few plays — and certainly no musicals — that have explored the area of cybersecurity. There was Dean Cameron’s Nigerian Spam Scam Scam, a great two-person piece that we presented at the Annual Computer Security Applications Conference (ACSAC) in 2015 (and discovered at HFF15). There was the wonderful play The High Assurance Brake Job: A Cautionary Tale in Five Scenes by Kenneth Olthoff, presented at the New Security Paradigms Workshop in 1999 (and if you haven’t read it, follow the link — you should). But that’s it. Could we create a play that imparted fundamental cybersecurity notions — risk, assurance, resiliency, social engineering — to a non-technical audience using a form other than a PowerPoint presentation? Could we create something with some staying power? How do you take technical notions and make them broadly accessible, in a two-act, multi-scene structure with a protagonist who goes on some form of journey?

I’ve got some ideas I’d like to explore, especially in the areas of how people are incredibly bad at assessing risk* and the difference between being risk-averse and risk-aware. This could be a significant contribution: we could make people more cyber-aware while entertaining them. Think of it as information security refresher training, but in a large building in a central part of town, in a dark room, as part of a play, with a lot of people listening, who have all paid a great deal to get in. Or a storefront during Fringe.

However, I know my limitations. I’m not a playwright — my writing is limited to blog posts and 5,000-page interpretations of government documents. I’m not an actor, although if I know my material I can give a mean tutorial. I am, however, an idea person. I come up with ideas, solutions, and architectures all the time. If I could find someone who actually knows how to write for the stage, perhaps we could collaborate and turn this idea into something (with the caveat that, as this is related to my real-life job, I might have to clear it through them — but as it is at a high level with no specifics, that’s likely not a problem).

So, if you know a potential writer who finds this notion interesting and might want to talk to me about it (or you are a writer), please let me know.** Who knows? Perhaps one day I’ll actually be more than a Fringe audience.

——————

*: Here’s my typical example: Would you rather let your child visit a friend’s house that had an unlocked gun safe, or a house with a pool? Most people fear the gun, but the pool is much, much more dangerous, as this week’s news shows. There is intense fear about MS13, but the actual number of MS13 members attempting to come across the border is low when viewed across all immigrants making the attempt, and the likelihood that a single MS13 member will attack a particular American is very, very low. A third example is how it is much safer to fly than to drive, yet people are more afraid of flying. The list goes on and on.

**: I should note that right now this is exploratory. I have no funds to commit, but when is there funding in theatre :-). 


Who Are You? Identify Yourself!

Establishing your identity? Seems a simple thing, but it is quite complex. In the past, when our social circles were smaller, you could do it by sight or with a letter of recommendation. But today it is much harder. Here is a collection of articles all dealing with identity, and how it is changing.


Postcards in Pencil

In light of the Cambridge Analytica incident and the loss of privacy on Facebook, people have been going around deleting their Facebooks, left and right, for fear that their information has been released to the world. Never mind, of course, that they willingly gave up that information. This is all Facebook’s fault, and Facebook must pay.

Take a deep breath, world. This is nothing new. We’re dealing with postcards in pencil again. For those unfamiliar with the phrase, that was the analogy used to describe email to people. It was a postcard because anyone could read what you wrote. It was in pencil because anyone could change what you wrote without leaving much of a trace. Thinking of email as postcards in pencil, would you put sensitive information there?

The issue with Facebook isn’t a new one. It was there in the days of LiveJournal. It was there in the days of MySpace. If you don’t think of your web space as a postcard visible to all, even with controls, then it is you giving your information away, not the website.

Further, if you are participating in all these memes and quizzes that ask for personal information, and just think they are fun, you are naive. Why would a free quiz want personal information?  Why would a free quiz want access to your data and information? Remember the key adage: If you are getting it for free, you are not the customer, you are the product that is being sold.

The problem is not with Facebook, per se. It is with users who did not understand what they were doing, and had the belief that their information was secure … that had the belief that those applications weren’t going to use their data. They gave away their data due to their stupidity and lack of knowledge, and now want to blame someone else.

Facebook is perfectly safe to use, if and only if you treat it as 100% public. If and only if you only put information on there that can be publicly disclosed. If and only if you are constantly alert to what information you are giving out. Oh, and be forewarned: there is information you are giving out even when you aren’t entering data. Every time you linger on an image, every time you visit a website, every time you click to open an article, you are giving away information about your interests that will be sold. Facebook is a free service. Remember what I said about getting stuff for free.

Delete your Facebook if you want, and run away and make the same mistakes on another service. Alternatively, just perhaps, you can understand the online world and how it markets you, and be much more careful about what you say and do online.

[ETA: Of course, society and Facebook itself make it difficult to leave Facebook. Just think of all the data you would need to reenter, all those logins to third-party sites you do via FB that you would have to recreate anew (including their data), all the relationships you would need to reestablish on other services. There’s just too much inertia and friction to deal with.]


Be Careful Out There

As I continue to clear out the news chum, here are some articles related to security, trust, safety, and cyber. In short: be worried, be suspicious, and everything is not as it seems:

  • Can You Believe Your Eyes? We’ve all been taught that “seeing is believing”. But is it? We live in an era of forgeries: email can be faked (and has been), and videos can be doctored. I’m sure you’ve all heard about “deepfakes”: where AI is used to put a different face on a body in a porn video, creating celebrity porn without the real celebrity. The LA Times has an interesting article on the rise of fake videos and their implications. Just think about this: What damage could a faked video do when it spreads on the internet? How could a fake be used for propaganda purposes? We’ve been given the blessing of technology, but its misuse could be the downfall of society (as the 2016 election has shown, with the Russian manipulation of the US electorate through technology).
  • Financial Scams. The last few years have seen the growth of person-to-person online financial exchanges like Venmo and Zelle. But the scams are growing as well. The services were intended for transfers between people who know and trust each other. There are no safeguards against scammers and fraud, unlike services like PayPal. This is starting to bite people in the butt. Remember: Only Venmo/Zelle funds to someone you know and trust in real life. Once the funds are gone, they are gone.
  • The Green Padlock. Starting in July, the Chrome browser will mark all sites using the original web protocol, HTTP, as insecure. This is because the protocol does not provide end-to-end security. I initially believed that was overkill: there are many static sites with no forms, that only serve as information providers. Why do they need encrypted transport? But a discussion of the issue highlighted the reason behind Google’s actions. Even for such sites, moving to HTTPS provides assurance that the data coming from the site is what is being received by the consumer of information. In other words, it prevents man-in-the-middle attacks to insert false data, advertising, or malware. I’ve taken the steps to secure my site for the highway pages, and will be doing it for subsidiary pages in the coming months.
  • Paying for Security. One of the biggest problems that security has is that it is often invisible. If the mechanisms work, nothing bad happens, and you don’t know it is there. It is like high-quality building codes, which you don’t discover saved your house until everyone else’s house burned down. As such, consumers haven’t wanted to pay for security; they want new features and bells and whistles. Software and hardware vendors couldn’t justify costly new releases that just added security. Luckily, that’s all changing — a new survey shows that consumers now prefer security over convenience. Will things stay that way? Will the convenience of simple facial recognition overtake the security of two-factor authentication? Stay tuned.
  • Fixing Vulnerabilities. Vulnerabilities are on the rise, and keeping up can be hard. Here’s an interesting article that highlights the fact that not all vulnerabilities end up in the CVE/NVD database; thus, relying on that database as your sole source of vulnerability information is a bad idea. For those of us who assess for obvious vulnerabilities, this is an important observation. It is also vitally important to understand that a vulnerability is not the same as a risk. Spectre and Meltdown are good examples. They are vulnerabilities, and their patches are causing incredible slowdowns, but how easy are they to exploit, and what can they leak? A determined adversary will find a way to exploit anything, but the casual “script kiddie” hacker may not find much utility. The same, by the way, is true of gun laws. Gun control will affect law-abiding folk, but the determined adversary will find a way. That’s why it is important not only to address the symptom of the problem — the gun control, the identified vulnerability — but to address the source of the problem. We need to engineer in safety and security in all of our systems — human and technical — from day 0 to identify and prevent problems BEFORE they happen.
  • Safety While Traveling. Here’s an interesting article from the folks at Lastpass on how to use your password manager to make your life safer while traveling.  There are some interesting notions here, including keeping copies of important travel documents in your password vault, so that if you lose them, you have that information. Other ideas include storing the credit card loss and fraud department phone numbers in your vault with your credit card form fills, so if you lose the physical card, you can easily call and report it.
  • Pre-Register to Prevent Fraud. An interesting reminder to register and create your account at SSA.gov, before the bad guys do it for you.
  • Securing the Internet of Things. One increasing risk is the Internet of Things. More and more, everything is being connected to the Internet. Often, what is connected are low-criticality devices (solar panels, refrigerators, light bulbs, dishwashers) with poor security protocols. Miscreants can then use those devices as stepping stones to gain a trusted position in a network and jump to a more critical site, or to host a botnet or cryptocurrency mining operation. Luckily, NIST is working on standards for IoT security — and those standards are out in draft for comment.
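On the green-padlock item above: the reason HTTPS blocks man-in-the-middle content injection is that the client refuses to exchange any data until the server’s certificate checks out. Python’s standard library makes the defaults a browser-like client relies on visible — this is a sketch of the mechanism, not a complete client:

```python
import ssl

# ssl.create_default_context() is configured for exactly the guarantees
# discussed above: it demands a certificate chained to a trusted root
# (CERT_REQUIRED) and checks that the certificate matches the hostname,
# so an interloper cannot silently substitute its own content.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Wrapping a TCP socket with context.wrap_socket(sock, server_hostname=...)
# would perform those checks before any application data flows; plain
# HTTP has no equivalent step, which is why Chrome flags it as insecure.
```

Plain HTTP skips all of this, so anyone on the path — a rogue hotspot, a meddling ISP — can rewrite what you see, even on a “static” informational site.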


CyberSecurity News of Note

Here’s the last of the news chum collections for this morning. This one has to do with safety and security.

  • Tiny Dots and Phish. Hopefully, you’ve been getting trained on how to recognize phishing threats, and how to distrust links in email or on websites. But it’s getting even trickier, as this article notes. Miscreants are using characters in other character sets that ļȯоķ like other characters. Hint: Always look at how addresses look when you hover over them, and even then be suspicious.
  • Complex Passwords Don’t Solve All Problems. So you’ve gotten smart: you are using complex passwords everywhere. But every solution contains a problem: reusing complex passwords can give your identity away. Research showed that the rarer your password is, the more it “uniquely identifies the person who uses it. If a person uses the same unique password with multiple accounts, then that password can be used as a digital fingerprint to link those accounts.” Although this is not something previously unknown, there seems to be a lack of awareness about the practice. Remember: complex passwords, never reused, and use a password manager.
  • Two Factor Authentication. Using 2FA can also help. Here’s a handy guide on how to set it up on most major websites. Here’s a list of all major websites, and whether they support 2FA.
  • Protecting Your Social Security. This article from Brian Krebs explores abuse of the social security system, and contains some advice I hadn’t known: go create your account at SSA.gov now to protect yourself.  That’s something I need to do; I tried to do it this morning but it wouldn’t accept the proof for the upgraded account, and I have to (a) find a previous year’s W2 and (b) wait 24 hours to try again.
  • Predicting Problems. A few articles on predictive algorithms. One explores whether predictive algorithms should be part of public policy.  Essentially, should they have a hand in shaping jail sentences and predicting public policies? Government agencies are now using algorithms and data mining to predict outcomes and behaviors in individuals, and to aid decision-making. In a cyber-vein, there are calls to add prediction to the NIST cyber-security framework. The argument: With AI and machine learning, companies should now be considering how to predict threats before they even appear. Speaking of the NIST Framework, Ron Ross tweets that it is being incorporated into FIPS 200 and the RMF.
  • Building It In. The NIST effort — especially with SP 800-160 — is to emphasize the importance of engineering in and designing in security from the very beginning, not bolting it on at the end. Good news: The government is finally coming around to that realization as well. The link is a summary of the recent updates to the NIST pub. It’s an area I’ve been exploring as well, and I’ve been working on some modifications to the process to make it even more accepted. The first report on the effort is under review right now; I hope to publish something soon.
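The homoglyph trick in the first item above can be caught mechanically. Here is a hypothetical sketch: flag every non-ASCII character in a hostname and report its Unicode name, so that “ļȯоķ.com” is visibly not “look.com”. (Real browsers do more — punycode display, confusables tables — this just illustrates the idea.)

```python
import unicodedata

# Hypothetical check: list every non-ASCII character in a hostname along
# with its Unicode name, since such characters are often homoglyphs of
# Latin letters in phishing domains.
def suspicious_chars(hostname: str) -> list:
    return [
        (ch, unicodedata.name(ch, "UNKNOWN"))
        for ch in hostname
        if not ch.isascii()
    ]

print(suspicious_chars("look.com"))  # []
print(suspicious_chars("ļȯоķ.com"))  # four flagged characters, including
                                     # a Cyrillic o masquerading as Latin o
```

Even this crude test would light up on the Cyrillic “о” (U+043E), which is pixel-identical to the Latin “o” in most fonts — exactly why hovering over a link isn’t always enough.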
