
Hacking Airplanes…Let's Think About This

Recent news about airplane security, and whether or not someone took control of an airplane during a flight, is scattered across the web. There are lots of opinions on whether the in-flight entertainment systems and the airplane control systems are connected. I haven't tested an airline system, so I can't say for sure, and it may be different depending on the type of plane. One glaring issue here is that we don't know, and there are a lot of people that don't know either while acting as if they do. Is airplane security a concern? Of course it is; what security isn't a concern? So what is the right approach to having it tested?

United Airlines recently announced a bug bounty program. For those that may not know, a bug bounty program is set up by a company to recognize or reward security testers for identifying security bugs in its applications. Some of the big names like Google, Facebook, and Twitter have been doing this for a while now. While not something every organization is prepared for, it can help identify some of the security bugs in your applications, although many of these flaws should be identified internally by developers and QA before release to production. The average person can participate in most bug bounties, no skills required (we won't dig into that in this piece).

What seems to be interesting with the United program, at least from what we see on Twitter, is that there is some concern that the airplane and in-flight systems are out of scope. This means that while you can test United's external applications, they are NOT giving permission for anyone to test the airline systems during a flight. Airline security has been propelled into the spotlight recently with stories like "GAO: Newer aircraft vulnerable to hacking" and Chris Roberts tweeting about it on a plane and then getting questioned by authorities for hours upon arrival.

Does United have it right by banning hacking on the plane? But what about the children, you say? First off, without permission, you shouldn't be security testing something that isn't yours. I know there is a lot of debate around this topic, but let's just get the permission thing out of the way. I understand that if the systems are not safe, then the issue should be addressed. Many will tell you that the only way to know if it is safe is to have any Joe Blow out there firing away at it, and that if telling the airline about it doesn't get them to fix it, then doing something a bit more rash is needed "for your safety." Be prepared: when it comes to public disclosure of flaws with working exploits that are not patched, "YOU" are the collateral damage.

Let's get real here for just a moment and realize that things that happen on computers DO have real consequences. Messing around on a website that exposes sensitive information is bad enough, but thinking that allowing anyone to attempt hacking a plane to look for security vulnerabilities at 30,000 feet is a good idea is just ludicrous. You are directly and immediately putting the lives of everyone on that plane at risk. Maybe you should hold a vote to see who is OK with you attempting this. After events such as 9/11, I don't think you want to announce you are hacking the plane… you may find yourself duct taped to a chair and bruised up a bit for the remainder of the diverted flight.

In the professional world of security, when we want to test the security of something like this, we seek out the vendor and get a contract that outlines what testing will be done. Obviously this requires the vendor to agree to the contract and the testing. In this scenario, the testing would most likely be done in an airplane in a hangar at the airport, not at 30,000 feet with other passengers on board. If you are unable to get the vendor to commit to a contract for testing, then hopefully making people aware of the potential issue, and the risks they assume by using that vendor, will be enough to force the vendor into it. In our market, when people stop using a service, the vendor starts listening.

In the case of United, and hopefully any other airline that decides to open a bug bounty, I think they are making a good decision in not opening up a bounty on the airline systems. Of course these systems are critical, especially since they keep the plane safe in the air, but we need to make intelligent decisions about how things get tested. This decision by United does not seem to be a method of trying to silence "researchers" about potential security vulnerabilities in the airplane. It is a move to keep people safe during a flight. We have ways to test, as mentioned earlier, under contract in a controlled environment; we don't have to do it in the air with other passengers. It is also a smart decision not to open a bug bounty on those systems because, with critical systems like this, you want to ensure that only trained experts are assessing them: someone who understands the fragility of the environment, the way it works, and the things that shouldn't be done. If you start letting John in 34C, who just learned what Metasploit is, fire exploits at a system ad hoc, you are asking for a world of hurt.

If you really want to test the security of an airplane and its flight controls, pony up and buy a plane to do the testing. We see this with the researchers testing the security of cars. They get funded, or pay out of their own pockets, for a vehicle whose security they can test, and look at some of what they have done: it doesn't always go as planned. They are not hopping on a city bus and hacking it. They are not hopping on a train and attempting to hack it. They are doing their best to create a controlled environment so they can test safely.

Everything has security issues. There will never be a time when we don't have some security issue still lurking in a system. We should be glad that, given recent events, the airlines have not banned electronic devices on airplanes. Yet if we keep making decisions that put people at risk with this type of "research," we will probably learn what "chilling security research" really means.

QA and Security Pt. 1: What QA Are You Talking About?

There has been a lot of chatter recently regarding QA and the role it plays in security testing. Hashing this out over Twitter at 140 characters per post just makes things more confusing. While I think Twitter helps start some of the most important discussions, there is a need for other mediums to expand on the topic. In this series of posts, I want to take the opportunity to bring up some of these topics and hopefully expand on them a little bit. We all have our opinions and we will certainly not all agree on everything; however, these are conversations we need to have.

What do you think of when you hear Quality Assurance (QA) mentioned in a conversation? Is it just a process that documents or procedures go through to make sure they are sound? Do you think of the group of people that tests applications to make sure they function as they should? It is important for everyone in the discussion to understand the context in which the term QA is being used to ensure everyone is on the same page.

As mentioned above, there are a few different things we think of when we hear the term QA. The one that comes to mind first for me is the team that tests applications or software to ensure that they are working properly. Most likely I go this route because I come from a lengthy development background and that is the QA I mostly dealt with. This QA team is responsible for identifying bugs in the system, documenting them, and getting them back to the developers to resolve. This team is engaged after software is written (well, during the iterative development process) and before it is released to the production environment. Even this group can be very different between companies, which I will talk about in future posts in this series.

When other people hear about QA, they most likely think of a different group, one responsible for reviewing documents and procedures to ensure accuracy. One example of this is sending requirements or design documents through a QA process to ensure they are correct.

Of course there are other types of QA, but those are the two we will focus on throughout this series. If you have other examples, please feel free to send them my way on Twitter (@jardinesoftware) or at james@jardinesoftware.com. The more we distinguish the context we are discussing, the more we can get done by cutting through the confusion. Defining our context is a critical first step.

In the rest of this series I will dive into the different types of QA and how they relate to security. What role can they play? What should they be able to do, and what should they not be expected to find? Can QA find security flaws? Follow me on this short journey to see what we find out.