We’re all used to the convenience that technology gives us. But that same open access to information that lets us order groceries with our voices, pay invoices, or register to vote can also be an invitation to cyber attackers.
Our office machines, phones, thermostats and cars are growing more and more connected, and government agency systems and applications are mandated to become more connected as well. So do the benefits outweigh the risks? To frame the situation as well as some solutions, we spoke with Malcolm Harkins, Chief Security and Trust Officer at Cylance.
Government Technology Insider (GTI): Both government agencies and commercial companies use networked devices to enter, store and share information. More and more frequently, it’s mobile devices being used, and combined with increasing usage of web apps and cloud storage, the question becomes: is there something inherently risky about this model?
Malcolm Harkins, Cylance (MH): It’s a yes and a no. A closed environment, on the one hand, that’s heavily restricted may appear to have less risk because of less access, less potential for data loss, less potential exposure, and that’s true if it’s set up properly in an isolated way.
But at the same time, that isolation also creates another risk, because you lose the agility, you lose the flexibility, you lose the potential business philosophy of what you need information for — to make decisions and to keep people on the same page. So, it really becomes one of those design and architectural issues where, if you really can create a closed environment to extract all the value you want out of the information, that might be the best option. If you need a more open environment to get the enrichment of the data and get the business value out of it and the organizational value then, yeah, you may be taking on some risk. But again those risks can be managed and mitigated if you design the right controls.
GTI: There are all sorts of security schemes that agencies use. Obviously, there’s a lot of user training and technology involved. One thing I’ve been seeing a lot more of lately is biometrics, which are often touted as the be-all, end-all of security. We use fingerprint scanners on phones and laptops, and some airlines are even using them in place of boarding passes. Is this a good idea?
MH: It’s not a bad idea, but there are some potentially problematic issues with it. Take my fingerprint as an example: I can’t change it. With a password or some multifactor authentication, you can change that; you can change a certificate, you can change a token. I can’t change my fingerprint. So once it’s digitized and used as the means and mechanism for authentication, if the system, from the fingerprint reader all the way through to the back-end storage that says, “That’s Malcolm’s fingerprint,” is vulnerable, then my fingerprint is vulnerable.
Several years ago when I was at Intel, the early fingerprint readers on laptops that were starting to be shipped — many still have them — one of my employees took a mold of my finger in rubber and used it to bypass the fingerprint reader. So again, they’re not foolproof. And you have to think about how hardened that process and those technologies are to protect the fingerprint, let alone, “Can the fingerprint or the facial I.D. or the retinal scan be spoofed because you were able to pick up a copy of it somehow and use that to fool the authentication device?”
GTI: This actually brings up a whole raft of questions about what kind of identification process could be less vulnerable, less able to be stolen by a bad actor. Are there any that can balance convenience and security?
MH: It depends upon the circumstances, and sometimes there are controls enough relative to the risk of accessing the data or the system. But I’ve long had a notion of a model that I think of as “granular trust.” It’s a level of multifactor authentication based on who you are, what you’re doing, when you’re doing it, and how you’re doing it.
I’ve had this notion: my phone is proximal to my laptop; my laptop has a camera and a microphone, and so does my phone. My phone has its GPS location, my laptop knows what wireless network I’m on, and the badge and ID system knows if I’ve badged into the building. There’s a bunch of multifactor authentication variables already out there following me and creating my digital footprint. If those signals can be combined and calculated, you could get a level of trust that could determine whether Malcolm is, or isn’t, in the building today. If Malcolm is in the building and it’s verified by all those factors, why even ask him for a username or password?
I think there are different ways, if we think about our digital footprints innovatively, that we can actually get a higher level of trust and a higher level of validation that ‘Malcolm is Malcolm’ and then that ‘Malcolm’s system is Malcolm.’ We’ve just got to do that hard, innovative work to make that happen.
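The “granular trust” idea Harkins describes can be sketched as a simple scoring model: each contextual signal contributes some weight, and the password prompt is skipped only when the combined score clears a policy threshold. This is a minimal illustrative sketch; the signal names, weights, and threshold are assumptions, not any real product’s API.

```python
# Illustrative signal weights (assumed values, not a real policy).
SIGNALS = {
    "badge_swipe_today": 0.30,    # badge/ID system saw the user enter
    "phone_near_laptop": 0.25,    # paired phone detected in proximity
    "gps_matches_office": 0.20,   # phone GPS inside the office geofence
    "known_wifi_network": 0.15,   # laptop on the corporate wireless
    "camera_face_match": 0.10,    # low-confidence camera face check
}

TRUST_THRESHOLD = 0.75  # assumed policy cutoff

def trust_score(observed: set) -> float:
    """Sum the weights of the contextual signals actually observed."""
    return sum(w for name, w in SIGNALS.items() if name in observed)

def needs_password(observed: set) -> bool:
    """Fall back to a username/password prompt when trust is too low."""
    return trust_score(observed) < TRUST_THRESHOLD

# Badge, phone proximity, GPS, and Wi-Fi all agree -> trust score 0.90,
# which clears the threshold, so no password prompt is needed.
in_building = {"badge_swipe_today", "phone_near_laptop",
               "gps_matches_office", "known_wifi_network"}
print(needs_password(in_building))
```

A real system would also need to weigh how spoofable each signal is, echoing Harkins’ earlier point about fingerprint readers: any single factor can be faked, which is why the model leans on several at once.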
GTI: There are tremendous benefits to allowing citizens to interact with the government online. Meanwhile, we’ve all heard about the recent DefCon event where an 11-year-old hacked a replica of a state’s election website in 10 minutes. How do we make something like that safer, and do the benefits outweigh the risks there?
MH: There’s certainly been a decent amount of news on the voting and website hacking that was done at DefCon, and some on the government side have indicated that perhaps those were contrived environments and not as realistic as they could have been. But setting aside the debate over whether it was set up for ease of hacking: a website, just like any application or system, has a security development lifecycle and privacy-by-design principles that should be followed. The code needs to be validated, and everything from the presentation layer that the user sees, all the way through the code and into the back-end infrastructure, needs to be appropriately hardened.
Any organization, whether it has a voter registration website, a news website, or a corporate presence on the web, could have a level of vulnerability if the site isn’t done right; it could be hackable. It could be hacked to go after the organization itself, or it could be used for what’s called a watering hole attack: if a lot of people visit a website and there’s a vulnerability on it, and I’m a bad guy, I can take advantage of that vulnerability not to harm the website, but to use it as a trap. When people visit and click on a link there, I have the ability to send code to their laptop or mobile device. And if that device has insufficient controls, like only standard anti-virus, then I’ll likely own that endpoint, and from there be able to steal identities or do whatever else to the individual I’ve compromised.
GTI: It’s surprising just how many government contractor websites are currently not ‘https’ sites. That seems like the easiest and most basic approach that you can take to avoid the kind of scenario you were just talking about. What’s the holdup?
MH: I think the holdup is that a lot of organizations either don’t totally understand it or, in some cases, just haven’t invested the time or money. Think of a news organization, or some company that set up a website to drive traffic, sell goods or services, or publish information. A lot of the folks who manage those websites are going to look at it and say, “Well, we don’t see any compromises, we don’t see any real issues. Yeah, it’s a risk, but there’s nothing mandating us to take care of it right now.” And so it slowly continues to slide. I agree it’s a relatively easy step that would make a real difference.
But even if you have https, if you haven’t done the proper level of validation of the website itself, you can still have a cross-site scripting vulnerability or an input validation error that would make it trivial for a hacker to compromise the website, or a link on the website, and use it for harm.
GTI: We know there’s a 2018 law requiring agency websites to be mobile friendly and the Census Bureau wants 55 percent of Americans to respond online to the 2020 census. Meanwhile, in very recent memory, we’ve got the OPM hack, the IRS hack as well as all the retailer hacks.
So if something does happen, a breach, a theft of information, or a ransomware attack, what does this do to users’ confidence in the systems and the institutions? Or is this something we just expect to happen now, so we roll our eyes and say, “Of course it got hacked”?
MH: Well, I think it’s a little bit of both. Unfortunately, I think a lot of people have just said, ‘Well, it’s going to happen.’ That’s part of what drives me nuts about many of my peers in the industry: they just accept that compromise is inevitable. They assume it’s going to occur and that they can do nothing more than respond to it. And that’s unfortunate. That narrative is also driven by the security industry itself, which makes more money when compromise occurs.
So I think we’ve got to change that narrative, change that mindset to do our best. You can’t eliminate risk and I’ve said that many times before. But we have to do our best to minimize the potential for compromise to the greatest extent possible. If we do that, then we’ll change that mindset. Now having said that, I do think there is an erosion of trust that has been occurring, certainly in the world and certainly with technology because of the repeated incidents.
And I think that’s unfortunate, but it’s because we as a society, we as a technology industry, we as a security industry have not done a good job of preventing the issues. Until we step up to that responsibility and do a better job, we will continue to see an erosion of trust in technology, and in the organizations that are using it, among their consumers.
GTI: Any final thoughts on this balancing act?
MH: I don’t necessarily think of it as a balancing act so much as a calculus equation, an optimization problem. Because I do think you can achieve the velocity of the business, and you can achieve the freedom of information flow, while having appropriate security and appropriate privacy.
There’s a professor of psychology in the UK, Glynis Breakwell, who wrote a book on the psychology of risk, probably 15 to 20 years ago. I talked with her when I was writing my first book, and there’s a quote from her book that I just love. It was written about the psychology of risk largely from a physical-world perspective, but I think it plays out in the technology realm as well: “Risk surrounds and envelops us. Without understanding it, we risk everything. Without capitalizing on it, we gain nothing.”
Everything we do is risky. Walking down the street is risky because you might trip and fall, or a car might hit you. So you can’t get away from risk. The question is: how do you understand it, and how do you actually shape the path of the risk so that you can capitalize on the opportunities?