Repeated gusts of controversy have followed news of the SolarWinds hack, and world-renowned experts warn that a hurricane of similar security exploits could occur in future due to toxic corporate culture and ineffective security governance.
Dick Morrell, founder and former CTO/Chairman of global internet security and filtering software company SmoothWall, and an ex-UK Ministry of Defence and U.S. Department of Defense advisor, is dismayed by the overall response to the cyberattack and has called out executives for repeatedly refusing to take ownership of security failings.
Digital Bulletin can also exclusively reveal that former White House staffer Richard A. Clarke, who worked for multiple U.S. Presidents, including as special cybersecurity advisor to George W. Bush, echoes many of Morrell’s security concerns.
In February, Microsoft President Brad Smith dubbed the SolarWinds hack “the largest and most sophisticated attack the world has ever seen” and told U.S. news programme 60 Minutes that an internal analysis found “certainly more than 1,000” software engineers had been involved in the operation, disclosed in December, which affected the software giant, various other technology companies and vendors, as well as several U.S. Government agencies.
It has since emerged that an independent security researcher had discovered the password “solarwinds123” exposed online, with former SolarWinds CEO Kevin Thompson blaming the critical lapse in security on “a mistake an intern made”.
Although not directly involved with SolarWinds on a commercial footing, Morrell brought Trusted Metrics, a company since acquired by SolarWinds, into the UK marketplace in 2017, so he is acutely aware of the technologies and processes involved.
Morrell, whose current role sees him head up cloud consulting and security training for UK-based tech skills provider QA Limited, predicted an event similar to the SolarWinds breach around six weeks before it happened, during a webinar with Guy Martin of OASIS Open in which he flagged the “massive risk” posed by supplier-based and partner-based networks – and he claims the true gravity of the attack may be even greater than what has been reported.
He says: “I am incredibly aggrieved that the CEOs and CTOs of companies such as SolarWinds can now turn around and say, with no shame whatsoever, that ‘this could have affected anyone’. That has to be the biggest cop-out from a company that’s listed on the NASDAQ, that is governed by the SEC (U.S. Securities and Exchange Commission), and which has published 10-K filings for the last 10 or 12 years since going public – all of which actually gave a breadcrumb trail to potential threat actors, paving a way for the eventual attack.
“I find it utterly bizarre that SEC-listed companies – hit by a security threat, seeing a depreciation in their share price, and very much aware that their methodologies and processes have been shown to be lax – don’t come out and work with a seasoned and industry-aware PR partner to get that story straight. Because honesty really, really matters in the security ecosystem.
“I’m sick of the buck-shifting. I would much prefer a COO or CTO to come out, put their hands up and say, ‘we didn’t get it right, and these are the subsequent actions that we’ve taken’ – rather than coming out and saying, ‘it could have happened to anyone’. This is even more critical given that SolarWinds have been involved in merger and acquisition activity for which it is not clear that security due diligence and best practice were ever followed. That always carries the implied risk of back doors and privileged access walking into development resources that should otherwise be protected.”
Morrell continues: “My concern is even bigger with regards to SolarWinds, because if SolarWinds’ development environment was hacked – and we don’t know how it was hacked, and nobody’s ever going to tell us how it was hacked – SolarWinds also have a duty of care to ensure openness and transparency with an even bigger, more aware community in the Open Source arena. SolarWinds’ Papertrail product potentially affects GitHub and organisations provisioning containers into Docker and Kubernetes environments, with many applications still badly configured to run as root.
“Reactive, knee-jerk responses like blocking DNS address ranges to stop potential attacks… that’s slamming the door shut after the horse has long since bolted. I am more concerned about how far attackers managed to jump off into trusted networks, and whether they have left doors open that we won’t find for another five, six or seven years. My industry counterpart Raj Samani (CTO of McAfee) and I spoke daily concerning SolarWinds and the threat factors involved, and across the industry there are many concerns still outstanding. I doubt we will ever see these addressed. Until the next time and, rest assured, that is never far off!”
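Morrell’s point about containers still being badly configured to run as root is, at heart, a matter of configuration discipline. As a purely illustrative sketch – not anything drawn from SolarWinds’ or QA’s own tooling, and with a hypothetical image name and UID – the following Python snippet uses the Docker SDK to launch a container as an unprivileged user rather than accepting the image’s default root user:

```python
# Illustrative only: run a hypothetical image as an unprivileged user with the
# Docker SDK for Python, rather than accepting the image's default root user.
import docker

client = docker.from_env()  # talks to the local Docker daemon

output = client.containers.run(
    "example/telemetry-agent:latest",  # hypothetical image name
    user="1000:1000",                  # run as UID:GID 1000, not root
    read_only=True,                    # read-only root filesystem
    cap_drop=["ALL"],                  # drop all Linux capabilities
    remove=True,                       # clean up the container on exit
)
print(output)
```

The equivalent controls exist in Kubernetes (runAsNonRoot and related securityContext settings); the point is that least privilege has to be stated explicitly rather than assumed.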
The warning is stark and worth heeding. In that same webcast interview with Martin, Morrell identified concerns he had with Tesla’s in-vehicle display systems and their software design. A matter of weeks later Tesla acknowledged those critical design failures and recalled around 135,000 vehicles.
But this particular issue has been a long-standing grievance for Morrell, and he identifies a trail of complacency stretching back over a decade. He claims that, as early as the spring of 2009, he had identified and alerted embedded vendors in the U.S. that were using derivatives of CentOS, the community version of Red Hat Enterprise Linux, as their build platform, making them aware of weak practices and a lack of governance.
Twelve years on, improvement has not reached a level that allows Morrell to rest easy, and he warns that such a cavalier approach to resolving inherent flaws – flaws that place organisations at risk, now or in the future, through bad practice – cannot continue.
He explains: “Shortening time to market by using Open Source-derived technology places a responsibility on those organisations to proactively manage, identify and patch the dated, staid, older versions of the code libraries they ship. To maintain intellectual property, almost every organisation backporting newer GPLv3-licensed code – with the Samba project being just one example – is taking part in exercises to deliberately recompile patches into older GPLv2 variants. They are sidestepping the responsibility to contribute fixes and code upstream to projects in order to maintain proprietary advantage, protecting intellectual property and residual commercial value without thought for security best practice.
“You have to ask: what security risk exists to the end-user customer by recompiling and rebuilding binaries without those fixes and changes being scrutinised by the maintainers of that original source or their engaged community? What if those development environments in those vendors were targeted outside the Windows ecosystem? Who is to say that has not already happened given the threat vectors we’ve discussed?
“The danger inherent in that approach is organisations deploying expensive, black-box network-layer technology relying on ‘Linux Inside’. That essentially means relying on auditors failing to apply the same security auditing schema to such a device as they would, for example, to a racked Linux server running services such as OpenStack or hypervisor-based KVM technologies that form staple parts of cloud. Penetration testing schemas and agreed parameters do not come close to scrutinising the firmware running on the devices in question. It is potentially Christmas Day for the hacker elite.”
"This is a people and culture problem, we need smart folk making strategic, brave decisions armed with an appreciation of their own risks, and training and processes better than paper-based non-aligned ISO practices must become the new norm" - Dick Morrell
And Morrell says it’s very easy to identify those who are making themselves vulnerable in this way, simply by referencing companies’ 10-K and 10-Q filings.
He points out that if any vendor’s terms state “because we use open-source code, we may be required to take remedial action in order to protect our proprietary software”, then that immediately flags the firm as a potential target.
“If I’m a hacker and I want to cause mayhem, all I need is your quarterly filings, and you scroll down to section 24-25, and that gives me the indication of whether or not you’ve got something to hide,” says Morrell, who is also an OpenUK ambassador.
“It really is that simple to add you to a list of targets for social engineering exercises or remote attack-surface probing to gain access, especially now that so many development staff are working from home, often outside corporate gateways and policies.
“By essentially stating openly in your 10-K filing, ‘our software is based on Open Source technologies over which we have no control, therefore this may affect our time to market and our ability to assess and mitigate risk’, you are complying with SEC regulations but also raising a white flag.
“Whereas in the Open Source community, we march on goodwill. We have maintainers. Maintainers look after software, but maintainers can only release new versions of the software containing updated patches, if people put the patches back in.
“When we release a patch, there are millions of pairs of eyes looking at the source code. That’s the old adage we’ve used for 25 years: Open Source equals millions of pairs of eyes. It’s not true, because as open-source has grown as a phenomenon, generally it’s millions of pairs of eyes consuming stuff, not necessarily contributing back.
“Open Source licences are there to encourage adoption but also to encourage good practice. Organisations are too focused on time to market and bottom-line profitability to realise that by not participating they are placing their customers and their own reputation at risk.”
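Morrell’s point about filings is a mechanical one: the warning signs sit in public text. As a purely hypothetical sketch – the file name and phrase list are invented, and a defender can run the same check against their own filings – the snippet below scans a locally saved filing for the kind of open-source risk language he describes:

```python
# Illustrative only: scan a locally saved filing for open-source risk language.
# The file name and the phrase list are hypothetical.
import re

RISK_PHRASES = [
    r"open[- ]source",
    r"remedial action",
    r"proprietary software",
]

with open("example_10-K.txt", encoding="utf-8") as f:
    text = f.read()

for phrase in RISK_PHRASES:
    for match in re.finditer(phrase, text, flags=re.IGNORECASE):
        start = max(match.start() - 60, 0)
        snippet = " ".join(text[start:match.end() + 60].split())
        print(f"{phrase}: ...{snippet}...")
```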
Former White House advisor Richard A. Clarke is also passionate about the need to do security properly and with conviction, and the fact that such a leading industry voice echoes Morrell highlights this as a global problem.
“There was a time when Linux systems were not as secure as they are now,” Clarke told Morrell in a recent interview provided to Digital Bulletin. “The U.S. Government asked NSA (National Security Agency) and NIST (National Institute of Standards and Technology) to work to make a secure version of Linux, and then to make that available, not just to the US government, but to the public. It did, and it was the first time that the NSA was working in a public, open way, Open Source way, and it provided to the world an improved and secure operating system. If you want a highly secure way of using Linux, Secure Linux is the way to go.”
What Clarke says makes sense to anyone with an Open Source mindset: the further software moves away from the original open source code, the more vulnerable it becomes. With that in mind, U.S. government policy has now changed when it comes to handling identified vulnerabilities. And given the sheer amount of data held by government agencies, and the sensitivity of that data, he outlined a need for clearer governance on security standards for cloud-based systems and for safeguarding existing legacy systems.
“Given the huge amount of government databases and technology, it’s not something that can change quickly,” Clarke says. “That means that we’re going to have, for a long time, systems that are using legacy technology, legacy software, and hardware. When we think about improving security we have to realise that we can’t just come up with new things that will make things better in the future. We also need approaches that take care of systems that are already there by improving them, rather than replacing them.
“The United States government now has an internal directive that says if you find a vulnerability in software, your first obligation is to fix it, not to develop an exploit for it. If, in rare cases, you think that it’s justified to develop an exploit, then you’ve got to make that case. You’ve got to make it on an inter-ministerial basis, inter-agency basis and the equities of the private sector and the financial services sector and other critical infrastructures have to be taken into account in making that very rare, rare exception, where there’s some piece of software that might need to be exploited. Perhaps to go after, let’s say, an Iranian nuclear programme. But for generally available software, if we find a vulnerability in it, now the policy is don’t exploit it, patch it.
“We need, though, standards for the security of cloud systems, and there is the NIST process. The NIST framework, which was announced earlier this year as a result of a year-long public dialogue and public processes around the country, is an example of how the government can work with the private sector in the United States to establish open standards that improve security.”
This applies on a worldwide basis of course, and the point becomes even more pertinent when considering the sheer amount of data held on a variety of devices given the proliferation of the Internet of Things (IoT).
“Embedded software is a real problem because people don’t realise it’s there and they don’t realise that it has to be patched as well,” Clarke adds. “So, patch management is a problem throughout government.
“As we move into the Internet of Things, increasingly, it’s going to be a problem throughout all of cyberspace, where there is software that needs to be updated, patches that need to be applied, and people don’t know what they have in their inventory. That’s going to be really true with IoT, because people today don’t realise what’s connected to their network.
“We saw that in the case of the Target hack (the 2013 breach of the U.S. retail giant where hackers stole data from up to 40 million credit and debit cards of shoppers), where there was an outside provider working on air conditioning and chillers that may have been, according to press reports, the way hackers got into the network.”
Morrell is in full agreement with Clarke and reinforces the need to be vigilant, to consider service partner networks across business verticals, and to consider how organisations can better use SOC (Security Operations Centre) and SIEM (Security Information and Event Management) technology and make better use of threat-alerting technologies, both proprietary and Open Source.
“The problem we’ve got here is the fact that it’s not just about the device, and that’s where a lot of people are getting it wrong,” Morrell says. “It’s about the underlying broker extensions. Things like Apache Fuse, Apache Camel, Fuse MQ, MQTT – all of the bits and pieces that sit below the IoT device in that architecture.
“Even if you had what you believe is a modular cloud infrastructure to connect devices and threat surfaces in isolation, there is still the need to harness business applications and event management fabric in order to drive home governance.
"Patch management is a problem throughout government. As we move into the Internet of Things, increasingly, it's going to be a problem throughout all of cyberspace" Richard A. Clarke
“As we have seen with SolarWinds and the deliberate targeting of trusted, certificate-based update processes and technologies, we very much have to ensure awareness and ownership are key.”
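To make the broker layer Morrell describes concrete, here is a minimal, hypothetical sketch – the broker address, certificates and topic are all invented – of a device publishing telemetry over a mutually authenticated TLS connection using the paho-mqtt library, rather than over unencrypted MQTT:

```python
# Illustrative only: publish one telemetry reading over mutually authenticated
# TLS with paho-mqtt. Broker address, certificates and topic are hypothetical.
import ssl
import paho.mqtt.publish as publish

publish.single(
    "telemetry/plant1/temperature",   # hypothetical topic
    payload="21.5",
    qos=1,
    hostname="broker.example.com",    # hypothetical broker
    port=8883,                        # TLS port, not the default 1883
    client_id="sensor-01",
    tls={
        "ca_certs": "ca.crt",         # CA that signed the broker certificate
        "certfile": "device.crt",     # this device's client certificate
        "keyfile": "device.key",      # and its private key
        "cert_reqs": ssl.CERT_REQUIRED,
    },
)
```

Securing the device alone achieves little if the message fabric beneath it accepts anonymous, unencrypted connections.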
Morrell is both adamant and passionate in his assessment of next steps.
“To mitigate all of these problems,” he adds, “conformity of training on hybrid and multi-cloud is absolutely critical, and we must not take a vendor-based approach – especially when it comes to being able to identify risk when you are using a DevOps-based approach to planning. We have to get this right. We are going to just compound the problem if we do not take this opportunity to actually own and identify risks and to educate and empower the staff who have to deliver that ambition, be that in the cloud, enterprise or IoT arenas.
“This is a people and culture problem: we need smart folk making strategic, brave decisions, armed with an appreciation of their own risks, and training and processes better than paper-based, non-aligned ISO practices must become the new norm. If we don’t solve this now and assume we can simply rely on CI/CD, then we are going to look back in four years and realise we are in a myriad of pain. It is simply not good enough to say, well, we followed NCSC and ISO standards guidance.
“This is first and foremost about harnessing and provisioning technology well, but also about redefining and understanding risk. Google, Microsoft and Amazon Web Services invest very heavily and proactively in security and provide support services that are of huge benefit to customers, but because of the myriad ways we as cloud consumers provision to the cloud, and our own individual threat appetites and governance, we have to be more intelligent.
“Even more so given that we still, mistakenly, deal with SIEM and event data badly, risking another SolarWinds-type exploit against our own provisioned platforms. That is even more critical when considering temporary instances, often torn up and torn down within a Docker or Kubernetes ecosystem. If this issue is not tackled, the problem is going to get bigger by design and become even more business-critical – I am watching it happen in projects across the EU, especially in the automotive sector.
“I only hope, now that my concerns are echoed by a respected senior security advisor to the White House in Richard Clarke, that those of us taking this matter seriously can finally gain traction and be heard amongst our peers. Those service provider partners and the hardware vendor community must do better. We have removed the seventh veil – they’re naked. It’s time for change.”