Editor's Note: This is the third of a three-part series on industry cyber security reports, covering research reports. Click here for Part I, covering four data summaries, and click here for Part II, covering three more.
After reviewing a series of data summaries in Part I and Part II, the final section of our series focuses on reports which used different data collection methods, including benchmark research, company surveys, individual surveys and software line-by-line code inspection.
Each of these methods yields different metrics that can add to our understanding of the cyber security puzzle. Like the data summaries, none of them tells you everything by itself, but each provides an extra viewpoint in the panorama of securing your enterprise. Like cyber security itself, it's a matter of gathering as many data points as possible and building actionable risk management upon them.
In particular, these research reports put context around the entirety of a data breach. While it is easy to count the number of data records spilled in a breach, a soft measure, such as damage to a corporate reputation, cannot be explicitly defined and measured. Hence, some type of survey must be conducted in which individuals are asked for an opinion, and that data is then organized into a set of corporate measures. This provides some rigor in trying to quantify the difficult-to-quantify.
A completely different method is to scan each line of application software to determine if previously defined weaknesses are present. Strong and comprehensive security testing should catch most of these programming errors before they make it into production -- but who has the time or budget for that? Far too few companies, it seems, as is evidenced by the following reports:
Read: 2014 Cost of Data Breach Study
The Takeaway: The ultimate cost of a data breach to an organization is a mix of direct, indirect and opportunity costs.
Founded by Dr. Larry Ponemon in 2002, the Ponemon Institute is a think tank dedicated to advancing data privacy and protection practices. The organization has made a name for itself by producing survey reports which are frequently cited by the media. Because they have used an accepted and repeatable methodology to calculate data breach costs, their resultant metrics are used in news reports by journalists and industry commentators.
Ponemon also produces sponsored industry reports which advance the technical goals of their sponsors. In this way it is able to focus on specific metrics not produced by other research groups. The reports exclude "mega-breaches" (millions of records) because they are considered statistical outliers; instead, they focus on a more "normal" range of breached records, allowing year-over-year comparisons to be drawn over the longer term.
In a 2014 data breach study sponsored by IBM, Ponemon researchers surveyed 314 companies in 10 regions, including the U.S. and countries in Europe, Asia and the Middle East. To determine the average cost of a data breach, the study collected figures from companies that experienced a breach of anywhere from 2,415 to 100,000 compromised records. Compared to the previous year's study, the average total cost of a data breach jumped 15 percent, from $3.1 million to $3.5 million, with the average cost of a single lost or stolen record totaling $145, up from $136 the year before.
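The headline figures imply an average breach size for the companies studied. As a quick back-of-the-envelope check (the two input numbers come from the report; the derivation is our own sketch):

```python
# Headline figures from Ponemon's 2014 study.
avg_total_cost = 3_500_000   # average total cost of a data breach (USD)
cost_per_record = 145        # average cost per lost/stolen record (USD)

# Dividing the two implies the average breach size in the study,
# which falls inside its stated 2,415-100,000 record range.
implied_records = avg_total_cost / cost_per_record
print(f"{implied_records:,.0f} records")  # ~24,138 records
```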
Because the report anonymizes responses, Ponemon is able to obtain data from a wide range of companies and industries -- data which otherwise wouldn't be available. The cost of a breach is obviously a very sensitive item for an organization, so Ponemon's ability to categorize data by industry without revealing individual company sources is invaluable. This trust is key to getting accurate data.
The 2014 report also examines the root causes of data breaches and lists a cost per breached record specific to industries. The most expensive data breaches tend to result from malicious attacks, as threat actors zero in on valuable data. The most costly breaches occurred in industries such as healthcare, education, pharmaceuticals and financial services. Healthcare tops the list with a figure of $359 per record, far surpassing even the financial industry at $206 per record. Identity thieves often target medical records because the data can be used to perpetrate fraud over and over and, as such, has more economic value on the black market than a credit card record, which can be quickly shut down in the event of a breach.
To determine these numbers, the study calculates both the direct and indirect expenses of a breach. For example, direct expenses might include hiring forensic experts to find out how the breach happened, establishing hotline support for customers whose data was lost or stolen, and providing free credit monitoring. Indirect expenses might include conducting internal investigations, as well as any loss of business that might occur from customers who lose faith in the company’s ability to keep their data safe. Referred to as "abnormal churn rates," loss of business resulting from security issues has recently made data breaches a serious corporate concern, previously often only seen as an IT event.
Bit9's primary focus is providing real-time visibility into, and protection of, endpoints and servers. The company offers a solution which continuously monitors and records all activity on those devices. It has also proven to be a relevant source of research reports and surveys, with 10 distinct reports dating back to 2011.
Bit9's 2013 Cyber Security Study, conducted by the Information Security Media Group, surveys companies to better understand the impact of today's advanced cyber attacks. Across all surveyed companies, 47 percent report that they suffered one or more cyber attacks in the past year, while 13 percent admit that they do not know whether they were attacked -- a rather high level of ignorance.
Based on these responses, the study concludes that traditional signature-based ("first-generation") protections are insufficient to guard against today's advanced threats. The actual number of malware variants is unknown, but it is reported to be as high as 400 million. Seventy percent of companies identify endpoint user devices as their top vulnerability.
There seems to be a blind spot when it comes to endpoint and server security. Though these devices are key targets, given their proximity and access to critical organizational data, malware is often able to affect these systems for a long period of time before detection. Moreover, it is often difficult to determine the extent to which malicious software has penetrated a network, making removal a challenge. Fifty percent of companies report that they do not have the ability to block unauthorized software from running within their environment. Thus, once malware has penetrated their network defenses, there is no subsequent security control to prevent its execution and propagation.
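The missing control the survey points to is application allowlisting: refusing to execute any binary whose hash is not on an approved list. A minimal sketch of the idea, assuming a hash-based allowlist (the digests and paths here are hypothetical):

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of binaries approved to run.
# (This example digest happens to be the hash of an empty file.)
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Hash a file in fixed-size chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path: str) -> bool:
    """Allow execution only if the binary's hash appears on the approved list."""
    return sha256_of(path) in APPROVED_HASHES
```

Real products enforce this in the kernel or via OS policy rather than in application code, but the decision logic is the same: unknown binary, no execution.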
Another cause for concern is the overabundance of security alerts that a company receives daily from its installed suite of security software. Twelve percent of companies receive 100 to 499 alerts, while 11 percent receive more than 500 alerts per day. Without some type of response automation, companies can quickly become overwhelmed responding to alerts, which may or may not be necessary.
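A first step toward the response automation mentioned above is simply collapsing duplicate alerts and surfacing the noisiest signatures for human review. A minimal sketch (the alert fields and signature names are illustrative, not any vendor's schema):

```python
from collections import Counter

def triage(alerts, top_n=3):
    """Collapse a day's raw alerts into per-signature counts and
    return the most frequent signatures for analyst review."""
    counts = Counter(a["signature"] for a in alerts)
    return counts.most_common(top_n)

# Example: 500 raw alerts collapse to three distinct signatures.
raw = ([{"signature": "trojan beacon"}] * 320
       + [{"signature": "port scan"}] * 150
       + [{"signature": "ssh brute force"}] * 30)
print(triage(raw))  # [('trojan beacon', 320), ('port scan', 150), ('ssh brute force', 30)]
```

Even this crude grouping turns a 500-item queue into a three-item one; production tools add correlation across signatures, assets and time windows.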
Other noteworthy reports
Trustwave provides a wide range of services and products to protect companies from cyber crime and theft of intellectual property and information assets. Its 2014 Global Security Report draws on 691 of their forensic investigations conducted over the previous year, along with security intelligence acquired from its products and global security operations centers. Trustwave’s SpiderLabs group is composed of more than 100 investigators and researchers who are on the front lines of protecting organizations in 96 countries.
In many ways, 2013 was the year of the retail attack: fully 33 percent of the attacks investigated were retail-related, more than in any other single industry. Eighty-five percent of the exploits detected targeted third-party plug-ins -- Java, Adobe Flash and Adobe Acrobat/Reader -- demonstrating the continued exposure caused by non-secure application software. Among software vendors, Oracle, Adobe and Microsoft were the top exploitation targets.
Trustwave dives into the process of malware exploitation, providing numeric insights into how payloads are obfuscated and delivered and how exploit kits carry out the ultimate objective, usually some form of monetization. This research shows the extreme care attackers take to remain undetected and the lengths they go to when pursuing a target.
Veracode's State of Software Security Report (registration required) is a semi-annual report that draws on continuously updated information in Veracode’s cloud-based application risk management services platform. Unlike a survey, the data comes from actual code-level analysis of billions of lines of code and thousands of applications. As such, it provides a rather unique perspective.
This volume captures data collected over the past 18 months from the analysis of 4,835 applications on their cloud platform (compared to 2,922 in Volume 2, published in September 2010). This reflects the growing use of independent, cloud-based application security testing services. As before, the report first examines the security quality of applications by supplier type in the software supply chain, then explores application security by language, industry and application type.
The results they present are particularly distressing because of the widespread and continual deployment of non-secure software. By putting poor quality software into production, companies offer easy exploitation targets that can be readily penetrated by persistent attackers. In addition, minimal skill is required on the part of the attacker to exploit a specific identified vulnerability.
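Line-by-line weakness scanning of this kind can be illustrated with a toy pattern matcher. Treat this as a sketch of the general idea only, not of Veracode's analysis: real static analyzers build data-flow and control-flow models, and the patterns below are illustrative inventions loosely inspired by common weakness categories:

```python
import re

# Illustrative weakness patterns; real analyzers go far beyond regexes.
PATTERNS = {
    "hardcoded credential": re.compile(r"password\s*=\s*['\"]\w+['\"]", re.I),
    "dangerous eval": re.compile(r"\beval\s*\("),
}

def scan(source: str):
    """Return (line_number, weakness_name) pairs for each line
    that matches a known weakness pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

code = 'password = "hunter2"\nresult = eval(user_input)\n'
print(scan(code))  # [(1, 'hardcoded credential'), (2, 'dangerous eval')]
```

The value of a scan like this is that it finds the weakness before an attacker does, which is exactly the window the Veracode data shows too many companies leaving open.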
Check Point Security Report
Check Point is primarily known for enterprise network firewalls, where it is a leader in Gartner's Magic Quadrant. It uses its installed base to highlight the malware types and impacts it has seen across the globe. One of the perspectives offered is an "average day" in an enterprise organization, profiling the frequency and type of attacks that can be anticipated.
A key strength of the report is that Check Point not only reports observed data but also offers potential solutions. In particular, it describes a security architecture for tomorrow's threats based on software-defined protection mechanisms. It is this type of thoughtful approach that will allow organizations to better fend off the relentless attacks on their IT infrastructure.
As a cyber security professional -- and, like almost everyone, an end-user of countless IT networks -- it's troubling to note that even with all the well-meaning attention and vast sums of money thrown at the field of cyber security, security incidents and data breaches continue to increase in both volume and size. Previous criminal successes attract newcomers and, with the increased weaponization of cyber rootkits and malware, the technical barriers to entry continue to fall.
No single report can possibly cover the entire landscape and, as a result, various categories of reports have emerged. Specialized reports cover everything from government policy to industry best practices to current malware proliferating across the globe. Yet as vendors have come to see the insights they can provide to potential customers as a competitive advantage, the data collected and conclusions drawn have increased in their worth to all enterprises.
Good news may be hard to find in any of these reports, but having comprehensive data and conclusions on real and existing threats is invaluable for organizations trying to win in this vast cyber war space.
Some basics rise to the top: Expect to have your networks penetrated; deploy multi-tier, multi-layered defenses; keep software and patching up to date; and prioritize your information assets, providing maximum protection and controls around them. There is increasing evidence that much malware lies in wait on networks until ordered to act by a remote command-and-control server. If this malware can be detected and disabled prior to execution, damage to the organization can be held to a minimum.
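One practical way defenders hunt for such dormant implants is to look for "beaconing": outbound connections to the same destination at suspiciously regular intervals. A minimal sketch of that heuristic, assuming connection timestamps are already extracted from network logs (the jitter threshold is an illustrative choice, not an industry standard):

```python
import statistics

def looks_like_beacon(timestamps, max_jitter=2.0):
    """Flag a destination as a possible C2 beacon when the gaps between
    its outbound connections are nearly constant.
    timestamps: connection times in seconds, sorted ascending."""
    if len(timestamps) < 4:
        return False  # too few samples to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(gaps) <= max_jitter

# A ~60-second check-in with a second of jitter looks like beaconing;
# bursty, irregular browsing traffic does not.
print(looks_like_beacon([0, 60, 121, 180, 241]))  # True
print(looks_like_beacon([0, 5, 300, 310, 900]))   # False
```

Real detections add randomized-interval handling, per-destination baselining and allowlists for legitimate pollers (update checks, monitoring agents), but the regular-heartbeat idea is the core signal.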
Awareness is a key component of preparation, turning data into actionable intelligence. Cyber security can't be put off or treated as a static process to be checked once every few months; it must be part of your regular IT maintenance. Start with a cyber checkup, then use the findings to improve your security posture. In particular, take application software seriously: conduct both static and dynamic tests, and get your programmers some cyber training.
And constantly be aware of the latest developments. Read more. Get breached less.