This article reflects our Scoring 3.0 methodology. For a broader and deeper understanding of how SecurityScorecard calculates scores, see the white paper, SecurityScorecard's Scoring Methodology, to which this article refers frequently.
Your Scorecard rating reflects your organization's security posture. Your letter grade (A through F) and the numeric score to which the grade is mapped (100 through 0) correspond to the likelihood of your organization sustaining a breach.
The lower the score, the greater the likelihood. An organization with an F grade (score of 60 or lower) is statistically 13.8 times more likely to sustain a breach than an organization with an A (score of 90 to 100).
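To make the mapping concrete, here is a minimal sketch in Python. Only the A range (90 to 100) and the F threshold (60 or lower) are stated above; the B, C, and D cutoffs used here are illustrative assumptions.

```python
def letter_grade(score: int) -> str:
    """Map a 0-100 score to a letter grade.

    The A range (90-100) and the F threshold (60 or lower) follow the
    article; the B, C, and D cutoffs are illustrative assumptions.
    """
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"   # assumed cutoff
    if score >= 70:
        return "C"   # assumed cutoff
    if score > 60:
        return "D"   # assumed cutoff
    return "F"       # score of 60 or lower

print(letter_grade(95))  # A
print(letter_grade(58))  # F
```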
Read this article for a quick understanding of:
- The main components of your scores
- Operations that contribute to the calculation of your scores
- Our scoring methodology
Issues: The main components of your scores
Your score directly reflects all the security issues that we discover in your organization's internet-facing assets, in the context of other key considerations that make up our scoring methodology.
Issue types
We discover security issues on your exposed network assets during our recurring scans of the internet. You can view these issues on the Issues tab of your Scorecard. Each issue type may include multiple findings, which are individual instances where we observed the issue (for example, on different IP addresses).
Tip: Learn more about our different issue types in the Cybersecurity Signals section of the scoring methodology white paper.
Each of these issue types has a High, Medium, or Low severity level, which is proportional to its degree of risk to your organization.
These severity levels, in turn, have varying weights, or degrees of negative impact on your score, from High (greatest impact) to Low (least impact).
Note: Some issue types do not impact your score. Positive issues highlight healthy security practices that can mitigate risk. Informational issue types identify areas of risk worth inspection. Over time, we may assign score-impacting severity levels to certain Informational issue types, as noted in our scoring updates.
Factors
Every issue type that appears on your Scorecard is grouped within one of 10 factors: categories of cyber risk and protection that SecurityScorecard uses to assess and score your organization's security resilience. Each factor has a numerical score that reflects the degree of risk the factor contributes to your overall cybersecurity posture.
Factor score calculation is based on the severity and quantity of issues or findings associated with the factor.
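As a sketch of how this grouping works, the following hypothetical example maps issue types to factors and severity weights, then aggregates weighted finding counts per factor. The factor names are real SecurityScorecard factors, but the issue-type names, assignments, and numeric weights are illustrative assumptions, not production values.

```python
# Hypothetical sketch of how issue types group into factors with
# severity weights; the issue names and numeric weights below are
# illustrative assumptions, not SecurityScorecard's production values.
SEVERITY_WEIGHT = {"high": 3.0, "medium": 2.0, "low": 1.0}  # assumed weights

ISSUE_CATALOG = {
    # issue type: (factor, severity) -- hypothetical assignments
    "ssl_certificate_expired": ("Network Security", "medium"),
    "malware_beacon_detected": ("Endpoint Security", "high"),
    "spf_record_missing": ("DNS Health", "low"),
}

def weighted_findings(findings: dict[str, int]) -> dict[str, float]:
    """Aggregate severity-weighted finding counts per factor."""
    totals: dict[str, float] = {}
    for issue, count in findings.items():
        factor, severity = ISSUE_CATALOG[issue]
        totals[factor] = totals.get(factor, 0.0) + count * SEVERITY_WEIGHT[severity]
    return totals

print(weighted_findings({"ssl_certificate_expired": 2, "spf_record_missing": 1}))
# {'Network Security': 4.0, 'DNS Health': 1.0}
```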
Tip: Learn more about our different factors in the Factor Scores section of the scoring methodology white paper.
Operations that contribute to the calculation of your scores
The calculation of scores follows, and is informed by, a sequence of three major operations that produce the issue findings in your Scorecard.
- Signal collection
- Attribution
- Signal analysis
Signal collection
We scan the entire IPv4 address space, more than 3.9 billion routable IP addresses, every 10 days, across more than 1,400 ports.
Note: We scan cloud assets multiple times daily because they change ownership so frequently.
Our in-house global internet scanning framework collects all the information that threat actors would see as they search for attack targets:
- IP addresses
- Exposed port mappings
- Fingerprints of services, products, libraries, operating systems, devices, and other internet-exposed resources, including version numbers
- Common Platform Enumeration (CPE) IDs
- Common Vulnerabilities and Exposures (CVE) Version 2 IDs
- Script output from Nmap, the open-source scanner that is a component of our scanning framework (a minimal fingerprinting example appears below this list)
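Because Nmap is publicly available, a minimal fingerprinting pass can be sketched around it. This is an illustration only: the host, port list, and output handling here are assumptions, not SecurityScorecard's actual scan configuration.

```python
# Minimal sketch of service fingerprinting with Nmap, which the article
# names as one component of the scanning framework. The target host and
# port list are placeholders, not a real scan configuration.
import subprocess
import xml.etree.ElementTree as ET

def fingerprint(host: str, ports: str = "22,80,443") -> list[dict]:
    """Run an Nmap version scan and return per-port service records."""
    xml_out = subprocess.run(
        ["nmap", "-sV", "-p", ports, "-oX", "-", host],
        capture_output=True, text=True, check=True,
    ).stdout
    results = []
    for port in ET.fromstring(xml_out).iter("port"):
        svc = port.find("service")
        if svc is not None:
            results.append({
                "port": port.get("portid"),
                "service": svc.get("name"),
                "product": svc.get("product"),
                "version": svc.get("version"),
            })
    return results

# Example: fingerprint("scanme.nmap.org")
```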
Additionally, we monitor signals across the internet using a network of sensors that spans three continents. We operate one of the world's largest networks of sinkholes and honeypots to capture malware signals, and we further enrich our data set with commercial and open-source intelligence sources.
We supplement our data collection with external feeds from public and commercial data sources. These additional data-gathering methods help produce issue types related to leaked data.
Attribution
At this stage, we associate the collected signals with IP addresses or related domains, which we then match to an organization based on its Digital Footprint. We use a number of reliable sources, such as DNS lookups, to make attribution as accurate as possible.
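As a simplified illustration of one such source, the sketch below uses Python's standard library to forward-resolve a known domain and reverse-resolve the observed IPs. Real attribution combines many more signals, and the domain shown is a placeholder.

```python
# A simplified sketch of DNS-based attribution: forward-resolve an
# organization's known domains and reverse-resolve the observed IPs.
# Real attribution combines many more sources; the domain below is a
# placeholder.
import socket

def forward_ips(domain: str) -> set[str]:
    """IPv4 addresses that a domain resolves to."""
    return {info[4][0] for info in socket.getaddrinfo(domain, None, socket.AF_INET)}

def reverse_name(ip: str) -> str | None:
    """PTR hostname for an IP, if one exists."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return None

known_domains = {"example.com"}             # from the Digital Footprint
observed_ips = forward_ips("example.com")   # signals collected at scan time
for ip in observed_ips:
    print(ip, "->", reverse_name(ip))
```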
We also encourage you to validate these attributions by claiming and refuting assets, and even by adding assets to your Digital Footprint.
Signal analysis
We use a suite of analytics tools developed by our threat researchers, data scientists, and engineers to derive issue findings and other key insights from the signals we collect. Examples of analysis include:
- Identification of malware strains and characterization of their behavior and threat level
- Identification of CVEs and other vulnerabilities based on examination of digital asset identifiers in HTTP header data, website code bases, communication protocols, Secure Sockets Layer (SSL) certificates, and more (a simplified version-matching sketch follows this list)
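Here is a simplified sketch of that kind of banner-based matching: it reads a product and version from an HTTP Server header and checks it against a lookup table. The table contents and CVE ID are placeholders, not a real vulnerability feed.

```python
# A simplified sketch of banner-based vulnerability matching. The
# lookup table and CVE ID below are placeholders, not a real feed.
import re

# Hypothetical table: (product, vulnerable-version prefix) -> CVE IDs
VULNERABLE = {
    ("nginx", "1.18."): ["CVE-XXXX-YYYY"],   # placeholder ID
}

def match_cves(server_header: str) -> list[str]:
    """Extract product/version from a Server header and match known CVEs."""
    m = re.match(r"([\w-]+)/([\d.]+)", server_header)
    if not m:
        return []
    product, version = m.group(1).lower(), m.group(2)
    return [cve
            for (prod, prefix), cves in VULNERABLE.items()
            if prod == product and version.startswith(prefix)
            for cve in cves]

print(match_cves("nginx/1.18.0"))  # ['CVE-XXXX-YYYY']
```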
We also apply machine-learning algorithms to improve the quality and accuracy of security findings and provide key insights on security posture.
Tip: Learn more about how we gather and process signal data in the Signal Processing Workflow section of the scoring methodology white paper.
Our scoring methodology
Issue types are primary components of your score calculation, but other important considerations and adjustments help ensure that the calculation is as fair and accurate as possible.
Size normalization
A small or mid-size organization has fewer IPs than a large enterprise, so it tends to have fewer findings. That does not mean it is more secure than the enterprise.
Our scoring methodology uses a logarithmic scale, where each increment corresponds to a multiple of 10; the Richter and decibel scales are based on similar approaches. For every issue type, we generate scatter plots in which each of the more than 12 million organizations we score is a point, capturing how the number of findings for a given issue varies with organization size.
For example, one organization has three findings for the DNS open resolver issue type. Based on our analysis of more than 12 million organizations, only 12 percent of organizations of comparable size have this security flaw. Among those organizations, the average number of findings is two, while this organization has three, which is worse than average.
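The peer comparison can be sketched as follows. The peer data below is synthetic, and the single log10 size bucket is a simplification of the scatter-plot analysis described above; real comparisons draw on more than 12 million scored organizations.

```python
# Illustration of size-normalized comparison on a log scale, using the
# DNS open resolver example above. The peer data is synthetic.
import math

def size_bucket(ip_count: int) -> int:
    """Group organizations by order of magnitude of footprint size."""
    return int(math.log10(max(ip_count, 1)))

# Synthetic peers: (footprint size in IPs, open-resolver finding count)
peers = [(120, 0), (95, 0), (210, 2), (150, 0), (180, 2), (130, 0), (160, 0), (140, 2)]
org = (170, 3)  # 3 findings, as in the example above

bucket = size_bucket(org[0])
same_size = [n for size, n in peers if size_bucket(size) == bucket]
affected = [n for n in same_size if n > 0]

print(f"prevalence among peers: {len(affected) / len(same_size):.0%}")
print(f"mean findings among affected peers: {sum(affected) / len(affected):.1f}")
print(f"this organization: {org[1]} (worse than average)")
```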
Calibration
We apply a calibration algorithm for every scored issue type, using data collected over a two-month period to smooth out statistical fluctuations. This ensures fair performance comparisons for organizations of similar size.
Calculation of issue, factor, and overall scores
We calculate scores for issues using a modified "z-score", where z = 0 if no findings are present, and z = 1 when the number of findings equals the mean for organizations with the same size Digital Footprint.
To calculate each factor score, we first compute a raw total score by summing the z-scores associated with issue findings, each multiplied by its weight.
After calculating the raw total score, we scale it based on the expected value of issue finding counts, so that an organization is scored fairly against others with similar Digital Footprint sizes. Informational and positive issues do not contribute to the score.
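A sketch of this roll-up appears below. The white paper defines the modified z-score precisely; the log-ratio form used here is only an assumption chosen to satisfy the two anchor points stated above (z = 0 with no findings, z = 1 at the peer mean), and the weights and scaling constant are likewise illustrative.

```python
# Sketch of the issue-to-factor score roll-up. The log-ratio form of
# the modified z-score is an assumption that satisfies the two anchors
# stated above (z = 0 at zero findings, z = 1 at the peer mean); the
# weights and scaling constant are also illustrative.
import math

def modified_z(findings: int, peer_mean: float) -> float:
    """0 with no findings; 1 when findings equal the peer mean (assumed form)."""
    if findings == 0:
        return 0.0
    return math.log(1 + findings) / math.log(1 + peer_mean)

def factor_score(issues: list[tuple[int, float, float]], scale: float = 20.0) -> float:
    """Raw total = sum of weighted z-scores, then scaled onto 0-100.

    issues: (finding_count, peer_mean, severity_weight) per issue type.
    The linear scaling here is illustrative, not the production model.
    """
    raw = sum(w * modified_z(n, mean) for n, mean, w in issues)
    return max(0.0, 100.0 - scale * raw)

# Two issue types: one at the peer mean, one with no findings.
print(factor_score([(2, 2.0, 1.5), (0, 1.0, 3.0)]))  # 70.0
```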
Tip: Learn more about our scoring methodology in the white paper.
Breach penalties
For the scoring impact of breach penalties, see Understand how breaches affect your score.
Updates and recalibrations
We update factor and total scores daily. We also calculate and update modified z-scores daily for every organization and every issue type on the SecurityScorecard platform. This keeps score volatility inherently low: if an organization's Digital Footprint and issue counts are stable, its security score remains unchanged.
Additionally, SecurityScorecard recalibrates its scoring algorithm every month. See our scoring update release notes for more information.
Maintaining a regular scoring update cadence enables us to preserve fair cybersecurity risk ratings in a dynamic threat environment and to introduce new issue types as needed, to keep you better informed about threats to your organization and vendor ecosystem.
Validation
SecurityScorecard's scoring algorithm has passed rigorous internal verification and validation testing, in which we determine whether the algorithm's outputs are consistent with its inputs. We subject the algorithm to a battery of statistical tests, including edge cases, to verify its accuracy and stability.
This testing determines whether the scoring algorithm satisfies its intended use as a cybersecurity risk assessment tool: in other words, whether low scores correlate with a higher likelihood of an adverse event.
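As an illustration of what such a check might look like, the sketch below runs a point-biserial correlation between synthetic scores and breach outcomes. Both the data and the choice of test are assumptions for illustration, not SecurityScorecard's actual validation battery.

```python
# Illustration of the kind of validation check described above: testing
# whether lower scores correlate with a higher likelihood of an adverse
# event. The data is synthetic and the test choice is an assumption.
from scipy.stats import pointbiserialr

scores   = [95, 91, 88, 82, 74, 71, 66, 58, 55, 49]   # synthetic scores
breached = [ 0,  0,  0,  0,  0,  1,  0,  1,  1,  1]   # 1 = sustained a breach

r, p_value = pointbiserialr(breached, scores)
print(f"correlation: {r:.2f} (p = {p_value:.3f})")
# A strongly negative r indicates lower scores accompany more breaches.
```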