Cybersecurity assessment initiatives and frameworks abound in the US government, the most important being the Federal Information Security Management Act (FISMA), passed in 2002. The law’s broad scope included a mandate to the US National Institute of Standards and Technology (NIST), charging it to create methods and standards for assessing and improving the cybersecurity posture of US government agencies. NIST’s flagship methodology, the Risk Management Framework (RMF, or DIARMF in the DoD), is comprehensive and fundamentally sound. However, years of experience have exposed flaws in the RMF. Some stem from poor adoption and execution, some from unintended consequences, and others from the relentless pace of technological innovation.
Here are some of the problems I have witnessed in my years of running cybersecurity programs for the Federal government.
1. Conflicts of interest
Government agencies typically pay a systems integrator to assess their security posture. This arrangement can put the contractor in a difficult position: it must discover and document weaknesses in systems or business processes that might embarrass the agency paying it. As a result, there can be pressure to minimize or ignore security problems.
2. Plan of Action and Milestones (POA&M) abuse
Security assessors document deficiencies in a set of Plans of Action and Milestones, or POA&Ms. A POA&M includes a description of the problem and estimates of the cost and schedule required to remediate it. When the deadlines pass, there is typically no action: an administrator simply edits the due date to push it back, and problems remain unresolved for years. In one case, I insisted on fixing an issue that had been open for over seven years yet took only 24 hours to address.
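This slippage pattern is easy to catch programmatically. Below is a minimal sketch, in Python, of how an agency might flag repeatedly extended POA&Ms for escalation; the record fields and the `max_extensions` threshold are hypothetical illustrations, not part of any NIST schema.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class POAM:
    """One Plan of Action and Milestones entry (fields are illustrative)."""
    description: str
    estimated_cost: float
    original_due_date: date
    due_date_history: list[date] = field(default_factory=list)  # every revised deadline


def flag_slipping_poams(poams: list[POAM], max_extensions: int = 2) -> list[POAM]:
    """Return POA&Ms whose deadline has been pushed back more than max_extensions times."""
    return [p for p in poams if len(p.due_date_history) > max_extensions]


# Example: an item opened years ago whose due date keeps moving.
stale = POAM(
    description="Disable legacy FTP service",
    estimated_cost=500.0,
    original_due_date=date(2015, 1, 31),
    due_date_history=[date(2016, 1, 31), date(2018, 6, 30), date(2021, 12, 31)],
)
for p in flag_slipping_poams([stale]):
    print(f"ESCALATE: '{p.description}' extended {len(p.due_date_history)} times")
```

A report like this, run monthly against the POA&M database, would make perpetual deadline-editing visible to leadership instead of letting it disappear into the tracking tool.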
3. Excessive emphasis on compliance and burdensome documentation
To adhere to the RMF, agencies and assessors must create, review, and track enormous amounts of documentation. The work is difficult or impossible to automate, and can easily consume up to 70% of an agency’s overall security budget.
4. Risk scoring: quantitative vs. qualitative
Assessors examine systems to determine the overall risk of intrusion. Each weakness receives a separate risk score, but the scores are inconsistent: though notated quantitatively, they are actually qualitative judgments, which creates a false sense of mathematical precision.
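A tiny worked example makes the false precision concrete. The sketch below uses a hypothetical label-to-number mapping (not any RMF formula) to show how averaging ordinal assessor judgments produces a decimal “score” that looks far more exact than its inputs.

```python
from statistics import mean

# Ordinal labels assigned by assessors, mapped to numbers for "scoring".
LABEL_TO_SCORE = {"low": 1, "moderate": 2, "high": 3}

findings = ["low", "high", "moderate", "high"]
scores = [LABEL_TO_SCORE[f] for f in findings]

# Averaging ordinal judgments yields a number like 2.25 that looks precise,
# but the underlying inputs are subjective categories, not measurements.
print(f"System risk score: {mean(scores):.2f}")  # -> 2.25
```

Nothing about the inputs justifies two decimal places of accuracy, yet numbers like these routinely drive comparisons between systems and funding decisions.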
5. Categorization: high-water mark
Every government system undergoes an impact assessment and receives an associated category: high, moderate, or low. Under the high-water mark rule, the highest impact level among confidentiality, integrity, and availability sets the category for the entire system, which must then meet every security requirement at that level. This makes sense on the surface, but it can result in needless expense. For instance, a system whose data demands “high” confidentiality but has no real uptime requirement may still be forced to buy all of the failover systems and equipment needed to stay online 24/7.
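The rule itself is simple, as the short sketch below illustrates: the overall category is the maximum of the three FIPS 199 impact levels, so a system that is “high” only for confidentiality is still categorized “high” overall, pulling in availability controls it may not need.

```python
# FIPS 199 impact levels, ordered low < moderate < high.
LEVELS = ["low", "moderate", "high"]


def high_water_mark(confidentiality: str, integrity: str, availability: str) -> str:
    """Overall categorization is the maximum of the three impact levels."""
    return max((confidentiality, integrity, availability), key=LEVELS.index)


# Sensitive data but no uptime requirement: the system is still "high"
# overall, dragging in high-availability controls such as failover gear.
print(high_water_mark(confidentiality="high", integrity="moderate", availability="low"))
# -> "high"
```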
6. Impact assessment: focus on organization, not victim
The impact assessment is a formal procedure and document that evaluates the effect of a breach on the agency. Again, this approach seems reasonable at first blush, but it too often ignores or minimizes the impact on individuals whose personal data resides on the government system.
So, while standards and regulations are important aspects of cybersecurity, experience shows they can be abused or misused, and consume a disproportionate share of the government cybersecurity budget.