Introduction to the VA Risk Management Controversy

In the realm of federal cybersecurity, few issues illustrate the ongoing struggle to implement effective security frameworks as clearly as the controversy surrounding the U.S. Department of Veterans Affairs (VA). What began as a leaked report concerning internal authorization procedures quickly evolved into a broader conversation about governance, accountability, and the proper roles within risk management. While the timing of these revelations—coinciding with scheduled congressional hearings—suggests a political undertone, the deeper concern lies in how federal agencies interpret and implement critical security frameworks like the NIST Risk Management Framework (RMF).

This incident highlights a recurring challenge in federal IT security: the disconnect between compliance and true risk management. Instead of focusing solely on signatures and checkboxes, agencies must embrace ongoing evaluation, accountability, and real-time threat awareness. The VA’s situation reveals not just a momentary lapse but the broader cultural and structural hurdles that agencies face when navigating cybersecurity mandates.

Misinterpretation of Authority in Authorization Decisions

One of the most striking aspects of the VA situation is the misallocation of decision-making authority. Reports indicate that security staff at the agency were pressured to sign Authorization to Operate (ATO) documents for systems that had not fully completed certification processes. This practice runs contrary to federal guidelines, particularly those outlined in OMB Circular A-130, which explicitly states that security personnel should not be the ones authorizing systems.

The intent of this directive is to separate risk assessment from risk acceptance. Security officers, like the Chief Information Security Officer (CISO), are tasked with evaluating the threat landscape, identifying system vulnerabilities, and advising leadership on potential impacts. However, the decision to accept residual risk and move forward with operations belongs to an executive or program owner responsible for the system’s mission function. This division ensures objectivity in risk analysis and protects the integrity of the authorization process.

When security staff become responsible for both assessing and accepting risk, the objectivity of the process collapses. It puts undue pressure on individuals whose role is to report honestly and without influence. Furthermore, it compromises accountability. If a breach occurs, the blame can be unfairly distributed, diluting responsibility and undermining trust in the system.

ATO as a Symbol Rather Than a Process

Another problematic dynamic revealed in this situation is the symbolic weight the ATO has taken on. Rather than being treated as part of a living, adaptive risk management process, the ATO is often viewed as a final stamp of approval. This misinterpretation reduces it to a bureaucratic checkbox—something that must be obtained to meet compliance requirements, regardless of a system’s actual security posture.

In practice, the ATO should signify that a system has been evaluated in the context of current threats and vulnerabilities and that the appropriate risk owner has consciously accepted any residual risk. However, when system owners treat the ATO as a defensive shield against criticism, the actual purpose is lost. Likewise, when auditors focus on the age of an ATO rather than the health of the system, the emphasis shifts from meaningful security to superficial metrics.

This confusion breeds a dangerous mindset. Rather than encouraging collaboration between security professionals and mission owners, it promotes a “get the paper signed” mentality. The consequence is that systems may go live with known issues simply because the process demands a signed document rather than actual mitigation or risk reduction.

The Misleading Metrics of POAMs

Closely tied to the ATO fixation is the reliance on the number of unresolved items in the Plan of Action and Milestones (POAM) as a measure of system security. Congressional bodies, auditors, and even internal stakeholders often interpret a long POAM as a sign of poor security and pressure agencies to “close items” as quickly as possible. While well-intentioned, this approach misunderstands the true nature of risk.

Risk is not a function of how many items are listed on a POAM but of the severity and duration of those items. A system with 100 low-impact issues under active management may be in far better shape than one with two unresolved critical vulnerabilities that have been left unaddressed for months. By focusing on quantity over quality, agencies may rush to close low-priority tasks to show progress while ignoring the most serious problems.
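The distinction above can be sketched as a severity- and duration-weighted score. The weights below are hypothetical and purely illustrative; a real program would derive them from CVSS scores or an agency-defined risk model.

```python
from dataclasses import dataclass

# Hypothetical severity weights (illustration only); a real program
# would calibrate these against CVSS or an agency risk model.
SEVERITY_WEIGHT = {"low": 1, "moderate": 5, "high": 25, "critical": 100}

@dataclass
class PoamItem:
    severity: str   # "low" | "moderate" | "high" | "critical"
    days_open: int  # exposure duration

def risk_score(items):
    """Weight each open item by severity and exposure duration,
    rather than simply counting open items."""
    return sum(SEVERITY_WEIGHT[i.severity] * max(i.days_open, 1) for i in items)

# 100 low-impact items under active management (open ~10 days each)
managed = [PoamItem("low", 10) for _ in range(100)]
# 2 critical items left unaddressed for six months
neglected = [PoamItem("critical", 180), PoamItem("critical", 180)]

print(risk_score(managed))    # 100 * 1 * 10  = 1000
print(risk_score(neglected))  # 2 * 100 * 180 = 36000
```

Under this toy model, the system with two neglected critical findings scores far worse than the one with a hundred actively managed low-impact findings, which is exactly the conclusion a raw item count obscures.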

Moreover, this pressure can result in a misleading sense of security. When success is measured by how many POAM items have been cleared, agencies may become risk-averse and avoid documenting certain vulnerabilities altogether. This suppression of information undermines the risk management process and creates blind spots in system security.

Continuous Monitoring: The Key to Real Risk Management

Amid these structural and cultural issues, continuous monitoring stands out as a transformative approach to cybersecurity. Unlike traditional risk management, which operates on fixed cycles and periodic assessments, continuous monitoring embraces real-time data, dynamic threat evaluation, and ongoing system assessment. It is the difference between inspecting a bridge every five years and placing sensors that detect stress and corrosion as they occur.

The VA has taken notable steps toward implementing continuous monitoring and integrating it into its policy framework. This indicates a shift toward a more resilient and responsive security posture. By moving away from static, document-driven processes toward adaptive, real-time awareness, the agency is laying the groundwork for a more effective security program.

Still, the transition to continuous monitoring must be accompanied by a cultural shift. Leaders must stop treating compliance artifacts as end goals and instead view them as part of a broader, ongoing effort to manage risk. Continuous monitoring is not just about deploying tools—it requires investment in people, processes, and training to make sense of the data and respond effectively.

Organizational Culture and the Compliance Trap

One of the most persistent barriers to effective risk management in government agencies is the organizational culture surrounding compliance. Over the years, federal agencies have developed rigid processes that emphasize policy adherence, documentation, and audit readiness. While these practices are important, they often overshadow the more fluid and responsive aspects of cybersecurity.

The term “compliance-itis” is sometimes used to describe this condition—an obsessive focus on passing audits, meeting regulatory benchmarks, and maintaining paperwork, often at the expense of practical security measures. When compliance becomes the goal, rather than a tool to achieve security, agencies lose sight of the real threats.

This culture can also create perverse incentives. For instance, a CISO who reports too many issues may be viewed as ineffective or alarmist. System owners may be reluctant to disclose vulnerabilities for fear of retribution. Auditors may reward neat documentation over honest risk assessments. In this environment, the true goal of cybersecurity—protecting systems and data from harm—can become secondary to avoiding negative attention.

To move past this mindset, agencies must cultivate a culture that values transparency, honesty, and shared responsibility. Risk management must be seen as a partnership between security professionals, program managers, and executive leadership. Everyone must have a stake in the outcome and understand their role in maintaining security.

Balancing Mission and Security Objectives

Another important theme in the VA controversy is the tension between mission objectives and security priorities. Federal systems are not built in a vacuum; they support real services, deliver critical functions, and often serve vulnerable populations. In the case of the VA, these systems impact veterans’ access to healthcare, benefits, and support.

When mission delivery and security controls come into conflict, decision-makers must weigh the consequences of delay, disruption, or denial of service. The risk management process is meant to provide a structured way to make those decisions, ensuring that risks are understood, accepted, and managed—not ignored or hidden.

However, this balance cannot be achieved if those making the decisions lack the authority or information needed to make informed choices. Security staff must be empowered to report issues honestly. Program managers must be trained to understand risk in context. Executives must be willing to accept responsibility for the decisions made. When roles and responsibilities are unclear or improperly aligned, the entire process breaks down.

Toward a More Mature Risk Management Framework

The challenges exposed by the VA’s experience are not unique. Many federal agencies struggle with similar issues—confusion over roles, fixation on paperwork, pressure to close findings, and a compliance-driven culture. What’s needed is a maturing of the risk management framework, one that embraces the original intent of NIST’s RMF but adapts it to the real-world complexities of federal operations.

This maturation involves several key elements:

  • Clear separation of duties between those who assess risk and those who accept it

  • Empowerment of security staff to provide honest, independent evaluations

  • Integration of continuous monitoring tools and practices across the system lifecycle

  • Education and training for mission owners and executives on cybersecurity principles

  • Cultural change that prioritizes actual security over appearances

The risk management framework must not be reduced to a set of forms or templates. It must become a living, breathing process that guides decision-making, adapts to emerging threats, and ensures that every system operates with a known, understood, and accepted level of risk.

Learning from the VA Experience

The controversy at the Department of Veterans Affairs offers more than just a momentary scandal; it serves as a case study in how federal cybersecurity can go wrong when policy is misinterpreted, roles are confused, and compliance becomes the goal. It also provides valuable lessons on how agencies can do better.

By reaffirming the proper roles in the authorization process, rejecting superficial metrics like POAM counts, embracing continuous monitoring, and fostering a culture of transparency, federal agencies can begin to shift away from reactive, compliance-driven security toward proactive, risk-informed decision-making.

These changes are not easy. They require leadership commitment, structural reform, and persistent effort. But the alternative—continued reliance on flawed processes and symbolic gestures—leaves systems vulnerable and citizens at risk. If the VA and other agencies can seize this moment to reflect, reform, and recommit to true risk management, the federal cybersecurity landscape will be stronger for it.

Historical Context of Federal Cybersecurity Practices

To understand the VA controversy in a broader sense, it is important to explore the history of federal cybersecurity practices. For decades, federal agencies operated in a compliance-first environment where the focus was on meeting policy mandates rather than reducing actual risk. This culture of form-over-function led to security programs being heavily documentation-driven, often prioritizing audit checkboxes instead of proactive security.

Agencies like the VA, responsible for large-scale and sensitive information systems, were particularly vulnerable to the limitations of this model. As cyber threats became more sophisticated and persistent, the outdated processes of infrequent reviews and static certifications left many systems exposed. The environment created by legacy federal cybersecurity structures fostered a reactive rather than proactive mindset, making it difficult to adapt to the modern risk landscape.

The introduction of the NIST Risk Management Framework (RMF) was intended to change that by shifting the focus from static compliance to continuous, dynamic risk assessment. But implementing RMF requires more than policy changes—it demands shifts in governance, accountability, culture, and technology adoption. The VA case shows how these deeper reforms remain incomplete across many federal institutions.

The Role of Auditors and Oversight Entities

Oversight is critical in ensuring public accountability, especially in departments that manage taxpayer-funded programs and sensitive data. However, the methods by which agencies are audited often add unintended pressure on security teams. In many cases, auditors fixate on easily quantifiable metrics like the number of open POAM items or the age of an ATO, ignoring whether the system’s actual security posture has improved.

This behavior is not surprising. Quantitative metrics provide a sense of objectivity and progress. A drop in open findings or a newly signed ATO looks good in a report or hearing. But these metrics do not always correlate with improved cybersecurity outcomes. Instead, they can create perverse incentives where agencies focus on appearance rather than effectiveness.

For instance, some organizations may rush to close low-impact POAM items while leaving more complex or critical vulnerabilities unaddressed. Others may delay reporting new findings for fear of increasing their “risk score.” Ultimately, these behaviors distort the purpose of risk management and lead to decisions that serve audits rather than security.

To break this cycle, oversight entities must evolve their evaluation criteria. They should prioritize risk-informed decision-making over raw numbers and seek evidence of continuous improvement and operational maturity. Security should be judged not just on whether paperwork is complete, but on whether systems are resilient, adaptable, and responsive to current threats.

Why Separation of Duties Is Crucial

The confusion at the VA regarding who should authorize systems is not merely a procedural issue—it’s a fundamental breakdown in governance. When security professionals are asked to sign ATOs, they are placed in an impossible position. Their role is to assess risk, not to accept it. Accepting risk requires understanding the business mission, legal obligations, and the broader consequences of system failure—factors that security staff are not always positioned to weigh effectively.

This is why federal policy makes it clear that authorizing officials must be individuals who own the business process the system supports. They must understand how a system’s availability, confidentiality, or integrity impacts operations and services. Only then can they make an informed decision about whether the level of residual risk is acceptable.

Separating duties between security assessors and risk acceptors serves two important functions. First, it preserves the objectivity and integrity of the risk assessment process. Second, it creates accountability for security decisions. If something goes wrong, it is clear who accepted the risk and why.

Without this separation, organizations may experience finger-pointing, diluted responsibility, and decision-making driven more by expediency than strategy. Agencies must reinforce this principle in training, policy implementation, and enforcement practices to ensure decisions are made responsibly and transparently.

The Flawed Relationship Between POAMs and Perceived Risk

Federal cybersecurity culture has long treated the Plan of Action and Milestones (POAM) as the central record of risk remediation efforts. While POAMs are a valuable management tool, they have been widely misunderstood and misused. The number of open POAM items is frequently treated as a proxy for the overall risk level of a system. This is a serious oversimplification.

Not all POAMs are created equal. A POAM that addresses a minor system setting is not equivalent to one that addresses a critical vulnerability with known exploits. Yet both may be counted the same in an audit or report. This kind of misinterpretation skews priorities and misrepresents risk.

In reality, the most secure systems may be the ones with the most POAM entries—because their issues have been properly discovered, documented, and tracked for remediation. Conversely, a system with zero POAMs might simply be failing to perform adequate assessments.

Risk should be evaluated in terms of severity and exposure duration. Agencies need tools and processes that assign appropriate weight to the impact of vulnerabilities, rather than focusing solely on their count. Security management must emphasize meaningful remediation, not cosmetic fixes.

The Dangers of “Compliance-itis”

The VA’s situation is a textbook example of what happens when compliance replaces security as the driving force of cyber programs. In many agencies, success is defined by audit outcomes rather than breach prevention. This culture, often referred to as “compliance-itis,” rewards the wrong behaviors and punishes the right ones.

Security teams may hesitate to document known vulnerabilities out of fear that doing so will make the organization appear vulnerable. Program managers may prioritize speedy ATO renewals over addressing lingering security issues. In the worst cases, findings are downplayed or ignored simply to ensure a positive audit result.

This mindset not only weakens security but also erodes trust. Leaders may become skeptical of their security staff, fearing that good news has been manufactured. Security professionals may become frustrated by pressure to prioritize appearance over substance. And ultimately, the agency becomes more vulnerable, not less.

Breaking out of this cycle requires a redefinition of success. Agencies should be recognized for identifying and addressing risk, not punished for being transparent. Performance evaluations, funding, and oversight should reflect a commitment to continuous improvement, not just regulatory satisfaction.

Implementing Continuous Monitoring Beyond Policy

Adopting a policy of continuous monitoring is not the same as operationalizing it. While many agencies—including the VA—have stated their intent to shift toward continuous monitoring, actual implementation is still inconsistent. Continuous monitoring means integrating real-time data collection, analysis, and response into the daily operations of IT and cybersecurity teams.

This requires more than deploying technical tools. It involves automating vulnerability scanning, correlating threat intelligence, monitoring user behavior, and flagging anomalies—all in near real-time. These tools must be tied into workflows that ensure findings are triaged and addressed promptly.
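As an illustration of tying findings into triage workflows, the sketch below buckets monitoring findings into response queues by severity. The queue names and escalation rules are assumptions for illustration, not any agency's actual policy.

```python
from collections import defaultdict

# Hypothetical triage rules: route each finding to a response queue
# based on severity, so nothing sits in an undifferentiated backlog.
ESCALATION = {"critical": "immediate", "high": "24h",
              "moderate": "weekly", "low": "backlog"}

def triage(findings):
    """Bucket raw monitoring findings into response queues."""
    queues = defaultdict(list)
    for finding in findings:
        queues[ESCALATION[finding["severity"]]].append(finding["id"])
    return dict(queues)

alerts = [
    {"id": "CVE-A", "severity": "critical"},
    {"id": "MISCONF-1", "severity": "low"},
    {"id": "CVE-B", "severity": "high"},
]
print(triage(alerts))
# {'immediate': ['CVE-A'], 'backlog': ['MISCONF-1'], '24h': ['CVE-B']}
```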

But even with technology in place, continuous monitoring will fail without trained personnel who understand how to interpret the data and take appropriate action. It requires cybersecurity professionals who can distinguish signal from noise, prioritize threats, and communicate effectively with business owners.

Investing in workforce development is just as critical as investing in monitoring tools. Agencies must ensure that staff understand how to use continuous monitoring data to support risk management decisions, not just to populate dashboards. This shift will take time, but it’s essential for achieving operational security in a constantly evolving threat environment.

Realigning Executive Leadership with Cybersecurity Goals

Many of the root issues exposed by the VA incident stem from a lack of cybersecurity awareness among agency leadership. Program executives may not fully understand the implications of cybersecurity decisions. They may delegate too much responsibility to technical staff or fail to ask the right questions about risk.

Cybersecurity must be seen as a strategic issue, not just a technical one. Agency leaders need to be educated on the basics of risk management, the meaning of residual risk, and the importance of making informed authorization decisions. Only then can they serve as effective authorizing officials.

Some agencies have begun to address this by incorporating cybersecurity training into executive development programs. Others are creating cybersecurity steering committees that include leadership from across the organization. These steps help integrate cybersecurity into the broader mission and create alignment between operational and technical priorities.

The goal is to empower leaders to take ownership of security decisions rather than viewing them as someone else’s responsibility. When executives are engaged, informed, and accountable, security becomes a shared objective rather than a siloed function.

Making Authorization a Process, Not a Point-in-Time Event

Perhaps the most valuable lesson from the VA controversy is that authorization should not be treated as a one-time event. Security is not static, and neither should the decision to operate a system be. Yet the way ATOs have traditionally been handled suggests otherwise. Once a system is “authorized,” it is often assumed to be secure until the next review period—usually years later.

This mindset is outdated and dangerous. Authorization should be an ongoing process supported by continuous monitoring, regular assessments, and dynamic risk evaluation. The ATO should mark the beginning of a system’s operational life, not the end of its security review.

This continuous approach requires formal procedures for ongoing authorization. Agencies must define thresholds for risk that, if exceeded, trigger reauthorization. They must schedule periodic reviews of system security postures and ensure that monitoring data is used to update the risk profile of each system regularly.
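A minimal sketch of such a trigger follows. The threshold values and condition names are hypothetical; real thresholds would be defined in agency policy.

```python
# Hypothetical reauthorization triggers (illustration only).
REAUTH_TRIGGERS = {
    "max_risk_score": 500,         # aggregate severity-weighted score
    "max_critical_open_days": 30,  # oldest unmitigated critical finding
}

def needs_reauthorization(risk_score, oldest_critical_days):
    """Return True when monitoring data indicates the system has
    drifted past its accepted risk posture."""
    return (risk_score > REAUTH_TRIGGERS["max_risk_score"]
            or oldest_critical_days > REAUTH_TRIGGERS["max_critical_open_days"])

print(needs_reauthorization(risk_score=200, oldest_critical_days=10))  # False
print(needs_reauthorization(risk_score=200, oldest_critical_days=45))  # True
```

The point of the sketch is that the trigger is evaluated continuously against live monitoring data, not once every few years against a stale document.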

By treating authorization as a living process, agencies can make more informed decisions and reduce the likelihood of systems operating with unknown or unacceptable risk.

Laying the Groundwork for Structural Change

The VA case is not just a cautionary tale—it’s an opportunity for transformation. The problems it reveals are not new, but the attention it brings may help catalyze reform across other federal agencies. By confronting the issues of role confusion, compliance-driven metrics, superficial audits, and executive disengagement, agencies can begin to rebuild cybersecurity from the ground up.

Structural change requires more than updated policies. It demands a cultural shift that values transparency, prioritizes risk-informed decision-making, and aligns mission needs with security realities. It also requires sustained investment in workforce development, technology modernization, and leadership training.

Only by addressing these core issues can federal agencies move beyond compliance and achieve true cybersecurity maturity. The path forward is difficult but necessary. The alternative—maintaining the status quo—leaves systems, data, and citizens at risk.

Transforming Federal Cybersecurity Through Practical Action

Building on the lessons from the VA’s authorization controversy and broader federal cybersecurity struggles, it becomes clear that systemic transformation is not just needed—it’s overdue. Moving from a compliance-heavy, documentation-focused approach toward an integrated, risk-aware, continuously adaptive security posture requires bold changes at all levels of federal agencies. To turn these challenges into opportunities, agencies must take actionable steps in governance, operations, training, policy interpretation, and cultural development.

This final section outlines practical, forward-thinking strategies designed to improve federal risk management practices. These are not merely aspirational ideas but necessary components of a sustainable cybersecurity program that can keep pace with evolving threats and protect mission-critical systems.

Clarify Roles and Responsibilities Across the Enterprise

One of the foundational problems revealed in the VA case is the blurring of lines between those who assess risk and those who are authorized to accept it. The first step toward maturing federal risk management is to clearly define and reinforce the roles involved in the Risk Management Framework (RMF).

CISOs and security personnel must be empowered to conduct objective, independent risk assessments. Their reports should carry weight and visibility without being diluted by political or organizational pressure. Meanwhile, program managers and executives who understand the system’s business functions must be the ones responsible for accepting or rejecting residual risk.

This separation must be formally institutionalized in agency governance structures. It should be supported by updated internal policies, job descriptions, and training modules. Oversight should include audits of not only technical controls but also of governance adherence to role separation, ensuring that authorization decisions are made correctly and consistently.

Rebuild the Value and Purpose of the ATO

The Authorization to Operate (ATO) must be reclaimed from its current role as a symbolic piece of paper. It should no longer be treated as a static certification that guarantees immunity from scrutiny or signals a finish line in the security process.

To achieve this, agencies should adopt the concept of ongoing authorization. This model relies on continuous monitoring data to ensure that authorization remains valid throughout the system’s lifecycle. Rather than renewing an ATO on a fixed three-year cycle, agencies must assess risk dynamically and trigger reauthorization whenever security thresholds are crossed.

This approach also means reframing how the ATO is communicated. Instead of being the outcome of a compliance checklist, it should be a risk statement—an expression of informed acceptance of specific residual risks under current conditions. This clarity ensures that all stakeholders understand what is at stake and fosters a more transparent, accountable security culture.

Reform the POAM Management Process

The misuse of Plans of Action and Milestones (POAMs) has led to a distorted view of risk. Agencies must redesign how POAMs are created, categorized, prioritized, and resolved. Rather than measuring success by the number of items closed, focus must shift to the impact of the issues addressed and the overall reduction in system risk exposure.

This begins with enhancing the way POAMs are written. Each POAM should include a clear risk rating, severity, and expected impact if left unaddressed. This makes prioritization easier and ensures that critical vulnerabilities are resolved before cosmetic issues.

POAM dashboards and metrics should be reoriented to reflect severity-weighted progress. For example, closing five high-impact items should count more than closing twenty low-impact ones. Managers should be held accountable not just for POAM completion, but for the risk-reduction outcomes those closures produce.
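The severity-weighted progress metric described above might be sketched as follows. The closure weights are hypothetical and would need calibration against an agency's own risk model.

```python
# Hypothetical closure weights (illustration only).
CLOSURE_WEIGHT = {"low": 1, "moderate": 3, "high": 10, "critical": 25}

def weighted_progress(closed_items):
    """Score remediation progress by the severity of what was fixed,
    not by the raw count of closed items."""
    return sum(CLOSURE_WEIGHT[severity] for severity in closed_items)

# Closing five high-impact items outweighs closing twenty low-impact ones.
print(weighted_progress(["high"] * 5))  # 50
print(weighted_progress(["low"] * 20))  # 20
```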

Advance Continuous Monitoring to Operational Maturity

Many agencies have adopted continuous monitoring policies, but few have embedded them deeply into day-to-day operations. For continuous monitoring to be effective, it must become more than just automated scanning—it must be a strategic practice tied to decision-making, authorization maintenance, and real-time threat intelligence integration.

Agencies must ensure that monitoring tools are feeding relevant, actionable data into operational workflows. Dashboards should be accessible to both technical staff and business executives, translated into language that enables risk-informed decisions.

This also requires refining thresholds for alerts and responses. Not every vulnerability needs immediate escalation, but high-severity findings must trigger clear escalation paths. Agencies should define conditions under which reauthorization is required or when mission-impacting actions need to be taken.

Most importantly, monitoring must inform leadership. Data collected should feed into regular briefings with decision-makers, enabling them to see how their systems are performing and where strategic investments in security may be required.

Align Oversight and Audit Functions with Modern Risk Principles

Oversight bodies and auditors play an influential role in shaping agency behavior. If they continue to focus on outdated compliance metrics, agencies will respond accordingly—by optimizing for audits rather than actual security.

Therefore, oversight entities need to evolve. Instead of asking how many POAMs are open or when the last ATO was signed, auditors should ask questions like:

  • How is risk being tracked and communicated across the enterprise?

  • What actions are being taken in response to continuous monitoring alerts?

  • How well do authorizing officials understand the systems they are approving?

  • What steps are in place to ensure findings are prioritized by risk, not convenience?

Audit frameworks should reward transparency, maturity, and responsiveness. Agencies that can demonstrate a strong internal risk culture and commitment to continuous improvement should be evaluated more favorably than those that simply show clean documentation.

This change will require collaboration between inspectors general, congressional committees, and federal CIO offices to redefine evaluation standards in line with modern cybersecurity thinking.

Invest in Cybersecurity Workforce Development

All these improvements are only possible with a capable, well-supported cybersecurity workforce. Federal agencies frequently struggle to attract and retain qualified personnel due to outdated hiring processes, non-competitive salaries, and a lack of career development pathways.

To overcome this, agencies must:

  • Expand scholarship and internship programs to build a talent pipeline

  • Create specialized cybersecurity tracks within existing federal hiring structures

  • Support ongoing education and certification for current staff

  • Develop leadership training for non-technical executives to understand risk

Workforce development should be considered a strategic investment, not just an HR issue. Skilled cybersecurity professionals are the linchpin of every security program, and without them, even the best policies and tools will fail.

Foster a Culture of Cyber Risk Awareness

Culture change is perhaps the hardest but most necessary transformation federal agencies must undergo. As long as cybersecurity is seen as a box to check rather than a core part of mission delivery, real progress will remain limited.

To reshape this culture, leadership must set the tone. Executives need to publicly prioritize cybersecurity, back their security teams, and emphasize the importance of transparency. Success stories should be shared to demonstrate how risk-aware decisions helped prevent or mitigate threats. Conversely, mistakes should be treated as opportunities for learning, not blame.

Staff at all levels should be encouraged to speak openly about security concerns. This includes frontline workers noticing unusual system behavior, IT teams observing suspicious activity, and compliance officers identifying documentation gaps. When employees feel safe raising issues, agencies become more agile and responsive.

Training programs, agency-wide security days, and cross-department cybersecurity drills can all help embed security into daily thinking. The more security becomes part of the culture, the more resilient the agency becomes.

Bridge the Gap Between Security and Mission Delivery

A fundamental tension exists in every government agency: how to maintain effective cybersecurity without impeding the mission. This challenge must be addressed head-on by integrating security planning into mission design from the beginning.

Security professionals should be part of system development teams, program launches, and service design discussions. Their role should be advisory and enabling—not gatekeeping. By working together, mission owners and security teams can find creative ways to achieve goals without compromising protections.

This integration requires shifting security left—embedding controls earlier in the system lifecycle rather than retrofitting them after deployment. It also means understanding business priorities so that security recommendations are aligned with operational realities.

Only through collaboration can security and mission objectives be harmonized.

Encourage Cross-Agency Learning and Innovation

No agency operates in isolation. Lessons learned at one department—whether through success or failure—can be valuable to others. Agencies should develop formal mechanisms to share practices, tools, case studies, and strategies for risk management improvement.

This can include participating in inter-agency working groups, submitting reports to shared cybersecurity knowledge repositories, or inviting peer reviews from outside agencies. Innovation can also be accelerated through federal-wide initiatives that fund pilot programs or encourage experimentation with new technologies and governance models.

Cybersecurity is a constantly evolving domain, and staying ahead requires collective intelligence. Agencies should not be afraid to learn from one another and to openly discuss both what works and what doesn’t.

A Blueprint for the Future of Federal Cybersecurity

The VA risk management controversy revealed a series of systemic issues that are not confined to one agency. From blurred roles and misplaced authorizations to audit-driven behavior and shallow metrics, these problems reflect deeper structural flaws in the federal approach to cybersecurity.

But they also illuminate a path forward.

By clarifying roles, rethinking the purpose of authorizations, rebalancing POAM practices, investing in continuous monitoring, reforming oversight, growing the cybersecurity workforce, and transforming the internal culture, agencies can build programs that are truly capable of managing risk in today’s digital environment.

This is not a quick fix. It is a long-term commitment that will require persistent effort, political will, and executive support. But the alternative—continuing down a path of symbolic security and reactive compliance—is not sustainable.

The goal must be to create a government that not only meets its policy obligations but also earns the trust of the public by protecting their data, their services, and their interests with integrity, transparency, and competence.

Through thoughtful reform, shared accountability, and a relentless focus on real risk, federal agencies can become models of cybersecurity excellence in a world that increasingly demands nothing less.