Binding Operational Directive 20-01 - Develop and Publish a Vulnerability Disclosure Policy

September 2, 2020

This page contains a web-friendly version of the Cybersecurity and Infrastructure Security Agency’s Binding Operational Directive 20-01, Develop and Publish a Vulnerability Disclosure Policy. Additionally, see the Assistant Director’s blog post.

A binding operational directive is a compulsory direction to federal, executive branch, departments and agencies for purposes of safeguarding federal information and information systems.

Section 3553(b)(2) of title 44, U.S. Code, authorizes the Secretary of the Department of Homeland Security (DHS) to develop and oversee the implementation of binding operational directives.

Federal agencies are required to comply with DHS-developed directives.

These directives do not apply to statutorily defined “national security systems” nor to certain systems operated by the Department of Defense or the Intelligence Community.

Cybersecurity is a public good that is strongest when the public is given the ability to contribute. A key component to receiving cybersecurity help from the public is to establish a formal policy that describes the activities that can be undertaken in order to find and report vulnerabilities in a legally authorized manner. Such policies enable federal agencies to remediate vulnerabilities before they can be exploited by an adversary – to immense public benefit.

Vulnerability disclosure policies enhance the resiliency of the government’s online services by encouraging meaningful collaboration between federal agencies and the public. They make it easier for the public to know where to send a report, what types of testing are authorized for which systems, and what communication to expect. When agencies integrate vulnerability reporting into their existing cybersecurity risk management activities, they can weigh and address a wider array of concerns. This helps safeguard the information the public has entrusted to the government and gives federal cybersecurity teams more data to protect their agencies. Additionally, ensuring consistent policies across the Executive Branch offers those who report vulnerabilities equivalent protection and a more uniform experience.

A vulnerability disclosure policy (VDP) is an essential element of an effective enterprise vulnerability management program and critical to the security of internet-accessible federal information systems. This directive requires each agency to develop and publish a VDP and maintain supporting handling procedures. It is issued in support of the Office of Management and Budget M-20-32, “Improving Vulnerability Identification, Management, and Remediation”.


A vulnerability is a “[w]eakness in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source.”1 Vulnerabilities are often found in individual software components, in systems composed of multiple components, or in the interactions between components and systems. They are typically exploited to weaken the security of a system, its data, or its users, with impact to their confidentiality, integrity, or availability. The primary purpose of fixing vulnerabilities is to protect people, maintaining or enhancing their safety, security, and privacy.

Vulnerability disclosure is the “act of initially providing vulnerability information to a party that was not believed to be previously aware.”2 The individual or organization that performs this act is called the reporter.3

Choosing to disclose a vulnerability can be frustrating for the reporter when an agency has not defined a vulnerability disclosure policy – the effect being that those who would help protect the public are turned away:

  • The reporter cannot determine how to report: Federal agencies do not always make it clear where a report should be sent. When individuals cannot find an authorized disclosure channel (often a web page or a dedicated email address), they may resort to their own social network or seek out security staff’s professional or personal contact information on the internet. Or, if the task seems too onerous, they may decide that reporting is not worth their time or effort.

  • The reporter has no confidence the vulnerability is being fixed: If a reporter receives no response from the agency or gets a response deemed unhelpful, they may assume the agency will not fix the vulnerability. This may prompt the reporter to resort to uncoordinated public disclosure to motivate a fix and protect users, and they may default to that approach in the future.

  • The reporter is afraid of legal action: To many in the information security community, the federal government has a reputation for being defensive or litigious in dealing with outside security researchers. Compounding this, many government information systems are accompanied by strongly worded legalistic statements warning visitors against unauthorized use. Without clear, warm assurances that good faith security research is welcomed and authorized, researchers may fear legal reprisal, and some may choose not to report at all.

Agencies should recognize that “a reporter or anyone in possession of vulnerability information can disclose or publish the information at any time,”4 including without prior notice to the agency. Such uncoordinated disclosure could result in exploitation of the vulnerability before the agency has had a chance to address it and could have legal consequences for the reporter as well. A key benefit of a vulnerability disclosure policy is to reduce risk to agency infrastructure and the public by incentivizing coordinated disclosure so there is time to fix the vulnerability before it is publicly known.

A VDP is similar to, but distinct from, a “bug bounty.” In bug bounty programs, organizations pay for valid and impactful findings of certain types of vulnerabilities in their systems or products. A financial reward can incentivize action and may attract people who might not otherwise look for vulnerabilities. This may also result in a higher number of reports or an increase in low-quality submissions. Organizations engaged in bug bounties will frequently use third-party platforms and service vendors to assist in managing and triaging bug reports. Bug bounties may be offered to the general public or may only be offered to select researchers or those who meet certain criteria. While bug bounties can enhance security, this directive does not require agencies to establish bug bounty programs.

Required Actions

The actions of this directive have been developed to be in harmony with other federal agencies’ frameworks,5 international standards,6 and good practices.7

Enable Receipt of Unsolicited Reports

Before the publication of a vulnerability disclosure policy, an agency must have the capability to receive unsolicited reports about potential security vulnerabilities.

Within 30 calendar days after the issuance of this directive, update the following at the .gov registrar8:

  1. The security contact9 field for each .gov domain registered. The email address defined as the security contact must be regularly monitored, and personnel managing it must be capable of triaging unsolicited security reports for the entire domain.

  2. The “Organization” field for each .gov domain registered. The field must identify the agency component responsible for the internet-accessible services offered at the domain. If the domain is for a general or agency-wide purpose, use the most appropriate descriptor. This value should usually be different from the value in the “Agency” field.

Develop and Publish a Vulnerability Disclosure Policy

A vulnerability disclosure policy facilitates an agency’s awareness of otherwise unknown vulnerabilities. It commits the agency to authorize good faith security research and respond to vulnerability reports, and sets expectations for reporters.

Within 180 calendar days after the issuance of this directive:

  1. Publish a vulnerability disclosure policy as a public web page in plain text or HTML at the “/vulnerability-disclosure-policy” path of the agency’s primary .gov website.

    a) The policy must include:

    i. Which systems are in scope. At least one internet-accessible production system or service must be in scope at the time of publication.10

    ii. The types of testing that are allowed (or specifically not authorized), and include a statement prohibiting the disclosure of any personally identifiable information discovered to any third party.11

    iii. A description of how to submit vulnerability reports, which must include:

    1. Where reports should be sent (e.g., a web form, email address).
    2. A request for the information needed to find and analyze the vulnerability (e.g., a description of the vulnerability, its location and potential impact; technical information needed to reproduce; any proof of concept code; etc.).
    3. A clear statement that reporters may submit a report anonymously.

    iv. A commitment to not recommend or pursue legal action against anyone for security research activities that the agency concludes represents a good faith effort to follow the policy, and deem that activity authorized.

    v. A statement that sets expectations for when the reporter (where known) can anticipate acknowledgement of their report and pledges the agency to be as transparent as possible about what steps it is taking during the remediation process.

    vi. An issuance date.12

    b) Each agency should consider stating in its policy that reporters will not receive payment for submitting vulnerabilities and that by submitting, reporters waive any claims to compensation. If applicable, agencies may link to a separate bug bounty program policy that involves payment.

    c) The policy, or implementation of policy, must not:

    i. Require the submission of personally identifiable information. Agencies may request the reporter voluntarily provide contact information.

    ii. Limit testing solely to “vetted” registered parties or U.S. citizens.13 The policy must provide authorization to the general public.

    iii. Attempt to restrict the reporter’s ability to disclose discovered vulnerabilities to others, except for a request for a reasonably time-limited response period.

    iv. Submit disclosed vulnerabilities to the Vulnerabilities Equities Process14 or any similar process.

After publication of a vulnerability disclosure policy:

  1. All newly launched internet-accessible systems or services must be included in the scope of the policy. If the policy’s scope does not implicitly include the new system or service,15 the policy must be updated to include the new system or service explicitly.
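Because the policy must live at a fixed, well-known path on the agency’s primary .gov website, its presence can be checked programmatically. The sketch below is illustrative, not part of the directive; the helper names `vdp_url` and `vdp_published` are hypothetical, and the check assumes the site serves the policy over HTTPS with a 200 status.

```python
import urllib.request
from urllib.parse import urlsplit, urlunsplit

# The well-known path required by the directive.
VDP_PATH = "/vulnerability-disclosure-policy"

def vdp_url(primary_site: str) -> str:
    """Build the well-known VDP URL for an agency's primary .gov website."""
    if "//" not in primary_site:
        primary_site = f"https://{primary_site}"
    scheme, netloc, _, _, _ = urlsplit(primary_site)
    return urlunsplit((scheme or "https", netloc, VDP_PATH, "", ""))

def vdp_published(primary_site: str, timeout: float = 10.0) -> bool:
    """Return True if the policy page responds with HTTP 200."""
    try:
        with urllib.request.urlopen(vdp_url(primary_site), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # DNS failure, timeout, HTTP error, etc.
        return False
```

For example, `vdp_url("example.gov")` yields `https://example.gov/vulnerability-disclosure-policy`, which is the location CISA’s scanning would look for.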

Expand Scope

The VDP will ultimately cover all internet-accessible systems or services in the agency – which includes systems that were not intentionally made internet-accessible.

  1. By 270 calendar days after the issuance of this directive, and every 90 calendar days thereafter, the scope of the VDP must increase by at least one internet-accessible system or service until all systems and services are in scope of the policy.

  2. At 2 years after the issuance of this directive, all internet-accessible systems or services must be in scope of the policy.

Vulnerability Disclosure Handling Procedures

Effectively executing a VDP requires defined processes and procedures.

Within 180 calendar days after the issuance of this directive:

  1. Develop or update vulnerability disclosure handling procedures to support the implementation of the VDP. The procedures must:

    a) Describe how16:

    i. Vulnerability reports will be tracked to resolution.

    ii. Remediation activities will be coordinated internally.

    iii. Disclosed vulnerabilities will be evaluated for potential impact17 and prioritized for action.

    iv. Reports for systems and services that are out of scope will be handled.

    v. Communication with the reporter and other stakeholders (e.g., service providers, CISA) will occur.

    vi. Any current or past impact of the reported vulnerabilities (not including impact from those who complied with the agency VDP) will be assessed and treated as an incident/breach, as applicable.

    b) Set target timelines18 for and track:

    i. Acknowledgement to the reporter (where known) that their report was received.19

    ii. Initial assessment (i.e., determining whether disclosed vulnerabilities are valid, including impact evaluation).20

    iii. Resolution of vulnerabilities, including notification of the outcome to the reporter.21
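The target timelines in 4.b lend themselves to lightweight tooling. The sketch below is a minimal illustration; the milestone names and target durations are placeholders for whatever your handling procedures define, not values mandated by the directive.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative targets only -- the directive requires agencies to set their own.
TARGETS = {
    "acknowledge": timedelta(days=3),          # acknowledgement to the reporter
    "initial_assessment": timedelta(days=14),  # validity and impact evaluation
    "resolution": timedelta(days=90),          # fix and notify the reporter
}

@dataclass
class HandlingRecord:
    received: date
    # Milestone name -> date it was completed.
    milestones: dict = field(default_factory=dict)

    def overdue(self, today: date) -> list[str]:
        """Milestones past their target date and not yet completed."""
        return [name for name, target in TARGETS.items()
                if name not in self.milestones
                and today > self.received + target]
```

A record acknowledged on time but not yet assessed would, a month after receipt, show only `initial_assessment` as overdue.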


Reporting Requirements and Metrics

  1. After publication of the VDP, immediately report to CISA22:

    a) Any valid or credible reports of newly discovered or not publicly known vulnerabilities (including misconfigurations) on agency systems that use commercial software or services that affect or are likely to affect other parties in government or industry.

    b) Vulnerability disclosure, coordination, or remediation activities the agency believes CISA can assist with or should know about, particularly as it relates to outside organizations.

    c) Any other situation where it is deemed helpful or necessary to involve CISA.23

  2. After 270 calendar days following the issuance of this directive, within the first FISMA reporting cycle and quarterly thereafter, report the following metrics through CyberScope:

    a) Number of vulnerability disclosure reports

    b) Number of reported vulnerabilities determined to be valid (e.g., in scope and not false-positive)

    c) Number of currently open and valid reported vulnerabilities

    d) Median age (in days from receipt of the report) of currently open and valid reported vulnerabilities

    e) Number of currently open and valid reported vulnerabilities older than 90 days from the receipt of the report

    f) Number of all reports older than 90 days by risk/priority level

    g) Median age of reports older than 90 days

    h) Median time to validate a submitted report

    i) Median time to remediate/mitigate a valid report

    j) Median time to initially respond to the reporter
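Several of these metrics fall out of a simple report log. A minimal sketch, assuming each report is recorded with its receipt date, validity determination, and resolution date; the `Report` shape here is hypothetical, not a CyberScope schema.

```python
from dataclasses import dataclass
from datetime import date
from statistics import median
from typing import Optional

@dataclass
class Report:
    received: date
    valid: bool                      # in scope and not a false positive
    resolved: Optional[date] = None  # None while still open

def vdp_metrics(reports: list[Report], today: date) -> dict:
    """Compute a subset of the directive's quarterly metrics from a report log."""
    open_valid = [r for r in reports if r.valid and r.resolved is None]
    ages = [(today - r.received).days for r in open_valid]
    return {
        "reports": len(reports),                              # metric (a)
        "valid": sum(r.valid for r in reports),               # metric (b)
        "open_valid": len(open_valid),                        # metric (c)
        "median_age_open_valid": median(ages) if ages else None,  # metric (d)
        "open_valid_over_90_days": sum(a > 90 for a in ages), # metric (e)
    }
```

The remaining metrics (f)–(j) need timestamps for triage, response, and remediation events, which the handling-record structure above would supply.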

CISA Actions

  • CISA will monitor agency compliance with this directive and may take actions for non-compliance.

    • Within 180 calendar days following the issuance of this directive, CISA will begin scanning for agencies’ VDP at the "/vulnerability-disclosure-policy" path of their primary .gov website.

    • CISA may occasionally email agency security contacts requesting a response in order to verify the email address is monitored.

    • CISA may request vulnerability disclosure handling procedures.

  • CISA will review agencies’ initial implementation plans that reflect timelines and milestones for their VDPs to cover all internet-accessible federal information systems as required by M-20-32.

  • Upon agency request, CISA will assist in the disclosure to vendors of newly identified vulnerabilities in products and services when agencies receive them.

  • CISA will not submit any vulnerabilities it receives or helps coordinate under this directive to the Vulnerabilities Equities Process.

  • Within 2 years following the issuance of this directive, CISA will update this directive to account for changes in the general cybersecurity landscape and incorporate additional best practices to receive, track, and report vulnerabilities identified by reporters.

CISA Points of Contact

Implementation Guide



After issuance of the directive, the following actions (abbreviated here as a summary) must take place by the time indicated.

  • Within 30 calendar days (Friday, October 2, 2020), update the following at the .gov registrar:
    • Add a security contact for each .gov domain you have registered, if you have not already done so
    • Update the “Organization” field to reflect the unit within your agency that uses the domain
  • Within 180 calendar days (Monday, March 1, 2021):
    • Publish a vulnerability disclosure policy at the “/vulnerability-disclosure-policy” path of your agency’s primary .gov website.
  • After 180 calendar days:
    • All newly launched internet-accessible systems or services must be in scope of your policy.
  • Within 270 calendar days (Tuesday, June 1, 2021), and every 90 calendar days thereafter:
    • The scope of your VDP must increase by at least one internet-accessible system or service.
  • At 2 years (Friday, September 2, 2022):
    • All internet-accessible systems or services must be in scope of your policy.

VDP template

To make it easier for your agency to begin, we’ve created a VDP template which has been written to align with the Department of Justice’s Framework for a Vulnerability Disclosure Program for Online Systems. The template is also available as a Word document.

Craft a policy

Consider starting with our template!

Though there are common themes in different organizations’ vulnerability disclosure policies, there is no one-size-fits-all approach. The required actions in this directive are merely the mandated elements of your policy; they are the minimums required, not the ceiling. For instance, a policy ought to:

  • Display a commitment to securing the American public’s information.
  • Make clear that the agency’s primary goal is receiving any information that can help it secure its systems, and welcomes all good faith attempts to comply with its policy. In other words, it should relay an impression that your agency is more concerned with receiving and fixing vulnerabilities than in enforcing strict compliance with the letter of the policy.
  • Contain guidelines to help vulnerability finders understand how to remain in scope of authorized testing.
  • Specify a target time for resolution, in days.

Your policy should be written in plain language, not legalese. It need not be long. The tone should be inviting, not threatening.

Consider prior art

As you evaluate your approach, consider modeling your VDP after our template. Additionally, several US government documents, international standards, and academic resources can help guide the policy’s development and management:

  • The Department of Justice’s Framework for a Vulnerability Disclosure Program for Online Systems provides helpful background for developing, instituting, and administering a policy.
  • The NTIA convened a working group on topics related to coordinated vulnerability disclosure, and their research gives an excellent overview that can inform key elements of vulnerability disclosure policies and support procedures.
  • International standards ISO 29147 (vulnerability disclosure) and ISO 30111 (vulnerability handling processes) are high quality normative resources. As vulnerability disclosures can come from anyone across the globe, aligning with international best practices can increase shared expectations and minimize the potential for friction.
  • Carnegie Mellon University’s Software Engineering Institute has authored The CERT® Guide to Coordinated Vulnerability Disclosure, a “travel guide” from an organization that has helped coordinate many vulnerability disclosures.

Develop handling procedures

Though the directive has ‘creating a VDP’ and ‘creating handling procedures’ as two distinct actions, they are like different sides of the same coin: the handling procedures describe how your agency implements your policy.

Most organizations already have procedures for managing vulnerabilities in their systems. Handling disclosed vulnerabilities should not be independent from that process, though there are different factors involved.

You can start by asking (and documenting your answers to):

  • How will reports be tracked?
  • How do we coordinate internally with those who need to know?
  • What’s our process for triage, prioritization, and resolution development?
  • How will we handle reports that are out of scope?
  • How is communication with a vulnerability reporter (and other external parties like CISA) managed?
  • What is done to evaluate whether the reported vulnerability resulted in a previously unknown impact, and what are our procedures around federal incident reporting?

Your handling procedures can optionally be made public, which could help your team find them quickly and instill confidence in vulnerability reporters that their submissions will be taken seriously. See how the Technology Transformation Services at the General Services Administration does this in their handbook.

Organizational flexibility

In support of this directive, your agency has the flexibility to allow different units within your agency to maintain their own vulnerability disclosure policies and handling procedures. In fact, most large agencies should optimize this way, allowing the policy, scope, and communications to be set and managed at a level near the system owner. To the greatest degree possible, optimize for closeness to the system owner, rather than agency-wide visibility.

Where this occurs, your agency should still take care to ensure the discoverability of each unit’s policy and alignment with the directive’s requirements. For instance, each unit’s policy can be linked to in the parent agency’s policy.

Third-party products and services

In use by your agency

When including systems or services in your policy’s scope that incorporate third-party products or services (for example, from cloud service providers), take counsel from the Department of Justice’s Framework (step 1(C)) and consider which components can be included in your VDP. Once you’ve made that determination, take care to specify what is and isn’t in scope for authorized testing, and how one can tell (for example, clearly name the set of domain names that are in-bounds). See the FAQ, How does the directive apply to internet-accessible systems and services supplied by service providers?

Should your agency lack an appropriate contact at commercial software or hardware vendors, CISA will help coordinate disclosure. Get in touch through

Sent to you because of a real or perceived regulatory role

Your agency may receive reports covering the online services of organizations in the sector your agency participates in or oversees. To communicate expectations, you might consider sharing something about this in your VDP:

Vulnerabilities in {aviation, financial systems} should be reported to the vendor or system owner, not to the {Federal Aviation Administration, Department of the Treasury}.

If vulnerabilities are reported to your agency anyway, you should still make a good faith effort to relay them to the appropriate party. CISA is available to assist in coordinating such reports when received by your agency.

Use the web

It ought to be easier to report a vulnerability to your agency than it is to tweet about it.

Use a web form or a dedicated web application to accept vulnerability reports and encourage reporters to use it. Submissions delivered via the web are likely to be better protected in transit than via email (since HTTPS is mandatory), and web forms enable your agency to standardize the format of vulnerability reports. Web forms can also mark certain fields as mandatory, reducing the need to follow up with the reporter. There are web-based commercial services that are designed for the specific purpose of helping organizations receive high-quality, well-structured vulnerability reports.
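Marking fields mandatory is straightforward to enforce server-side, whatever form technology you use. A minimal sketch follows; the field names are illustrative, and contact information stays optional because the directive requires that reports may be submitted anonymously.

```python
# Server-side validation for a vulnerability report web form (sketch).
# Field names are illustrative, not mandated by the directive.
REQUIRED = ("description", "location", "impact")
OPTIONAL = ("contact",)  # reporters may submit anonymously

def validate_report(form: dict) -> list[str]:
    """Return the mandatory fields that are missing or blank (empty list == accept)."""
    return [f for f in REQUIRED if not (form.get(f) or "").strip()]
```

A submission missing any required field can then be rejected with a message naming exactly what to add, avoiding a round trip with the reporter.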

However, your form should not be the only way your agency can receive notice or be made aware of a vulnerability. You should be able to handle vulnerabilities sent by email (for instance, to security contact addresses or to staff directly, including press offices) or via messaging on social media, whether or not you advertise those channels for vulnerability reporting.

Frequently Asked Questions

Answers to other common compliance questions appear below.


Security contacts and the Organization field FAQ

My agency has published a security contact but we don’t yet have a VDP. What should we do with the reports we receive?

Even though your agency is not authorizing security research before you have a VDP and may not have well-defined coordination procedures, you should still acknowledge that you’ve received the report and demonstrate a good faith effort to remediate vulnerabilities. This fixes security problems and provides direct experience that will help you establish more effective vulnerability disclosure handling procedures.

What’s the purpose of the security contact and Organization field in the .gov registrar?

A security contact positions you to receive security-relevant information for an entire domain. You can do this by managing a mailbox to receive reports.

The Organization field can speed the internal routing of reports sent by external parties, helping ensure each report’s delivery to the right organizational unit.

What are some considerations around managing the .gov security contact mailbox?

  • Use a team email address specifically for these reports and avoid the use of an individual’s email address.
  • The contact need not have 24/7 support, but someone should be responsive within a few business days. Those who are tapped to be responsive should know they have that responsibility.
  • Evaluate whether to use a distribution list instead of a shared mailbox.
    • A distribution list allows an emailed report to be spread across a team, and doesn’t necessitate managing access to a separate mailbox.
    • A shared mailbox is a dedicated inbox that several people have access to. Managing messages will require procedures shared among a team, but it shields individual staff members from replying to email from their own accounts.
    • With either approach, consider the value to the vulnerability reporter in receiving a reply from a named human, not an auto-reply, not signed “the team”.
  • Recognize that reusing an already established list or mailbox has benefits and drawbacks. Reusing something minimizes the organizational start-up costs involved in receiving reports (e.g., defining a list, adding members, communicating its existence and use) but may cause vulnerability reports to be lost in the “noise” of other messages. Reusing a list/mailbox might also dilute a sense of responsibility in taking action on reports, especially where membership is high. If the name on the email address is geared more towards internal operations than clearly indicating an information security purpose, outsiders may be unsure whether the address is suitable for their report.
  • You may make a PGP key available to enable encrypted email, but we strongly recommend using the web instead.


What does the directive mean by “good faith”?

In the context of this directive, “good faith” means security research conducted with the intent to follow an agency’s VDP without malicious motive; your agency may evaluate an individual’s intent on multiple bases, including by their actions, statements, and the results of their actions. In other words, good faith security research means accessing a computer or software solely for purpose of testing or investigating a security flaw or vulnerability and disclosing those findings in alignment with the VDP. The security researcher’s actions should be consistent with an attempt to improve security and to avoid doing harm, either by unwarranted invasions of privacy or causing damage to property.

A hallmark of good faith activity is a factual, timely report of a vulnerability on a system authorized for testing, sent directly to the organization in accordance with a VDP’s instructions; however, a person could conduct research in good faith and have no reportable findings. An individual could also be acting in good faith when reporting information related to systems that are not in scope, if the discovery of the vulnerability was accidental or incidental to authorized testing on in-scope systems. The Software Engineering Institute’s CERT/CC shares an example situation where someone, “testing an in-scope system[,] finds it to be exposing data from out-of-scope system. These are still reportable vulnerabilities.” In contrast, an individual who consciously decides to test systems that are not included in the VDP would not be acting in good faith.   

The Department of Justice’s Framework notes that “an organization should decide how it will handle accidental, good faith violations of the vulnerability disclosure policy”. The Framework provides an example VDP statement that an organization will not “pursue civil action for accidental, good faith violations of its policy or initiate a complaint to law enforcement for unintentional violations” – and the directive requires that your VDP include a similar statement. At minimum, the directive requires your VDP to include a “commitment to not recommend or pursue legal action against anyone for security research activities that the agency concludes represents a good faith effort to follow the policy, and deem that activity authorized.”

What should the initial scope of my agency’s VDP include?

Choose systems/services that play a meaningful role in your mission and have at least a moderate volume of use. This allows your policy and handling procedures to be exercised, giving your team opportunities to build proficiency.

Add systems to your scope deliberately, not as experiments. Recognize that your agency has great latitude in determining which systems should be included in your scope (until 2 years after the issuance of this directive). Scope should generally increase over time, not decrease.

There’s no need to wait for the 90 day intervals specified by the directive to increase the scope of your VDP; you can add to it at any time.

Can we remove systems from the scope of our VDP?

You should generally avoid this, and CISA warns against it, as individuals may be relying on the system’s prior inclusion in scope and are unlikely to be aware of its removal. This can create a real or perceived risk of prosecution, chill security research, and give the impression that your agency is not serious about hearing from outside parties who report potential problems.

There are justifiable reasons for reducing scope, however, like retiring legacy systems that are explicitly named. If you are going to remove systems, particularly if doing so out of concern for a lack of resources, we request that you notify CISA before doing so.

In accordance with the directive, at 2 years after the issuance of this directive, all internet-accessible systems or services must be in scope of your VDP.

What is meant by “internet-accessible systems or services”?

The directive applies generally to federal information systems and services that are accessible over the internet, which encompasses those systems directly managed by an agency as well as those operated on an agency’s behalf. It also applies to mobile applications.

The phrase “internet-accessible production systems or services” in requirement 3.a.i defines the range of acceptable systems for your VDP’s initial scope. Below is an expansion of each component of the phrase.


“Internet-accessible” means a system reachable over the public internet that has a publicly routed IP address or a hostname that resolves publicly in DNS to such an address. This doesn’t include infrastructure that is internal to your network (which may be made accessible from the internet via a virtual private network, or VPN), or necessarily include shared services used by your agency that are not specifically managed by your agency. However, it does include services that bridge the internet to your intranet, like VPN infrastructure. “Internet-accessible” includes those systems that were not intentionally made internet-accessible.
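The “publicly routed address” test can be approximated in code. The sketch below uses Python’s standard library; it is a heuristic only, since the result depends on the resolver the code runs behind, and it cannot see systems reachable only by a bare IP address with no DNS record.

```python
import ipaddress
import socket

def is_internet_accessible(hostname: str) -> bool:
    """Heuristic: does the name resolve in public DNS to a globally routed address?"""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        # Name does not resolve from this vantage point.
        return False
    # getaddrinfo returns (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the IP address string.
    return any(ipaddress.ip_address(info[4][0]).is_global for info in infos)
```

Note that `is_global` correctly excludes RFC 1918 space (e.g., `10.0.0.0/8`), so intranet-only names that happen to resolve internally would not count as internet-accessible here.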

Many systems will use a .gov domain. (M-17-06 requires executive branch agencies to use a .gov or .mil address for their public-facing digital services and add use of any other domains to a General Services Administration-maintained list.) Generally speaking, most systems that will eventually be in your agency’s VDP are included in the set of internet-accessible systems that M-21-02 requires you to keep updated at CISA.

For situations where you use infrastructure that lacks a public DNS record, you are not required to publicly inventory each IP address as part of your VDP’s scope – even though such infrastructure is clearly “internet-accessible”. In these situations, a researcher may be able to discover the system owner by other means – for example, via a system use notification or additional reconnaissance activity.


“Production” means systems that serve live, authentic users, not development or test environments. The goal of using this word in Action 3 of the directive is to have your agency place systems of actual consequence in the initial scope of your VDP.

To be clear, all internet-accessible systems and services, including development or test environments, must be included in the eventual scope of your VDP.

Systems and services

Agencies define boundaries for their systems or services in different ways, and those boundaries affect how an application is accounted for. With this wording, we’re aiming for the widest possible description of software applications that belong to your agency and that use the internet.

For instance, “systems and services” includes agency-published or -branded software, like mobile applications (e.g., iOS, Android). For open source software, the reporting path for vulnerabilities is likely to be clear already; individual open source repositories need not be added to an agency VDP, so long as anyone interacting with the repository can easily discover the vulnerability reporting path (like a project’s README).

How does the directive apply to internet-accessible systems and services supplied by service providers?

The directive applies to all internet-accessible federal information systems and services – and it should be applied as widely as possible to enable your agency to be made aware of vulnerabilities, but there are important nuances.

Your agency may not have the authority to authorize testing on, for example, the cloud infrastructure or software as a service you use. Before including a system within your VDP that may implicate third-party interests, confirm whether the third party has explicitly authorized such testing, such as in your agency’s contract with the provider or a publicly available policy of the provider. The directive does not grant your agency authority to force an organization you do business with to have a vulnerability disclosure policy.

You should work with the service provider to establish how your assets could be added to your VDP. Carefully review the Department of Justice’s Framework. If the vendor will not authorize testing, you may not include that system or service within the scope of your VDP.

A useful guideline for a given system’s inclusion in your VDP is to ask “If there were a vulnerability in this system, who would be responsible for fixing it?”.

  • If the answer is your operations or development teams, that system should likely be in scope of your VDP. (Recognize that vulnerabilities can exist because of poorly managed configurations that your agency maintains, not just because of bugs inherent in a provider’s infrastructure.)
  • If the answer is a vendor, you will need to confirm that the vendor has authorized such testing or work to obtain authorization.

In some instances, the answer might be split based on where the vulnerability is or how it manifests. Even if you are unable to add a system to your policy, your agency must have a way to relay questions or issues that arise about the security of your provider’s services to them, as your security is reliant on theirs.

In your VDP, be clear about specific hostnames in or out of scope. Include language like the following from our VDP template:

Any service not expressly listed above, such as any connected services, is excluded from scope and is not authorized for testing. Additionally, vulnerabilities found in systems from our vendors fall outside of this policy’s scope and should be reported directly to the vendor according to their disclosure policy (if any). If you aren’t sure whether a system is in scope or not, contact us at before starting your research.

Do our internet-accessible high value assets need to be included in scope?

Yes, eventually. High value assets are important systems, and you will want to know if someone spots a problem. High value assets need not be called out as high value assets in your VDP’s scope, though.

Does our VDP need to be hosted at the /vulnerability-disclosure-policy path or can we redirect to another location?

Using HTTP redirects is allowed, and the /vulnerability-disclosure-policy path may redirect internally (e.g., on the same domain) or externally. Take care to ensure that redirects use HTTPS.
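For example, the HTTPS requirement can be sketched as follows (illustrative only; how you collect the actual redirect chain – e.g., with an HTTP client that records each hop – is up to your tooling):

```python
from urllib.parse import urlparse

def redirect_chain_is_https(chain: list[str]) -> bool:
    # Given the ordered list of URLs visited while following redirects
    # (the starting URL plus each Location target), return True only if
    # every hop in the chain uses the https scheme.
    return bool(chain) and all(urlparse(url).scheme == "https" for url in chain)
```

A chain like `["https://example.gov/vulnerability-disclosure-policy", "https://vdp.example.gov/"]` passes; any `http://` hop fails the check.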

See How will CISA know my agency’s primary .gov website in order to find my VDP?

Vulnerability reports FAQ

Does the directive require a deadline to fix reported vulnerabilities?

No. The directive does not mandate resolution timelines for addressing vulnerability reports generally, reports assessed to be real after triage, or vulnerabilities of a certain severity. However, the directive does require your agency to set target timelines for when you will acknowledge a report, initially assess or triage it, and resolve the vulnerability report (including communicating with the person who reported the vulnerability). These timelines are meant to guide organizational behavior and enable performance tracking; they are not vulnerability resolution deadlines.

Though the directive may not specify when a given vulnerability must be fixed, timing is a critical factor in all your actions:

  • You should work to remediate vulnerabilities quickly, focusing on those with real impact. See footnote 21 in the directive for more elements to consider.
  • While it is generally ideal for any public disclosure to occur after a vulnerability has been fixed, agencies must assume that any vulnerability discovered by a good-faith researcher may have easily been discovered already by a bad actor.
  • Some vulnerability finders may seek to publicly disclose how they discovered the flaw and why it was impactful. This is a valuable activity because it increases awareness about the class of vulnerabilities found and motivates others to improve security. Many in the security research community consider public disclosure of a vulnerability to be appropriate between 45 and 90 days after the first communication with the affected entity, in order to allow the affected organization time to fix the issue without unnecessary delay. Agencies may require that the researcher give the agency a defined window of time to address the vulnerability before public disclosure, but that delay may not be unreasonable – and anything more than 90 days begins to veer away from what is reasonable. Agencies should not attempt to limit publication after the vulnerability has been addressed.

More important than a target timeline is ongoing, meaningful communication with vulnerability reporters. In research that surveyed populations of software vendors and security researchers, the National Telecommunications and Information Administration summarized perspectives on timelines and communication:

Timelines were also very important to the researcher community, with over nine in ten respondents describing a desire for some deadline for remediation. However, the timeframe involved is not always perceived as something that should be fixed. On the contrary, only 18% of the researchers that expressed an expectation of a resolution timeline thought that vendors should conform to a timeline without regard to the circumstances of a particular bug. Maintaining a definite resolution date, then, is less important than communicating the decision-making involved in determining resolution priority in a transparent manner, allowing a researcher to calibrate their expectations.

See the section “Communication is key” in NTIA’s report, Vulnerability Disclosure Attitudes and Actions.

Even so, many reports shared with you will be unimportant or invalid. These need not necessarily be fixed at all or on a timely basis. Security teams should spend most of their attention on those reports with the greatest practical impact.

See also Does BOD 19-02 apply to vulnerabilities that are reported under our VDP?

Does BOD 19-02 apply to vulnerabilities that are reported under our VDP?

Generally, no. BOD 19-02 requires your agency to fix vulnerabilities rated critical and high that are “identified through Cyber Hygiene scanning” within 15 and 30 days, respectively. In many circumstances, vulnerabilities reported under your VDP will not have a CVE or a defined severity rating, and may not be addressable simply by patching software. You should work to remediate vulnerabilities quickly, focusing on those with real impact.

When a reported vulnerability is identified in your Cyber Hygiene report as critical or high, the required actions under BOD 19-02 remain in effect.

See also Does the directive require a deadline to fix reported vulnerabilities?

If a reported vulnerability is verified, does that trigger the need to report an incident?

No. Determining that a reported vulnerability is present does not mean an incident has occurred, and does not necessarily spur immediate reporting actions (as outlined in M-21-02). However, vulnerabilities with potentially significant impact should prompt an evaluation of the affected system to determine whether an incident occurred in the past.

Coordination with vulnerability reporters FAQ

How should we communicate with vulnerability reporters?

The directive requires that you be “as transparent as possible” about the steps you’re taking during the remediation process.

Being transparent means you operate in a spirit of openness. It means sharing what you’ve done, what you plan on doing, and how (approximately) you expect to arrive at a remedy. It doesn’t mean oversharing, but it does include offering information sufficient that the vulnerability reporter can tell you understand the issue and are taking concrete action.

“As possible” covers appropriateness. You have the flexibility to determine the level of information that should be shared – but generally, if sharing it does not materially or tangibly harm security, you should feel free to share it with the vulnerability reporter.

You should communicate with high emotional intelligence:

  • Triage the report and decide the best course of action as quickly as possible. Communicate this.
  • In word and tone, choose not to respond defensively. You are not under attack.
  • Express appreciation. Someone has taken the time to tell you something they did not have to.
  • Take them seriously. You can expect many reports to be from professionals who have deep expertise in information security.
  • Respond using the name or pseudonym they have offered to you. Do not presume a person’s pronouns.
  • It’s ok to display your humanity. Your response should not come across as robotic or pro forma.

Do we need to send a response to every vulnerability report?

Where contact information is known, you should respond to every vulnerability report that is shared in good faith. This means that you don’t need to respond to reports that are obviously spam. GSA/TTS has shared helpful example responses for reports that are apparently “not applicable”.

How should my agency treat vulnerability reports from anonymous sources?

These reports should be treated the same as all other reports: like a gift. Knowing the source of a report can be a real benefit because it allows for rapport to develop. However, if the person who submits a report isn’t known, the claim should simply be evaluated on its merits – like every other report.

Obviously, if the source is unknown, you are not under any requirement to find or reply to the person, and you should never attempt to unmask the identity of the person who offers a report in good faith.

Can we recognize vulnerability reporters for their efforts?

Yes. Some organizations publicly acknowledge people who relay confirmed or especially impactful reports on a webpage or in social media. A public ‘thank you’ is a generous practice, costs nothing, and makes vulnerability reporters happy to share their findings in the future. However, it must not be done unless you’ve obtained explicit permission from the reporter that they are comfortable with public acknowledgement.

You may elect to include language in your VDP that makes clear your stance on recognition, which facilitates shared expectations from the start.

CISA’s role

What is CISA’s role in my agency’s coordinated vulnerability disclosure efforts?

In most instances, your agency should be able to remediate issues presented in vulnerability disclosures directly or coordinate their resolution with vendors and partners. Per the directive, you must immediately report to us in certain circumstances, but you also can reach out as you deem appropriate.

We recognize that the disclosure of vulnerabilities to those that are not essential to mitigation development may increase the risk of exploitation of the vulnerability. The reason for telling CISA the details of a vulnerability is so we can help resolve it. We may also be aware of additional parties that should be informed.

Outside of the directive, CISA helps coordinate newly identified vulnerabilities in digital products and services. We maintain good relationships with many major vendors, and will assist agencies who ask for help in finding the necessary party.

If CISA receives a report for a system you manage, we will point reporters to your security contact/VDP. We will also serve as the last resort for researchers when they cannot find a contact or receive no response.

How will CISA know my agency’s primary .gov website in order to find my VDP?

The directive requires you to make your VDP available at the ‘/vulnerability-disclosure-policy’ path of your agency’s primary .gov website. The relationship between each .gov domain and the entity that owns it is public information. CISA will scan all second-level .gov domain websites in the executive branch at the ‘/vulnerability-disclosure-policy’ path for a VDP, and our staff will then evaluate your policy for alignment with the directive.

In situations that could be unclear (e.g., an agency has more than one VDP at various domains), we’ll evaluate the content of the file to gather additional context, and may seek more information from your agency so we can programmatically maintain awareness of where all agency VDPs are.

Miscellaneous FAQ

How can our security team tell the difference between adversaries prodding for vulnerabilities and people acting in good faith?

You should continue to take the same defensive actions you would normally take. When security alerts fire, it would not be fruitful to guess whether packets came from those seeking to make things better or from people who would cause harm. Just operate as normal. However, it may be useful to deconflict traffic with vulnerability reports in order to determine whether an actual, previously unknown compromise occurred.

Having a VDP doesn’t require that you metaphorically “drop your shield”. Maintaining one can provide better insight into organizational weaknesses that result in vulnerabilities – like insecure development practices, poor configuration management, or ineffective collaboration – which your security team can help analyze and work to address.

Are federal personnel and contractors prohibited from testing and reporting under our VDP?

No. The directive requires all who follow your policy to be considered authorized. Federal personnel and contractors may report vulnerabilities to any agency, including their own. Your agency may place restrictions on your workforce participating in a bug bounty program, however.

Can we operate a bug bounty?

Yes. Bug bounties can serve as a motivator to people who might not otherwise participate, and can help target external efforts on systems of particular interest to the agency. You may choose to add financial incentives to the discovery of certain issues or on specific systems. See the directive’s background section for additional comments about bug bounties.

Should we make changes to a system use notification when it is in scope of our VDP?

NIST’s Special Publication 800-53v5, control AC-8, describes “system use notifications”: the banners on many government information systems that warn visitors against unauthorized use.

It’s worth considering how you could make reporting easier for those who find something. This could include updating a system use notification to clarify that a system is in scope of your VDP, or to share where to report potential issues.

Can we use a security.txt file?

Yes. security.txt is a proposed standard that allows websites to define security policies and the best points of contact for reporting a vulnerability. While its use is not required under the directive, it can help people find whom to share vulnerability findings with.
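For illustration, a minimal security.txt (typically served at /.well-known/security.txt) might look like the following; every value below is a placeholder, and the field names follow the draft specification:

```
Contact: mailto:security@example.gov
Policy: https://example.gov/vulnerability-disclosure-policy
Preferred-Languages: en
```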

There is a helpful utility on which can help you generate the file.


  1. NIST Special Publication 800-53 revision 4. 

  2. ISO/IEC 29147:2018, Information Technology – Security Techniques – Vulnerability Disclosure. §3.2 

  3. Ibid., §3.5 

  4. Ibid., §5.6.3 

  5. U.S. Department of Justice, A Framework for a Vulnerability Disclosure Program for Online Systems

    NIST Framework for Improving Critical Infrastructure Cybersecurity. “RS.AN-5: Processes are established to receive, analyze and respond to vulnerabilities disclosed to the organization from internal and external sources (e.g. internal testing, security bulletins, or security researchers).”

  6. ISO/IEC 29147:2018; ISO/IEC 30111:2019, Information technology – Security techniques – Vulnerability handling processes 

  7. National Telecommunications and Information Administration, Multistakeholder Process: Cybersecurity Vulnerabilities

    The CERT® Guide to Coordinated Vulnerability Disclosure.


  9. CISA recommends using a team email address specifically for these reports and avoiding the use of an individual’s email address. The email address can be the same across multiple domains; it need not be on the domain it is a security contact for. However, we strongly recommend using an address of the form security@<domain>, as it is a de facto address used to initiate conversations about security issues on a domain.

  10. Agencies are encouraged to specify broader categories of systems, such as “all internet-accessible online services” or “any system within the domain”, rather than listing each system individually. Agency-published or -branded mobile applications can also be added to the initial scope. 

  11. This is intended to protect sensitive personal information. It is not intended to restrict, for instance, a reporter sharing a screenshot that includes personally identifiable information back to the agency. 

  12. As the document is updated, it is recommended to include a descriptive document change history that summarizes differences between versions. Including links to prior versions of the policy is also recommended. (Using a platform to publish a policy that provides version control information meets this requirement.)

  13. As systems that are publicly accessible are already subject to malicious activity, all individuals, regardless of citizenship, geography, occupation, or other discriminating factor, must be treated the same under an agency’s VDP. 

  14. In accordance with Section 5.4 of the Vulnerabilities Equities Policy and Process for the United States Government (VEP), vulnerabilities that are reported to an agency are “security research activity” intended for remediation and shall not be subject to adjudication in the VEP.

  15. For example, by indicating a wildcard on a domain’s scope. 

  16. For an example, see 

  17. One approach is to attach a risk score to the vulnerability, which can help to establish priority. The goal of risk scoring at this stage is to quickly provide an organization a sense of the severity and potential impact of a vulnerability. These scores will be subjective. An agency might score the potential impact of the disclosed vulnerability to their system or service’s confidentiality, integrity, and availability with severity rankings of ‘low’, ‘moderate’, ‘high’, ‘not applicable’ (out of scope, negligible, not enough information), and ‘incident’ (should any of those already be compromised) for each metric. See the TTS/18F Handbook in the prior footnote.

  18. Target timelines guide organizational actions and priorities; they are not mandatory remediation dates. Agencies should regularly evaluate how to improve their processes in order to set more progressive targets. 

  19. CISA recommends no more than 3 business days from the receipt of the report. 

  20. CISA recommends this assessment take no more than 7 days from the receipt of the report. Agencies should set target timelines for vulnerability resolution based upon the determined impact. For example, critical vulnerabilities should generally be remediated much faster than those with low impact. 

  21. CISA recommends no more than 90 days from the receipt of the report. Agencies should strive to resolve the issue as quickly as possible while considering the severity of the vulnerability, the importance of the system, evidence of exploitability, and the completeness and effectiveness of the proposed mitigation. Complex situations, including those that involve multi-party coordination, may require additional time. Where contact details are known, consider requesting the reporter to evaluate the remediation’s effectiveness. 


  23. General inquiries can be sent to
