Continuous monitoring, internal assessments and validation

  • A key component of PCI DSS v4.0 is the importance of continuous monitoring of the control environment and continuous compliance with PCI DSS requirements. Organizations need to develop performance metrics to measure the effectiveness and resilience of their security controls and the control environment. In terms of control resilience, a clear capability must exist, and be exercised, for all security controls to be continuously monitored across the cardholder data environment (CDE) to ensure they are operating effectively and as intended. All failures in security controls must be rapidly detected and promptly responded to: restoring the security control, identifying the cause(s) of failure and addressing any security issues that arose while the control was down. Evidence that this process is effective must be presented during compliance validation assessments. Critical to those procedures are standards of evidence (evidence criteria) and evidence assessment.

    • Evidence assessment

      A typical evidence review process conducted by an assessor involves:

      1. Designing the independent (QSA) assessment procedures or tests. For customized control implementations, this includes evaluating the tests and procedures specifically designed to validate each customized control, in terms of its risk analysis, how it meets the requirement objective, its ongoing effectiveness, and the validity of the evidence presented in support

      2. Gathering evidence and carrying out the independent assessment procedures or tests

      3. Analyzing the evidence and evaluating it against the evidence validity criteria (see page 54); evaluating DSS requirement performance against the assessment validation criteria; drawing conclusions; and deciding whether additional information is required and can be obtained (return to Step 1 above), or whether sufficient, appropriate evidence exists to determine with reasonable assurance the compliance condition of the DSS requirement in question
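The iterative nature of these steps — design tests, gather evidence, evaluate, and loop back when the evidence is insufficient — can be sketched as a simple loop. Everything below is a hypothetical illustration of the process flow only; the function names, the sufficiency threshold and the status strings are assumptions, not PCI SSC tooling or terminology from any assessment tool:

```python
# Hypothetical sketch of the assessor's iterative evidence review loop.
# All names and thresholds are illustrative assumptions, not PCI SSC tooling.
from dataclasses import dataclass


@dataclass
class Verdict:
    sufficient: bool        # is there enough appropriate evidence?
    compliance_status: str  # e.g., "in place" / "undetermined"


def design_tests(requirement, round_no):
    # Step 1: the QSA designs assessment procedures/tests;
    # later rounds would refine them to address evidence gaps.
    return [f"{requirement}-test-{round_no}-{i}" for i in range(2)]


def gather_evidence(tests):
    # Step 2: interviews, firsthand observations, document requests.
    # Here each artifact is stubbed as already meeting the validity criteria.
    return [{"test": t, "relevant": True, "reliable": True, "accurate": True}
            for t in tests]


def evaluate(evidence):
    # Step 3: evaluate artifacts against the validity criteria
    # (relevance, reliability, accuracy) and judge sufficiency.
    valid = [e for e in evidence
             if e["relevant"] and e["reliable"] and e["accurate"]]
    enough = len(valid) >= 2  # placeholder sufficiency threshold
    return Verdict(sufficient=enough,
                   compliance_status="in place" if enough else "undetermined")


def review_requirement(requirement, max_rounds=3):
    for round_no in range(1, max_rounds + 1):
        tests = design_tests(requirement, round_no)   # Step 1
        evidence = gather_evidence(tests)             # Step 2
        verdict = evaluate(evidence)                  # Step 3
        if verdict.sufficient:
            return verdict.compliance_status
        # Insufficient evidence: go back to Step 1 with refined procedures.
    return "unable to determine"


print(review_requirement("DSS-Req-10.2"))  # "in place" in this toy run
```

The point of the sketch is the feedback edge: a determination is only made once sufficient, appropriate evidence exists; otherwise the procedure returns to test design rather than forcing a conclusion.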

  • PCI DSS compliance validation evidence assessment and acceptance

    The assessor is required to critically evaluate evidence of compliance. This requires set standards that enable the consistent evaluation of evidence against established evidence acceptance criteria. Validation procedures for compliance evidence typically include documentation that explains the design of the security control and its operation, plus a documented set of tests designed to confirm the effectiveness of the control in meeting the intent of the relevant control objectives. Test procedures should preferably be developed jointly, approved and agreed upon by the assessed entity and the QSA, prior to the assessor's evaluation of each customized control.

    The QSA will make firsthand observations of the control environment, conduct interviews and request documents from the assessed entity. The assessor will then abstract this information to obtain evidence supporting the conclusions of the assessment findings. The strength of the evidence will depend on the evidence type, distinguishing between primary (firsthand) and secondary (secondhand) sources of information. The documentation collection and review generally includes policies, standards and procedure documentation, documented control design profiles, asset inventories, configuration files, audit logs, data files, training records, etc. Ideally, there should be documentation for all elements of the security operating model (see page 35) and the Security Management Canvas (see page 33).

    Evidence validity criteria

    Assessors are required to exercise professional judgment and skepticism when evaluating the quantity and quality of evidence, and thus its sufficiency and appropriateness. Determination of how adequately a control conforms to the intent of its relevant control objective(s) is based on evidence that meets the evidence validity criteria. Evidence can be considered valid when it consists of data judged to be both sufficient (a measure of the quantity of evidence) and appropriate (a measure of its quality). The sufficiency and appropriateness of evidence are interrelated: the evidence collected has to be enough, and how much is enough depends on standards of evidence provided by the PCI SSC (such as the sampling standards included in the DSS), standards established by the assessor, and the circumstances of the engagement. In general, the higher the quality of the evidence presented, the less of it may be required; merely obtaining more evidence may not compensate for its poor quality.

    The basic criterion and main test for determining whether evidence is acceptable is triangulation among the validity, reliability and accuracy of the evidence presented. Evidence collected is therefore considered appropriate when the assessor evaluates it and determines that it is 1) relevant to the assertion being tested, 2) from a reliable source and 3) accurate. Each of these evidence validity criteria, with its associated qualities, is further explained below.

    • Evidence validity


      1. Relevance

      Evidence presented during a PCI DSS compliance validation assessment must have credible relevance to the DSS requirement, or relate to dependent compliance requirements (the control system in question). The evidence should be evaluated in the context of the CDE and the overall control environment. Evidence that has no relevance or relation to the control system in question and the CDE is deemed unacceptable. In other words, the artifacts (evidence) presented must support the existence of any fact that is of consequence to the determination of the compliance status of a control and control system (i.e., in place, not in place, compensated, not applicable, control effectiveness, etc.) and substantiate the effectiveness of the operation and management of the control system, its robustness and its resilience. The fundamental test of the relevance of an artifact and its associated facts is that it must make the determination of compliance status more probable when included, and less probable when excluded. This is especially important when creating test procedures for the validation of PCI DSS v4.0 customized controls, to help establish the lower and acceptable upper boundaries of the amount of evidence that can be considered “sufficient.”

      2. Reliability

      It must be possible to test the reliability of evidence: the origin, accuracy, authenticity, age, ownership, trustworthiness and dependability of the artifact or data. Reliability is the extent to which the assessor can confidently rely on the source of the data and, therefore, the data itself. Reliable data should be dependable, authentic, trustworthy, genuine, reputable and consistent. Evidence is considered more reliable when it’s obtained from independent sources, in documented form (original documents) and corroborated by different sources, as compared to evidence obtained indirectly or by inference. The reliability of evidence is therefore influenced by its source and nature, and depends on the individual circumstances under which it’s obtained.

      3. Accuracy and completeness

      Accuracy refers to the degree to which the evidence presents data, measurements, calculations or specifications with true, precise values. The evidence presented must be clear and complete: reasonably free from mistakes and errors, and conforming to the correct values in terms of detail, such as dates, numbers, names, locations, etc. The reliability of the data obtained from a sample will increase as the sample grows in size toward that of the whole population. In general, it’s the size of the sample that determines its accuracy; the size of the population is less relevant. In statistical terms, doubling the size of the sample will not double the reliability of the information. Accuracy is proportional to the square root of the sample size, so to double the accuracy, the sample size must be increased fourfold, which greatly increases the cost of the sample survey.49 This is an example of the so-called law of diminishing returns (or diminishing marginal utility), and it explains why most samples are relatively small.
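The square-root relationship can be checked with a few lines of arithmetic. A minimal sketch, using the standard error of a sample mean (sigma / √n) as the measure of precision; the population standard deviation of 10.0 is an illustrative assumption, not a figure from the report:

```python
# Precision of a sample estimate scales with the square root of the
# sample size: standard error of the mean = sigma / sqrt(n).
import math

sigma = 10.0  # illustrative population standard deviation (assumed)

def se(n):
    """Standard error of the mean for a sample of size n."""
    return sigma / math.sqrt(n)

# Doubling the sample size does not double the accuracy...
print(se(100))  # 1.0
print(se(200))  # ~0.707, only about 1.4x more precise

# ...to double the accuracy (halve the standard error), quadruple the sample:
print(se(400))  # 0.5
```

Going from 100 to 400 samples halves the error; going from 100 to 200 improves it only by a factor of √2, which is the diminishing-returns effect the text describes.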

  • Burden of proof: Positive confirmation vs negative confirmation

    The evidence should reflect the whole story; it’s not enough to collect evidence that shows just one perspective. The focus of the assessor (validation QSA) is to request supporting evidence from the assessed entity that clearly and convincingly substantiates how and why all compliance requirements are met (positive confirmation), not to start from a position of assuming full compliance and then try to determine where discrepancies may exist that would indicate noncompliance (negative confirmation). This is an important distinction in the focus of the assessment approach.

    In general, the burden of proof during compliance validation lies with the assessed entity, since merchants and service providers are required to provide sufficient evidence to support their claims that all PCI DSS requirements are, indeed, in place. The burden is not on the assessor to gather evidence of noncompliance. As mentioned, assessors should not generally assume an initial position that the assessed entity is fully compliant and then attempt to demonstrate noncompliance; instead, the evidence presented should convincingly support the claim of compliance.

    The introduction of a customized approach for control design and validation under PCI DSS v4.0 may depend on the involvement of a remediation QSA in the design and validation of customized controls. It may also introduce a bilateral burden of proof, where some aspects of the burden are shared between the remediation QSA and the assessed entity to produce evidence that the design of customized controls is effective and meets the intended objectives.