Domain 6. Secure Lifecycle Management
Guidelines for Software Acceptance
- Software Acceptance: the process of officially or formally accepting new or modified software components.
- Determine whether a product or service has met the criteria in the contract.
- Acceptance criteria must be predefined:
- Functionality
- Performance
- Quality
- Safety
- Privacy
- Security
- Objectives of software acceptance:
- Verification that the software meets specified functional and assurance requirements
- Verification that the software is operationally complete and secure as expected
- Obtaining the approvals from the system owner
- Transfer of responsibility from the vendor to the acquirer.
- SD3 initiatives:
- Secure by Design
- Secure by Default
- Secure in Deployment
- Software accepted for deployment or release must:
- Meet SD3 initiatives
- Complement existing defense in depth protection
- Run with least privilege
- Be irreversible and tamper-proof
- Isolate and protect administrative functionality and security management interfaces
- Be validated by the users who will be using it
- Have non-technical protection mechanisms in place
- Legal protections and acceptance
- Accepting the EULA binds the user to the Digital Millennium Copyright Act (DMCA); this acts as a deterrent control.
- Benefits of Accepting Software Formally
- Assures that the software is of high quality, reliable, and protected from risks.
Software Acceptance Considerations
- Completion Criteria
- Requirement phase:
- Requirements traceability matrix
- Design phase:
- Threat model
- Security architecture review
- Development phase:
- Testing phase:
- Pre-Deployment phase:
- Change Management
- Perform impact evaluation on the change request
- Update asset management database
- Obtain approval of the change request from the appropriate authorities
- Approval to Deploy or Release
- Conduct risk analysis to determine residual risks
- Communicate residual risks to the owner
- Include recommendation in the approval or rejection
- Authorizing official (AO) is responsible for change approvals
- Document the approvals
- Risk Acceptance and Exception Policy
- General rules:
- Residual risks should be below the business defined threshold
- Residual risk must be accepted by the business owner
- If risks cannot be handled due to time/resource constraints, consider risk transfer and avoidance.
- Risk acceptance template (RAID):
- Risk:
- Document the probability of a security situation that can lead to a security breach
- Avoid technical jargon; aim the description at a business audience
- Actions:
- Inform the technical team of the steps that have been taken and the steps still to be taken
- Issues:
- Provide the development team with details of how the threats can be realized
- Decisions:
- Provide management with the options to consider
- Risks due to non-compliance cannot be avoided:
- Exception policy:
- A temporary measure that allows business continuity and can be used as an audit defense.
- The documented issue must be temporary and remediated soon.
- Documentation of Software
- Purpose:
- Make the software deployment process easy and repeatable
- Ensure operations are not disrupted
- Ensure the impact of a change is understood
- Documentation is often overlooked, hence it’s best to produce documentation at each phase.
Document Type and Assurance Aspect
- Requirements Traceability Matrix (see the traceability sketch after this list):
- Are functionality and security aspects traceable to customer requirements and specifications?
- Threat Model:
- Is the threat model comprehensively representative of the security profile and addressing all applicable threats?
- Risk Acceptance Document:
- Is the risk appropriately mitigated, transferred or avoided?
- Is the residual risk below the acceptable level?
- Has the risk been accepted by the AO with signatory authority?
- Exception Policy Document:
- Is there an exception to policy and if so is it documented?
- Is there a contingency plan in place to address risks that do not comply with the security policy?
- Change Requests:
- Is there a process to formally request changes to the software and is this documented and tracked?
- Is there a control mechanism defined for the software so that only changes that are approved at the appropriate level can be deployed to production environments?
- Approvals:
- Are approvals (risk, design and architecture review, change, exception to policy, etc.) documented and verifiable?
- Are appropriate approvals in place when existing documents like BCP, DRP, etc. need to be redrafted?
- Business Continuity Plan (BCP) or Disaster Recovery Plan (DRP):
- Is the software incorporated into the organizational BCP or DRP?
- Does the DRP not only include the software but also the hardware on which it runs?
- Is the BCP/DRP updated to include security procedures that need to be followed in the event of a disaster?
- Incident Response Plan (IRP):
- Is there a process and plan defined for responding to incidents (security violations) because of the software?
- Installation Guide:
- Are steps and configuration settings predefined to ensure that the software can be installed without compromising the secure state of the computing ecosystem?
- User Training Guide/Manual: Is there a manual to inform users how they will use the software?
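A minimal sketch of how a requirements traceability matrix could be represented and checked programmatically; the requirement IDs, artifact names, and structure are hypothetical illustrations, not a prescribed format.

```python
# Minimal sketch of a requirements traceability matrix (RTM).
# Requirement IDs, design elements, file paths, and test case names are hypothetical.
rtm = {
    "REQ-001 (authentication)": {
        "design": ["DES-AUTH-01"],
        "code": ["auth/login.py"],
        "tests": ["TC-AUTH-001", "TC-AUTH-002"],
    },
    "REQ-002 (audit logging)": {
        "design": ["DES-LOG-03"],
        "code": ["logging/audit.py"],
        "tests": [],  # gap: no test traces back to this requirement
    },
}

# Flag requirements that are not fully traceable to downstream artifacts.
for req, links in rtm.items():
    missing = [phase for phase, artifacts in links.items() if not artifacts]
    if missing:
        print(f"{req}: missing traceability to {', '.join(missing)}")
```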
Software Qualification Testing
- The formal analysis to determine whether a system or software product satisfies its acceptance criteria.
- Ensures that the customer’s requirements have been met.
- Validates compliance with the initial requirements, contract, and standards.
- Validates important elements under both normal and abnormal conditions.
Qualification Testing Plan
- Purpose: to minimise the number of test cases while still achieving sufficient quality and security.
- Include:
- Scope
- Approach
- Resources
- Schedule
- Test cases and time
- Completion criteria
- Considerations:
- The required features to be tested
- Requisite load limits
- Number and types of stress tests
- All necessary risk mitigation and security tests
- Requisite performance levels
- Interfaces to be tested
- The plan should also address each of the following questions
- Who’s responsible for generating the test designs/cases and procedures?
- Who’s responsible for executing the tests?
- Who’s responsible for building/maintaining the test bed?
- Who’s responsible for configuration management?
- What are the criteria for stopping the test effort?
- What are the criteria for restarting testing?
- When will source code be placed under change control?
- Which test designs/cases will be placed under configuration management?
- What level will anomaly reports be written for?
Software Qualification Testing Hierarchy
- A bottom-up approach to test the following items:
- Architecture
- Components
- Interfaces
- Data
- Activities at the acceptance level include
- Software design traceability analysis (e.g., trace for correctness)
- Software design evaluation
- Software design interface analysis
- Test plan generation (by each level)
- Test design generation (by each level)
Pre-release Activities
- Goal: to provide an objective assessment of the product in terms of:
- Accuracy
- Completeness
- Consistency
- Testability
- Activities:
- Assess
- Analyze
- Evaluate
- Review
- Inspect
- Test
- Key notes:
- Total product assurance is a noble, but not necessarily achievable, goal
- Preserve evidence throughout the process to demonstrate that pre-release activities were properly completed
- It’s impossible to find all failure modes
- Pre-release activities should be planned as early as possible so that they can be written into the contract
- The contract is the only way to enforce pre-release activities
- Pre-release testing should start when the system is initially integrated and end with beta testing of the delivered product.
Implementing the Pre-release Testing Process
- Test plan:
- Objectives
- Scope
- Approach
- Focus of testing effort
- Should be defined with the right level of detail.
Conducting a Test
- Determine what to test
- Identify high-risk aspects
- Set priorities
- Determine scope and limitations
- Prepare test plan
- Prepare test cases
- Establish test data
Key Component: Test Cases
- Purpose: help find problems in the requirements or design of an application.
- Describes:
- Input
- Action/Event that is expected to produce a predictable response
- Content: Identifier, Name, Objective, Conditions/setup, Input data, Steps, Expected results
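A minimal sketch of how the test case content listed above could be captured as a structured record; the field names mirror the list above and the example values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Fields mirror the content listed above: identifier, name, objective,
    # conditions/setup, input data, steps, and expected results.
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: dict
    steps: list = field(default_factory=list)
    expected_result: str = ""

# Illustrative instance; all values are hypothetical.
tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Reject invalid password",
    objective="Verify that authentication fails closed on bad credentials",
    setup="Test user 'alice' provisioned with a known password",
    input_data={"username": "alice", "password": "wrong-password"},
    steps=["Submit the login form", "Observe the response"],
    expected_result="Access denied with a generic error; no account details leaked",
)
print(tc.identifier, "-", tc.expected_result)
```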
Black-Box Testing
- Purpose: to confirm that a given input reliably produces the anticipated output condition.
- Based on requirements and the specified functionalities; no knowledge of the code.
- Ideal for situations where the actual mechanism is not known.
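A minimal black-box style sketch: only specified inputs and expected outputs are exercised, with no reference to the implementation. The validate_discount function is a hypothetical stand-in for the system under test.

```python
# Black-box style checks: only documented inputs and expected outputs are used.
# validate_discount is a hypothetical function; its internals are treated as unknown.
def validate_discount(percent):
    # Placeholder implementation standing in for the real system under test.
    return 0 <= percent <= 100

# (input, expected output) pairs derived purely from the specification.
cases = [(0, True), (50, True), (100, True), (-1, False), (101, False)]
for value, expected in cases:
    actual = validate_discount(value)
    assert actual == expected, f"input {value}: expected {expected}, got {actual}"
print("All black-box cases passed")
```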
White-Box Testing
- Purpose: to confirm internal integrity and logic of the artifact, which is primarily the code.
- Based on and requires knowledge of the internal logic.
- Normally done using a targeted set of prearranged test cases.
- Tests are based on and examine coverage of code statements, branches, paths, and conditions.
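A minimal white-box sketch: test cases are chosen by reading the internal logic so every branch is executed. The classify_age function is a hypothetical example used only to illustrate branch coverage.

```python
# White-box style testing: cases are derived from the internal logic so that
# every branch of the hypothetical classify_age function is executed.
def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")   # branch 1
    elif age < 18:
        return "minor"                                # branch 2
    else:
        return "adult"                                # branch 3

# One case per branch gives full branch coverage for this function.
assert classify_age(10) == "minor"
assert classify_age(30) == "adult"
try:
    classify_age(-1)
except ValueError:
    pass  # negative-age branch exercised as expected
print("All branches exercised")
```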
Load Testing
- Identify performance problems without hand checking.
- Frequently performed on an ad hoc basis during the normal development process.
Stress Testing
- A term that is often used interchangeably with “load” and “performance” testing
- Normally supported by software and other forms of automation.
- Same purpose as load testing in the sense that it is looking to predict failure thresholds.
- Differs from load testing in the sense that any potential area of failure under stress is targeted.
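A minimal load/stress-style sketch using only the Python standard library: concurrent requests are fired at a target endpoint and latencies recorded. The URL, concurrency level, and request counts are placeholder assumptions; real load, stress, or performance testing would normally use dedicated tooling and pre-agreed load profiles.

```python
# Minimal load-test sketch using only the standard library.
# TARGET_URL and the concurrency settings are placeholders, not real values.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint under test
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 10

def hit_endpoint(_):
    start = time.monotonic()
    with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
        resp.read()
    return time.monotonic() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = list(pool.map(hit_endpoint, range(CONCURRENT_USERS * REQUESTS_PER_USER)))

latencies.sort()
print(f"requests: {len(latencies)}")
print(f"p50 latency: {latencies[len(latencies) // 2]:.3f}s")
print(f"max latency: {latencies[-1]:.3f}s")
```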
Performance Testing
- Often used interchangeably with “stress” and “load” testing; generally supported by software
- Differs from the other two in that it normally references criteria that are established in advance
Usability Testing
- Testing for user-friendliness
- Problem:
- Subjective
- Not generally supported by software
Alpha Testing
- Typically done by end users, not by programmers or technical testers.
- Takes place when a product is nearing completion
Beta Testing
- Purpose: to exercise the product in its environment.
- The most common method of pre-release testing.
- Takes place when the product is assumed to be complete.
- Automated tools can include:
- Code analyzers: monitor code complexity and adherence to coding standards.
- Coverage analyzers: determine the parts of the code that are yet to be tested.
- Memory analyzers: look at physical performance.
- Load/performance test tools: test applications under various load levels.
- Web test tools: check that links are valid, that HTML code usage is correct, that client-side and server-side programs work, and that a website’s interactions are secure.
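A minimal sketch of the link-checking part of a web test tool, using only the standard library; START_URL is a placeholder, and real web test tools also cover HTML validity, client/server behaviour, and security checks.

```python
# Minimal link-checker sketch (one function of the web test tools listed above).
# START_URL is a hypothetical site under test.
import urllib.error
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

START_URL = "http://localhost:8080/"  # placeholder

class LinkCollector(HTMLParser):
    """Collects href targets from anchor tags on the start page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(START_URL, value))

with urllib.request.urlopen(START_URL, timeout=5) as resp:
    collector = LinkCollector()
    collector.feed(resp.read().decode("utf-8", errors="replace"))

# Report each link as reachable (HTTP status) or broken.
for link in collector.links:
    try:
        with urllib.request.urlopen(link, timeout=5) as resp:
            print(f"{resp.status}  {link}")
    except urllib.error.URLError as exc:
        print(f"BROKEN  {link}  ({exc})")
```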
Completion Criteria
- Serves as the evidence of the product’s readiness for delivery.
- Established by the contract.
- Mandates a tangible outcome or action that can be judged in objective terms.
ISO 9126 Criteria
- A common set of criteria that can be used to assess the general suitability of software or a service.
- Six generic criteria:
- Functionality
- Suitability to requirements
- Accuracy of output
- Interoperability
- Functional compliance
- Security
- Reliability
- Product maturity
- Fault tolerance
- Frequency of failure
- Recoverability
- Usability
- Efficiency
- Time behavior: response and processing times and throughput rates
- Resource behavior: resources and duration in performing a given function
- Maintainability
- Time taken to analyze and address a problem report or change request
- Ease of the change if a decision is made
- Measure of the general stability
- Operational testability of the product
- Portability
- The ability of the product to be adapted to a given situation
- Ease of installation
- Conformance with the organization’s general requirements and regulations
Risk Acceptance
- Goal: maximize resource utilization by identifying the risks with the greatest likelihood and impact (see the risk-scoring sketch at the end of this section).
- The scope has to be defined to keep the process realistic and accurate.
- The assessment has to be built around concrete evidence.
- Adopt a commonly accepted and repeatable methodology for data collection that produces reliable and concrete evidence that can be independently verified as correct.
- Requires detailed knowledge of the risks and consequences.
- Mission-critical software requires a high degree of integrity
- (critical) Document a process for identifying and prioritizing risks.
- Key properties:
- Implicit and explicit safety requirements
- Implicit and explicit security requirements
- Degree of software complexity
- Performance factors
- Reliability factors
- Threat picture (situational assessment): assessments of specific vulnerabilities at a point in time.
- The assessment should maintain continuous knowledge of three critical factors:
- The inter-relationships among all of the system assets
- The specific threats to each asset
- The precise business and technological risks associated with each vulnerability
- Latent threat: a threat that has no immediate consequences.
- Operational risk can be compartmentalised
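A minimal sketch of prioritising risks by likelihood and impact, as referenced above; the risk names, the 1–5 scoring scale, and the acceptance threshold are illustrative assumptions rather than a mandated method.

```python
# Minimal risk-prioritisation sketch: rank risks by likelihood x impact.
# Risk names, the 1-5 scale, and the threshold are illustrative assumptions.
risks = [
    {"name": "SQL injection in search form", "likelihood": 4, "impact": 5},
    {"name": "Weak TLS configuration",        "likelihood": 3, "impact": 4},
    {"name": "Verbose error messages",        "likelihood": 5, "impact": 2},
]

RISK_THRESHOLD = 12  # business-defined acceptance threshold (assumed)

for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = risk["likelihood"] * risk["impact"]
    if score > RISK_THRESHOLD:
        status = "must be mitigated, transferred, or avoided"
    else:
        status = "candidate for business acceptance"
    print(f"{score:>3}  {risk['name']}: {status}")
```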
Post-release Activities
- Goal: to place a completed and tested application into an existing operating environment.
- Preparation:
- Define a formal baseline for control.
- Determine whether all software products are installed and operating correctly. (audit)
- Post-release plan: describes the post-release procedures, including:
- Timing of the reports
- Deviation policy
- Control procedures for problem resolution
- Additional elements
- Configuration audit
- Baseline generation
- Operation and maintenance problem reporting
- Configuration and baseline management plan revision
- Anomaly evaluation, reporting, and resolution
- Proposed change assessment/reporting
- General status reporting
- Configuration management administration
- Policy and procedure to guide the selection of standard practices and conventions
Verification and Validation (V&V)
- Objective: ensure the software is reliable and no unintended behavior is observed or can be forced.
- Two categories:
- Reviews: Design, Code
- Testing: Error Detection, Acceptance, Independent Third Party
- Two forms:
- Verification: determine whether the product satisfies the conditions imposed at the start of the phase.
- Are we building the product right?
- Validation: determine whether the product satisfies the specified requirements.
- Are we building the right product?
- All activities should be documented in the Software Validation and Verification Plan (SVVP):
- Configuration baseline change assessment
- Management and technical reviews
- Interfaces between development and all organizational and supporting processes
- Documentation and reporting
- All validation and verification task reports
- Activity summary reports
- Anomaly reports
- Final report
- Specify the administrative requirements for:
- Anomaly resolution and reporting
- Exception/deviation policy
- Baseline and configuration control procedures
- Standards, practices, and conventions adopted for guidance
- Form of the relevant documentation: plans, procedures, cases, and results
Management V&V
- Examine management plans, schedules, requirements and methods for the purpose of assessing their suitability to the project.
- Support decisions about corrective actions, allocation of resources and project scoping.
- Management reviews support the individuals who have direct responsibility for the system.
- Roles involved:
- Decision maker
- Review leader
- Review recorder
- Management staff
- Technical staff
- Customer (or user) representative
Technical V&V
- Evaluates the product itself: requirements and design documentation, the code, test and user documentation, manuals and release notes, and the build and installation procedures.
- Support and enforce producer and customer understanding of the product.
- Roles involved:
- Decision maker
- Review leader
- Review recorder
- Technical staff and technical managers
- Customer (or user) technical staff
Reviews
- Performed as needed at each phase.
- A checklist is not enough.
- Tools are usually used for prioritisation; be careful of false positive rates.
- Formal review:
- Presentation of the materials
- Reviews must not be a mere check-in-the-box exercise
- Review panel is appointed by the acquirer
- Fagan inspection process: highly structured process (specifications, design and code)
- Informal review:
- Design review: detect architectural flaw
- Code review: detect bugs/errors
- Security design review
Testing
- Error Detection Tests: unit and component level testing.
- Proper handling of input validation using fuzzing (see the fuzzing sketch at the end of this Testing section)
- Proper output responses and filtration
- Proper error handling mechanisms
- Secure state transitions and management
- Proper handling of load and stress tests
- Resilience of interfaces
- Temporal (race conditions) assurance checks
- Spatial (locality of reference) assurance and memory management checks
- Secure software recovery upon failures
- Acceptance Tests: demonstrate if the software is ready for its intended use.
- Regression testing: ensure backward compatibility, no new risks introduced.
- Simulation testing: check for configuration mismatches and data discrepancy (production-like environment).
- Independent (Third party) tests
- Main issue for internal staff: lack of objectivity
- Transfers liability inherent in the software risks to the third party
- Be aware of the tools used by 3rd party and their independence
- Should report to a person at significant level to ensure the results are properly considered
- Items that may be audited include:
- Project Plans
- Project Contracts
- Project Complaints
- Project Procedures
- Reports and other documentation
- Source code
- Deliverables
- Process:
- Planning activities
- Collect evidence
- Review by the audited party
- The auditor is always third-party personnel
- Close meeting and report
- Preliminary conclusions
- Problems experienced
- Recommendations for remediation
- Follow up and final report
- The purpose and scope
- The audited organization
- The software product(s) audited
- Any applicable regulations and standards
- The audit evaluation criteria
- An observation list classifying each anomaly detected as major or minor
- The timing of audit follow-up activities
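A minimal sketch of the fuzzing-based input validation check referenced in the error detection tests above; parse_quantity is a hypothetical function under test and the random input generator is deliberately simple compared with a real fuzzer.

```python
# Minimal fuzzing sketch for input-validation error detection tests.
# parse_quantity is a hypothetical function under test; the generator is deliberately simple.
import random
import string

def parse_quantity(raw):
    """Hypothetical input handler: must reject anything that is not an integer 1-999."""
    value = int(raw)          # raises ValueError on non-numeric input
    if not 1 <= value <= 999:
        raise ValueError("quantity out of range")
    return value

def random_input():
    alphabet = string.printable
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, 40)))

crashes = 0
for _ in range(10_000):
    candidate = random_input()
    try:
        parse_quantity(candidate)
    except ValueError:
        pass                  # controlled rejection is the expected behaviour
    except Exception as exc:  # any other exception indicates unhandled input
        crashes += 1
        print(f"Unexpected {type(exc).__name__} for input {candidate!r}")
print(f"fuzzing finished, unexpected failures: {crashes}")
```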
Certification and Accreditation (C&A)
- Certification: the technical verification of the software functional and assurance levels.
- Assess the suitability of software to operate in a computing environment.
- Evaluate both the technical and non-technical controls based on predefined criteria (e.g. Common Criteria)
- For software in the operational environment.
- Minimum assurance evaluation:
- User rights, privileges and profile management
- Sensitivity of data and application and appropriate controls
- Configurations of system, facility and locations
- Interconnectivity and dependencies
- Operational security mode
- Accreditation: management’s formal acceptance of the system after an understanding of the risks of operating that system in the computing environment.
- The evaluation is rated at the end; then a decision can be made to accept or reject.
- Resources:
- PCI DSS mandates that any compensating control used must provide a similar (not necessarily increased) level of defense as the original requirement.
- ISO/IEC 27006:2007 provides requirements and guidance for bodies providing audit and certification of information security management systems (ISMS).