Requirements Engineering – Defining the System

When architecting a secure system, we must define both its capabilities and its constraints; failing to distinguish between the two leads to flawed architectures and misunderstood project scopes.

  • Functional Security Requirements (The "What"): A functional requirement defines a specific behaviour or action the system must perform. It is a feature you can observe happening.

    • Examples: Locking a user's account for 15 minutes after five consecutive failed login attempts; generating an alert to the SIEM when a user attempts to escalate privileges; requiring users to input a One-Time Password (OTP).
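The lockout example above can be sketched in a few lines. This is a minimal illustration of the stated behaviour (five consecutive failures trigger a 15-minute lock); the class and method names are illustrative, not from any particular product.

```python
import time

MAX_FAILURES = 5           # consecutive failed attempts before lockout
LOCKOUT_SECONDS = 15 * 60  # 15-minute lockout window

class AccountLockout:
    """Tracks consecutive failed logins and enforces a temporary lockout."""

    def __init__(self):
        self.failures = {}      # username -> consecutive failure count
        self.locked_until = {}  # username -> unlock timestamp

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(user, 0) > now

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= MAX_FAILURES:
            self.locked_until[user] = now + LOCKOUT_SECONDS
            self.failures[user] = 0  # reset the counter once the lock engages

    def record_success(self, user):
        self.failures.pop(user, None)  # a successful login resets the counter
```

Note that the requirement is directly observable from the outside (the fifth failure locks the account), which is exactly what makes it functional rather than non-functional.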

  • Non-Functional Security Requirements (NFRs) (The "How" / The Constraints): An NFR dictates the qualities, attributes, or constraints under which the system must operate. It does not describe a new feature; it describes the security posture of the existing features.

    • Examples: Encrypting all data at rest (the user’s goal is to store a file; encryption is the hidden constraint applied to that action); requiring cryptographic modules to have FIPS 140-2/3 validation to ensure tamper resistance; dictating performance baselines under heavy cryptographic load.

The Secure Software Development Lifecycle (SDLC)

Historically, engineering relied on the sequential Waterfall model. From a security standpoint, this is heavily criticised because it treats security as a bolt-on phase at the very end. Discovering a fundamental architectural flaw during final testing makes remediation catastrophically expensive.

To solve this, modern teams utilise DevSecOps, in which security is a shared responsibility across all teams and the security department provides automated guardrails. Within this modern SDLC, we apply several critical controls:

  • Protecting Test Environments: When a project team needs "real-world" fidelity for testing, you cannot simply copy over production data. You must utilise data masking or tokenisation to de-identify sensitive information before it crosses the production boundary.
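A minimal sketch of both de-identification techniques, assuming a simple record layout. The `Tokenizer` class and `mask_email` helper are hypothetical names for illustration; real tokenisation platforms keep the vault inside the production boundary exactly as the comment indicates.

```python
import secrets

class Tokenizer:
    """Tokenisation: swap each sensitive value for a random token.
    The token-to-value vault never leaves production."""

    def __init__(self):
        self._vault = {}  # token -> original value (stays in production)

    def tokenize(self, value):
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

def mask_email(email):
    """Masking: keep the domain for test realism, hide the local part."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

# Build the copy that is allowed to cross into the test environment.
record = {"name": "Ada Lovelace", "email": "ada@example.com"}
t = Tokenizer()
test_copy = {"name": t.tokenize(record["name"]),
             "email": mask_email(record["email"])}
```

The design point: masking is one-way (fine for most test data), while tokenisation is reversible, but only by whoever holds the vault.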

  • Vulnerability Discovery: We use multiple testing types to uncover different flaws. To find buffer overflows and error-handling issues, we use Fuzzing by inputting large amounts of random, invalid data to intentionally crash the application.
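A toy fuzzer makes the idea concrete. The parsers below are invented examples: the first documents `ValueError` as its only rejection path, the second divides by the input length and therefore crashes on empty input, which is the kind of unhandled error fuzzing is meant to surface.

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser under test: first byte is a length, rest is payload."""
    if not data:
        raise ValueError("empty input")
    n = data[0]
    if len(data) - 1 < n:
        raise ValueError("truncated payload")
    return data[1:1 + n]

def buggy_parser(data: bytes) -> int:
    return 10 // len(data)   # ZeroDivisionError on empty input

def fuzz(target, rounds=1000, seed=42):
    """Throw random bytes at the target. Any exception other than the
    documented rejection (ValueError) counts as a crash worth triaging."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 32)))
        try:
            target(blob)
        except ValueError:
            pass  # documented rejection, not a bug
        except Exception as exc:
            crashes.append((blob, exc))
    return crashes
```

Real fuzzers (coverage-guided tools such as AFL or libFuzzer) mutate inputs far more cleverly, but the loop structure is the same: generate, feed, record anything that dies unexpectedly.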

  • Detecting Complex Logic Flaws: Automated tools struggle with concurrency issues such as Race Conditions (Time-of-Check-to-Time-of-Use). These are best detected through targeted Manual Code Review.
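The TOCTOU window is easiest to see side by side. This is an illustration of the pattern a manual reviewer looks for, not an exploit: between the check and the use in the first function, the file's state can change (for example, swapped for a symlink to a sensitive target).

```python
import os
import tempfile

def read_config_racy(path):
    """Check-then-act: vulnerable to TOCTOU."""
    if os.access(path, os.R_OK):   # Time of Check
        with open(path) as f:      # Time of Use (state may have changed)
            return f.read()
    return None

def read_config_safe(path):
    """Attempt the operation directly and handle failure, so there is
    no window between a check and the use."""
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None
```

Both functions return the same result under normal conditions, which is precisely why automated scanners struggle here: the flaw only manifests under a concurrent change, something a human reviewer can spot from the code's shape.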

  • Mitigating Insider Threats: To prevent a rogue developer from inserting a Logic Bomb (malicious code that executes only under highly specific conditions to evade scanners), we enforce Separation of Duties (SoD). This ensures developers cannot unilaterally push their own code to production without secondary review.

  • Supply Chain Integrity: Before software is shipped or deployed, it must be code-signed. This cryptographically guarantees that the code has not been altered (integrity) and confirms the publisher's identity (authenticity).
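The verify-before-deploy flow can be sketched as follows. Real code signing uses asymmetric signatures tied to a publisher certificate (e.g. Authenticode, or a detached GPG signature); this sketch substitutes a stdlib HMAC as a stand-in so the structure stays self-contained, and the key name is purely illustrative.

```python
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret"  # stand-in for the publisher's private key

def sign(artifact: bytes) -> bytes:
    """Produce a tag binding the artifact to the signing key."""
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).digest()

def verify(artifact: bytes, signature: bytes) -> bool:
    """Recompute the tag and compare in constant time; any single-bit
    change to the artifact invalidates the signature."""
    expected = hmac.new(SIGNING_KEY, artifact, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

release = b"binary-bytes-v1.0"
sig = sign(release)
```

With asymmetric signing, verification needs only the publisher's public key, which is what lets any recipient confirm both integrity and authenticity without sharing a secret.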

Architecture and Supply Chain Governance

Security does not exist in a vacuum; your risk perimeter extends to the vendors, cloud providers, and physical infrastructure you rely on.

  • Vendor Acquisition (ISO 15288): When acquiring software or systems, formalised "Agreement Processes" are used to establish the strict security obligations and requirements that legally bind the external supplier to your organisation's standards.

  • Cloud Computing: If a SaaS vendor claims they are secure simply because they host on AWS, they are ignoring the Shared Responsibility Model. The vendor remains entirely responsible for their own application security, data encryption, and access controls built on top of the cloud infrastructure.

  • Containerization Risks: Modern applications rely heavily on containers (e.g., Docker). While highly efficient, their specific security risk is the Shared Kernel. If an attacker compromises the underlying host kernel, all containers running on that host are simultaneously compromised.

  • Network Interconnections: When two separate organisations establish a direct network connection to exchange data, an Interconnection Security Agreement (ISA) must be drafted to formally define the technical security requirements of that connection.

  • Hardware Lifecycle Management: Security extends to the physical disposal phase. Modern office printers, for example, are a severe security concern upon disposal because they contain internal hard drives and memory caches that retain sensitive printed documents.

Security Operations and Continuous Assurance

Once a system is live, the focus shifts to maintaining its integrity, hunting for deviations, and proving compliance.

Managing Change and Drift

  • Configuration Drift: Over time, systems are manually tweaked. The primary risk here is that the system's actual state diverges from the approved, secure baseline, silently introducing unknown vulnerabilities.

  • Detection & Prevention: To detect this drift, we implement File Integrity Monitoring (FIM) to alert on unauthorised changes to critical system files. Furthermore, before any significant change is approved for production, a Security Impact Analysis (SIA) must be conducted to ensure the change will not degrade existing controls.
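The hash-baseline mechanism at the heart of FIM is simple enough to sketch directly. This is a minimal illustration, not a production tool: real FIM products (e.g. Tripwire-style agents) also watch permissions, ownership, and registry keys, and the file name below is invented.

```python
import hashlib
import os
import tempfile
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 baseline for each monitored file."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def detect_drift(baseline, paths):
    """Return the files whose current hash diverges from the baseline."""
    current = snapshot(paths)
    return sorted(p for p in current if baseline.get(p) != current[p])

# Demo: baseline a config file, then make an unauthorised change.
workdir = tempfile.mkdtemp()
conf = os.path.join(workdir, "app.conf")
Path(conf).write_text("port=443\n")
baseline = snapshot([conf])
Path(conf).write_text("port=8080\n")   # manual tweak = drift
drifted = detect_drift(baseline, [conf])
```

Re-running `snapshot` after every approved change is what keeps the baseline authoritative; without that step, legitimate changes and drift become indistinguishable.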

Vulnerability Management & Assessment

  • Scanning Strategy: To achieve the most accurate picture of missing patches while minimising disruptive network traffic, utilise credentialed (authenticated) scans. The scanner logs directly into the host rather than guessing vulnerabilities from the outside.

  • Penetration Testing: Before engaging ethical hackers, you must clearly define the Rules of Engagement (RoE). This document dictates the scope, permitted attacks, and timing to prevent the test from accidentally causing a Denial of Service (DoS) or legal liability.

  • Bug Bounties: Crowdsourced security is effective, but a Bug Bounty program should never be launched until the organisation has a mature vulnerability intake and remediation process in place to handle the massive influx of reports.

Incident Response and Proactive Defense

  • Advanced Detection: Mature teams do not just wait for SIEM alerts; they engage in Threat Hunting—proactively searching through networks to detect advanced persistent threats that evade automated solutions.

  • Malware Analysis: Suspicious code is analysed in a Sandbox, a strictly isolated environment where malware can execute safely, revealing its behaviour without risking the production network.

  • Root Cause & Metrics: If a recurring incident is caused by user error, the root cause is a failure in Directive/Administrative controls (policies and training), which must be improved. The security team's efficiency in resolving these issues is tracked using the Mean Time to Remediate (MTTR) metric.
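The MTTR calculation itself is just an average over closed incidents, measured from detection to remediation. A minimal sketch with invented timestamps:

```python
from datetime import datetime, timedelta

def mean_time_to_remediate(incidents):
    """MTTR = average of (resolved - detected) across closed incidents.
    Each incident is a (detected, resolved) datetime pair."""
    durations = [resolved - detected for detected, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

incidents = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 13, 0)),  # 4 hours
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 2, 11, 0)),  # 2 hours
]
mttr = mean_time_to_remediate(incidents)
```

A falling MTTR over successive reporting periods is the evidence that the improved directive controls are actually working.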

The Ultimate Standard: Due Care vs. Due Diligence

As security leaders, you will be judged by legal and regulatory bodies on two distinct concepts:

  1. Due Care: Doing the right thing (e.g., establishing a comprehensive patch management policy).

  2. Due Diligence: The continuous verification and proof that Due Care is actually happening (e.g., regularly reviewing audit logs to prove the patches were successfully applied).

You cannot have one without the other. Engineering the system is Due Care; operating it securely with continuous monitoring is Due Diligence.