This document provides a comprehensive security threat model for the Decentralized Identifier (DID) ecosystem. It identifies key threats and provides a framework for implementers to consider threats specific to their implementations.

This is a living document that will be updated as new threats are identified and mitigations are developed.

Introduction

Decentralized Identifiers (DIDs) are a new type of globally unique identifier that does not require a centralized registration authority. DIDs enable individuals and organizations to generate their own identifiers using systems they trust. These identifiers are designed to enable the controller of a DID to prove control over it through cryptographic proof and to enable resolution to a DID document containing public keys and service endpoints.

The DID ecosystem consists of DID controllers who create and manage DIDs, DID resolvers that retrieve DID documents from verifiable data registries, and verifiers who consume DID documents to verify cryptographic proofs. The process involves creation, resolution, verification, and potential deactivation of DIDs across various implementations and DID methods.

This threat model provides a framework for understanding and addressing security threats in the W3C Decentralized Identifier ecosystem. It expresses threats known to the drafters of the DID specification and provides a tool for implementers to consider threats to their own implementations. It is not exhaustive and should be extended by implementers based on their specific architectural decisions, deployment environments, and risk profiles.

Security is a continuous process. As the DID ecosystem evolves, new threats will emerge and existing mitigations may need to be updated. Implementers are encouraged to revisit this threat model regularly and to extend it as their architectures, deployment environments, and risk profiles change.

By taking a proactive approach to threat modeling and security, the DID ecosystem can provide robust, privacy-preserving, and trustworthy decentralized identity infrastructure for the future.

Frameworks Used

In evaluating threats to the DID ecosystem, we used two different analytic models: STRIDE and Adversaries.

STRIDE is a framework developed by Praerit Garg and Loren Kohnfelder of Microsoft, and described in Adam Shostack's Threat Modeling: Designing for Security (https://shostack.org/books/threat-modeling-book). It also has an excellent Wikipedia article (https://en.wikipedia.org/wiki/STRIDE_model). "STRIDE" is an acronym for six computer security threats: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.

Adversaries is a framework developed by Christopher Allen, and described in his Smart Custody book discussing strategies for securing legacy wealth in the form of cryptocurrencies. It is available at https://www.smartcustody.com/. The approach identifies 27 adversaries in seven different categories: Loss by Acts of God, Loss by Computer Error, Loss by Crime (Theft), Loss by Crime (Other Attacks), Loss by Government, Loss by Mistakes, and Privacy-related Problems. We include Allen's approach because it deals with several long-term threats that are not caused by active attackers and are essentially missed in STRIDE. For example, passive threats such as Loss by Acts of God and Loss by Computer Error can be significant when considering threats to the DID ecosystem.


Architecture

The architecture of the DID ecosystem considered by this threat model is shown below.

DID Ecosystem Architecture Diagram showing trust boundaries, processes, data stores, and data flows between DID controllers, verifiable data registries, resolvers, and verifiers

Detailed Description of Architecture Diagram

The diagram illustrates the DID ecosystem architecture using a data flow diagram with four main trust boundaries, shown as large rectangles with dashed blue borders, arranged horizontally from left to right.

Left Section - B1: DID Controller Device: This outermost boundary contains a nested trust boundary labeled "B2: DID Controller App Container" (also with a dashed blue border). Inside B2 is an orange-bordered rectangle labeled "P1: DID Controller Application". Below P1, connected by a purple dashed arrow labeled "F9", is a purple-bordered rectangle labeled "S1: Controller Local Storage". This represents where the DID controller manages identifiers and stores private keys locally.

Center-Left Section - B3: Verifiable Data Registry System: This boundary contains an orange-bordered rectangle at the top labeled "P2: Verifiable Data Registry". Below it, connected by a purple dashed arrow, is a purple-bordered rectangle labeled "S2: Registry Storage". This is the authoritative storage system for DID documents.

Center-Right Section - B4: DID Resolver System: This boundary contains an orange-bordered rectangle at the top labeled "P3: DID Resolver". Below it, connected by a purple dashed arrow labeled "F10", is a purple-bordered rectangle labeled "S3: Resolver Cache". This system resolves DIDs to DID documents.

Right Section - B5: Verifier Device: This outermost boundary contains a nested trust boundary labeled "B6: Verifier App Container". Inside B6 is an orange-bordered rectangle labeled "P4: Verifier Application". Below P4, connected by a purple dashed arrow labeled "F11", is a purple-bordered rectangle labeled "S4: Verifier Local Storage". This is where verifiers validate DIDs and cryptographic proofs.

Data Flows Between Components: The diagram shows several black solid arrows representing data flows between the main process components. From P1 to P2, a black arrow labeled "F1-F3, F7-F8" represents DID creation, registration, updates, deactivation, and key rotation operations. From P3 to P2, a black arrow labeled "F5" represents registry queries to retrieve DID documents. From P4 to P3, a black arrow labeled "F4" represents resolution requests. From P3 to P4, a black arrow labeled "F6" represents the return of resolved DID documents.

Visual Coding: Dashed blue borders represent trust boundaries (B1-B6). Orange borders represent active processes and applications (P1-P4). Purple borders represent data storage components (S1-S4). Black solid arrows represent data flows between processes (F1-F8). Purple dashed arrows represent data flows between processes and their local storage (F9-F11).

Key Security Insights: The visual structure emphasizes isolation, with each major component operating within its own trust boundary. Clear data flow paths show where data moves between components, highlighting potential attack surfaces. Each major system has its own dedicated storage, separated from the processing components. The controller and verifier sides show nested trust boundaries, indicating multiple layers of isolation between the device level and application container level.

The DID ecosystem begins with a DID Controller (E1), an entity that has the capability to make changes to a DID document. The DID Controller typically operates through a DID Controller Device (B1), such as a mobile phone, laptop, or hardware security module, which runs a DID Controller Application (P1) within its own isolated execution context (B2. DID Controller Application Container).

When a DID is created or updated, the DID Controller Application interacts with a Verifiable Data Registry (P2), which may be a blockchain, distributed ledger, database, or other system that records DIDs and DID documents. This registry operates within its own trust boundary (B3. Verifiable Data Registry System).

To resolve a DID to its associated DID document, a DID Resolver (P3) queries the appropriate Verifiable Data Registry based on the DID method. The resolver operates within its own environment (B4. DID Resolver System) and may be implemented as a universal resolver supporting multiple DID methods or as a method-specific resolver.

A Verifier (E2) is an entity that receives DIDs and needs to resolve them to obtain DID documents for verification purposes. The Verifier typically operates through a Verifier Application (P4) running on a Verifier Device (B5) within its own container (B6. Verifier Application Container).

DID documents contain Verification Methods (O1), which include public keys and other mechanisms that can be used to authenticate or authorize interactions with the DID subject. They may also contain Service Endpoints (O2) that enable trusted interactions with the DID subject.
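For illustration, a minimal DID document containing one verification method (O1) and one service endpoint (O2) might look like the following sketch. All values are hypothetical; the field names follow common DID Core conventions:

```python
# A minimal, hypothetical DID document with one verification method (O1)
# and one service endpoint (O2). Field names follow DID Core conventions;
# the identifier and key values are illustrative only.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z6Mk...",  # elided example key, not real material
    }],
    "service": [{
        "id": "did:example:123456789abcdefghi#messaging",
        "type": "MessagingService",
        "serviceEndpoint": "https://messages.example.com",
    }],
}
```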

Components

The following subsections list each component in the DID ecosystem architecture by identifier, name, and description. Data flows and data objects are described in their respective sections below.

Threat Boundaries

Threat boundaries are trust boundaries: points where control of the system passes between different parties. These boundaries are critical for security analysis because threats often arise at the seams between trust zones. The six boundaries (B1-B6) identified below cover the controller device and container, the registry system, the resolver system, and the verifier device and container, each representing a distinct security perimeter in the DID ecosystem.

ID Name Description
B1 DID Controller Device Operating system and hardware running the DID Controller Application. May include hardware security modules.
B2 DID Controller Application Container Isolated execution environment for the DID Controller Application, managed by the device OS.
B3 Verifiable Data Registry System The underlying infrastructure hosting the verifiable data registry (blockchain, ledger, database).
B4 DID Resolver System Infrastructure hosting DID resolver services, which may be operated by third parties.
B5 Verifier Device Operating system and hardware running the Verifier Application.
B6 Verifier Application Container Isolated execution environment for the Verifier Application.

External Entities

External entities represent the human or organizational actors in the DID ecosystem who perform actions and make decisions. These entities (E1-E3) are significant because they represent the trust anchors and accountability points in the system. The DID Controller manages identifiers and has authority to make changes, the Verifier consumes DIDs to validate cryptographic proofs, and the DID Subject is the entity being identified (which may or may not be the same as the DID Controller).

ID Name Description
E1 DID Controller An entity that has the capability to make changes to a DID document.
E2 Verifier An entity that receives DIDs and resolves them to verify associated cryptographic proofs.
E3 DID Subject The entity identified by the DID (may be the same as the DID Controller).

Processes

Processes are the active computational components that perform operations in the DID ecosystem. These processes (P1-P4) are significant because they represent the primary attack surfaces for technical exploits and embody the core functionality of the system. The DID Controller Application creates and manages DIDs, the Verifiable Data Registry stores them, the DID Resolver retrieves them, and the Verifier Application validates them. Each process is a potential target for attacks seeking to compromise the integrity, availability, or confidentiality of DID operations.

ID Name Description
P1 DID Controller Application Application that creates, manages, and updates DIDs on behalf of the DID Controller.
P2 Verifiable Data Registry System that records DIDs and DID documents (blockchain, ledger, database, etc.).
P3 DID Resolver Service that takes a DID as input and produces a conforming DID document as output.
P4 Verifier Application Application that resolves DIDs and verifies cryptographic proofs associated with DID documents.

Data Flows

Data flows represent the movement of information between components in the DID ecosystem. These flows (F1-F11) are significant because they represent potential interception points where attackers could eavesdrop, tamper with, or manipulate data in transit. Understanding these flows is crucial for analyzing where spoofing, tampering, or information disclosure could occur during DID lifecycle operations including creation, registration, updates, resolution, deactivation, key rotation, and cache management across all system components.

ID Name Description
F1 DID Creation DID Controller initiates creation of a new DID through the Controller Application.
F2 DID Registration Controller Application registers the DID and DID document with the Verifiable Data Registry.
F3 DID Update Controller Application updates an existing DID document in the Verifiable Data Registry.
F4 DID Resolution Request Verifier Application requests DID document from DID Resolver.
F5 Registry Query DID Resolver queries the Verifiable Data Registry for DID document.
F6 DID Document Response DID Resolver returns DID document to Verifier Application.
F7 DID Deactivation Controller Application deactivates a DID in the Verifiable Data Registry.
F8 Key Rotation Controller Application updates verification methods in the DID document.
F9 Controller Cache Management Controller Application reads and writes to local cache.
F10 Resolver Cache Management DID Resolver reads and writes to local cache.
F11 Verifier Cache Management Verifier Application reads and writes to local cache.

Data Stores

Data stores represent persistent or temporary storage locations for DID-related data within the system. These stores (S1-S4) are significant because they are high-value targets for attackers seeking to compromise private keys, tamper with cached data, or exfiltrate sensitive information. The security of these stores is critical: controller local storage holds private keys, registry storage maintains the authoritative record of DID documents, and caches at the resolver and verifier layers must maintain data integrity while providing performance benefits.

ID Name Description
S1 Controller Local Storage Local storage for DID Controller Application including private keys and cached data.
S2 Registry Storage Persistent storage within the Verifiable Data Registry for DID documents.
S3 Resolver Cache Temporary cache of resolved DID documents in the DID Resolver.
S4 Verifier Local Storage Local storage for Verifier Application including cached DID documents.

Data Objects

Data objects are the key data structures that flow through and are operated upon by the DID ecosystem. These objects (O1-O6) are significant because they represent the actual cryptographic materials, identifiers, and proofs that must be protected throughout the system. Their integrity is fundamental to the security of the entire DID ecosystem: verification methods and private keys enable authentication, DIDs and DID documents establish identity, service endpoints enable interactions, and cryptographic proofs provide non-repudiable evidence of authorization. Any compromise of these objects directly undermines the security guarantees of the system.

ID Name Description
O1 Verification Method Public key or other cryptographic material used to authenticate or authorize interactions with the DID subject.
O2 Service Endpoint Network address where services related to the DID subject can be accessed.
O3 DID Document Set of data describing the DID subject, including verification methods and service endpoints.
O4 Private Key Cryptographic private key corresponding to a verification method, controlled by the DID Controller.
O5 DID Decentralized identifier conforming to the DID syntax specification.
O6 Cryptographic Proof Digital signature or other cryptographic proof created using a private key associated with a DID.
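The syntax referenced by O5 can be illustrated with a small parser. This is a simplified sketch of the DID grammar (did:<method-name>:<method-specific-id>), not a complete validator; `parse_did` and its regular expression are illustrative, not normative:

```python
import re

# Simplified sketch of the DID syntax: did:<method-name>:<method-specific-id>.
# The real ABNF in the DID specification is stricter; this regex is illustrative.
DID_PATTERN = re.compile(r"^did:([a-z0-9]+):([A-Za-z0-9.\-_:%]+)$")

def parse_did(did: str) -> dict:
    """Split a DID string into its method name and method-specific identifier."""
    match = DID_PATTERN.match(did)
    if not match:
        raise ValueError(f"not a valid DID: {did!r}")
    return {"method": match.group(1), "id": match.group(2)}
```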

Key Architectural Considerations

The threat boundaries mark areas of concern that arise because different parties may control different aspects of the system. We divide the trust regions into the areas of concern known to affect the DID specification itself, while recognizing that further analysis of the flows within each trust zone would be valuable.

This is most relevant in several places:

  1. The DID Controller Device is expected to securely store private keys and execute cryptographic operations. The isolation of the DID Controller Application from other applications on the same device is critical for security.
  2. The Verifiable Data Registry MUST provide integrity guarantees for stored DID documents. Different DID methods make different trust assumptions about their underlying registries, from fully decentralized blockchains to centralized databases.
  3. DID Resolvers act as intermediaries and MUST faithfully retrieve and return DID documents without modification. Verifiers may choose to trust specific resolvers or operate their own resolver infrastructure.
  4. Each component is expected to maintain its own local data cache for performance and availability. The data flows for each cache are identified because they create threats related to data freshness, integrity, and confidentiality.
  5. Private key material MUST never leave the DID Controller's trusted execution environment except through secure key backup or recovery mechanisms explicitly controlled by the DID Controller.

Threat Model

This section covers Security Considerations by way of threat analysis. The working group created the following threat model to highlight the key threats in the DID ecosystem and to index the responses or mitigations that address each threat. Some identified threats have no direct response in the core specification and instead rely on other systems for mitigation; in these cases, the external system is listed to record what the DID ecosystem relies on.

This threat model expresses threats known to the drafters of the DID specification and provides a framework for implementers to consider threats to their own implementations. As such, it is first a way to understand the known threats inherent in the specification and, second, a tool for implementers to extend the threat analysis to the design decisions they make when implementing it.

Implementers SHOULD extend this threat model with considerations for their own particular decisions as a matter of internal documentation.

The following threats and responses were considered in the analysis of the DID ecosystem, ordered from most critical to least critical:

Threat T1: Private Key Compromise

The private key associated with a DID verification method is stolen, exposed, or otherwise compromised, allowing an attacker to impersonate the DID Controller and make unauthorized updates to the DID document or sign fraudulent proofs.

Response R1. Key Protection and Rotation [Mitigate]

Private keys MUST be stored in secure storage provided by the DID Controller Device, such as hardware security modules, secure enclaves, or encrypted key stores. DID Controllers SHOULD implement key rotation capabilities, allowing compromised keys to be replaced without losing control of the DID. DID documents SHOULD support multiple verification methods to enable key rotation and recovery mechanisms. Controllers SHOULD monitor for unauthorized use of their DIDs and be prepared to rotate keys or deactivate DIDs if compromise is detected.

Affected Components: O4. Private Key, S1. Controller Local Storage, B1. DID Controller Device, P1. DID Controller Application

Analysis Framework: STRIDE (Spoofing, Elevation of Privilege), Adversaries (Loss by Crime - Theft)

Threat T2: DID Document Tampering

An attacker modifies a DID document in transit or in storage to redirect verification methods or service endpoints to malicious resources controlled by the attacker.

Response R2. Cryptographic Integrity [Mitigate]

DID documents SHOULD be protected by the integrity guarantees of the underlying Verifiable Data Registry. For registry types that do not provide inherent integrity protection, DID documents SHOULD be signed by the DID Controller. Resolvers and Verifiers MUST validate the integrity of DID documents against the source registry. Any modification to a properly registered DID document will fail integrity checks and MUST be rejected by verifiers.

Affected Components: P2. Verifiable Data Registry, O3. DID Document, F5. Registry Query

Analysis Framework: STRIDE (Tampering)
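The integrity check in R2 can be sketched as follows. The canonicalization used here (sorted-key JSON) is a stand-in for a real canonicalization scheme such as JCS, and `document_digest` is a hypothetical helper rather than part of the specification:

```python
import hashlib
import json

def document_digest(doc: dict) -> str:
    """Digest over a canonical serialization of a DID document.
    Sorted-key JSON is a stand-in for a real canonicalization scheme."""
    canonical = json.dumps(doc, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def check_integrity(doc: dict, anchored_digest: str) -> bool:
    """Compare a resolved document against a digest anchored in the registry.
    Any tampering changes the digest and fails the check."""
    return document_digest(doc) == anchored_digest
```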

Threat T3: Resolver Spoofing

A malicious or compromised DID resolver returns falsified DID documents that do not match what is recorded in the Verifiable Data Registry, potentially redirecting verifiers to attacker-controlled resources.

Response R3. Direct Registry Verification [Mitigate]

Verifiers with high security requirements SHOULD operate their own DID resolvers or directly query the Verifiable Data Registry to avoid dependency on third-party resolvers. When using third-party resolvers, verifiers SHOULD implement resolver redundancy by querying multiple independent resolvers and comparing results. DID documents resolved through untrusted resolvers SHOULD be validated against the source registry when possible. Resolvers SHOULD provide cryptographic proof of the DID document's origin from the registry.

Affected Components: P3. DID Resolver, F4. DID Resolution Request, F5. Registry Query, F6. DID Document Response

Analysis Framework: STRIDE (Spoofing, Tampering, Information Disclosure)
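The resolver-redundancy recommendation in R3 can be sketched as follows, with each resolver modeled as a callable; `resolve_with_redundancy` is a hypothetical helper that treats any disagreement as a possible spoofed response:

```python
def resolve_with_redundancy(did: str, resolvers) -> dict:
    """Query several independent resolvers and accept the result only if
    all of them agree; disagreement is treated as possible spoofing.
    Each resolver is modeled as a callable taking a DID and returning a
    DID document (dict)."""
    results = [resolve(did) for resolve in resolvers]
    first = results[0]
    if any(result != first for result in results[1:]):
        raise RuntimeError(f"resolvers disagree for {did}; refusing to use result")
    return first
```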

Threat T4: Verifiable Data Registry Compromise

The underlying registry (blockchain, distributed ledger, database) is compromised through 51% attack, consensus manipulation, database breach, or other means, affecting the integrity and availability of all DIDs recorded in that registry.

Response R4. Registry Selection and Redundancy [Accept and Mitigate]

This threat is largely outside the scope of the DID specification and depends on the security properties of the chosen Verifiable Data Registry. DID Controllers SHOULD select DID methods backed by registries with appropriate security guarantees for their use case. For high-value identifiers, controllers MAY register the same DID or equivalent DIDs across multiple registries to provide redundancy. Critical applications SHOULD monitor registry health and have contingency plans for registry compromise, including migration paths to alternative registries. The DID specification accepts that registry-level compromise is possible and relies on the registry's own security mechanisms for protection.

Affected Components: P2. Verifiable Data Registry, B3. Verifiable Data Registry System, S2. Registry Storage

Analysis Framework: STRIDE (Tampering, Denial of Service), Adversaries (Loss by Crime - Other Attacks)

Threat T5: DID Recovery Process Exploitation

Adversaries exploit DID recovery mechanisms (social recovery, key recovery services, backup systems) to seize control of DIDs from legitimate controllers. Recovery processes designed to help users regain access become attack vectors when improperly secured or when recovery credentials are compromised.

Response R5. Secure Recovery Mechanisms [Mitigate]

DID recovery mechanisms MUST implement strong authentication requirements that are distinct from primary authentication methods. Social recovery systems SHOULD require multiple independent parties to prevent single points of failure. Recovery processes SHOULD include time delays and notification mechanisms to alert controllers of recovery attempts. Recovery credentials SHOULD be stored separately from primary keys and protected with equivalent or stronger security measures. Controllers SHOULD regularly audit and update recovery configurations. DID method specifications SHOULD define secure recovery procedures specific to their architecture.

Affected Components: P1. DID Controller Application, S1. Controller Local Storage, O4. Private Key, F3. DID Update

Analysis Framework: STRIDE (Spoofing, Elevation of Privilege), Adversaries (Loss by Crime - Theft)
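The time-delay and notification pattern in R5 can be sketched as follows. `DelayedRecovery` is a hypothetical class; a real implementation would persist pending requests and authenticate cancellations:

```python
import time

class DelayedRecovery:
    """Sketch of R5's time-delay plus notification pattern: a recovery
    attempt completes only after a waiting period, during which the
    controller is notified and may cancel it."""

    def __init__(self, delay_seconds: float, notify):
        self.delay = delay_seconds
        self.notify = notify   # callback that alerts the controller
        self.pending = {}      # did -> {"at": requested_time, "cancelled": bool}

    def request(self, did: str, now=None):
        now = time.monotonic() if now is None else now
        self.pending[did] = {"at": now, "cancelled": False}
        self.notify(did)       # the controller learns of the attempt immediately

    def cancel(self, did: str):
        if did in self.pending:
            self.pending[did]["cancelled"] = True

    def complete(self, did: str, now=None) -> bool:
        req = self.pending.get(did)
        if req is None or req["cancelled"]:
            return False
        now = time.monotonic() if now is None else now
        return now - req["at"] >= self.delay
```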

Threat T6: Cache Poisoning and Freshness Attacks

Attackers poison DID document caches to serve stale or malicious documents, especially after key rotation or deactivation events. Cached documents may contain compromised keys or outdated service endpoints, allowing attackers to impersonate controllers or redirect traffic even after legitimate updates.

Response R6. Cache Validation and Freshness Controls [Mitigate]

Caches MUST implement time-to-live (TTL) mechanisms appropriate to the security requirements of the application. Resolvers and verifiers SHOULD validate cached documents against the authoritative registry for high-security operations. Caches SHOULD implement cache invalidation mechanisms that respond to update notifications. When key rotation or deactivation is detected, applications MUST flush relevant cache entries immediately. Verifiers SHOULD compare cache timestamps against known update events. DID documents SHOULD include metadata indicating last modification time to facilitate freshness validation.

Affected Components: S3. Resolver Cache, S4. Verifier Local Storage, F10. Resolver Cache Management, F11. Verifier Cache Management

Analysis Framework: STRIDE (Tampering, Information Disclosure), Adversaries (Loss by Computer Error)
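The TTL and invalidation behavior in R6 can be sketched as a small cache. `DidDocumentCache` is a hypothetical class; the explicit `now` parameter exists only to make the sketch testable:

```python
import time

class DidDocumentCache:
    """TTL cache for resolved DID documents (R6 sketch). Entries expire
    after `ttl` seconds and can be flushed explicitly when a key-rotation
    or deactivation event is detected."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._entries = {}  # did -> (document, stored_at)

    def put(self, did: str, doc: dict, now=None):
        self._entries[did] = (doc, time.monotonic() if now is None else now)

    def get(self, did: str, now=None):
        entry = self._entries.get(did)
        if entry is None:
            return None
        doc, stored_at = entry
        current = time.monotonic() if now is None else now
        if current - stored_at > self.ttl:
            del self._entries[did]  # stale: force re-resolution
            return None
        return doc

    def invalidate(self, did: str):
        """Flush immediately, e.g. on a key-rotation notification."""
        self._entries.pop(did, None)
```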

Threat T7: Unauthorized DID Document Change Detection Failure

The lack of standardized notification mechanisms means controllers may not learn when their DID documents are modified, enabling undetected unauthorized modifications. Attackers who compromise a controller's keys can make changes that go unnoticed for extended periods.

Response R7. Change Monitoring and Notification [Mitigate]

Controllers SHOULD implement automated monitoring of their DID documents for unexpected changes. Applications SHOULD provide notification services that alert controllers when document modifications occur. Controllers SHOULD maintain local copies of their DID document state and regularly compare against the registry. DID method specifications SHOULD define event notification mechanisms for document updates. For high-security applications, controllers SHOULD subscribe to blockchain monitoring services or implement webhook notifications. Multi-party monitoring services MAY be used to provide independent verification of document state.

Affected Components: O3. DID Document, P2. Verifiable Data Registry, F3. DID Update, P1. DID Controller Application

Analysis Framework: STRIDE (Tampering, Repudiation), Adversaries (Loss by Crime - Other Attacks)
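The monitoring loop in R7 reduces to comparing a freshly resolved document against the controller's last-known state. The fingerprinting below (sorted-key JSON plus SHA-256) is a stand-in for a real canonicalization scheme, and both helpers are hypothetical:

```python
import hashlib
import json

def document_fingerprint(doc: dict) -> str:
    """Stable fingerprint over a sorted-key JSON serialization (sketch)."""
    canonical = json.dumps(doc, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def detect_unexpected_change(did: str, resolve, last_known: str):
    """Poll the registry and compare against the controller's last-known
    fingerprint; a mismatch should trigger an alert to the controller."""
    current = document_fingerprint(resolve(did))
    return current != last_known, current
```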

Threat T8: Verification Method Controller Delegation Attack

Controllers can set verification methods controlled by other parties without granting document update rights. This creates scenarios where unauthorized parties can authenticate on behalf of an identifier while remaining unable to modify the authoritative document, enabling selective impersonation attacks.

Response R8. Verification Method Binding Validation [Mitigate]

Verifiers SHOULD distinguish between document controllers and verification method controllers when evaluating proofs. Applications SHOULD validate that verification methods are appropriately bound to the DID subject. Controllers SHOULD clearly document the purpose and scope of each verification method. Verification methods controlled by external parties SHOULD be explicitly marked and their authorization scope limited. Verifiers MAY require verification methods to be controlled by the same entity as the document for high-security operations. DID documents SHOULD use the controller property to make delegation relationships explicit.

Affected Components: O1. Verification Method, O3. DID Document, O6. Cryptographic Proof, P4. Verifier Application

Analysis Framework: STRIDE (Spoofing, Elevation of Privilege)
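The controller-distinction check in R8 can be sketched as follows; `externally_controlled_methods` is a hypothetical helper that flags verification methods whose controller property differs from the document's subject:

```python
def externally_controlled_methods(doc: dict) -> list:
    """List verification methods whose controller is not the document's
    subject (R8 sketch), so verifiers can apply stricter policy to them.
    A method without an explicit controller is treated as subject-controlled."""
    return [
        method["id"]
        for method in doc.get("verificationMethod", [])
        if method.get("controller", doc["id"]) != doc["id"]
    ]
```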

Threat T9: Expired/Revoked Key Verification Bypass

Systems may fail to check expiration dates or revocation status of verification methods, allowing expired or revoked keys to remain functional. This undermines the security model by permitting authentication with compromised or outdated cryptographic material.

Response R9. Mandatory Validity Checking [Mitigate]

Verifiers MUST check expiration timestamps on verification methods before accepting cryptographic proofs. Systems MUST NOT accept proofs associated with expired or revoked verification methods. DID documents SHOULD include explicit expiration dates for all verification methods. Revocation mechanisms MUST be checked before trusting verification methods. Applications SHOULD reject proofs if they cannot verify the validity status of the associated verification method. DID method specifications SHOULD define how expiration and revocation information is expressed and validated.

Affected Components: O1. Verification Method, O6. Cryptographic Proof, P4. Verifier Application, F4. DID Resolution Request

Analysis Framework: STRIDE (Spoofing, Elevation of Privilege)
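The validity checks in R9 can be sketched as follows. Note that DID Core does not define an `expires` property on verification methods; it is used here as a hypothetical field standing in for whatever expiration mechanism a DID method defines:

```python
from datetime import datetime, timezone

def method_is_valid(method: dict, revoked_ids: set, now=None) -> bool:
    """Reject expired or revoked verification methods (R9 sketch).
    `expires` is a hypothetical ISO-8601 field; DID Core leaves its
    expression to individual DID methods."""
    if method["id"] in revoked_ids:
        return False
    expires = method.get("expires")
    if expires is not None:
        now = now or datetime.now(timezone.utc)
        if datetime.fromisoformat(expires) <= now:
            return False
    return True
```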

Threat T10: DID Method Manipulation

An attacker exploits vulnerabilities in a specific DID method implementation to create fraudulent DIDs, corrupt the resolution process, or gain unauthorized control over existing DIDs.

Response R10. Method-Specific Security Requirements [Mitigate and Transfer]

Each DID method specification MUST define security considerations specific to that method's architecture. Implementers MUST follow the security requirements defined in the DID method specification they choose to implement. Verifiers SHOULD maintain a list of trusted DID methods and MAY reject DIDs using methods that do not meet their security requirements. This specification establishes baseline security requirements, but method-specific threats are the responsibility of individual DID method specifications.

Affected Components: P2. Verifiable Data Registry, O5. DID, F2. DID Registration

Analysis Framework: STRIDE (Spoofing, Tampering, Elevation of Privilege)

Threat T11: DID Deactivation/Deletion

An unauthorized party deactivates or deletes a DID, causing denial of service for legitimate users who need to resolve the DID or verify proofs associated with it.

Response R11. Authorization and Audit Trail [Mitigate]

DID deactivation operations MUST require cryptographic proof of authorization from the DID Controller. The Verifiable Data Registry SHOULD maintain an audit trail of all DID lifecycle operations including deactivation. DID method specifications SHOULD define whether deactivated DIDs remain resolvable (returning a deactivated status) or become completely unresolvable. For critical DIDs, implementers MAY use multi-signature or threshold authorization requirements for deactivation to prevent unauthorized deactivation even if a single key is compromised.

Affected Components: F7. DID Deactivation, P2. Verifiable Data Registry, O4. Private Key

Analysis Framework: STRIDE (Denial of Service, Elevation of Privilege)
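The threshold-authorization option in R11 reduces to an m-of-n check. This sketch counts distinct approving keys; a real implementation would verify a cryptographic signature from each approver:

```python
def authorize_deactivation(approvals: set, authorized_keys: set, threshold: int) -> bool:
    """m-of-n check (R11 sketch): deactivation proceeds only when at
    least `threshold` distinct authorized keys have approved, so no
    single compromised key can deactivate the DID. Approvals from keys
    outside the authorized set are ignored."""
    return len(approvals & authorized_keys) >= threshold
```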

Threat T12: Time-of-Check to Time-of-Use Race Conditions

Between resolving a DID document and using its verification methods, the document could be updated maliciously. This creates race conditions where stale keys or service endpoints are used for verification, potentially allowing attackers to exploit the timing window.

Response R12. Atomic Resolution and Validation [Mitigate]

Applications SHOULD minimize the time window between DID document resolution and verification method use. Verifiers SHOULD include document resolution timestamps in verification decisions. For high-security operations, verifiers SHOULD re-resolve DIDs immediately before critical operations. Applications SHOULD validate that DID documents have not been updated between resolution and use. Cryptographic proofs SHOULD include nonces or timestamps that bind them to specific document states. DID method specifications SHOULD provide mechanisms to detect concurrent modifications during verification flows.

Affected Components: P3. DID Resolver, P4. Verifier Application, F4. DID Resolution Request, F6. DID Document Response

Analysis Framework: STRIDE (Tampering, Elevation of Privilege)
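The check-then-recheck discipline in R12 can be sketched as follows; `use_with_freshness_check` is a hypothetical wrapper that discards any result produced while the document was changing:

```python
import hashlib
import json

def _digest(doc: dict) -> str:
    """Sorted-key JSON digest; a stand-in for real canonicalization."""
    canonical = json.dumps(doc, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def use_with_freshness_check(did: str, resolve, operation):
    """R12 sketch: resolve, perform the operation, then re-resolve and
    confirm the document did not change in the window between check and
    use; if it did, the result must be discarded."""
    before = resolve(did)
    result = operation(before)
    after = resolve(did)
    if _digest(before) != _digest(after):
        raise RuntimeError("DID document changed during operation; discarding result")
    return result
```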

Threat T13: Key Rotation Failure

Improper key rotation procedures leave old compromised keys valid or break the continuity of the DID, causing either security vulnerabilities or loss of access to the identifier.

Response R13. Structured Key Management [Mitigate]

DID documents SHOULD support multiple concurrent verification methods to enable smooth key rotation without service interruption. When rotating keys, controllers SHOULD follow a process of: (1) adding the new verification method to the DID document, (2) updating all systems to use the new key, (3) monitoring for continued use of the old key, and (4) removing the old verification method only after confirming all systems have migrated. DID method specifications SHOULD define best practices for key rotation specific to their registry architecture. Controllers SHOULD maintain secure backups of all key material and recovery mechanisms to prevent permanent loss of DID control during rotation failures.
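
The staged rotation above can be sketched as follows. The data model is hypothetical, and `old_key_seen_recently` stands in for the usage monitoring of step (3); step (2) happens out of band when relying systems are repointed at the new key.

```python
def rotate_key(did_document, old_key_id, new_method, old_key_seen_recently):
    """Staged rotation: add the new method, then remove the old one only
    after monitoring confirms it is no longer in use."""
    # Step 1: add the new verification method alongside the old one.
    did_document["verificationMethod"].append(new_method)
    # Step 2 (out of band): point all relying systems at new_method.
    # Steps 3-4: remove the old method only once monitoring shows it is unused.
    if not old_key_seen_recently(old_key_id):
        did_document["verificationMethod"] = [
            m for m in did_document["verificationMethod"] if m["id"] != old_key_id
        ]
    return did_document

doc = {"verificationMethod": [{"id": "did:example:1#key-1"}]}
doc = rotate_key(doc, "did:example:1#key-1", {"id": "did:example:1#key-2"},
                 old_key_seen_recently=lambda _: False)
assert [m["id"] for m in doc["verificationMethod"]] == ["did:example:1#key-2"]
```

Keeping both methods valid during the overlap window is what prevents the service interruption the response warns about; premature removal is the common rotation failure.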

Affected Components: F8. Key Rotation, O1. Verification Method, O4. Private Key, O3. DID Document

Analysis Framework: STRIDE (Denial of Service), Adversaries (Loss by Mistakes)

Threat T14: Equivalence/Canonical ID Confusion Attack

Multiple equivalent identifiers (alsoKnownAs, canonical IDs) for the same subject can be exploited to bypass security checks or create confusion attacks where different equivalent forms are treated inconsistently across systems.

Response R14. Equivalence Validation and Normalization [Mitigate]

Verifiers SHOULD normalize DIDs to canonical form before making security decisions. Applications SHOULD validate equivalence relationships cryptographically before accepting them. Systems SHOULD maintain consistent treatment of equivalent identifiers across all security boundaries. When processing alsoKnownAs properties, verifiers SHOULD verify that the controller of each equivalent identifier approves the equivalence relationship. Security policies SHOULD be applied uniformly to all equivalent forms of an identifier. DID method specifications SHOULD define canonical identifier forms and equivalence validation procedures.
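
A minimal defensive canonicalization is to lowercase the scheme and method name before comparison while leaving the method-specific identifier untouched, since its case sensitivity is method-defined. This sketch handles bare DIDs only; DID URLs with paths, queries, or fragments are out of scope here.

```python
def normalize_did(did: str) -> str:
    """Lowercase the scheme and method name; preserve the
    method-specific identifier exactly as given."""
    scheme, method, specific = did.split(":", 2)
    if scheme.lower() != "did":
        raise ValueError("not a DID")
    return f"did:{method.lower()}:{specific}"

# Equivalent forms compare equal after normalization...
assert normalize_did("did:example:ABC123") == normalize_did("DID:EXAMPLE:ABC123")
# ...but the method-specific id is never case-folded.
assert normalize_did("did:example:ABC123") != "did:example:abc123"
```

Applying security policies to the normalized form, rather than to whichever raw string arrived, closes the inconsistency the confusion attack relies on.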

Affected Components: O5. DID, O3. DID Document, P4. Verifier Application

Analysis Framework: STRIDE (Spoofing, Elevation of Privilege)

Threat T15: Service Endpoint Exploitation

Malicious or compromised service endpoints listed in DID documents are used for phishing, data exfiltration, malware distribution, or other attacks against parties who trust and interact with those endpoints.

Response R15. Endpoint Validation and Least Privilege [Mitigate]

Verifiers SHOULD NOT automatically trust service endpoints listed in DID documents. Applications interacting with service endpoints SHOULD validate TLS certificates, implement timeout and rate limiting, and treat responses as potentially malicious input requiring validation. Service endpoints SHOULD be used with the principle of least privilege, only accessing the minimum necessary functionality. DID Controllers SHOULD regularly audit service endpoints in their DID documents and remove any that are no longer needed or trusted. When possible, service endpoint URLs SHOULD use the DID URL syntax to tie the endpoint cryptographically to the DID itself. Verifiers MAY implement allowlists or denylists of service endpoint domains based on reputation systems.
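
Before contacting an endpoint, an application can apply a simple policy gate: require HTTPS and check the host against a local allowlist. The allowlist contents are an implementation-specific policy choice, not part of the DID specification; this sketch is the gate only, with timeouts and response validation still required at the transport layer.

```python
from urllib.parse import urlparse

def endpoint_allowed(url, allowlist):
    """Reject non-HTTPS endpoints and hosts outside a local allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    return parsed.hostname in allowlist

assert endpoint_allowed("https://agent.example.com/inbox", {"agent.example.com"})
assert not endpoint_allowed("http://agent.example.com/inbox", {"agent.example.com"})
assert not endpoint_allowed("https://evil.example.net/", {"agent.example.com"})
```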

Affected Components: O2. Service Endpoint, O3. DID Document, P4. Verifier Application

Analysis Framework: STRIDE (Spoofing, Information Disclosure, Elevation of Privilege), Adversaries (Loss by Crime - Other Attacks)

Threat T16: Amplification/Resource Exhaustion Attacks

Crafted DID resolution requests with small, valid inputs trigger disproportionately costly processing, exhausting resolver or registry resources and producing a denial of service. Malicious DIDs could reference complex resolution chains or trigger expensive cryptographic operations.

Response R16. Resource Limiting and Rate Controls [Mitigate]

Resolvers SHOULD implement resource limits on resolution operations, including maximum resolution chain depth, timeout limits, and computational budgets. Systems SHOULD implement rate limiting on resolution requests per client. DID method specifications SHOULD define maximum complexity bounds for resolution operations. Resolvers SHOULD detect and reject resolution loops or excessively deep delegation chains. Applications SHOULD monitor resource consumption during resolution and abort expensive operations. Registries SHOULD validate DID documents for complexity before accepting registration.
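
The depth, timeout, and loop limits above can be combined into a single budget object that a resolver consults before each hop in a resolution chain. The parameter values here are illustrative, not normative.

```python
import time

class ResolutionBudget:
    """Track depth, visited DIDs, and wall-clock budget across one
    resolution chain; resolvers call check() before every hop."""

    def __init__(self, max_depth=5, max_seconds=2.0):
        self.max_depth = max_depth
        self.deadline = time.monotonic() + max_seconds
        self.visited = set()

    def check(self, did, depth):
        if depth > self.max_depth:
            raise RuntimeError("resolution chain too deep")
        if time.monotonic() > self.deadline:
            raise RuntimeError("resolution time budget exhausted")
        if did in self.visited:
            raise RuntimeError(f"resolution loop detected at {did}")
        self.visited.add(did)
```

Tracking visited DIDs per chain (rather than globally) is what detects loops without penalizing legitimate repeated resolutions across independent requests.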

Affected Components: P3. DID Resolver, P2. Verifiable Data Registry, F4. DID Resolution Request, F5. Registry Query

Analysis Framework: STRIDE (Denial of Service), Adversaries (Loss by Crime - Other Attacks)

Threat T17: Multi-Controller Authorization Confusion

With multiple controllers for a single identifier, authorization decisions become ambiguous. Any controller can modify the document, potentially creating conflicting or malicious changes without consensus mechanisms, leading to authorization confusion and security policy bypass.

Response R17. Multi-Controller Governance [Mitigate]

DID documents with multiple controllers SHOULD clearly specify governance rules for updates. Implementations SHOULD consider requiring multi-signature or threshold authorization for critical operations when multiple controllers exist. Controllers SHOULD establish off-chain agreements about update authorization before deploying multi-controller DIDs. Verifiers SHOULD be aware that any listed controller can modify the document. Applications SHOULD monitor for conflicting updates from different controllers. DID method specifications SHOULD define how multi-controller authorization is handled and validated.
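
The conflicting-update monitoring described above can be sketched as a scan over an audit log of update records. The log format ({controller, property} entries) is hypothetical; real systems would derive it from registry history.

```python
def detect_conflicting_updates(updates):
    """Flag document properties modified by more than one controller."""
    first_writer = {}
    conflicts = set()
    for u in updates:
        prev = first_writer.setdefault(u["property"], u["controller"])
        if prev != u["controller"]:
            conflicts.add(u["property"])
    return conflicts

log = [
    {"controller": "did:example:alice", "property": "service"},
    {"controller": "did:example:bob", "property": "service"},
    {"controller": "did:example:alice", "property": "verificationMethod"},
]
assert detect_conflicting_updates(log) == {"service"}
```

Flagged properties are a signal for human review against the off-chain governance agreement, not an automatic rejection; the registry itself typically cannot tell a legitimate handoff from a malicious overwrite.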

Affected Components: O3. DID Document, F3. DID Update, P2. Verifiable Data Registry, E1. DID Controller

Analysis Framework: STRIDE (Tampering, Elevation of Privilege, Repudiation)

Threat T18: Immutable Data Exposure

DID documents on immutable registries (blockchains) cannot be corrected or removed if they contain errors or sensitive information. This creates permanent exposure risks, privacy violations, and regulatory compliance issues (GDPR right to erasure).

Response R18. Privacy-Preserving Design [Accept and Mitigate]

Controllers SHOULD carefully review DID documents before publishing to immutable registries. DID documents SHOULD NOT contain personally identifiable information or sensitive data. Applications SHOULD use content-addressed references rather than embedding content directly. Controllers SHOULD use deactivation rather than deletion for immutable registries. For privacy-sensitive applications, controllers SHOULD select DID methods that support document modification or deletion. Implementers SHOULD educate users about the permanence of data on immutable registries. This threat is partially accepted as inherent to certain registry types.
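
The content-addressed reference pattern above keeps only a hash on the immutable registry while the content itself lives in mutable storage that can later be deleted. The URL layout and property names in this sketch are illustrative, not drawn from the DID specification.

```python
import hashlib

def content_reference(data: bytes) -> dict:
    """Build a hash-based reference suitable for publication on an
    immutable registry; the referenced content stays deletable."""
    digest = hashlib.sha256(data).hexdigest()
    return {
        "contentUrl": "https://storage.example/" + digest,   # mutable storage
        "contentHash": "sha256:" + digest,                   # integrity anchor
    }
```

The hash gives verifiers integrity over the retrieved content, while deletion at the storage layer still achieves practical erasure; note that hashes of low-entropy personal data can themselves leak information and should be avoided.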

Affected Components: O3. DID Document, P2. Verifiable Data Registry, S2. Registry Storage

Analysis Framework: STRIDE (Information Disclosure), Adversaries (Privacy-related Problems, Loss by Mistakes)

Threat T19: DID URL Fragment Exploitation

DID URLs with fragments reference specific resources within documents. Malicious fragments could redirect to attacker-controlled verification methods or service endpoints within otherwise legitimate documents, or exploit parsing vulnerabilities in fragment handling.

Response R19. Fragment Validation and Sanitization [Mitigate]

Applications SHOULD validate that DID URL fragments reference legitimate resources within the resolved document. Verifiers SHOULD sanitize fragment identifiers to prevent injection attacks. Applications SHOULD validate that referenced resources match expected types (e.g., a verification method fragment should reference a verification method). Systems SHOULD implement strict parsing of DID URL syntax. Verifiers SHOULD reject malformed or suspicious fragments. Applications SHOULD apply security policies to fragment-referenced resources as strictly as to the full document.
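
Strict fragment dereferencing can be sketched as: parse the fragment, reject suspicious characters, then require that the referenced resource exists inside the resolved document and carries the expected type. This sketch searches only verificationMethod and assumes entries carry a `type` property; a full implementation would also cover services and the other verification relationships.

```python
def dereference_fragment(did_document, did_url, expected_type):
    """Look up a fragment strictly within the resolved document and
    require the referenced resource to have the expected type."""
    base, _, fragment = did_url.partition("#")
    if not fragment or any(c in fragment for c in "/?#<>\"' "):
        raise ValueError("malformed or suspicious fragment")
    target_id = base + "#" + fragment
    for method in did_document.get("verificationMethod", []):
        if method["id"] == target_id:
            if method.get("type") != expected_type:
                raise ValueError("fragment references wrong resource type")
            return method
    raise ValueError("fragment does not reference a resource in this document")
```

Failing closed on any mismatch (missing resource, wrong type, odd characters) is the point: a fragment must never widen what the verifier would have accepted from the full document.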

Affected Components: O5. DID, O3. DID Document, P4. Verifier Application, F4. DID Resolution Request

Analysis Framework: STRIDE (Spoofing, Tampering)

Threat T20: Correlation Attack

Multiple uses of the same DID across different contexts allow tracking and profiling of the DID subject, compromising privacy even when the DID document doesn't contain directly identifying information.

Response R20. Pairwise and Limited-Use DIDs [Mitigate]

DID Controllers SHOULD create unique pairwise DIDs for each relationship or context to prevent correlation across different verifiers. For high-privacy scenarios, controllers SHOULD use single-use DIDs that are deactivated after a single interaction. DID method specifications SHOULD support efficient creation of multiple DIDs to enable these privacy practices. Applications SHOULD educate users about the privacy implications of DID reuse and provide easy mechanisms for creating context-specific DIDs.
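
Efficient pairwise DID creation is often built on deterministic per-relationship key derivation, so a wallet can regenerate every relationship's key material from one backed-up secret. This sketch derives a distinct seed per peer with HMAC-SHA-256; feeding the seed into the DID method's key generation (e.g. an Ed25519 keypair) is assumed, not shown.

```python
import hashlib
import hmac

def pairwise_seed(master_secret: bytes, peer: str) -> bytes:
    """Derive a distinct, reproducible key seed per relationship so
    each peer sees a different DID."""
    return hmac.new(master_secret, peer.encode(), hashlib.sha256).digest()

s1 = pairwise_seed(b"master", "verifier-a.example")
s2 = pairwise_seed(b"master", "verifier-b.example")
assert s1 != s2                                               # no cross-peer correlation
assert s1 == pairwise_seed(b"master", "verifier-a.example")   # deterministic recovery
```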

Affected Components: O5. DID, E1. DID Controller, E2. Verifier

Analysis Framework: STRIDE (Information Disclosure), Adversaries (Privacy-related Problems)

Threat T21: Personal Data and Service Endpoint Correlation

Service endpoints and personal data (email addresses, social media accounts, websites) publicly advertised in DID documents enable tracking and profiling across contexts. Even with pairwise DIDs, service endpoints can reveal identity and enable correlation.

Response R21. Minimal Service Endpoint Disclosure [Mitigate]

Controllers SHOULD NOT include personal data or identifying information in service endpoints unless necessary. Applications SHOULD discourage revealing social media accounts, personal websites, or email addresses in DID documents. For high-privacy scenarios, service endpoints SHOULD use anonymous or context-specific addresses. Controllers SHOULD use different service endpoints for different contexts to prevent correlation. Applications SHOULD warn users when adding potentially correlatable service endpoints. Privacy-preserving communication protocols SHOULD be preferred over direct endpoint disclosure.

Affected Components: O2. Service Endpoint, O3. DID Document, E1. DID Controller

Analysis Framework: STRIDE (Information Disclosure), Adversaries (Privacy-related Problems)

Threat T22: Group Privacy and Herd Privacy Leakage

DID method choice, registry selection, transaction patterns, and usage behaviors can reveal group membership or organizational affiliation. Even with pairwise DIDs, metadata about DID creation and usage can compromise privacy by revealing associations.

Response R22. Anonymity Set Preservation [Mitigate]

Controllers SHOULD select DID methods with large user populations to maximize anonymity sets. Applications SHOULD avoid creating unique transaction patterns that distinguish users. DID creation and update operations SHOULD be timed to avoid correlation with other activities. Controllers SHOULD use common registration patterns rather than distinctive configurations. Privacy-focused DID methods SHOULD implement transaction mixing or anonymity-enhancing technologies. Applications SHOULD educate users about metadata privacy risks.

Affected Components: O5. DID, P2. Verifiable Data Registry, F2. DID Registration, E1. DID Controller

Analysis Framework: STRIDE (Information Disclosure), Adversaries (Privacy-related Problems)

Threat T23: Unsupported DID Methods

A verifier encounters a DID using a method it does not support or recognize, creating an interoperability failure that prevents the verifier from resolving the DID or validating associated proofs.

Response R23. Method Discovery and Graceful Degradation [Accept and Mitigate]

Verifiers SHOULD clearly document which DID methods they support. Universal resolvers SHOULD support a broad range of DID methods to maximize interoperability. When encountering an unsupported DID method, applications SHOULD provide clear error messages and, where possible, offer alternative verification mechanisms. The DID specification accepts that not all verifiers will support all methods. Controllers SHOULD consider the method support landscape when selecting a DID method, choosing methods with broad adoption when interoperability is important.
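
Graceful degradation starts with failing fast on an unsupported method and returning an actionable error rather than a generic resolution failure. The supported set and error shape here are illustrative; `methodNotSupported` follows the error naming used in DID resolution work, but the exact contract is implementation-defined.

```python
SUPPORTED_METHODS = {"key", "web"}  # illustrative; document your own set

def resolve_or_explain(did: str) -> dict:
    """Return a clear, actionable error for unsupported DID methods
    instead of attempting resolution and failing opaquely."""
    method = did.split(":")[1]
    if method not in SUPPORTED_METHODS:
        return {"error": "methodNotSupported",
                "message": f"did:{method} is not supported; supported methods: "
                           + ", ".join(sorted(SUPPORTED_METHODS))}
    # Delegate to the method-specific resolver driver (omitted here).
    return {"didDocument": None}

assert resolve_or_explain("did:btcr:xyz")["error"] == "methodNotSupported"
assert "didDocument" in resolve_or_explain("did:web:example.com")
```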

Affected Components: O5. DID, P3. DID Resolver, P4. Verifier Application

Analysis Framework: STRIDE (Denial of Service), Adversaries (Loss by Computer Error - Bitrot)

Threat T24: Encrypted Data Key Management Failure

Storing encrypted data in DID documents creates permanent data loss if encryption keys are lost, or permanent exposure if keys are compromised. No lifecycle management exists for encrypted content, creating long-term key management challenges.

Response R24. Encrypted Data Lifecycle Management [Mitigate]

Controllers SHOULD avoid storing encrypted data directly in DID documents. Applications SHOULD use content-addressed references to encrypted data stored elsewhere. When encryption is necessary, controllers MUST implement robust key backup and recovery procedures. Encrypted content SHOULD support re-encryption under rotated keys. Applications SHOULD document key management responsibilities clearly. For high-value encrypted data, controllers SHOULD implement key escrow or multi-party key management. DID method specifications SHOULD provide guidance on encrypted data handling.

Affected Components: O3. DID Document, O4. Private Key, S1. Controller Local Storage

Analysis Framework: STRIDE (Information Disclosure, Denial of Service), Adversaries (Loss by Mistakes)

Threat T25: Quantum Computing Cryptographic Break

Future quantum computers could break current cryptographic algorithms used in DIDs, compromising all historical encrypted data and signatures. Harvest-now, decrypt-later attacks could compromise data retroactively when quantum computers become available.

Response R25. Post-Quantum Preparedness [Accept and Mitigate]

This is a long-term threat that is partially accepted given current technology limitations. DID method specifications SHOULD support cryptographic agility to enable future algorithm upgrades. Applications SHOULD monitor post-quantum cryptography standardization efforts. Controllers SHOULD plan for eventual migration to post-quantum algorithms. For highly sensitive long-term data, controllers MAY implement hybrid cryptographic schemes combining classical and post-quantum algorithms. The ecosystem SHOULD establish migration paths for transitioning to quantum-resistant cryptography. This threat requires ongoing monitoring of cryptographic research and standardization.
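
Cryptographic agility is often implemented as a dispatch registry keyed by algorithm identifier, so a post-quantum or hybrid suite can be registered later without changing verification call sites. The registered functions below are placeholders, not real signature checks, and "Ed25519+ML-DSA-65" is an illustrative hybrid suite identifier (ML-DSA is the NIST FIPS 204 algorithm family).

```python
VERIFIERS = {}

def register(alg):
    """Decorator: register a verification function for an algorithm id."""
    def wrap(fn):
        VERIFIERS[alg] = fn
        return fn
    return wrap

@register("Ed25519")
def _verify_ed25519(key, message, signature):
    raise NotImplementedError("call a real Ed25519 library here")

def verify(alg, key, message, signature):
    """Dispatch on the declared algorithm; unknown or retired
    algorithms fail closed."""
    if alg not in VERIFIERS:
        raise ValueError(f"unknown or retired algorithm: {alg}")
    return VERIFIERS[alg](key, message, signature)

# Later, a hybrid suite plugs in without touching any call site:
@register("Ed25519+ML-DSA-65")
def _verify_hybrid(key, message, signature):
    raise NotImplementedError("verify both component signatures")
```

Migration then reduces to registering the new suite and, when policy dictates, removing the retired identifier so old signatures fail closed rather than verifying silently.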

Affected Components: O1. Verification Method, O4. Private Key, O6. Cryptographic Proof

Analysis Framework: STRIDE (Spoofing, Information Disclosure), Adversaries (Loss by Computer Error - Technical Obsolescence)

Implementation Considerations

Implementers of DID systems should consider the following when deploying DID infrastructure:

Future Threats and Considerations

The following threats are recognized but not fully addressed in the current version of this threat model. They are included for awareness and may be expanded in future revisions:

References

DID-CORE
Decentralized Identifiers (DIDs) v1.0. Manu Sporny, Dave Longley, Markus Sabadello, Drummond Reed, Orie Steele, Christopher Allen. W3C. 19 July 2022. W3C Recommendation.
STRIDE
Threat Modeling: Designing for Security. Adam Shostack. Wiley. 2014.
SMART-CUSTODY
Smart Custody: Use of Advanced Cryptographic Tools to Improve the Care, Maintenance, Control, and Protection of Digital Assets. Christopher Allen and Shannon Appelcline. Blockchain Commons. 2019.

Acknowledgements

This threat model was developed with input from the W3C Decentralized Identifier Working Group and security experts in the decentralized identity community.