Data Privacy & Artificial Intelligence
AI Systems and the PDPO in Hong Kong: Data Protection Obligations for Artificial Intelligence

Deploying artificial intelligence systems in Hong Kong exposes businesses to obligations under the Personal Data (Privacy) Ordinance (Cap. 486), which has been Hong Kong's foundational privacy statute since its enactment in 1995. While the PDPO predates the AI era by decades, its core principles—lawful collection, purpose limitation, data security, and individual access rights—apply directly to AI systems that process personal data. This article explains how the PDPO's six Data Protection Principles apply to AI data collection, training, and automated decision-making; examines the Privacy Commissioner's emerging guidance on AI; and provides practical guidance for businesses deploying AI systems in Hong Kong.
The PDPO is Hong Kong's principal privacy law. It does not create a sector-specific regulator with direct licensing authority (unlike data protection authorities in the EU or Singapore); rather, it establishes general privacy obligations enforced by the Privacy Commissioner for Personal Data (PCPD), an independent statutory authority with powers to investigate complaints, conduct compliance reviews, and issue enforcement notices.
The PDPO applies to any person (natural person, company, or organisation) in Hong Kong that collects, holds, processes, or uses personal data. It does not matter whether the person is carrying on a "business" or is a non-profit or government entity. The PDPO covers all processing of personal data relating to living individuals, with limited exceptions for law enforcement, national security, and certain internal administrative purposes.
The PDPO does not require an organisation to register with the PCPD or to obtain approval before deploying AI systems (unlike the EU AI Act, which imposes pre-deployment conformity assessment for certain high-risk AI systems). Instead, the PDPO imposes substantive obligations that organisations must satisfy in their data processing activities. Failure to comply with the PDPO can result in: (a) formal investigation by the PCPD; (b) compliance orders; (c) compensation liability to affected individuals; and (d) in egregious cases involving unlawful disclosure of personal data (the "doxxing" provisions added in 2021), criminal liability.
Data Protection Principle 1: Collection of Personal Data. DPP1 establishes three core requirements for the collection of personal data: (a) the data must be adequate but not excessive for the purpose for which it is collected; (b) the collection must be for a lawful purpose; and (c) the data subject must be given a Personal Information Collection Statement (PICS) disclosing the purpose of collection, intended uses, and classes of persons to whom the data will be transferred.
Applied to AI systems, DPP1 creates several obligations. First, organisations collecting data to train AI models must be able to articulate the lawful purpose for which the data is collected. The purpose must not be excessively broad (e.g., "for any purpose in the future") but must be reasonably specific (e.g., "to train a customer sentiment analysis model"). Second, the collection must be by lawful means—data obtained through fraud, misrepresentation, or unauthorised access is not lawfully collected. Third, and critically, if personal data is collected for one purpose (e.g., customer service interaction), that data cannot be used for training a commercial AI model without either: (a) the data subject's separate informed consent to AI training use; or (b) a finding that AI training is directly related to the original collection purpose.
The requirement for a PICS is particularly important for AI operators. If an organisation collects personal data by scraping social media profiles, web pages, or public registries with the intent to use that data to train an AI model, the organisation must provide a PICS to each affected individual (or must have reasonably attempted to do so, or must have obtained consent through other means). For large-scale data collection (e.g., scraping millions of social media profiles), providing individual PICS statements is impractical, which creates a significant compliance challenge. Organisations often address this by seeking to rely on exceptions or by obtaining a general consent.
Data Protection Principle 2: Accuracy and Retention. DPP2 requires that personal data held by an organisation must be accurate and must not be retained longer than necessary. Applied to AI, this principle creates obligations concerning: (a) the accuracy of data used to train AI models—organisations should have processes to ensure that training data is accurate or should understand the limitations of inaccurate training data; (b) the retention of training datasets—once training is complete and the model is deployed, the original training data (which may contain personal data) should be deleted unless there is a legitimate reason to retain it; and (c) logs and audit trails—many AI systems generate logs of input and output data, which may contain personal data and should be subject to retention policies.
The "not longer than necessary" requirement is particularly important. Organisations frequently retain AI training datasets indefinitely without clear justification. Best practice is to establish a retention schedule for training data (e.g., training datasets are deleted 12 months after model training is complete) and to document the business rationale for any extended retention.
Data Protection Principle 3: Use of Personal Data. DPP3 is the "purpose limitation" principle: personal data collected for one purpose may not be used for a different purpose unless the new purpose is directly related to the original purpose or the data subject consents to the new use. This principle is frequently breached in AI contexts.
Example: An organisation collects customer service emails and chat transcripts for the stated purpose of "customer support and quality assurance." Later, the organisation decides to use those emails and transcripts to train a customer sentiment analysis model and sell sentiment reports to third parties. This new use—training a commercial AI model and monetising the outputs—is not directly related to the original "customer support" purpose. Using the data for this new purpose without seeking separate customer consent violates DPP3.
For organisations deploying AI systems, DPP3 requires careful documentation of: (a) the original stated purpose(s) for which personal data is collected; (b) the actual purposes for which the data will be processed; and (c) confirmation that those uses are lawful either because they are directly related to the collection purpose or because separate consent has been obtained. If consent is required, organisations should document the mechanism by which consent was obtained and should retain evidence of that consent.
Data Protection Principle 4: Security of Personal Data. DPP4 requires that appropriate security safeguards be implemented to protect personal data against unauthorised or accidental access, modification, or disclosure. Applied to AI systems, this means that: (a) AI training data containing personal data must be stored on secure servers with access controls; (b) the AI system's outputs must be protected if they contain personal data or could be used to infer personal data; (c) the organisation must have cybersecurity controls appropriate to the sensitivity of the data and the risk profile of the system.
The PCPD has issued guidance on data security standards, though it does not prescribe specific technical requirements (e.g., encryption algorithms, key management procedures). Rather, the PCPD's guidance emphasizes "appropriate" security, meaning security that is reasonable given the nature of the data, the storage environment, and the risk of unauthorised access. For organisations storing large volumes of personal data for AI training, this typically requires: (a) data encryption in transit and at rest; (b) access controls limiting data access to authorised personnel; (c) audit logs tracking access to the data; (d) regular penetration testing or security assessment; and (e) incident response procedures for potential data breaches.
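By way of example, the following sketch combines two of the measures above: encryption of a training dataset at rest and an audit log of access. It assumes the third-party Python cryptography package; key management, which in practice belongs in a dedicated key management service, is deliberately out of scope.

```python
# Sketch of two DPP4-style safeguards: encryption at rest plus an access
# audit log. Assumes the third-party `cryptography` package.
import logging
from cryptography.fernet import Fernet

logging.basicConfig(filename="data_access.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

key = Fernet.generate_key()  # in production, fetch from a key management service
fernet = Fernet(key)

def store_encrypted(path: str, plaintext: bytes) -> None:
    """Write the dataset to disk encrypted, and record the write in the audit log."""
    with open(path, "wb") as f:
        f.write(fernet.encrypt(plaintext))
    logging.info("WRITE %s", path)

def read_decrypted(path: str, user: str) -> bytes:
    """Record who accessed the dataset, then decrypt and return it."""
    logging.info("READ %s by %s", path, user)
    with open(path, "rb") as f:
        return fernet.decrypt(f.read())
```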
Data Protection Principle 5: Openness. DPP5 requires that organisations disclose their data handling practices in an accessible privacy policy. Applied to AI, organisations should publish clear privacy policies and data governance documents explaining: (a) what personal data they collect; (b) what purposes they collect it for; (c) how they use that data in AI systems; and (d) what individuals' rights are (access, correction, deletion). The policy should specifically address AI use and should not be buried in technical jargon unintelligible to the reasonable person.
Data Protection Principle 6: Access and Correction. DPP6 grants individuals the right to access their personal data held by organisations and to request correction of inaccurate data. For organisations, this creates an operational challenge when AI systems are involved: if personal data has been used to train an AI model, the data is now embedded in the model's parameters and cannot be easily "accessed" or "corrected" in any practical sense. The PDPO does not explicitly address this difficulty, and the PCPD has issued only limited guidance.
Current practice assumes that DPP6 access requests should be addressed by providing the individual with a copy of their personal data as it was collected and retained in the organisation's records, not by asking the organisation to reverse-engineer or explain the model's use of that data. If an individual requests deletion of their data (a correction/erasure right under Hong Kong law, not an absolute "right to be forgotten" as in the EU GDPR), the organisation should delete the individual's data from the retained training dataset where feasible, though it need not retrain the entire model or modify the model's outputs. Any data deleted in response to such a request should, however, be excluded from datasets used to train new models.
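Operationally, honouring such a request might look like the sketch below, which removes an individual's rows from a retained CSV training dataset so that future training runs exclude them. The file layout and the data_subject_id column are assumptions for illustration.

```python
# Sketch of a deletion request against a retained training dataset: the
# individual's rows are removed so future training runs exclude them.
# The CSV layout and column name are illustrative assumptions.
import csv

def delete_data_subject(dataset_path: str, subject_id: str) -> int:
    """Rewrite the dataset without the subject's rows; return rows removed."""
    with open(dataset_path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames or []
        rows = list(reader)
    kept = [r for r in rows if r.get("data_subject_id") != subject_id]
    with open(dataset_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(kept)
    return len(rows) - len(kept)
```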
The EU's General Data Protection Regulation (GDPR) contains Article 22, which grants individuals the right not to be subject to a decision based solely on automated processing (including profiling) that has legal effects or similarly significantly affects them. Hong Kong's PDPO does not contain an equivalent provision. However, the PCPD's Guidance on the Ethical Development and Use of Artificial Intelligence (August 2021) recommends that organisations maintain human oversight over significant automated decisions and provide individuals with meaningful information about how those decisions were made.
The lack of an explicit statutory right does not mean automated decision-making is unregulated. Rather, it is constrained by the existing DPPs, particularly DPP3 (purpose limitation) and DPP6 (access rights). An organisation using an AI system to make employment decisions, lending decisions, or other significant decisions affecting individuals should: (a) ensure that use of personal data for automated decision-making is consistent with the purpose for which the data was originally collected; (b) provide affected individuals with meaningful access to information about how the decision was made (via the DPP6 access right); and (c) maintain human review capability for significant decisions so that individuals can contest or seek reconsideration of automated decisions.
Organisations should not rely on the absence of an explicit statutory right to conduct fully automated decision-making without human oversight or transparency. Best practice, and the PCPD's recommendations, strongly suggest that significant automated decisions should have meaningful human involvement.
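One way to implement that recommendation is to route significant automated outcomes to a human review queue rather than applying them directly, as in the sketch below. The decision categories, threshold, and queue mechanics are illustrative assumptions, not a prescribed design.

```python
# Sketch of "human in the loop" for significant automated decisions:
# outcomes with major effects are queued for review, not applied directly.
# The categories and queue are illustrative assumptions.
from dataclasses import dataclass

SIGNIFICANT = {"loan_denial", "employment_rejection"}  # decisions with major effects

@dataclass
class Decision:
    subject_id: str
    outcome: str
    model_score: float

human_review_queue: list[Decision] = []

def apply_decision(decision: Decision) -> str:
    """Auto-apply routine decisions; escalate significant ones to a human."""
    if decision.outcome in SIGNIFICANT:
        human_review_queue.append(decision)
        return "pending human review"  # individual can contest before any effect
    return "applied automatically"
```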
The deployment of generative AI tools (large language models, image generators, code assistants) in the workplace raises specific PDPO issues. Many generative AI services (including popular commercial offerings) retain user inputs and use them for model improvement, retraining, or marketing purposes unless the user opts out or enters into a special data processing agreement.
If an organisation's employees use a public generative AI tool to draft emails, documents, or code, and that tool processes those inputs without deletion or use restrictions, the organisation may inadvertently be transferring personal data of its employees (and potentially clients) to an external party (the AI service provider) without proper consent or safeguards. This risks breaching DPP3 (purpose limitation: the personal data in those emails and documents was collected for the organisation's business purposes, not to serve as training data for an external AI vendor) and potentially DPP4 (security: the organisation has no control over the security of the AI vendor's systems).
Best practice for organisations using generative AI tools is to: (a) implement an internal Acceptable Use Policy explicitly prohibiting the input of personal data into public AI tools; (b) ensure employees understand what data is considered personal and should not be disclosed; (c) for organisations requiring extensive AI tool use, enter into data processing agreements with the AI vendor that restrict the vendor's use of inputs and provide for data deletion; and (d) educate employees on PDPO risks.
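A technical control that can back up such a policy is a pre-submission filter that screens prompts for likely personal data before they reach an external AI service. The sketch below illustrates the idea; the patterns (a HKID-style identifier, an email address, an eight-digit local phone number) are illustrative assumptions and far from exhaustive.

```python
# Sketch of an Acceptable Use Policy filter: scan prompts for likely
# personal data before submission to an external generative AI tool.
# The patterns are illustrative, not a complete detection scheme.
import re

PATTERNS = {
    "HKID": re.compile(r"\b[A-Z]{1,2}\d{6}\([0-9A]\)"),  # e.g. A123456(7)
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b[2-9]\d{7}\b"),               # 8-digit local number
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the kinds of personal data detected; an empty list means cleared."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    hits = screen_prompt("Please draft a reply to chan.tm@example.com")
    if hits:
        print(f"Blocked: prompt appears to contain personal data ({', '.join(hits)})")
```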
Some enterprises have begun restricting employee access to public generative AI tools entirely, instead deploying in-house AI systems trained on enterprise data and not exposed to external vendors. This approach is more resource-intensive but eliminates the risk of unauthorised data transfer to external parties.
Most generative AI and machine learning services operate on cloud infrastructure hosted outside Hong Kong (often in the United States, Singapore, or other jurisdictions). When an organisation processes personal data on these services, it is transferring that data outside Hong Kong. The PDPO contains a cross-border transfer restriction in section 33, which prohibits the transfer of personal data outside Hong Kong unless specified conditions (including adequate safeguards) are met. Notably, section 33 has never been brought into operation; the PCPD nonetheless treats its requirements as best practice, and the general DPPs (particularly DPP3 and DPP4) continue to apply to data transferred overseas.
The PDPO does not define "adequate safeguards," and Hong Kong law does not presently operate an "adequacy decision" mechanism (unlike the EU GDPR, under which certain jurisdictions are deemed to provide "adequate" protection). Accordingly, organisations transferring personal data cross-border typically implement contractual safeguards. The PCPD has recommended that cross-border data transfer agreements include terms requiring: (a) the recipient to use the data only for the specified purpose; (b) the recipient to apply security measures at least equivalent to those required under Hong Kong law; (c) individuals to retain rights to access and correct data held overseas; and (d) mechanisms for the transferring organisation to verify compliance.
In practice, most organisations relying on US-based cloud AI services use the cloud provider's standard data processing agreement (which typically incorporates the provider's security certifications and privacy commitments) as the contractual safeguard. While these agreements are not bespoke to Hong Kong law, they are commonly relied upon in Hong Kong practice as a reasonable safeguard.
The PCPD published its Guidance on the Ethical Development and Use of Artificial Intelligence in August 2021. The guidance is non-binding but reflects the PCPD's regulatory expectations. Its key recommendations include maintaining meaningful human oversight over significant automated decisions, being transparent with individuals about how AI-driven decisions are made, and ensuring that the collection, retention, and use of personal data in AI systems remain consistent with the Data Protection Principles discussed above.
The PCPD has increased enforcement activity in the AI space in recent years. In 2023, the PCPD investigated the use of facial recognition technology by employers and employment screening agencies, and issued guidance recommending against reliance on facial recognition systems without stringent safeguards and human oversight. In 2024, the PCPD investigated a technology platform's use of AI to process personal data for recommendation algorithms and recommended additional transparency and user consent mechanisms.
While the PCPD does not have enforcement powers equivalent to EU data protection authorities (it cannot impose administrative fines), it does have investigative authority, the power to issue enforcement notices (contravention of which is a criminal offence), and the ability to assist individuals seeking compensation. Organisations should be aware that enforcement attention increasingly extends to AI systems, and that reputational damage and compensation liability can result from AI-related PDPO breaches.
Financial institutions (banks, investment managers, insurance companies) deploying AI systems must comply with PDPO requirements and also with specific guidance from the HKMA and SFC addressing AI use. The HKMA's 2023 circular on AI in banking and the SFC's guidance on use of AI by intermediaries both address model risk management, governance, and testing requirements. These financial services requirements operate alongside PDPO obligations; they are not substitutes for PDPO compliance.
Organisations deploying AI systems in Hong Kong should implement the following compliance measures, each drawn from the DPPs discussed above: (a) document the purpose of collection for all personal data used in AI systems and issue PICS statements that cover AI uses; (b) confirm that training and deployment uses are directly related to the collection purpose, or obtain and record separate consent; (c) adopt retention schedules for training datasets, logs, and audit trails; (d) apply security controls (encryption, access controls, audit logging, security testing, incident response) proportionate to the sensitivity of the data; (e) publish a privacy policy that plainly addresses AI use; (f) establish procedures for handling access, correction, and deletion requests that touch AI datasets; (g) maintain human oversight of significant automated decisions; (h) adopt an Acceptable Use Policy for generative AI tools and enter into data processing agreements with AI vendors; and (i) implement contractual safeguards for cross-border transfers.
This article is for general information and educational purposes only. It does not constitute legal advice and should not be relied upon as such. Laws and regulatory requirements are subject to change. You should seek independent legal advice in relation to your specific circumstances before taking any action or relying on any information in this article.