Generative AI and Terms of Service: What Hong Kong Businesses Need to Address

Hong Kong businesses deploying or relying upon generative AI tools face a constellation of legal risks that emerge directly from the AI's operational characteristics: outputs may be inaccurate or defamatory; intellectual property ownership of AI-generated content is legally uncertain; data security and privacy obligations become complex when inputs are processed by external vendors. Yet most businesses have not updated their standard terms of service, employment agreements, or vendor contracts to address these AI-specific exposures. This article identifies the critical legal issues arising from generative AI use, examines the relevant statutory frameworks, and provides practical guidance on drafting and implementing contractual protections.

Output Liability: Who Bears Responsibility for What the AI Says

Generative AI systems produce outputs that may be inaccurate, biased, defamatory, or otherwise harmful. The question of legal liability for those outputs is often unclear to the business deploying the AI. The answer depends on the specific context and the nature of the output, but several general principles apply under Hong Kong law.

Defamation Liability for AI-Generated False Statements. Under Hong Kong's defamation law (which is primarily a matter of common law, supplemented by the Defamation Ordinance (Cap. 21)), a person is liable for publishing a false statement of fact that identifies or refers to another person and that damages that person's reputation. The statement must be communicated to a third party (the publication requirement) and must be understood as referring to the plaintiff.

If a business uses a generative AI tool to produce a document or statement that is published to third parties (e.g., included in a marketing email, posted on the company's website, or used in an employment context), and if that statement is false and identifies or refers to an individual, the business publishing the statement is liable for defamation—regardless of the fact that the statement was generated by an AI system rather than authored by a human. The AI tool is not a natural person and cannot itself be sued; liability attaches to the publisher of the statement.

Example: A company uses a generative AI tool to draft news articles for its website. The tool generates an article falsely stating that a named competitor's CEO has been convicted of fraud. The company publishes the article without verification. The CEO sues for defamation. The company is liable, even though the falsehood was generated by the AI tool, because the company published the statement without reasonable precautions to verify accuracy.

Professional Negligence Liability for AI-Generated Advice. If a business holds itself out as providing professional services (legal advice, financial advice, medical advice, accounting services, etc.) and uses an AI tool to generate advice that is provided to a client, and if that advice is inaccurate and causes the client loss, the business may be liable for professional negligence. The negligence claim does not depend on the fact that an AI tool was used; the duty of care applies to all advice rendered by professionals.

Example: A small law firm uses a generative AI tool to draft employment contract templates for its clients. The AI-generated template contains a clause that is inconsistent with current Hong Kong employment law. The firm provides the template to a client without review, and the client uses the template in a hiring arrangement. When the employee later sues over the allegedly unlawful clause, the client seeks to recover from the law firm for negligence based on the defective advice. The firm is liable, notwithstanding that the template was AI-generated, because the firm owed a duty of care in rendering legal advice.

Product Liability Implications for AI-Integrated Products. If a business sells a product that incorporates or relies upon an AI system, and if the AI system malfunctions or produces incorrect outputs causing physical injury or economic loss, the business may face product liability claims. This is an emerging area of law, and Hong Kong courts have not yet addressed AI product liability extensively. However, the principles of strict liability and negligence in product liability cases would apply. A business selling an AI-enabled device should ensure that the device's AI components have been tested, that risks are disclosed, and that appropriate disclaimers are included in the product documentation and terms of sale.

Intellectual Property Ownership of AI-Generated Content

Who owns the copyright in text, images, code, or other creative works generated by an AI system? This question has not been definitively resolved by Hong Kong courts, though statutory guidance provides some direction.

Copyright Ordinance Treatment of Computer-Generated Works. Hong Kong's Copyright Ordinance (Cap. 528) contains provisions addressing computer-generated works. Section 11(3) states that for a literary, dramatic, musical, or artistic work that is computer-generated, the "author" is taken to be the person who undertook the arrangements necessary for the creation of the work. This provision pre-dates generative AI and was drafted with early computer graphics and music composition software in mind, where a human made specific creative decisions and gave specific instructions to the computer.

Applied to generative AI, the provision is ambiguous. If an employee of a company provides a prompt to a generative AI system and the system produces output, is the employee (or the company employing the employee) the "person who made the necessary arrangements"? Or does "necessary arrangements" require a higher level of creative direction that is absent in the prompt-driven, largely automated process of generative AI? Courts have not definitively answered this question in Hong Kong.

Other jurisdictions have begun to address the question: the US Copyright Office determined in 2023 that AI-generated works lacking significant human creative input do not qualify for copyright protection, because they lack the human authorship that US copyright law requires. The UK retains a statutory computer-generated works provision (section 9(3) of the Copyright, Designs and Patents Act 1988), but how that provision applies to generative AI outputs, and whether such outputs can be "original" in the copyright sense, remains untested. In the EU, the 2019 Digital Single Market Copyright Directive addresses text and data mining exceptions relevant to AI training, but the protectability of AI-generated outputs remains unsettled.

In the absence of clear Hong Kong judicial guidance, businesses should operate under the following practical assumptions: (a) generative AI-produced outputs that involve no human creative input may not qualify for copyright protection in Hong Kong, leaving the outputs in the public domain; (b) generative AI-produced outputs where the human has exercised significant creative direction or modification may qualify for protection as derivative works or as works where the human is the author; (c) for outputs where copyright protection is uncertain, businesses should not rely on copyright infringement claims against competitors who use similar outputs.

Training Data Provenance and Copyright Infringement Risk. Generative AI systems are trained on large datasets, which may include copyrighted works obtained without license from the copyright holder. When a generative AI system produces an output, that output may incorporate or closely resemble material from the training data, creating potential copyright infringement liability.

For a business using a generative AI tool, the infringement risk is complex: (a) if the business's input prompts are designed to elicit outputs resembling copyrighted works, and the system produces outputs that infringe those copyrights, the business may be liable as an infringer; (b) alternatively, if the business uses a generative AI tool provided by a vendor, the vendor may bear infringement risk for having trained the model on copyrighted works. Many generative AI vendor agreements include indemnification provisions shifting copyright infringement liability to the vendor (or allocating it based on specified conditions), which may protect the business user.

Businesses should review their generative AI vendor agreements carefully to understand copyright indemnification provisions and to assess their own exposure. Some vendors explicitly warrant that outputs do not infringe third-party copyrights; others disclaim such warranties entirely. The breadth of this indemnification is an important commercial negotiating point.

Trademark Issues in AI-Generated Content. Generative AI systems may produce outputs that incorporate trademarks, brand names, or identifying characteristics of products belonging to third parties. For instance, an image generation AI might produce an image of a product that closely resembles a branded product. Using such outputs commercially could infringe trademark rights. Businesses should be cautious about publishing, marketing, or selling AI-generated content that incorporates third-party brand identifiers without appropriate consent.

Data Protection Obligations with AI Tool Vendors

As discussed in the preceding article on PDPO and AI, using generative AI tools to process personal data raises data protection issues. This article focuses on the contractual mechanisms through which businesses can address those issues in vendor agreements.

Data Processing Agreements and Vendor Obligations. When a business uses a third-party generative AI tool that processes personal data, the business should enter into a data processing agreement (DPA) with the vendor specifying: (a) what personal data will be provided to the vendor; (b) the permitted uses (e.g., "for AI model inference only, not for model training"); (c) data retention and deletion obligations (e.g., "input data will be deleted within 30 days of processing"); (d) data security requirements; (e) the vendor's restriction on further transferring data to subprocessors; and (f) the vendor's obligation to assist with individual data subject access requests under the PDPO.

Many generative AI vendors (including public cloud providers and commercial AI platforms) now offer standard data processing agreements or model agreements for enterprise customers. These documents typically contain provisions addressing data security, retention, and use restrictions. However, the scope of these provisions varies widely: some vendors guarantee that input data will not be used for model training, while others reserve the right to use inputs for training unless the customer opts out. Businesses should review these agreements carefully and should negotiate for the level of data protection appropriate to the sensitivity of the data being processed.

Data Residency and Cross-Border Transfer Restrictions. Many generative AI tools operate on cloud infrastructure hosted outside Hong Kong, creating potential PDPO issues regarding cross-border data transfers. A business should confirm with the vendor: (a) where personal data will be stored and processed (e.g., EU data centres, US data centres, Hong Kong data centres); (b) whether the vendor processes data in multiple jurisdictions; and (c) whether the vendor offers data residency options (e.g., storing data exclusively in Hong Kong or the EU).

For businesses handling sensitive personal data (employee records, customer financial data, health information), data residency in Hong Kong or a jurisdiction with strong privacy protection (EU, Singapore) is preferable. Vendors that process data on servers distributed globally create compliance complexity and increase the risk of loss of control over data.

Key Contractual Clauses: Vendor Agreements for Generative AI Tools

Businesses entering into agreements with generative AI vendors should ensure that the following provisions are addressed (even if standard terms are used):

Output Indemnification. The agreement should specify whether the vendor will indemnify the customer for third-party claims that AI-generated outputs infringe copyright, trademark, trade secrets, or other intellectual property rights. An indemnification provision is important because customers are at risk of infringement liability when using outputs. Some vendors indemnify broadly (covering all IP infringement claims); others limit indemnification to cases where the customer has followed the vendor's usage guidelines; still others disclaim indemnification entirely. A strong negotiating position favours broad indemnification, though vendors may resist it and may require the customer to procure insurance.

Data Processing Restrictions. The agreement should prohibit the vendor from using customer inputs for model training, retraining, or improvement without explicit consent. This is a critical clause for customers handling confidential or sensitive data. Some vendors allow customers to opt out of training use; others permit training use by default. The customer should explicitly confirm the vendor's policy and should require contractual restriction if necessary.

Limitations of Liability. The agreement should address liability limitations and should allocate liability for outputs. A well-drafted provision might specify: (a) the vendor is not liable for the accuracy, completeness, or non-infringement of outputs; (b) the customer bears all liability for outputs it publishes or relies upon; (c) the vendor is liable only for direct damages up to a specified cap (e.g., the fees paid in the prior 12 months). These limitations allocate risk appropriately because the vendor has limited control over customer use of outputs.

Service Level Agreements and Availability. If the customer relies on a generative AI tool for critical business operations, the agreement should include a service level agreement (SLA) specifying uptime commitments, performance metrics, and remedies for service failures (typically credits against future fees). For non-critical applications, SLA negotiation may be less important.

Audit Rights and Transparency. The agreement should provide the customer with a reasonable right to audit the vendor's compliance with data protection and security obligations, or should provide for periodic third-party security assessments (SOC 2 compliance, for instance) that the vendor makes available to customers.

Term, Termination, and Data Return. The agreement should specify how long the contract lasts, under what circumstances either party can terminate, and what happens to customer data upon termination. The vendor should be obligated to return or securely delete customer data within a specified period after termination (typically 30–90 days).

Key Clauses: Customer-Facing Terms of Service

If a business uses generative AI to provide services or deliver products to customers, the business should update its customer-facing terms of service to address AI use. Key additions include:

Disclosure of AI Use. While Hong Kong law does not currently mandate disclosure of AI use in most commercial contexts (unlike some emerging regulatory frameworks in the EU and Singapore), best practice and reputational risk management favour transparency. A clear disclosure that an AI tool was used to generate or contribute to content provides customers with the information necessary to assess the reliability of the content and to make informed purchasing decisions.

Limitation of Liability for AI-Generated Content. The terms should explicitly state that: (a) AI-generated content is provided "as is" and may contain inaccuracies, errors, or omissions; (b) the business does not warrant the accuracy or completeness of AI-generated content; (c) customers should not rely on AI-generated content for critical decisions without independent verification; and (d) the business's liability for inaccuracies in AI-generated content is limited to a specified cap (often zero, or the fees paid by the customer).

Intellectual Property Ownership of AI-Generated Outputs. The terms should clearly specify who owns copyright in AI-generated outputs provided to the customer. Options include: (a) the business retains ownership and grants the customer a license to use the outputs; (b) the customer owns the outputs; (c) ownership is shared or depends on the degree of customer customization. The choice depends on the business model, but clarity is essential to avoid disputes.

User Obligations Regarding Input Data. The terms should require customers not to input confidential, proprietary, personal, or regulated information into the AI tool without appropriate safeguards. This provision is important for protecting both the customer (by restricting unintended disclosure) and the business (by limiting its liability for customer misuse).

Disclaimer for Professional Services. If the business is providing what appears to be professional advice (legal, financial, accounting, tax advice) with AI assistance, the terms should clearly disclaim that the output constitutes professional advice and should recommend that customers seek independent professional counsel for important decisions. This disclaimer does not eliminate the business's professional liability if it actually renders professional services, but it may reduce liability exposure and signals to customers the limitations of AI-generated output.

Employment Policy Considerations

Businesses deploying generative AI tools internally should establish clear employment policies addressing: (a) permitted and prohibited uses of AI tools by employees; (b) restrictions on inputting confidential company information or customer data into public AI tools; (c) disclosure requirements when AI is used to draft client communications or deliverables; (d) quality assurance responsibilities for work product incorporating AI; and (e) training on PDPO and intellectual property risks associated with AI use.

A sample employment policy might include provisions such as:

  • Employees shall not input confidential company information, customer data, employee personal data, or proprietary business information into any generative AI tool without explicit management approval.
  • Where generative AI is used to draft documents or communications provided to customers or third parties, the employee remains responsible for the accuracy and appropriateness of the final deliverable. AI-assisted work product must be reviewed and verified by the employee before delivery.
  • Employees who provide customer confidential information to any vendor or third party, including generative AI vendors, without appropriate authorization or without confirming that the vendor has executed appropriate confidentiality and data protection agreements, will be subject to discipline.
  • The company retains ownership of all work product generated by employees using generative AI tools, regardless of the degree of AI involvement. Employees shall not claim copyright or ownership over AI-assisted work.

SFC and HKMA Regulated Entities: Additional Considerations

Financial institutions and investment firms licensed by the SFC or HKMA face additional requirements when deploying generative AI. The SFC's 2024 circular on AI use by intermediaries addresses governance, model risk management, and conduct requirements. The HKMA's guidance on AI in banking covers similar ground. These regulatory frameworks require that:

  • AI systems used for customer-facing activities (investment advice, research, client communications) must be subject to governance frameworks ensuring they comply with conduct of business requirements, suitability obligations, and disclosure requirements.
  • AI systems must be appropriately tested and validated before deployment, with documentation of testing methodology and results.
  • There must be meaningful human oversight of AI systems used for significant customer decisions or recommendations.
  • AI-generated research or investment recommendations must comply with disclosure requirements and must not mislead customers regarding AI involvement.

Regulated entities should ensure that their generative AI implementations, and their terms of service or client agreements, are consistent with these regulatory requirements.

Risk Management: A Practical Framework

Businesses deploying generative AI should implement a basic risk management framework that includes: (a) identification of all use cases and the types of data or decisions involved; (b) classification of risk (high-risk uses such as automated lending decisions or legal advice; medium-risk uses such as customer service drafting; low-risk uses such as internal brainstorming); (c) tailored contractual, policy, and disclosure responses appropriate to the risk classification; and (d) periodic review and updates as the regulatory environment and the state of technology evolve.

How Alan Wong LLP Can Help

Learn about our technology, data and AI practice

This article is for general information and educational purposes only. It does not constitute legal advice and should not be relied upon as such. Laws and regulatory requirements are subject to change. You should seek independent legal advice in relation to your specific circumstances before taking any action or relying on any information in this article.
