| Asia | Articles

The SFC has issued a circular addressing the implications of generative AI in the financial services sector, emphasising the need for firms to adopt robust governance frameworks when using AI language models.
The regulator has reminded licensed corporations (LCs) in Hong Kong that they must consider all relevant risk factors for their specific AI language model (LM) applications and implement appropriate measures to mitigate those risks. Using AI LMs for investment advice or research is particularly considered as high-risk, requiring additional mitigation measures.
Based on the four core principles highlighted in the circular, we recommend the following actions:
1. Senior management responsibilities
Senior management must ensure effective policies, procedures, and oversight is in place for the lifecycle of AI LMs. This should cover development, validation, and ongoing monitoring.
Qualified personnel should also be designated to manage AI LMs, including compliance assessments.
2. AI model risk management
A segregated model development function has to be maintained and should include robust validation and testing processes.
According to the circular, LCs must conduct ongoing monitoring to verify that AI LMs remain fit for purpose. Risk mitigation strategies should be implemented to address hallucination risks and ensure accountability for AI outputs.
For high-risk use cases, include model validation, human oversight to review outputs, and robust testing of model responses to variations in input, and to discuss and notify the SFC if necessary.
3. Cybersecurity and data risk management
Stay up to date on the cybersecurity landscape related to AI LMs and establish effective policies to manage cybersecurity risks, including prompt detection of intrusions.
Protect yourself against attacks that could extract confidential information or manipulate AI LM responses. We advise putting in place regular adversarial testing to enhance protection.
Non-public data must be encrypted both at rest and in transit to safeguard confidentiality. Be careful when using AI LM-based browser extensions to avoid potential privacy risks.
Check the quality of training data is adequate and actively identifies and mitigates biases. Your training processes should comply with relevant data protection frameworks.
Effective controls must also be in place to protect sensitive information throughout the AI LM lifecycle.
4. Third-party provider risk management
Carefully select and monitor third-party providers, ensuring they possess the necessary expertise and controls.
When validating third-party AI LMs, LCs should assess the provider’s model risk management framework and the appropriateness of the AI LM for their use cases.
For open-source AI LMs without clear third-party oversight, you have to still apply relevant model development and management practices.
Evaluate potential impacts of third-party breaches on personal data privacy and intellectual property laws and ensure providers have protective measures in place.
Responsibilities for managing cybersecurity risks between LCs and their third-party providers should be clearly defined. Assess vulnerabilities in the supply chain and apply stringent cybersecurity controls. It’s also important to maintain an inventory for monitoring.
Lastly, contingency plans to maintain operations should be prepared in case of service disruptions from third-party providers.
The fast-evolving nature of AI technology requires continuous testing and monitoring of AI LMs, especially in high-risk applications, to safeguard against emerging risks. Provide clear disclosures when clients interact with AI systems to ensure transparency.
How can Bovill Newgate help you strengthen your governance framework?
We can assist you with making sure your business aligns with the compliance obligations related to AI LMs.
We can also conduct a comprehensive risk assessment to identify potential compliance risks and assist in developing or updating your internal policies and procedures to align with SFC expectations.