
Opinion | How Hong Kong’s AI data guidelines can help firms embrace the future



I don’t need to look into a crystal ball to believe that many aspects of our lives will, to some extent, become AI-assisted, if not entirely AI-driven. It is beyond dispute that the era of AI has arrived, and that AI will fundamentally reshape our future.

But for AI to be a positive game-changer, safety is an essential consideration. As Premier Li Qiang put it at the recent World Economic Forum annual meeting: “Like other technologies, AI is a double-edged sword. If it is applied well, it can do good and bring opportunities to the progress of human civilisation and provide great impetus to the industrial and scientific revolution.”

The rise of AI presents some of the thorniest challenges. When this technology falls into the wrong hands, it can be a nightmare. In Hong Kong, digital impersonations of senior officers at multinational companies have tricked employees into transferring funds into fraudsters’ accounts. AI’s ability to generate convincing photos and videos in seconds has also been used by scammers to orchestrate deepfake scams.
In May, the Hong Kong Securities and Futures Commission warned of a scam using deepfakes of Elon Musk to tout a cryptocurrency trading platform called Quantum AI, highlighting an alarming rise in AI-aided fraud. Photo: Screengrab
AI also often attracts criticism for the privacy and ethical issues associated with its application. For example, the lack of transparency in AI models, from training to deployment, raises concerns that personal data may be harvested and used without consent. Data inputs to generative AI chatbots may also be stored on external servers over which organisations may not have direct control. And a recruitment process solely driven by AI may reinforce gender or racial biases, especially when the AI models have been trained on unrepresentative data sets.
Given the profound implications stemming from AI applications, last October China announced its Global AI Governance Initiative, which advocates that the development and safety of AI be treated as being of equal importance. A month later, the Bletchley Declaration, endorsed by the European Union and 28 countries including China, delivered a powerful statement recognising “a unique moment to act and affirm the need for the safe development of AI”.

Managing the risks of AI is no easy task. Organisations may find it challenging to grapple with the regulatory landscape, given the complexity and novelty of AI technology.

Organisations in Hong Kong that develop, customise or use AI systems that involve personal data are duty-bound to comply with the Personal Data (Privacy) Ordinance. The application of the ordinance, as a piece of technology-neutral legislation, is not affected by the technology employed. In other words, there is no lacuna. The ordinance applies equally to the handling of personal data by AI.

With this in mind, my office recently published the “Artificial Intelligence: Model Personal Data Protection Framework” to provide internationally well-established and practicable recommendations as well as best practices to help organisations in the procurement, implementation and use of AI, including generative AI, in compliance with the ordinance.
Hong Kong’s Office of the Privacy Commissioner for Personal Data released its report, “Artificial Intelligence: Model Personal Data Protection Framework”, on June 11. Photo: May Tse

This model framework covers recommended measures for four general business processes: establishing AI strategy and governance, conducting risk assessment and human oversight, customising AI models and implementing and managing AI systems, as well as communicating and engaging with stakeholders.

Companies may be concerned that adopting the model framework would increase compliance costs. On the contrary, we believe that adopting the framework would help to reduce them.

Indeed, the framework provides a step-by-step guide on the considerations and measures to be taken throughout the life cycle of AI procurement, implementation and use, which would materially reduce the need for organisations to seek external advice from system developers, contractors or even professional service providers.

Moreover, in line with international practice, the framework recommends that organisations adopt a risk-based approach, implementing risk management measures that are commensurate with the risks posed, including an appropriate level of human oversight. This effectively enables organisations to save costs by focusing their resources on the oversight of higher-risk AI applications.

Thus, the model framework has been introduced to facilitate the implementation and use of AI in a safe and cost-effective manner, not to inhibit it.

As Hong Kong is poised to become an international innovation and technology hub, I believe that the model framework will help to nurture the healthy and safe development of AI in Hong Kong and propel the expansion of the digital economy throughout the Greater Bay Area.

AI is set to transform almost every facet of our lives. Whether AI becomes a game-changer for better or for worse hinges on our actions today. By fostering the safe and responsible use of AI, together we can build a trustworthy AI-driven world.

Ada Chung Lai-ling is Hong Kong’s Privacy Commissioner for Personal Data



