
Navigating the legal landscape of generative AI: A guide to responsible practices

Mariam Sarr discusses generative AI

The legal community is grappling with the ethical implications of generative artificial intelligence (AI) as technological innovation continues. Developed within the field of machine learning, generative AI has seen rapid growth and widespread adoption. The technology involves training systems on vast datasets, often including personal information, to create content such as text, code, images, video, or audio in response to user prompts.

Various authoritative statements and resolutions indicate global recognition of the potential risks associated with generative AI. Notable instances include a joint statement from G7 data protection and privacy authorities and a resolution by the Global Privacy Assembly, both highlighting concerns and calling for responsible practices. Closer to home, the Office of the Privacy Commissioner of Canada (OPC), along with its counterparts in British Columbia, Quebec, and Alberta, is actively investigating OpenAI’s ChatGPT.

This article presents the core guidelines outlined in the “Principles for responsible, trustworthy and privacy-protective generative AI technologies”, released on December 7, 2023, by the OPC. These guidelines provide a comprehensive framework for organizations involved in creating, providing, or using generative AI, emphasizing the need for compliance with existing privacy laws while fostering responsible and trustworthy AI development.

Key principles for responsible generative AI

Legal authority and consent

The foundation of responsible generative AI lies in understanding and documenting the legal authority for collecting, using, disclosing, and deleting personal information. Where consent is the legal authority, it must be valid, meaningful, and specific, avoiding deceptive design patterns. Organizations should be careful when sourcing information from third parties to ensure lawful collection and disclosure.

Appropriate purposes

All stakeholders must ensure that the collection, use, and disclosure of personal information through generative AI serve appropriate purposes, as deemed reasonable by an average person in the given circumstances. Developers and providers are urged not to create systems that violate predefined “no-go zones”, such as profiling that leads to unfair treatment. Organizations using generative AI tools should be vigilant, monitoring for and avoiding inappropriate uses.

Necessity and proportionality

Stakeholders are called upon to establish the necessity and proportionality of using generative AI and personal information within these systems. This entails preferring anonymized or de-identified data over personal information where possible. Organizations using generative AI must evaluate the tool’s validity and reliability and consider alternative privacy-protective technologies.

Openness

Transparency is essential: all parties must inform individuals about the collection, use, and disclosure of personal information throughout the lifecycle of generative AI. Developers and providers are expected to disclose risks and policies to organizations, while organizations using generative AI should communicate the system’s role in decision-making and ensure users are aware of privacy risks.

Accountability

Recognizing that accountability is a shared responsibility, stakeholders must establish clear internal governance structures, undergo assessments, and be responsive to privacy-related concerns. Developers and providers must make AI outputs explainable, traceable, and subject to independent auditing. Organizations using generative AI are responsible for the decisions made with the system, for providing effective challenge mechanisms, and for understanding the system’s limitations.

Individual access

To enable individuals’ right to access their personal information, all parties must develop procedures that provide for the meaningful exercise of this right. Organizations using generative AI should maintain records to fulfill requests related to decisions made using the system.

Limiting collection, use, and disclosure

All parties must limit the collection, use, and disclosure of personal information to what is necessary. Stakeholders should avoid function creep, understand the sensitivity of publicly accessible data, and establish appropriate retention schedules. Developers should use filters to remove personal information from training datasets, and organizations must ensure that inferences drawn by generative AI are used only for specified purposes.

Accuracy

Ensuring the accuracy, completeness, and currency of personal information is a joint responsibility. Developers need to assess the accuracy of training data, while organizations using generative AI must consider how disclosed accuracy issues affect their use of the system.

Safeguards

Safeguards must be implemented throughout the generative AI lifecycle, commensurate with the sensitivity of the information involved. Developers should design products to prevent inappropriate use and mitigate risks, while organizations using generative AI must ensure that data under their control does not compromise model safeguards.

Special consideration: Protecting vulnerable groups

The OPC emphasizes the need to protect vulnerable populations, such as children and members of historically disadvantaged groups. Developers, providers, and organizations using generative AI must actively work to prevent biases, especially in decision-making processes. Additional oversight, monitoring, and privacy impact assessments are essential to ensure fairness and mitigate risks for these groups.

Conclusion

While generative AI offers unprecedented opportunities, it also raises complex privacy and ethical challenges. The principles outlined by the OPC provide a comprehensive guide for legal professionals, organizations, and developers navigating the evolving landscape of generative AI. In this era of rapid technological advancement, the legal community must actively engage with these principles to ensure that the development and use of generative AI technologies are governed by responsible practices.