Getting Started with Generative AI Ethics
If your enterprise isn’t using generative AI yet, your vendors are. In the near future, gen AI functions will be woven throughout IT operations; whether you’re directly engaged with it or not, it’s going to become part of your business. Your customers may be anxious about AI deployments, as is often the case with emerging technologies, and perhaps more so given the public conversation around ChatGPT and its ilk. Now is the time to begin building and articulating an ethical framework for your organization’s engagement with generative AI.
We’ve advocated for digital ethics on the Cascadeo blog before, citing public fears of LLMs as one driver for clear and effective policy. But the growing adoption of generative AI, in particular, as a mainstream computing tool demands a more robust and specific framework to maintain customer trust.
The experts working with LLMs at Cascadeo have begun to articulate an evolving set of pillars for responsible generative AI use, to ensure that we honor our ethical engineering principles as we bring new offerings to our own customers. Those pillars include:
Transparency
AI systems should be transparent in their operation and decision-making. Users should be able to understand how a system works and why it makes the decisions it does, and customers should be informed of how the system is being used.
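One way to make that disclosure concrete is to attach provenance metadata to every AI-generated response so the “AI-generated” label travels with the text. The sketch below is a minimal, hypothetical illustration; the function and field names are ours, not any provider’s API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DisclosedResponse:
    """An AI-generated answer bundled with provenance a customer can inspect."""
    text: str
    model: str               # which model produced the text
    purpose: str             # why the system was invoked
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    ai_generated: bool = True  # explicit flag for downstream display

def disclose(text: str, model: str, purpose: str) -> DisclosedResponse:
    """Wrap raw model output so the disclosure record travels with it."""
    return DisclosedResponse(text=text, model=model, purpose=purpose)

# Example: a UI can render the label and provenance alongside the answer.
resp = disclose("Here is a draft reply...", model="example-llm-v1",
                purpose="customer-support draft")
print(resp.ai_generated, resp.model, resp.generated_at)
```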
Fairness
AI systems should be fair and unbiased in their treatment of all users. This means that they should not discriminate against any individual or group of individuals based on their race, gender, religion, sexual orientation, or any other protected characteristic. LLMs in public use so far have shown considerable bias in outputs unless guardrails are established and maintained; those guardrails should be engaged and communicated clearly.
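Guardrails like these can be enforced in code by running every model output through a configurable set of checks before it reaches a user. The following is a deliberately simple, hypothetical sketch; the check function is an illustrative stand-in, and in practice you would rely on your provider’s moderation tooling or a vetted classifier:

```python
from typing import Callable

# Each check returns a reason string if the output fails, or None if it passes.
Check = Callable[[str], str | None]

def deny_list_check(output: str) -> str | None:
    """Illustrative stand-in: flag outputs containing known-problematic terms."""
    DENY = {"slur_placeholder"}  # populate from your policy, not hard-coded
    hits = [term for term in DENY if term in output.lower()]
    return f"deny-list terms: {hits}" if hits else None

def apply_guardrails(output: str, checks: list[Check]) -> str:
    """Return the output only if every guardrail passes; otherwise refuse."""
    for check in checks:
        reason = check(output)
        if reason is not None:
            # Log the reason for accountability; show the user a safe fallback.
            print(f"guardrail triggered: {reason}")
            return "This response was withheld by our content guardrails."
    return output

print(apply_guardrails("A perfectly ordinary answer.", [deny_list_check]))
```

Keeping the checks as a plain list also makes the guardrail policy itself auditable and easy to communicate, which supports the transparency pillar above.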
Accountability
AI systems should be accountable for their actions. This means that there should be clear processes in place for holding the developers and users of AI systems responsible for any harm that they cause. The risks of harm should be acknowledged and understood.
Privacy
AI systems should respect the privacy of users. They should collect and use only the data necessary for the system to function, and they should not share that data with third parties without the user’s consent. Major LLM providers now offer opt-outs for training-data collection; Cascadeo employs those opt-outs, as should any organization that gathers customer data.
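Beyond provider opt-outs, data minimization can be enforced in your own integration code by redacting obvious personal identifiers before a prompt ever leaves your systems. This is a deliberately simple, hypothetical sketch; a real deployment should use a vetted PII-detection library rather than a handful of regexes:

```python
import re

# Simple patterns for common identifiers; a real system needs broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with typed placeholders before any API call."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or 555-123-4567."))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```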
Security
AI systems should be secure from unauthorized access, use, or modification. This means that they should be properly protected from cyberattacks and other threats. LLM integration processes must be safeguarded, as well.
Sustainability
AI systems should be developed and used sustainably, without negative impact on the environment or on the resources available to future generations. Managing the massive quantities of data involved in LLM operation requires substantial energy, so enterprise users should learn what AI providers are doing to minimize their LLMs’ environmental impact and weigh that information as part of overall sustainability policies.
Culturally specific concerns are likely to arise among your stakeholders as well. In the U.S., where intellectual property is closely guarded, concerns about LLMs training on copyrighted works have already spawned lawsuits and will likely shape forthcoming legislation. While this is not strictly a security issue, it starts to look like one to those worried about data protection: if LLMs are swallowing entire libraries whole, customers will want assurance that their data is not being swallowed up and regurgitated as well. This is another instance where the public discourse surrounding LLMs demands an extra layer of clarity and transparency to ensure customer trust.
Perhaps the most discussed fear around generative AI is that it will replace humans, eliminating jobs and accelerating cultural changes that suggest impending disaster. Using AI in the service of humans, to supplement and expand human ingenuity rather than to replace it, is one way to craft an ethical response to this most dramatic of anxieties. Generative AI offers unprecedented speed on straightforward tasks like routine coding; it can free engineers to innovate and push boundaries, or let them focus on the more advanced aspects of their work while leaving foundational tasks to the LLMs. Even that approach may eliminate some positions, but it sets an expectation of using AI to improve productivity in the most humane way possible.
Ethical considerations are always challenging and often fraught, and emerging, poorly understood technologies multiply the potential pitfalls. But generative AI will change the way humans work whether we’re ready for it or not, so the best choice is to be prepared. That preparation must include a thoughtful approach to ethical issues as they arise, and it must be flexible enough to evolve as our understanding evolves.
To that end, this starter reading list may help prompt useful conversations and develop your understanding of your customers’ worries about generative AI:
- IBM Artificial Intelligence Pillars: Our fundamental properties for trustworthy AI
- Google AI Responsibility Principles
- Facebook’s five pillars of Responsible AI
- European Commission Ethics guidelines for trustworthy AI
- Harvard Business Review: Managing the Risks of Generative AI
- Forbes: Six Risks of Generative AI
- Federal Trade Commission: Generative AI Raises Competition Concerns
- AI and the SDGs: How is Artificial Intelligence Helping us Achieve and Track the Sustainable Development Goals?
- Time: The Workers Behind AI Rarely See Its Rewards. This Indian Startup Wants to Fix That
- Washington Post: Behind the AI boom, an army of overseas workers in ‘digital sweatshops’
- Mindspark: Responsible AI: A human-centered design approach to responsible artificial intelligence