"Moral rules are second only to the law. Morality can help us uphold the law and fill the void of legal silence."
To fill these gaps, organizations must set ethical parameters that govern how they develop and use AI-based technologies. Speaking on the same webinar, Kathy Baxter, Ethical AI Practice Architect at Salesforce, explained how these parameters are set at Salesforce:
“We need to empower our users. To do this, our AI needs to be inclusive and respect the rights of everyone affected by it. Therefore, we have developed an AI Charter that sets out our company's Artificial Intelligence Principles.
“We believe we must safeguard all the data we are entrusted with and ensure what we are building protects and respects human rights. It must be accountable, and we seek and leverage feedback from our customers and civil society groups. Transparency is also important. We must be clear about how we build our models and explain to our users how our AI makes predictions or recommendations.”
Designing ethical AI
This clear ethical framework must be built into the DNA of the AI design and development process. Baxter explained how this is achieved at Salesforce:
“Salesforce works on the agile development methodology. During the very early design stages, we do an assessment with the teams to identify all the intended and unintended consequences of the AI application. We do an analysis of the likelihood and seriousness of the impact, and ask 'should this application even exist in the first place?'. If the answer is 'yes', we identify the strategies we need to put in place to ensure those unintended consequences are mitigated as much as possible.”
However, infusing an ethical framework into the design and development of AI-based technologies may not always be practical. David Hardoon, Senior Advisor on Data & AI, UnionBank of the Philippines, explained during the webinar:
"We need to be careful of the term 'by design'. If an AI methodology or solution algorithm is applied within a specific context or application, then you can hard code the ethics in. But if you have something that needs to be applied more generally from east to west you have to deliberately allow for certain flexibility. In these cases, we need a second line of defence.