Technology should be in service of the human, not the other way around. As technology becomes available to more people around the world, we can't help but wonder if it is slowly replacing humans and humanity. More organizations are implementing artificial intelligence, or AI, and realizing its power. Many sectors of society are asking profound ethical and social questions about how AI can and should be used responsibly.
Avoiding the Perpetuation of Bias
One of the biggest concerns in AI is managing bias. Humans carry agendas and opinions, so it is only to be expected that existing datasets contain biases. Humans need to identify that bias and manage it. We should also train AI systems to recognize it, so that AI models don't reinforce stereotypes or let bias influence their predictions and recommendations.
For example, suppose a bank uses AI to predict whether to approve an applicant's loan. If the bank's past data shows that women or people from minority groups were approved less often, the AI algorithm might learn to reject those applications. In this way, AI picks up the bias and perpetuates it.
This can be prevented by carefully examining the training data, since AI algorithms depend heavily on it and that data can carry human bias. Diversity should be considered in decision making, and protected classes like gender and race should be excluded from the data where appropriate.
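To make the loan example concrete, here is a minimal sketch of the two safeguards above: auditing historical decisions for disparities across a protected attribute, and excluding that attribute before training. The data, field names, and functions are all hypothetical, invented for illustration; real bias audits are considerably more involved (for instance, removing a protected column does not remove proxies correlated with it).

```python
# Hypothetical past loan decisions; "approved" is the label an AI
# model would be trained to predict.
records = [
    {"gender": "female", "income": 52000, "approved": False},
    {"gender": "female", "income": 61000, "approved": True},
    {"gender": "female", "income": 48000, "approved": False},
    {"gender": "male",   "income": 50000, "approved": True},
    {"gender": "male",   "income": 63000, "approved": True},
    {"gender": "male",   "income": 47000, "approved": False},
]

def approval_rate_by_group(rows, protected_attr):
    """Approval rate per value of a protected attribute (e.g. gender)."""
    totals, approvals = {}, {}
    for row in rows:
        group = row[protected_attr]
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(row["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

def drop_protected(rows, protected_attrs):
    """Remove protected attributes before the data is used for training."""
    return [{k: v for k, v in row.items() if k not in protected_attrs}
            for row in rows]

# A large gap between groups signals historical bias in the data.
rates = approval_rate_by_group(records, "gender")
print(rates)

# Exclude the protected class before any model sees the data.
training_rows = drop_protected(records, {"gender"})
```

Running the audit first makes the disparity visible (here, a much lower approval rate for one group) before any model is trained on it.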
The Next Level of AI
AI ethics is continuously being debated and developed by groups like the Partnership on AI, which brings together education and technology leaders from different fields, including Salesforce. They have joined forces to tackle the responsible development and use of AI technologies. The ultimate goal is for AI to empower humans instead of dehumanizing us.