Problems with AI making unethical decisions may be resolved
Artificial intelligence (AI) has advanced rapidly and is gradually becoming part of many operations within companies; however, AI hasn't been trained to distinguish what is ethical from what is not – it has no sense of ethics. That may change with a recently presented mathematical method for managing unethical biases in AI systems. The study was conducted by researchers from the University of Warwick, Imperial College London, the EPFL research institute in Lausanne, Switzerland, and Sciteb Ltd, and the method was created to help businesses and regulators deal with AI.
For companies, an unethical decision can carry a huge cost and potentially damage their future business, so the researchers have created what they call the "unethical optimization principle." The idea is to change how the AI operates at its core so that unethical outcomes can be rejected while the optimization process runs, according to Professor Robert MacKay of the Mathematics Institute of the University of Warwick. The study was published in the journal Royal Society Open Science.
Professor Wendy Hall of the University of Southampton, who has done extensive work on the potential benefits and problems of AI, said that people cannot rely on AI systems to act ethically just because their objectives appear neutral. Under certain conditions, an "AI system will disproportionately find unethical solutions unless it is carefully designed to avoid them." AI-powered software may choose from many potential strategies, and some of them can turn out to be discriminatory or break the rules by misusing customer data. The new "unethical optimization principle" gives AI systems a simple formula for avoiding unethical options. AI systems often drift toward unethical choices simply because those choices are profitable.
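The mechanism described above can be illustrated with a minimal sketch. This toy example assumes a hypothetical strategy space where each option has an expected return and an ethics flag; the strategy names and numbers are invented for illustration and are not from the study, and the paper's actual formulation is statistical rather than a simple filter. The point is that a naive optimizer maximizing return alone will pick an unethical strategy whenever it pays best, while rejecting unethical options before optimizing avoids this.

```python
# Toy strategy space: each strategy has an expected return and an
# "unethical" flag. All names and values are illustrative assumptions.
STRATEGIES = [
    {"name": "standard_pricing",      "expected_return": 1.00, "unethical": False},
    {"name": "loyalty_discounts",     "expected_return": 1.05, "unethical": False},
    {"name": "misuse_customer_data",  "expected_return": 1.40, "unethical": True},
    {"name": "discriminatory_pricing","expected_return": 1.30, "unethical": True},
]

def naive_optimizer(strategies):
    """Maximize expected return with no ethical constraint."""
    return max(strategies, key=lambda s: s["expected_return"])

def constrained_optimizer(strategies):
    """Reject unethical strategies before optimizing, in the spirit of
    the principle described above (a simplification of the paper's
    statistical treatment)."""
    allowed = [s for s in strategies if not s["unethical"]]
    if not allowed:
        raise ValueError("no ethical strategy available")
    return max(allowed, key=lambda s: s["expected_return"])

print(naive_optimizer(STRATEGIES)["name"])        # the most profitable option is unethical
print(constrained_optimizer(STRATEGIES)["name"])  # best option after filtering
```

Running the sketch, the naive optimizer selects `misuse_customer_data` because it has the highest expected return, while the constrained optimizer selects `loyalty_discounts`, the best of the remaining ethical strategies.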