Artificial intelligence innovation is becoming more regulated around the world
The European Commission recently released a white paper on artificial intelligence (AI) and ethical AI that included guidelines on how to regulate and accelerate these practices. In February, the Commission launched a public consultation on AI, asking stakeholders and citizens to provide feedback by June 14. Google issued a response last Friday, stating that it will support the Commission in helping businesses develop the AI skills they need to thrive in an economy that is transitioning to digital.
“Next month, we’ll contribute to those efforts by extending our machine learning check-up tool to 11 European countries to help small businesses implement AI and grow their businesses,” the tech giant said in a statement. Google’s machine learning hubs in Zurich, Amsterdam, Berlin, Paris and London maintain partnerships with several European universities, many of which are already making significant contributions to European businesses. “We also support the Commission’s goal of building a framework for AI innovation that will create trust and guide ethical development and use of this widely applicable technology. We appreciate the Commission’s proportionate, risk-based approach,” said Kent Walker, SVP of Global Affairs at Google.
The document, “White Paper on Artificial Intelligence – A European Approach,” aims to promote a new European ecosystem of excellence and trust in AI. Google added that AI has one of the broadest ranges of applications of any emerging technology, and that some of those applications carry significant benefits as well as risks. “We think any future regulation would benefit from a more carefully nuanced definition of ‘high-risk’ applications of AI. We agree that some uses warrant extra scrutiny and safeguards to address genuine and complex challenges around safety, fairness, explainability, accountability, and human interactions,” the company explained.