Google wants the government to make some AI rules
Google recently released a white paper, outlining five areas where the government can work with civil society and AI practitioners.
Artificial Intelligence (AI) is the next big thing, at once useful and unsettling. Having machines learn human behaviour to anticipate our needs carries real implications for privacy and security.
To address these concerns, Google recently released a white paper outlining five areas where government can work with civil society and AI practitioners to provide guidance on responsible AI development and use.
These areas are explainability standards, fairness appraisal, safety considerations, human-AI collaboration, and liability frameworks.
AI is still at a nascent stage, and a number of gaps need to be addressed before the technology is rolled out to the masses. Google believes that no one company, country, or community has all the answers, and that it is crucial for stakeholders worldwide to engage in AI governance.