Google CEO Sundar Pichai announced on Thursday that Google will not develop Artificial Intelligence (AI) software for use in weapons or applications that could harm others.
The company has set strict standards for ethical and safe development of AI.
“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a blog post. “As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”
The objectives Google has laid out include that AI should be socially beneficial, should not create or reinforce bias, should be built and tested for safety, should be accountable, and should uphold privacy principles.