
Alphabet, the parent company of Google, has revised its AI principles, removing a previous pledge not to use artificial intelligence (AI) for weapons development or surveillance. The move signals a shift in the company’s approach to AI governance, aligning it more closely with national security priorities.
In a blog post, Google senior vice president James Manyika and Google DeepMind CEO Demis Hassabis defended the update, stating that AI had evolved from a niche research field into a global platform. They argued that businesses and democratic governments must collaborate to develop AI that protects citizens and supports economic and security interests.
The revision comes amid ongoing debates over AI regulation and ethics, particularly concerns about the technology’s use in warfare and surveillance. The company maintains that its AI development remains guided by democratic values such as freedom and human rights.
The announcement coincided with Alphabet’s latest earnings report, which fell short of market expectations despite a 10% increase in digital advertising revenue. The company plans to invest $75 billion in AI projects this year, significantly more than analysts anticipated.
Google’s AI-powered platform, Gemini, has already been integrated into search results and Pixel devices. The company’s evolving stance on AI ethics reflects a broader shift in the tech industry, where commercial and security interests increasingly intersect.
The decision marks a departure from Google’s past commitments, including its 2018 move to end a controversial AI contract with the Pentagon following employee protests. With AI now at the forefront of global innovation and competition, however, Alphabet appears to be embracing a more pragmatic approach to its development and deployment.
