AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google

Geoffrey Hinton, a renowned pioneer in artificial intelligence (AI) and a former leader of Google’s AI research division, has resigned from his position at the tech giant. Hinton has cited growing concerns about the ethical implications of the technology he helped create as his reason for leaving. He aims to speak more openly about the potential risks and harms of AI, especially as the company and its rivals have been racing to develop and deploy ever more powerful and sophisticated models. Hinton’s departure is the latest and most prominent sign of a growing rift between some of the world’s leading AI researchers and the tech companies that employ them.

Hinton is widely regarded as the “Godfather of AI” for his groundbreaking work on deep learning and neural networks. He expressed his concerns about the impact of AI on society, and he has warned about the existential threat of superintelligent AI that could surpass human capabilities and goals. In an interview with the New York Times, he said, “It is hard to see how you can prevent the bad actors from using it for bad things.”

Hinton’s departure comes just over a month after more than 1,000 AI researchers signed an open letter calling for a six-month pause on the training of AI systems more powerful than GPT-4, the newest model released by OpenAI in March. The letter states that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

Many AI researchers have been raising alarms about the social, environmental, and political impacts of AI, as well as the lack of transparency, accountability, and diversity in the field. Timnit Gebru and Margaret Mitchell, two former co-leads of Google's Ethical AI team, were both fired by the company after they challenged its practices and policies on AI ethics.

Gebru, a renowned expert on bias and fairness in AI and a co-founder of Black in AI, a group that promotes diversity and inclusion in the field, was ousted in December 2020 after she co-authored a paper that criticized the environmental and social costs of large-scale language models. Mitchell, who founded Google’s Ethical AI team in 2017 and was a vocal advocate for Gebru, was terminated in February 2021 after she conducted an internal investigation into Gebru’s dismissal and expressed her dissatisfaction with Google’s handling of the situation.

Other prominent voices in AI ethics have also been speaking out against the industry’s practices and priorities. Kate Crawford, a senior principal researcher at Microsoft Research and a distinguished professor at New York University, has written extensively on the risks of AI. She recently published a book titled Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, which reveals the hidden human and environmental tolls of AI production and consumption. Stuart Russell, a professor of computer science at the University of California, Berkeley, and a co-author of Artificial Intelligence: A Modern Approach, the standard textbook on AI, has been warning about the existential threat of superintelligent AI that could surpass human capabilities and goals.

Hinton appears to have left Google on good terms. He notified the company of his intention to resign last month and spoke by phone with Sundar Pichai, the CEO of Google's parent company, Alphabet, on Thursday, though he declined to publicly disclose the details of their conversation. Hinton tweeted that he thinks "Google has acted very responsibly."

The role of AI in society

The concerns raised by Hinton and other AI experts highlight the need for a deeper conversation about the role of AI in society. While AI has the potential to bring about significant benefits in areas such as healthcare, transportation, and education, it also poses significant risks, such as exacerbating existing inequalities, perpetuating bias, and threatening privacy and security.

As AI systems become more sophisticated and powerful, it is essential that we develop frameworks for evaluating and mitigating their potential harms. This requires a multidisciplinary approach that brings together experts from a range of fields, including computer science, law, ethics, and social science.

One promising approach is to develop robust ethical guidelines for AI that take into account the diverse perspectives and values of different stakeholders, including communities that have been historically marginalized or underrepresented in the field. Several organizations, including the IEEE, the EU Commission, and the Partnership on AI, have already developed such guidelines, but there is still much work to be done to ensure that they are widely adopted and enforced.

Another critical area for research is the development of more transparent and explainable AI systems that enable users to understand how decisions are made and provide a mechanism for challenging or appealing them. This is particularly important in domains such as healthcare and criminal justice, where AI systems can have life-altering consequences for individuals and communities.

Finally, it is essential that we continue to invest in education and training programs that prepare the next generation of AI professionals to approach their work with a critical and ethical lens. This includes fostering diversity and inclusion in the field and promoting collaboration across disciplines and sectors.
