Of all the modern demands on Artificial Intelligence, “open the pod bay doors” is arguably the most infamous. Decades after Kubrick’s 2001: A Space Odyssey, you can pose the request to your own pocket AI. Unlike Siri, however, HAL 9000, the fictional AI who kept the pod bay doors closed, remains a revealing reflection of humanity’s anxieties about technology created in our own image.
Perhaps the same underlying unease that makes HAL’s misbehavior resonate to this day is partly behind Google, DeepMind, Facebook, Amazon, Microsoft and IBM banding together to police the development and implementation of AI. Notably absent are OpenAI (the research organization co-founded by Elon Musk) and Apple. The group’s chosen name, “Partnership on AI to Benefit People and Society,” is fairly self-explanatory, if clunky and slightly sinister. As stated on the Partnership’s website, the group was formed to leverage the “great promise” of AI “for raising the quality of people’s lives” and addressing “important global challenges such as climate change, food, inequality, health, and education.” However, many of the Partnership’s stated goals seem geared towards shaping future public attitudes towards AI through knowledge and reassurance:
- Advance public understanding of artificial intelligence.
- Create standards for future research.
- Support best practices.
- Create open discussion.
Further, some of the Partnership’s eight tenets listed on its website almost directly address public concerns regarding AI and its logical conclusion, robot uprising. In general, the tenets involve the following principles:
- As many people as possible should be benefited and empowered by AI.
- The public should be involved in the development of AI.
- Development will be held accountable to “a broad range of stakeholders.”
- Research and development should be conducted with transparency, and systems’ reasoning should be both transparent and explainable.
- Development will seek public feedback and address public questions.
The Partnership’s formation, as well as its focus on transparency, public involvement, and ethics, closely mirrors the White House’s series of workshops and groups on AI’s risks and benefits. The government’s workshops addressed complex policy, safety, and security questions, and also revealed fears that jobs will be lost to AI and that its behavior will be unpredictable and uncontrollable.
One must wonder whether the Partnership is less an exercise in selflessness than a strategic maneuver to control the AI conversation. Arguably, presenting self-regulatory solutions via the Partnership could discourage government regulation, and in so doing avoid regulatory speed bumps to company growth.