On the 10th of July, team members of OpenAI released a paper on arXiv called The Role of Cooperation in Responsible AI Development, by Amanda Askell, Miles Brundage and Gillian Hadfield. One of the main statements in the article goes as follows: “Competition between AI companies could decrease the incentives of each company to develop responsibly by increasing their incentives to develop faster. As a result, if AI companies would prefer to develop AI systems with risk levels that are closer to what is socially optimal — as we believe many do — responsible AI development can be seen as a collective action problem.” So how do they propose we approach this problem?
AI in health and wellbeing is increasingly being implemented.
In the short term, harmful scenarios have already occurred, for example:
Longer term, larger scale scenarios include dangers such as:
Responsible AI development is described as lowering risk: in this case safety risks, security risks, and the structural risks associated with AI systems.
It is said in the article that there may be uncertainty about which direction AI investments will take when it comes to prioritising speed over a safe pace of progress: “If the additional resources invested in ensuring that an AI system is safe and beneficial could have been put towards developing an AI system with fewer constraints more quickly, we should expect responsible AI development to require more time and money than incautious AI development.” As such they are raising an important question about the risk perspective versus the often raised issue of competitiveness. Perhaps a make-it-work attitude could be harmful at scale?
“This means that responsible development is particularly costly to companies if the value of being the first to develop and deploy a given type of AI system is high.”
The cost discussed here, however, relates to the possible financial benefit, and in a sense to ‘losing out’ on opportunities: not building certain ‘lucrative AI systems’ on the grounds of safety, security, or impact evaluation. Most discussion of races assumes they have a definitive endpoint, whereas here AI development is described as a ‘perpetual R&D race’.
Thankfully, OpenAI does outline these costs of responsibility before proceeding to the upside: if consumers have a preference for safer products and respond rationally to this preference, they will not buy products that are insufficiently safe.
There is a balancing act, a weighing of the downside by companies. “Harms that safety failures inflict on non-consumers are negative externalities, and benefits that safer products produce for non-consumers are positive externalities.” This dilemma arises in companies that are developing products: if there is no punishment or pressure from consumers, it may be hard to hold companies accountable.
However, not all workers or owners ignore responsibility, which is said to have been demonstrated by recent actions by tech workers (protests etc.).
There have to be stronger incentives for being responsible. Market forces, liability law, and industry or government regulation may not be enough according to OpenAI, and they outline why with a compelling argument.
“As cutting-edge AI systems become more complex, it will be difficult for consumers not involved in the development of those systems to get accurate information about how safe the systems are.”
It is notoriously difficult to explain the decisions of a given system. A ‘wait and see’ strategy could be damaging in this regard, particularly to consumers once these algorithms are implemented at a large scale. I have written previously about ‘track and tell’, another practice that seems common among technology companies: first track a data point, then tell the consumer later. Little responsibility seems to be taken in either regard.
If regulators cannot assess how risky a given AI system is, they may be overly stringent or overly liberal when applying regulatory controls.
Harms are also more likely to fall on “…those accused of crimes than those that purchase the tools”. The researchers from OpenAI list a few points regarding harm that I thought I would convey:
The last point in particular is of interest to me, due to the lack of discussion of climate change or the climate crisis in the community developing artificial intelligence solutions and applications, or researching in the field. It is astonishing to me, as an example, that climate change is not mentioned a single time in this paper; it is an obvious point that goes unmentioned despite how thorough the paper otherwise is. In OpenAI's defence, the word ‘sustainability’ is mentioned once, and there is an encouragement to collaborate on AI for Good initiatives.
They ask the question of “…why racing to the bottom on product safety is not ubiquitous in other industries in which decreasing time-to-market is valuable, such as in the pharmaceutical industry”. This statement, however, is outright false. A blatant example, made by the comedian John Oliver, is opioids. In this context there is additionally bragging by certain CEOs about how fast they can get a drug approved. There was a record number of approvals by the FDA in 2018, and McKinsey mentions that there have been long-running discussions about being first to market. The race to the bottom is happening in pharmaceuticals too; however, the point was perhaps that better regulation may be necessary in the field of artificial intelligence.
There is additionally the mention of: “Collective action problems between companies can have positive effects on consumers and the public. A price war is a collective action problem between companies with mostly positive effect on consumers, for example, as it results in lower prices.” This ignores the fact that a price war affects producers far more, and people do get laid off, some of the poorest factory and farm workers among them. It has a very negative effect, and I would not argue that ‘mostly positive’ is characteristic in this sense; it ignores a focus on labour.
There are some attempts in this paper to play out scenarios of a cooperate-defect game; however, as the authors say, a shortcoming of their analysis is that it appeals to an overly simplified conception of cooperation and defection. They have argued that: “in order to “solve” a collective action problem, we can try to transform it into a situation in which mutual cooperation is rational.” However, this assumes economic rationality, a rational human actor, which generally does not tend to be the case.
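To make the idea of “transforming” a collective action problem concrete, here is a minimal sketch in Python. The payoff numbers are illustrative assumptions of mine, not from the paper: a prisoner's-dilemma-style game where each company chooses to develop responsibly (C) or cut corners to move faster (D), and where a hypothetical external cost on defection (regulation, reputational damage) makes cooperation the best response.

```python
# Row player's payoffs, indexed by (my_move, their_move).
# C = develop responsibly, D = cut corners to move faster.
# Illustrative numbers only (a standard prisoner's dilemma shape).
pd = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(payoff, their_move):
    """Return the row player's best move against a fixed opponent move."""
    return max(["C", "D"], key=lambda my: payoff[(my, their_move)])

# In the untransformed game, defecting is the best response either way,
# so mutual defection is the (inefficient) equilibrium.
assert best_response(pd, "C") == "D"
assert best_response(pd, "D") == "D"

# "Transform" the game: a hypothetical penalty of 3 (regulation,
# reputational cost) subtracted from any player who defects.
transformed = {(my, their): p - (3 if my == "D" else 0)
               for (my, their), p in pd.items()}

# Now cooperating is the best response to either opponent move,
# so mutual cooperation becomes rational.
assert best_response(transformed, "C") == "C"
assert best_response(transformed, "D") == "C"
```

The design point is simply that the strategies never change, only the incentives do: once the defection payoff drops below the cooperation payoff against every opponent move, rational actors cooperate. Whether real actors behave this rationally is exactly the objection raised above.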
I could explore the arguments about irrationality, or the lack of rationality, via Nobel Prize recipient Daniel Kahneman and others criticising homo economicus: the myth of the rational man is much exaggerated. I recommend his lecture Maps of Bounded Rationality.
Five factors are outlined that make it more likely that AI companies will cooperate when faced with a collective action problem:
They do say their points are lacking, and they stress:
“…there is a need for translating these factors into tangible policy strategies that various actors can implement in order to improve cooperation prospects.”
The authors argue against using adversarial rhetoric in AI development. Another possibility is joint research, and considering the recently released bsuite by DeepMind, which builds on OpenAI Gym and Google Dopamine, this seems to be an approach OpenAI is somewhat taking. “Active and explicit research collaboration in AI, especially across institutional and national borders, is currently fairly limited in quantity, scale, and scope.” They outline a few possibilities for collaboration:
Areas to consider might include:
There is a call for openness in this paper, yet also a consideration of its implications, and of not opening everything up. “Full transparency is problematic as an ideal to strive for, in that it is neither necessary nor sufficient for achieving accountability in all cases”
There is a list of possible incentives mentioned in the OpenAI paper, and they are the following:
The authors outline a series of questions that have to be explored further. Verbatim, they are listed as the following:
I am very happy that OpenAI raises these issues, and despite some critical thoughts along the way regarding certain of the theoretical points, overall this can be said to be an incredibly important initiative, moving the field towards a more responsible use of technology. This is additionally the role that OpenAI has been asked to take to ensure that humanity benefits from this technological change. I would of course encourage you not to trust my opinion, and to make up your own by reading the original paper. Otherwise, I hope this was useful, and do leave me any thoughts should you think of something.
Originally posted here