AI and collective action

Posted on 12th November 2019


Written by Alex Moltzau, Co-Founder, AI Social Research

Towards a more responsible development of artificial intelligence with a research paper from OpenAI

On the 10th of July, team members of OpenAI released a paper on arXiv called The Role of Cooperation in Responsible AI Development, by Amanda Askell, Miles Brundage and Gillian Hadfield. One of the main statements in the article goes as follows: “Competition between AI companies could decrease the incentives of each company to develop responsibly by increasing their incentives to develop faster. As a result, if AI companies would prefer to develop AI systems with risk levels that are closer to what is socially optimal — as we believe many do — responsible AI development can be seen as a collective action problem.” So how do they propose we approach this problem?

Responsible AI development?

AI is increasingly being implemented in areas such as health and wellbeing.

In the short term, harmful scenarios have already occurred, for example:

  • Biases learned from large datasets distorting decisions in credit markets.
  • Historical data and algorithms distorting decisions in the criminal justice system.
  • Facial recognition technologies disrupting established expectations of privacy and autonomy.
  • Auto-pilot functions in some automobiles causing new types of driving risk (while reducing others).

Longer term, larger scale scenarios include dangers such as:

  • Inadvertent escalation of military conflict involving autonomous weapon systems.
  • Widespread job displacement.
  • Threats to political and social institutions.

Responsible AI development is described as lowering risk, in this case the safety, security, and structural risks associated with AI systems.

Costly mistakes or costly responsibility?

The article notes that there may be confusion about which direction AI investments will take when speed is prioritised over a safe pace of progress: “If the additional resources invested in ensuring that an AI system is safe and beneficial could have been put towards developing an AI system with fewer constraints more quickly, we should expect responsible AI development to require more time and money than incautious AI development.” As such, they raise an important question about the risk perspective versus the often-raised issue of competitiveness. Perhaps a make-it-work attitude could become harmful at scale?


The cost discussed here, however, relates to possible financial benefit and, in a sense, ‘losing out’ on opportunities:

  1. Potential loss of a first-mover advantage
  2. Performance costs
  3. Loss of revenue

These costs stem from not building certain ‘lucrative AI systems’ on the grounds of safety, security, or impact evaluation. While discussion of races often assumes they have a definitive endpoint, the paper describes AI development as a ‘perpetual R&D race’.

Benefits of responsibility

Thankfully, OpenAI seemingly outlines these negatives of responsibility only to proceed to the upside. If consumers have a preference for safer products and respond rationally to this preference, they will not buy products that are insufficiently safe.

Companies are left weighing these downsides against the benefits. “Harms that safety failures inflict on non-consumers are negative externalities, and benefits that safer products produce for non-consumers are positive externalities.” This dilemma arises for companies developing products: if there is no punishment or attention from consumers, it may be hard to hold companies accountable.

However, not all workers or owners ignore responsibility, as recent actions by tech workers (protests and so on) have demonstrated.

It may not be easy, and here is why

There have to be stronger incentives for being responsible. Market forces, liability law, and industry or government regulation may not be enough, according to OpenAI, and they outline why with a compelling argument.


It is notoriously difficult to explain the decisions of a given system. A ‘wait and see’ strategy could be damaging in this regard, particularly to consumers, when these algorithms are implemented at large scale. I have written previously about ‘track and tell’, another practice that seems common among technology companies: first track a data point, then tell the consumer later. Little responsibility seems to be taken in either regard.

If regulators cannot assess how risky a given AI system is, they may be overly stringent or overly liberal when using regulatory controls.

Harms are also more likely to fall on “…those accused of crimes than those that purchase the tools”. The researchers from OpenAI list a few points regarding harm that I thought I would convey:

  • Reduced trust in online sources.
  • Harms too large for any one company or insurer to cover all losses.
  • AI systems could create negative externalities for future generations that are not in a position to penalize companies or prevent the harms from occurring.

The last point is of particular interest to me, given the lack of discussion of climate change or the climate crisis in the community developing artificial intelligence solutions and applications, or researching in the field. It is astonishing to me that climate change is not mentioned a single time in this paper; it seems an obvious point to include, given how thorough the paper otherwise is. In OpenAI’s defence, the word ‘sustainability’ is mentioned once and there is an encouragement to collaborate on AI for Good initiatives.

They ask the question of “…why racing to the bottom on product safety is not ubiquitous in other industries in which decreasing time-to-market is valuable, such as in the pharmaceutical industry”. This statement, however, is outright false. A blatant example made by the comedian John Oliver is opioids. In this context there is also bragging by certain CEOs about how fast they can get a drug approved. There was a record number of approvals by the FDA in 2018, and McKinsey mentions there have been long-running discussions about being first to market. The race to the bottom is happening in pharmaceuticals too; however, the point was perhaps that better regulation may be necessary in the field of artificial intelligence.

The paper additionally mentions: “Collective action problems between companies can have positive effects on consumers and the public. A price war is a collective action problem between companies with mostly positive effect on consumers, for example, as it results in lower prices.” This overlooks the fact that a price war affects producers far more, and people do get laid off, some of the poorest factory and farm workers among them. It has a very negative effect, and I would not argue that ‘mostly positive’ is a fair characterisation here; it ignores a focus on labour.

The paper makes some attempts to play out scenarios of a cooperate-defect game; however, as the authors say, a shortcoming of their analysis is that it appeals to an overly simplified conception of cooperation and defection. They have argued that “in order to “solve” a collective action problem, we can try to transform it into a situation in which mutual cooperation is rational.” However, this assumes economic rationality, a rational human actor, which generally does not tend to be the case.
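To make the idea of transforming the game concrete, here is a minimal sketch in Python. It is my own illustration rather than anything from the paper, and the payoff numbers are invented; the external penalty stands in for whatever mechanism (regulation, liability, reputational cost) makes defection less attractive.

```python
# A minimal sketch of the cooperate-defect framing. The payoff numbers are
# invented for illustration: "cooperate" stands for investing in responsible
# development, "defect" for racing ahead without those investments.

# payoffs[my_action][their_action] = my payoff
payoffs = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

def best_response(their_action):
    """Return the action that maximises my payoff, given the other's action."""
    return max(payoffs, key=lambda my_action: payoffs[my_action][their_action])

# In the untransformed game, defecting is the best reply to either action,
# so rational actors end up at mutual defection.
print(best_response("cooperate"), best_response("defect"))  # -> defect defect

# "Transforming" the game: an external penalty attached to defection
# (regulation, liability, reputational cost) changes the best replies.
penalty = 3
for their_action in payoffs["defect"]:
    payoffs["defect"][their_action] -= penalty

# Now cooperating is the best reply to either action, so mutual cooperation
# becomes the rational outcome.
print(best_response("cooperate"), best_response("defect"))  # -> cooperate cooperate
```

With these invented numbers, defection dominates in the original game, and attaching a penalty to defection flips the best responses so that mutual cooperation becomes rational, which is the kind of transformation the quote describes.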

I could explore the arguments about irrationality, or the lack of rationality, through Nobel Prize recipient Daniel Kahneman and others who criticise homo economicus and argue that the myth of the rational man is much exaggerated. I recommend his lecture Maps of Bounded Rationality.

How can cooperation be improved?

Five factors are outlined that make it more likely that AI companies will cooperate when faced with a collective action problem (a rough sketch of how they fit together follows the list):

  1. Being more confident that others will cooperate.
  2. Assigning a higher expected value to mutual cooperation.
  3. Assigning a lower expected cost to unreciprocated cooperation.
  4. Assigning a lower expected value to not reciprocating cooperation.
  5. Assigning a lower expected value to mutual defection.
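Read as an expected-value comparison, the five factors can be sketched roughly as follows. This is my own illustration with invented numbers, not a calculation from the paper.

```python
# A rough sketch (invented numbers) of how the five factors can be read as an
# expected-value comparison between cooperating and defecting.

p_cooperate = 0.6            # 1. confidence that the other company cooperates
v_mutual_cooperation = 10    # 2. value of mutual cooperation
v_unreciprocated = -4        # 3. cost of cooperating while the other defects
v_not_reciprocating = 12     # 4. value of defecting while the other cooperates
v_mutual_defection = -2      # 5. value of mutual defection

ev_cooperate = p_cooperate * v_mutual_cooperation + (1 - p_cooperate) * v_unreciprocated
ev_defect = p_cooperate * v_not_reciprocating + (1 - p_cooperate) * v_mutual_defection

print(f"EV(cooperate) = {ev_cooperate:.1f}, EV(defect) = {ev_defect:.1f}")
# With these numbers defection still wins. Each factor is a lever: more
# confidence in others (1), a higher value of mutual cooperation (2), a smaller
# cost of unreciprocated cooperation (3), and lower values for not
# reciprocating (4) and mutual defection (5) all push towards cooperation
# becoming the rational choice.
```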

They do say their points are lacking, and they stress this limitation.


The authors argue against using adversarial rhetoric in AI development. Another possibility is joint research, and considering the recently released bsuite by DeepMind, which draws on tools such as OpenAI Gym and Google Dopamine, this seems to be an approach OpenAI is, to some extent, already taking. “Active and explicit research collaboration in AI, especially across institutional and national borders, is currently fairly limited in quantity, scale, and scope.” They outline a few possibilities for collaboration:

Areas to consider might include:

  • Joint research into the formal verification of AI systems’ capabilities and other aspects of AI safety and security with wide application;
  • Various applied “AI for good” projects whose results might have wide-ranging and largely positive applications (e.g. in domains like sustainability and health);
  • Coordinating on the use of particular benchmarks;
  • Joint creation and sharing of datasets that aid in safety research;
  • Joint development of countermeasures against global AI-related threats such as the misuse of synthetic media generation online.

There is a call for openness in this paper, yet also a consideration of the implications and a caution against opening up everything. “Full transparency is problematic as an ideal to strive for, in that it is neither necessary nor sufficient for achieving accountability in all cases.”

We can incentivise good behaviour

There is a list of possible incentives mentioned in the OpenAI paper, and they are the following:

  • Social incentives (e.g. valorizing or criticizing certain behaviors related to AI development) can influence different companies’ perceptions of risks and opportunities.
  • Economic incentives (induced by governments, philanthropists, industry, or consumer behavior) can increase the share of high-value AI systems in particular markets or more generally, and increase attention to particular norms.
  • Legal incentives (i.e. proscribing certain forms of AI development with financial or greater penalties) could sharply reduce temptation by some actors to defect in certain ways.
  • Domain-specific incentives of particular relevance to AI (e.g. early access to the latest generation of computing power) could be used to encourage certain forms of behavior.

Questions resulting from this paper

The authors outline a series of questions that have to be explored further. Verbatim, they are listed as the following:

  1. How might the competitive dynamics of industry development of AI differ from government-led or government-supported AI development?
  2. What is the proper role of legal institutions, governments, and standardization bodies in resolving collective action problems between companies, particularly if those collective action problems can arise between companies internationally?
  3. What further strategies can be discovered or constructed to help prevent collective action problems for responsible AI development from forming, and to help solve such problems if they do arise? What lessons can we draw from history or from contemporary industries?
  4. How might competitive dynamics be affected by particular technical developments, or expectations of such developments?

Final reflection

I am very happy that OpenAI raises these issues, and despite some critical thoughts along the way regarding certain of the theoretical points, overall this can be said to be an incredibly important initiative, moving the field towards a more responsible use of technology. This is additionally the role that OpenAI has been asked to take: to ensure that humanity benefits from this technological change. I would of course encourage you not to trust my opinion, but to make up your own by reading the original paper. Otherwise, I hope this was useful, and do make sure to leave me any thoughts should you think of something.


Originally posted here
