Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously.
This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.
In response to the changing threat landscape, we make the following high-level recommendations:
Like our work on concrete problems in AI safety, we've grounded some of the problems motivated by the malicious use of AI in concrete scenarios, such as:
We're excited to start having this discussion with our peers, policymakers, and the general public; we've spent the last two years researching and solidifying our internal policies at OpenAI, and we're going to begin engaging a wider audience on these issues.
We're especially keen to work with more researchers who see themselves contributing to the policy debates around AI as well as making research breakthroughs.