2016–2019 Progress Report: Advancing Artificial Intelligence R&D

 


November 2019

 

The United States national strategy for artificial intelligence (AI), the American AI Initiative, identifies research and development (R&D) as a top priority for maintaining global leadership in AI. The United States leads the world in AI innovation, due in large part to its robust R&D ecosystem. Federal agencies contribute significantly to AI innovation by investing in numerous world-class research programs in areas consistent with the unique missions of each agency. 

This 2016–2019 Progress Report on Advancing Artificial Intelligence R&D (“2016–2019 Progress Report”) documents the important progress that agencies are making to deliver on Federal AI R&D. 

Guiding Federal research investments is the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update (“2019 Plan”), which builds upon the 2016 version of the Plan. The 2019 Plan articulates eight national AI R&D strategies:

Strategy 1: Make long-term investments in AI research. 

Strategy 2: Develop effective methods for human-AI collaboration. 

Strategy 3: Understand and address the ethical, legal, and societal implications of AI.

Strategy 4: Ensure the safety and security of AI systems. 

Strategy 5: Develop shared public datasets and environments for AI training and testing.

Strategy 6: Measure and evaluate AI technologies through benchmarks and standards.

Strategy 7: Better understand the national AI R&D workforce needs.

Strategy 8: Expand public-private partnerships to accelerate advances in AI. 

This 2016–2019 Progress Report highlights AI research first by strategy, then by sector, with supporting details describing individual agency contributions to provide a whole-of-government overview. The diversity of programs and activities reflects the remarkable breadth and depth of Federal investments in AI. The report not only covers the broad themes of Federal R&D but also provides illustrative examples, in sidebars, of individual agency AI R&D breakthroughs that advance the state of the field. 

Taken as a whole, the 2016–2019 Progress Report conveys the following key messages: 

  1. The Federal Government is investing in a considerable breadth and depth of innovative AI concepts that can transform the state of the field. 
  2. The United States benefits significantly from the broad spectrum of Federal agencies that invest in AI from their unique mission perspectives, consistent with the national AI R&D strategy. 
  3. Federal investments have generated impactful breakthroughs that are revolutionizing our society for the better. 

Collectively, the investments described in this report demonstrate how the Federal Government leverages and improves America’s AI capabilities through R&D and ensures that those capabilities will increase prosperity, safety, security, and quality of life for the American people for decades to come. 

 

You can access the full report via the link below:

 

The US Air Force AI Strategy

 


 

2019

The United States Air Force Artificial Intelligence Annex to The Department of Defense Artificial Intelligence Strategy

 

Executive Summary

 

Artificial intelligence is poised to change how warfare is conducted in the 21st century. The comparative advantage currently enjoyed by the Air Force will either erode or strengthen depending on the manner in which we adopt these technologies.

Unlike previous technological advances, AI has already proliferated into many commercial enterprises and, as such, cannot be governmentally controlled or contained. Just as the commercial sector has rushed to embrace these technologies, our global competitors are overtly accelerating the integration and weaponization of AI as an effective measure to counter our traditional strengths and exploit our perceived weaknesses. This is especially true for our Air Force, where our ability to execute missions across Air, Space, and Cyberspace relies on insights driven by data and information. Depending on the strategic choices we make now, our ability to operate around the globe may be blunted or bolstered by the adoption of—or hardening against—artificial intelligence.

The Air Force is charged to provide the nation with Air and Space Superiority; Global Strike; Rapid Global Mobility; Intelligence, Surveillance, and Reconnaissance; and Command and Control. AI is a capability that will underpin our ability to compete, deter, and win across all five of these diverse missions. It is crucial to fielding tomorrow’s Air Force faster and smarter, executing multi-domain operations in the high-end fight, confronting threats below the level of open conflict, and partnering with our allies around the globe.

This Annex and associated Appendix serve as the framework for aligning our efforts with the National Defense Strategy and the Department of Defense Artificial Intelligence Strategy as executed by the Joint Artificial Intelligence Center (JAIC). It details the fundamental principles, enabling functions, and objectives necessary to effectively manage, maneuver, and lead in the digital age. Doing so is contingent upon our ability to operationalize AI for support and warfighting operations alike.

Artificial intelligence is not the solution to every problem. Its adoption must be thoughtfully considered in accordance with our ethical, moral, and legal obligations to the Nation. As stewards of this great responsibility, Airmen should execute their assigned missions with a focus on emerging technologies, but also with an understanding that everything we do is a human endeavor.

In this return to great power competition, the United States Air Force will harness and wield the most representative forms of AI across all mission-sets, to better enable outcomes with greater speed and accuracy, while optimizing the abilities of each and every Airman. We do this to best protect and defend our Nation and its vital interests, while always remaining accountable to the American public.

 

You can find the original, full Annex below:

https://www.af.mil/Portals/1/documents/5/USAF-AI-Annex-to-DoD-AI-Strategy.pdf

Two Close Friends: AI and Law

 


 

I interviewed Prof. Ryan Calo who is the Lane Powell and D. Wayne Gittinger Associate Professor at the University of Washington School of Law. He is a faculty co-director (with Batya Friedman and Tadayoshi Kohno) of the University of Washington Tech Policy Lab, a unique, interdisciplinary research unit that spans the School of Law, Information School, and Paul G. Allen School of Computer Science and Engineering. Professor Calo’s research on law and emerging technology appears or is forthcoming in leading law reviews (California Law Review, University of Chicago Law Review, and Columbia Law Review) and technical publications (MIT Press, Nature, Artificial Intelligence) and is frequently referenced by the mainstream media (NPR, New York Times, Wall Street Journal).

 

Pleasant reading…

 

Cetin: Let’s begin with a personal question. Robots and law… It has become quite a popular topic. How do you evaluate the development of this field?

Calo: I am very happy with the trajectory of robotics law and policy in the past ten years. It went from being a bit of a fringe conversation, at least in the United States, to a mature sub-discipline with sophisticated theory and concrete examples. I think that We Robot—the annual conference you’ve attended—has been instrumental.

 

Cetin: Countries have gradually started to determine their AI policies. What do you think about the effects of the democratic and economic structure of countries on AI policies?

Calo: Good question. Some countries are seeing AI as an opportunity to be more globally competitive, whereas others are worried about preserving their edge. The best policies in my view think about the social impacts of AI on their own society while understanding AI as a global enterprise. I don’t like the rhetoric of AI as a “race” that one or more countries will “win.” This kind of thinking leads to harmful shortcuts and hinders cooperation.

 

Cetin: Developing countries largely import their technological products, and these products find great demand in their domestic markets. How do you think this situation affects AI regulation in developing countries?

Calo: I think it’s important to keep in mind that technology brings with it cultural and other assumptions. So when developed economies export technology to less developed ones, there is the potential that the values of those developed nations will accompany the product. Thus I think the AI policy of developing countries should include best practices around procurement. What I mean is that developing countries, though they may not be developing AI at the same rates, still have market power and can insist that the products they import respect their values and well-being. No entity should import AI without insisting on this.

 

Cetin: Comparing the Anglo-Saxon (common law) system with the continental (civil law) system, what might the regulatory challenges of AI and robots be?

Calo: I actually think the challenges of robotics law are pretty consistent across common law and civil law. They include assessing responsibility for harm, privacy, and questions of control and ownership. It may be that common law proves more flexible in reacting to new technology but there’s no inherent reason that would be the case.

 

Cetin: Especially in recent years, technology companies have taken steps on artificial intelligence and ethics; Google is one example. What should be the most important issue for companies when determining policy on artificial intelligence and ethics?

Calo: I have long argued that we cannot emphasize ethics to the exclusion of law and policy. This phenomenon has become known as “ethics washing,” which captures the intuition many have that companies would rather craft their own ethical guidelines than face mandatory rules from government. So while the content of ethics programs is very important, so is the question of legitimacy.

With this understanding in place, companies should emphasize the ways that the harms and benefits of AI are often unevenly distributed, and have processes in place to co-design AI with all stakeholders and assess the social impacts of new technologies, especially on the most vulnerable. This is more important, in my opinion, than mere transparency.

 

Cetin: The European Union recently adopted the Cybersecurity Act, an important step for information security in the EU. What about the US? How do you evaluate the US approach to cybersecurity in terms of private-sector and government applications?

Calo: I worry that the definition of hacking is outdated in light of AI, especially adversarial machine learning. I wrote a paper about this with colleagues in computer science entitled Is Tricking a Robot Hacking? We argue that manipulating AI by tricking it is becoming just as dangerous as breaking into a computer system. We need ways to make AI more robust against attack while also protecting researchers who are testing AI for insecurity or bias.

 

Cetin: The CCPA is one of the most important regulations in California, but there is still no federal regulation on data protection in the US. What are the effects of this situation on individuals and companies?

Calo: I don’t know. Lots of people and groups, including companies themselves, are calling for federal baseline data protection in the United States. I think everyone is tired of the uncertainty and the fear and instability that result. I don’t think the CCPA is perfect, but I credit California for jump-starting the conversation in the U.S.

 

Cetin: I am sure you are asked this question a lot, but I will ask it again for the robotic.legal readers. What are your suggestions for university students who want to improve themselves in robot law?

Calo: Great question! I would say to attend, or at least watch and follow, We Robot. That is where this conversation is most vibrant. But also, seek out people in other disciplines. If you’re a roboticist, find the law and social science people. If you’re in law or social science, talk to the robotics and AI students and faculty. As I often say, very few important questions can be resolved by reference to any single discipline.

Thanks for your always excellent questions, Selin, and for educating people about robotics law!

 

Respects to Dear Ryan Calo…