New Technologies Pose Ethical Challenges for Agencies
Officials need to ensure that efforts to modernize do not conflict with essential moral principles.
Emerging technologies offer agencies many opportunities to enhance services, but what happens when we take humans out of the equation? Some may assume that because technology is not human, questions of ethics do not apply to it. That is not the case.
While artificial intelligence can vastly expand our access to knowledge, it can also amplify bias. The internet of things brings convenience to our lives but raises privacy concerns. Virtual reality can educate us through immersion but can also be addictive. As government organizations test these technologies and set the standards for their use, they must help protect citizens and organizations and promote positive outcomes. Here are a few things government officials can do to ensure that the adoption of new technologies does not conflict with essential moral principles.
Develop roles and teams dedicated to the ethical use of technology. Clear accountability will help drive an ethical technology agenda in government. The government already has groups focused on IT modernization, such as the National Institute of Standards and Technology, the Office of Science and Technology Policy, and the Office of American Innovation, that could be leveraged to lead this effort. When DJ Patil served as U.S. Chief Data Scientist, he saw it as part of his role to “work carefully and thoughtfully to ensure data science policy protects privacy and considers societal, ethical, and moral consequences.” The future OSTP director could assume this role as well. Governments could also consider creating new teams.
The European Union, recognizing challenges with ethics in robotics and artificial intelligence, called for the creation of “a new European agency for robotics to supply public authorities with technical, ethical and regulatory expertise and a voluntary ethical code of conduct to regulate who would be accountable for the social, environmental and human health impacts of robotics and ensure that they operate in accordance with legal, safety and ethical standards.” Beyond cross-government groups, individual agencies and departments should consider their own use of emerging technology and designate who is responsible for ensuring its ethical application.
Create or update policies, regulations, and standards that guide technology ethics while still allowing for innovation. Consortiums of private sector organizations, like the Partnership on AI to Benefit People and Society, have begun to consider standards around the ethics of technology, but the private sector does not have government’s broad view of the public interest. It is through this lens that government can help ensure the ethical development and use of technology. However, current policies and regulations may not be keeping pace with technology, and in some cases may even hinder its ethical or efficient use.
As Vivek Wadhwa noted in the MIT Technology Review, effective laws and ethical standards are guidelines accepted by members of a society, and they require the development of a social consensus. While the current pace of technological change is rapid, the pace of social consensus is much slower. Nonetheless, there are precedents for regulators and policymakers to reference, such as foreign policy guiding the ethics of war, state data security laws, and the soon-to-be-enforced General Data Protection Regulation in Europe.
Research, test, and collaborate with experts and stakeholders to better understand emerging technology and its ethical implications. Ultimately, to create the policies, regulations, and standards for the ethical use of technology, government organizations must understand the technology itself by conducting or funding research, testing outcomes, and engaging experts and stakeholders both inside and outside government. The Defense Advanced Research Projects Agency describes its responsibility as twofold: its core function is to push critical technological frontiers ahead of U.S. adversaries while “addressing the broader societal questions raised by its work.”
To that end, DARPA engages with a variety of experts and stakeholders, both to hear what they have to say and to convey the agency’s insights about what technology can and cannot do. One example is DARPA’s work on increasing trust in AI, for which it awarded $6.5 million to computer science professors at Oregon State University to research how people can better understand and communicate with AI systems.
Educate and enable government employees to understand and reduce ethical risks. The final component is ensuring that government employees and constituents are educated and have the tools and resources to use technology ethically. The Office of Government Ethics has published 14 general principles outlining ethical behavior for government employees, which could be updated to cover the ethical use of technology and data. In addition, tools such as mobile apps could make it easier for employees to understand technology and its ethical implications. For example, in 2017 the Agriculture Department launched a mobile app to answer employees’ ethics questions on the go.
While it may seem easier to take a wait-and-see approach to avoid the risks associated with emerging technology, these tools are already deeply embedded in our everyday lives. Citizens are looking to government to provide protections that the market cannot: to understand the technology and proactively address ethical concerns. This requires government organizations to be nimbler as changes occur more rapidly and unforeseen consequences arise, to provide a framework for ethical modernization, and to be the voice of the human in an increasingly humanless landscape.
Darcie Piechowski is the social media and innovation fellow at the IBM Center for the Business of Government.