Artificial Intelligence and the Ouster of Humans from Public Organizational Decisions

Like most science fiction fans, I have spent most of my life waiting for two discoveries that would fundamentally change humankind’s role in the world. One is the discovery of alien life. The second is the development of artificial intelligence (AI).

The concept of an AI that surpasses our own intelligence permeates our culture. It is also a salient enough threat that Stephen Hawking, Elon Musk and dozens of AI experts penned an open letter one year ago this week through the Future of Life Institute, outlining steps for steering the development of machine learning away from a devastating evolutionary course.

Public administration research, in particular the study of organizational decision-making, should be playing a major role in that conversation.

Public organizations, both hierarchical and networked structures, distribute public goods and services, with humans sitting at most of the decision nodes. They function in a rule-structured environment of human interactions. Yet they are still not very good at long-term sustainability or at working marginalized citizens into authentic decision-making positions within government.

Public organizational goals such as efficiency are imbued with normative values. The humans at the decision nodes may be expected to rank efficiency or effectiveness over secondary objectives such as social equity. This doesn’t mean bureaucrats don’t care about the disadvantaged; in fact, they are generally ethical. But they may be inconsistent in how they rank those objectives.

With widespread distrust in government, AI may be eagerly deployed to reduce perceived bureaucratic incompetence. Human decision-making nodes are likely to become scarce as machine learning advances. If we cannot engage uninterested or disadvantaged citizens in policymaking and implementation now, what will that task look like when machines displace the bureaucrats?

According to Herbert Simon, humans have biases, sympathies and cognitive limits that force them to take mental shortcuts and make only boundedly rational decisions, settling for options that are “good enough” rather than optimal. AI will be able to overcome these limits. Public organizations may be rendered far more efficient by removing corruptible and incompetent humans from public service delivery. But how will an autonomous law enforcement drone apply ethical standards?
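To make Simon’s idea concrete, here is a minimal sketch (my own illustration, not anything from Simon or the original post) contrasting an optimizer, which scores every option, with a satisficing agent that accepts the first option clearing an aspiration threshold:

```python
import random

def optimize(options, utility):
    """Exhaustive search: score every option and pick the best."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    """Simon-style shortcut: accept the first option that is 'good enough'."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return options[-1]  # nothing cleared the bar; settle for the last one seen

random.seed(1)
options = [random.random() for _ in range(1000)]
utility = lambda x: x  # toy utility: the option's own value

print("optimizer picks:  %.3f" % optimize(options, utility))
print("satisficer picks: %.3f" % satisfice(options, utility, aspiration=0.9))
```

The satisficer inspects far fewer options and rarely finds the true maximum, which is exactly the trade-off Simon argued bounded humans are forced to make.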

In a response to Musk’s call for open access to machine learning technologies, evolutionary biologist Suzanne Sadedin wrote that the competitive nature of organizational systems means such a move would make it more likely that AI will “wipe out” both itself and humanity. Her argument is straightforward: humans have been good at competing for scarce resources, but machines will be better.

This is the basic logic of Garrett Hardin’s “tragedy of the commons.” When a resource is limited, agents will exploit it by taking more than their share. Humans, at least, have learned what happens when you overharvest a natural resource. Public and private organizations have never experienced a similarly scaled “tragedy of the commons,” Sadedin writes. They have no evolutionary history to draw from, and their first tragedy may be the final one.
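As a rough sketch of that logic (my own illustration with made-up parameters, not a model from Hardin or Sadedin), a handful of agents each harvesting a fixed fraction of a shared stock can push it past the point where regrowth keeps up:

```python
def simulate_commons(agents=10, stock=100.0, regrowth=0.05, greed=0.02, rounds=50):
    """Each round, every agent harvests `greed` * stock; the remainder regrows."""
    for t in range(rounds):
        harvest = min(stock, agents * greed * stock)
        stock = (stock - harvest) * (1 + regrowth)
        if stock < 1.0:
            return "commons collapsed in round %d" % (t + 1)
    return "stock after %d rounds: %.1f" % (rounds, stock)

print(simulate_commons(greed=0.004))  # modest harvests: the stock is sustained
print(simulate_commons(greed=0.02))   # greedy harvests: the commons collapses
```

Nothing in the greedy run warns the agents before the crash; without a remembered history of past collapses, the first tragedy really is the final one.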

There is a middle ground.

Elinor Ostrom’s work on social-ecological systems has contributed to our ability to “govern the commons.” Social equity and citizen participation are holy grails in public administration. Their importance will only grow more mission-critical as humans are removed from decision-making processes. Our field, with its normative focus on ethics and practical policy problems, has a tremendous depth of knowledge to offer the birth of AI.

As terrifying as AI-empowered militaries might sound, why is it not possible to teach our machines as we would teach our children: to cooperate, to trust and to reciprocate?
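Game theory suggests it is possible. In the iterated prisoner’s dilemma, the simple tit-for-tat strategy (cooperate on the first round, then mirror the partner’s previous move) earns steady mutual payoffs against a cooperator and quickly punishes a defector. A minimal sketch, using the standard textbook payoffs rather than anything from the original post:

```python
# Payoffs for (my_move, their_move): C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first; afterwards, mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each strategy sees only the *other's* past moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation is sustained
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

A machine taught this way learns trust the same way a child does: extend it first, reciprocate when it is honored, and withdraw it when it is betrayed.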

Remember “WarGames”? Last year, researchers developed a learning program that teaches itself to play different Atari games. Other programs are demonstrating the building blocks of creative thinking. The sci-fi novelist Philip K. Dick once asked us, “Do Androids Dream of Electric Sheep?” MIT and Microsoft researchers moved us a step closer to an answer with a graphic network that could “dream” meaningful imagery.

On many fronts, 2015 showed us AI is developing mentally much like a child. What we teach these children is still very much up to us.


Submitted by Aaron Deslatte
