Sunday, December 3, 2017

Is Technology Value-Neutral? New Technologies and Collective Action Problems



We’ve all heard the saying “Guns don’t kill people, people do”. It’s a classic statement of the value-neutrality thesis. This is the thesis that technology, by itself, is value-neutral. It is the people who use it who are not. If the creation of a new technology, like a gun or a smartphone, has good or bad effects, it is due to good or bad people, not the technology itself.

The value-neutrality thesis gives great succour to inventors and engineers. It seems to absolve them of responsibility for the ill effects of their creations. It also suggests that we should maintain a general policy of free and open innovation. Let a thousand blossoms bloom, and leave it to the human users of technology to determine the consequences.

But the value-neutrality thesis has plenty of critics. Many philosophers of technology maintain that technology is often (perhaps always) value-laden. Guns may not kill people themselves but they make it much more likely that people will be killed in a particular way. And autonomous weapons systems can kill people by themselves. To suggest that the technology has no biasing effect, or cannot embody a certain set of values, is misleading.

This critique of value-neutrality seems right to me, but it is often difficult to formulate it in an adequate way. In the remainder of this post, I want to look at one attempted formulation, from the philosopher David Morrow. This argument maintains that technologies are not always value-neutral because they change the costs of certain options, thereby making certain collective action problems or errors of rational choice more likely. The argument is interesting in its own right, and looking at it allows us to see how difficult it is to adequately distinguish between the value-neutrality and value-ladenness of technology.


1. What is the value-neutrality thesis?
Value-neutrality is a seductive position. For most of human history, technology has been the product of human agency. In order for a technology to come into existence, and have any effect on the world, it must have been conceived, created and utilised by a human being. There has been a necessary dyadic relationship between humans and technology. This has meant that whenever it comes time to evaluate the impacts of a particular technology on the world, there is always some human to share in the praise or blame. And since we are so comfortable with praising and blaming our fellow human beings, it’s very easy to suppose that they deserve all the praise and blame.

Note how I said that this has been true for ‘most of human history’. There is one obvious way in which technology could cease to be value-neutral: if technology itself has agency. In other words, if technology develops its own preferences and values, and acts to pursue them in the world. The great promise (and fear) about artificial intelligence is that it will result in forms of technology that do exactly that (and that can create other forms of technology that do exactly that). Once we have full-blown artificial agents, the value-neutrality thesis may no longer be so seductive.

We are almost there, but not quite. For the time being, it is still possible to view all technologies in terms of the dyadic relationship that makes value-neutrality more plausible. Unsurprisingly, it is this kind of relationship that Morrow has in mind when he defines his own preferred version of the value-neutrality thesis. The essence of his definition is that value-neutrality arises if all the good and bad consequences of technology are attributable to praiseworthy or blameworthy actions/preferences of human users. His more precise formulation is this:

Value Neutrality Thesis: “The invention of some new piece of technology, T, can have bad consequences, only if people have vicious T-relevant preferences, or if users with ‘minimally decent’ preferences act out of ignorance; and the invention of T can have good consequences, on balance, only if people have minimally decent T-relevant preferences, or if users with vicious T-relevant preferences act out of ignorance”
(Morrow 2013, 331)

A T-relevant preference is just any preference that influences whether one uses a particular piece of technology. A vicious preference is one that is morally condemnable; a minimally decent preference is one that is not. The reference to ignorance in both halves of the definition is a little bit confusing to me. It seems to suggest that technology can be value-neutral even if it is put to bad/good use by people acting out of ignorance (Morrow gives the example of the drug thalidomide to illustrate the point). The idea then is that in those cases the technology itself is not to blame for the good or bad effects; it is the people. But I worry that this makes value-neutrality too easy to establish. Later in the article, Morrow seems to conceive of neutrality in terms of how morally praiseworthy and blameworthy the human motivations and actions were. Since ignorance is sometimes blameworthy, it makes more sense to me to think that neutrality occurs when the ignorance of the human actors is itself blameworthy.

Be that as it may, Morrow’s definition gives us a clear standard for determining whether technology is value-neutral. If the bad or good effects of a piece of technology are not directly attributable to the blameworthy or praiseworthy preferences (or levels of knowledge) of the human user, then there is reason to think that the technology itself is value-laden. Is there ever reason to suspect this?


2. Technology and the Costs of Cooperation and Delayed Gratification
Morrow says that there is. His argument starts by assuming that human beings follow some of the basic tenets of rational choice theory when making decisions. The commitment to rational choice theory is not strong and could be modified in various ways without doing damage to the argument. The idea is that humans have preferences or goals (to which we can attach a particular value called ‘utility’), and they act so as to maximise their preference or goal-satisfaction. This means that they follow a type of cost-benefit analysis when making decisions. If the costs of a particular action outweigh its benefits, they’ll favour other actions with a more favourable ratio.

The key idea then is that one of the main functions of technology is to reduce the costs of certain actions (or make available/affordable actions that weren’t previously on the table). People typically invent technologies in order to be able to do something more efficiently and quickly. Transportation technology is the obvious example. Trains, planes and automobiles have all served to reduce the costs of long-distance travel to individual travellers (there may be negative or positive externalities associated with the technologies too — more on this in a moment).

This reduction in cost can change what people do. Morrow gives the example of a woman living three hours from New York City who wants to attend musical theatre. She can go to her local community theatre, or travel to New York to catch a show on Broadway. The show on Broadway will be of much higher quality than the show in her local community theatre, but tickets are expensive and it takes a long time to get to New York, watch the show, and return home (about a 9-hour excursion all told). This makes the local community theatre the more attractive option. But then a new high-speed train is installed between her place of residence and the city. This reduces travel time to less than one hour each way. A 9-hour round trip has been reduced to a 5-hour one. This might be enough to tip the scales in favour of going to Broadway. The new technology has made an option more attractive.
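To make the arithmetic explicit, here is a minimal cost-benefit sketch of that decision in Python. Only the travel and excursion times track the example above; the show values, ticket prices and the per-hour cost of time are invented figures, chosen purely to illustrate how a cost reduction can flip the ranking of options.

```python
# A minimal cost-benefit sketch of the theatre example. Only the excursion
# times track the text (a 9-hour round trip before the train, 5 hours after);
# the show values, ticket prices and per-hour time cost are invented.

def net_benefit(show_value, ticket_price, hours, hourly_time_cost=10.0):
    """Net benefit of an outing: value of the show minus ticket and time costs."""
    return show_value - ticket_price - hours * hourly_time_cost

local        = net_benefit(show_value=50,  ticket_price=20,  hours=3)  # community theatre
broadway_old = net_benefit(show_value=180, ticket_price=110, hours=9)  # slow journey: 9-hour excursion
broadway_new = net_benefit(show_value=180, ticket_price=110, hours=5)  # high-speed train: 5-hour excursion

print(f"Local theatre:          {local:+.0f}")
print(f"Broadway, slow journey: {broadway_old:+.0f}")
print(f"Broadway, fast train:   {broadway_new:+.0f}")
# With these made-up numbers the local theatre wins before the new train
# (0 vs -20) and loses after it (0 vs +20). Nothing about the traveller's
# preferences has changed; the technology changed the cost of one option.
```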

Morrow has a nuanced understanding of how technology changes the costs of action. The benefits of technology need not be widely dispersed. They could reduce costs for some people and raise them for others. He uses an example from Langdon Winner (a well-known theorist of technology) to illustrate the point. Winner looked at the effects of tomato-harvesting machines on large and small farmers and found that they mainly benefitted the large farmers. They could afford them and thereby harvest far more tomatoes than before. This increased supply and reduced the price per tomato to the producer. This was still a net benefit for the large farmer, but a significant loss for the small farmer. They now had to harvest more tomatoes, with their more limited technologies, in order to achieve the same income.

Now we come to the nub of the argument against value-neutrality. The argument is that technology, by reducing costs, can make certain options more attractive to people with minimally decent preferences. These actions, by themselves, may not be morally problematic, but in the aggregate they could have very bad consequences (it’s interesting that at this point Morrow switches to focusing purely on bad consequences). He gives two examples of this:

Collective action problems: Human society is beset by collective action problems, i.e. scenarios in which individuals can choose to ‘cooperate’ or ‘defect’ on their fellow citizens, and in which the individual benefits of defection outweigh the individual benefits of cooperation. A classic example of a collective action problem is overfishing. The population of fish in a given area is a self-sustaining common resource, something that can be shared fruitfully among all the local fishermen if they fish a limited quota each year. If they ‘overfish’, the population may collapse, thereby depriving them of the common resource. The problem is that it can be difficult to enforce a quota system (to ensure cooperation), and each individual fisherman is nearly always incentivised to overfish. Technology can exacerbate this by reducing the costs of overfishing. It is, after all, relatively difficult to overfish if you are simply relying on a fishing rod. Modern industrial fishing technology makes it much easier to dredge the ocean floor and scrape up most of the available fish. Thus, modern fishing technology is not value-neutral because it exacerbates the collective action problem (a toy model of this dynamic is sketched after these two examples).

Delayed gratification problems: Many of us face decision problems in which we must choose between short-term and long-term rewards. Do we use the money we just earned to buy ice-cream or do we save for our retirements? Do we sacrifice our Saturday afternoons to learning a new musical instrument, or do we watch the latest series on Netflix instead? Oftentimes the long-term reward greatly outweighs the short-term reward, but due to a quirk of human reasoning we tend to discount this long-term value and favour the short-term rewards. This can have bad consequences for us individually (if we evaluate our lives across their entire span) and collectively (because it erodes social capital if nobody in society is thinking about the long term). Morrow argues that technology can make it more difficult to prioritise long-term rewards by lowering the cost of instant gratification (a simple discounting sketch of this appears below). I suspect many of us have an intimate knowledge of the problem to which Morrow is alluding. I know I have often lost days of work that would have been valuable in the long term because I have been attracted to the short-term rewards of social media and video-streaming.
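As flagged in the overfishing example, here is a toy model of that collective action problem, written as a rough Python sketch. It is not drawn from Morrow’s paper, and every number in it (stock size, regrowth rate, per-fisher capacity) is invented; the only point is to show how a technology that lowers the cost of taking a large catch can tip individually reasonable behaviour into collective disaster.

```python
# Toy "tragedy of the commons" sketch. All numbers are invented for
# illustration; this is not a model from Morrow's paper. The per-fisher
# capacity limit is a crude stand-in for how costly it is to land a big
# catch with a given technology (rods make overfishing slow and expensive;
# trawlers make it cheap and easy).

N_FISHERS = 10

def individually_rational_catch(stock, capacity):
    # Each fisher takes as much as the technology allows, up to an equal
    # share of what is actually left in the water.
    return min(capacity, stock / N_FISHERS)

def simulate(capacity, seasons=20, stock=1000.0, regrowth=0.3, carrying=1000.0):
    for _ in range(seasons):
        total_catch = N_FISHERS * individually_rational_catch(stock, capacity)
        stock = max(stock - total_catch, 0.0)
        stock += regrowth * stock * (1 - stock / carrying)  # logistic regrowth
    return stock

print("Stock after 20 seasons with rods (capacity 5):     ",
      round(simulate(capacity=5)))
print("Stock after 20 seasons with trawlers (capacity 60):",
      round(simulate(capacity=60)))
# With rods, the total catch (10 x 5 = 50 fish) stays below what the stock
# regrows each season, so the fishery remains healthy. With trawlers the
# very same self-interested rule empties the fishery within a couple of
# seasons, even though no individual fisher's preferences have become any
# more vicious. The technology changes the payoff structure, not the people.
```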

Morrow gives more examples of both problems in his paper. He also argues that the problems interact, suggesting that the allure of instant gratification can exacerbate collective action problems.
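The delayed gratification point can be made concrete in a similar way. The sketch below uses the standard ‘beta-delta’ (quasi-hyperbolic) model of present bias, which is my own gloss rather than anything in Morrow’s paper, and all of the reward and effort figures are invented. The point is simply that shaving the immediate effort cost off the tempting option can flip the choice of an agent whose underlying preferences have not changed at all.

```python
# Quasi-hyperbolic ("beta-delta") discounting: a reward u arriving t periods
# from now is valued at beta * delta**t * u (and at u itself when t == 0).
# All reward and effort numbers below are invented for illustration.

BETA, DELTA = 0.5, 0.95   # beta < 1 captures present bias

def present_value(utility, delay):
    return utility if delay == 0 else BETA * (DELTA ** delay) * utility

def choice(effort_cost_of_distraction):
    # Option A: the afternoon binge, worth 10 now, minus whatever immediate
    # effort it takes to get it (a trip to the video shop vs. one click).
    distraction = present_value(10, delay=0) - effort_cost_of_distraction
    # Option B: an afternoon of useful work whose payoff (worth 30) only
    # arrives about ten periods (say, weeks) from now.
    work = present_value(30, delay=10)
    return "distraction" if distraction > work else "work"

print("High effort cost (pre-streaming): ", choice(effort_cost_of_distraction=6))
print("Near-zero effort cost (streaming):", choice(effort_cost_of_distraction=0))
# The work option is worth roughly 9 in present-value terms. While the
# short-term pleasure costs some effort to obtain (10 - 6 = 4), the
# long-term project wins; shave that effort cost to zero and the same
# present-biased agent flips to the distraction.
```

It is also easy to see how this would feed into the commons case above: a present-biased fisher will weigh the future collapse of the stock even less heavily against this season’s cheap catch.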


3. Criticisms and Conclusions
So is this an effective critique of value-neutrality? Perhaps. The problems to which it alludes are certainly real, and the basic premise underlying the argument (that technology reduces the cost of certain options) is plausible, perhaps even a truism. But there is one major objection to this narrative: isn’t it still human viciousness, even in the case of collective action problems and delayed gratification, that does the damage?

Morrow rejects this objection by arguing that it is only right to call the human actors vicious if the preferences they hold and the choices they make are condemnable in and of themselves. He argues that the preferences that give rise to the problems he highlights are not, by themselves, morally condemnable; it is only the aggregate effect that is morally condemnable. Morality can only demand so much from us, and it is part and parcel of the human condition to be imbued with these preferences and quirks. We are not entitled to assume a population of moral and rational saints when creating new technologies, or when trying to critique their value-neutrality.

I think there is something to this, but I also think that it is much harder than Morrow suggests to draw the line between preferences and choices that are morally condemnable and those that are not. I discussed this once before when I looked at Ted Poston’s article “Social Evil”. The problem for me is that knowledge plays a crucial role in moral evaluation. If an individual fisherman knows that his actions contribute to the problem of overfishing (and if he knows about the structure of the collective action problem), it is difficult, in my view, to say that he does not deserve some moral censure if he chooses to overfish. Likewise, given what I know about human motivation and the tradeoff between instant and delayed gratification, I think I would share some of the blame if I spent my entire afternoon streaming the latest series on Netflix instead of doing something more important. That said, this has to be moderated, and a few occasional lapses could certainly be tolerated.

Finally, let me just point out that if technology is not value-neutral, it stands to reason that its non-neutrality can work in both directions. All of Morrow’s examples involve technology biasing us toward the bad. But surely technology can also bias us toward the good? Technology can reduce the costs of surveillance and monitoring, which makes it easier to enforce cooperative agreements and prevent collective action problems (I owe this point to Miles Brundage). This may have other negative effects, but it can mitigate some problems. Similarly, technology can reduce the costs of vital goods and services (medicines, food etc.), thereby making it easier to distribute them more widely. If we don’t share all the blame for the bad effects of technology, then surely we don’t share all the credit for its good effects?




