The Onus of Choice: Picks of the Week
Twitter bans multiple ‘alt-right’ accounts | engadget
2016 was not a good year to be a tech company. Twitter’s struggling to profit, Facebook is still reeling from its massive fake news debacle, and major web forums are buckling under rampant abusive behavior. It was the witching hour of the Internet, and we saw in full the more grotesque half of its Janus face. Fake news, harassment, echo chambers and conspiracy theories overwhelmed the core value of the Internet: democratization. Now major tech companies are coming to grips with a problem they have delicately danced around for a decade: choosing between idealized free speech and reality.
Twitter is famous, or infamous, for proclaiming itself “the free speech wing of the free speech party” of the Internet. Since its inception Twitter has struggled to maintain its hardline commitment to free speech while consistently fumbling issues of abuse. For years it remained unable to find a comfortable position, vacillating between deploying algorithms to shield President Barack Obama from inflammatory or offensive questions during his Question and Answer session and flatly ignoring victims of sustained, targeted campaigns of abuse. While Twitter struggled with its demons quietly, similar problems manifested elsewhere. For each the root cause was the same: free speech. Connecting the world openly and freely, disseminating information equally and giving a voice to all are core tenets upheld by Internet canon law, but the past year has revealed that each of those adverbs is qualified.
Companies like Reddit and Twitter, fearing failure, have taken steps to create a welcoming appearance and to attract and retain users. Responding to criticism, Twitter banned “Alt-Right” accounts, and Facebook began rolling out new measures to combat fake news. Reddit CEO Steve Huffman admitted to secretly altering comments on the Donald Trump-themed subreddit “/r/The_Donald.” Huffman had seen Reddit plagued by issues stemming from “/r/The_Donald”; in one such example, a hiccup in Reddit’s processing algorithm led to users’ homepages containing exclusively posts from “/r/The_Donald.” But, wanting to have his cake and eat it too, Huffman chose to keep the illusion of freedom while subtly changing reality. Each instance drew heavy criticism, with detractors equating the moves to censorship. Worse still, Twitter and Reddit are struggling with a cold reality: championing “free speech” may mean playing host to ideas you do not agree with. Twitter CEO Jack Dorsey, like Huffman, had to answer critics claiming his service enabled Donald Trump’s election as President. When asked how he personally felt about Trump’s election, and Trump’s use of Twitter, he answered, “it’s complicated.” His response demonstrates a waning commitment to the “free speech wing.”
Calling the Internet a “public place” is a common argument among advocates for Internet free speech. Mike Rugnetta of PBS’ “Idea Channel” argues that the Internet is more akin to a shopping mall than to a town square. Unlike the town crier’s square, speech on the Internet occurs on private servers owned by individuals or corporations. Just as a mall gives the illusion of being an open public space while its visitors, technically on private property, are not afforded the same freedoms, so too with the Internet. Tech companies are now faced with a stark choice: adopt an “anything goes” policy and suffer the consequences, or start filtering content and suffer the consequences. Twitter, Facebook and Reddit must now ask themselves: should we be more like shopping malls and less like the town square?
The Problem With Ethical Autonomy
Can We Program Self-Driving Cars To Make Ethical Choices?
Our conception of the future is often produced by a quixotic mix of the fictional and the real. Science fiction authors hand down a perspective of the future warped by their desire to reflect, and to change, the present, and consequently the future. Intertwined with our reading is our own movement away from the present. Weaving together science fiction and the individual creates a future that is marked by fabulous technology but inhabited by ourselves. The childish flights of fancy we build into our memory are held, when we examine the future we actually live in, against two paradoxical standards: the imaginative conflation outlined above and the technological progress achievable in our lifetime. We are thus simultaneously impressed and disappointed by our technological circumstance. When Captain James T. Kirk first flipped open his communicator to request transport back to the Enterprise, audiences were taken by the fantasy. Kirk was able to talk wirelessly, instantly, to his crew in orbit, and to be “beamed” almost anywhere. Fifty years later cellphones are a pervasive, almost cultural, force in society, yet we still lament that we cannot “beam up,” that we are still very much terrestrial beings. Though the technologies we now have access to might retroactively seem to be the “obvious” ones humanity would pursue first, this ascribed logic of technological advance clouds our sight. When a technology seems to be of our imagined future, it is worthy of extra consideration.
In an op-ed for the Pittsburgh Post-Gazette, President Barack Obama outlined an initiative by his administration to provide car manufacturers with a sixteen-point compliance list for the safety of “autonomous vehicles.” Vehicle autonomy, often referred to as “self-driving cars,” has appeared in the consciousness of government agencies seemingly overnight, though the technology for autonomous vehicles has existed for some time. After Tesla’s “Autopilot” program was involved in a fatal car accident, articles began to appear concerning various aspects of these vehicles. The fallout from Tesla and the apparent blessing of the United States government spurred a wave of technological analysis. Yet economic and legal discourse, and even the President’s own initiative, sidestep the ethical issues at the foundation of vehicle autonomy.
At the very heart of self-driving cars is the programming that allows them to function. Such “autonomy” is, in a philosophical sense, a bit of a misnomer. What exists inside these cars is not a rational, conscious actor but a machine following a set of pre-programmed commands. Running on algorithms, the cars take inputs and pass them through a series of rules to create an output. The “autonomy” of these self-driving cars is an illusion; their “freedom” is merely a set of programmed reactions keyed to the context of the situation. The cars have no freedom to act outside their rule bounds. They are designed to appear autonomous while their programming dictates their actions.
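The rule-following loop described above can be sketched in a few lines. This is a toy illustration, not any real vehicle’s code: the function, the thresholds, and the assumed braking rate of 4 m/s² are all invented for the example. The point is only that “deciding” here is nothing more than inputs passing through fixed rules.

```python
def decide(distance_to_obstacle_m: float, speed_mps: float) -> str:
    """Map sensor readings to an action using pre-programmed rules.

    A hypothetical sketch: real vehicle controllers are far more complex,
    but structurally they are still rules mapping inputs to outputs.
    """
    # Distance needed to stop, assuming a constant 4 m/s^2 of braking.
    stopping_distance = (speed_mps ** 2) / (2 * 4.0)
    if distance_to_obstacle_m < stopping_distance:
        return "brake_hard"       # cannot stop in time at current speed
    if distance_to_obstacle_m < 2 * stopping_distance:
        return "slow_down"        # getting close; shed speed early
    return "maintain_speed"       # comfortable margin remains

print(decide(10.0, 15.0))  # obstacle inside stopping distance -> "brake_hard"
```

Whatever the car “chooses,” it is this kind of lookup that chose it; there is no deliberation outside the rules someone wrote in advance.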
In a car accident it is the driver who is the rational actor in the decision-making process, and the driver bears the moral culpability for the results. The United States is a vehicle nation, containing almost as many cars as people. Millions of Americans commute to work every day by automobile, on interstates and highways across the country. As the President outlines, roads are dangerous: thirty-five thousand people died in automobile accidents last year alone. He goes on to say that self-driving cars will reduce the number of traffic fatalities. However, it is not a stretch of the imagination to consider a situation where a car accident will inevitably result in death. How would we program a self-driving car to react in a scenario where, whatever the car does, the end result is fatal? Suppose a truck ahead stops short: if the car swerves in either direction it will collide with a motorcycle, but if it brakes it will leave the driver crushed by an oncoming truck. Do we program autonomous vehicles to always protect the driver? Or to minimize the total harm? Such a scenario has produced two competing headlines: “Is your car programmed to kill you?” and “Is your car programmed to kill?” It appears that in allowing self-driving cars we are at least confirming the latter; they will, in some way, be programmed to kill. In a car accident the onus is on the driver, who makes the ultimate decision, to swerve or to brake, and is thus held responsible. But who bears the responsibility when an autonomous vehicle crashes?
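The two headlines correspond to two different objective functions, and the difference can be made concrete in a toy sketch. The options, the harm scores, and the policy names below are all invented for illustration; no real system scores harm on a ten-point scale like this. What matters is that the same situation yields different actions depending on which quantity the programmer told the car to minimize.

```python
from typing import NamedTuple

class Option(NamedTuple):
    name: str
    occupant_harm: int   # hypothetical harm to the car's occupant (0-10)
    external_harm: int   # hypothetical harm to everyone else (0-10)

# The truck scenario from above, with invented harm scores.
options = [
    Option("brake", occupant_harm=9, external_harm=0),        # crushed by truck
    Option("swerve_left", occupant_harm=1, external_harm=9),  # hits motorcycle
    Option("swerve_right", occupant_harm=1, external_harm=9),
]

# Policy 1: always protect the driver.
protect_occupant = min(options, key=lambda o: o.occupant_harm)

# Policy 2: minimize total harm to everyone involved.
minimize_total = min(options, key=lambda o: o.occupant_harm + o.external_harm)

print(protect_occupant.name)  # "swerve_left" -- the car sacrifices others
print(minimize_total.name)    # "brake" -- the car sacrifices its occupant
```

Either way, a line of code settles who is put at risk, which is exactly why the question of who bears responsibility cannot be left to the machine.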
While Captain Kirk and Mr. Spock were guided by the hand of the Prime Directive, in the field of vehicle autonomy there is no overarching principle. With companies like Google and Uber already pushing to deploy this technology, and the government’s most recent position being one of encouragement, it appears our haste to reach the future has superseded any reflection on this technology’s ethical implications. Improvement and innovation can mask the ethical challenges of new technology, and it remains to be seen how the problems of autonomous vehicles will be approached.
The American Weapon
We spend every day locked in a tension; our minds fight to weather a storm brewed by the minds of those around us. The nearer we seem to come to touching the true thoughts of another, the farther away we realize we have become. To ourselves, our intents and our actions are neatly presented. Yet when we pass a stranger on the sidewalk, as they draw in on us our mind scrambles to predict their intent. The inability to know another’s intent, our blindness to it, haunts our experience. Such a human problem becomes more complex when paired with a human creation: the gun. In dealing with this issue I am working on a narrow platform, considering a mere node in a greater web. In this article I would like to raise only a few questions, in hopes that they can be considered not only by readers of this blog but by other writers as well.
About every week or so the milk carton in my refrigerator runs dry, forcing a trip to the grocery store. The routine is usually the same: walk to the car, start it, drive a few miles to the market, enter, get milk, check out, and return home. Such a chore is dreadfully boring and I often loathe taking the time. Throughout the entire journey, the intent of my actions was clear to me: I was going to get milk. At each step I crossed paths with others. Had you stopped these strangers and asked what my intent was at any given part of the trip, you would probably be greeted not only with different answers but also with a sense of bewilderment. How would they know the intent of another? One of the more common pieces of dating advice is to “make your intentions clear.” Why? So the other person is not “guessing,” and your actions are not misconstrued under the pretext of a different intent. Yet when a nervous man comes across as creepy, we would be in the minority in criticizing the woman for not understanding his intentions. Much of the difficulty in being human grows from the blindness our minds have to one another, and that blindness only complicates matters for the rest of this article.
Ask any art major what phrase they hear most from family members at Christmas; the answer is likely “what are you going to do with a major in ceramics?” The onus then falls on a stressed nephew to demonstrate the value of his chosen path of study. How does he go about doing that? There is a struggle in demonstrating the utility of an art degree. For the average uncle there is very little pragmatic value to it. What job will it get you? How will it help you make money? To raise a second line of thought: when we look at most things we tend to ask immediately what they are for. Our conception of their “goodness” stems from their usefulness. Utility is not hard to see in everyday objects: a shovel is good for digging holes, a kettle for boiling water, a key for unlocking a lock. For less cut-and-dried examples, like education, the pragmatic “goodness” is not as obvious.
A gun, considered narrowly and reductively, is a series of springs and levers that plunges a hammer-like arm into a bullet assembly. The gun includes the stock and barrel, but when considering what a gun is good for we can say it is good for igniting the powder in a bullet and allowing that bullet to be fired. We can take a more holistic view and include the bullet, saying that a gun is good for firing and aiding in the projection of a bullet. To complicate this further, we can look to the Greek word telos, meaning “end.” Telos is the ultimate end of an object; I will consider it here as the goal, or final purpose. We can pull this concept down into a real-world example to fully round out the exposition.
In the early 1990s one very unlucky criminal made a deal with an undercover officer: he would provide a MAC-10 firearm in exchange for a few ounces of cocaine. After the deal went down he was promptly arrested and charged with the usual offenses. However, he was also charged with using a firearm in a drug-related crime, a charge that carries a heavier sentence. An appeal made its way to the Supreme Court of the United States, which was tasked with answering whether trading a firearm for cocaine constituted “use” of that firearm. The Court upheld the conviction, arguing that though he had not used the firearm for its traditional, intended purpose, the word “use” connotes more than “intended use” and does not exclude other ways of using a firearm. Justice Antonin Scalia dissented, and later used Smith v United States as an example in his essay “Common-Law Courts in a Civil-Law System,” where he states his opinion on “use” very frankly:
“The phrase ‘uses a gun’ fairly connotes use of a gun for what guns are normally used for, that is, as a weapon. As I put the point in my dissent, when you ask someone, ‘Do you use a cane?’ you are not inquiring whether he has his grandfather’s antique cane as a decoration in the hallway.”
Scalia gives us an interesting precedent: is there an intended, goal-like end for the gun? Can it be argued that a gun has a telos, or that a “normal” use of a gun exists?
There is a danger in these mysteries, especially when paired. Guns have many uses: they can be used for hunting, for sport shooting, and even for decoration. Setting aside the obvious, like encased decorative guns, there is a crux. Not only is there serious difficulty in understanding the gun, there is an even greater difficulty in deciphering the intent of the person in front of us. Now it does not seem unreasonable to ask: is a gun made to kill? Ardent supporters will say no, “guns don’t kill people, people kill people.” While this is half the equation, it very clearly misses the other half. Putting this in a stronger form, one can claim the gun is a tool. Tools are fairly inert; the shovel does not dictate whether it is used for digging holes or for decoration. The will of the user bears the accountability for the use of the tool. We can be correct in saying “shovels don’t dig holes, people dig holes,” but why was the shovel made? Tools can still be crafted with intended purposes. Medicines are made to treat certain illnesses, and while they often have multiple uses, they were made with a specific malady in mind. Guns are produced just as shovels are produced, and this lets us ask two questions: “what are we making guns for?” and, more specifically, “what is our intent in making them?”
We have considered intent, use, and teleological end, but can these questions be answered? Americans have a relationship with guns; that much is clear. The broader I go, the more I lose in my ability to examine the topic comfortably. These very narrow questions should, ideally, bloom into a larger discussion, and that is what I hope to generate, not to tackle on my own. The gun is an American weapon: we have seen our history born of revolution, our constitution give to the people the right to bear arms, and now our culture sit mired in tense discourse. Are guns made to kill? What is our intent in making guns, and what is our intent in carrying them? The questions I hope to raise are ones that should be answered. Americans are still humans, and the problems we experience with guns are not divorced from the human condition.
 I draw a distinction between what the gun does and how well it does it. The gun fires the bullet, but the barrel determines how well the bullet flies; I believe there is a distinction between bringing the bullet into action and then aiding it in its own function. I must also concede that I am considering only the narrowest sense of a gun. Many things are called “guns” but do not follow this definition, like the railgun, which accelerates a projectile using a series of magnets. I am aware of the semantic issues, but for brevity and clarity I refer to “gun” and “firearm” as one and the same.
 Smith v United States, 508 U.S. 223 (1993)
 Antonin Scalia, “Common-Law Courts in a Civil-Law System,” in Philosophy of Law, Ninth Edition, ed. Joel Feinberg et al. (Boston: Wadsworth, 2010).