• An illustration of cattle and a barn in a rural field.

    America’s Fading Pastoral: Picks of the Week

    Who Wants to Run That Mom-and-Pop Market? Almost No One | The New York Times

    What happened when Walmart left | The Guardian

    When Health Law Isn’t Enough, the Desperate Line Up at Tents | The New York Times

    Rural divide | The Washington Post

    Why American Farmers Are Hacking Their Tractors With Ukrainian Firmware | Motherboard

    Doctor Shortage In Rural Arizona Sparks Another Crisis In ‘Forgotten America’ | NPR

    Walt Disney had Marceline, Missouri, a fable cemented in his mind by a fleeting childhood stint in the small rural town. Through his company, through Disneyland and "Main Street U.S.A.," Disney channeled memories of a bucolic small town and rural life. Though Disney's father only briefly flirted with a yeoman life in Marceline, dusty Midwestern paradises, small towns, and steamboat adventures dominated early Disney works. Historian Steven Watts, writing in 1997, contends the infatuation America shared with Disney and his Marceline grew out of yearning. From the late 19th century through 1920, American residence patterns shifted steadily toward urban centers. Following the Second World War, our country's "amber waves of grain" deposited once-rural children in coastal cities and in sprawling metropolises like Chicago. Just like the Bergsons in Willa Cather's O Pioneers!, there are those who have "stayed behind," but in our East Coast bubble few are seeing what has continued to afflict swaths of our country.[1]

    Disney was not the only American infatuated with rural life. Thomas Jefferson praised the yeoman farmers of the burgeoning Republic, claiming, "Those who labor in the earth are the chosen people of God, if ever he had a chosen people…"[2] Yet our small towns are shuttering. The farmers and "fruited plains" cherished by Disney and Jefferson alike seem to be in a state of constant suffering. Our American population exists in strange contrast: with more people living in just 146 counties than outside of them, there exists a particular type of American tunnel vision, one which focuses on our own region, our own "country." But when addictions to meth, heroin, and other opioids claw their way north up the coasts and into these counties, we suddenly see behind them a history of addiction, a vanishing lifestyle, and all the associated pain that has ravaged an entire segment of the land, from sea to shining sea. While Watts claims Disney films offered those "suffering from the dislocation of historical change" a comfort from their vanishing way of life, today we see even more vanishing: doctor shortages, widespread poverty, and a perceived abandonment that may not be unfounded at all. This piece of America, far away from the forgetful minds of the Northeastern megalopolis, is still suffering the dislocation of historical change.

    When my grandmother was still alive, my family would load up our car and drive the hour or so from our New Jersey home to her house along the shores of the Delaware River. My mother once claimed, after I had given her more trouble than I care to admit, that going to the Delaware River was the only vacation I liked. She was right. While I couldn't stand the heat and busyness of the Jersey Shore, with its bumbling crowds and din, I seemed to find comfort slowly meandering up Delaware Drive in our car, watching the river blink out between farmland and the occasional small town center. There was rust on the road, railroad tracks, closed-down silos and mills, signs of objects once important suddenly left to wither. However, through the rust, through the seemingly tempered march of life in this river valley and the hushed conversations about drugs, unemployment, and other woes, an intense pride in life and lifestyle managed to cut through. Though I don't think John Steinbeck's farmer, stopped for lunch on an idling tractor, would ever believe he would need a piece of Ukrainian firmware to fix his tractor, the tragedy of the Joads in The Grapes of Wrath seems to still be playing out between our dense urban centers. While hope in a return of lifestyle abounds, and regional pride has not subsided, few steps of consequence have been taken.

    [1] Steven Watts, The Magic Kingdom: Walt Disney and the American Way of Life (Columbia, MO: University of Missouri Press, 1997).

    [2] Thomas Jefferson, Notes on the State of Virginia (1785).

  • Robotic arms with empty conveyor belt

    Our Automated Life: Picks of the Week

    California’s would-be governor prepares for battle against job-killing robots | The Guardian

    How automation is going to affect jobs in pharma, core, auto and consumer sector | The Economic Times

    One of Europe’s most influential investors gave a brutal example of how AI could wipe out white-collar jobs | Business Insider

     

    Last week, the YouTube channel Kurzgesagt (German for "in a nutshell") released a video titled "The Rise of the Machines – Why Automation is Different this Time." The smartly animated video essay paints a bleak picture, one where "productivity is separating from human labor." Their warning frames modern innovation as a movement that decreases the number of jobs. Automation, and the reliance on complex software to trim fat, so to speak, results in effective, streamlined corporations with higher profits, but ones employing fewer people. The ability to break down complex tasks into smaller, simpler ones, and to "learn" from a mass of constantly acquired data, creates machines able to outclass any human at specialized work. Fear of work lost to automation has struck deep enough to manifest, ironically, in a University of Phoenix advertisement. A factory worker, put out of work by machines, becomes a newly hired computer specialist after attending University of Phoenix. The commercial closes with our hero crossing out the "it" in the famous "We Can Do It!" Rosie the Riveter poster and writing over it a capital "IT," for information technology. This deep-seated cultural fear of machines making humans irrelevant, and making those who own the machines very rich, is warranted, but it ignores potentially more pernicious effects of automation.

    Nicholas Carr, in his book The Glass Cage, argues automation changes us from users of tools into technicians and monitors of them. Such a world of automated tools can actually affect our ability to learn and even erode our skill sets. Carr uses the example of airline pilots. With increasingly automated, computer-controlled "glass cockpits," pilots are frequently little more than monitors of a computer as it flies the plane. Yet when incidents occur, pilots fail to react properly, driving up the number of accidents attributed to "pilot error." Furthermore, the "offloading" of our cognitive processes onto tools stunts those processes' growth. What can seem comical at first, like driving down a one-way street because the GPS told us to, actually reflects a deeper issue: the trust we place in our tools usurps our own thoughts and even the awareness of our own actions.

    What appears on our screen is never random. The "Recommended" section is a carefully harvested crop of articles, videos, and items, grains reaped from the wealth of data highlighted by Kurzgesagt. In our Internet culture, speed triumphs over all else; hesitation, slowness, what some would call thoughtfulness, is antithetical to the technology. While some fear the replacement of our skills by machines, and Nicholas Carr fears automation is changing our humanity, other fears have only recently been realized. Neil Postman, writing in Amusing Ourselves to Death, claims that television, through shows like Sesame Street, reduces education to entertainment. He argues television shows require no prerequisites and induce no perplexity, stripping away the needed building blocks, the formative and foundational steps involved in learning. With the algorithms recommending for us having been designed to keep us in motion, to entertain us, our entire thought process, as combined with a machine, is reduced. The structure of our time, then, echoes Fire Chief Beatty in Ray Bradbury's Fahrenheit 451: "People want to be happy, isn't that right? Haven't you heard it all your life? I want to be happy, people say. Well, aren't they? Don't we keep them moving, give them fun?"

    These algorithms do not just entertain us: they cloister us. Evgeny Morozov writes of Google's understanding of urbanism: "[Google's urbanism is that of] someone who is trying to get to a shopping mall in a self-driving car. It is profoundly utilitarian, even selfish in character, with little to no concern for how public space is experienced. In Google's world, public space is just something that stands between your house and the well-reviewed restaurant that you are dying to get to." Applying such a view to how we learn, to how we consume what is supposed to color our thoughts, we are left with a highly insular, relativistic mode of thinking. We see what is tailored for us, and we consequently experience none of the requisite displeasure or slowness necessary for true thought. We find ourselves at Marcel Proust's misunderstanding of the theater, why he believed it to be continually engaging: "so incorrect was the picture I drew for myself of the pleasures to be enjoyed there that I almost believed that each of the spectators looked, as into a stereoscope, upon a stage and scenery which existed for himself alone, though closely resembling the thousand other spectacles presented to the rest of the audience individually." The philosopher Byung-Chul Han, born in Korea and writing in Germany, argues that in this atomized culture, one where life is stripped almost bare, the excessive upward-ness, the comportment of constant positivity, the obsession with health, and the unending hum of thousands of voices bring us to the most severe stages of psychological burnout.

    Within our automated reality we are amused and entertained, without skill and without work. Our present is what Henry David Thoreau would describe as "an improved means to an unimproved end." Writing about the rush to string a telegraph line from Maine to Texas, he lamented that "our inventions are wont to be pretty toys, which distract our attention from serious things." He argued that connecting two places which may not have much of anything to say to each other creates a situation "as if the main object were to talk fast and not to talk sensibly." Our current technological narrative is both fast and unrelentingly positive. We extol the virtue of our "progress" forward, yet if this is where we end, have we simply created improved means to an unimproved end? – Francis Quigley, Research Analyst.

     

  • Missiles launch toward Syria from the deck of the USS Porter

    #31 in Trending: U.S. Strikes Syria – Picks of the Week

    Brian Williams is ‘guided by the beauty of our weapons’ in Syria strikes | The Washington Post

    US strikes in Syria launched from USS Porter | YouTube

    Dozens of U.S. Missiles Hit Air Base in Syria | The New York Times

    Trump launches military strike against Syria | CNN Politics

     

    Friday morning, the number-thirty-one top trending video on YouTube, beneath AsapSCIENCE's "Are you normal," celebrities eating hot wings, and Jimmy Fallon and the Rock "photobombing," is a video from the United States Navy showing Tomahawk missile launches from the USS Porter. The under-lit video spikes briefly as rocket engines ignite, revealing the deck of the Porter. There is no movement, no change, only a hiss and a bright light as a spark lofts out of frame. Little on the deck of the Porter can be described as "human," and no crewmen or beings of any kind appear in the footage. Thursday night I watched the almost three-minute video in bed, in pajamas, full from dinner and icing my shoulder after the gym. When the video ended YouTube queued up another, footage from the USS Ross, but below the queue it recommended "Impractical Jokers" videos, a web series from a video game magazine, and clips from the television show "Scrubs." I sent the video of the Porter to two friends via Facebook chat; one called it "eerie." Last night Brian Williams called the footage "beautiful," invoking the words of Leonard Cohen: "I am guided by the beauty of our weapons."

    YouTube is replete with videos of people dying. Should YouTube be unsatisfactory, any number of websites can sate one's appetite for death; the social network Reddit contains a subreddit titled just that: "/r/watchpeopledie." After the video of the Porter I looked up "Tomahawk missile impact" and found a few videos of missile tests, the United States Navy blowing up ships in the ocean, and a video of an apparent Tomahawk strike on ISIS fighters; people died in that video. Following the impact, the fireball, and the cries through my computer speakers, I closed out of the video and watched some clips from Impractical Jokers. I shut my laptop, went to bed, woke up for work, drove in with no traffic, sat down, and watched the video of the USS Porter again. At my desk in a Gilded Age mansion on Bellevue Avenue I consumed United States military might. I watched AC-130 gunship training, A-10 Warthog strafes, and nuclear weapons tests. Almost every piece of United States war materiel is the subject of a YouTube video.

    I was seven years old on September 11th, 2001. News spread through my elementary school and my teacher turned on the television in our classroom, a decision whose merit I still wrestle with (my mother has no such ambivalence), and a classroom of twenty or so seven- and eight-year-olds watched an event none of us could grasp. We fed off the fear, the consternation, the façade of calm put on by our teacher. When I went home my grandmother tried to comfort me; my father was trapped on Long Island, and my mother was still at work. I remember one word: "terrorist." My grandmother assured me no terrorists could get me in our den. Now I know she was unsure. We saw the smoke from our New Jersey suburb, and from then on my memory is shoddy. It is hard for me to untangle what I remember and what I've made myself remember. We went to war shortly after, and for the majority of my life we have been at war.

    Soon children born after September 11th will graduate high school. Some of them already have driver's licenses. We usually say this as a way to make ourselves feel old, to link the age of a young person to an event deeply seared in our memory, a trauma so intense the taste remains stuck between our teeth. They have been at war almost their entire lives. Wikipedia claims the war in Afghanistan ended in 2014 and the war in Iraq ended in 2011. We know those dates aren't accurate. Troops were still in Afghanistan into 2016. The wars continued; for some children the war has comprised an entire lifespan. But war did not affect the overall trajectory of their lives. Some will say this is untrue, that the war touched every facet of our lives. They would be right. Policies related to war and to the post-9/11 reaction have shaped these children's lives. But in a way, they don't connect to the war. With the trickle-down policies, the professional military with no draft, the far-away-ness of the conflict, did the war even really exist?

    Watching missiles launch from the deck of the Porter, I thought about myself at 15, myself at 16 and 17. I thought about those pieces of my teenage-boy brain not completely formed, not quite right about the world. Now those we want to call "young men and women" are watching Tomahawk missiles fly through the dark and flicker out in the night. They see it happen on their computer screens. An image ends at the corners of the screen; behind it, no depth. MIT sociologist Sherry Turkle, in studying social interactions, noticed that through digital communication we tend to expect more "frictionless" interactions. While problems worked out while looking at another person are difficult, messy, and taxing, interactions through media like text and Facebook chat carry no burrs. Interactions are diffuse, sliding between relationships and persons easily. Images of war become just as slippery. War, the "#31 top trending video" on YouTube, takes on a flatness, an empty, disconnected image. Do we watch these videos delighted that our wars are as frictionless as our social lives?

    Every few weeks or so I'll be confronted with death in a small rectangle, only a few inches in area, on my Facebook feed. Soldiers shot in the head on camera, suicide bombs exploding; I once met a man who described to me his favorite videos of people dying. These are infinitely frictionless, able to be turned off the moment we demur. So much so that I've grown concerned about myself, concerned that I can see atrocities, see death in front of my face, and simply grunt, complain, and carry on unbothered. While September 11th, 2001 is a scar on my mind, I cannot remember all the videos and pictures of death I've inadvertently come across or been sent as a "joke." I watched the missile launch alone, atomized in a cloud of brute information, sanitized, presented flat. My grandmother was scared on 9/11. Yet each following "event" carried less sting. Awash in a collection of weak internet ties, destruction, death, and fear are stripped from their bearings, restructured, and "recommended for me" as façade. Such digital manifestation breaks down the sinews of our most traumatic tendencies. – Francis Quigley

     

    Image Credit: United States Department of Defense. 

  • Hands type on laptop computer

    The Onus of Choice: Picks of the Week

    Facebook Wants Users to Help It Weed Out Fake News | Forbes

    Jack Dorsey says Twitter is not responsible for Trump’s election, but it’s ‘complicated’ | CNBC

    Reddit moves against ‘toxic’ Trump fans | BBC

    Twitter bans multiple ‘alt-right’ accounts | engadget

    2016 was not a good year to be a tech company. Twitter is struggling to turn a profit, Facebook is still reeling from its massive fake news debacle, and major web forums are buckling under rampant abusive behavior. It was the witching hour of the Internet, and we were able to see at full bore the more grotesque of its two Janus faces. Fake news, harassment, echo chambers, and conspiracy theories overwhelmed the core value of the Internet: democratization. Now major tech companies are coming to grips with a problem they have delicately danced around for a decade: choosing between idealized free speech and reality.

    Twitter is famous, or infamous, for proclaiming itself "the free speech wing of the free speech party" of the Internet. Since its inception Twitter has struggled to maintain its hardline commitment to free speech while consistently fumbling issues of abuse. For years it remained unable to find a comfortable position, vacillating between deploying algorithms to keep inflammatory or offensive questions away from President Barack Obama during his Question and Answer session and flatly ignoring victims of sustained, targeted campaigns of abuse. While Twitter struggled with its demons quietly, similar problems manifested elsewhere. For each the root cause was the same: free speech. Connecting the world openly and freely, disseminating information equally, and giving a voice to all are core tenets upheld by Internet canon law, but the past year has revealed that each of those adverbs is qualified.

    Companies like Reddit and Twitter, fearing failure, have taken steps to create a welcoming appearance and to attract and retain users. Responding to criticism, Twitter banned "alt-right" accounts, and Facebook began rolling out new measures to combat fake news. Reddit CEO Steve Huffman admitted to secretly altering comments on the Donald Trump-themed subreddit "/r/The_Donald." Huffman had seen Reddit plagued by issues stemming from "/r/The_Donald"; in one such example a hiccup in Reddit's processing algorithm left users' homepages containing exclusively posts from "/r/The_Donald." But, wanting to have his cake and eat it too, Huffman chose to keep the illusion of freedom while subtly changing reality. Each instance drew heavy criticism from those equating the moves to censorship. Worse still, Twitter and Reddit are struggling to deal with a cold reality: championing "free speech" may mean playing host to ideas you do not agree with. Twitter CEO Jack Dorsey, like Huffman, had to answer critics who claimed his service enabled Donald Trump's election as President. When asked how he personally felt about Trump's election, and Trump's use of Twitter, he answered, "it's complicated." His response demonstrates a waning commitment to the "free speech wing."

    Calling the Internet a "public place" is a common argument among advocates for Internet free speech. Mike Rugnetta of PBS' "Idea Channel" argues that the Internet is more akin to a shopping mall than to a town square. Unlike the town crier's square, speech on the Internet occurs on private servers owned by individuals or corporations. Just as malls give the illusion of open public space while visitors remain on private property, users online are not necessarily afforded the freedoms they expect. Tech companies are now faced with a stark choice: adopt an "anything goes" policy and suffer the consequences, or start filtering content and suffer the consequences. Twitter, Facebook, and Reddit must now ask themselves: should we be more like shopping malls and less like the town square?

     

  • Long exposure photo of cars on a highway overpass at twilight. The left lane is a blur of white where traffic is moving; the right lane is a dotted canvas of tail lights marking a major traffic jam. Soft blues from the evening sky mix with the rough orange of urban life, and streetlights in the background reveal a world underneath the overpass that is dark both literally and metaphorically. The buildings and the overpass are all of the same concrete, with just a hint of the once expansive natural wonder of this land surviving in a lonely green park hemmed in by the roadway. The overpass is layered much like the way we experience life: while we wait trapped in traffic above, the cars below race to and fro, trapped in their own push for space. Amid the chaos, a red light frees the lanes in the bottom right, showing the constructed nature of this problem we call traffic. By our own design it comes and goes, ebbing and flowing not with the tides but with the red lights that dot the image.

    The Problem With Ethical Autonomy

    Can We Program Self-Driving Cars To Make Ethical Choices?

    Our conception of the future is often produced by a quixotic mix of the fictional and the real. Science fiction authors hand down a perspective of the future warped by a desire to reflect, and to change, the present and consequently the future. Intertwined with our reading is our own movement away from the present. Weaving together science fiction and the individual creates a future that is marked by fabulous technology but inhabited by ourselves. The childish flights of fancy we construct into our memory when examining the future we live in are held against two paradoxical standards: the imaginative conflation outlined above and the technological progress achievable in our lifetime. We are thus simultaneously impressed and disappointed by our technological circumstance. When Captain James T. Kirk first flipped open his communicator to request a transport back to the Enterprise, audiences were taken by the fantasy. Kirk was able to talk wirelessly, instantly, to his crew in orbit, as well as be "beamed" almost anywhere. Fifty years later cellphones are a pervasive, almost cultural, force in society, yet we still lament that we cannot "beam up," that we are still very much terrestrial beings. Though the technologies we now have access to might retroactively seem to be the "obvious" technologies humanity would pursue first, this ascribed logic of technological advance clouds our sight. When a technology seems to be of our imagined future, it is worthy of extra consideration.

    In an op-ed for the Pittsburgh Post-Gazette, President Barack Obama outlined an initiative by his administration to provide car manufacturers with a sixteen-point compliance list for the safety of "autonomous vehicles." Vehicle autonomy, often referred to as "self-driving cars," has appeared in the consciousness of government agencies seemingly overnight, though the technology for autonomous vehicles has existed for some time.[1] After Tesla's "Autopilot" program was involved in a fatal car accident, articles began to appear concerning various aspects of these vehicles.[2] The fallout from Tesla and the apparent blessing of the United States Government spurred a wave of technological analysis. Yet economic and legal discourse, and even the President's own initiative, sidestep ethical issues present at the foundation of vehicle autonomy.

    At the very heart of self-driving cars is the programming that allows them to function. Such "autonomy" is, in a philosophical sense, a bit of a misnomer. The type of "autonomy" that exists inside these cars is not that of a rational and conscious actor, but of a machine following a set of pre-programmed commands. Running on algorithms, the cars take inputs and run them through a series of rules to create an output. The "autonomy" of these self-driving cars is an illusion. The "freedom" these cars have is merely a programmed capacity to react in specific ways given the context of the situation. The cars do not have the freedom to act outside their rule bounds. They are designed to appear autonomous when it is their programming dictating their actions.
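    To make that concrete, here is a minimal sketch, in Python, of the kind of rule-following described above. The sensor readings, thresholds, and actions are invented purely for illustration; a real driving system is vastly more elaborate, but the principle of mapping inputs through fixed rules to an output is the same.

    ```python
    # A toy illustration of pre-programmed "autonomy": fixed rules map sensor
    # inputs to an action. Nothing outside these rules can ever happen.

    def plan_action(sensors: dict) -> str:
        """Run sensor inputs through a series of hand-written rules."""
        if sensors["obstacle_distance_m"] < 5 and sensors["speed_mps"] > 0:
            return "brake_hard"
        if sensors["lane_offset_m"] > 0.5:     # drifting right of center
            return "steer_left"
        if sensors["lane_offset_m"] < -0.5:    # drifting left of center
            return "steer_right"
        return "maintain_course"

    # The "decision" is exhausted by whatever cases the programmers anticipated.
    print(plan_action({"obstacle_distance_m": 3.2, "speed_mps": 14.0, "lane_offset_m": 0.1}))
    ```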

    In a car accident, it is the driver of the car who is the rational actor involved in the decision-making process, and the driver bears the moral culpability for the results. The United States is a vehicle nation, containing almost as many cars as there are people.[3] Millions of Americans commute to work every day via automobile, on interstates and highways across the country. As the President outlines, roads are dangerous. Thirty-five thousand people died in automobile accidents just last year. He goes on to say that self-driving cars will reduce the number of traffic fatalities.[4] However, it is not a stretch of the imagination to consider a situation where a car accident will inevitably result in death. How would we program a self-driving car to react in a scenario where, through the actions of the car, the end result would be fatal? What if a truck ahead stops short, and swerving in either direction would send the car into a motorcycle, while braking would leave the driver to be crushed by an oncoming truck? Do we program autonomous vehicles to always protect the driver? Or to minimize the amount of total harm? Such a scenario has produced two competing headlines: "Is your car programmed to kill you?" and "Is your car programmed to kill?" It appears that in allowing self-driving cars we are at least confirming the latter; they will, in some way, be programmed to kill. In a conventional car accident the onus is on the driver, who makes the ultimate decision, to swerve or to brake, and is thus held responsible. But who bears the responsibility when an autonomous vehicle crashes?
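    The competing headlines correspond to two different pieces of code. The sketch below, with harm estimates and maneuver names invented solely for illustration, shows how the same emergency could be resolved in opposite ways depending on which objective is written into the policy function.

    ```python
    # Two candidate policies for the no-win scenario above. The numbers are
    # made-up harm estimates, not real engineering data.

    MANEUVERS = {
        # maneuver: (estimated harm to occupant, estimated harm to others)
        "brake":        (0.9, 0.0),   # driver crushed by the truck behind
        "swerve_left":  (0.1, 1.0),   # motorcyclist struck
        "swerve_right": (0.1, 1.0),
    }

    def protect_occupant(options):
        """Always minimize harm to the car's own occupant."""
        return min(options, key=lambda m: options[m][0])

    def minimize_total_harm(options):
        """Minimize the summed harm to everyone involved."""
        return min(options, key=lambda m: sum(options[m]))

    print(protect_occupant(MANEUVERS))      # -> "swerve_left": the car saves its driver
    print(minimize_total_harm(MANEUVERS))   # -> "brake": the car sacrifices its driver
    ```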

    While Captain Kirk and Mr. Spock were guided by the hand of the Prime Directive, in the field of vehicle autonomy there is no overarching principle. With companies like Google and Uber already pushing to deploy this technology, and with the most recent position of the government being one of encouragement, it appears our haste to reach the future has superseded any reflection on this technology's ethical implications.[5] Improvement and innovation can mask the ethical challenges of new technology, and it remains to be seen how the problems of autonomous vehicles will be approached.

    [1] http://www.post-gazette.com/opinion/Op-Ed/2016/09/19/Barack-Obama-Self-driving-yes-but-also-safe/stories/201609200027

    [2] http://www.theverge.com/2016/6/30/12072408/tesla-autopilot-car-crash-death-autonomous-model-s

    [3] http://www.latimes.com/business/autos/la-fi-hy-ihs-automotive-average-age-car-20140609-story.html

    [5] The 116 page DOT report can be reviewed here: https://www.transportation.gov/AV

     

  • Painting by John Trumbull depicting The Death of General Warren at the Battle of Bunker's Hill on June 17, 1775 among fellow soldiers still engaged in warfare.

    The American Weapon

    We spend every day locked in a tension; our minds fight to weather a storm brewed by the minds of those around us. The nearer we seem to think we are to touching the true thoughts of another, the farther away we realize we have become. To our own minds, our intents and our actions are neatly presented. Yet when we pass a stranger on the sidewalk, as they draw in on us our mind scrambles to predict their intent. The inability to know another's intent, the blindness we have to it, haunts our experience. Such a human problem becomes more complex when paired with a human creation: the gun. In dealing with this issue, I am working on a narrow platform, considering a mere node in a greater web. In this article I would like to raise only a few questions, in hopes that they can be considered not only by readers of this blog but by other writers as well.

    About every week or so the milk carton in my refrigerator runs dry, forcing a trip to the grocery store. The routine is usually the same: walk to the car, start it, drive a few miles to the market, enter, get milk, check out, and return home. Such a chore is dreadfully boring and I often loathe taking the time. Throughout the entire journey, the intent of my actions is clear to me: I am going to get milk. At each step I cross paths with others. Had you stopped these strangers and asked them what my intent was at any given part of the trip, you would probably be greeted not only with different answers but also with a sense of bewilderment. How would they know the intent of another? One of the more common pieces of dating advice is to "make your intentions clear." Why? So the other person is not "guessing," and your actions are not misconstrued under the pretext of a different intent. Yet when a nervous man comes across as creepy, we would be in the minority in criticizing the woman for not understanding his intentions. Much of the difficulty in being human grows from the blindness our minds have to one another, which only makes matters more complicated for the rest of this article.

    Ask any art major what phrase they hear the most from family members at Christmas, and the answer is likely "what are you going to do with a major in ceramics?" The onus then falls on a stressed nephew to demonstrate the value of his chosen path of study. How does he go about doing that? There is a struggle in demonstrating the utility of an art degree. For the average uncle there is very little pragmatic value to it. What job will it get you? How will it help you make money? To raise a second line of thought: when we look at most things we tend to immediately ask what they are for. Our conception of their "goodness" stems from their usefulness. Utility is not hard to see in everyday objects; a shovel is good for digging holes, a kettle for boiling water, a key for unlocking a lock. For less cut-and-dried examples, like education, the pragmatic "goodness" is not as obvious.

    A gun, considered narrowly and from a reductionist stance, is a series of springs and levers that plunges a hammer-like arm into a bullet assembly. The gun includes the stock and barrel, but when considering what a gun is good for we can say it is good for igniting the powder in a bullet and allowing it to be fired. We can even take a more holistic view and include the bullet, saying that a gun is good for firing and aiding in the projection of a bullet.[1] To complicate this further, we can look to the Greek word telos, meaning "end." The telos is the ultimate end of an object; I will consider it here as the goal, or final purpose. We can pull this concept down into a real-world example to fully round out the exposition.

    In the early 1990s one very unlucky criminal made a deal with an undercover officer: he would provide a MAC-10 firearm in exchange for a few ounces of cocaine. After the deal went down he was promptly arrested and charged with the usual offenses. However, he was also charged with using a firearm in a drug-related crime. The use of a firearm, in this case, carries with it a heavier sentence. An appeal of this charge made its way to the Supreme Court of the United States, which was tasked with answering the question of whether trading a firearm for cocaine constituted "use" of that firearm. The Court eventually upheld the conviction, reasoning that though he had not used the firearm for its traditional, intended purpose, the word "use" connotes more than just "intended use" and does not exclude other ways to use a firearm. Justice Antonin Scalia dissented and later used Smith v. United States[2] as an example in his essay "Common-Law Courts in a Civil-Law System." Scalia very frankly states his opinion on "use":

    "The phrase 'uses a gun' fairly connotes use of a gun for what guns are normally used for, that is as a weapon. As I put the point in my dissent, when you ask someone 'Do you use a cane?' you are not inquiring whether he has his grandfather's antique cane as a decoration in the hallway."[3]

    Scalia gives us an interesting precedent: is there an intended, goal-like end for the gun? Can it be argued that a gun has a telos, or that a "normal" use of a gun exists?

    There is a danger in these mysteries, especially when paired. Guns have many uses: they can be used for hunting, for sport shooting, and even for decoration. Shedding the obvious, like encased decorative guns, there is a crux. Not only is there serious difficulty in understanding the gun, there is an even greater difficulty in deciphering the intent of the person in front of us. Now it does not seem unreasonable to ask, is a gun made to kill? Ardent supporters will say no, "guns don't kill people, people kill people," but while this is half the equation it seems to very clearly miss the other half. Putting this in a stronger form, one can claim the gun is a tool. Tools are fairly inert; the shovel does not dictate whether it is used for digging holes or used for decoration. The will of the user bears the accountability for the use of the tool. We can be correct in saying "shovels don't dig holes, people dig holes," but why was the shovel made? Tools can still be crafted with intended purposes. Medicines are made to treat certain illnesses, and, while they often have multiple uses, they were made with a specific malady in mind. Guns are produced just as shovels are produced, and this lets us ask two questions: "what are we making guns for?" and, more specifically, "what is our intent in making them?"

    We have considered intent, use, and teleological end, but can these questions be answered? Americans have a relationship with guns, that much is clear. The broader I go, the more I lose in my ability to examine the topic comfortably. These very narrow questions, ideally, bloom into a larger discussion; that is what I hope to generate, not to tackle on my own. The gun is an American weapon: our history was born of revolution, our constitution gives the people the right to bear arms, and now our culture sits mired in tense discourse. Are guns made to kill? What is our intent in making guns, and what is our intent in carrying them? The questions I hope to raise are ones that should be answered. Americans are still humans, and the problems we experience with guns are not divorced from the human condition.

     

    [1] I draw a distinction between what the gun does and how well it does it. The gun fires the bullet, but the barrel determines how well the bullet flies; I believe there is a distinction between bringing the bullet into action and then aiding it in its own function. I must also concede that I am considering only the narrowest sense of a gun. Many things are called "guns" but do not follow this definition, like the railgun, which accelerates a projectile electromagnetically. I am aware of the semantic issues, but for brevity and clarity I refer to "gun" and "firearm" as one and the same.

    [2] Smith v. United States, 508 U.S. 223 (1993).

    [3] Antonin Scalia, "Common-Law Courts in a Civil-Law System," in Philosophy of Law, 9th ed., ed. Joel Feinberg et al. (Boston: Wadsworth, 2010).

  • The Internet’s Eye

    Events happen before our eyes. "Over there," as sung by American men off to fight the world wars, no longer exists. "Over there" now plays out in the living rooms, cubicles, and pockets of a technologically saturated world. Television broadcasts have been "live" since the 1950s, and the distance across the globe and back has shrunk ever since. The world is the smallest it has ever been. Socially, people are connected across borders practically instantaneously. When events of disaster, of terror, and of mourning occur, we are able to congregate on the Internet, speaking socially, as "live" as television is broadcast. The breakneck pace of Internet sharing cuts both ways, allowing for a more connected world but at the same time giving false information the ability to propagate incredibly rapidly.

    The speeds involved in computer networking are almost impossible to conceptualize. Light, moving at c, has no equal in speed. NASA attempts a comparison on its website, illustrating that a jet traveling 500 miles per hour would cross the United States in about four hours, while light travels around the entire Earth seven and a half times in a single second. Information is encoded in light moving through submarine fiber optic cables, which use this speed to bridge the Internet between continents. A connection of this sort allows a man in Paris to log on to Reddit and share information about terrorists in a "Live Thread" as it happens. Majority opinion on this subject is generally favorable: increased information sharing allows facts to be dispersed quickly, across national boundaries, directly to the world. There is, however, a dark side to this ability.
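    Before turning to that dark side, the figures above are worth a quick sanity check. The sketch below uses the vacuum speed of light, the Earth's equatorial circumference, and an assumed transatlantic cable length and fiber refractive index; the exact cable numbers are illustrative rather than measurements.

    ```python
    # Back-of-the-envelope check of the speeds discussed above.
    SPEED_OF_LIGHT_MPS = 299_792_458        # meters per second, in a vacuum
    EARTH_CIRCUMFERENCE_M = 40_075_000      # meters, around the equator
    FIBER_REFRACTIVE_INDEX = 1.5            # light in glass travels roughly 2/3 as fast
    TRANSATLANTIC_CABLE_M = 5_600_000       # ~New York to London, assumed length

    laps_per_second = SPEED_OF_LIGHT_MPS / EARTH_CIRCUMFERENCE_M
    one_way_s = TRANSATLANTIC_CABLE_M / (SPEED_OF_LIGHT_MPS / FIBER_REFRACTIVE_INDEX)

    print(f"Light circles the Earth about {laps_per_second:.1f} times per second")   # ~7.5
    print(f"One way across the Atlantic in fiber: about {one_way_s * 1000:.0f} ms")  # ~28
    ```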

    On March 16th of 2013, Sunil Tripathi did not return to his Brown University dorm room. A month later he would be found deceased in the Providence River. A week before he was found, two bombs exploded at the Boston Marathon, igniting a social media firestorm. Internet vigilantes quickly joined police in the hunt for the perpetrators. Threads popped up on various social media services as Internet users scoured footage and personal profiles for answers. Eventually, Tripathi became their prime suspect; beyond a shadow of a doubt, they believed, he was their man. Soon hate messages began pouring in to Tripathi's family and tips began to flood the Boston police. The Tripathi family was subject to intense harassment, and the Facebook page "Help us find Sunil Tripathi" was inundated with racist, vitriolic messages directed at the then-deceased 22-year-old. As we now know, it was not Tripathi who placed pressure-cooker bombs at the finish line, and, to add insult to the injury of an already grieving mother, the Facebook page meant to help find her son had to be taken down. Pain of this magnitude is a demon no family should have to face. The Sunil Tripathi narrative is now a glaring black mark on the record of social media and speaks to a larger problem with high-speed information sharing.

    False information is dangerous, and when unchecked information can be transmitted almost instantaneously there is an added degree of danger. Once the genie has been let out of the bottle, it is almost impossible to get him back in. As Pell Center Director Jim Ludes noted a few months ago in a blog post on vaccines, myths are persistent and hard to kill even when you have the majority of scientific consensus on your side. When a claim, left unchecked, is allowed to propagate so quickly, it can just as quickly become canonized. In dealing with a tragedy like the recent events in Paris, social media can be a hand that both gives and takes away. The Internet has created a certain atmosphere of carelessness when it comes to speech. Opinions can be typed anonymously and thrown away just as easily, but so can accounts, especially accounts of disasters. The tumultuous atmosphere created in cyberspace should be approached carefully.

  • Did Al Gore Invent the Internet?

    In the course of researching topics I will occasionally find a source that says exactly what I was looking to say. Often, though, it is either too technical, synthesized into a greater topic, or just better used as a secondary source.

    When I first decided to write on Al Gore I was excited. "Did Al Gore invent the Internet?" was a question I knew I absolutely had the answer to. The history was there, as was the misquotation; this was a topic I could, with ease, completely cover in writing. So it was to my great dismay that, through the research process, I found a relatively accessible source that fully unpacks the question of Al Gore and the Internet. Rather than risk reinventing the wheel, I am going to post the article here. The authors have done what I consider the best possible job explaining Al Gore, and I will do my best to explain who they are.

    Even outside the technology sphere, Vint Cerf and Robert Kahn are actually fairly well known. Frequently giving talks, and occasionally enjoying the media spotlight (Cerf appeared on the Colbert Report just this summer), they are referred to as "Fathers of the Internet." Fathers they are: both men are cited as co-creators of TCP/IP. TCP stands for Transmission Control Protocol and IP for Internet Protocol. In our blog post about the cloud we went over the client/server style of architecture in computing. TCP/IP are protocols (think directions or instructions) that allow this type of communication to occur; they are the instructions that client/server communication is built on. You are probably most familiar with TCP/IP through the term "IP address." IP addresses are call numbers for computers using TCP/IP communication and allow other servers and computers to locate and identify your computer when online.[1] Whenever you use the Internet you are communicating through TCP/IP protocols and subsequently using a technology invented by Cerf and Kahn in the 1970s.
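    For readers who like to see the idea in code, here is a minimal sketch of a client and server exchanging a message over TCP using Python's standard socket module. The loopback address and port are arbitrary stand-ins; real Internet traffic crosses many networks, but the pattern of locating, connecting, and exchanging is the same.

    ```python
    # A minimal client/server exchange over TCP (loopback address, arbitrary port).
    import socket
    import threading

    server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_sock.bind(("127.0.0.1", 5050))   # the IP address and port identify the server
    server_sock.listen(1)

    def serve_once():
        conn, _addr = server_sock.accept()  # TCP handshake completes here
        with conn:
            print("server received:", conn.recv(1024).decode())
            conn.sendall(b"hello from the server")

    threading.Thread(target=serve_once, daemon=True).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", 5050)) # the client locates the server by its address
        client.sendall(b"hello from the client")
        print("client received:", client.recv(1024).decode())

    server_sock.close()
    ```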

    Here is what they have to say about Al Gore and the Internet:

    http://amsterdam.nettime.org/Lists-Archives/nettime-l-0009/msg00311.html

     

    [1] It is important to remember that when I say "locate and identify" we are talking about a type of communication; this is not to suggest your privacy is being invaded. Imagine trying to meet up with someone when you know neither what the person looks like nor where they are. Would a conversation between the two of you be possible?

  • 21 and Up: A Snapshot of Computer History

    I recently turned twenty-one years old, which is a bit of a milestone in the United States. Many hours were spent thinking about how old I had become and how I could relate my newfound age to this blog. Occasionally my academic path and my interest in technology intersect; today we will be going over a snapshot of technology history. Imagine what it would be like buying your first computer the year I was born – 1993. How different would the technology look compared to what we can buy today? Not only is it a challenging question, but its answer can show how far technology has come – in only twenty-one years.

    Let's say, then, that we have warped back to the 1990s, more specifically 1993. We would hear Whitney Houston's "I Will Always Love You" and The Proclaimers' "I'm Gonna Be (500 Miles)" on the radio, watch Michael Jordan and Scottie Pippen play on the Chicago Bulls, and be able to see Jurassic Park and Schindler's List in theaters. We could even use the phrase "I rented a VHS from Blockbuster but we have to rewind it first" and everyone would understand. What is most surprising about computers from this time period is how familiar they would appear. The basic setup from that era and today is the same, with computers using a monitor, tower, mouse, and keyboard (laptops were also available in 1993). Scratching the mouse around on its pad to wake up the computer would reveal an operating system that, while not as stylish as today's, would not be completely unusable. Operating a computer, though, would feel profoundly slow. Modern solid-state drive (SSD) computers can go from off to ready to use in seconds, while a Macintosh in 1993 would take almost a minute. Word processing was available, as was the Internet, although most people were using dial-up. Operating a computer has not changed much overall, but technologically 1993 and today are worlds apart.

    If you went to buy a Macintosh in 1993 you would probably pay around two thousand dollars. Adjusted for inflation that would be around thirty-three hundred in today's money. Computers were not exactly inexpensive. What you would get for your money would be about four or eight megabytes of RAM (depending on the model) and a five-hundred-megabyte hard drive (there were options as low as eighty megabytes). All of this was driven by a twenty-megahertz processor. The monitors of the time were cathode-ray-tube (CRT) displays, many of them monochrome, just like the televisions of the 1990s. CRT monitors are bulky, extremely heavy, and have the dubious advantage of being able to be heard before the picture can be seen. An iPhone 6 has two hundred and fifty-six times more storage (one hundred and twenty-eight gigabytes), one hundred and twenty-five times more RAM (one gigabyte), and is a phone, not a computer. We carry around in our pockets a device more powerful than fifteen computers of 1993, and yet we do not even think of it as a computer.
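    The comparison is easy to reproduce. The sketch below redoes the arithmetic with rounded, decimal figures; the inflation multiplier is an approximation rather than an official CPI calculation.

    ```python
    # Rough arithmetic behind the 1993-versus-today comparison (decimal units).
    mac_1993 = {"ram_mb": 8, "storage_mb": 500, "clock_mhz": 20, "price_usd": 2_000}
    iphone_6 = {"ram_mb": 1_000, "storage_mb": 128_000}   # 1 GB RAM, 128 GB storage

    print("Storage ratio:", iphone_6["storage_mb"] / mac_1993["storage_mb"])  # 256.0
    print("RAM ratio:", iphone_6["ram_mb"] / mac_1993["ram_mb"])              # 125.0

    APPROX_INFLATION_MULTIPLIER = 1.65   # assumed 1993 -> mid-2010s adjustment
    print("Roughly $", round(mac_1993["price_usd"] * APPROX_INFLATION_MULTIPLIER), "in today's dollars")
    ```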

    Being twenty-one makes me feel old, and when I started looking at computers of the 1990s I felt even older. The performance machines of that time are children compared to the average smartphone. What used to be too heavy, too hot, and too unstable to even move off our desks now sits comfortably in our pockets. For computers, the 1990s are a strange time: a decade not different enough to be foreign, yet not modern enough to feel familiar.

    One of the hardest parts of maintaining a hotel is often the in-room televisions. When a television matches what we believe is current we often won't notice it, yet when it is too old it becomes an eyesore. Thus the hotel owner faces a problem. The speed at which our technology advances is enough to make even five-year-old televisions seem obsolete. The same is true of computers, where using a machine that is five or six years old will often seem insufferable. The hotel owner has a choice: either buy new televisions or accept that his rooms will appear dated. Not much can make a twenty-one-year-old college student feel old, but when I look back at what was normal the year I was born, I can't help but want to shake a little dust out of my bones.

     

     

  • Why is Silicon Valley called Silicon Valley?

    This history byte is a Weekly What Is dedicated to the why and where of a technology name.

    When learning the geography of the American West Coast one realizes fairly quickly that there are a lot of valleys. The San Fernando Valley, Santa Clara Valley, Death Valley, Mission Valley; more than enough to go around. Most such places in the American West were named by the Spanish, who actively explored and settled the region. Yet there is one popular valley with a fairly strange name: Silicon Valley. Silicon is a metalloid, atomic number fourteen, with an atomic weight of about twenty-eight. The uses for silicon and its compounds are varied: silicone grease can be used to lubricate a Rubik's Cube to make it spin fast enough for a "speed cubing" competition, silica[1] gel packets are heavily relied upon to keep moisture out of packaging, and silica is even used as a food additive. None of this, however, explains why a tract of California is named after silicon; luckily this blog post will do just that.

    Silicon Valley is called Silicon Valley because of sand. The term first appeared in the January 11th, 1971 edition of the trade weekly Electronic News, where journalist Don Hoefler had titled a three-part series examining the history of the semiconductor industry "Silicon Valley U.S.A." The term rapidly became associated with technology, in such a way that the two are now almost inseparable. But back to sand. There is a reason Hoefler chose to title his articles on semiconductors after silicon. Many companies manufacturing computer chips (like Intel) were either operating or headquartered throughout the region now known as Silicon Valley back in 1971. And the first ingredient in the manufacturing process of computer chips happens to be – sand.

    Understandably, the term "computer chip" might be a bit foreign, but the truth is actually less complex than the name implies. Last week we went over the difference between hardware and software; this week we won't be talking about software at all, which makes our lives much easier. We will only be examining hardware, and in the interest of clarity, one specific type of hardware: the Central Processing Unit (CPU). This is your computer's engine, handling most of its functions and processes. Admittedly, making a CPU is a bit of a strange process. Manufacturers first take silica sand and heat it up. Those familiar with glassmaking will see where this is going. When making glass, the glassmaker will first take sand, put it in a crucible, and heat it (usually to over 3,000 degrees Fahrenheit). The melted sand is then shaped, blown, and cooled, resulting in glass. The same basic step begins the creation of a CPU. Silica sand is melted down and formed into an extremely pure mono-crystal.[2] The crystal is then cut into extremely thin wafers anywhere from one to twelve inches in diameter. Each wafer is polished and cleaned of any impurities.[3] Wafers are meant to hold enormous numbers of very small transistors. Transistors are the foundation of electronic computing and deserve their own blog post (coming soon), but what is important to know here is that they are very tiny on-off switches, like a sink faucet or a light switch. The process of putting transistors on silicon wafers is a complex chemical one, not entirely beneficial to learn unless you're studying to enter the industry. The result of the process is a wafer filled with transistors. Wafers are then tested for functionality, and working sets of transistors are cut from them. These tiny rectangles, filled with transistors, are the basis of your Central Processing Unit. The chip is then packaged in a housing that allows it to connect to your computer.
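    The "tiny on-off switch" idea can be played with directly. The toy model below treats each transistor as a switch that either passes a signal or blocks it, then wires switches in series and in parallel to build AND and OR logic; it is a conceptual sketch, not a description of real CMOS circuitry.

    ```python
    # Toy model: a transistor as an on/off switch, composed into simple logic gates.

    def transistor(gate: bool, signal: bool) -> bool:
        """Pass the signal through only when the gate input switches it on."""
        return signal if gate else False

    def and_gate(a: bool, b: bool) -> bool:
        # Two switches in series: the signal survives only if both are on.
        return transistor(b, transistor(a, True))

    def or_gate(a: bool, b: bool) -> bool:
        # Two switches in parallel: either one is enough to pass the signal.
        return transistor(a, True) or transistor(b, True)

    for a in (False, True):
        for b in (False, True):
            print(f"a={a!s:5} b={b!s:5}  AND={and_gate(a, b)!s:5}  OR={or_gate(a, b)}")
    ```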

    Silicon Valley takes its name from the large population of companies doing this work, headquartered or operating in the San Francisco Bay Area. As with most technology terms, it evolved and stuck, eventually coming to represent the entire technology industry, due in part to a large influx of similar companies to the Bay Area during the technology booms that followed 1971. Companies like eBay, Adobe, HP, Yahoo! and Lockheed Martin are still based in the Valley. The size of these Fortune 1000 companies stands in stark contrast to where their home gets its name: tiny grains of sand.

    [1] Silica is an oxide of silicon, a compound of silicon and oxygen. Its chemical formula is SiO2 (silicon dioxide).

    [2] "Mono-crystal" refers to the silicon ingot that emerges from the Czochralski process. In the interests of simplicity and clarity this blog only discusses Czochralski-based ingot making and not "float zone" growing methods.
    An image of the resulting ingot

    [3] An image of the resulting wafer

    "Weekly What Is" breaks down a new technology-related word every Friday.
