• Still from U.S. Navy footage of a Tomahawk missile launch from the USS Porter

    #31 in Trending: U.S. Strikes Syria – Picks of the Week

    Brian Williams is ‘guided by the beauty of our weapons’ in Syria strikes | The Washington Post

    US strikes in Syria launched from USS Porter | YouTube

    Dozens of U.S. Missiles Hit Air Base in Syria | The New York Times

    Trump launches military strike against Syria | CNN Politics


    Friday morning the thirty-first top trending video on YouTube, beneath AsapSCIENCE’s “Are you normal?,” celebrities eating hot wings, and Jimmy Fallon and the Rock “photobombing,” is a video from the United States Navy showing Tomahawk missile launches from the USS Porter. The under-lit video spikes briefly as rocket engines ignite, revealing the deck of the Porter. There is no movement, no change, only a hiss and a bright light as a spark lofts out of frame. Little on the deck of the Porter can be described as “human”; no crewmen or beings of any kind exist in the footage. Thursday night I watched the almost three-minute video in bed, in pajamas, full from dinner and icing my shoulder after the gym. When the video ended YouTube queued up another, footage from the USS Ross, but below the queue it recommended “Impractical Jokers” videos, a web series from a video game magazine, and clips from the television show “Scrubs.” I sent the video of the Porter to two friends via Facebook chat; one called it “eerie.” Last night Brian Williams called the footage “beautiful,” invoking the words of Leonard Cohen: “I am guided by the beauty of our weapons.”

    YouTube is replete with videos of people dying. Should YouTube be unsatisfactory, any number of websites can sate one’s appetite for death; the social network Reddit contains a subreddit titled just that: “/r/watchpeopledie.” After the video of the Porter I looked up “Tomahawk missile impact” and found a few videos of missile tests, the United States Navy blowing up ships in the ocean, and a video of an apparent Tomahawk strike on ISIS fighters; people died in that video. Following the impact, the fireball, and the cries through my computer speakers, I closed the video and watched some clips from Impractical Jokers. I shut my laptop, went to bed, woke up for work, drove in with no traffic, and sat and watched the video of the USS Porter again. At my desk in a Gilded Age mansion on Bellevue Avenue I consumed United States military might. I watched AC-130 gunship training, A-10 Warthog strafing runs, and nuclear weapons tests. Almost every piece of United States war materiel is the subject of a YouTube video.

    I was seven years old on September 11th, 2001. News spread through my elementary school and my teacher turned on the television in our classroom, a decision whose merit I still wrestle with (my mother has no such ambivalence), and a classroom of twenty or so seven- and eight-year-olds watched an event none of us could grasp. We fed off the fear, the consternation, the façade of calm put on by our teacher. When I went home my grandmother tried to comfort me; my father was stranded on Long Island and my mother still at work. I remember one word: “terrorist.” My grandmother assured me no terrorists could get me in our den. Now I know she was unsure. We saw the smoke from our New Jersey suburb, and from then on my memory is shoddy. It is hard for me to untangle what I remember and what I’ve made myself remember. We went to war shortly after, and for the majority of my life we were at war.

    Soon children born after September 11th will graduate high school. Some of them already have driver’s licenses. We usually say this as a way to make ourselves feel old, to link the age of a young person to an event deeply seared in our memory, trauma so intense the taste remains stuck between our teeth. They’ve been at war almost their entire lives. Wikipedia claims the war in Afghanistan ended in 2014 and the war in Iraq ended in 2011. We know those dates aren’t accurate; troops were still in Afghanistan into 2016. The wars continued, and for some children the war comprised an entire lifespan. But war did not affect the overall trajectory of their lives. Some will say this is untrue, that the war touched every facet of our lives. They would be right. Policies related to the war and to the post-9/11 reaction have shaped these children’s lives. But in a way, they don’t connect to the war. The trickle-down policies, the professional military with no draft, the far-awayness of the conflict: did the war even really exist?

    Watching missiles launch from the deck of the Porter I thought about myself at 15, myself at 16 and 17. I thought about those pieces of my teenage-boy brain not completely formed on the world; not quite right. Now those we want to call “young men and women” are watching Tomahawk missiles fly through the dark and flicker out in the night. They see it happen on their computer screens. An image ends at the corners of the screen; behind it, no depth. MIT sociologist Sherry Turkle, in studying social interactions, noticed that through digital communication we come to expect more “frictionless” interactions. While problems worked out by looking at another person are difficult, messy, and taxing, interactions through media like text and Facebook chat carry no burrs. Interactions are diffuse, sliding between relationships and persons easily. Images of war become just as slippery. War, the “#31 top trending video” on YouTube, takes on a flatness, an empty, disconnected image. Do we watch these videos delighted that our wars are as frictionless as our social lives?

    Every few weeks or so I’ll be confronted with death in a small rectangle, only a few inches in area, on my Facebook feed. Soldiers shot in the head on camera, suicide bombs exploding; I once met a man who described to me his favorite videos of people dying. These are infinitely frictionless, able to be turned off at a moment’s notice. So much so that I’ve grown concerned about myself, concerned that I can see atrocities, see death in front of my face, and simply grunt, complain, and carry on unbothered. While September 11th, 2001 is a scar on my mind, I cannot remember all the videos and pictures of death I’ve inadvertently come across, or been sent as a “joke.” I watched the missile launch alone, atomized in a cloud of brute information, sanitized, presented flat. My grandmother was scared on 9/11. Yet each following “event” carried less sting. Awash in a collection of weak internet ties, destruction, death, and fear are stripped from their bearings, restructured, and “recommended for me,” as façade. Such digital manifestation breaks down the sinews of our most traumatic tendencies. – Francis Quigley


    Image Credit: United States Department of Defense. 

  • Businessman using laptop computer

    The Onus of Choice: Picks of the Week

    Facebook Wants Users to Help It Weed Out Fake News | Forbes

    Jack Dorsey says Twitter is not responsible for Trump’s election, but it’s ‘complicated’ | CNBC

    Reddit moves against ‘toxic’ Trump fans | BBC

    Twitter bans multiple ‘alt-right’ accounts | engadget

    2016 was not a good year to be a tech company. Twitter is struggling to turn a profit, Facebook is still reeling from its massive fake news debacle, and major web forums are buckling under rampant abusive behavior. It was the witching hour of the Internet, and we saw at full bore the more grotesque half of its Janus face. Fake news, harassment, echo chambers, and conspiracy theories overwhelmed the core value of the Internet: democratization. Now major tech companies are coming to grips with a problem they have delicately danced around for a decade: choosing between idealized free speech and reality.

    Twitter is famous, or infamous, for proclaiming itself “the free speech wing of the free speech party of the Internet.” Since its inception Twitter has struggled to maintain its hardline commitment to free speech while consistently fumbling issues of abuse. For years the company remained unable to find a comfortable position, vacillating between deploying algorithms to shield President Barack Obama from inflammatory or offensive questions during his Question and Answer session and flatly ignoring victims of sustained, targeted campaigns of abuse. While Twitter struggled with its demons quietly, similar problems manifested elsewhere. For each the root cause was the same: free speech. Connecting the world openly and freely, disseminating information equally, and giving a voice to all are core tenets of Internet canon, but the past year has revealed that each of those adverbs is qualified.

    Companies like Reddit and Twitter, fearing failure, have taken steps to create a welcoming appearance and to attract and retain users. Responding to criticism, Twitter banned “alt-right” accounts, and Facebook began rolling out new measures to combat fake news. Reddit CEO Steve Huffman admitted to secretly altering comments on the Donald Trump-themed subreddit “/r/The_Donald.” Huffman had seen Reddit plagued by issues stemming from “/r/The_Donald”; in one such example a hiccup in Reddit’s processing algorithm led to users’ homepages containing exclusively posts from “/r/The_Donald.” But, wanting to have his cake and eat it too, Huffman chose to keep the illusion of freedom while subtly changing reality. Each instance drew heavy criticism, with critics equating the moves to censorship. Worse still, Twitter and Reddit are struggling with a cold reality: championing “free speech” may mean playing host to ideas you do not agree with. Twitter CEO Jack Dorsey, like Huffman, had to answer critics who claimed his service enabled Donald Trump’s election as President. When asked how he personally felt about Trump’s election and Twitter use, he answered “it’s complicated.” His response demonstrates a waning commitment to the “free speech wing.”

    Advocates for Internet free speech commonly argue that the Internet is a “public place.” Mike Rugnetta of PBS’ “Idea Channel” counters that the Internet is more akin to a shopping mall than a town square. Unlike speech from the town crier, speech on the Internet occurs on private servers owned by individuals or corporations. Just as a mall gives the illusion of open public space while visitors remain technically on private property, users of these platforms are not necessarily afforded the freedoms of a true public square. Tech companies are now faced with a stark choice: adopt an “anything goes” policy and suffer the consequences, or start filtering content and suffer the consequences. Twitter, Facebook, and Reddit must now ask themselves: should we be more like shopping malls and less like the town square?


  • Long exposure photo of cars on a highway overpass. The left lane is a blur of white showing traffic moving, while the right lane is a dotted canvas of tail lights showing a major traffic jam. Soft blues from a twilight sky mix with the rough orange of urban life. Streetlights in the background reveal a world underneath the overpass, one that is dark both metaphorically and literally. The photo shows concrete life coming to fruition: the buildings and the overpass itself are all of the same material. What is this urban life? It is a complex mix of blue and orange on a concrete palette, drowning out the cries of a lonely green park. Just a hint of the once expansive natural wonder of this land survives, surrounded by the overpass, another metaphor. The overpass is layered much like the way we experience life. While we may wait trapped in traffic above, below the cars race to and fro, trapped in an inevitable push for space. Amidst the chaos of urban life a red light frees lanes in the bottom right, showing the constructed nature of this problem we call traffic. By man’s design traffic comes and goes, ebbing and flowing not with the tides but with the red lights that dot the image.

    The Problem With Ethical Autonomy

    Can We Program Self-Driving Cars To Make Ethical Choices?

    Our conception of the future is often produced by a quixotic mix of the fictional and the real. Science fiction authors hand down a perspective of the future warped by a desire to reflect, and change, the present and consequently the future. Intertwined with our reading is our own movement away from the present. Weaving together science fiction and the individual creates a future that is marked by fabulous technology but inhabited by ourselves. The childish flights of fancy we build into our memory are held, when we examine the future we actually live in, against two paradoxical standards: the imaginative conflation outlined above and the technological progress achievable in our lifetime. We are thus simultaneously impressed and disappointed by our technological circumstance. When Captain James T. Kirk first flipped open his communicator requesting transport back to the Enterprise, audiences were taken by the fantasy. Kirk was able to talk wirelessly, instantly, to his crew in orbit, as well as be “beamed” almost anywhere. Fifty years later cellphones are a pervasive, almost cultural, force in society, yet we still lament that we cannot “beam up,” that we are still very much terrestrial beings. Though the technologies we now have access to might retroactively seem the “obvious” technologies humanity would pursue first, this ascribed logic of technological advance clouds our sight. When a technology seems to be of our imagined future, it is worthy of extra consideration.

    In an op-ed for the Pittsburgh Post-Gazette, President Barack Obama outlined an initiative by his administration to provide car manufacturers with a sixteen-point compliance list for the safety of “autonomous vehicles.” Vehicle autonomy, often referred to as “self-driving cars,” has appeared in the consciousness of government agencies seemingly overnight, though the technology for autonomous vehicles has existed for some time.[1] After Tesla’s “autopilot” program was involved in a fatal car accident, articles began to appear concerning various aspects of these vehicles.[2] The fallout from Tesla and the apparent blessing of the United States Government spurred a wave of technological analysis. Yet economic and legal discourse, and even the President’s own initiative, sidestep ethical issues present at the foundation of vehicle autonomy.

    At the very heart of self-driving cars is the programming that allows them to function. Such “autonomy” is, in a philosophical sense, a bit of a misnomer. The type of “autonomy” that exists inside these cars is not that of a rational and conscious actor, but of a machine following a set of pre-programmed commands. Running on algorithms, the cars take inputs and pass them through a series of rules to produce an output. The “autonomy” of these self-driving cars is an illusion: their “freedom” is merely a program reacting in specified ways to the context of the situation. The cars do not have the freedom to act outside their rule bounds. They are designed to appear autonomous when it is their programming dictating their actions.
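    The input-through-rules-to-output loop described above can be caricatured in a few lines. This is only an illustrative sketch: the sensor readings, thresholds, and action names are invented for the example and are not drawn from any real vehicle's software.

```python
# A toy "autonomous" controller: nothing but pre-programmed rules.
# All sensor values and thresholds here are invented for illustration;
# they are not taken from any real vehicle.

def decide(obstacle_distance_m: float, speed_kmh: float) -> str:
    """Map sensor inputs through fixed rules to an action -- no deliberation."""
    if obstacle_distance_m < 5:
        return "emergency_brake"
    if obstacle_distance_m < 20 and speed_kmh > 40:
        return "slow_down"
    return "maintain_speed"

# The "decision" is fully determined by the rule table above:
print(decide(obstacle_distance_m=3, speed_kmh=60))   # emergency_brake
print(decide(obstacle_distance_m=15, speed_kmh=60))  # slow_down
print(decide(obstacle_distance_m=50, speed_kmh=60))  # maintain_speed
```

    However sophisticated the real rule set, the point stands: the function never acts outside the branches written into it, which is the sense in which the car's "freedom" is an illusion.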

    In a car accident it is the driver who is the rational actor involved in the decision-making process, and the driver bears the moral culpability for the results. The United States is a vehicle nation, containing almost as many cars as people.[3] Millions of Americans commute to work every day via automobile, on interstates and highways across the country. As the President outlines, roads are dangerous: thirty-five thousand people died in automobile accidents just last year. He goes on to say that self-driving cars will reduce the number of traffic fatalities.[4] However, it is not a stretch of the imagination to consider a situation where a car accident will inevitably result in death. How would we program a self-driving car to react in a scenario where, through the actions of the car, the end result would be fatal? What if a truck ahead stops short, swerving in either direction would send the car into a motorcycle, and braking would leave the driver crushed by the truck behind? Do we program autonomous vehicles to always protect the driver? Or to minimize the total harm? Such a scenario has produced two competing headlines: “Is your car programmed to kill you?” and “Is your car programmed to kill?” It appears that in allowing self-driving cars we are at least confirming the latter; they will, in some way, be programmed to kill. In a conventional car accident the onus is on the driver, who makes the ultimate decision, to swerve or to brake, and is thus held responsible. But who bears the responsibility when an autonomous vehicle crashes?

    While Captain Kirk and Mr. Spock were guided by the hand of the Prime Directive, in the field of vehicle autonomy there is no overarching principle. With companies like Google and Uber already pushing to deploy this technology, and the Government’s most recent position being one of encouragement, it appears our haste to reach the future has superseded any reflection on this technology’s ethical implications.[5] Improvement and innovation can mask the ethical challenges of new technology, and it remains to be seen how the problems of autonomous vehicles will be approached.

    [1] http://www.post-gazette.com/opinion/Op-Ed/2016/09/19/Barack-Obama-Self-driving-yes-but-also-safe/stories/201609200027

    [2] http://www.theverge.com/2016/6/30/12072408/tesla-autopilot-car-crash-death-autonomous-model-s

    [3] http://www.latimes.com/business/autos/la-fi-hy-ihs-automotive-average-age-car-20140609-story.html

    [5] The 116 page DOT report can be reviewed here: https://www.transportation.gov/AV


  • Painting by John Trumbull depicting The Death of General Warren at the Battle of Bunker's Hill on June 17, 1775 among fellow soldiers still engaged in warfare.

    The American Weapon

    We spend every day locked in a tension; our minds fight to weather a storm brewed by the minds of those around us. The nearer we seem to get to the true thoughts of another, the farther away we realize we have become. To ourselves, our intents and our actions are neatly presented. Yet when we pass a stranger on the sidewalk, as they draw in on us our mind scrambles to predict their intent. The inability to know another’s intent, our blindness to it, haunts our experience. Such a human problem becomes more complex when paired with a human creation: the gun. In dealing with this issue, I am working on a narrow platform, considering a mere node in a greater web. In this article I would like to raise only a few questions, in hopes that they can be considered, not only by readers of this blog, but by other writers as well.

    About every week or so the milk carton in my refrigerator runs dry, forcing a trip to the grocery store. The routine is usually the same: walk to the car, start it, drive a few miles to the market, enter, get milk, check out, and return home. Such a chore is dreadfully boring and I often loathe taking the time. Throughout the entire journey, the intent of my actions is clear to me: I am going to get milk. At each step I cross paths with others. Had you stopped these strangers and asked what my intent was at any given part of the trip, you would probably be greeted not only with different answers but also with a sense of bewilderment. How would they know the intent of another? One of the more common pieces of dating advice is to “make your intentions clear.” Why? So the other person is not “guessing,” and your actions are not misconstrued under the pretext of a different intent. Yet when a nervous man comes across as creepy, we would be in the minority in criticizing the woman for not understanding his intentions. Much of the difficulty in being human grows from the blindness our minds have to one another, which only makes matters more complicated for the rest of this article.

    Ask any art major what phrase they hear most from family members at Christmas; the answer is likely “what are you going to do with a major in ceramics?” The onus then falls on a stressed nephew to demonstrate the value of his chosen path of study. How does he go about doing that? There is a struggle in demonstrating the utility of an art degree; for the average uncle there is very little pragmatic value to it. What job will it get you? How will it help you make money? To raise a second line of thought: when we look at most things we tend to immediately ask what they are for. Our conception of their “goodness” stems from their usefulness. Utility is not hard to see in everyday objects: a shovel is good for digging holes, a kettle for boiling water, a key for unlocking a lock. For less cut-and-dried examples, like education, the pragmatic “goodness” is not as obvious.

    A gun, considered narrowly and reductively, is a series of springs and levers that plunges a hammer-like arm into a bullet assembly. The gun includes the stock and barrel, but when considering what a gun is good for we can say it is good for igniting the powder in a bullet and allowing it to be fired. We can take a more holistic view and include the bullet, saying that a gun is good for firing and aiding in the projection of a bullet.[1] To complicate this further, we can look to the Greek word telos, meaning “end.” The telos is the ultimate end of an object; I will consider it here as the goal, or final purpose. We can pull this concept down into a real-world example to fully round out the exposition.

    In the early 1990s one very unlucky criminal made a deal with an undercover officer: he would provide a MAC-10 firearm in exchange for a few ounces of cocaine. After the deal went down he was promptly arrested and charged with the usual offenses. However, he was also charged with using a firearm in a drug-related crime, a charge that carries a heavier sentence. An appeal made its way to the Supreme Court of the United States, which was tasked with answering whether trading a firearm for cocaine constituted “use” of that firearm. The Court ultimately upheld the conviction, reasoning that though he had not used the firearm for its traditional, intended purpose, the word “use” connotes more than “intended use” and does not exclude other ways to use a firearm. Justice Antonin Scalia dissented, and later used Smith v. United States as an example in his essay Common-Law Courts in a Civil-Law System.[2] Scalia very frankly states his opinion on “use”:

    “The phrase “uses a gun” fairly connotes use of a gun for what guns are normally used for, that is as a weapon. As I put the point in my dissent, when you ask someone “Do you use a cane?” you are not inquiring whether he has his grandfather’s antique cane as a decoration in the hallway.”[3]

    Scalia gives us an interesting precedent: is there an intended, goal-like end for the gun? Can it be argued that a gun has a telos, or that a “normal” use of a gun exists?

    There is a danger in these mysteries, especially when paired. Guns have many uses: they can be used for hunting, for sport shooting, even for decoration. Shedding the obvious, like encased decorative guns, there is a crux. Not only is there serious difficulty in understanding the gun, there is an even greater difficulty in deciphering the intent of the person in front of us. Now it does not seem unreasonable to ask: is a gun made to kill? Ardent supporters will say no, “guns don’t kill people, people kill people,” but while this is half the equation it seems to very clearly miss the other half. Putting this in a stronger form, one can claim the gun is a tool. Tools are fairly inert; the shovel does not dictate whether it is used for digging holes or for decoration. The will of the user bears the accountability for the use of the tool. We can be correct in saying “shovels don’t dig holes, people dig holes,” but why was the shovel made? Tools can still be crafted with intended purposes. Medicines are made to treat certain illnesses, and, while they often have multiple uses, they were made with a specific malady in mind. Guns are produced just as shovels are produced, and this lets us ask two questions: “what are we making guns for?” and, more specifically, “what is our intent in making them?”

    We have considered intent, use, and teleological end but can these questions be answered? Americans have a relationship with guns, that much is clear. The broader I go, the more I lose in my ability to examine a topic comfortably. These very narrow questions, ideally, bloom into larger discussion, but that is what I hope to generate, not to tackle on my own. The gun is an American weapon, we have seen our history born of revolution, our constitution give to the people the right to bear arms, and now our culture sits mired in tense discourse. Are guns made to kill? What is our intent in making guns and what is our intent in carrying them? The questions I hope to raise are ones that should be answered. Americans are still humans and the problems we experience with guns are not divorced from the human condition.


    [1] I draw a distinction between what the gun does and how well it does it. The gun fires the bullet, but the barrel determines how well the bullet flies; I believe there is a distinction between bringing the bullet into action and aiding it in its own function. I must also make a concession in that I am considering only the narrowest sense of a gun. Many things are called “guns” but do not follow this definition, like the railgun, which accelerates a projectile electromagnetically. I am aware of the semantic issues, but for brevity and clarity I refer to “gun” and “firearm” as one and the same.

    [2] Smith v. United States, 508 U.S. 223 (1993)

    [3] Antonin Scalia, “Common-Law Courts in a Civil-Law System,” in Philosophy of Law, Ninth Edition, ed. Joel Feinberg et al. (Boston: Wadsworth, 2010).

  • Ortelius world map, 1570

    The Internet’s Eye

    Events happen before our eyes. “Over there,” as sung by American men off to fight the world wars, no longer exists. “Over there” now plays out in the living rooms, cubicles, and pockets of a technologically saturated world. Television broadcasts have been “live” since the 1950s, and the distance across the globe and back has shrunk ever since. The world is the smallest it has ever been. Socially, people are connected across borders practically instantaneously. When events of disaster, of terror, and of mourning occur, we are able to congregate on the Internet, speaking socially, as “live” as television is broadcast. The breakneck pace of Internet sharing cuts both ways, allowing for a more connected world but also giving false information the ability to propagate incredibly rapidly.

    The speeds involved in computer networking are almost impossible to conceptualize. Light moves at c, a speed that has no equal. NASA attempts a comparison on its website, illustrating that a jet traveling 500 miles per hour would cross the United States in about four hours, while light travels around the entire Earth seven and a half times in a single second. Information is encoded in light moving through submarine fiber optic cables, using this speed to bridge the Internet between continents. A connection of this sort allows a man in Paris to log on to Reddit and share information about terrorists in a “Live Thread” as it happens. Majority opinion on this subject is generally favorable: increased information sharing allows facts to be dispersed quickly, across national boundaries, directly to the world. There is, however, a dark side to this ability.
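    The seven-and-a-half-laps-per-second figure is easy to check with back-of-envelope arithmetic. The sketch below uses standard reference values (speed of light ~299,792 km/s, Earth's equatorial circumference ~40,075 km); the trans-Atlantic cable length is a rough illustrative assumption, not a measurement of any particular cable.

```python
# Back-of-envelope check of the speed comparison in the text.
C_KM_S = 299_792          # speed of light in a vacuum, km/s
EARTH_CIRCUM_KM = 40_075  # Earth's equatorial circumference, km

laps_per_second = C_KM_S / EARTH_CIRCUM_KM
print(f"Light circles the Earth about {laps_per_second:.1f} times per second")

# Light in optical fiber travels at roughly 2/3 of c, so even an idealized
# trans-Atlantic round trip (assuming ~12,000 km of cable each way) takes
# on the order of a tenth of a second:
fiber_speed_km_s = C_KM_S * 2 / 3
rtt_ms = 2 * 12_000 / fiber_speed_km_s * 1000
print(f"Idealized trans-Atlantic round trip: ~{rtt_ms:.0f} ms")
```

    The first figure matches NASA's comparison; the second shows why, even at these speeds, intercontinental links still have measurable latency.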

    On March 16th, 2013, Sunil Tripathi did not return to his Brown University dorm room. A month later he would be found deceased in the Providence River. A week before he was found, two bombs exploded at the Boston Marathon, igniting a social media firestorm. Internet vigilantes quickly joined police in the hunt for the perpetrators. Threads popped up on various social media services as Internet users scoured footage and personal profiles for answers. Eventually, Tripathi became their prime suspect; beyond a shadow of a doubt, he was their man. Soon hate messages began pouring in to Tripathi’s family and tips began to flood the Boston police. The family was subjected to intense harassment, and the Facebook page “Help us find Sunil Tripathi” was inundated with racist, vitriolic messages directed at the then-deceased 22-year-old. As we now know, it was not Tripathi who placed pressure cooker bombs at the finish line, and, to add insult to the injury of an already grieving mother, the Facebook page meant to help find her son had to be taken down. Pain of this magnitude is a demon no family should have to face. The Sunil Tripathi narrative is now a glaring black mark on the record of social media and speaks to a larger problem with high-speed information sharing.

    False information is dangerous, and when unchecked information can be transmitted almost instantaneously there is an added degree of danger. Once the genie has been let out of the bottle, it is almost impossible to get him back in. As Pell Center Director Jim Ludes noted a few months ago in a blog post on vaccines, myths are persistent and hard to kill even when you have the majority of scientific consensus on your side. When a claim, left unchecked, is allowed to propagate so quickly, it can just as quickly become canonized. In dealing with a tragedy like the recent events in Paris, social media can be a hand that both gives and takes away. The Internet has created a certain atmosphere of carelessness when it comes to speech. Opinions can be typed anonymously and thrown away just as easily, but so can accounts, especially of disasters. The tumultuous atmosphere created in cyberspace should be approached carefully.

  • Did Al Gore Invent the Internet?

    In the course of researching topics I will occasionally find a source that says exactly what I was looking to say. Often, though, such sources are either too technical, synthesized into a greater topic, or better used as secondary sources.

    When I first decided to write on Al Gore I was excited. “Did Al Gore invent the internet?” was a question I knew I absolutely had the answer to. The history was there, as was the misquotation; this was a topic I could, with ease, completely cover in writing. So it was to my great dismay that, through the research process, I found a relatively accessible source that fully unpacks the question of Al Gore and the Internet. Rather than risk reinventing the wheel, I am going to post the article here; the authors have done what I consider the best possible job of explaining Al Gore, and I will do my best to explain who they are.

    Outside of the technology sphere Vint Cerf and Robert Kahn are actually fairly well known. Frequently giving talks, and occasionally enjoying the media spotlight (Cerf appeared on the Colbert Report just this summer), they are referred to as “Fathers of the Internet.” Fathers they are: both men are cited as co-creators of TCP/IP, where TCP stands for Transmission Control Protocol and IP for Internet Protocol. In our blog post about the cloud we went over the client/server style of architecture in computing. TCP/IP is the set of protocols (think directions or instructions) that allows this type of communication to occur; it is the set of instructions that client/server communication is built on. You are probably most familiar with TCP/IP through the term “IP address.” IP addresses are call numbers for computers using TCP/IP communication and allow other servers and computers to locate and identify your computer when online.[1] Whenever you use the Internet you are communicating through TCP/IP protocols and subsequently using a technology invented by Cerf and Kahn in the 1970s.
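    The client/server exchange that TCP/IP enables can be sketched in a few lines of standard-library Python. This toy example runs entirely over the loopback interface (127.0.0.1); the "ping"/"pong" messages are invented for illustration, but the socket calls are the real TCP/IP machinery.

```python
import socket
import threading

# A minimal TCP exchange over the loopback interface, sketching the
# client/server communication TCP/IP enables.

def echo_once(server_sock: socket.socket) -> None:
    conn, _addr = server_sock.accept()      # wait for one client
    with conn:
        data = conn.recv(1024)              # read the client's bytes
        conn.sendall(b"pong: " + data)      # echo them back with a prefix

# The server binds to an (IP address, port) pair -- the "call number"
# other machines use to locate it. Port 0 asks the OS for any free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# The client connects to that same call number and exchanges data.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"ping")
    reply = client.recv(1024).decode()

print(reply)  # prints "pong: ping"
```

    Every web page you load does essentially this, just with a remote IP address instead of 127.0.0.1 and HTTP messages instead of "ping."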

    Here is what they have to say about Al Gore and the Internet:



    [1] It is important to remember that when I say “locate and identify” we are talking about a type of communication, not an invasion of your privacy. Imagine trying to meet up with someone when you know neither what the person looks like nor where they are. Would a conversation between the two of you be possible?

  • 21 and Up: A Snapshot of Computer History

    I recently turned twenty-one years old, which is a bit of a milestone in the United States. Many hours were spent thinking about how old I had become and how I could relate my newfound age to this blog. Occasionally my academic path and my interest in technology intersect; today we will be going over a snapshot of technology history. Imagine what it would have been like to buy your first computer the year I was born, 1993. How different would the technology look compared to what we can buy today? Not only is it a challenging question, but its answer shows how far technology has come in only twenty-one years.

    Let’s say, then, that we have warped back to the 1990s, more specifically 1993. We would hear Whitney Houston’s “I Will Always Love You” and The Proclaimers’ “I’m Gonna Be (500 Miles)” on the radio, watch Michael Jordan and Scottie Pippen play for the Chicago Bulls, and be able to see Jurassic Park and Schindler’s List in theaters. We could even use the phrase “I rented a VHS from Blockbuster but we have to rewind it first” and everyone would understand. What is most surprising about computers from this period is how familiar they would appear. The basic setup from that era and today is the same, with computers using a monitor, tower, mouse and keyboard (laptops were also available in 1993). Scratching the mouse around on its pad to wake up the computer would reveal an operating system that, while not as stylish as today’s, would not be completely unusable. Operating a computer, though, would feel profoundly slow. Modern solid-state drive (SSD) computers can go from off to ready to use in seconds, while a Macintosh in 1993 would take almost a minute. Word processing was available, as was the Internet, although most people were using dial-up. Operating a computer has not changed much overall, but technologically 1993 and today are worlds apart.

    If you went to buy a Macintosh in 1993 you would probably pay around two thousand dollars. Adjusted for inflation, that is around thirty-three hundred in today’s money. Computers were not exactly inexpensive. What you would get for your money was about four or eight (depending on the model) megabytes of RAM and a five-hundred-megabyte hard drive (there were options for as little as eighty megabytes), all driven by a twenty-megahertz processor. Many monitors of the time were monochrome cathode ray tubes (CRTs), just like the televisions of the 1990s. CRT monitors are bulky, extremely heavy, and have the dubious advantage of being audible before the picture appears. An iPhone 6 has two hundred and fifty-six times more storage (one hundred and twenty-eight gigabytes), one hundred and twenty-five times more RAM (one gigabyte), and is a phone, not a computer. We carry around in our pockets a device more powerful than fifteen computers of 1993, and yet we do not even think of it as a computer.
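    The comparisons above are quick arithmetic on the rounded figures in the text (using decimal units, so one gigabyte counts as one thousand megabytes):

    ```python
    # 1993 Macintosh, rounded figures from the text (decimal megabytes)
    mac_ram_mb = 8               # top-end RAM configuration mentioned
    mac_disk_mb = 500            # five-hundred-megabyte hard drive

    # iPhone 6, top configuration cited in the text
    iphone_ram_mb = 1_000        # one gigabyte of RAM
    iphone_storage_mb = 128_000  # one hundred twenty-eight gigabytes

    print(iphone_storage_mb // mac_disk_mb)  # 256 -> 256 times the storage
    print(iphone_ram_mb // mac_ram_mb)       # 125 -> 125 times the RAM
    ```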

    Being twenty-one makes me feel old, and when I started looking at computers of the 1990s I felt even older. The performance machines of that time are children compared to the average smartphone. What used to be too heavy, too hot and too unstable to even move off our desks now sits comfortably in our pockets. For computers, the 1990s are a strange time: a decade not different enough to be foreign, but not modern enough to feel familiar.

    One of the hardest parts of maintaining a hotel is often the in-room televisions. When a television matches what we believe is current we often won’t notice it, yet when it is too old it becomes an eyesore. Thus the hotel owner faces a problem. The speed at which our technology advances is enough to make even five-year-old televisions seem obsolete. The same is true of computers, where using a machine that is five or six years old can seem insufferable. The hotel owner has a choice: either buy new televisions or accept that his rooms will appear dated. Not much can make a twenty-one-year-old college student feel old, but when I look back at what was normal the year I was born, I can’t help but want to shake a little dust out of my bones.



  • Why is Silicon Valley called Silicon Valley?

    This history byte is a Weekly What Is dedicated to the why and where of a technology name.

    When learning the geography of the American West Coast one realizes fairly quickly that there are a lot of valleys. The San Fernando Valley, Santa Clara Valley, Death Valley, Mission Valley: more than enough to go around. Most areas in the American West are named by the Spanish, who actively explored and settled the region. Yet there is one popular valley with a fairly strange name: Silicon Valley. Silicon is a metalloid, atomic number fourteen, with an atomic weight of twenty-eight. The uses for silicon and its compounds are varied: silicone grease can be used to lubricate a Rubik’s Cube so it spins fast enough for “speed cubing” competitions, silica[1] gel packets are heavily relied upon to keep moisture out of packaging, and silica is even used as a food additive. None of this, however, explains why a tract of California is named after silicon; luckily, this blog post will do just that.

    Silicon Valley is called Silicon Valley because of sand. The term first appeared on the cover of the January 11, 1971 edition of Electronic News. Don Hoefler, a journalist for the publication, had titled a three-part series examining the history of the semiconductor “Silicon Valley U.S.A.” The term rapidly became associated with technology, so much so that the two are now almost inseparable. But back to sand. There is a reason Hoefler chose to title his articles on semiconductors after silicon. Many companies manufacturing computer chips (like Intel) were either operating or headquartered throughout the region now known as Silicon Valley back in 1971. And the first ingredient in the manufacturing process of computer chips happens to be sand.

    Understandably the term “computer chip” might be a bit foreign, but the truth is actually less complex than the name implies. Last week we went over the difference between hardware and software; this week we won’t be talking about software at all, which makes our lives much easier. We will only be examining hardware, and in the interest of clarity, one specific type of hardware: the central processing unit (CPU). This is your computer’s engine, handling most of its functions and processes. Admittedly, making a CPU is a bit of a strange process. Manufacturers first take silica sand and heat it up. Those familiar with glassmaking will see where this is going. When making glass, the glassmaker takes sand, puts it in a crucible and heats it (usually to over 3,000 degrees Fahrenheit). The melted sand is then shaped, blown and cooled, resulting in glass. The same step begins the creation of a CPU. Silica sand is melted down and formed into an extremely pure mono-crystal[2]. The crystal is then cut into extremely thin wafers anywhere from one to twelve inches in diameter, and each wafer is polished and cleaned of any impurities[3]. Wafers are meant to hold thousands of very small transistors. Transistors are the foundation of electronic computing and deserve their own blog post (coming soon), but what is important to know here is that they are very tiny on/off switches, like a sink faucet or a light switch. The process of putting transistors on silicon wafers is a complex chemical one, and not especially useful to learn unless you are studying to enter the industry. The result of the process is a wafer filled with transistors. The wafer is then tested for functionality, and working sets of transistors are cut from it. These tiny rectangles, filled with transistors, are the basis of your central processing unit. The chip is then packaged in a housing that allows it to connect to your computer.
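    Since transistors are just tiny on/off switches, a toy sketch can show how switches become computation: model each switch as a boolean, and simple logic falls out. The function names here are illustrative, not real circuit terminology from any particular chip.

    ```python
    # Each "switch" is either on (True) or off (False). Wiring two switches
    # in series lets current flow only if both are on: an AND gate. Wiring
    # them in parallel lets current flow if either is on: an OR gate.

    def series(a: bool, b: bool) -> bool:
        return a and b   # AND: both switches must be closed

    def parallel(a: bool, b: bool) -> bool:
        return a or b    # OR: one closed switch is enough

    print(series(True, True))     # True  -> current flows
    print(series(True, False))    # False -> circuit broken
    print(parallel(True, False))  # True  -> the other path conducts
    ```

    Billions of such switches, combined into gates like these, are what a finished CPU actually computes with.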

    Silicon Valley takes its name from the large population of companies doing this work, headquartered or operating in the San Francisco Bay Area. As with most technology terms, it evolved and stuck, eventually coming to represent the entire technology industry, due in part to a large influx of similar companies to the Bay Area during the technology booms that followed 1971. Companies like eBay, Adobe, HP, Yahoo! and Lockheed Martin are still based in the Valley. The size of these Fortune 1000 companies stands in stark contrast to where their home gets its name: tiny grains of sand.

    [1] Silica is an oxide of silicon, a chemical compound containing atoms of oxygen. Its chemical formula is SiO2 (silicon dioxide).

    [2] “Mono-crystal” refers to the silicon ingot that emerges from the Czochralski process. In the interest of simplicity and clarity this blog only discusses Czochralski-based ingot making and not “float zone” growing methods.
    An image of the resulting ingot

    [3] An image of the resulting wafer

    “Weekly What Is” breaks down a new technology related word every Friday.

  • What is hardware and software?

    Almost forty percent of the world’s population uses the Internet in some capacity. The rapid influx of technology has not given slow adopters, or even laymen, time to become acquainted with the jargon surrounding it. For the unfamiliar it is hard to see this technological flux as a necessity, especially when as little as thirty years ago much of what we use today did not even exist. Weekly What Is attempts to break down this jargon into everyday terms.

    If software were a person, it would not be eligible for Social Security. To get the full benefits of the US retirement program, software will have to wait another five years. The term was coined in 1953 by Paul Niquette, but took another few years to catch on. Software is a young gun in the terminology world, and unlike clouds or cookies it has not been re-purposed. It is, however, a sibling, only half the equation, something you will never see without its older brother, hardware. To explain one without the other would be doing a disservice to both. Every part of a functioning computer fits into the category of hardware or software. First we have to understand the foundation, hardware; then the director, software. Finally, we can see how they work together.

    Hardware is the foundation of your computer. Every component part, every physical aspect of the computer, is hardware. When you use a computer, you interact with it through input/output devices: a fancy name for parts that send information to the computer or put out information sent from the computer (both are considered computer hardware). Input devices such as mice and keyboards, and output devices like monitors and speakers, are the most easily recognizable pieces of computer hardware. Other examples include central processing units (CPUs), motherboards, random access memory (RAM), hard disk drives and any other chips or cards. But you cannot simply plug hardware together and form a working computer. Hardware requires direction in order to function, and that is the purpose of software.

    Software is a set of instructions meant to direct hardware. Within the category of software there are two sub-categories under which all types fall: system and application. System software handles the basic workings of a computer. Your operating system is the most important piece of system software on your computer; it blends together hardware and software to create the environment you use. The difference between application software and system software is not such an easy one to see. Application software allows the user to do tasks on the computer. Making a text document (Microsoft Word), recording a song (GarageBand), browsing the web (Google Chrome) and editing a picture (Photoshop) are all functions of application software. While both are types of software, application software must first go through the system software to get results from the hardware.

    Now we have a process involving four parts: the user, the application software, the system software and the hardware. When we use a computer, we are primarily using application software, which communicates with the system software, which in turn communicates with the hardware. The process is similar to a ladder. Imagine you are on your roof fixing a few broken shingles when you realize you forgot your hammer in the toolbox. You walk over to the ladder, climb down, grab the hammer, climb back up and fix the shingle. This is how words go from your brain to the screen. When you type “Francis Quigley Pell Center Blogs” into Google, a few different things happen. For the sake of this example we will ignore the Internet aspects of what is going on and stick to the way those words go from your brain to the search bar. There is a reason the phrase appears in the search bar just the way you spelled it out on the keyboard. Typing on a keyboard is an input. That input moves through the software to the hardware. After receiving the directions from the software, the hardware creates an output, which then travels back up the chain, from the hardware to the system software and back to the application software, creating the desired effect. Moving up and down the ladder is the same way inputs and outputs move up and down the computer hierarchy, from the top (the user) to the ground (the hardware) and back.
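    The ladder can be sketched as a chain of function calls, a minimal illustration in which every function name is made up to stand in for a real layer:

    ```python
    # Toy model of the ladder: the user's keystrokes travel down through
    # the application and system software to the hardware, and the result
    # climbs back up. None of these names are real APIs.

    def hardware(keystrokes: str) -> str:
        # The hardware carries out the instruction and produces the output.
        return keystrokes

    def system_software(keystrokes: str) -> str:
        # The operating system passes the request down to the hardware.
        return hardware(keystrokes)

    def application_software(keystrokes: str) -> str:
        # The application (say, a browser's search bar) calls the OS.
        return system_software(keystrokes)

    # The user types; the text goes down the ladder and comes back up
    # to appear on screen exactly as typed.
    print(application_software("Francis Quigley Pell Center Blogs"))
    ```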

    The computer is a unification of three separate aspects working together to keep the machine running, and each of the three pieces can affect its performance. Installing new software on old hardware can make the software run slowly, just as installing old software on new hardware will not take full advantage of the hardware and may actually handicap the computer’s performance. The categories function dependently on one another.

    Since we have gone through the details, let’s quickly review. Hardware is the physical, electrical part of your computer; software is what directs those parts. Software comes in two types: application and system. System software controls the hardware, and application software is what you use. Every computer part can be assigned to either software or hardware. Software is like a pilot, and hardware like a plane. The plane can fly, but only if it has proper direction, just as the pilot can fly only so long as he has a plane.

    “Weekly What Is” breaks down a new technology related word every Friday.

    Special thanks to Keith Monteiro for his consulting work on this piece. 

  • What are cookies?

    Almost forty percent of the world’s population uses the Internet in some capacity. The rapid influx of technology has not given slow adopters, or even laymen, time to become acquainted with the jargon surrounding it. For the unfamiliar it is hard to see this technological flux as a necessity, especially when as little as thirty years ago much of what we use today did not even exist. Weekly What Is attempts to break down this jargon into everyday terms.

    In 1930 a cookbook bearing Ruth Wakefield’s original Toll House cookie recipe was published. More than eighty years have passed since Mrs. Wakefield first started serving cookies at her Toll House Inn restaurant, and her recipe has since become the American cookie standard, with almost all of Nestlé’s cookies bearing the Toll House insignia. Until the early 1990s cookies were what your grandmother baked for Christmas, what you tried to sneak into mom’s shopping cart at the grocery store, and most of all a classic American treat. But cookies were not destined to stay in Nestlé factories or Ruth Wakefield’s cookbook. The year 1994 saw a Netscape programmer take the Toll House recipe and give it an Internet cousin.

    Unlike Grandma’s recipe, HTTP cookies (or Internet cookies) do not enjoy the Christmas dinner spotlight. Computer cookies are text files, tucked silently away on a user’s hard drive. In order to be clearer later on, we have to regress into a technical definition, the description you will see on most websites attempting to explain cookies: a cookie is a small text-based file stored on a user’s computer by a webpage. This file aids both the user and the website in maintaining information as users browse throughout the site or leave and return later on. Cookies are one of those “use every day and never know about” conveniences, like your car’s alternator. Like an alternator, which charges your car battery, you really only notice cookies when they don’t work.
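    Under the hood a cookie is just a named text value carried in HTTP headers. Python’s standard http.cookies module can show the round trip; the cookie name and value below are made up for illustration.

    ```python
    from http.cookies import SimpleCookie

    # The server's response includes a Set-Cookie header like this one,
    # which tells the browser to store the value on the user's machine:
    cookie = SimpleCookie()
    cookie["session_id"] = "abc123"
    cookie["session_id"]["path"] = "/"
    print(cookie.output())  # Set-Cookie: session_id=abc123; Path=/

    # On the next visit the browser sends the stored value back, and the
    # site parses it out to recognize the returning user:
    returned = SimpleCookie("session_id=abc123")
    print(returned["session_id"].value)  # abc123
    ```

    That tiny replayed value is all a site needs to “remember” you between visits.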

    When you log into Facebook but are asked by your brother, sister or roommate to look up the spelling of “hors d’oeuvres,” you sometimes make the mistake of clicking out of Facebook. Yet when you return, you are still logged in. Cookies stored on your computer, communicating with Facebook’s servers, make this possible. Facebook uses cookies in a few specific ways. One type of cookie makes sure other people aren’t accessing your Facebook account at the same time. When your account is logged in on multiple computers, Facebook’s servers will check the cookies of each in an attempt to verify which computer is really you.

    Cookies are primarily meant to improve a website’s functionality and create a better user experience. When you are shopping on Amazon and adding items to your cart, cookies are the reason those items don’t disappear after you close and re-open your browser. Communicating with websites, cookies can make pages load quicker, create a more familiar, localized experience (for example, Facebook uses cookies to bring you the language that best fits your region) and help the Internet as a whole work better. They are similar to our memory when we meet a new person. Imagine if every time you went to dinner with a friend you had to ask their name, where they were from, how old they were, how they knew you and every detail you had previously known about them. Needing to relearn all this would make finding out how their week was a much longer process. The same is true of websites. Instead of having to get to know you every time you click in and out of a site, they use cookies to remember.

    Disabling cookies is a fairly easy process. I would encourage you to try turning them off for a few minutes; you will see how different the web looks, with pages loading slower and less fully. Sites less robust than Amazon and Facebook will inevitably become harder to use (for example, some online shops will have their shopping-cart services impeded). For files that are not usually larger than a few kilobytes, cookies significantly impact the way we browse.

    Let’s quickly review. When you (the user) visit a webpage, a small file is stored on your computer by the website’s server. The next time you return to that site, that stored file helps the website load faster and remember your preferences. Cookies make this happen by acting as an aid to the conversation, giving it a memory. Accessing those memories makes using a website easier, just as using your memory makes a conversation easier. Almost every Internet-enabled computer in the world uses cookies, making getting back to Ruth Wakefield’s Toll House recipe website a piece of cake.

    “Weekly What Is” breaks down a new technology related word every Friday.

    Special thanks to Keith Monteiro, Kamil Bynoe and Peter Goggi for their consulting work on this article.
