Monday, December 27, 2010

A Night to Remember on the Deepwater Horizon

Walter Lord, in his classic nonfiction book A Night To Remember, used dozens of interviews and historical documents to recount the 1912 sinking of the Titanic in vivid and harrowing detail. Now David Barstow, David Rohde, and Stephanie Saul of the New York Times have done something similar for the Deepwater Horizon disaster last April 20. While official investigators will probably take years to complete a final technical reconstruction with all the available information, the story these reporters have pieced together already highlights some of the critical shortcomings that led to the worst deepwater-drilling disaster (and consequential environmental damage) in recent memory.

Their 12-page report makes disturbing reading. They describe how Transocean, the company which owned the rig and operated it for the international oil giant BP, was under time pressure to cap off the completed well and move to the next project. They show something of the complex command-and-control system for the rig that involved all kinds of safety systems (both manual and automatic) as well as dozens of specialists out of the hundred or so engineers, managers, deckhands, drillers, cooks, and cleaning personnel who were on the rig at the time. And they reveal that while the blowout that killed the rig was about the worst that can happen on an offshore platform, there were plenty of ways the disaster could have been minimized or even avoided—at least in theory. But as any engineering student knows, there can be a long and rocky road between theory and practice. I will highlight some of the critical missteps that struck me as common to other disasters that have made headlines over the years.

I think one lesson that will be learned from the Deepwater Horizon tragedy is that current control and safety systems on offshore oil rigs need to be more integrated and simplified. The description of the dozens of buttons, lights, and instruments in physically separate locations that went off in response to the detection of high levels of flammable gas during the blowout reminds me of what happened at the Three Mile Island nuclear power reactor in 1979. One of the most critical people on the rig was Andrea Fleytas, a 23-year-old bridge officer who was one of the first to witness the huge number of gas alarms going off on her control panel. With less than two years’ experience on the rig, she had received safety training but had never before experienced an actual rig emergency. She, like everyone else on the rig, faced some crucial decisions in the nine minutes that elapsed between the first signs of the blowout and the point where the explosions began. Similarly, at Three Mile Island, investigators found that the operators were confused by the multiplicity of alarms going off during the early stages of the meltdown, and actually took actions that were counterproductive. In the case of the oil-rig disaster, inaction was the problem, but the cause was similar.

Andrea Fleytas or others could have sounded the master alarm, instantly alerting everyone that the rig was in serious trouble. She could have also disabled the engines driving the rig’s generators, which were potent sources of ignition for flammable gas. And the crew could have taken the drastic step of cutting the rig loose from the well, which would have stopped the flow of gas and given them a chance to survive.

But each one of these actions would have exacted a price, ranging from the minor (waking up tired drill workers who were asleep at 11 o’clock at night with a master alarm) to the major (cutting the rig loose from the well meant millions of dollars in expense to recover the well later). And in the event, the confusion of the situation with unprecedented combinations of alarms going off and a lack of coordination among critical personnel in the command structure meant that none of these actions that might have mitigated or avoided the disaster were in fact done.

It is almost too easy to sit in a comfortable chair nine months after the disaster and criticize the actions of those who afterward did courageous and self-sacrificing things while the rig burned and sank. None of what I say is meant as criticism of individuals. The Deepwater Horizon was above all a system, and when systems go wrong, it is pointless to focus on this or that component (human or otherwise) to the exclusion of the overall picture. In fact, a lack of overall big-picture planning appears to be one of the more significant flaws in the way the system was set up. Independent alarms were put in place for specific locations, but there were no overall coordinated automatic systems that would, for example, sound the master alarm if more than a certain number of gas detectors sensed a leak. The master alarm was placed under manual control to avoid waking up people with false alarms. But this meant that in a truly serious situation, human judgment had to enter the loop, and in this case it failed.
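The kind of coordinated automatic logic that was missing (a master alarm that trips itself once several independent gas detectors agree, instead of waiting for a human to decide) can be sketched in a few lines of Python. This is only an illustration: the zone names and the threshold of three detectors are my own assumptions, not details of the rig’s actual control system.

```python
# Hypothetical sketch of coordinated alarm logic. Zone names and the
# threshold are illustrative assumptions, not taken from the Deepwater
# Horizon's actual systems.

def should_sound_master_alarm(gas_detectors, threshold=3):
    """Trip the master alarm automatically once the number of gas
    detectors reporting a leak reaches the threshold, so that a truly
    serious situation does not depend on one person's judgment."""
    active = sum(1 for d in gas_detectors if d["leak_detected"])
    return active >= threshold

# Example: five zones, four of them reporting gas at once.
detectors = [
    {"zone": "drill floor",   "leak_detected": True},
    {"zone": "mud pits",      "leak_detected": True},
    {"zone": "shale shakers", "leak_detected": True},
    {"zone": "engine room",   "leak_detected": True},
    {"zone": "galley",        "leak_detected": False},
]
print(should_sound_master_alarm(detectors))  # True
```

A single detector tripping (a likely false alarm) would leave the master alarm silent under this scheme, which preserves the goal of not waking people unnecessarily while removing the human bottleneck in a genuine emergency.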

Similarly, the natural hesitancy of a person with limited experience to take an action that they know will cost their firm millions of dollars was just too much to overcome. This sort of thing can’t be dealt with in a cursory paragraph in a training manual. Safety officers in organizations have to grow into a peculiar kind of authority that is strictly limited as to scope, but absolute within its proper range. It needs to be the kind of thing that would let a brand-new safety officer in an oil refinery dress down the refinery’s CEO for not wearing a safety helmet. That sort of attitude is not easy to cultivate, but it is vitally necessary if safety personnel are to do their jobs.

Disasters teach engineers more than success, it is said, and I hope that the sad lessons learned from the Deepwater Horizon disaster will lead to positive changes in safety training, drills, and designs for future offshore operations.

Sources: The New York Times article “The Deepwater Horizon’s Final Hours” appeared in the Dec. 25, 2010 online edition at

Sunday, December 19, 2010

Cheaters 1, Plagiarism-Detection Software 0

The Web and computer technology have revolutionized the way students research and write papers. Unfortunately, these technologies have also made it vastly easier to plagiarize material: that is, to lift verbatim chunks of text from published work and pass them off as your own original creation. In response, many universities have promoted the use of commercial plagiarism-detection software, marketed under names such as Turnitin and MyDropBox. Still more unfortunately, in a systematic test of how effective these programs are in detecting blatant, wholesale plagiarism, the software bombed.

Why is plagiarism perceived as getting worse than it used to be? One factor is the physical ease of plagiarism nowadays. Back in the Dark Ages when I did my undergraduate work, it was not quite the quill-pen-by-kerosene-lamp era, but if I had ever decided to plagiarize something, it would have taken a good amount of effort: hauling books from the library, photocopying journal papers, dragging them to my room, and typing them into my paper letter by letter with a manual typewriter. With all that physical work and dead time involved, copying a few paragraphs with the intent of cheating wasn’t much easier than simply thinking up something on your own. The physical labor was the same.

Fast-forward to 2010: there’s Microsoft Word, there’s Google, and if you’re under 22 or so these things have been there for at least half your life. The “copy” and “paste” commands are vastly easier than hunting and pecking out your own words. And you suspect that a good bit of everything out on the Web was copied and pasted from somewhere else anyway. So what is the big deal some professors make about this plagiarism thing? The big deal is this: it’s wrong, because it constitutes theft of another person’s ideas, and fraud in that you give the false impression that you wrote it yourself.

In engineering, essays and library-research reports make up only a small part of what students turn in, so I do not face the mountains of papers that instructors in English or philosophy have to wade through every semester. But with plagiarism being so easy, I do not blame them for resorting to an alleged solution: the use of plagiarism-detection software. Supposedly, this software goes out and compares the work under examination with web-accessible material, and if it finds a match, it flags the work with a color code ranging from yellow to red. Work that passes muster gets a green.
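For readers curious how such software can work at all, here is a minimal sketch in Python of the general matching idea: break a submission into overlapping word n-grams, count how many also appear in a corpus of accessible source texts, and map the overlap fraction to a color code. The n-gram length and the thresholds here are assumptions of mine; the commercial products’ actual algorithms are proprietary.

```python
# Minimal sketch of n-gram overlap matching, the general idea behind
# plagiarism-detection software. The n-gram size and color thresholds
# are illustrative assumptions, not Turnitin's or MyDropBox's actual
# algorithms.

def ngrams(text, n=5):
    """Return the set of overlapping n-word sequences in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_submission(submission, corpus, n=5):
    """Compare a submission against a corpus of accessible source
    texts and return a color code based on the fraction of the
    submission's n-grams that also appear in the corpus."""
    sub_grams = ngrams(submission, n)
    if not sub_grams:
        return "green"
    corpus_grams = set()
    for doc in corpus:
        corpus_grams |= ngrams(doc, n)
    overlap = len(sub_grams & corpus_grams) / len(sub_grams)
    if overlap > 0.5:
        return "red"
    if overlap > 0.1:
        return "yellow"
    return "green"
```

The weakness is plain from the sketch: a paper copied wholesale from a source that is in the corpus comes back red, but a source the software cannot reach contributes nothing to the corpus, so the very same paper comes back green.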

In a recent paper in IEEE Technology and Society Magazine, Rebecca Fiedler and Cem Kaner report their tests of how well two popular brands of plagiarism-detection software actually work on papers that were copied word-for-word from academic journals. The journals themselves were not listed in the article, but appear to be the usual type of research journal that requires payment (either from an individual or a library) for online access. There, I think, is the key to why the software failed almost completely to disclose that the entire submission was copied wholesale, in twenty-four trials of different papers. If I interpret their data correctly, only one of the two brands tested was able to figure this out, and even then only in two of the twenty-four cases. Fiedler and Kaner conclude that professors who rely exclusively on such software for catching plagiarism are living with a false sense of security, at least where journal-paper plagiarism is concerned.

I think the results might have been considerably better for the software if the authors had chosen to submit material that is openly accessible on the Web, rather than publications that are sitting behind fee-for-service walls that require downloading particular papers. In my limited experience with doing my own plagiarism detection, I was able simply to Google a suspiciously well-written passage out of an otherwise almost incomprehensible essay, and located the university lab’s website where the writer had found the material he plagiarized. And I didn’t need the help of any detection software to do that.

As difficult as it may seem, the best safeguard against plagiarism (other than honesty on the part of students, which is always encouraged) is the experience of instructors who become familiar with the kind of material that students typically turn in, and even with passages from well-known sources which might be plagiarized. No general-purpose software could approach the sophistication of the individual instructor who deals with this particular class of students about a particular topic.

Of course, if we’re talking about a U. S. History class with 400 students, the personal touch is hard to achieve. Especially at the lower levels, books are more likely to be plagiarized from than research papers, and as Google puts pieces of more and more copyrighted books on the Web, plagiarism detection software will probably take advantage of that to catch more students who try to steal material. It’s like any other form of countermeasure: the easy cheats are easily caught, but the hard-working cheats who go find stuff from harder-to-access places are harder to catch. But it’s not impossible, and one hopes that by the time students get to be seniors, they have adopted enough of their chosen discipline’s professionalism to leave their early cheating ways behind. Sounds like a country-western song. . . .

If any students happen to be reading this, please do not take it as an encouragement to plagiarize, even from obscure sources. The fact that your instructors’ cheating-detection software doesn’t work as well as it should is no reason to take advantage of the situation. Anybody reading a blog on engineering ethics isn’t likely to be thinking about how to plagiarize more effectively, anyway—unless they have to write a paper on engineering ethics. In that case, leave this blog alone!

Sources: The article “Plagiarism Detection Services: How Well Do They Actually Perform?” by Rebecca Fiedler and Cem Kaner appeared in the Winter 2010 (Vol. 28, no. 4) issue of IEEE Technology and Society Magazine, pp. 37-43.

Monday, December 13, 2010

The Irony of Technology in “Voyage of the Dawn Treader”

I write so often about bad news involving engineering and technology because engineers usually learn from mistakes more than they learn from success. But not always. A more positive theme in engineering ethics takes exemplary cases of how engineering was done right, and asks why and how things worked out so well. That’s what I’m going to do today with the latest installment of the series of “Chronicles of Narnia” movies, namely “The Voyage of the Dawn Treader.”

It is ironic that the most advanced computer-generated imagery (CGI) and computer animation were used to bring to the screen a story by a man who was a self-proclaimed dinosaur, an author who wrote all his manuscripts by hand with a steel pen and never learned how to drive a car. C. S. Lewis, who died in 1963 after achieving fame as one of the greatest imaginative Christian writers of the twentieth century, also wrote one of the most prescient warnings about the damage that applied science and technology could do to society. In The Abolition of Man, Lewis warned that the notion of man’s power over nature was wrongly conceived. What increased scientific and technological abilities really allow is for those in control of the technology to wield more power over those who are not in control. Of course, he granted that technological progress had also led to great benefits, but that was not his point.

Perhaps the most popular of all his works of fiction is the “Chronicles of Narnia” series, a set of seven interrelated books for children in which he drew upon his vast learning as a scholar of medieval and Renaissance literature to produce one of the most completely realized works of fantasy ever written. I have read all of the stories many times. And like many other readers, I had my doubts that any cinematic version of them would stand a chance of living up to the unique standard posed by the books. For one thing, Lewis’s descriptions of fantastic beings such as minotaurs, centaurs, and fauns are suggestive rather than exhaustive, leaving much to the reader’s imagination, as most good literature does. This throws a great burden upon anyone who attempts to render the stories in a graphic medium. I was saddened to see at the end of the movie the dedication “Pauline Baynes 1922-2008.” Baynes was the artist chosen by both Lewis and his friend J. R. R. Tolkien to provide illustrations for the “Chronicles” and for many of Tolkien’s imaginative works as well. Baynes’s drawings fit in with Lewis’s descriptions so well because they did what book illustrations are supposed to do: namely, they enhanced the reader’s experience without turning the story in a direction not intended by the author.

And that is what the hundreds of IT professionals, artists, technicians, computer scientists, entrepreneurs, and others involved in “The Voyage of the Dawn Treader” film have done. As computer graphics has advanced, people engaged in what began as a purely engineering task—to render a realistic image of a natural feature such as the hair on a rat being blown by the breeze atop the mast of a sailing ship—find themselves having not only to deal with the sciences of mechanics and fluid dynamics, but even now and then making fundamental advances in our understanding of how air flows through fibrous surfaces or how light travels through a complex mineral surface. Fortunately for the moviegoing public, none of this needs to be understood in order to watch the movie, the production of which is comparable in today’s terms with the effort needed to build part of a medieval cathedral. But anyone can walk into a cathedral and enjoy the stained-glass windows without understanding how they were made. This connection is not lost on the moviemakers. In fact, the very first scene in the film focuses on a stained-glass window showing the Dawn Treader ship, just before the camera zooms away to reveal a tower in the city of Cambridge, where the story begins.

It is this sensitivity to the spirit of the tales and the style, if you will, of Narnia that makes the movie both an essentially faithful rendition of the book, and an excellent adventure on its own. For cinematic reasons, the screenwriters did some mixing of plot elements and originated a few new ones, but entirely within the spirit of what G. K. Chesterton calls the “ethics of elfland.” Chesterton expresses the ethic this way: “The vision always hangs upon a veto. All the dizzy and colossal things conceded depend upon one small thing withheld.” The chief plot innovation concerns a search for the seven swords of the lost lords of Narnia, which unless I’m mistaken were not in the original story. But until these swords are placed on a certain table, the Narnians cannot triumph over a strong force of evil that threatens to undo them.

What would C. S. Lewis think? Well, those who believe in an afterlife can conclude that he will find out eventually about what has been done with his stories, and perhaps some of us will some day be able to ask the man himself. He may answer, but then again he may view his earthly works in the same light that St. Thomas Aquinas viewed his own magisterial works of philosophy toward the end of his life. According to some reports, Aquinas was celebrating Mass one day when he had a supernatural experience. He never spoke of it or wrote it down, but it caused him to abandon his regular routine of dictation. After his secretary Reginald urged him to get back to work, Aquinas said, “Reginald, I cannot, because all that I have written seems like straw to me.” Once one encounters that joy which, in Lewis’s words, is the “serious business of Heaven,” the fate of a children’s story at the hands of this or that film crew may not seem all that important. But those of us still here in this life can rejoice in a faithful rendition of a spiritually profound work, made possible in no little part by engineers who simply did their jobs well and with sensitivity to the spirit of the project.

Sources: The Chesterton quotation is from chapter 4, “The Ethics of Elfland,” of Chesterton’s 1908 book Orthodoxy. I used material from the Wikipedia article on St. Thomas Aquinas in the preparation of this article.

Monday, December 06, 2010

TSA Has Gone Too Far

It’s not too often that I take an unequivocal stand on a controversial issue. But this time I will. The U. S. Transportation Security Administration (TSA) is wasting millions of dollars putting thousands of harmless passengers through humiliating, indecent, and probably unconstitutional searches, while failing in its primary mission to catch potential terrorists. I say this as a participant in the invention of one of the two main technologies currently being deployed for whole-body scans at U. S. airports.

Back in 1992 when airport security checks of any kind were a novelty, I was consulting for a small New England company whose visionary president anticipated the future demand for whole-body contraband scans. I helped in the development of a primitive version of the millimeter-wave scanning technology that is now made by L3Comm. The scan took 45 minutes, had very low resolution, but produced recognizable images of non-metallic objects hidden under clothes. As I recall, the main reason the company didn’t pursue the technology further was that it revealed too many details of the human body, and we thought the public would rise up in revolt if some bureaucrat proposed to electronically strip-search all passengers.

Well, here we are eighteen years later, and the TSA is now installing that technology plus a similar (but even more detail-revealing) X-ray technology at dozens of airports across the land. The agency is reluctant to share any information that would cause it problems, but the few images that have gotten into the public media are enough to tell us that Superman’s X-ray vision is indeed here. In the movie of the same name starring the late Christopher Reeve, the X-ray vision thing was played for a joke in his encounter with Lois Lane. But forcing thousands of ordinary, harmless citizens, including elderly folks and young children, none of whom have been charged with a crime, to subject themselves to electronic invasions of privacy, with the potential for abuse that entails, is an outrage.

Not only is it an outrage, but it is unlikely to achieve the purpose which the TSA says it is achieving at this tremendous price: lowering the risk of terrorist acts in the air. So far, airport body scans have caught zero terrorists. None. All the interceptions and near-misses we have had lately have been thwarted either by alert passengers (and incompetent terrorists), by tips from people with knowledge of the plots, or by old-fashioned detective work that doesn’t stop looking when it runs up against a matter of political correctness. The U. S. is nearly unique among all major nations in relying on this inefficient and intrusive blanket of technologically-intensive measures to achieve safe air travel, rather than focusing limited resources on groups and individuals who are the most likely to cause trouble, as the Israelis do.

The current administration is bending over backwards not to offend Muslim sensibilities in this or any other situation. I am all for respecting and allowing religious freedom, but when nearly all crimes of a certain kind are associated with members of an identifiable group, whether they be Muslim, Jewish, Christian, liberal, conservative, red-haired, or whatever, I don’t want those charged with the responsibility of catching them to purposely throw away that information and instead impose punitive and humiliating (and ineffective) searches on every single person who chooses to fly. And I haven’t even gotten to the “enhanced” pat-downs that the TSA offers as alternatives to the whole-body scans. That amounts to asking whether you would rather have your thumb squeezed with a pair of pliers or in a vise.

The public statements of the TSA on this matter have been about what you’d expect from a rogue bureaucracy. Inanities such as saying “if you don’t want to be searched, just don’t fly” are as useful today as saying “if you don’t like risking your life in automobile traffic, get out and walk.” Here is where the best hope of reversing this egregious and unconstitutional overreaching lies: in the boycotting of airports where the new systems are used. If air travel decreases to the point that the airlines notice it, they will become allies to the public in the battle, and there will be at least a chance that Washington will listen to corporations that employ a lot of union workers, rather than the great unwashed masses that have been ignored repeatedly on everything from health care to offshore oil drilling already.

Civilizations can decline either with a bang or by slow degrees. In historian Jacques Barzun’s monumental From Dawn to Decadence: 1500 to the Present, one of the characteristics of modern life he describes is a slow encrustation of restrictions on freedom exacted by bureaucracies whose ostensible purpose is to make life better in the progressive fashion. I think Barzun had in mind things like income-tax forms and phone trees, but he lives right down the road in San Antonio, whose airport just installed the new scanning systems. I doubt that he flies much anymore (he turned 103 last month), but if he does, he will be faced with a good example of his own observation: some hun-yock* in a blue uniform will treat the dean of American historians, a man whose family fled World War I to the U. S. and freedom, to the degrading and wholly unnecessary humiliation of being suspected as a terrorist and having his naked body exposed to the eyes of some nosy minion of the government.

To Jacques Barzun and to all the other people who simply want to get from A to B on a plane and have no malevolent intentions regarding their mode of transportation, I apologize on behalf of the engineers and scientists whose work has been misused, among whom I count myself.

Sources: The millimeter-wave technology used for whole-body scans is described well in the Wikipedia article “Millimeter-wave scanner,” and the X-ray system can be read about at My Jan. 10, 2010 entry in this blog has a reference to my published work on the early version of the millimeter-wave scanner. *The word “hun-yock”, which I find spelled on the Web as “honyock” or “honyocker”, was used by my father to indicate a person who did something unwise and publicly irritating. I can think of no better term for the present situation.