Monday, January 16, 2017

No Airbags for Takata's Crash


The story of Takata Corporation's defective air-bag inflators is one we've been following for the last couple of years.  Last Friday, Jan. 13, Takata received what amounts to a corporate deathblow when it pleaded guilty to a single criminal charge brought by a Federal grand jury in Detroit.  Under the agreement, Takata will pay a total of $1 billion, which will go to fines, to compensation for individuals who were killed or injured by defective inflators, and mostly to the car companies that bought the bad inflators and are now facing the world's largest recall headache.  Takata is expected to file for bankruptcy and be sold or liquidated shortly thereafter.

First, some background.  Air bags are safety devices which demonstrably save lives.  An older friend of ours who was driving her pickup truck when it was hit by a delivery van a few months ago is alive today, thanks in part to the airbags that went off in her truck cab.  But when a safety device turns into a deadly weapon, as a certain fraction of Takata air-bag inflators do, you have the automotive equivalent of razor blades in Halloween candy.  That's not what's supposed to be going on.

By admitting guilt, Takata has implicitly endorsed the findings of the Federal indictment that charges three managers in particular with covering up the defects in the air-bag inflators for over a decade.  As we discussed in an earlier blog on this matter, air bags work by setting off a propellant chemical that is supposed to burn in a controlled way, releasing lots of gas rapidly to inflate the air bags.  But a controlled burn is not an explosion, and if the propellant detonates instead, the spike in pressure can rupture the metal container, sending shrapnel toward the vehicle's occupants.  This has happened worldwide hundreds of times with Takata inflators, resulting in over a hundred injuries and sixteen deaths. 

The requirement for controlled burning is tricky, and various chemicals have been used over the years.  One of the main challenges with airbag inflators is to make sure they'll work when needed after years of changing temperatures and humidity inside a car body.  This calls for chemically stable propellants, which tend to be expensive.

Takata had the notion years ago of using one of the cheapest propellants around:  ammonium nitrate.  It can be made to burn controllably, but it is sensitive to humidity and can turn into a highly explosive state unless protected from moisture.  Internal Takata tests showed that their ammonium-nitrate inflators tended to leak, leading to instability of the chemical and the possibility of an explosion when triggered. 

What the indictment shows is that the Takata executives intentionally and repeatedly falsified test data as long ago as 2005, calling it "XX-ing" the data, in order to keep selling the inflators to automakers.  When problems with the Takata inflators began to surface, the company first ascribed them to isolated manufacturing issues.  But investigations have revealed the truth:  Takata executives had known for years that there was a systematic problem, and concealed it from their customers and the public.

As a result, although many Takata inflators worked properly, over a dozen people died and hundreds were injured by defective ones.  And millions of drivers (including yours truly) are wondering whether a minor fender-bender in their Honda or Toyota will set off a Takata inflator and turn the incident into a deadly encounter with a time bomb. 

It's probably pointless to speculate, but I wonder if any of the Takata executives involved in this sordid mess ever took an engineering course that mentioned ethics.  When I discuss ethics in my engineering classes, one of the standard case studies I trot out is the (hypothetical) situation in which some engineering test results come out negative, and your boss tells you to fake the results so it looks like the product passed anyway.  It's one thing to sit in a classroom as an impoverished engineering student and say, "Oh, sure, I'd never do anything like that."  And I suppose it's another thing altogether to be in charge of a large American division of a firm whose profit margins depend on sales of a product that you know to be defective. 

There are limits to the ability of education to influence behavior.  The most that educators can do is to alert students to the moral implications of their work, to urge them to be aware that such situations can arise, and to think carefully about how they would respond before being caught up in the heat of the moment when an ethical dilemma arises.  Even if the Takata managers took some such class way back when they were students, in their case the workplace pressures overwhelmed whatever inclinations they had to do the right thing. 

It's unusual that an ethical lapse ends up basically destroying a firm, but it has happened before—think Enron—and the Takata story shows that it can happen again.  Even if Takata manages to liquidate itself to the extent of paying the full $1 billion (which is dubious), I don't think it will help the wronged automakers much in their attempts to replace the millions of airbag inflators that are now under a cloud of suspicion.

As for the three individuals who were personally charged in the indictment, the U. S. government is attempting extradition, but the final decision is up to the government of Japan.  Assigning blame for such situations on an individual level is complicated, simply because one has to have a good enough understanding of the management structure that prevailed at the time of the wrongdoing to figure out who was really doing the coverup and how it was managed.  Should the janitor in the lab where the tests were falsified go to jail?  Probably not.  Should both the technician who falsified the reports, and his boss who ordered him to, be jailed?  That is a judgment call that I'm certainly not qualified to make, but complexities like these will arise in the denouement of this sad tale.

In the meantime, if you're like me and have received a recall notice about defective airbags, either don't sit in the seat next to the airbag, or if you can't help but sit there, drive really carefully.

Sources:  The Associated Press report of Takata's guilty plea and fine was carried by numerous outlets, including the Los Angeles Times on Jan. 13 at http://www.latimes.com/business/la-fi-hy-takata-charges-20170113-story.html.  I also referred to an ABC News story at http://abcnews.go.com/Business/wireStory/justice-department-announce-takata-criminal-penalty-44759439.  I previously blogged on the Takata inflator problems on Oct. 27, 2014 (http://engineeringethicsblog.blogspot.com/2014/10/do-not-sit-here-exploding-airbag-recall.html), and on Sept. 19, 2016 (http://engineeringethicsblog.blogspot.com/2016/09/time-to-make-airbags-optional.html).

Monday, January 09, 2017

Californians Talk To Their Cars


The citizens of the U. S.'s most populous state have long had a love affair with the automobile.  Life in Los Angeles is well-nigh impossible without wheels of some kind, and many commuters spend almost as much time in their cars as they do on the job.  As of Jan. 1, it is illegal in the state of California to use your mobile phone while driving unless you use hands-free technology.  Fortunately for the millions who will now have to find some other way to communicate from their cars, the automakers are rushing to integrate voice-recognition systems such as Amazon's Alexa into their products so that you can simply ask for directions or ask to talk to a friend, and the system will do the rest.

As reported in a recent New York Times article, Ford announced that Alexa will be a feature of its newest hybrid models later this year.  A mobile Internet connection is vital to the new service, which relies on cloud computing for the often computationally intensive task of voice recognition.  The same Internet connection will be used for many of the services accessed by the software:  online purchases, remote control of "Internet-of-Things" devices, and many other uses besides the obvious ones of telephone service and GPS guidance. 

The new law is a step forward in the struggle to reduce traffic accidents caused by distracted driving.  But we have yet to see what the effects of a well-functioning voice-recognition system in a car may be in terms of safety. 

Studies have shown that visual distractions can be deadly to drivers, while sounds are much less so.  Most people can carry on an animated conversation with a passenger without being too distracted from driving, and it's reasonable to assume that conversations with voice-recognition software will not be much more distracting than having a live passenger beside you.  Still, depending on the usefulness and accuracy of the system and the number and complexity of features, things could get complicated.

Your scribe here lives such a sheltered life that the closest I've come to an Alexa is seeing the ad for it every time I click onto Amazon.com.  So I am not in a position to pass judgment personally on how well they work.  Apparently they work well enough to have made Amazon a lot richer in the past year or so, and the quality trend as more artificial-intelligence resources are applied to these things will only be upward.  Like many other new technologies, the real challenge in growing the market won't be so much technical as it will be changing people's habits.  And the California law is a powerful incentive to do so.

Consumers lie on a spectrum with regard to the adoption of new technologies.  Some folks—often younger ones—are early adopters who are the ones who wait in line all night long to be the first to buy a new iPhone or what have you.  The bulk of us don't rush out right away to get every latest thing, but when friends or acquaintances tell us about the item and how pleased they are with it, we go ahead and buy one when our old one wears out or when some business or personal need makes it better to buy than not.  And then, bringing up the rear of the bell curve, there are late adopters such as myself, who cling to old technologies with a grip that often takes legal force to loosen. 

There's no need to spend much marketing effort on early adopters—they often turn out to be a product's best informal salespeople as they show off their new purchases to others.  The major challenge is getting the average person to change their ways in the face of a new technology.  And California has done the automakers and the voice-recognition people a big favor in passing their hands-off-the-phone law.

Casual observation shows that a large fraction, if not a majority, of people who drive also like to talk on the phone at the same time.  If they haven't already adopted hands-free technology, as of this month, in California at least, they'll have to do something in order to avoid the threat of getting a ticket.  Enforcement is going to be lax at first, but the understanding is that this is just a grace period to give people time to adopt a new way of phoning while driving, and eventually you'll have to be using some kind of voice-recognition system, whether it's in your phone or installed in your vehicle. 

For people such as real-estate agents, maintenance providers, and others who drive around all day and have to be in touch with customers, the new law is just part of having to do business, and they will either buy a car with a built-in system or achieve their goal some other way, if they haven't already. 

For others who have not made a habit of talking on the phone while driving, the law will mean either pulling off the road when their hands-on phone goes off, or ignoring it until they reach their destination.

Eventually, though, such actions will seem as quaint as hunting around for a pay phone to make a phone call.  The last time I saw a working pay phone was last summer on a drive through a small Nebraska town.  If I recall correctly, the same town also had a small operating movie theater in the middle of town, and a factory near the edge of town that made lawnmowers.  I didn't see any signs saying "Caution — Entering the Twilight Zone" but it gave me that feeling. 

The California law, and the automotive voice-recognition systems that will allow people to abide by it, are all part of the push to make us constantly connected whether we're at home, at work, or in between.  It's what people seem to want, or at least think they want.  Why they think they want it is another question, but one best left for another time.

Sources:  The New York Times article "Coming From Automakers: Voice Control That Understands You Better" by Neal F. Boudette and Nick Wingfield appeared on Jan. 5, 2017 at http://www.nytimes.com/2017/01/05/automobiles/automakers-voice-control-amazon-alexa.html. 

Monday, January 02, 2017

What Are the Rules of Cyberwarfare?


We are now well into the era of cyberwarfare—the use of computers and computer networks in military, terrorist, and diplomatic conflicts.  But to judge by the recent tiff between President Obama and Russian President Vladimir Putin, neither the U. S. nor Russia has figured out exactly how to use these new weapons, or how to defend against them effectively.

Last July, Wikileaks unleashed a flood of embarrassing emails hacked from the Democratic National Committee, leading to the resignation of that organization's chairwoman Debbie Wasserman Schultz and undoubtedly influencing the Presidential selection process, though to what degree is impossible to say.  In December, the CIA announced that they were confident that Russian hackers were responsible for stealing the emails and giving them to Wikileaks.  And on Dec. 23, President Obama announced that he was retaliating for the hacks by sending home 35 Russian diplomats and taking other actions against the Russian diplomatic corps in the U. S.  After initial talk by Russian officials of retaliation against the retaliation, Russian President Vladimir Putin surprised many by saying he would suspend any actions against U. S. diplomats in Russia, at least until the Trump administration takes office. 

Retaliation against diplomats has been around ever since there have been diplomats.  Over the decades, countries have developed traditional ways of treating official representatives from foreign lands with policies such as diplomatic immunity from routine prosecution, the suspension of normal customs inspection for diplomatic materials, special diplomatic zones around embassies, and other perks.  But one reason for all these special privileges is that they can be revoked at any time. 

This writer is old enough to recall some of the many times that the old Soviet Union (USSR) engaged in these kinds of games with the U. S. on any pretext or sometimes no pretext at all.  It was all part of the Cold War chess game, and watched closely for indications that the Soviets might be wanting to warm up the war a little.  Everyone agrees that sending a diplomat packing is a lot better than throwing bombs, so while tensions are raised by such incidents, it's usually a sign that serious conflicts are not in the immediate offing.

Still, there are a couple of notable and disturbing aspects of the DNC hacks and their consequences.  One concerns the identity of the hackers, and the other concerns what constitutes a truly effective response to such attacks.

It took nearly six months for the CIA to be confident enough to announce publicly that Russians were in fact responsible.  In that aspect, hacking and other hard-to-trace cyberattacks resemble terrorism, in that the identity of the terrorists responsible for a given attack is usually not immediately known, and may not ever be discovered.  Although good detective and investigative work often uncovers the perpetrators eventually, the delay between the attack and the discovery of who did it allows for uncertainty to dominate the situation, leading to general confusion, controversy, and other problems that are usually exactly what the attacker wants to achieve in the enemy camp.  It's possible that the CIA made its announcement when it did not because it took all that long to figure out who did it, but for other diplomatic or political reasons.  Still, it's hard to fight back against an enemy if you don't know who he is.

Identifying the source of a cyberattack is only the first step in an effective response.  As in conventional warfare, one doesn't want to overreact, but on the other hand, just letting an enemy get away with anything isn't good either.  An important factor in these not-yet-open-warfare conflicts is how the public perceives them.  Both the U. S. and the Russian presidents do everything with an eye to their constituents, so things done in secret which have secret effects are not that useful.  Instead of using the hacked emails for their own purposes, whoever hacked them (probably the Russians) gave them maximum publicity, and to the extent that the DNC was hampered in its operations, the attack was a success. 

What's new and disturbing about this particular incident is that it represents a significant intrusion into the domestic electoral process by a foreign power which overtly favored a particular candidate—one who will take office on Jan. 20, barring unforeseen circumstances.  What makes the situation worse is that the President-elect does not seem to be all that troubled about it.  Four years in office is a long time, though, and it's likely that Trump and Putin will at some point fail to agree on something, after which it's anyone's guess what will happen.

Part of what makes it so hard to defend against cyberattacks is the global nature of the Internet environment—Moscow or Paris or Adelaide is just as close to my Internet connection as the neighbor down the street.  Traditional military defenses were geographically fixed and you could draw contours of safety within them—here, you have to be concerned about ground attacks, there you are subject to air bombings, and way back behind the front lines, there was almost nothing to worry about.  But cyberattacks can go anywhere there's an Internet connection, and the targets are often only as well-defended as the private organizations and their IT people can make them.  As we know, these defenses range from the almost impregnable to the nearly nonexistent, and so many attractive cyber-targets are almost defenseless against a concerted attack by well-resourced agents of a foreign power.

It's not clear that the best defense is a good offense either, especially when it's not immediately clear who is doing the attacking.  And when many thefts of data are not discovered until months or years after the damage is done, it's even harder to mount an effective response.

It looks like international cyberwarfare will muddle along in this confused state unless and until such a major attack occurs that we get serious about some sort of national defense policy against foreign cyberwarfare.  There are serious concerns being voiced these days about the hacking of power grids and other vital infrastructure systems such as air-traffic control and the domestic Internet itself.  Our best defense for these systems right now is that nobody has a strong reason to attack them, but that could change at any time.  And if it does, I just hope we're ready for what comes afterwards.

Sources:  I referred to a report on President Obama's retaliatory actions against Russia carried by CNN on Dec. 29 at http://www.cnn.com/2016/12/29/politics/russia-sanctions-announced-by-white-house/, and also a report on Putin's non-response at https://www.washingtonpost.com/world/russia-plans-retaliation-and-serious-discomfortoverus-hacking-sanctions/2016/12/30/4efd3650-ce12-11e6-85cd-e66532e35a44_story.html.

Sunday, December 25, 2016

Clifford Furnas and the Clouded Crystal Ball


In 1936, during the depths of the Great Depression, a professor of physical chemistry at Yale named Clifford C. Furnas published a book in which he tried to anticipate the next great advances in science and engineering during the following century.  His book was inspired by a visit he made to the Chicago World's Fair in 1933, otherwise known as the "Century of Progress Exposition," which marked the 100-year anniversary of the founding of Chicago.  Many of the technical exhibits designed to show how the world of tomorrow would be better than the depression of today didn't work properly, so he went home, surveyed the state of science, engineering, and technology, and made his best guesses as to how things would stand by 2033, appropriately entitling the result The Next Hundred Years.

My interest isn't so much in the accuracy of his technical predictions as in his expectations for what the trend of automation would yield for the economy and the working life of the average citizen.  It was already obvious by 1933 that a lot of jobs formerly done partly or wholly by hand up to then would be performed by machines or even robots in the future.  But what Furnas missed, along with nearly every other prognosticator up to the end of World War II, was the rise of the electronic computer, computer networking, and the growth in Internet-based economic activity.  And without the computer, modern robotics would be impossible, because without digital control systems (now including artificial intelligence), a robot can't do anything much more than act as a power-assist to a human being.

What we're talking about is the rise in what economists call productivity:  the economic output of a nation divided by the number of hours worked.  One person using a small lathe and a few hand tools can build a watch in maybe a few dozen hours, depending on what they start with.  But one person at the controls of an otherwise fully automated watch factory can make hundreds or thousands of watches per hour.  And Furnas was right in his prediction that advances in automation would (a) greatly increase the productivity of the average worker, and (b) render obsolete entire classes of jobs that previously employed millions of people. 
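The productivity arithmetic in the watchmaking example can be made concrete with a minimal sketch.  The specific figures below (36 hours per hand-built watch, 500 watches per hour from an automated factory) are illustrative assumptions of mine, not numbers from Furnas:

```python
# Productivity, as economists define it here: output divided by hours worked.
# The figures are illustrative only -- a hand watchmaker vs. a single
# operator running a largely automated watch factory.

def productivity(units_produced, hours_worked):
    """Units of output per hour of labor."""
    return units_produced / hours_worked

hand_work = productivity(1, 36)    # one watch in roughly 36 hours by hand
automated = productivity(500, 1)   # one operator, 500 watches per hour

# The ratio is the factor by which automation multiplies output per worker.
print(automated / hand_work)  # prints 18000.0
```

Even with these made-up numbers, the point survives any reasonable adjustment: automation raises output per labor-hour by orders of magnitude, which is exactly what makes whole job categories obsolete.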

Where he went wrong was his prediction about what the result of these changes would be.

In Furnas's view, the average man (he barely discussed women at all), when faced with a choice of working 40 or 50 hours a week for ever-increasing pay, or else getting paid the same wages for less and less work, would choose to work less and get paid the same amount for it.  Consequently, the great challenge he foresaw for the future was to find things for people to do with all their spare time, now that their jobs could be done in as little as one or two hours a day.  He summarized the difficulty thus:  "Our problem will be to keep the citizenry on even keel while they have a wealth of time on their hands, for certainly a society steeped in mere idleness will soon lose its moral fiber, its material possessions and its reasons for existence." 

Why didn't things turn out that way?  Why isn't the U. S. a peaceful country full of debating societies, painting groups, and volunteer choirs, instead of harboring an increasingly divided populace in which some better-educated folks live a life of relative freedom and interesting work, while most people without advanced degrees work longer and longer hours in uncertain dead-end jobs (sometimes two or three jobs at once) and feel they can barely get by?  And don't forget the growing class of working-age men who have simply resigned from the workforce altogether and spend their days playing video games and in other forms of, in Furnas's words, "mere idleness."

A complete answer to these questions would require a book, or several books by a group of experts with talents that I lack.  But in my 300 words or so remaining, I'll hazard a few guesses.

One answer will sound paradoxical:  the rise in the standard of living.  The phrase "keeping up with the Joneses" captures some of this idea.  For Furnas's vision of the leisure class to come to pass, it wouldn't do for just a few people to choose shorter working hours over more pay—most of the country would have to do it.  And in the hyper-competitive international economic arena, a country in which most of its working people work only two hours a day would lag behind countries where 40 or 50 hours a week was the norm. 

Another answer is that people are, frankly, greedy.  And greed, at least of the mildly acquisitive type, is the engine that fuels advertising and consumer economies such as in the U. S. and most other industrialized nations these days.  There are a few people who choose to live on next to nothing and cut themselves off from the grid, but most of us regard them as eccentrics at best and dangerous at worst. 

A third factor is what I call "building-code creep."  If you attempted to build a house today in the way a modestly-priced house was built in 1930, you would be violating nearly every building code in the book.  Where's the third wire for grounding the outlets?  Where's your insulation, air conditioning, smoke alarms?  What's all this lead paint doing here?  That gas water heater has no automatic flameout-protection valve.  In thousands of ways that have made life safer and more convenient, we have changed the rules of material life so that it costs a great deal more to live simply than it used to.  In certain rural parts of the country, most if not all of these things can be skipped, but at the price of living dangerously.

For a variety of reasons, we seem to be entering a period in which increasing numbers of people in the U. S. choose to live without jobs.  But most of them don't seem to be happy about it, and I think Furnas was on to something when he expressed concern about the deteriorating moral fiber of a nation where idleness becomes a way of life for many people.  The key, if there is one, lies in the phrase "reasons for existence," but that is a topic for another blog.

Sources:  Clifford C. Furnas's The Next Hundred Years was published in 1936 by Reynal & Hitchcock, New York.  The quotation about keeping citizens on an even keel is from p. 367.  I previously referred to this book in my blog on Sept. 23, 2013, "Engineers and Technological Unemployment:  What Are People For?"

Monday, December 19, 2016

Are We Ready For an AI World?


The other day I was making some hotel reservations, and set them up with two different hotel chains.  One is universally pet-friendly (we often travel with a dog), and you can call the hotel you want to stay at and talk with the desk clerk directly to make your reservation.  The clerk gets into their reservation system and takes your information and usually there's no problem, although if you call at a busy time it can be a little stressful on the clerk. 

The other chain makes all phone reservations through a centralized phone system—if you call the individual motel, the desk clerk transfers you to the same reservation number you can call directly.  Recently this chain transitioned to a computerized voice-recognition system—your voice is unheard by human ears when you dial the number.  It didn't go well.

I suppose those familiar with the robotic phone-tree industry could name the company that makes this system by the way it sounds.  It has a friendly female voice saying, "Okay, what can we do for you?  Tell me if you want to make a reservation," etc.  At first I hoped I'd eventually get to talk with a live human, because my experience with these robot voices has been mixed at best.  Maybe it's my tone of voice, maybe it's my Southern background, but unless the computer is asking for simple yes-or-no answers, I don't have much luck with them. 

It asked me for the place I wanted to stay and what day and how many nights.  I tried to tell it—twice, in fact—but all I got back was this peculiar fast clicking ("pip-pop-pip-pop") which I have to believe is what the system puts on the line instead of Muzak while it's trying to puzzle out what you said, and then it asked the same question all over again.  Finally I hung up and used the chain's website to make the reservation, which may be what they want people to do anyway—I'm sure it's a lot less trouble to them than their robot telephone operators. 

This is an up-close and personal encounter with something that is only going to get worse—or better, depending on your point of view—in the future.  I'm talking about the replacement of people with technology in a wide variety of jobs.  In a recent issue of The New Yorker magazine, Elizabeth Kolbert reviews a number of books concerned with the recent advances in artificial intelligence (AI), and the effects this is going to have on the job market, the economy, and society in general. 

This isn't going to happen overnight.  Paradoxically, it's easier to program a computer to diagnose certain types of diseases with expert systems than it is to teach one how to fold towels.  Kolbert cites an experiment at U. C. Berkeley with a robot that learned to fold a towel—after practicing, it got its time down to twenty-five minutes per towel.  In that regard, at least, Rosie the Robot isn't going to replace hotel housemaids any time soon. 

On the other hand, if you work in a phone-answering "boiler room," you have reason to be worried, although my own experience with the robotic reservation clerk shows there is still a place for humans on the other end of the line.  Kolbert classifies jobs into four types:  manual routine jobs (e. g. folding towels or working on an assembly line), cognitive routine jobs (e. g. keeping track of a warehouse inventory), manual nonroutine jobs (e. g. home health care or brain surgery), and cognitive nonroutine jobs (e. g. developing a new AI system).  Both types of routine jobs, where you can basically write an algorithm about what to do in any given situation, are ripest for replacement by robots and AI software.
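Kolbert's taxonomy is really a two-axis lookup—manual versus cognitive, routine versus nonroutine—and can be sketched in a few lines.  The function and the encoding of "routine implies high automation risk" are my own shorthand for the article's claim, not anything from the article itself:

```python
# Sketch of the two-axis job classification described above.
# Routine jobs of either kind are the ones flagged as ripest for
# replacement by robots and AI software.

def classify(kind, routine):
    """Return (category label, at_high_automation_risk)."""
    category = f"{kind} {'routine' if routine else 'nonroutine'}"
    return category, routine  # routine-ness is the risk factor here

# Example jobs drawn from the four types listed in the text.
examples = {
    "assembly-line work":    ("manual", True),
    "warehouse inventory":   ("cognitive", True),
    "home health care":      ("manual", False),
    "AI system development": ("cognitive", False),
}

for job, (kind, routine) in examples.items():
    print(job, "->", classify(kind, routine))
```

The interesting cases are off the diagonal: a "cognitive" job like inventory tracking is easier to automate than a "manual" one like home health care, because routineness, not brainpower, is what an algorithm can capture.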

The fear that humans will lose their jobs to machines goes back at least to the 1700s, when mechanical looms and spinning jennies began to replace weavers and the one-person spinning wheel.  But until recently, industrialization produced at least as many new jobs as the old ones it eliminated, if not more. 

The problem now is that many new firms that attract billions in capital now operate with essentially nobody.  Kolbert cites an extreme example:  the messaging firm Whatsapp, with its fifty-five employees, was bought by Facebook in 2014 for twenty-two billion dollars.  That's four hundred million dollars per employee.  When I told my wife about it, she said, "Well, I hope they didn't lose their jobs when they got bought out."  I hope not either.  Maybe the janitor did, but you can rest assured that some of that twenty-two billion found its way into the pockets of at least a few of those people. 
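The per-employee figure is simple division, and it checks out exactly:

```python
# Facebook's 2014 purchase of Whatsapp, per the figures in the text:
# twenty-two billion dollars for a firm of fifty-five employees.
price = 22_000_000_000
employees = 55

per_employee = price / employees
print(f"${per_employee:,.0f} per employee")  # prints "$400,000,000 per employee"
```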

Leaving lottery-like occurrences aside, the point is that both software-based and manufacturing enterprises are finding ways to do what they need to do with fewer and fewer warm bodies who are not in the upper echelon of the cognitive non-routine class.  The few people they still need—lawyers, managers, creative people, and other "symbolic manipulators," in George Gilder's phrase—may form the future ruling class of what software developer Martin Ford calls "techno-feudalism." 

But even feudal lords needed their serfs to work their lands.  The ruling class of the future will have no need for anyone not in their class, except as consumers.  Most of the authorities Kolbert cites figure that the best we can do with the vast majority of us ordinary mortals who have no aptitude for programming, management, the law, or high finance, is to pension us off with guaranteed incomes, or something that amounts to that, and hope we don't decide to up and storm the castle some day.

Next week I plan to look at an alternate view of the same problem, written during the depths of the Great Depression, but I've run out of space today.  In the meantime, if you have a job, be grateful for it, and share some of what you have with those less fortunate.   

Sources:  Elizabeth Kolbert's piece "Rage Against the Machine:  Will Robots Take Your Job?" begins on p. 114 of the Dec. 19 & 26, 2016 issue of The New Yorker magazine.

Monday, December 12, 2016

Hot-Air Ballooning Needs Down-to-Earth Regulation


On the morning of Saturday, July 30, 2016, a group of sixteen people gathered in a Wal-Mart parking lot in Central Texas before sunrise for what they hoped would be a thrilling and memorable experience.  Several of them were married couples or newlyweds.  Ross and Sandra Chalk were 60 and 55 but recently married, while John and Stacee Gore were both in their 20s and celebrating their third wedding anniversary that week.  Others showed up as a result of a birthday present given by a loving friend or relative.  All fifteen passengers were trusting balloon pilot Alfred Nichols to take them up in his hot-air balloon, give them a wonderful experience, and return them safely to earth.  But two out of three wasn't going to be good enough.

As often happens on summer mornings in this part of Texas, low clouds drifted through the sky.  But after a short delay, Nichols decided to fly anyway, and around 7 AM, shortly after sunrise, the balloon took off with fifteen passengers and the pilot.

Photos taken during the flight show patchy clouds and fog beneath the balloon.  Evidently Nichols decided to land near Maxwell, Texas, about forty miles southeast of Austin.  Utility-company records show that at 7:42 AM, something happened to trip a protective relay on a high-voltage transmission line crossing a cornfield.  First responders soon discovered that the balloon had become entangled in the transmission line, caught fire, and crashed, killing all sixteen people aboard, including Nichols.  This was the worst balloon crash in U. S. history in terms of fatalities, and subsequent investigations have revealed some unsavory facts about Nichols and about the industry in general.

At a hearing held Friday, Dec. 9 in Washington, D. C., the National Transportation Safety Board (NTSB) presented documentation and evidence about the crash, which is still under investigation.  Toxicology reports show that Nichols had seven different prescription drugs at detectable levels in his body.  Prior to the crash, he had been convicted in Missouri of four charges of driving while intoxicated, and at the time of the crash was not allowed to drive a car in Texas.  Nevertheless, he held a valid commercial balloon pilot certificate.  Weather reports from the day of the crash show that the cloud ceiling had lowered to only 700 feet at the time of launch, and other balloon pilots present at the hearing agreed that they would not have flown under such conditions.  Nichols appears to have been a disaster waiting to happen.

We may be seeing a pattern that is all too familiar:  a new activity or business arises with little or no regulation, a tragedy results in headline-grabbing deaths, and only after the tragedy are laws amended to regulate the activity or business properly.  Although hot-air balloons were the first form of human flight, invented back in the 1700s, balloon rides were so infrequent, and the number of people involved so small, that a light-handed regulatory environment sufficed for decades.  But this tragedy may mark the point at which regulations catch up with the growing number of customers taking rides in larger balloons, which endanger more people than ever. 

The Federal Aviation Administration (FAA), recognizing these dangers, has established regulations for commercial hot-air balloon pilots, requiring them to pass rigorous tests, both written ones on paper and practical ones in a working balloon.  But beyond that, pilots are largely left on their own to follow the elaborate advice in the 252-page Balloon Flying Handbook issued by the FAA.  Most commercial balloon operations are small, like the one-man show that Nichols ran, and lack the natural supervision that working for even a small charter-plane company would entail.  The solo nature of balloon flying, plus the fact that the person piloting the balloon is probably the one who stands to profit the most if a full-capacity flight goes forward in hazardous conditions, means there are built-in conflicts of interest in this type of flying that are not faced by pilots who work for major airlines, for example.  For this reason alone, one would hope that regulatory oversight would be at least as rigorous as it is for commercial charter-flight pilots of fixed-wing aircraft, not less.  As it is, however, there are not even any reliable statistics on how many flight hours are logged by commercial balloon pilots in the U. S., as some public-health experts researching the problem found in 2013. 

Part of the problem is that the regulatory question is caught in a turf war between the NTSB, which investigates transportation accidents of all kinds, and the FAA, which issues flight safety regulations and requirements for both flight equipment and pilots.  The NTSB has been pushing for tighter balloon-pilot regulations for years, but the FAA has so far refused to act, trusting to private balloon-pilot organizations to do self-enforcement.  In Nichols' case, at least, this kind of enforcement failed.

It's all very well to publish books of regulations and advice, but if enforcement is left solely up to the person who also stands to profit personally if the rules are flouted, the FAA is guilty of putting too much trust in fallible human nature.  Something along the lines of periodic background checks and even surprise drug tests should be implemented for commercial hot-air balloonists who take the lives of others into their hands.  Commercial balloons can carry as many as 32 passengers, and newspaper reports have pointed out that many charter and common-carrier fixed-wing aircraft don't carry that many passengers.  The bottom-line purpose of flight regulation is to protect the lives of passengers, and the FAA's creaky system for doing that for hot-air balloon riders crashed along with the sixteen people who lost their lives on that summer day.

Balloons tend to be associated in the public mind with fun, frivolity, and pleasant times.  The balloon Nichols was piloting had a big smiley face with sunglasses painted on it.  If people are going to continue to ride balloons for pleasure, we should make sure that they aren't putting their lives into the hands of someone who can't drive them to the takeoff point because of drunk-driving convictions.  I hope the FAA and the NTSB can work out their differences to revise hot-air ballooning regulations and policies so that the tragic crash last summer is the last one of that magnitude for a long, long time.

Sources:  I referred to reports of the NTSB hearing held Dec. 9, 2016 on the San Antonio Express-News website at http://www.mysanantonio.com/news/local/texas/article/NTSB-holds-hearing-on-balloon-crash-that-killed-10777463.php and KXAN-TV at http://kxan.com/2016/12/09/witnesses-recall-lockhart-hot-air-balloon-crash-that-killed-16/ and http://kxan.com/2016/10/07/hot-air-balloon-regulations-unchanged-despite-deadly-crash/.  The paper "Hot-Air Balloon Tours:  Crash Epidemiology in the United States, 2000-2011" by S.-B. Ballard, L. P. Beaty, and S. P. Baker was published in Aviation, Space, and Environmental Medicine in 2013 in vol. 84, pp. 1172-1177, and is available online.  The FAA's "Balloon Flying Handbook" is available as a download at https://www.faa.gov/regulations_policies/handbooks_manuals/aircraft/media/FAA-H-8083-11.pdf.

Monday, December 05, 2016

How Public Utilities Became Public Utilities


The idea of a "public utility" is firmly entrenched in the minds of most people who live in industrialized countries today.  Things like the water supply, electric power, and more recent developments such as Internet service are all considered well-nigh essential to modern life.  Most people would probably agree that because of this, governments have the right to regulate public utilities in a way that would be regarded as heavy-handed or illegal if the firm involved were making dental floss, for example, instead of providing a necessity like clean water or electric power.  But I, for one, never stopped to wonder where the phrase came from until I read a historical article by Adam Plaiss called "From Natural Monopoly to Public Utility."

Plaiss traces the origin of the phrase all the way back to philosopher John Stuart Mill, who used it in a different sense, as a modifier rather than a noun.  Mill referred to canals and bazaars as works useful to the general public—that is, works of "public utility."  But the concept that a system of waterworks or communications could be called a public utility dates back only to the late 1800s, when the related concept of a natural monopoly began to influence thinkers during what came to be called the Progressive Era.

Progressives enthused about applying relatively new social sciences such as economics to pressing public problems such as the exploitation of the working classes by private monopolistic companies.  One of the first professionally trained economists in the U. S. was Richard T. Ely, who earned his doctorate in Germany and came back to join the effort to apply scientific approaches to economics as a way of "bring[ing] about a better world."  And during a period in the U. S. when utility companies selling gas, water, electricity, and telephone service were rapidly expanding, Ely examined the question of a natural monopoly.  Was there such a thing, and if so, what were its characteristics?

Around 1888, Ely came up with a set of criteria that made an entity a natural monopoly.  The thing it supplied had to be a necessity, like water.  The area it served had to be geographically distinct.  And there could be no wasteful duplication of service within the area.  A classic example of what Ely called a natural monopoly was a water-supply company.  The heavy expense of laying pipes and distribution networks made it virtually impossible for there to be meaningful competition between two rival water-supply companies for the same customers.  So if a service met Ely's criteria for being a natural monopoly, Ely believed it was the public's right to regulate that service closely. 

Perceptive and thoughtful as Ely was, Plaiss points out that he had a blind spot when it came to the root cause of a natural monopoly.  Ely attributed the cause to the nature of the hardware infrastructure itself.  But the idea that only private capital could afford to build utility services was so universally accepted at the time that Ely failed to see how the economic background of late-1800s America contributed to the existence of natural monopolies.  It is only a slight exaggeration to say that Ely believed technology caused natural monopolies, not people. 

And because Ely saw the creation of natural monopolies as "technologically determined," as historians put it, he felt it was necessary for all owners of such monopolies to be subject to government regulation.  Otherwise, horrors such as Plaiss cites in his paper might come about, and did in fact happen in the 1880s and 1890s.  For example, privately-owned water companies in cities such as Houston and Seattle refused to extend their networks to newer parts of the cities, hampering fire departments which had no water hydrants to connect to in case of fire.  And a typhoid-fever outbreak in Superior, Wisconsin was caused by impure water provided by a private water company.  Thus, Ely believed that effective governmental control, if not outright ownership, of natural monopolies was necessary to prevent the exploitation of the masses that would result from unregulated private ownership.

After Ely published his thoughts along these lines, a Progressive journalist named Henry Call first used the phrase "public utility" as a noun in 1895, meaning by it any organization that enjoys what Ely would call a natural monopoly in delivering what was considered a modern necessity.  Call widened this category to include "banks, railroads, telegraphs," and municipal services such as water and gas.  In the coming years, as cities and states established regulatory commissions and agencies for such utilities, the public got used to the idea that certain types of business could be categorized as public utilities, and therefore subjected to regulation.  Many states passed regulatory laws for public utilities in the twenty years or so after 1900, which saw the height of the Progressive Era.  And although the free-market trends of the 1920s put a damper on further attempts at regulation, the distress of the Great Depression renewed public enthusiasm for government controls on all sorts of businesses that looked like public utilities.  The establishment of the Federal Communications Commission in 1934 was squarely in the tradition of regulating public utilities such as the airwaves, for example. 

Since the Progressive Era, the scales of regulation have swung back and forth.  As late as the 1970s, airlines, the telephone system, and electric utilities in the U. S. were all closely regulated and rather dull businesses, guaranteed an annual profit by their regulatory agencies, but not encouraged to do anything rash or speculative.  By and large, this situation produced stability and profitability, but discouraged technological innovation.  The spate of deregulation that began in the 1980s and continues largely to this day contributed to an explosion of new communications technologies—cable TV, mobile phones, and the Internet, to mention only a few—but has arguably had its downsides, as many smaller cities lost air service altogether and the deregulated electric-power market was gamed by near-criminal enterprises such as Enron. 

With at least the hope of some fresh winds blowing through Washington these days, we may see a swing of the regulatory pendulum back toward tighter controls in some services, or looser ones, depending on whether the interests of the supposedly downtrodden public or of the wealthy owners of public utilities win out. 

But whatever happens, we will do well to remember that the idea of a public utility is only about 130 years old, and its definition has twisted and turned with the political winds of the times in which it was used.

Sources: "From natural monopoly to public utility: technological determinism and the political economy of infrastructure in progressive-era America," by Adam Plaiss, appeared in the Society for the History of Technology journal Technology and Culture (Oct. 2016, vol. 57, no. 4, pp. 806-830).