Monday, December 05, 2016

How Public Utilities Became Public Utilities


The idea of a "public utility" is firmly entrenched in the minds of most people who live in industrialized countries today.  Things like the water supply, electric power, and more recent developments such as Internet service are all considered well-nigh essential to modern life.  Most people would probably agree that because of this, governments have the right to regulate public utilities in a way that would be regarded as heavy-handed or illegal if the firm involved were making dental floss, for example, instead of providing a necessity like clean water or electric power.  But I, for one, never stopped to wonder where the phrase came from until I read a historical article by Adam Plaiss called "From Natural Monopoly to Public Utility."

Plaiss traces the origin of the phrase all the way back to philosopher John Stuart Mill, who used it in a different sense, as a modifier rather than a noun.  Mill referred to canals and bazaars as works useful to the general public—that is, works of "public utility."  But the concept that a system of waterworks or communications could be called a public utility dates back only to the late 1800s, when the related concept of a natural monopoly began to influence thinkers during what came to be called the Progressive Era.

Progressives enthused about applying relatively new social sciences such as economics to pressing public problems such as the exploitation of the working classes by private monopolistic companies.  One of the first professionally-trained economists in the U. S. was Richard T. Ely, who obtained his doctorate in Germany and came back to join the effort to apply scientific approaches to economics as a way of "bring[ing] about a better world."  And during a period in the U. S. when utility companies selling gas, water, electricity, and telephone service were rapidly expanding, Ely examined the question of a natural monopoly.  Was there such a thing, and if so, what were its characteristics?

Around 1888, Ely came up with a set of criteria that made an entity a natural monopoly.  The thing it supplied had to be a necessity, like water.  The area it served had to be geographically distinct.  And there could be no wasteful duplication of service within the area.  A classic example of what Ely called a natural monopoly was a water-supply company.  The heavy expense of laying pipes and distribution networks made it virtually impossible for there to be meaningful competition between two rival water-supply companies for the same customers.  So if a service met Ely's criteria for being a natural monopoly, Ely believed it was the public's right to regulate that service closely. 

Perceptive and thoughtful as Ely was, Plaiss points out that he had a blind spot when it came to the root cause of a natural monopoly.  Ely attributed the cause to the nature of the hardware infrastructure itself.  But the idea that only private capital could afford to build utility services was so universally accepted at the time that Ely failed to see how the economic background of late-1800s America contributed to the existence of natural monopolies.  It is only a slight exaggeration to say that Ely believed technology caused natural monopolies, not people. 

And because Ely saw the creation of natural monopolies as "technologically determined," as historians put it, he felt it was necessary for all owners of such monopolies to be subject to government regulation.  Otherwise, horrors such as Plaiss cites in his paper might come about, and did in fact happen in the 1880s and 1890s.  For example, privately-owned water companies in cities such as Houston and Seattle refused to extend their networks to newer parts of the cities, hampering fire departments which had no water hydrants to connect to in case of fire.  And a typhoid-fever outbreak in Superior, Wisconsin was caused by impure water provided by a private water company.  Thus, Ely believed that effective governmental control, if not outright ownership, of natural monopolies was necessary to prevent the exploitation of the masses that would result from unregulated private ownership.

After Ely published his thoughts along these lines, a Progressive journalist named Henry Call first used the phrase "public utility" as a noun in 1895, meaning by it any organization that enjoys what Ely would call a natural monopoly in delivering what was considered a modern necessity.  Call widened this category to include "banks, railroads, telegraphs," and municipal services such as water and gas.  In the coming years, as cities and states established regulatory commissions and agencies for such utilities, the public got used to the idea that certain types of business could be categorized as public utilities, and therefore subjected to regulation.  Many states passed regulatory laws for public utilities in the twenty years or so after 1900, which saw the height of the Progressive Era.  And although the free-market trends of the 1920s put a damper on further attempts at regulation, the distress of the Great Depression renewed public enthusiasm for government controls on all sorts of businesses that looked like public utilities.  The establishment of the Federal Communications Commission in 1934, for example, was squarely in the tradition of regulating public utilities such as the air waves. 

Since the Progressive Era, the pendulum of regulation has swung back and forth.  As late as the 1970s, airlines, the telephone system, and electric utilities in the U. S. were all closely regulated and rather dull businesses, guaranteed an annual profit by their regulatory agencies, but not encouraged to do anything rash or speculative.  By and large, this situation produced stability and profitability, but discouraged technological innovation.  The spate of deregulation that began in the 1980s and continues largely to this day contributed to an explosion of new communications technologies—cable TV, mobile phones, and the Internet, to mention only a few—but has arguably had its downsides, as many smaller cities lost air service altogether and the deregulated electric-power market was gamed by near-criminal enterprises such as Enron. 

With at least the hope of some fresh winds blowing through Washington these days, we may see a swing of the regulatory pendulum back toward tighter controls in some services, or looser ones, depending on whether the interests of the supposedly downtrodden public or of the wealthy owners of public utilities win out. 

But whatever happens, we will do well to remember that the idea of a public utility is only about 130 years old, and its definition has twisted and turned with the political winds of the times in which it was used.

Sources: "From natural monopoly to public utility: technological determinism and the political economy of infrastructure in progressive-era America," by Adam Plaiss, appeared in the Society for the History of Technology journal Technology and Culture (Oct. 2016, vol. 57, no. 4, pp. 806-830).

Monday, November 28, 2016

Driving While Online: Does the NHTSA Know Best?


Many generations of technology ago—that is to say, in the 1950s—there was a popular TV show called "Father Knows Best," starring Robert Young as the father of three children whose escapades and misfortunes always wound up with the kids having a talk with Daddy.  When this happened, you knew the final commercial break was coming up and everything would be tied up neatly in a few more minutes. 

Real family life in the 1950s wasn't as easy to fix as "Father Knows Best" portrayed, and neither is the problem of drivers getting distracted by portable devices such as mobile phones, tablets, and so on.  Some observers attribute the recent rise in per-mile auto fatalities in the U. S. mainly to electronic distractions, and the National Highway Traffic Safety Administration (NHTSA), an agency of the U. S. Department of Transportation (DOT), has recently issued a draft set of "guidelines" for makers of electronic devices and automotive manufacturers to follow in order to address this problem.

Everybody admits there's a real problem.  If you've driven more than a few hours in rush-hour traffic in any major city, you've probably seen people doing things at the wheel that you can't believe they're doing, like texting or studying something on the car seat, even watching videos.  The question is what to do about it.

Lots of municipalities have tried to attack the problem by passing a no-hand-held-device-use ordinance for drivers, but enforcing such a thing is not something that highway patrol officers get real excited about, and the consensus is that these ordinances have not made a big dent in the problem.

So on Nov. 23, the NHTSA announced a draft of guidelines for makers of portable devices:  mobile phones, tablets, GPS display systems, you name it.  Two of the new concepts that these guidelines, if followed, would introduce to the driving public are "pairing" and "Driver Mode."

Pairing refers to an electronic connection between the portable device and the vehicle's built-in displays and controls.  Historically, the automakers have taken the NHTSA's word seriously regarding its recommendations for how to incorporate safety features in cars.  Although guidelines do not have the force of law, they can become law if Congress so chooses, and so many safety features such as seat belts and air bags showed up in cars as options before they were made mandatory.  In an earlier set of guidelines, the NHTSA set up rules for built-in instrumentation that would meet the agency's non-distraction requirements.  This involves things like not requiring the driver to glance away from the road for more than two seconds at a time and so on.  Their reference maximum distraction is tuning a radio manually.  Anything that distracts you more than that is basically regarded as too much.

Assuming the car's built-in controls and displays meet that criterion, pairing basically ports the portable device's controls to the car's built-in controls, which automatically meet the distraction guidelines already.  Maybe this sounds easy to a regulatory agency, but to this engineer, it sounds like a compatibility nightmare.  For pairing to work most of the time, every portable device that anyone is likely to use in a car will have to be able to communicate seamlessly with the wide variety of in-car systems, and be able to use those systems as a remote command and control point instead of the device's own controls and displays.  Maybe it can be made to work, but at this time it looks like a long shot.  And even if it does, you have the problem of those die-hards (such as yours truly) who cling to cars that are ten or fifteen years old and will never catch up to the latest technology.  (Those folks tend not to buy the latest portable devices either, but there are exceptions.)

Recognizing that pairing won't solve all the problems, the next step is Driver Mode.  This is an operational mode that goes into effect when the device figures out it's in a moving car.  Most new portable gizmos these days have built-in GPS systems, and so they can detect vehicle motion without much of a problem, although there might be issues with things like rides on a ferry boat and so on.  But those situations are rare enough to be negligible.  Once in Driver Mode, the device will refuse to let the user do things like texting, watching videos, and other activities that distract more than the reference tuning-the-radio operation would. 

One can foresee problems with Driver Mode as well.  The NHTSA says the user should be able to switch it off, and if this option is available, my guess is a lot of people will choose to disable Driver Mode altogether.  A determined distracted driver is going to find a way to text while driving no matter what, but the hope is that with these new measures in place—pairing and Driver Mode, mainly—the number of incidents of distracted driving will decrease, and we will resume the decline in traffic accidents that has been going on historically for the last several decades.
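The Driver Mode idea described above boils down to a simple decision rule: sample the device's GPS speed, and above some threshold, block the functions judged more distracting than manually tuning a radio.  Here is a minimal sketch of that logic in Python.  The speed threshold, the list of blocked features, and the function names are all my own assumptions for illustration, not values taken from the NHTSA draft guidelines.

```python
# Hypothetical sketch of Driver Mode logic: the numbers and feature names
# below are illustrative assumptions, not the NHTSA's actual specifications.

DRIVING_SPEED_MPH = 5  # assumed cutoff between walking pace and driving
BLOCKED_WHILE_DRIVING = {"texting", "video", "web_browsing"}

def in_driver_mode(gps_speed_mph, user_override=False):
    """Decide whether the device should be in Driver Mode.

    The NHTSA draft says the user should be able to switch the mode off,
    which is modeled here by the user_override flag.
    """
    if user_override:
        return False
    return gps_speed_mph >= DRIVING_SPEED_MPH

def feature_allowed(feature, gps_speed_mph, user_override=False):
    """Return True if the device should permit the requested feature."""
    if in_driver_mode(gps_speed_mph, user_override):
        return feature not in BLOCKED_WHILE_DRIVING
    return True
```

Note how the override makes the whole scheme voluntary: a user doing 40 mph with the override set gets texting back, which is exactly the loophole discussed above.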

While the NHTSA deserves credit for encouraging device makers and car manufacturers to consider these ideas, it is not clear that there is a lot of enthusiasm for them, especially on the part of the mobile phone makers.  Automakers selling big-ticket cars can more easily adapt their products to the different requirements of different legal regimes in the U. S. and, say, France.  But piling a bunch of complicated pairing features onto phones sold only in the U. S. may not be an easy thing to convince phone makers to do.  Unless the U. S. initiative proves so popular that it becomes a global phenomenon, my guess is that mobile phone makers will resist building in the pairing function, especially because they would have to deal with a bewildering variety of host controls and displays in cars that would be hard to keep up with.

This issue is just one aspect of the huge upheaval in the auto industry that IT is causing right now.  Integrating cars with the Internet and portable devices, and making sure in-car displays work without causing wrecks, are only two of the many challenges that car makers face in this area.  Ironically, the move toward driverless cars, if successful, would render all the driver-distraction precautions pointless anyway.  If the driver's not doing anything, it's fine to let him or her be distracted.  That's Google's hope, anyway, in developing driverless cars:  less time paying attention to driving means more time on the Internet. 

The hope is that all the confusion will eventually settle down, or at least we will make the transitions to highly IT-intensive cars that are still at least as safe to drive as the older ones, if not safer—until we don't have to drive them at all.  But it looks like right now, at least, car makers will have to aim simultaneously at two targets that are moving in opposite directions. 

Sources:  An article summarizing the NHTSA proposed guidelines appeared in the San Jose Mercury-News on Nov. 23, 2016 at http://www.mercurynews.com/2016/11/23/biz-break-feds-nudge-phone-makers-to-block-drivers-from-using-apps-behind-the-wheel/.  The NHTSA press release about the guidelines can be found at https://www.nhtsa.gov/About-NHTSA/Press-Releases/nhtsa_distraction_guidelines_phase2_11232016, and the press release has a link to a .pdf file of the draft guidelines.

Monday, November 21, 2016

Can the Digital Future of Cars Save Lives and Time?


Despite all the positive changes the automobile has wrought, there are still a few big problems.  Leading the list is the rate of automotive fatalities and injuries—tens of thousands of people die in U. S. car crashes every year, and many times that number are seriously injured.  Next on my list is the millions of person-hours wasted each year by people sitting in slow traffic—needlessly long commute times.  Add the carbon footprint of each car to that picture, and you can see plenty of room for improvement in the way we use machines to get around. 

At a one-day event called AutoMobilityLA held at the annual Los Angeles Auto show that runs through Nov. 27, New York Times reporter Tom Volek surveyed a number of digital technologies that promise to deal with all of these problems.  But as with many nice ideas, the difficulty is how we're going to get from here to there without making things worse before they get better.

Take self-driving cars, for instance.  According to Dr. Alexander Hars, a blogger at a site called www.driverless-future.com, several studies have shown the potential for a self-driving taxi to perform the transportation work of six to ten privately-owned vehicles.  He also claims that the first widespread use of self-driving cars will be in fleets of self-driving taxis operating in restricted geographic areas such as densely populated districts of urban areas (think places like Singapore, where the first commercial self-driving taxi fleet debuted last August). 

Maybe these forecasts are right, but computer simulations leave out certain factors that may be decisive.  For example, there are lots of cabs in Manhattan, and there would be even more if the existing cab companies had not engaged in rent-seeking by restricting the total number of medallions available and fighting innovative unlicensed services such as Uber and Lyft.  But even if all the restrictions on cabs and taxi-like services in Manhattan were removed, I think you would still have a lot of cars clogging the streets, many of them privately owned. 

A city is a complex thing, and it is a mistake to assume everything else will stay the same if all you do is insert a change in the transportation mix.  That is why new freeways get crowded so quickly and the race to alleviate congestion by building more freeways never seems to be won.  Better and more congenial transportation attracts residential and commercial development until the new transportation mode is just as crowded as it used to be, and then people go somewhere else to repeat the cycle.

And even more important than alleviating commuting time and headaches is safety.  We are told that once most cars on the road are self-driving, auto accident rates will plummet.  Given the fact that most auto fatalities are due to operator misjudgments and not mechanical failures, I can believe that.  Computers don't get drunk and try to impress their friends with their alcohol-impaired driving skills. 

But as the isolated but well-publicized fatality involving a Tesla quasi-self-driving vehicle showed last May, people can put more trust in a nearly self-driving car than is warranted.  Despite warnings to keep his hands on the wheel when the self-driving feature was engaged, Joshua Brown apparently was watching a video at the wheel of his Tesla when a truck unexpectedly crossed its path, and the system failed to recognize it in time to avoid a fatal crash.  Tesla has since made changes to their system to avoid such problems, but no system is going to be 100% safe no matter how much the software is tweaked. 

What consumers and the auto insurance industry are waiting for is evidence that, over time, truly self-driving cars that require nothing more from the passenger than to sit there and not mess with things will lead to fewer injuries and deaths than would result if all those people were driving instead of sitting on their hands.  Despite all the self-driving car test drives and public demonstrations of the last few years, we are nowhere near the point at which a reasonably robust statistical study of this type can be made.  And until that time, neither insurers nor the general public will get interested in self-driving cars in a major way.

On the other hand, fleets owned by a single entity and driving in a specific well-mapped area can make real headway, and probably will unless entrenched interests stop them, as existing cab companies are trying to do with unlicensed services. 
           
The current situation reminds me of a scene I saw recently in a 2003 movie made mostly in Germany.  Some bicyclists come to a railroad crossing with a gate lowered across it.  Now in the U. S., railroad crossings with gates are completely automatic—some track-sensor gizmo lowers the gates when a train passes by and raises them afterwards.  But in this scene, a young man in an elevated booth next to the tracks finally looks up from the book of poetry he's reading and walks over to a crank and turns it by hand to raise the gate. 

There in a nutshell you have the two choices we face regarding self-driving vehicles.  I don't know what combination of union rules and tradition and exaggerated concerns for safety led to preserving the job of crossing-guard keeper in Germany some eighty years after the technology to eliminate that job became available.  But if in 2060, we still have medallioned cabs in Manhattan manually driven by immigrants who can't find a better job and 40,000 traffic deaths a year in the U. S., it won't be because the technology isn't available.  It will be because human organizations and political factors intervened to stifle the change for fifty years.  And if for no other reason than for the sake of those whose lives will be lost to automobile accidents in that time, that would be a shame.

Sources:  Tom Volek's article "At the Los Angeles Auto Show, Industry Ponders Its Digital Future" appeared on Nov. 17, 2016 at http://www.nytimes.com/2016/11/18/automobiles/autoshow/los-angeles-auto-show-digital-future-of-industry.html.  Dr. Hars's blog appears at http://www.driverless-future.com/ and is sponsored by Inventivio GmbH of Germany.  A report on the commercial driverless-car taxi service in Singapore appeared at http://bigstory.ap.org/article/615568b7668b452bbc8d2e2f3e5148e6/worlds-first-self-driving-taxis-debut-singapore.  The movie in which the hand-cranked crossing gate appeared is "Schultze Gets The Blues," released in 2003 and written and directed by Michael Schorr. 

Monday, November 14, 2016

Can Democracy Survive Social Media?


That's the question that Wired reporter Issie Lapowsky raises in a Nov. 12 piece entitled "Facebook Alone Didn't Create Trump—The Click Economy Did."  Like many in the media, Lapowsky wasn't expecting Trump to win.  But she got a hint of what might happen when she spoke in October with a 75-year-old Trump supporter in Ohio who told her a string of crazy stories about the various depravities of Hillary and Bill Clinton.  The source of all these patently false but juicy tales?  Facebook. 

It wasn't just negative rumors that helped Trump win, says Lapowsky, but the way Trump conveyed his anger and outrage through tweets that were picked up by the media so that even non-tweeters like yours truly read about them.  It turns out that certain emotions play better over social media than others, and anger is near the top of the list. 

Once a surprising and unexpected thing happens, it's not hard to find reasons why it happened.  Whatever your political sympathies may be, the outcome of last Tuesday's presidential race shows us that social media are playing an increasing role in the way politics works in democracies such as the U. S.  And the social and ethical implications of that shift are just now beginning to be understood.

Probably the single most important difference between the way social media convey political messages today and the way the old mass media used to do it is the fact that people now can choose media that agree with their politics.  This includes friends on Facebook, Twitter feeds, websites, and even cable TV channels.  Liberals tend to listen to and read other liberals, and ditto for conservatives.  The ability to self-select one's news sources leads most people to shield themselves in comfortable bubbles or echo chambers in which people hear only the kinds of talk they want to hear.

There's nothing new about this, of course.  But for a period of about sixty years—from around 1920 to 1980—most U. S. citizens received their news from sources that were designed to appeal to the widest range of readers and listeners—and viewers, when TV came along.  John Durham Peters is a professor of communication studies at the University of Iowa, and he points out that what he calls the "old mass media" used capital-intensive plant and equipment—printing presses, news organizations such as the Associated Press, and radio and TV networks—and therefore had to make money by appealing to the largest number of people.  They did this by developing so-called "objective journalism" that strenuously avoided partisanship and tried to present an even-handed view of political and social events.  The fact that nearly everyone in the U. S. received their news from only a few news networks, which often sounded alike, imposed a uniformity of viewpoint that was not always good—minority and dissident views were often suppressed—but tended to give everyone the same starting point in political discussions.  It's hard to tell, but we may owe a good deal of the comparative unity and domestic peace within the U. S. for that period to the homogenizing influence of mass media.

The funny thing is that the objective journalism of the twentieth-century mass media was itself something of an anomaly historically.  Before newspapers got big enough to organize and use the Associated Press and similar wire-news organizations for most of their news content, most papers were highly partisan.  Even in small towns, Republicans subscribed to the Republican paper and Democrats to the Democratic paper.  Editors took radical stands and learned to deal with the consequences.  In 1869, Mark Twain penned a humorous but only slightly exaggerated view of life at a nineteenth-century newspaper in a satirical piece called "Journalism in Tennessee."  A substitute editor of a small-town paper starts his first day on the job and gets shot at, bombed, thrown out the window, and subjected to a general riot and insurrection that wrecks the office.  When the chief editor returns from vacation, he hears of these disasters and says nothing more than, "You'll like this place when you get used to it." 

Maybe Facebook and Twitter aren't as physically violent as Tennessee journalism was in 1869, but the verbal equivalent of bullets and bombs fly around social media every day, and the effects are often similar.  In 1960, no responsible newspaper would have knowingly printed false stories that one of the Presidential nominees was getting secret messages in an earpiece from a billionaire during debates and was married to a man who had an illegitimate half-black son.  But that's the kind of thing the Wired reporter heard from the Trump supporter, and the stories came from Facebook. 

Every new communications medium, going all the way back to the electromagnetic telegraph, has been hailed at first as a promising means of unifying people, parties, and nations.  And if people were angels, all these glowing predictions would come true.  But angels don't need to send telegrams or tweets, and the fallible, sinful humans who do use communications media often put them to the worst conceivable purposes. 

This is not a call for censorship or any third-party control of the way people communicate with each other.  We need only to recall how social media have played helpful and positive roles in the overthrow of repressive regimes to realize that authoritarian measures to suppress free speech are harmful to democracy.

But in the wake of last week's election, it wouldn't surprise me to see renewed calls for such restraints, although the political climate will soon change to the point that such calls may fall on deaf ears.  What should concern us more is the bad habit many have of isolating themselves by means of social media to the point that so-called discussions amount to nothing more than a group of like-minded people massaging each others' prejudices.  Politics is the art of compromise, but if you spend all your time talking with people who think just like you, you'll lose the ability to compromise.  And no one else is going to make us get out of our self-created shells.  We have to do that on our own.

Sources:  Issie Lapowsky's article "Facebook Alone Didn't Create Trump—The Click Economy Did" appeared in Wired on Nov. 12, 2016 at https://www.wired.com/2016/11/facebook-alone-didnt-create-trump-click-economy/.  John Durham Peters spoke on the old mass media in an interview with Mars Hill Audio's Ken Myers in Vol. 131 of that online audio journal, available at https://marshillaudio.org.  And Mark Twain's satirical piece "Journalism in Tennessee" can be found in The Complete Short Stories of Mark Twain (ed. Charles Neider), published by Bantam in 1971.

Monday, November 07, 2016

Can Science and Technology Studies Prevent the Next Engineering Disaster?


"Technology is neutral.  It's only how it's used that can be good or bad." 

Back in the 1960s and even up to the 1970s, a statement along those lines was often the standard response you got from an engineer or scientist if you raised questions about the dangers or moral implications of a given invention.  The neutrality argument was used to defend radio, television, computers, and even nuclear energy.  But Sheila Jasanoff, for one, would disagree.

Jasanoff teaches science and technology studies (STS) at the Harvard Kennedy School.  In an editorial in the October edition of the journal IEEE Spectrum, Jasanoff told chief editor Susan Hassler that there is no such thing as a value-neutral technology.  Hassler was speaking with Jasanoff about her new book, The Ethics of Invention:  Technology and the Human Future (Norton, 2016), in which Jasanoff argues that every technology worthy of the name is designed with some idea of the good in mind.  And we don't get ideas of what is good only from technology itself.  That comes from the wider culture, which invariably informs and shapes the motivations of those who strive to create innovations that will do something that somebody, somewhere will regard as good.  Even the terrorist assembling a kettle bomb in his basement thinks it will be good, in his private sense, if the bomb goes off and kills people.  So in that limited sense, every technology is designed with some good in mind, and while the particular good may be influenced by the technology, it is what the philosophers call "logically prior to" the technology, at least most of the time. 

So far so good.  But then Hassler goes on to say that STS programs such as the one Jasanoff teaches in ought to be more closely integrated with the engineering curricula of more schools, as they are already in a few places such as the University of Virginia and Stanford.  Maybe if engineering students were obliged to take in-depth looks at the social implications of technology, and STS students had to study more technical subjects, we could avoid creating monsters that look good in the laboratory or as prototypes, but end up causing disasters once they reach thousands or millions of customers. 

Hassler's position is one I'm in sympathy with.  I spent seven years as an officer of the IEEE's Society on Social Implications of Technology, and in the process met a lot of interesting and thoughtful people who share Jasanoff's concern that, as Hassler puts it, we seem to be stuck on a "hamster wheel of innovation, disaster, and remediation."  In other words, the main way we seem to find out that a given technology can be harmful is not by doing forward-thinking studies while it's still in the planning stages, but by selling it on an industrial scale and then reaping the adverse consequences when they become so obvious that we can't ignore them. 

Hassler complains that most engineering undergrads will lump STS classes in with the other humanities as time-wasting compared to the burdensome technical classes they must take in order to graduate.  And by and large, she's right.  This even goes for the subject that is probably the most prominent educational intersection between engineering and the humanities:  engineering ethics.  Here at Texas State University, philosophy courses are required for every undergraduate student on campus, and engineering and philosophy faculty have worked together to get NSF funding to sponsor an engineering-ethics-specific undergraduate philosophy course.  Hassler also cites Stanford as a place where STS majors have to complete technical requirements as well as humanities requirements.  But I would point out that, unless these humanities students go on to get an advanced technical degree, they are not going to have the influence on real-world innovations that engineering students would have.

I think the basic problem here is not educational, but attitudinal.  The type of person who goes in for an engineering degree likes to think that he or she is going to make a positive difference by helping to create innovative products and services that, yes, are regarded as good by somebody.  The basically optimistic mindset this requires is often at cross-purposes to the mindset required in many STS subjects, which is that of a critical stance.  I'm not saying that all STS people are anti-technologists.  Many of them are former engineers or engineering students whose enthusiasm for their technical studies carried them beyond technical matters to explore the wider social implications of that technology, and remain basically supportive of it. 

But to sustain a career, one must establish a basic point of view, and answer a question like this:  Am I going to join this technical field as a participant and team player, not stopping to question the basic goodness of what I'm doing, but taking reasonable precautions to avoid foreseeable harm?  Or am I going to devote my life to viewing this technology from the outside, observing its effects and consequences on various organizations and groups of people, and thinking and writing about that?  It's not as simple a division as action versus contemplation, but it comes close.  And the fact of the matter is that many of the adverse consequences of certain technologies, such as burning fossil fuels, were invisible and undetectable until it was far too late to forestall the harm.  Some bad effects simply cannot be discovered until a technology is already in widespread use.

I sympathize with Jasanoff's concern, and Hassler's wish that STS was something that more engineers and scientists knew about.  But I'm not sure that if we just had engineers taking more STS courses and STS majors taking more engineering courses, the world would be much safer than it is now.

Sources:  Susan Hassler's editorial "STEM Crisis?  What About the STS Crisis?" appeared on p. 9 of the October 2016 North American issue of IEEE Spectrum.  Sheila Jasanoff's book The Ethics of Invention:  Technology and the Human Future was published in 2016 by W. W. Norton & Co.

Monday, October 31, 2016

Zombie Cameras On the Internet of Things


On Friday, Oct. 21, millions of Internet users trying to access popular websites including Twitter, Netflix, the New York Times, and Wired suddenly saw them stop working.  The reason was that for a few hours, a massive distributed-denial-of-service (DDOS) attack hit a domain-name-server (DNS) company called Dyn, based in New Hampshire.  As I mentioned in last week's blog, DNS companies provide a sort of phone-book service that turns domain names such as www.google.com into machine-readable numeric addresses that connect the person requesting a website to the server that hosts it.  They are a particularly vulnerable part of the Internet, because one DNS server can handle requests for thousands of websites, so if you take that server down, you've effectively cut off access to all those websites for as long as it stays out of service.
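The phone-book analogy can be made concrete with a toy sketch.  The table below is an in-memory stand-in for a real DNS server (the names and addresses are invented for illustration); the point is that one lookup table serves many names, so losing it loses them all:

```python
# Toy illustration (not a real resolver): DNS as a phone book that maps
# human-readable names to numeric addresses.  Entries are made up.
DNS_TABLE = {
    "www.example.com": "93.184.216.34",
    "blog.example.org": "203.0.113.7",
}

def resolve(hostname):
    """Return the numeric address for a hostname, as a DNS server would."""
    address = DNS_TABLE.get(hostname)
    if address is None:
        # If the table (i.e., the DNS server) is unreachable or has no
        # record, the browser has no way to contact the website at all.
        raise LookupError("no record for " + hostname)
    return address

print(resolve("www.example.com"))  # → 93.184.216.34
```

A DDOS attack on Dyn is, in effect, an attack on the availability of this one table for every site listed in it.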

DDOS attacks are nothing new, but the Oct. 21 attack was the largest yet to use primarily Internet-of-Things (IoT) devices in its "botnet" of infected devices.  The Internet of Things is the proliferation of small sensors, monitors, and other devices less fancy than a standard computer that are connected to the Internet for various purposes. 

Here's where the zombie cameras come in.  Say you buy an inexpensive security camera for your home and get it talking to your wireless connection.  If you're like millions of other buyers of such devices, you don't bother to change the default password or otherwise enhance the security features that would prevent unauthorized access to the device, as you might if you bought a new laptop computer.  Security experts have known for some time about a new type of malware called Mirai that takes over poorly protected always-on IoT devices such as security cameras and DVRs.  When the evil genius who sent out the Mirai malware sends a signal to the infected gizmos, they all start spouting requests to the targeted DNS server, which immediately gets buried in requests and can't respond to anybody.  That is what a DDOS attack is. 
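The way Mirai recruits a device can be sketched in a few lines.  The credential list below is a hypothetical sample (Mirai's real list was longer), and `check_login` is a stand-in for talking to an actual device, not a real API:

```python
# Hypothetical sketch of Mirai-style scanning: try a short list of
# factory-default logins against a device.  The credentials shown are
# illustrative; check_login is a stand-in for a real network login.
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "12345"),
    ("admin", "password"),
]

def try_defaults(check_login):
    """Return the first default (user, password) pair that works, else None."""
    for user, password in DEFAULT_CREDENTIALS:
        if check_login(user, password):
            return (user, password)
    return None

# A camera whose owner never changed the factory login is captured at once:
camera = lambda user, pw: (user, pw) == ("admin", "admin")
print(try_defaults(camera))  # → ('admin', 'admin')
```

A device whose owner picked any password not on the list would return `None` here, which is why simply changing the default defeats this particular attack.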

As the victim learns the nature of the requests, programmers can mount a defense, but skillful attackers can foil these defenses too, for a time, anyway.  The attackers went away after three attacks that day, each lasting a couple of hours, but by then the damage had been done.  The attacks made significant dents in the revenue streams of a number of companies.  And perhaps most importantly, we learned from experience that the much-ballyhooed Internet of Things has a dark side.  The question now is, what should we do about it?

Sen. Mark Warner, a Democrat from Virginia, has reportedly sent letters to the FCC and other relevant Federal agencies asking that same question.  According to a report on the website Computerworld, Warner has a background in the telecom industry and recognizes that government regulation may not be the best answer.  For one thing, Internet technology can change so fast that by the time a legislative or administrative process finally produces a regulation, it can be outmoded even before it's put into action.  Warner thinks that the IoT industries should develop some kind of seal of security approval or rating system that consumers could use to compare prospective IoT devices before they buy. 

This may get somewhere, and then again it may not.  The reason is that an IoT device which can be hijacked for a DDOS attack, but otherwise functions normally as far as the consumer is concerned, is a classic case of what economists call an "externality."

A more familiar type of externality involves the air-pollution abatement devices on cars:  catalytic converters, the diesel exhaust fluid that truck drivers now have to buy, and all that stuff.  None of it makes your car run better; in fact, cars can get better mileage or performance without the anti-pollution equipment working, as Volkswagen knew when it purposely disabled the anti-pollution function on some of its diesel models and turned it on only to pass government inspections.  The pollution your car would cause without that equipment is an externality.  Your own car's contribution is so small that nobody would notice it; only when you add up the contributions of the millions of cars in a city does it become a problem.  But without anti-pollution equipment on your car, you're adding a tiny bit to the air pollution that everybody in your city has to breathe.  It's that involuntary aspect, the fact that other people are put at a disadvantage because of your action (or inaction), that makes it an externality.
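A quick back-of-the-envelope calculation shows how negligible individual contributions become a citywide problem.  All numbers here are invented purely for illustration:

```python
# Toy arithmetic: each car's extra pollution is unnoticeable on its own,
# but multiplied by a city's worth of cars it adds up.  The figures are
# hypothetical, chosen only to make the scale difference vivid.
grams_per_car_per_day = 5.0      # assumed extra emissions from one car
cars_in_city = 2_000_000         # assumed number of cars in a big city

individual = grams_per_car_per_day                  # 5 grams: invisible
citywide = grams_per_car_per_day * cars_in_city     # 10,000,000 grams

print(citywide / 1_000_000)  # → 10.0  (metric tonnes per day)
```

The same multiplication applies to insecure IoT devices: one unprotected camera is harmless, but millions of them add up to a weapon.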

The vulnerability of IoT devices to being used in DDOS attacks is an externality of a similar kind.  When you buy and install a security camera, or rent a DVR from your cable company, and they don't have enough security software installed to prevent them from being used in a DDOS attack, you're raising the risk of such an attack for everybody on the Internet.  And they don't have a choice in the matter.

Historically, externality problems such as air and water pollution have been resolved only when the government gets involved at some level.  When the externality problems are strictly local, sometimes local political pressures can resolve the issue, but the Internet is global by nature (although, for reasons that are not entirely clear, the Oct. 21 attacks affected mainly East Coast users).  So my guess is that to fix this issue, we are going to have to have national or international governmental cooperation to set some rules and fix minimum standards for IoT devices regarding this specific problem.

The solutions are not that hard technically:  things like attaching a unique username and password to each IoT device and designing them to receive security updates.  These measures are already in place for conventional computers, and as IoT devices get more sophisticated, the additional cost of these security measures will decline to the point that it will be a no-brainer, I hope. 
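One of those fixes, a unique credential for every device, can be sketched as a manufacturing-time step.  The device-ID format and the `provision_device` helper below are invented for illustration; the random-password generation uses Python's standard `secrets` module:

```python
import secrets
import string

# Sketch of one fix mentioned above: instead of shipping every unit with
# "admin"/"admin", the factory generates a unique random password per
# device.  The device-ID format here is hypothetical.
ALPHABET = string.ascii_letters + string.digits

def provision_device(device_id, length=16):
    """Generate a unique random factory credential for one device."""
    password = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return {"device": device_id, "user": "admin", "password": password}

cred = provision_device("CAM-000123")
print(len(cred["password"]))  # → 16
```

With per-device passwords, a Mirai-style scan of a short default list finds nothing, which is the whole point of the measure.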
           
But right now there are millions of the gizmos out there that are still vulnerable, and it would be very hard to get rid of them by any means other than waiting for them to break or get replaced by new ones.  So we have created a serious security problem that somebody, somewhere has figured out how to take advantage of.  Let's hope that the Oct. 21 attack was the last big one of this kind.  But right now that's all it is—just a hope. 

Sources:  I referred to the article "What We Know About Friday’s Massive East Coast Internet Outage" by Lily Hay Newman of Wired at https://www.wired.com/2016/10/internet-outage-ddos-dns-dyn/, and the article "After DDOS attack, senator seeks industry-led security standards for IoT devices" by Mark Hamblen at http://www.computerworld.com/article/3136650/security/after-ddos-attack-senator-seeks-industry-led-security-standards-for-iot-devices.html.  I also referred to the Wikipedia articles on "externality" and "Mirai" (which means "future" in Japanese).

Monday, October 24, 2016

The Day The Internet Goes Down


This hasn't happened—yet.  But Bruce Schneier, an experienced Internet security expert with a track record of calling attention to little problems before they become big ones, is saying he's seeing signs that somebody may be considering an all-out attack on the Internet.  In an essay he posted last month called "Someone Is Learning How to Take Down the Internet," he tells us that several Internet-related companies which perform essential functions such as running domain-name servers (DNS) have come to him recently to report a peculiar kind of distributed denial-of-service (DDOS) attack.

For those who may not have read last week's blog about ICANN, let's back up and do a little Internet 101.  The URLs you use to find various websites end in domain names—for example, .com or .org.  One company that has gone public on its own with some limited information about the attacks is Verisign, a Virginia-based firm whose involvement with the Internet goes back to the 1990s, when for a time it served as a kind of Internet telephone book for every domain ending in .com, before ICANN, now an internationally-governed nonprofit organization, took over that job.  Without domain-name servers, networked computers can't figure out how to find websites, and the whole Internet communication process pretty much grinds to a halt.  So the DNS function is pretty important.

As Schneier explains in his essay, companies such as Verisign have been experiencing DDOS attacks that start small and ramp up over a period of time.  He likens them to the way the old Soviet Union used to play tag with American air defenses and radar sites in order to see how good they were, in case it ever had to mount an all-out attack.  From the victim's point of view, a DDOS attack is as if you were an old-fashioned telephone switchboard operator and all your incoming-call lights lit up at once—for hours, or however long the attack lasts.  It's a battle of bandwidths, and if the attacker generates enough dummy requests over a wide enough bandwidth (meaning more servers and more high-speed Internet connections), the attack overwhelms the victim's ability to keep answering the phone, so to speak.  Legitimate users of the attacked site are blocked out and simply can't connect as long as the attack is effective.  If a critical DNS server is attacked, there's a good chance that most of the domain names it serves will also disappear for the duration.  That hasn't happened yet on a large scale, but some small incidents have occurred along these lines recently, and Schneier thinks that somebody is rehearsing for a large-scale attack.
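The battle-of-bandwidths idea can be captured in a toy model.  Assume (purely for illustration) a server that can answer a fixed number of requests per second, with attack traffic consuming capacity before legitimate traffic; once the attack rate approaches capacity, legitimate requests start being dropped:

```python
# Toy model of a ramping DDOS attack.  All rates are invented; real
# servers and attacks are far more complicated (queues, timeouts, etc.).
def dropped_legit(capacity, attack_rate, legit_rate):
    """Legitimate requests per second that go unanswered during an attack,
    assuming attack traffic eats into capacity first."""
    spare = max(0, capacity - attack_rate)   # capacity left for real users
    return max(0, legit_rate - spare)

# An attack ramping up against a server handling 10,000 requests/second,
# with 1,000 legitimate requests/second arriving:
for attack in (0, 5_000, 9_500, 20_000):
    print(attack, dropped_legit(10_000, attack, 1_000))
```

In this sketch nothing visible happens until the attack nears capacity, which is consistent with Schneier's point: a probing attacker can ramp up slowly, note where service starts to degrade, and thereby measure the defender's capacity without ever fully taking the site down.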

The Internet was designed from the start to be robust against attack, but back in the 1970s and 1980s, the primary fear was an attack on the physical network, not one using the Internet itself.  Nobody goes around chopping up fiber cables in hopes of bringing down the Internet, because it's simply not that vulnerable physically.  But it's likely that few if any of the originators thought of the possibility that the Internet's strengths—universal access, global reach—would be turned against it by malevolent actors.  It's also likely that few of them may have believed in original sin, but that's another matter.

Who would want to take down the Internet?  For the rest of the space here I'm going to engage in a little dismal speculation, starting with e-commerce.  Whatever else happens if the Internet goes down, you're not going to be able to buy stuff that way.  Schneier isn't sure, but he thinks these suspicious probing attacks may be the work of a "state actor," namely Russia or China.  Independent hackers, or even criminal rings, seldom have access to entire city blocks of server farms, and high-bandwidth attacks like these generally require such resources.

If one asks the simple question, "What percent of retail sales are transacted over the Internet for these three countries:  China, the U. S., and Russia?" one gets an interesting answer.  It turns out that as of 2015, China transacted about 12.9% of all retail sales online.  The U. S. was next, at about 8.1%.  Bringing up the rear is Russia, at around 2%, which is where the U. S. was in 2004.  Depending on how it's done, a massive attack on DNS sites could be designed to damage some geographic areas more than others, and without knowing more details about China's Internet setup I can't say whether China could manage to cripple the Internet in the U. S. without messing up its own part.  But there is so much U. S.-China trade that Chinese exports would start to suffer pretty fast anyway.  So there are a couple of reasons that if China did anything along these lines, they would be shooting themselves in the foot, so to speak.

Russia, on the other hand, has much less in the way of direct U. S. trade, and while it would be inconvenient for them to lose the use of the Internet for a while, their economy, such as it is, would suffer a much smaller hit.  So based purely on economic considerations, my guess is that Russia would have more to gain and less to lose in an all-out Internet war than China would.

A total shutdown of the Internet is unlikely, but even a partial shutdown could have dire consequences.  Banks use the Internet.  Lots of essential utility services, ranging from electric power to water and natural gas, use the Internet for what's called SCADA (supervisory control and data acquisition) functions.  The Internet has gradually become a critical piece of infrastructure whose vulnerabilities have never been fully tested in an all-out attack.  It's not a comfortable position for a country to be in, and in these days of political uncertainty and the waning of dull, expert competence in the upper reaches of government, you hope that someone, somewhere has both considered these possibilities in detail, and figured out some kind of contingency plan to act on in case it happens. 

If there is such a plan, I don't know about it.  Maybe it's secret and we shouldn't know.  But if it's there, I'd at least like to know that we have it.  And if we don't, maybe we should make plans on our own for the Day The Internet Goes Down.

Sources:  Bruce Schneier's essay "Someone Is Learning How to Take Down the Internet" can be found at https://www.schneier.com/blog/archives/2016/09/someone_is_lear.html.  I obtained statistics on the percent of U. S. retail e-commerce sales from the website https://ycharts.com/indicators/ecommerce_sales_as_percent_retail_sales, the China data from https://www.internetretailer.com/2016/01/27/chinas-online-retail-sales-grow-third-589-billion-2015, and the Russia data from https://www.internetretailer.com/commentary/2016/02/08/russian-e-commerce-domestic-sales-slump-chinese-imports-soar.  I also referred to the Wikipedia article on Verisign.