Monday, June 19, 2017

The Grenfell Tower Tragedy

In 1974, a new high-rise public housing apartment building opened in West London.  Called Grenfell Tower, it was 24 stories tall and designed to house as many as 600 people in 120 apartments.  Photographs of it taken before a renovation in 2015 show large windows on one side and smaller ones on the adjacent side. 

On New Year's Eve 2015, as reported in this blog, the 63-story Address Hotel in Dubai, United Arab Emirates went up in flames as aluminum-clad foam-plastic panels on its exterior, called architectural cladding or sandwich cladding, caught fire and quickly spread the conflagration over most of the outside of the building.  Amazingly, no one died in that fire, thanks to a quick evacuation order by the authorities and the failure of the fire to spread to the interiors of the hotel rooms.  But this was only one of numerous exterior-cladding fires that have resulted from the use of flammable architectural materials on buildings too tall to be reached conveniently by fire ladders.

In 2015, the Kensington and Chelsea Tenant Management Organisation, the bureaucracy in charge of public housing in the Grenfell Tower district, decided to do a renovation, possibly to improve the structure's insulation and lower heating costs.  New windows were installed, thermal insulation was added, and sandwich cladding panels were installed over the four exterior walls to cover these changes.

Some, perhaps most, of the cladding was made by the U. S. firm Arconic, which sells various types with different kinds of plastic between the outer aluminum sheets.  A cheaper type uses polyethylene plastic, but is not recommended for structures over 10 meters (33 feet) tall.  A slightly more expensive type is fire-resistant, as was the thermal insulation used underneath the cladding.  But even fire-resistant plastic can burn under some conditions.

When constructed, the building had no sprinkler system, but the apartments were piped for gas cooking and gas lines were present throughout the building.  Each apartment had fire detectors, but a residents' organization called the Grenfell Action Group has voiced complaints to authorities over the past few years about outmoded and non-functional fire extinguishers, flammable clutter in hallways, and other fire-safety issues, with little apparent response.

Residents of Grenfell Tower, like residents of most other London high-rises, had been instructed in case of fire to remain in place and wait to be rescued by firefighters, rather than attempt an escape on their own.

In retrospect, the Grenfell Tower fire was a disaster waiting to happen:  an aging, open-style building without a sprinkler system but full of gas lines, covered with apparently flammable sandwich cladding outside potentially flammable insulation material, crowded with up to 600 residents who had been told to stay in their apartments in case of a fire.  And in the early morning hours of June 14, 2017, a fire broke out, reportedly in a kitchen on the fourth floor.

No sprinkler system or fire extinguisher succeeded in stopping the blaze before it ignited the exterior cladding, which in a matter of a few minutes spread the flames upward and eventually completely around the structure.  Many survivors got out by disobeying the orders to stay in place.  As of this writing (June 18), the estimated death toll is 58, and is expected to go higher.  If this is confirmed, it will be the largest number of people to die in a single fire in London since the Blitz of World War II.

Fires that kill lots of people at once are not that uncommon, but usually they happen in crowded single-room venues such as nightclubs, where fireworks or other sources of ignition set flammable materials ablaze.  The spectacle of an entire high-rise building going up in flames because of flammable exterior cladding is something that is not supposed to happen in modern "fireproof" structures.  But the invention of a cladding material that is light, inexpensive compared to concrete, solid steel, or aluminum, and reasonably durable has led to its use and abuse throughout the world.  And as numerous cladding fires have shown, you can take the most fireproof building in the world, surround it with thin, flammable sheets exposed to a lot of air, and what you get is a giant Roman candle waiting to be set off. 

The Grenfell Tower fire may become a turning point in the politics and regulation of exterior cladding, similar to the infamous Triangle Shirtwaist Factory fire in New York City that killed 146 garment workers in 1911.  Like many of the residents of the public-housing Grenfell Tower, most of those who died in the 1911 fire were poor immigrants, though they died on the job amid flammable clothing materials, not at home surrounded by flammable architectural panels.  The Triangle fire had the good result of inspiring calls for improved fire-safety building codes and regulations, which if implemented can prevent such tragedies.

British Prime Minister Theresa May, already in a politically weak position, has been jeered and attacked for what many saw as her inadequate response to the tragedy.  She and other politicians could turn this situation to the benefit of their country by leading a thorough investigation into the causes of both the Grenfell Tower fire and other similar fires in which flammable exterior cladding has played a role.  Then they could take vigorous and definite action on both existing and future architectural cladding that has any significant chance of defeating fire safety by enabling a fire to spread across an otherwise fireproof structure's exterior. 

It is ironic that after centuries in which people suffered the hazards of living in wooden structures that were chronically prone to burn down, nineteenth-century architects thought they had solved the problem of fire with concrete-and-steel structures, only to torch their triumphs in the last few decades by using what amounts to cheap window-dressing materials that burn like fireworks.  If I were an architect, I would be afraid to show my face in London after the Grenfell Tower tragedy. 

The most basic ethical requirement of a profession is that the professionals look out for the interests of those average citizens affected by their professional activities, citizens who have no way of knowing what hazards they could be subject to and how to avoid them.  I would be surprised if more than a few residents of Grenfell Tower knew anything about sandwich cladding, or the fact that under the right circumstances it would burn.  Well, everyone knows now.  And I can only hope that this knowledge gets applied to similar dangerous situations, and that we do whatever it takes to keep another Grenfell Tower fire from happening anywhere, ever again.

Sources:  I referred to news reports about the Grenfell Tower fire carried by the Australian Broadcasting Corporation on June 17, the Canadian Global News, and the Wikipedia articles "Grenfell Tower fire" and "Triangle Shirtwaist Factory fire."  My blog on the Address Hotel fire in Dubai appeared on Jan. 4, 2016.

Monday, June 12, 2017

Moving Automated Driving To the Next Level

If there had been a competition for world-class back-seat drivers, my grandmother would have won it hands down.  Back in the 1980s when we were living in Massachusetts, we drove to Boston's Logan Airport and picked her up for a visit.  Despite never having been closer to New England than Ohio in her entire life, she immediately started telling me which turns to take in downtown Boston as soon as I got lost, which I always did anyway, but without her help.  We made it home, but not without lots of needless distraction.

Developers of what the Society of Automotive Engineers (SAE) calls "automated driving" face the opposite of the back-seat-driver problem:  getting people in the front seat to pay attention to the road while a robot does what the SAE calls "Level 3" automated driving. 

In a recent New York Times piece, tech writer John Markoff highlights the problems that arise when autonomous vehicles are not yet capable of 100% "hands-off" operation.  Two or three years ago, the National Highway Traffic Safety Administration (NHTSA) and SAE concurred on a classification scheme for automated driving systems.  What most people do now in older cars when they do all the driving themselves is Level 0 (no automation).  Level 5 is a system that could adapt to any and all driving conditions with no input whatsoever from the driver, who could therefore safely sleep or do crossword puzzles for the whole trip.  No one has yet been able to field a Level 5 system, but the standard assumes we will eventually get there.  In between, there are vehicles such as Tesla cars equipped with an autopilot system (Level 2), and the latest self-driving cars now being fielded by Google's autonomous-car spinoff Waymo (Level 4).  But even Level 4 cars can't cope with all situations, and when a driver starts to treat a Level 2 system as if it were Level 5, trouble lies ahead. 
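The six-level scheme can be sketched as a simple lookup table.  The one-line summaries below are my own paraphrases of the categories as described above, not the standard's official wording:

```python
# Illustrative sketch of the SAE automated-driving levels (paraphrased,
# not the official J3016 definitions).
SAE_LEVELS = {
    0: "No automation: the human does all the driving",
    1: "Driver assistance: steering OR speed is assisted",
    2: "Partial automation: steering AND speed assisted; driver must monitor",
    3: "Conditional automation: car drives, but human must take over on request",
    4: "High automation: no human input needed within a limited domain",
    5: "Full automation: no human input needed anywhere, ever",
}

def needs_attentive_human(level: int) -> bool:
    """Only Level 5 lets the occupant safely ignore the road entirely."""
    return level < 5
```

By this sketch, even a Level 4 car still needs an attentive human on call, which is exactly the trap described above.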

The worst example so far of driver inattention in a partially autonomous vehicle happened in 2016, when a Tesla Model S in Florida failed to detect a semi that suddenly crossed the vehicle's path.  Although Tesla warns drivers that they must be prepared to take evasive action in such situations, the driver was apparently watching a video, which was the last thing he saw, let us say.  This fatal accident was the first such mishap in a Tesla vehicle, and the cars have since been modified to test the driver's attention periodically.  And if the driver isn't paying consistent attention, the car will disable the autopilot feature for the rest of the trip, forcing the driver to go back to work.

This is just one specific example of a general problem with partially autonomous vehicles—say Levels 2 through 4.  They all require the driver to be prepared to regain control of the vehicle in an emergency or other situation that the robot driver can't cope with.  But as Markoff points out, going from sheer inattention to fully capable operation of a motor vehicle in a few seconds is not something people do particularly well.

Studies have shown that even for those who are mentally prepared for the transition, it can take as long as five seconds to adjust to the feel of the steering at a particular speed and get to the point where the driver is truly in control and capable of dealing with problems.  Five seconds can be longer than you have—a car traveling at 70 MPH covers about 513 feet (156 meters) in five seconds.  If the potential problem is only 200 feet away, by the time you're ready to act it may well be too late.
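The arithmetic behind that five-second figure is easy to check; this minimal sketch just converts miles per hour to feet per second:

```python
# How far a car travels during a handover delay at a given speed.
def distance_traveled_ft(speed_mph: float, seconds: float) -> float:
    feet_per_second = speed_mph * 5280 / 3600  # 1 mile = 5280 ft, 1 hr = 3600 s
    return feet_per_second * seconds

d = distance_traveled_ft(70, 5)
print(round(d))            # 513 feet
print(round(d * 0.3048))   # 156 meters
```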

Those wanting to deploy cars with more and more autonomous features face a chicken-and-egg problem.  Everybody admits that as of today, there is no system in which it is completely safe for the driver to act like he or she is at home in bed.  But to get to that point, we have to gain experience with less-than-perfect systems, which all require the human driver's input at some point.  The issue then becomes how to accustom drivers to this wholly new mode of "driving."  And people being people, they are not always going to follow instructions.  The man who lost his life in the Tesla accident was told to keep his hands on the steering wheel at all times.  But he'd found that nothing bad happened most of the time he didn't, and so would many others unless the system enforces attention in some way, which it now apparently does.

As for me, I may be fairly typical in that I am not interested in automated driving systems until I can trust them at least as well as I can trust my wife to drive—if not better.  We may be encountering a different form of what in aesthetics is known as the "uncanny valley."  Humanoid robots that look like classical robots—hardware sticking out from their metal chests and so on—don't bother us particularly.  And a humanoid robot that is such a good imitation of a human that you can't tell the difference between the robot and a real human presumably wouldn't bother us too much either.  But students of robotics have found that "human-like" robots that are close to real humans, but not close enough, give people the creeps.  And it will give me the creeps, or worse, if I sit behind the wheel without steering unless told to do so by a machine.

If I were sort of driving and sort of not driving a car that was doing things in traffic that I couldn't predict, and I was constantly hoping I wouldn't have to intervene but always wondering if something was about to happen that would require me to grab the wheel—well, I might as well quit my job and start teaching driver's education at Nelson's Driving School for the Chronically Nervous.  Back when high schools were obliged to teach driver's ed, you would learn in a car equipped with two brake pedals, one on the passenger's side where the instructor sat.  My instructor got to use her pedal more than once, and I can now only imagine what torment she went through while she watched me move jerkily through traffic.  If I were riding in anything less than a Level 5 autonomous vehicle, I'd be in the same position as my unfortunate driving instructor—all the time it was moving.

The prospects for autonomous driving hinge critically on how manufacturers and developers will handle the next five or so years, before truly autonomous (Level 5) driving is possible.  It may be the wisest thing to continue mainly with experiments until automakers can say with reasonable confidence and safety what the bus companies have been saying all along:  "Leave the driving to us."

Sources:  John Markoff's article "Robot Cars Can't Count on Us in an Emergency" appeared on the New York Times website on June 7, 2017.  It includes a reference to a summary of SAE Standard J3016, which defines the classification system for automated driving.  I also referred to Wikipedia articles on Waymo, the history of autonomous cars, and the uncanny valley. 

Monday, June 05, 2017

Cambria Corn Mill Dust Explosion Kills Three

Last Wednesday, May 31, everything seemed normal at the Didion corn mill in the small village of Cambria, Wisconsin.  Like most factories of its type, the mill operated 24 hours a day, and late that night only sixteen workers remained as the machinery processed corn into ethanol and other products.  Shortly after 11 P. M., a tremendous explosion sent flames high into the air, knocked out power, and destroyed most of the processing division of the plant.  Three workers were killed, eleven others were hospitalized with injuries, and millions of dollars of damage was done. 

Ever since 1878, when what was then the world's largest grain mill was destroyed by a dust explosion in Minnesota, grain mill operators have known that the fine particles produced by various milling operations can combine with air to produce explosive mixtures.  Unfortunately, the science of dust explosions does not appear to be as complete as the science of gas explosions, for example.  Scientists have studied mixtures of hydrogen and oxygen for flammability, and can predict down to the third decimal place exactly what the mixture limits are for that combination of gases to explode.

Dust is different.  It comes in all sizes, ranging from particles almost too large to stay in the air very long, down to submicron bunches of molecules that take expensive equipment even to detect.  And as one expert interviewed by the Journal Times of Racine, Wisconsin pointed out, dust lying on the floor is relatively harmless.  But suppose somebody walks by and kicks up a dust cloud on a dry day, and that person's body accumulates electric charge and then touches a grounded metal object.  You now have the fatal combination of enough dust in the air and an ignition source to cause an explosion, whereas five seconds earlier no explosion could have occurred. 

This unpredictability may be one reason that dust explosions are relatively common compared to other types of major industrial accidents.  An insurance executive interviewed by the Journal Times said that over 500 dust explosions at grain processing facilities have occurred since 1982, killing more than 180 people in all.  If all 180 had been killed at once, dust explosions would figure more prominently on the nation's radar screen of safety concerns, but a typical grain mill is manned by only a few dozen people at most, so the fatality numbers are rarely high enough to garner more than the occasional national headline.
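A back-of-the-envelope calculation with the Journal Times figures shows why individual incidents rarely make national news:

```python
# Rough averages from the figures quoted above (500+ explosions,
# 180+ deaths since 1982); actual totals are somewhat higher.
explosions = 500
deaths = 180
years = 2017 - 1982  # span covered by the statistic

print(round(deaths / explosions, 2))  # 0.36 deaths per incident, on average
print(round(deaths / years, 1))       # 5.1 deaths per year, on average
```

A hazard that kills fewer than one person per incident, spread over hundreds of small facilities, seldom commands sustained attention, even when the cumulative toll is large.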

Another problem with preventing such explosions is that they typically do so much damage that the originating cause is often never determined.  When I was about ten years old, I witnessed a demonstration of a dust explosion performed by a fireman who traveled to elementary schools to give fire-safety lessons.  He had a big box inside of which was a small container of ordinary baking flour, and a rubber hose was rigged to the box along with some kind of ignition source—maybe a candle he lit inside the box.  Anyway, when he closed the hinged lid and blew into the tube, the flour hit the candle, flung the lid open, and produced a huge yellow whoosh of flame.  It impressed the heck out of me, but in retrospect it must have been mostly for show, because dust explosions are not a big domestic fire hazard—almost all of them occur in industrial plants. 

Especially when it occurs in a confined area, a dust explosion's pressure wave wrecks everything in sight.  While investigators can sometimes gather some general idea of the sequence of events, it is often impossible to locate even the specific site where the blast originated, let alone to reproduce the conditions that led to the explosion. 

Anything that can reach a dust-air mixture's ignition temperature can cause such an explosion.  Static electricity is a favorite scapegoat, and in facilities where humidity can be raised high enough to eliminate this hazard, it is easy to control.  But for grain processing, too much humidity can be both expensive to produce and detrimental to the product, especially in drying operations, so other means of prevention are employed:  ventilation to keep concentrations of dust below dangerous levels, regular cleaning to prevent dust piles from accumulating and getting kicked up to cause hazardous clouds, and explosion-proof electrical fittings and equipment, which can be very expensive but are needed in certain locations where dust cannot be avoided. 

Records indicate that the Didion mill was cited for a potentially hazardous dust condition back in 2011, but the owners paid a fine and apparently corrected the problem.  The ongoing investigation may or may not determine what caused the explosion.  But since the Energy Policy Act of 2005 and the Energy Independence and Security Act of 2007 mandated the blending of renewable fuels into U. S. automotive fuels, the U. S. has outstripped Brazil as the world's largest producer of ethanol, and most ethanol made in the U. S. comes from corn.  Hence, a large number of corn mills in the U. S. turn corn into ethanol, as the Didion plant did until last Wednesday, and the risk continues that dust explosions in these mills will injure or kill workers.

To many residents of rural areas, making ethanol from corn is one of the few bright spots in an otherwise dismal future faced by farm communities, decimated by children moving to cities, drug problems, and other woes.  It's too bad that the process has the inherent and difficult-to-prevent hazard of dust explosions, but hopefully the industry will learn some lessons from this latest catastrophe and improve its track record by good safety and housekeeping practices. 

The safety culture of leading oil refiners is a gold standard that grain mills could aspire to.  Oil refinery operators have learned how to handle millions of gallons of hot, pressurized, flammable products with an exemplary safety record overall, but only at the price of what seems to outsiders to be ridiculously involved and rigorous safety practices.  It may be time for owners of grain mills to look to their more experienced counterparts in the petrochemical and refining industries for guidance on how to keep dust from killing their workers and wrecking their plants.

Sources:  I referred to an Associated Press report on the Cambria explosion (not to be confused with the Cambrian explosion of new species over 500 million years ago!) on the ABC News website, a Racine Journal Times report, and Wikipedia articles on dust explosions and ethanol fuel in the U. S.

Monday, May 29, 2017

Reflections on Technology in France

My wife and I recently had the privilege of spending a week in southern France, at a conference in the small town of Aurillac (pronounced "AW-ree-ack").  I say small—27,000 people is about the size of Cleburne, Texas, which is a town I'm somewhat familiar with.  Based on my admittedly very limited and undoubtedly biased observations of what we saw and experienced, I'd like to make some comparisons between the different ways that the French citizens we encountered and Texans have dealt with technology, broadly defined.

First, transportation.  In Texas, if you don't have access to a car, you are automatically placed in a category that is inhabited largely by very poor people, eccentrics, and the homeless.  There are some folks who don't drive and who also don't meet any of those descriptions, but the great majority of able-bodied adults in Texas drive nearly everywhere.

Not in Aurillac.  We flew into town and landed at the single airport, which is basically one building in a field, by a parking lot.  And we took a taxi into town, about a seven-minute ride.  But from that point onward for the next week we didn't set foot in any motorized transport, and frankly didn't miss it a bit.  At the end of our visit, we walked twenty minutes or so to the train station and rode the train to Paris.

A lot of people appear either to walk to work in Aurillac or ride bicycles.  There are cars, but the main parking in the center of town is an underground garage.  This allows the Aurillacans (Aurillacois?—I don't know enough French to say) to avoid cluttering up their thousand-year-old town with ugly parking garages, or knocking down a 15th-century church to pave the land over for Renaults or Audis.  I can't imagine how much it cost to excavate the garage without disturbing the quaint 19th-century plaza park above it—many millions, I suppose.  But it was done, somehow, and consequently, much of downtown Aurillac would still be familiar to a peasant who knew the town as it was in 1600 A. D. 

In Cleburne, they have old stuff too—the county courthouse, for instance, which dates all the way back to 1913 A. D., and was recently restored.  But for parking, you just have to find a lot somewhere or park on the street.  There is no commercial airport, and although there are train yards and an Amtrak station, getting anywhere on the train is really complicated and inconvenient.  Nearly everybody who wants to go to Cleburne drives there along U. S. 67 or takes the new tollway that connects it to nearby downtown Fort Worth; a highway loop around the city serves those who are just passing through.

Next, the pattern of daily life.  When my sister lived in Cleburne, she would get up early, get in her car at maybe 7:30, and drive 45 minutes or so to her job in Fort Worth, where she runs a nursing department that uses very high-tech stuff, computers, and so on.  Then she'd drive back in the evening around 5 or 6 and have supper, and while she lived in Cleburne for close to a decade, I'm not aware that she developed any serious connections with other people in the town.

In doing this routine, my sister follows a pattern laid down by the Industrial Revolution, which requires the close scheduling of large numbers of people doing coordinated things in institutions such as factories, schools, and hospitals. 

Things are different in Aurillac.  Yes, the little tobacco and newspaper shop across the street from our hotel opened up every day about 6 AM.  But for the next three hours, there wasn't much else going on in the way of business.  Around 9 or 10, most places were open, but at noon, a lot of them closed for two hours—lunch, you see.  Then at 2, they would open up again, sometimes, and then again maybe not.  The Museum of Volcanoes we visited had such hours, and stayed open till 7 PM. 

Then, and only then, the typical Aurillac resident starts thinking about supper.  The restaurants we went to typically didn't even open in the evening until 7.  In the afternoons and evenings especially, the outdoor cafes would fill with people of all ages, sitting around talking about—well, I mostly couldn't tell what they were talking about, because I don't understand French.  But they seemed to be content to jaw for hours on end, either in person or on their mobile phones.  We did see a lot of people using mobile phones there, and I suppose that's one way in which the French and the Americans are pretty much alike:  the near-universality of the smart phone.  But the French folks we saw haven't allowed it to put an end to the practice of polite conversation at the supper table, which smartphones have nearly succeeded in doing in many U. S. households and public places.

There were bars in Aurillac, but they weren't crammed with people seemingly desperate to unwind from a tense day.  People there seemed content to sit at a table with a glass of beer and just look around, or think, and not have a phone or a paper in their hand.  You don't see that much in Cleburne.

As I say, this is a completely unscientific sample of life in France.  I'm aware of many of the negatives cited by some Americans about life there:  the excessive government regulation and intervention in the economy, the high taxes, the paucity of religious influence.  But somehow, the citizens of Aurillac have made it to 2017 without letting modern technological society homogenize them into looking like any mid-size town in the U. S. with multinational-corporation logos plastered everywhere.  They do have a McDonald's in Aurillac, but they also have butcher shops that have been in the same place, with the same tile on the floor, since 1925.  And that isn't unusual there. 

I liked Aurillac a lot, and our week there was a sample of life in a slower, more meditative lane that I hope to keep with me, at least in thought, now that I'm back in Texas.  It wasn't better or worse than Cleburne, it was just different.  But different in some ways that were very appealing.

Monday, May 22, 2017

Your Money Or Your Data: The WannaCry Ransomware Attack

On May 12, thousands of users of Windows computers around the globe suddenly saw a red screen with a big padlock image and a headline that read, "Ooops, your files have been encrypted!"  It turned out to be a ransom note generated by an Internet worm called WannaCry.  The ransom demanded was comparatively small—about US $300—but the attack itself was not.  The most critical damage was done in Great Britain, where many National Health Service computers locked up, causing delays in surgery and preventing access to files containing critical patient data.  Fortunately, someone found a kill switch for the worm and its spread was halted, but over 200,000 computers were affected in over 100 countries, according to Wikipedia.

No one knows for sure who implemented this attack, although we do know the source of the software that was used:  the U. S. National Security Agency, which developed something called the EternalBlue exploit to spy on computers.  Somehow it got into the wild and was weaponized by a group that may be in North Korea, but no one is sure. 

At this writing, the attack is mostly over except for the cleanup, which is costing millions as backup files are installed or re-created from scratch, if possible.  Experts recommended not paying the ransom, and it's estimated that the perpetrators didn't make much money on the deal, which was payable only in bitcoin, the cryptocurrency that is difficult to trace. 

Writing in the New York Times, editorialist Zeynep Tufekci of the School of Information and Library Science at the University of North Carolina put the blame for the attack on software companies.  She claims that the way upgrades and security patches are done is itself exploitative and does a disservice to customers, who may have good reasons not to upgrade a system.  This was painfully obvious in Great Britain, where their National Health Service was running lots of old Windows XP systems, although the vast majority of the computers affected were running the more recent Windows 7.  Her point was that life-critical systems such as MRI machines and surgery-related instruments are sold as a package, and incautious upgrading can upset the delicate balance that is struck when a Windows system is embedded into a larger piece of technology.  She suggested that companies like Microsoft take some of the $100 billion in cash they are sitting on and spend some of it on free upgrades to customers who would normally have to pay for the privilege.

There is plenty of blame to go around in this situation:  the NSA, the NHS, Microsoft, and ordinary citizens who were too lazy to install patches that they had even paid for.  But such a large-scale failure of what has become by now an essential part of modern technological society raises questions that we have been able to ignore, for the most part, up to now.

When I described a much smaller-scale ransomware attack in this space back in March, I likened it to a foreign military invasion.  That analogy doesn't seem to be too popular right now, but I still think it's valid.  What keeps us from viewing the two cases similarly has to do with the way we've been trained to look at software, and the way software companies have managed to use their substantial monopolistic powers to set up conditions in their favor.

Historically, such monopolistic abuse has come to an end only through vigorous government action to call the monopoly to account.  The U. S. National Highway Traffic Safety Administration, for example, can conduct investigations and levy penalties on automakers who violate the rules or behave negligently.  So far, software firms have almost completely avoided any form of government regulation, and the free-marketers among us have pointed to them as an example of how non-intervention by government can benefit an industry. 

Well, yes and no.  People have made a lot of money in the software and related industries—a few people, anyway, because the field is notorious for the huge returns it can give a few dozen employees and entrepreneurs who happen to get a good idea first, implement it, and dominate a new field (think Facebook).  But the same companies charge customers over and over again for ever-required upgrades and security patches, which are often bundled together so that you can't keep the software you like without having it get hacked sooner or later.  At some point, the difference between a software company and an old-fashioned protection racket, in which a guy flipping a blackjack in his hand comes into your candy store, looks around, and says, "Nice place you got here—a shame if anything should happen to it," becomes hard to distinguish in some ways.

Software performs a valuable service to billions of people, and I'm not calling for a massive takeover of software firms by the government.  And users of software have some responsibility for doing maintenance, assuming that maintenance is of reasonable cost and isn't impossibly hard to do, or leads to situations that make the software less useful.  But when a major disaster like WannaCry can cause such global havoc, it's time to rethink the fundamentals of how software is designed, sold (technically, it's leased, not sold), and maintained.  And like it or not, the U. S. market has a huge influence on these things.

Even the threat of regulation can have a most salutary effect on monopolistic firms, which, to avoid government oversight, often enter voluntarily into industry-wide agreements to implement reforms rather than let the government take over the job.  The current chaos in Washington is probably not a good environment in which to undertake this task, but there needs to be a coordinated, technically savvy, but also ethically deep conversation among the principals—software firms, major customers, and government regulators—to find a different way of doing security and upgrades, which are inextricably tied together. 

I don't know what the answer is, but companies like Microsoft may have to accept some form of restraint on their activities in exchange for remaining free of the heavy hand of government regulation.  The alternative is that we continue muddling along as we have been while the growth of the Internet of Things (IoT) spreads highly vulnerable gizmos all across the globe, setting us up for a tragedy that will make WannaCry look like a minor hiccup.  And nobody wants that to happen.

Sources:  Zeynep Tufekci's op-ed piece "The World Is Getting Hacked.  Why Don't We Do More to Stop It?" appeared on the website of the New York Times on May 13, 2017.  I also referred to the Wikipedia article "WannaCry ransomware attack."  My blog "Ransomware Comes to the Heartland" appeared on Mar. 27, 2017.

Monday, May 15, 2017

India's Energy Future and Climate Change

In an article that appeared in May's Scientific American, Council on Foreign Relations Fellow Varun Sivaram shows that India's path of energy development could have a large impact on future greenhouse-gas emissions.  Unlike China, which currently pumps out about twice as much carbon into the air as the U. S., India's infrastructure is largely yet to be built.  And in that fact lies both a challenge and an opportunity.

It will help to get things in proportion if we compare greenhouse emissions and populations for China, the U. S., and India.  According to the U. S. Environmental Protection Agency, in 2014 the world leader in global carbon dioxide emissions was China, contributing about 30% of the total.  Next in line was the U. S., with 15%, and third was India, with 7%.  The much-ballyhooed Paris accords of 2015 committed India to a limit so generous as to be almost meaningless.  As Sivaram says, "its overall commitment to curb emissions was underwhelming.  If the government just sat on its hands, emissions would rise rapidly yet stay within the sky-high limits the country set for itself in Paris."

By many measures, most citizens of India are still living in the same energy environment their ancestors occupied:  using dried cow dung, straw, charcoal, and firewood for domestic heating and cooking.  The lucky third or so who have access to more advanced fuel sources use either coal or oil.  The nation's electric grid is something of a joke by Western standards, reaching less than a fourth of the population.  And those who get electricity can't count on it:  outages (both planned and accidental) are common, and government-inspired policies to keep rates low have resulted in chronic underinvestment, further contributing to the grid's rickety state.

Unlike China, India has something approaching a democratic government, although with a heavy dose of socialist-style traditions left over from the Nehru years of the 1950s and 60s.  While the economy has improved greatly under more recent governments since the 1990s that have favored private enterprise and privatization of formerly government-owned enterprises, Sivaram points out that investment money is hard to come by.

Examining the two extremes of how things go from here, suppose that India follows the easier path trod already by China, exploiting readily accessible fossil fuels and building coal-fired power plants to supply its increasing population of about 1.4 billion, which is expected to outstrip China's population in a few years.  If that happens, the U. S. will no longer be the world's No. 2 carbon-dioxide emitter—India will be, and might even surpass China to become No. 1. 

Of course, this is a competition that no government wants to win.  But zooming down to the micro view of individual citizens, the meaning of drastic global-warming restrictions on future fossil-fuel use becomes more problematic.  Most Indian citizens do not drive cars, and the vast majority of motorized vehicles sold even today are motorbikes or three-wheel jitneys.  Mobility is something everyone wants, and as more Indians get better jobs and are able to save money to buy larger items, the market for automobiles could grow tremendously.  But that development would only exacerbate carbon-dioxide emissions.  The same people who want to drive would like to have plentiful, reliable electricity both for domestic uses and for things like agriculture and manufacturing.  But if power is generated with coal or oil, there goes more CO2.

In his article, Sivaram holds out an alternative energy future that could become reality, given enough willingness on the part of national and state governments and citizens generally.  Solar energy is abundant in the countryside, and the government is already deploying solar panels to power irrigation pumps, but on a small scale.  Given enough investment, the desperately needed expansion of the electric grid could include the latest smart-grid technologies that would enable it to take advantage of wind and solar power, which otherwise would not fit easily into an old-fashioned grid designed for 24-hour-a-day power sources.  And the nice thing is that little retrofitting will be required, because most of the needed grid does not yet exist today.

While coal and oil will be a large part of India's energy mix in the near future, another hope Sivaram has is that conservation measures will limit the increase in demand to less than it would be otherwise.  Rapid deployment of electric vehicles powered by renewable energy sources could help here, as well as an emphasis on energy-efficient appliances and buildings. 

The fly in this sweet-smelling ointment of the future, Sivaram admits, is the crying need for investment money.  And here is where things get murky.  In common with many other countries in Asia, India's regulatory environment is marred by complexity, delays, and corruption.  Even major infrastructure projects such as hydroelectric dams and grid improvements have been torpedoed by high interest rates, permit delays, and poor fiscal planning, resulting in abandoned projects and even bankruptcies.  These are not engineering problems.  These are social and government-policy problems, and it will take political courage and intelligence to make much progress in these areas.

With India halfway around the world, it's easy to ignore internal problems like these.  But in the academic semester just ending, I taught a graduate class for the first time in many years, and most of the students in it were from the Indian subcontinent.  Thirty years ago, most of them would have been from China, but there are now plenty of Chinese universities as good as or better than your average state school in the U. S., and so the new-graduate-student pool for middle-ranked U. S. universities has shifted south over the years.

If these students are like most foreign grad students, many of them will try to stay in the U. S.  But some will return to their native lands.  I hope that what they learn here about the social and political structure of the U. S. will help them realize that in many ways, India has a chance to avoid mistakes others have made before them.  Whatever your views on global warming, I think we can agree that it's a hard problem:  to allow millions of people in India to enjoy some of the benefits of advanced technology that we in the U. S. have enjoyed for three generations, while avoiding preventable harm to the planet we all live on.  I hope the citizens of India can take advantage of their opportunities to work out this problem in the best way possible.

Sources:  The article "The Global Warming Wild Card" by Varun Sivaram appeared on pp. 48-53 of the May 2017 issue of Scientific American.  I obtained 2014 data on carbon-dioxide emissions from the U. S. EPA website.  I also referred to the Wikipedia articles on the demographics of China and India and the history of the Indian republic. 

Monday, May 08, 2017

The False Promise of Digital Storage for Posterity

Now that almost every book, photograph, artwork, article, news item, story, drama, or film is published digitally, we are supposed to rejoice that the old-fashioned imperfect and corruptible analog forms of these media—paper that ages, film that deteriorates—have been superseded by the ubiquitous bit, which preserves data flawlessly—that is, until it doesn't.  A recent article in the engineering magazine IEEE Spectrum highlights the problems that Hollywood is having in simply keeping around usable digital copies of its old films.  And "old" in this sense can mean only three or four years ago. 

It's not like there isn't a standard way of preserving digital copies of motion pictures.  About twenty years ago, a consortium of companies got together and agreed on an open standard for magnetic-tape versions of movies and other large-volume digital material called "linear tape-open" or LTO.  If you've never heard of it, welcome to the club.  An LTO-7 cartridge is a plastic box about four inches (10 cm) on a side and a little less than an inch thick.  Inside is a reel of half-inch-wide (12 mm) tape about three thousand feet (960 m) long, and it can hold up to 6 terabytes (6 × 10^12 bytes) of uncompressed data.  Costing a little more than a hundred bucks, each cartridge is guaranteed to last at least 30 years—physically.

The trouble is, the same companies that came up with the LTO standard are part of the universal high-tech digital conspiracy to reinvent the world every two years.  Keeping something the same out of respect for the simple idea that permanence is a virtue is an entirely foreign concept to them.  Accordingly, over the last twenty years there have been seven generations of LTO tapes, and each new generation has been backward-compatible with only one or two of the generations before it. 

What this means to movie production companies that simply want to preserve their works digitally is this:  every three or four years at the outside, they have to copy everything they've got onto the new generation of LTO tapes.  And these tapes don't run very fast—it's not like burning a new flash drive.  Transferring an entire archive can take months and cost millions of dollars, but the customers are at the mercy of the LTO standard that keeps changing. 
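A little arithmetic shows why the migrations drag on.  Here is a rough sketch in Python; the 300 MB/s figure is LTO-7's nominal uncompressed transfer rate, and the 2-petabyte archive size is a hypothetical illustration of a large studio library, not a figure from the article.

```python
# Back-of-envelope estimate of how long migrating a film archive to a
# new LTO generation might take.  Assumes data streams continuously at
# the drive's nominal uncompressed rate, with no handling or verification
# overhead, so real migrations run even longer.

TAPE_CAPACITY_TB = 6     # uncompressed capacity of one LTO-7 cartridge
DRIVE_RATE_MB_S = 300    # nominal uncompressed read/write rate

def migration_days(archive_tb: float, drives: int = 1) -> float:
    """Days to copy archive_tb terabytes using the given number of drives."""
    seconds = archive_tb * 1e6 / DRIVE_RATE_MB_S  # 1 TB = 1e6 MB
    return seconds / drives / 86_400              # 86,400 s per day

# Streaming one full cartridge end to end takes several hours:
hours_per_tape = TAPE_CAPACITY_TB * 1e6 / DRIVE_RATE_MB_S / 3600

print(f"{hours_per_tape:.1f} hours per cartridge")
print(f"{migration_days(2000):.0f} days for a 2 PB archive on one drive")
```

Even under these best-case assumptions, a single drive needs well over two months of continuous streaming for a 2-petabyte archive, so "months" of copying every few years is easy to believe.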

According to the Spectrum article, Warner Brothers Studios has turned over the job of preserving its films to specialist film archivists at the University of Southern California, which already had a well-funded operation to preserve video interviews with Holocaust survivors.  But USC faces the same digital-obsolescence issues that the studios are dealing with, and one USC archivist calls LTO tapes "archive heroin"—it's a thrill compared to the old analog archive methods, but it gets to be an expensive habit after a while.

And that gets us to a more fundamental question:  given limited resources, what should each generation preserve, in terms of intellectual output, for the next one?  And how should preservation happen?

For most of recorded history, preservation of old documents was left mostly to chance.  Now and then a forward-looking monarch would establish a library, such as the famous one in Alexandria that was established by Ptolemy I Soter, the successor of Alexander the Great, about 300 B. C.  It held anywhere from 40,000 to 400,000 scrolls and lasted until Roman times, when it suffered the first of a series of fires that destroyed most of its contents. 

One can argue that the entire course of Western history would be different if all the works of the Greek philosopher Aristotle (384 B. C. - 322 B. C.) had been lost.  The way we came to possess what works we have of his is hair-raising.  After Aristotle died, Theophrastus, his successor at the school where Aristotle taught, the Lyceum, inherited from him a large set of what we would call today lecture notes.  After Theophrastus died, he left them to Neleus of Scepsis, who took them from Athens, where the Lyceum was, back home to Scepsis, and stuck them in his cellar.  Then he died.  Evidently Greek families held on to real estate back then, and it's a good thing too, because it wasn't till about 100 B. C., more than two centuries after Aristotle's passing, that Neleus's descendants had a garage sale or something, and a fellow named Apellicon of Teos found the manuscripts and bought them.  He took them back to Athens, where his library was confiscated by the conquering Romans in 86 B. C.  Finally, some Roman philosophers realized what they had in Aristotle's works and started making copies of them around 60 B. C.

I won't even go into how most of Aristotle's works were lost again to everyone except Arabic scholars up to about 1200 A. D., but we've had enough ancient history for one blog.  The point is that historic preservation was left largely to chance until people began to realize the value of the past to the present in an organized way. 

While the movie industry deserves credit for laying out lots of money to preserve chunks of our visual cultural history, one must admit that its interests are mostly financial.  Once the people who saw a movie in their twenties die out, the only folks interested in such films are the occasional oddball historian or fans of specialty outlets such as the Turner Classic Movies channel. 

The real problem with digital archives is not so much the fact that the technology advances so fast, although that could be alleviated.  It's the question that often gets no answer until it's too late:  what is worth preserving? 

If you're a well-heeled library like the one at Harvard University, the answer is simple:  everything you get your hands on.  But most places are not that well off, so it's a judgment call as to what to toss and what to keep using the always-limited resources at hand.

Despite the best intentions of well-funded film archivists, my suspicion is that a few centuries hence, we will find that many of the works of most importance to the future, whatever they are, were preserved not on purpose, but by hair-raising combinations of fortunate accidents like the ones that brought us the works of Aristotle.  And if I'm wrong, well, chances are this blog won't be one of those things that are preserved.  So nobody will know.

Sources:  The article "The Lost Picture Show:  Hollywood Archivists Can't Outpace Obsolescence" by Marty Perlmutter appeared in the May 2017 issue of IEEE Spectrum and on the magazine's website.  The story of how Aristotle's works came down to us is reported independently by at least two ancient sources, and so is probably pretty close to the truth, according to the Wikipedia article on Aristotle.  I also referred to Wikipedia articles on the Library of Alexandria and the Ptolemaic dynasty.