May 05, 2014

[London Review of Books]
WHAT KILLED THE NEANDERTHALS?
The Sixth Extinction: An Unnatural History, by Elizabeth Kolbert, 336 pp, US $28



In 1739, Captain Charles Le Moyne was marching four hundred French and Indian troops down the Ohio River when he came across a sulphurous marsh where, as Elizabeth Kolbert writes, ‘hundreds – perhaps thousands – of huge bones poked out of the muck, like spars of a ruined ship.’ The captain and his soldiers had no idea what sort of creatures the bones had supported, whether any of their living kin were nearby and, if so, what sort of threat they presented. The bones were similar to an elephant’s, but no one had seen anything like an elephant near the Ohio River, or indeed anywhere in the New World. Perhaps the animals had wandered off to the uncharted wilds out west? No one could say. The captain packed up a massive circular tusk, a three-foot-long femur and some ten-pound teeth, carried them around for several months as he went about the difficult task of eradicating the Chickasaw nation, and finally delivered the relics, after a stopover in New Orleans, to Paris, where they confounded naturalists for several decades.

A contemporary reader might guess, correctly, that the bones belonged to a species of animal that had long since ceased to exist – in fact, they came from Mammut americanum, the American mastodon – but at the time such an imaginative leap would have been very difficult, because it hadn’t yet occurred to anyone that an entire species could cease to exist. ‘Aristotle wrote a ten-book History of Animals without ever considering the possibility that animals actually had a history,’ Kolbert writes, and in Linnaeus’s Systema Naturae, published four years before Le Moyne’s discovery, ‘there is really only one kind of animal – those that exist.’ The French naturalist Georges-Louis Leclerc thought the bones might belong to a species that, uniquely in history and for reasons unknown, had disappeared from the Earth, but his conjecture was widely rejected. Thomas Jefferson put forward the consensus view in 1781, in his Notes on the State of Virginia: ‘Such is the economy of nature, that no instance can be produced of her having permitted any one race of her animals to become extinct; of her having formed any link in her great work so weak as to be broken.’

In 1796, Georges Cuvier presented a new theory: nature did permit links to be broken, sometimes a lot of them all at once. Cuvier, just 27, was teaching at the Paris Museum of Natural History, one of the few institutions to survive the Terror, and had spent many hours studying its collection of fossils and bones. He noticed that the teeth of Le Moyne’s incognitum had unusual little bumps on them, like nipples. He became convinced these were not elephant teeth. He called their owner mastodonte, ‘breast tooth’. Other remains were similarly unmatched to the contemporary world: the elephant-sized ground sloth, called megatherium, bones of which had been discovered near Buenos Aires and reassembled in Madrid (Cuvier worked from sketches); the meat-eating aquatic lizard (now called mosasaurus), whose massive fossilised jaw had been picked out of a quarry near Maastricht; the woolly mammoth, whose frozen remains were everywhere in Siberia. Such creatures must have populated a lost world. ‘But what was this primitive earth?’ Cuvier asked. ‘And what revolution was able to wipe it out?’

These were interesting questions, but Cuvier’s contemporaries were slow to consider answers. More and more they were coming to accept that occasionally species might disappear – Darwin would soon propose that ‘the appearance of new forms and the disappearance of old forms’ were procedurally ‘bound together’ by natural selection – but evolution was a gradual process. Mass extinction, ‘revolution’, was something else. The claim that nature could undergo a sudden radical shift seemed not just historically unfounded but scientifically (and perhaps politically) untenable. Charles Lyell countered Cuvier’s anarchic ‘catastrophism’ with stately ‘uniformitarianism’. All change, geological or biological, took place gradually, steadily. Any talk of catastrophe, Lyell admonished, was ‘unphilosophical’.

The evidence of catastrophe accrued nonetheless. Geologists have understood since the 17th century that sedimentary layers of rock and soil mark the passage of time, the youngest layers at the top, the oldest at the bottom, and that sometimes geological forces will push up a slice of the world that contains several aeons’ worth of fossil-rich strata, which we can read like the lines of a census report. The fossils in most layers did indeed demonstrate a uniform degree of biodiversity, but some indicated a massive decline. Where once there were many different forms of life, suddenly there were few.

Palaeontologists and geologists now generally agree that the Earth has endured five major extinctions, and more than a dozen lesser ones. The first took place 450 million years ago, during the late Ordovician period, and the most lethal 200 million years later, during the Permian-Triassic – ‘the great dying’, when nine out of ten marine species vanished. The most terrifying of the mass extinctions, though, was surely the fifth, the Cretaceous-Palaeogene incident, which began 65 million years ago when an asteroid the size of Manhattan smashed into the Yucatan Peninsula with the explosive impact of a hundred million hydrogen bombs, triggering what the palaeobiologist Peter Ward, writing last year in Nautilus, called ‘life’s worst day on Earth, when the world’s global forest burned to the ground, absolute darkness from dust clouds encircled the earth for six months, acid rain burned the shells off of calcareous plankton, and a tsunami picked up all of the dinosaurs on the vast, Cretaceous coastal plains, drowned them, and then hurled their carcasses against whatever high elevations finally subsided the monster waves.’

* * *

The lesson that mass extinction is normal is hard to accept. Scientists are beginning to recognise that we’re in the middle of another event, perhaps the sixth mass extinction, but that recognition too has been slow in coming. In 1963, Colin Bertram, a marine biologist and polar explorer, warned that human expansion could destroy ‘most of the remaining larger mammals of the world, very many of the birds, the larger reptiles, and so many more both great and small’, and in 1979 the biologist Norman Myers published a little-read book called The Sinking Ark, showing with statistics that Bertram had been correct. But it wasn’t until the 1990s that large numbers of biologists began to take such concerns seriously. In 1991, the palaeobiologist David Jablonski published a paper in Science that compared the present rate of loss to that of previous mass extinctions. Other papers followed and by 1998 a survey by the American Museum of Natural History found that seven out of ten biologists suspected another mass extinction was underway. In 2008, two such biologists, David Wake and Vance Vredenburg, asked in a widely discussed paper, ‘Are We in the Midst of the Sixth Mass Extinction?’ The answer arrived in 2011 from a large team of biologists and palaeontologists writing in Nature: we almost certainly are. If we continue at the current rate of destruction, about three-quarters of all living species will be lost within the next few centuries.

Theories about what caused the earlier extinctions have varied – droughts, methane eruptions, volcanic ash, the ongoing problem of asteroids, the orbit of an invisible sun, our motion through the spirals of the Milky Way – but there’s little doubt about the culprit behind the sixth extinction. Wake and Vredenburg list the proximate causes: ‘human population growth, habitat conversion, global warming and its consequences, impacts of exotic species, new pathogens etc’. What most of these causes have in common isn’t just that they are the result of human activity, but that they have been going on for a very long time. In an elegant tracing, Kolbert demonstrates how precisely the human wake matches the millennial waves of extinction:
The first pulse, about forty thousand years ago, took out Australia’s giants. A second pulse hit North America and South America some twenty-five thousand years later. Madagascar’s giant lemurs, pygmy hippos and elephant birds survived all the way into the Middle Ages. New Zealand’s moas made it as far as the Renaissance. It’s hard to see how such a sequence could be squared with a single climate change event. The sequence of the pulses and the sequence of human settlement, meanwhile, line up almost exactly.

Despite the clear trend, it’s hard to say with any precision how many species are dying, or have died, or will die. One reason, as Darwin said, is that species come and go. The ordinary ‘background’ rate of extinction for mammals is about one every seven hundred years, and for amphibians a little higher. A second reason, not unrelated to the first, is that biologists have no real baseline for the current number of species. They can make good guesses, but the world is a big place, and biologists are getting better and better at finding new species. They may discover dozens in a day, but half of them will be in the process of dying out: it’s as if the biologists were going from room to room flicking on the lights in a house from which much of the life was rapidly scurrying.
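That ‘one every seven hundred years’ figure is straightforward arithmetic. A back-of-envelope sketch in Python; both inputs are rough outside estimates, not figures from the essay:

```python
# Back-of-envelope check on the mammalian 'background' extinction rate.
# Both inputs are assumptions for illustration, not figures from the essay:
# roughly 5,500 living mammal species, and a background rate of about
# 0.25 extinctions per million species-years.
mammal_species = 5_500
rate_per_species_year = 0.25 / 1_000_000  # extinctions per species per year

extinctions_per_year = mammal_species * rate_per_species_year
years_per_extinction = 1 / extinctions_per_year

print(f"expect one mammal extinction every {years_per_extinction:,.0f} years")
# prints: expect one mammal extinction every 727 years
```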

They have seen enough, though, to draw a bleak picture. The historical record shows that the European lion, the Labrador duck and the passenger pigeons that once darkened the American prairie have all gone the way of the dodo (and Pallas’s cormorant and the white-winged sandpiper and the Carolina parakeet). We know too that in recent years amphibians seem to have become especially endangered. (A 2007 study suggested that the current amphibian extinction rate was 45,000 times greater than the expected background rate.) And we also know, Kolbert writes, that ‘one-third of all reef-building corals, a third of all freshwater molluscs, a third of sharks and rays, a quarter of all mammals, a fifth of all reptiles and a sixth of all birds are headed towards oblivion.’ E.O. Wilson calculated that the current rate of extinction for all animals was ten thousand times greater than the background rate, a loss of biodiversity that is helping to create what the nature writer David Quammen memorably described as a ‘planet of weeds’, a simple world where ‘weedy’ animals – pigeons, rats, squirrels – thrive and little else remains.

* * *

Most of The Sixth Extinction is about dead or dying animals: the great auk, little brown bats, Neanderthals, sea snails, the Sumatran rhino. Such a book should be depressing, but Kolbert’s isn’t, largely because our attention is drawn not just to the work of destruction but also to the work of discovery. The asteroid impact that wiped out the dinosaurs, for instance: how did we come to know about such an unlikely event? In the 1970s, Walter Alvarez, a geologist, was studying the Gola del Bottaccione, a gorge near Gubbio, in Umbria, where tectonic activity had lifted and tilted the ancient Italian limestone 45 degrees, and visitors could hike past a hundred million years of strata in just a few hundred yards. About halfway up, which is to say about 65 million years ago, was a puzzling half-inch clay stratum – the exposed Cretaceous-Palaeogene (K-Pg) boundary. In the limestone below was evidence of bountiful Cretaceous life; in the limestone above, quite a lot less Palaeogene life. A good uniformitarian would argue that the clay marked the passage of a very long period of time, enough for a stately adjustment of the census. Alvarez was a good uniformitarian but he was also curious: how long?

He took some samples home to California and mentioned the mystery to his father. Luis Alvarez, a physicist at the Lawrence Berkeley Laboratory, had won the Nobel Prize in 1968 for discovering (in his son’s apposite description) ‘a whole zoo of subatomic particles’. He liked interesting challenges. He had just probed one of the pyramids at Giza with cosmic rays in the hope of finding secret chambers. (There were none.) Why not determine how much time was compressed into the clay layer by measuring the meteorite dust it contained? Such dust, which can be identified by its high concentration of iridium, settles on the Earth at a steady rate. A layer with more iridium would be a layer that had accumulated over a longer period. Detecting such infinitesimal traces wouldn’t be easy, but Luis had a former student who could do it. They sent the samples out for testing and the results were startling. The clay contained much more iridium than anyone expected: much more than would ordinarily be found anywhere. Now father and son were really interested. They studied other samples from the K-Pg boundary, from Denmark and New Zealand. The results were the same.

A hypothesis occurred to them: asteroid impact. They wrote it up for Science, which published ‘Extraterrestrial Cause for the Cretaceous-Tertiary Extinction’ in 1980. The world was interested, but unconvinced. More data were needed. The Alvarezes realised that an impact would leave a kind of fingerprint, in the form of ‘shocked quartz’, a pressure deformation that geologists had first noticed around the sites of underground nuclear tests. They looked for that fingerprint in the records of thousands of core samples from around the world, and were able to zero in on a possible centre of impact on the Yucatan Peninsula, where geologists had previously discovered, then forgotten, a hundred-mile-wide crater hidden under a half-mile of sediment: Chicxulub. This time scientists were more persuaded. The last to get on board was the press. ‘Astronomers should leave to astrologers the task of seeking the cause of earthly events in the stars,’ the editors of the New York Times wrote. ‘Complex events seldom have simple explanations.’ Walter Alvarez wrote back to the editors and told them their claim was contradicted by the entire history of physics.
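The Alvarezes’ dating logic reduces to a few lines of arithmetic. A rough sketch in Python, with invented, order-of-magnitude numbers standing in for the delicate neutron-activation measurements:

```python
# The Alvarezes' reasoning as arithmetic. Every number is an invented,
# order-of-magnitude stand-in for the delicate real measurements.
dust_flux = 0.02          # assumed steady iridium fallout, ng per cm^2 per century
plausible_years = 5_000   # time half an inch of clay might reasonably represent

expected_ir = dust_flux * (plausible_years / 100)  # ng/cm^2 if fallout was steady
measured_ir = 30 * expected_ir                     # the clay held far more

implied_years = (measured_ir / dust_flux) * 100
print(f"steady fallout would take {implied_years:,.0f} years to supply that iridium")
# prints 150,000 years: absurd for half an inch of clay, which is the point.
# The iridium cannot have drifted down slowly; it must have arrived at once.
```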

The evidence of the sixth extinction has been more direct. In the 1960s, David Wake studied the toads that densely populated the Sierra Nevada in California. ‘You’d be walking through meadows, and you’d inadvertently step on them,’ he told Kolbert. In the 1980s, his students in the field began to report that toads were nowhere to be found. Wake assumed they were looking in the wrong places. He went out to see for himself and ‘found like two toads’. Other herpetologists were reporting similar amphibian crashes: the golden toad of Costa Rica, the southern day frog of Australia, even the blue poison-dart frogs raised in captivity at the National Zoo in Washington DC. What was happening? The culprit, veterinary pathologists discovered, was Batrachochytrium dendrobatidis. The fungus, which makes it difficult for amphibians to soak up the electrolytes they need to prevent their hearts from stopping, was spreading by way of the ships that connect all the watery parts of the world. On any given day, the ballast water of the global fleet may contain as many as ten thousand different species, any one of which might, once blasted from the bilge, make war on some new world.

The fifth extinction was caused by an asteroid, the sixth by man. The comparison is unflattering. One doesn’t wish to be revealed as unthinkingly, irredeemably murderous. Balzac suggested that Cuvier’s discoveries made him the greatest poet of the century, because he had ‘reconstructed worlds from a whitened bone; rebuilt, like Cadmus, cities from a tooth’. But Cuvier’s greater achievement, perhaps, was simply to recognise so subtle a disruption in the pattern of existence. We have long been aware of our own mortality, and now we are waking to the existence of another, longer chain of life. It’s an important recognition. Palaeontologists have found Neanderthal bones everywhere from Israel to Wales, and agree that the species died out suddenly, about thirty thousand years ago, which is suspiciously close to the time that Homo sapiens began its expansion from Africa. One theory is that clever man simply murdered his stronger cousin. But there are other theories. Maybe we simply outhunted our cousins, or carried a disease that was novel to them. Or maybe our contribution to their demise was even more indirect; animals with a long reproductive cycle are vulnerable to even the slightest of disruptions. John Alroy, an American palaeobiologist, has run computer simulations that suggest it would take just a tiny bit of interference with the Neanderthal birth rate, over the course of a few thousand years, to drive the species to extinction. Alroy called this a ‘geologically instantaneous ecological catastrophe too gradual to be perceived by the people who unleashed it’. Such imperception is no longer possible. ■
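Alroy’s models were elaborate, but the heart of the argument fits in a toy simulation. A minimal sketch in Python, every parameter invented for illustration (nothing here comes from Alroy’s actual work):

```python
# A toy version of Alroy's point: depress a stable population's net growth
# by a fraction of a per cent and extinction follows within a few millennia.
# Every parameter is invented for illustration.
population = 70_000.0   # assumed Neanderthal population at first contact
annual_decline = 0.002  # births fall 0.2 per cent short of deaths each year

years = 0
while population >= 1:
    population *= 1 - annual_decline
    years += 1

print(f"extinct after roughly {years:,} years ({years // 25:,} generations)")
# prints a figure in the mid five thousands: a geological instant,
# yet far too slow for anyone living through it to notice.
```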

February 10, 2013

We should track attitudes about human rights at least as well as we track attitudes about presidents

The fact that torture appears to have grown more popular in recent years is disturbing, but it's also worth knowing. Indeed, we should be doing a lot more to understand American attitudes about human rights. Unfortunately, we’re not asking the same questions often enough to draw any meaningful conclusions about trends. (Pew's polling is so infrequent that any given trend could be an outlier, and we can’t check Pew against other polls because other pollsters are phrasing the questions differently.) So here's what should happen: Pew or some other non-profit group should launch a wide-ranging poll of attitudes about human rights, including torture, and repeat it monthly until the end of time. We do it for presidential approval ratings. Opinions about human rights are at least as important as opinions about presidents!

November 15, 2012

[London Review of Books]
BURNING UP THE WORLD
Private Empire: ExxonMobil and American Power, by Steve Coll, Penguin, 704 pp, US $36


Titusville, Pennsylvania, 1865

In November 1902, Ida Tarbell published ‘The Birth of an Industry’, the first of 19 reports for McClure’s Magazine about the organisation that had come to control 90 per cent of the business – still new at the time – of producing oil. Collected two years later in her History of the Standard Oil Company, the series did little to celebrate the company or its founder, John D. Rockefeller. ‘It is doubtful if there has ever been a time since 1872 when he has run a race with a competitor and started fair,’ Tarbell wrote. But she didn’t call for an end to the corporate form Rockefeller had done so much to invent. Instead, she saw in his creation an ideal case study. ‘The perfection of the organisation of the Standard,’ she wrote, ‘the ability and daring with which it has carried out its projects, make it the pre-eminent trust of the world, the one whose story is best fitted to illuminate the subject of combinations of capital.’ Tarbell grew up in Pennsylvania oil country, and admired the ingenuity and ambition of the independent oilmen (her father among them) who had ‘peopled a waste place of the earth’ and ‘added millions upon millions of dollars to the wealth of the United States’. She admired Rockefeller’s genius and discipline as well, but for Tarbell, and eventually for the US government, Standard’s long record of collusion, espionage and predatory pricing was too much. ‘I was willing that they should combine and grow as big and rich as they could,’ Tarbell later wrote. ‘But they had never played fair, and that ruined their greatness for me.’ In 1911, the Supreme Court, influenced in part by Tarbell’s disappointed muckraking, split Rockefeller’s company into 34 ‘baby Standards’. In 1972, the Standard Oil Company of New Jersey, largest of the babies, changed its name to Exxon Corporation. And in 1999 Exxon recombined with the Standard Oil Company of New York, which had by then changed its name to Mobil. The new company, with eighty thousand employees in nearly two hundred countries, remains our ‘pre-eminent trust’, but – as Steve Coll argues in his fine bookend to Tarbell’s masterpiece – it has also become something more.

Coll picks up the story in 1989 with the wreck of the Exxon Valdez, which dumped 240,000 barrels of crude oil into Prince William Sound. He goes on to recount, among other sensational episodes, the lethally bungled kidnapping in 1992 of an Exxon division president from the driveway of his New Jersey mansion; an abortive rebel siege of an ExxonMobil outpost in Aceh in 2001; the failed coup (funded in part by Mark Thatcher) against the ExxonMobil-backed president of Equatorial Guinea in 2004; and the unsteady rise in 2006 of Nigerian oil pirates, whose ‘picaresque criminality – their head scarves, bandoliers and speedboats; their bank robbery techniques, which included using massive charges of dynamite to blast away reinforced steel doors – seemed increasingly inspired by Hollywood’. Like Tarbell, Coll has constructed a narrative around the oilmen’s extraordinary efforts to stay a step ahead of anyone who might get between them and some new patch of crude. Where kidnappers, rebels, conspirators and (for the most part) pirates failed, ExxonMobil succeeded. But Private Empire is no boy’s own adventure for middle managers.

The pivotal event in the history of ExxonMobil, as Coll sees it, wasn’t the wreck of the Exxon Valdez, important though that was, but the fall of the Berlin Wall. ‘The Cold War’s end,’ he writes, ‘signalled a coming era when non-governmental actors – corporations, philanthropies, terrorist cells and media networks – all gained relative power.’ The title of Daniel Yergin’s history of the oil industry, The Prize (1991), came from a similar argument made by Winston Churchill in 1911, when he was First Lord of the Admiralty. The best way to prepare for war with Germany, Churchill believed, would be to upgrade the Royal Navy so that it used oil as fuel rather than coal. It would be risky, in large part because ‘the oil supplies of the world were in the hands of vast oil trusts under foreign control.’ But if ‘we overcame the difficulties and surmounted the risks, we should be able to raise the whole power and efficiency of the navy to a definitely higher level; better ships, better crews, higher economies, more intense forms of war power – in a word, mastery itself was the prize of the venture’. As Yergin noted, winning such a prize ‘inevitably meant a collision between the objectives of oil companies and the interests of nation-states.’ This clash is the real subject of Coll’s book. A single nation, the United States, once had the power to break apart the mighty Standard Oil Company. But in the post-Soviet era, ExxonMobil prevailed.

* * *

The oil age began with two mid-19th-century inventions. In 1849, a Canadian physician developed a method for refining petroleum into a clear liquid, ‘kerosene’, that cost less than whale oil and burned more brightly in lamps. And in 1859, Edwin Drake, realising that subterranean pools of oil could be tapped like water in a well, drilled a pipe seventy feet down through the bedrock beneath a creek near Titusville, Pennsylvania, and was soon counting profits at 25 barrels a day. Within months, an army of entrepreneurial chemists, coopers, drillers, engineers, geologists, pipefitters, surveyors and teamsters had transformed the rugged forestland of Western Pennsylvania into a pipe-infested oil works, the horizon crowded with derricks and rivers clogged with barrel-packed barges. In 1859, they produced two thousand barrels of oil. By 1879, it was twenty million.

Producing oil is far more difficult today than it was when Rockefeller made his fortune. For every barrel of oil it sells, ExxonMobil has to discover another, otherwise its total reserve would diminish and with it the overall share price. By the time Lee Raymond became CEO in 1993, the company had to replace more than a billion barrels a year just to stand still. (When an oil industry analyst asked him what disturbed his sleep, Raymond answered: ‘Reserve replacement.’) The era of ‘easy oil’, when domestic deposits of light sweet crude all but leapt to the surface, was long past. The material problem – of discovering and extracting a resource that is hidden under miles of rock or ocean or both, often in a form that is not amenable to easy pumping or shipping – was challenging enough. But that challenge had in recent years been exacerbated by ‘resource nationalism’: oil-rich nations were creating their own oil companies. And as Exxon struggled to sign and maintain lease agreements that could last for up to forty years with the variously dictatorial or failing governments of nations that happened to find themselves in control of newly discovered oil deposits, it was also called on to have opinions about local politics. The oil it needed, Coll writes, ‘was subject to capture or political theft by coup makers or guerrilla movements, and so the corporation became involved in small wars and kidnapping rackets that many other international companies could gratefully avoid’. It also inserted clauses into its contracts with national oil companies and foreign governments guaranteeing its rights to arbitration at, say, the World Bank if the host country tried to alter the terms of their agreement.
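The arithmetic behind Raymond’s insomnia is blunt. A minimal sketch in Python, with invented figures; ExxonMobil’s actual reserves and production are larger and messier:

```python
# The arithmetic of 'reserve replacement': pump more than you find and the
# company runs dry on a schedule. All figures are invented for illustration.
reserves = 22e9      # barrels currently booked (assumed)
production = 1.5e9   # barrels sold per year (assumed)
found = 1.2e9        # barrels of new reserves added per year (assumed)

replacement_ratio = found / production        # below 1.0 means shrinking
years_until_empty = reserves / (production - found)

print(f"replacement ratio {replacement_ratio:.0%}; "
      f"reserves exhausted in about {years_until_empty:.0f} years")
# prints: replacement ratio 80%; reserves exhausted in about 73 years
```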

As the scope of the business grew, so did the scope for disaster: bigger ships to sink, longer pipelines to leak, more complex refineries to explode. Spills meant losing oil. They cost a lot to clean up. They drew notice from regulators. Lawsuits could be extremely expensive. And there was also the price of bad publicity. The wreck of the Exxon Valdez made Exxon ‘the most hated oil company in America’. Being hated might not make it harder to sell oil, but it did make it harder to recruit the very best engineers, which amounted to the same thing. ExxonMobil met the challenge with rigid purposefulness. The engineer charged with overseeing worldwide safety argued, in Coll’s paraphrase, ‘that a fanatical devotion to safety in complex operational units such as refineries could lead to greater profits because the discipline required to achieve exceptional safety goals would also lead to greater discipline in cost controls and operations’. He may have been right. ExxonMobil still causes serious disasters – in 2006, for instance, an underground storage tank at one of its service stations in Jacksonville, Maryland leaked 24,000 gallons of gasoline into the local groundwater supply – but it keeps track of every mishap, all the way down to (literally) bee stings and paper cuts. The corporation requires employees to back into parking spots, so that in an emergency they can speed away more quickly, and rewards those who have low incident rates with customised safety vests or Walmart gift cards. The corporate motto, posted everywhere, is: ‘Nobody gets hurt.’ By 2006, ExxonMobil had an incident rate well below the industry norm, and was consistently turning record-breaking profits.

ExxonMobil has thrived because it has never lost sight of its purposes. Find oil, sell oil, make money. Meanwhile, as Coll notes near the conclusion of Private Empire, the United States has been heading in the opposite direction. In 2011, Standard & Poor’s downgraded US bonds to AA-plus. The downgrade, Coll notes, ‘meant that ExxonMobil, one of only four American corporations to maintain the AAA mark, now possessed a credit rating superior to that of the US’. ExxonMobil also had better cash flow – a positive $493 billion between 1998 and 2010, versus a negative $5.7 trillion for the US. Bond ratings and cash flow are far from the best or only indicators of wise governance, but nonetheless, as Coll observes, ‘in an era of terrorism, expeditionary wars and upheaval abroad, coupled with tax cutting and reckless financial speculation at home,’ ExxonMobil ‘navigated confidently’, while the US ‘foundered’.

* * *

What is the source of ExxonMobil’s confidence? Part of it is simply the expertise of the professional engineer. ‘From the beginning the Standard Oil Company has studied thoroughly everything connected with the oil business,’ Tarbell wrote. ‘It has known, not guessed at, conditions. It has had a keen authoritative sight. It has applied itself to its tasks with indefatigable zeal.’ A century later, little has changed. ‘They’re all engineers, mostly white males, mostly from the South,’ one former ExxonMobil board member told Coll. ‘They shared a belief in the One Right Answer, that you would solve the equation and that would be the answer, and it didn’t need to be debated.’ The attitude gained them respect but little love. Executives from other oil companies, Coll writes, ‘tended to regard their Exxon cousins as ruthless, self-isolating and inscrutable, but also as priggish Presbyterian deacons who proselytised the Sunday school creed Rockefeller had lived by: “We don’t smoke; we don’t chew; we don’t hang with those who do.”’ One executive, Coll writes, was startled to learn that ‘the corporation’s top five leaders, all white males, were the fathers, combined, of 14 sons and zero daughters.’ He had no explanation for this statistical fluke.

When Raymond took over as CEO, he had already overseen the company’s move from Manhattan to Irving, Texas, a blank suburb of Dallas that was more in keeping with his sensibility. He had a PhD in chemical engineering from the University of Minnesota and quizzed his engineers in great detail about their work. If he didn’t like their answers, he dismissed them as ‘stupid shits’. Raymond was born with a cleft palate, and on bad days employees sometimes referred to him as ‘the Lip’. His only hobby was golf. His protégé, Rex Tillerson, who took over in 2006, was literally a boy scout. His father was an assistant district executive for the Boy Scouts of America, and Tillerson made the top rank, eagle scout. Scout language soon ‘found its way into ExxonMobil promotional materials’. Tillerson’s favourite book was Atlas Shrugged.

Raymond, whose private jet crew was instructed to make sure that his favourite drink, milk with popcorn in it, was always in reach, did not lack confidence. ‘I’m never going to say that we are always doing everything exactly right,’ Raymond testified at an Exxon Valdez deposition. ‘I would be naive to do that: but if you are asking me, are there any major decision points that we faced in how to respond to that spill, that in hindsight we go back and say we were wrong … I don’t think there are any.’ That phrase – ‘decision points’ – was the title of George W. Bush’s memoir. Bush and Dick Cheney both worked in the oil industry, and took on some of the tics of their colleagues’ behaviour. Both are ostentatiously blunt and ‘candid’. Both present themselves as ‘results-oriented’. But this was mostly for show. (Bush’s 11-year run as a Texas oilman, in which he lost his investors millions of dollars, ended when he was bought out by friends of his father.) Actual oilmen do seem more appealing by comparison. Certainly Tarbell would have approved. Unlike Standard before it, ExxonMobil achieved a reputation for operating, just barely, within the letter of the law. ‘Exxon made a fetish of rules,’ Coll reports, because they thought they were smart and disciplined enough to win without rigging the game. They were hardnosed, rigid, but, as one former Republican staffer told him, ‘honest as the day is long’.

Tarbell, recalling her youth in the New Yorker in 1937, remembered one of her neighbours struggling with the changes wrought by the invention of the modern oil well. The ‘countryside was turned topsy-turvy’, she wrote.
Less than twenty miles away a man drilled a hole some seventy feet into the earth and began pumping up large quantities of petroleum. The news spread, and overnight men from all directions came hurrying into the country to try their luck. They even hauled their engines and tools over his hilltop, cutting up the roads, tearing down his fences. Many of his neighbours turned teamsters or drillers. He thought the whole business impious and applauded when the preacher declared that taking oil out of the Earth was interfering with the plans of the Almighty, because He had put it there to use in burning up the world on the last day.

* * *

In 1997, Raymond travelled to China to address the 15th World Petroleum Congress on the subject of climate change. The bulk of his speech was devoted to three points: the climate was not changing; even if the climate was changing, our demand for fossil fuels was not the cause; and even if our demand for fossil fuels was the cause, we should continue to demand them. ‘The most pressing environmental problems of the developing nations are related to poverty, not global climate change,’ he said. ‘Addressing these problems will require economic growth, and that will necessitate increasing, not curtailing, the use of fossil fuels.’

Raymond could have been quoting Rockefeller, who would often say to his board members: ‘Give the poor man his cheap light, gentlemen.’ It’s a legitimate point. Even Coll, who does not shy away from cataloguing ExxonMobil’s sins, and has been careful to chronicle the company’s long and shameful history of funding climate-change denialism through its various PACs and research groups, is philosophical about the need to keep the oil flowing: ‘85 per cent of the world’s energy – to fuel cars and trucks, to run air conditioners, to keep iPhone-tapping legions fully charged – still came from taking fossil fuels out of the ground and burning them.’ He could as well have added a note on world hunger: people need fossil fuels not just to fuel the combines that harvest food and the trucks that deliver it, but also to create the fertiliser that grows it. Fritz Haber’s invention – right around the time of the break-up of Standard – of a technique for manufacturing ammonia using hydrogen derived from methane, effectively transforming fossil fuels into rich fertiliser, is the reason food production has kept pace with population growth; the best argument for cheap fuel is that it means cheap food.

But as Coll acknowledges, these are all short-term arguments. Much of Private Empire focuses on war and specific incidents of manmade disaster: the Exxon Valdez, various pipeline spills, Deepwater Horizon, dirty wars in Africa and Indonesia, the wars in Iraq. The largest oil-company related disaster, though, is climate change, which will destroy not just life in the Gulf of Mexico, but life in all of the oceans and on much of the land as well. The smaller disasters happened when the oilmen failed, but climate change is happening because they are successful.

Forecasters in ExxonMobil’s strategic planning department predicted in 2005 that the only thing that would prevent growing demand for oil (and, not incidentally, growing profits for ExxonMobil) would be an unprecedented global carbon tax, and for that to happen, in Coll’s summary of their findings, ‘the world’s governments would have to reach a unified conclusion that climate change presented an emergency on the scale of the Second World War – a threat so profound and disruptive as to require massive national investments and taxes designed to change the global energy mix.’ The forecasters assumed this would not happen. But a decade after Raymond made his speech against taking any kind of action on climate change, his successor was making headlines by calling for just such a tax. Some environmentalists suggested that Tillerson made his move in order to sabotage a more ‘realistic’ plan to pass a cap-and-trade bill (which did in fact end up going nowhere). But Tony Kreindler, the national media director of the Environmental Defense Fund, suggested a theory that seemed more in keeping with the ExxonMobil mindset: ‘They took a very hard look at their business model and decided they could simply out-compete everyone else if the policy were a carbon tax.’

Last June, Tillerson spoke before the Council on Foreign Relations in New York. Alan Murray, an editor at the Wall Street Journal, introduced the forum by citing Coll’s book, which, Murray noted, describes ExxonMobil as a corporate state with its own foreign policy. After making some comments about the opportunities that hydraulic fracturing (fracking) afforded, in terms of exploiting North America’s vast reserves of natural gas, Tillerson began to answer questions from the audience. Eventually, a white-haired man in a blazer asked him about the potentially devastating effects of climate change. ‘The seas will rise, the coastlines will be unstable for generations, the price of food will go crazy. This is what we face, and we all know it,’ the man said, calmly but obviously with great concern. And yet ‘if we burn all these reserves you’ve talked about, you can kiss future generations goodbye. And maybe we’ll find a solution to take it’ – carbon dioxide – ‘out of the air. But, as you know, we don’t have one. So what are you going to do about this? We need your help to do something about this.’

Tillerson, after a long preamble about the uncertainty of climate models in general, and the imprecision of sea-level rise estimates in particular, got to the point. ‘We believe those consequences are manageable,’ he said. ‘As a species, that’s why we’re all still here. We have spent our entire existence adapting, OK? So we will adapt to this.’ ExxonMobil, the greatest corporation in human history, would do nothing to address the greatest crisis in human history. Certainly nothing remotely on the scale of the endeavours that it regularly undertook in its century and a half of inventing and dominating history’s most powerful and consequential industry. ExxonMobil remains ‘the pre-eminent trust of the world – the one whose story is best fitted to illuminate the subject of combinations of capital’. Coll does illuminate it, and what we see is tragic: smart people doing their best to deliver what the world most wants, and in so doing destroying it. ■

December 04, 2009

[Harper's Magazine]
UNDERSTANDING OBAMACARE
To reform a system, first capture it

Signing the Affordable Care Act, March 23, 2010

The idea that there is a competitive "private sector" in America is appealing, but generally false. No one hates competition more than the managers of corporations. Competition does not enhance shareholder value, and smart managers know they must forsake whatever personal beliefs they may hold about the redemptive power of creative destruction for the more immediate balm of government intervention. This wisdom is expressed most precisely in an underutilized phrase from economics: regulatory capture.

When Congress created the first U.S. regulatory agency, the Interstate Commerce Commission, in 1887, the railroad barons it was meant to subdue quickly recognized an opportunity. "It satisfies the popular clamor for a government supervision of railroads at the same time that that supervision is almost entirely nominal," observed the railroad lawyer Richard Olney. "Further, the older such a commission gets to be, the more inclined it will be found to take the business and railroad view of things. It thus becomes a sort of barrier between the railroad corporations and the people and a sort of protection against hasty and crude legislation hostile to railroad interests." As if to underscore this claim, Olney soon after got himself appointed to run the U.S. Justice Department, where he spent his days busting railroad unions.

The story of capture is repeated again and again, in industry after industry, whether it is the agricultural combinations creating an impenetrable system of subsidies, or television and radio broadcasters monopolizing public airwaves for private profit, or the entire financial sector conjuring perilous fortunes from the legislative void. The real battle in Washington is seldom between conservatives and liberals or the right and the left or "red America" and "blue America." It is nearly always a more local contest, over which politicians will enjoy the privilege of representing the interests of the rich.

And so it is with health-care reform. The debate in Washington this fall ought to have been about why the United States has the worst health-care system in the developed world, why Americans pay twice the Western average to maintain that system, and what fundamental changes are needed to make the system better serve us. But Democrats rendered those questions academic when they decided the first principle of reform would be, as Barack Obama has so often explained, that "nothing in our plan requires you to change what you have."

This claim reassured not just the people who like their current employment benefits but also the companies that receive some part of the more than $2 trillion Americans spend every year on health care and that can expect to continue receiving their share when the current round of legislation has come to an end. The health-care industry has captured the regulatory process, and it has used that capture to eliminate any real competition, whether from the government, in the form of a single-payer system, or from new and more efficient competitors in the private sector who might have the audacity to offer a better product at a better price.

The polite word for regulatory capture in Washington is "moderation." Normally we understand moderation to be a process whereby we balance the conservative-right-red preference for "free markets" with the liberal-left-blue preference for "big government." Determining the correct level of market intervention means splitting the difference. Some people (David Broder, members of the Concord Coalition) believe such an approach will lead to the wisest policies. Others (James Madison) see it only as the least undemocratic approach to resolving disputes between opposing interest groups. The contemporary form of moderation, however, simply assumes government growth (i.e., intervention), which occurs under both parties, and instead concerns itself with balancing the regulatory interests of various campaign contributors. The interests of the insurance companies are moderated by the interests of the drug manufacturers, which in turn are moderated by the interests of the trial lawyers and perhaps even by the interests of organized labor, and in this way the locus of competition is transported from the marketplace to the legislature. The result is that mediocre trusts secure the blessing of government sanction even as they avoid any obligation to serve the public good. Prices stay high, producers fail to innovate, and social inequities remain in place.

No one today is more moderate than the Democrats. Indeed, the triangulating work that began two decades ago under Bill Clinton is reaching its apogee under the politically astute guidance of Barack Obama. "There are those on the left who believe that the only way to fix the system is through a single-payer system like Canada's," Obama noted (correctly) last September. "On the right, there are those who argue that we should end employer-based systems and leave individuals to buy health insurance on their own." The president, as is his habit, proposed that the appropriate solution lay somewhere in between. "There are arguments to be made for both these approaches. But either one would represent a radical shift that would disrupt the health care most people currently have. Since health care represents one-sixth of our economy, I believe it makes more sense to build on what works and fix what doesn't, rather than try to build an entirely new system from scratch."

With such soothing words, the Democrats have easily surpassed the Republicans in fund-raising from the health-care industry and are even pulling ahead in the overall insurance sector, where Republicans once had a two-to-one fund-raising advantage. The deal Obama presented last year, the deal he was elected on, and the deal that likely will pass in the end is a deal the insurance companies like, because it will save their industry from the scrap heap even as it satisfies the "popular clamor for a government supervision."

* * *

The private insurance industry, as currently constituted, would collapse if the government allowed real competition. The companies offer no real value and so instead must create a regulatory system that virtually mandates their existence and will soon actually do so.

A study by the McKinsey Global Institute found that health insurance cost the United States $145 billion in 2006, which was $91 billion more than what would be expected in a comparably wealthy country. This very large disparity may be explained by another study, by the American Medical Association, which shows that the vast majority of U.S. health-insurance markets are dominated by one or two health insurers. In California, the most competitive state, the top two insurance companies shared 58 percent of the market. In Hawaii, the top two companies shared the entire market. In some individual towns there was even less competition—Wellmark, for instance, owns 96 percent of the market in Decatur, Alabama. "Meanwhile, there has been year-to-year growth in the largest health insurers' profitability," the AMA reports, even as "consumers have been facing higher premiums, deductibles, copayments and coinsurance, effectively reducing the scope of their coverage." And yet no innovating entrepreneurs have emerged to compete with these profitable enterprises. The AMA suggests this is because various "regulatory requirements" provide "significant barriers to entry." Chief among those barriers, it should be noted, is an actual congressional exemption from antitrust laws, in the form of the McCarran-Ferguson Act of 1945.

Insurance companies aren't quite buggy-whip manufacturers. But they are close. In the past, one could have made an argument that in their bureaucratic capacities—particularly, assessing risk and apportioning payments—insurance companies did offer some expertise that was worth paying for. But all of the trends in politics and in information technology are against insurance companies offering even that level of value. Insurance is an information business, and as technology makes information-management cheaper, technological barriers to entry will fall, and competition will increase. (People who relied on the cost of printing presses to maintain a monopoly should be able to relate.)

At the same time, the very idea of assessing health risk is beginning to be understood as undemocratic, as was revealed by the overwhelming support for the 2008 Genetic Information Nondiscrimination Act, which bars insurers from assessing risk based on genetic information. Over time, more and more information will be off-limits to underwriters, so that insurance ultimately will be commoditized—every unit of insurance will cost about the same as every other unit of insurance. Managers know that one must never allow one's product to become a mere commodity. When every product is like every other product, brand loyalty disappears and prices plummet.

Which perhaps is one reason why the insurers themselves have always favored the central elements of the Democratic plan. As long ago as 1992, when Hillary Clinton was formulating her own approach to reform, the Health Insurance Association of America (now America's Health Insurance Plans, or AHIP) announced that insurers would agree to sell insurance to everyone, regardless of medical condition (guaranteed issue) if the government required every American to buy that insurance, and used tax dollars to subsidize those who could not afford to do so (universal mandate). Carl Schramm, the president of the association, said this was the "only way you preserve the private health-insurance industry. It's plain-out enlightened self-interest." The deal collapsed nonetheless, in part because Congress wanted to introduce a "community rating" system that would have put an end to underwriting by making insurers sell insurance to everybody in a given community for the same price. Insurers wanted to maintain the profitable ability to charge different prices to different people.

Last December, though, AHIP said it would support community rating as well, and since then the real negotiation has been all about details. The insurance companies would agree to sell their undifferentiated commodity to all people, no matter how sick, if the government agreed to require all people, no matter how healthy, to buy their undifferentiated commodity. Sick people who need insurance get insurance and healthy people who don't need insurance cover the cost. A universal mandate would include the 47 million uninsured—47 million new customers.

The Democratic plan looks to be a huge windfall for the insurance companies. How big is not known, but as BusinessWeek reported in August, "No matter what specifics emerge in the voluminous bill Congress may send to President Obama this fall, the insurance industry will emerge more profitable." The magazine quoted an unnamed aide to the Senate Finance Committee who said, "The bottom line is that health reform would lead to increased revenues and profits."

* * *

Democrats have crafted a plan full of ideas that almost certainly will help a lot of people who can't afford insurance now. It also happens to be the case that some of those ideas will significantly benefit the corporations that at one time or another have paid Democrats a lot of money.

The framework for reform, for instance, was authored not by Max Baucus, the Democratic senator who chairs the Finance Committee, but by his senior aide, Liz Fowler, who also directs the committee's health-care staff. She worked for Baucus from 2001 to 2005 but then left for the private sector. In 2008, reports the Washington gossip paper Politico, "sensing that a Democratic-controlled Congress would make progress on overhauling the health care system," she returned to Baucus's side. Where had she retreated to recover from her Washington labors? Politico does not say. In fact, she had become the vice president for public policy and external affairs at WellPoint, one of the nation's largest health-insurance corporations.

Pretty much everyone involved in health-care reform has been on the payroll of one health-care firm or another. Howard Dean, the former head of the Democratic National Committee and, heroically, a longtime proponent of a single-payer system, nonetheless recently joined McKenna Long & Aldridge, a lobbying firm with many clients in the industry. Nancy-Ann DeParle, the so-called health czar who is overseeing reform at the White House, is reported to have made as much as $6 million serving on the boards of several major medical firms. Tom Daschle, who was set to be Obama's secretary of health and human services until it emerged that he had failed to pay taxes on his limousine and driver, now earns a $2 million salary as a "special public policy advisor" for the lobbying firm of Alston & Bird, which represents, among many other clients, HealthSouth and Aetna. Asked to describe his current role, Daschle said, "I am most comfortable with the word resource."

Most illustrative of the clever efficiency with which the Democrats have allowed themselves to be captured, though, is the strange journey of Billy Tauzin. He spent his first fifteen years in Congress as a "conservative" Democrat, struggling mightily to make his fellow party members more amenable to the needs of the health-care industry. In 1995 he founded the "moderate" Blue Dog coalition, whose members continue to deliver the most reliably pro-business vote in the Democratic caucus. But the Blue Dogs did not go far enough for Tauzin, so later that year he became a Republican, and by 2003 he finally had mastered the system to the degree that he could personally craft one of the largest corporate giveaways in American history: Medicare Part D. After that bill was made into law, he took the natural next step—he became president of the Pharmaceutical Research and Manufacturers of America, the lobbying arm of the drug industry.

Now the circle is complete. The Democratic president of the United States, the candidate of change, the leader of the party Billy Tauzin deserted so long ago for failing to meet the needs of business, must "negotiate" directly with this Republican lobbyist, and rather than repeat this entire tortured journey himself, all Obama has to do is agree to Tauzin's demands—which he has. The Democratic deal for the drug companies is, if anything, even sweeter than the Democratic deal for the insurance companies. After one of Tauzin's many visits to the White House, he told the Los Angeles Times that the president had decided Medicare Part D would not be touched. "The White House blessed it," Tauzin said, assuring his clients that billions of government dollars would continue to flow their way. Democrats, meanwhile, must have been almost equally assured by the subsequent headline in Ad Age: "Pharma Backs Obama Health Reform with $150 Million Campaign."

* * *

What can Republicans do against opponents like that? They are trying to win back their friends in industry, but the effort is a bit sad. In September, for instance, Senator Jim Bunning of Kentucky proposed an amendment that would, among other things, require a "cooling-off period" of seventy-two hours once the bill was completed. His colleague, Pat Roberts of Kansas, said such a pause would provide "the people that the providers have hired to keep up with all of the legislation that we pass around here" the opportunity to say, "Hey, wait a minute. Have you considered this?"

But of course "the people that the providers have hired"—having actually already written the legislation—are quite familiar with the details. The only hope for Republicans right now is if the insurers themselves decide they can get an even better deal by turning on the Democrats, which no doubt they eventually will. Just because competition has moved from the marketplace to the legislature does not mean it is any less intense. Even as various cartels and trusts compete for the favor of the parties, so too must the parties continue to compete for the favor of the cartels and the trusts. In October, for instance, the insurers appeared to turn against the Democrats when AHIP released a study that claimed the Democratic approach to reform would radically increase the cost of insurance. Obama, meanwhile, hit right back. In his weekly radio address, he said the study was "bogus," noted that the insurance companies had long resisted attempts at reform, and even called into question the validity of the industry's antitrust exemption. The New York Times reported that such attacks indicated a "sharp break between the White House and the insurance industry," but this was better understood as a negotiating gambit—perhaps insurers believed drug manufacturers were getting a better deal and saw an opening, or perhaps they simply wanted to revise a specific term of the bill, which at the time, according to the Wall Street Journal, would have increased their industry's tax burden by $6.7 billion a year.

As Democrats negotiate such impasses, the Republicans, no longer the favored party of corporate America, are left to represent nothing and no one but themselves. They are opposing reform not for ideological reasons but simply because no other play is available. They have lost the business vote, and even their call for "fiscal responsibility" is gestural at best. The "public plan" so hated by Republicans, for instance, would have reduced the cost of reform by as much as $250 billion over the next decade, yet the party universally opposed it because, as Senator Charles Grassley of Iowa explained, "Government is not a fair competitor. It's a predator."

Such non sequiturs have opened the way to the darker dream logic that of late has come to dominate G.O.P. rhetoric. Nothing remains but primordial emotion—the fear, rage, and jealousy that have always animated a significant minority of American voters—so Republican congressmen are left to take up concerns about "death panels" and "Soviet-style gulag health care" that will "absolutely kill seniors." Republicans, having lost their status as the party of business, have become the party of incoherent rage. It is difficult to imagine anything good coming from a system that moderates the will of corporations with the fantasies of hysterics. ■

July 04, 2009

[Harper's Magazine]
WE STILL TORTURE
The new evidence from Guantánamo

Untitled #2/07, ink on paper, by Sandy Walker

We face the temptation to believe that an election can “change everything”—that the stark contrast between Barack Obama and George W. Bush recapitulates an equally stark contrast between the present and the past. But political events move within a continuum, and they are driven by many forces other than democratic action, including the considerable power of their own momentum. Such is the case with the ongoing American experiment with torture.

The release in April of documents from the International Committee of the Red Cross, from the U.S. Justice Department’s Office of Legal Counsel, and from the House Armed Services Committee confirmed what had long been known about CIA and military interrogation techniques. They are brutal and, despite the surreal claims of the Bush Justice Department, they are illegal. The assumption underlying coverage of “the torture story,” however, has been that U.S.-sponsored torture came to a halt on January 21. The culpability of the previous administration remains to be determined, we are told, and in terms of ongoing criminal liability, the worst Obama himself could do is obstruct an investigation. Regarding the launch of that investigation, we must be patient.

We cannot be patient, though, and not simply because justice must be swift. We cannot be patient because not only have we failed to punish the people who created and maintained our torture regime; we have failed to dismantle that regime and, in many cases, even to cease torturing.

This last charge is the least heard. Although it is true that waterboarding is once again proscribed, it is equally true that the government continues to permit a series of “torture lite” techniques—prolonged isolation, sleep and sensory deprivation, force-feeding—that even Reagan appointee Judge Susan Crawford had to acknowledge amounted to torture when she threw out the government’s case against one accused terrorist. Like waterboarding, these techniques cause extreme mental anguish and permanent physical damage, and, like waterboarding, they are not permitted under international law. But unlike waterboarding, they remain on the books, in detailed prison regulations and field-manual directives, unremarked by anyone except a few activists.

* * *

The United States has always tortured. But our approach to torture has evolved over time. In the past, we preferred to keep the practice hidden. During the Cold War, we exported most of our torture projects to client regimes in Latin America, the Middle East, and South Asia, while at home we worked to perfect a new form of “no touch” interrogation that would achieve terror and compliance without leaving scars, even as we denounced similar practices employed by our enemies. This was the age of hypocrisy—our secrecy was the tribute war crimes paid to democracy.

The hypocritical period ended, of course, with the attacks of September 11, the national flinch, the chest-thumping of George W. Bush, and the grim pronouncements of Dick Cheney, who loudly advertised his willingness to take the United States to “the dark side.” This, as we have all come to understand, was the time of open torture. It was the “shameful era,” when we put the techniques we had developed during the Cold War to use in the new “war on terror.”

Now we have entered what we may wish to call the post-torture era, except that it is not. Indeed, we cannot even revert to the easy hypocrisy of the Cold War. We have returned to our traditional practice of torturing and pretending not to, but the old routine is no longer convincing. We know too much. We know that we are still imprisoning men who very likely are completely innocent. We know that we still beat them. We know that we still use a series of punishments and interrogation techniques—touch and “no touch”—that any normal person would acknowledge to be torture. And we know that when those men protest such treatment by refusing to eat, we strap them to chairs and force food down their throats. We know all of this because it is well documented, not just by reporters and activists but by the torturers themselves.

It is this very openness that suggests why this new age—let’s call it the era of legitimized torture—is so perilous, not just to the men who are tortured but to liberal democracy. The moment is rapidly approaching when President Obama will cease to be the inheritor of a criminal regime and instead become its primary controlling authority, when the ongoing war crimes will attach themselves to his administration. And when they do attach themselves, Obama’s administration will be forced to defend itself, as all administrations do. And it will defend itself by claiming that what we call crimes are not in fact crimes.

This process has already begun. Rather than end illegal torture, we are now solidifying the steps that we have taken to make these activities legal. By failing to change the underlying problem even as we celebrate its supposed “solution,” we actually further entrench the past, the “bad” Bush era, into the present, the “good” Obama era. We will return to the rule of law, but within that rule will remain a rule of torture, given all the greater authority by our love of the new regime.

* * *

We have a tendency in the United States to judge actions not by their intrinsic merit but by the stylishness with which they are executed. Although the ostentatious lawlessness of the previous administration was pleasing to some, it ultimately frightened the majority of Americans. It was far too flamboyant. Obama and the Democrats seem to have rejected ostentation and lawlessness, and are all the more popular for that rejection. But they have not rejected torture itself.

As we learned from the Office of Legal Counsel memos, it is possible to parse “torture” to a considerable degree. What is the allowable incline for a waterboard? How many calories will suffice to avoid starvation? Which insects are permitted to be used in driving a man insane? The correct answer, according to those who parse, is the difference between a war crime and a heroic act of patriotism.

The OLC memos have been discredited but not the thinking behind them. We are still parsing, still weighing, still considering the possibilities. Whereas once we understood torture to be forbidden—something to be hidden and denied—now we understand it to be “complex.” We are instrumental in our analysis, and that instrumentality is held to be a virtue. We don’t torture not because it is illegal or immoral or repugnant to democracy but because “it doesn’t work,” leaving the way clear to torture that does “work.”

The combination of complexity and instrumentality creates the potential for a new inversion. We enter the “complex” realm of torture and draw a new line, and the logical consequence—the unavoidably intended consequence—is that whatever is on the “good” side of that line, the “useful” side, can no longer be called torture. And since it is no longer torture, it must be something else. In this way we arrive at the strangest and most absurd conclusion. What was once a crime becomes a sensible approach to law enforcement. And in becoming sensible it also becomes invisible.

It is our evolving understanding of force-feeding that most clearly demonstrates this process of inversion and invisibility—not because it is the most horrifying form of torture, though it is horrifying, but because it has been so completely mainstreamed. Indeed, as it is practiced at Guantánamo, force-feeding is understood not to be torture at all but in fact to be a form of mercy. It is understood, above all, as a way to “preserve life.”

* * *

As of this writing, at least thirty men are being force-fed at Guantánamo. They are being force-fed despite the departure of the administration that instituted force-feeding, despite the current administration’s order to shut down Guantánamo, and despite its even more specific order requiring prisoners there to be treated within the bounds of Common Article 3 of the Geneva Conventions, which—by every interpretation but that of the U.S. government—clearly forbids force-feeding.1

Most of these prisoners are not facing imminent death. In fact, force-feeding is itself a risky “treatment” that can cause infections, gastrointestinal disorders, and other complications. The feedings begin very soon after prisoners start a hunger strike, and continue daily—with military guards strapping them to restraint chairs, usually for several hours at a time—until the prisoners agree to end the strike. This hunger striker is not an emaciated Bobby Sands lying near death after many weeks of starvation. He is a strong man bound to a chair and covered in his own vomit.2

If force-feeding does not save lives, then what does it do? What makes it useful? From the perspective of the prisoner, there can be only one answer: Pain makes force-feeding useful. The pain makes the strike unbearable, and therefore it prevents further protest.

This is not just a logical inference. The first experience many Guantánamo prisoners had with being forced to eat was not when they went on hunger strikes but rather when they underwent interrogations at the secret CIA bases where they were held prior to their arrival at Guantánamo. At these “black sites,” we now know from the ICRC and OLC reports, CIA interrogation teams used “dietary manipulation” as a “conditioning technique” to help gather “intelligence.” These techniques, in other words, were a form of torture, no different from other, more infamous techniques outlined in the same reports, including “walling,” “cramped confinement,” and “water dousing.”

A 2005 memo signed by Steven Bradbury, then the acting head of the Office of Legal Counsel, explains the method. Dietary manipulation “involves the substitution of commercial liquid meal replacements for normal food, presenting detainees with a bland, unappetizing, but nutritionally complete diet.” The CIA interrogation team would strap the prisoners to chairs and feed them bottles of Ensure Plus—cited by name—for weeks on end. As Bradbury noted, it was hoped that this would cause the prisoners to become compliant.

The interrogation team believed [redacted] “maintains a tough, Mujahidin fighter mentality and has conditioned himself for a physical interrogation.” The team therefore concluded that “more subtle interrogation measures designed more to weaken [redacted] physical ability and mental desire to resist interrogation over the long run are likely to be more effective.” For these reasons, the team sought authorization to use dietary manipulation, nudity, water dousing, and abdominal slap. In the team’s view, adding these techniques would be especially helpful [redacted] because he appeared to have a particular weakness for food and also seemed especially modest.

Safety, Bradbury stressed, was always a concern in imposing dietary control. “While we do not equate commercial weight-loss programs and this interrogation technique,” he wrote, “the fact that these calorie levels are used in the weight-loss programs, in our view, is instructive in evaluating the medical safety of the interrogation technique.” Bradbury even anticipated the almost sentimental patina of caregiving that informs the present-day discussion of force-feeding at Guantánamo, noting that “a detainee subjected to the waterboard must be under dietary manipulation, because a fluid diet reduces the risks of the technique”—by reducing the risk of choking on undigested vomit. The force-feeding, in other words, was for the good of the prisoner.

Forcing a man to drink a diet shake may seem like a minor affront, far removed from the rack or even from waterboarding. But actual prisoner testimony from another set of documents, the Red Cross interviews acquired by Mark Danner and published in The New York Review of Books in April, suggests that the dietary manipulation was traumatizing:

During the first two weeks I did not receive any food. I was only given Ensure and water to drink. A guard would come and hold the bottle for me while I drank....

During the first month I was not provided with any food apart from on two occasions as a reward for perceived cooperation. I was given Ensure to drink every 4 hours. If I refused to drink then my mouth was forced open by the guard and it was poured down my throat by force....

I was transferred to a chair where I was kept, shackled by [the] hands and feet [and] given no solid food during the first two or three weeks, while sitting on the chair. I was only given Ensure and water to drink. At first the Ensure made me vomit, but this became less with time....

That is how we treated prisoners at CIA black sites, back in the shameful era. It is by no means the worst instance of man’s inhumanity to man. But dietary manipulation clearly was not a technique meant primarily to preserve life.

* * *

Compare now the shameful and repudiated practice of dietary manipulation under Bush to the sensible, life-preserving practice of “involuntary feeding” at Guantánamo today, in the post-torture era.

In February, Lieutenant Colonel Yvonne Bradley, a U.S. military lawyer representing Binyam Mohamed, the British resident who was recently released from Guantánamo, described a now-familiar situation to the Guardian. “Binyam has witnessed people being forcibly extracted from their cell,” she said. “SWAT teams in police gear come in and take the person out; if they resist, they are force-fed and then beaten.”

Bradley continued,

It is so bad that there are not enough chairs to strap them down and force-feed them for a two- or three-hour period to digest food through a feeding tube. Because there are not enough chairs the guards are having to force-feed them in shifts. After Binyam saw a nearby inmate being beaten it scared him and he decided he was not going to resist. He thought, “I don’t want to be beat, injured or killed.”

That same month, Ahmed Ghappour, an attorney with the human-rights group Reprieve, which represents thirty-one detainees at Guantánamo, told Reuters that prison officials were “over-force-feeding” hunger strikers, who were suffering from diarrhea as they sat tied to their chairs. He said in some cases officials were lacing the nutrient shakes with laxatives. And the situation was getting worse. “According to my clients, there has been a ramping up in abuse since President Obama was inaugurated,” Ghappour said, speculating that guards there wanted to “get their kicks in” before the camp closed.

David Remes, an attorney who represents fifteen detainees at Guantánamo, wrote in an April petition to the U.S. District Court for the District of Columbia that one of his clients, Farhan Abdul Latif, had been suffering in particular. When the nasogastric tube “is threaded through his nostril into his stomach,” it “feels like a nail going into his nostril, and like a knife going down his throat.” Latif had in recent months resorted to covering himself with his own excrement in order to avoid force-feeding; when he was finally force-fed, “the tube was inserted through the excrement covering his nostrils.”3

Another prisoner, Maasoum Abdah Mouhammad, told his lawyers at the Center for Constitutional Rights that he and fifteen other men had also refused to eat:

Mr. Mouhammad described that men were vomiting while being overfed. Some of the striking detainees had kept their feeding tubes in their noses even when not being force-fed just to avoid having the tubes painfully reinserted each time. Mr. Mouhammad reported that interrogators were pressuring and coercing the men on hunger strike to eat, making promises that they would be moved to the communal living camp if they began eating. Mr. Mouhammad described these experiences as “torture, torture, torture.”

What was torture at the black sites remains torture today at Guantánamo. It is perhaps ironic that what began as a method for making men talk—in fact, as we are now learning, in order to make them lie about ties between Al Qaeda and Iraq—is now a method of preventing men from “talking,” of preventing them from registering protest at the injustice of their condition. But that irony should not prevent us from recognizing the simple fact of the torture itself.

* * *

Every U.S. institution that could prevent force-feeding has failed to do so. Congress has failed to act, as have the courts, as has the president. Today the American Medical Association refuses even to discipline the doctors employed at Guantánamo.

District Judge Gladys Kessler had the opportunity to address force-feeding in February, when lawyers for Mohammed Al-Adahi and four other prisoners at Guantánamo sought an immediate injunction against the practice. Kessler denied the injunction on the unconvincing grounds that her court lacked not just the jurisdiction but the competency to dispense justice. “Resolution of this issue requires the exercise of penal and medical discretion by staff with the appropriate expertise,” she wrote, “and is precisely the type of question that federal courts, lacking that expertise, leave to the discretion of those who do possess such expertise.” Once again, complexity prevents intervention. (Kessler, it should be noted, began her career working for Democrats in Congress.)

The Pentagon, so richly empowered by such judicial deference, has failed as well. Dr. Ward Casscells was appointed assistant secretary of defense for health affairs in 2007 and thus far has survived in his role as the Pentagon’s top health official. I asked his spokesperson, Cynthia Smith, why he was continuing the previous administration’s policy of force-feeding even after the new president had ordered prisoners to be treated within the bounds of Common Article 3 of the Geneva Conventions. “The policy does save lives,” Smith wrote back (a week later, stipulating that I attribute quotes to her instead of to Casscells). “Idly watching detainees for whose care we are responsible engage in self-starvation to the point of permanent damage to health or death is not required by U.S. law, Common Article 3, or medical ethics.”4

Smith went on to note that some strikers may be protesting because they feel pressured to do so by other prisoners. In such cases, force-feeding was a way to help them resist that pressure. This was a strange argument. Given that the prisoners are separated from one another and are under constant surveillance, such pressure could come only in the form of appeals to conscience. Smith’s logic was reminiscent of the claim by Marc Thiessen, a former Bush speechwriter, in the Washington Post in April: “The job of the interrogator is to safely help the terrorist do his duty to Allah, so he then feels liberated to speak freely”—which itself brings to mind the case of Alvaro Jaume, who was tortured under medical supervision in Uruguay in the 1980s, and who recalled, “These doctors are saving lives, but in a perverse way. The aim of torture is thwarted if the victim cannot support the interminable ordeal. The doctor is needed to prevent you from dying for your convictions.”5 All of which, in any case, suggests that the Pentagon has no intention of changing its policy.

President Obama, to date, has done nothing either. In February, Ramzi Kassem, a Yale law professor who represents one of the hunger strikers, sent a formal letter to Gregory Craig, the new White House counsel, outlining the legal concerns about force-feeding and recommending in detail how to bring the treatment of hunger strikers in line with the Geneva Conventions (for instance, by prohibiting the use of restraint chairs). Obama could simply order these changes, but he has not.

Obama did ask Navy Admiral Patrick Walsh to visit Guantánamo and report back on conditions there. Walsh found the practices in question, including the use of restraint chairs, to be perfectly acceptable. When Reuters asked Walsh about specific incidents of abuse, he was evasive. “We heard allegations of abuse,” he said. “What we found is that there were in some cases substantiated evidence where guards had misconduct, I think that would be the best way to put it.”

* * *

Force-feeding is an especially egregious example of legitimized torture, but it is far from the only example. Just one percent of the prisoners held offshore by the United States are held at Guantánamo, and many other techniques remain legally available to their jailers. The Army Field Manual still permits solitary confinement, sensory deprivation, and sleep deprivation, as well as so-called emotional techniques such as “fear up,” which involves terrifying prisoners into a state of “learned helplessness.”

It is difficult to know the degree to which these practices are employed, though, because President Obama has adopted not only much of the Bush Administration’s torture policy but also its radical doctrine of secrecy. The Obama White House has sought to prevent detainees at Bagram prison in Afghanistan from gaining access to courts where they may reveal the circumstances of their imprisonment, sought to continue the practice of rendering prisoners to unknown and unknowable locations outside the United States, and sought to keep secret many (though not all) of the records regarding our treatment of those detainees.

The result is that what would at first seem to be something positive—a “national conversation about torture”—has instead become a form of complicity. We know that torture occurred, and we know that it continues to occur. Yet we allow ourselves to pretend otherwise because we don’t know enough. The secrecy allows us to transform a taboo into an “issue,” and most voters seem to desire, as Judge Kessler did, to leave the resolution of that issue to the “penal and medical discretion” of “staff with the appropriate expertise.” In one recent poll only 35 percent of Americans called for the closing of Guantánamo, whereas 45 percent wanted to keep it open and 20 percent weren’t sure what we should do.

As ever, Democrats are attempting to split the difference. A major claim by Obama is that he does not want people in the CIA “to suddenly feel like they’ve got to spend all their time looking over their shoulders”—presumably because he does not want to prejudge their “appropriate expertise.” A more persuasive means of preventing torture would be to say precisely the opposite: that people in the CIA should spend all their time looking over their shoulders. But that is not what Obama has said. Now those who would speak against torture in a crisis situation face a strong deterrent. They will be understood as taking a side on an issue—a complex issue—rather than simply upholding well-established legal (and at one time political) precedent.

We have seen too much in the past eight years to pretend any longer that the United States is incapable of criminal abuse or to trust the “experts” to act secretly in what they believe, sincerely or not, to be our best interests. We have seen too much to permit ourselves the luxury of ambivalence. Indeed, now that we have seen what our nation has done in the depths of a panic, we should also be able to recognize the larger, longer-term crimes of our leaders. We have for many years imprisoned a greater proportion of our own people than any other nation on earth, kept many of those prisoners in the kind of prolonged solitary confinement that is shown in study after study to drive people insane, and countenanced the rape of those who aren’t in solitary confinement as part of a system of “rough justice.” We have known this about ourselves for a very long time and done nothing.

Now we have a choice. We can continue our experiment with torture or we can harness the obvious horror of the last eight years to rectify the more discreet horrors of the distant past and the darkening present, and in so doing at last become a nation whose actions embody its pretensions. ■

1 The conventions forbid “humiliating and degrading treatment,” and doctors who advise the Red Cross, which in turn has considerable oversight in interpreting the conventions, have repeatedly made clear that force-feeding is humiliating and degrading. See, for instance, the judgment of Red Cross adviser Hernán Reyes, in a 1998 policy review: “Doctors should never be party to actual coercive feeding, with prisoners being tied down and intravenous drips or oesophageal tubes being forced into them. Such actions can be considered a form of torture, and under no circumstances should doctors participate in them, on the pretext of ‘saving the hunger striker’s life.’”

2 Dr. William Winkenwerder, who served as Bush’s assistant secretary of defense for health affairs and was therefore responsible for the force-feeding policy at Guantánamo, explained this peremptory approach to me three years ago with an almost poignant question: “If we’re there to protect and sustain someone’s life, why would we actually go to the point of putting that person’s life at risk before we act?”

3 Latif, who is now being held in Guantánamo’s “Behavioral Health Unit,” has quite clearly been broken by his many years of confinement. Remes reports that his client has made several suicide attempts, the most recent of which was in his presence. “Without my noticing, he chipped off a piece of stiff veneer from the underside of the table and used it to saw into a vein in his left wrist,” he said. “As he sawed, he drained his blood into a plastic container I had brought and, shortly before our time was up, he hurled the blood at me from the container. It must have been a good deal of blood because I was drenched from the top of my head to my knees.” Latif survived this attempt as well.

4 Smith is one-third right. Force-feeding is indeed permitted under U.S. Bureau of Prisons guidelines. But as previously noted, the Geneva Conventions are well understood to forbid the practice, and the guidelines of the World Medical Association are even more unambiguous: “Forcible feeding is never ethically acceptable.”

5 The historian A. J. Langguth recalled some similar thinking many years ago in the New York Times, drawing from the memoirs of a CIA asset in the Uruguayan police force who was trained in the 1960s by Dan Mitrione, of the U.S. Office of Public Safety (which was founded to facilitate the training of officials in states believed to be threatened by Communist subversion).

“Before all else,” Mitrione explained to his Latin American protégé, “you must be efficient. You must cause only the damage that is strictly necessary, not a bit more. We must control our tempers in any case. You have to act with the efficiency and cleanliness of a surgeon and with the perfection of an artist.”

Mitrione was a bureaucrat at heart. “It is very important to know beforehand whether we have the luxury of letting the subject die,” he said, adding that a “premature death means a failure by the technician.”

Compare Mitrione’s claims with the words of the top lawyer at the CIA’s Counterterrorism Center, Jonathan Fredman, at a 2002 strategy meeting (the minutes for which were released in 2008 by Carl Levin as part of an investigation by the Senate Armed Services Committee, which he chairs). Fredman was similarly professional, emphasizing that “techniques that lie on the harshest end of the spectrum must be performed by a highly trained individual. Medical personnel should be present to treat any possible accidents.” He also discussed a bureaucracy’s strong requirement for documentation: “If someone dies while aggressive techniques are being used, regardless of cause of death, the backlash of attention would be severely detrimental. Everything must be approved and documented.” And he brought the same dark, almost humorous, perception of his task to bear, declaring that torture “is basically subject to perception. If the detainee dies you’re doing it wrong.”

Fredman, it should be noted, claims that he was “paraphrased sloppily and poorly.” The prudent degree of specificity may vary from regime to regime, but the mind of the torturer remains the same at all times and in all places.

February 04, 2009

[Harper's Magazine]
SICK IN THE HEAD
Why America won't get the health-care system it needs


The Government Sector

When Congress is in session, Michigan Congressman John Conyers holds a regular public meeting at the Rayburn House Office Building, where, if you happen to be interested in health policy, you are welcome to join like-minded citizens in considering the merits of HR 676, also known as the United States National Health Insurance Act. If signed into law, HR 676 would require a single payer (the government) to provide health insurance to every American, which is likely why most Americans have never heard of it. Nearly every other wealthy nation has a single-payer system, but in the United States—or at least in Congress—single payer generally is understood to be too utopian, too extreme, and certainly too socialist for domestic consumption.

I was surprised, therefore, when I went to one of the meetings in July and found a hundred or so people stuffed into a stately conference room. Everyone had a notebook, but no one had the bored look of a political reporter. These were activists, young and mostly black or Hispanic. Conyers, along with several guest speakers, sat behind balusters on a low platform that crossed the width of the room. At the other end, near the door, someone had arranged a banquet table potluck style, with tins of homemade brownies and cupcakes. I pushed my way to one of the few remaining chairs in the back as Conyers, now at the lectern and speaking softly into a microphone, asked whether anyone would like to address the gathering.

The first to speak was a large man in an immaculate green suit. “My name is Kenny Barnes,” he said in a raspy whisper, “and I’ve got an organization called ROOT, Reaching Out to Others Together. It deals with the—my son was murdered, by the way—and it deals with the epidemic of gun violence that’s taking place in the United States of America.” Barnes quickly explained this striking interjection. Children in Washington were being traumatized by a culture of gun violence, and they had little access to mental-health services. A lot of them were being labeled as learning-disabled when in fact what they probably had was post-traumatic stress disorder. They needed help and they weren’t getting it.

Conyers thanked Barnes, and then more people spoke. Each of them told a similarly compelling story. A group of people had been forgotten; they needed help and they weren’t getting it. Some of the groups fit within familiar bounds—minorities with AIDS, for example—but others were parsed to an almost surreal degree of precision. One woman spoke, persuasively, about the special problem of black men who don’t floss. Another addressed the challenge stoplights present to old people who cannot walk across the street in the amount of time it takes for a green light to turn red. Conyers’s aides, watching from seats next to the lectern, would occasionally stand and walk over to someone, whisper in an ear, shake a hand. I wondered what the speakers thought would happen as the result of their varied petitions.

Then two doctors began to put all the divisions and inequities into context. Dr. Walter Tsou, well-fed and graying, first gave a PowerPoint presentation brimming with data about health disparities between various groups in America. We learned that the black infant-mortality rate is still double the white infant-mortality rate, that many doctors are strangely reluctant to recommend cardiac catheterization for elderly black women with chest pain, that Asian Americans had a significantly higher occurrence of hepatitis B than non-Asian Americans until 1993, when doctors began vaccinating all newborns against the disease. Remedying these disparities, Dr. Tsou said, was not a matter of repairing the health-care system. It was a matter of repairing everything. Your health is determined not only by your genes, after all, but also by your environment. And that environment is determined by the rules society itself sets up—rules about who lives in what place, who goes to what school, who gets what job. “Until we actually address the social determinants of health,” Dr. Tsou said, “we will not truly eliminate health disparities.”

The next speaker, Dr. Robert Zarr, continued the line of thought. “The single most important reason why we see these disparities is lack of health insurance,” he said, with staccato confidence. “That is the truth. It’s the truth for those of us who have gone periods of our lives without health insurance. It’s the truth for my patients.” Dr. Zarr explained that he is the director of pediatric medicine for a group of community health centers that process more than 60,000 pediatric visits a year, and that most of the children who come through have a shaky connection at best to any kind of benefits. Without insurance, he said, there was not much he could do for these kids. “What good is it if I write them a prescription for the antibiotic if they don’t have money to go to the pharmacy to get it? What good is it that I diagnose dysplasia of the hip of a baby if I can’t get him in to a specialist to get seen?”

A natural salesman, Dr. Zarr then asked his audience some leading questions. “What if I told you there’s an answer right now, right here and today? There is an answer to getting rid of this single most important barrier. Can anybody tell me what that answer is?” Several people in the audience, anticipating what would become a Democratic campaign mantra, shouted out: “Universal health care!” But Dr. Zarr was indignant. “Not just universal health care—even President Bush talks about universal health care!—single-payer universal health care.” Then he lowered his voice. “Now let me tell you why. HR 676 clearly is going to give lifetime, comprehensive, quality access to care to every single American. Keep it simple. That’s what it is going to do.”

This was a strong claim, of course. Single payer would not end racism. Poor people would still be poor and sick people would still be sick. There was no doubt, though, that in a single-payer system the whole idea of “forgotten groups” simply could be eliminated. Instead of separating the healthy from the sick, the high-risk from the low-risk, the rich from the poor, a single payer would unite all Americans into a single system. There would be no tiers, no ghettos, no red lines, at least not in terms of access to health insurance, because a single payer—the government—would cover everyone.

There was one phrase we had to remember, Dr. Zarr said, and it was this: Everybody in, nobody out. “Say it with me. Everybody in, nobody out.”

A Preference for Markets

The argument for single payer is straightforward. When everybody is in, you don’t have to spend a lot of time and money deciding who to keep out. You also don’t have to worry about what to do with the people you’ve kept out when they get sick anyway. (Uninsured sick people cost insurers nothing, but since they often end up seeking expensive emergency-room treatment, they cost taxpayers a lot.) If you want to quit your job and work someplace else, you can do so without fear of losing your health insurance, which means that labor is more mobile. And employers don’t have to carry the burden of benefits, which means that capital is more mobile. If you get sick, you don’t have to worry about losing your coverage or your house. Your insurance is paid for through taxes. And your taxes don’t go up just because you have a preexisting condition; under single payer, there is no such thing as a “preexisting condition.” Moreover, your provider—the single payer—has an incentive to keep you healthy your entire life, rather than just getting you to age sixty-five and then dumping you into Medicare. And if the experience of most other countries is any indication, the whole thing would cost a lot less than our current bloated mess of a system.

The benefits of single payer were at one time, if not a matter of consensus, then at least a topic considered worthy of discussion among Democrats. “I happen to be a proponent of a single-payer health-care program,” Barack Obama said in 2003. “As all of us know, we may not get there immediately. Because first we have to take back the White House, we have to take back the Senate, we have to take back the House.” And yet as Democrats began to take all of those things back, Obama began to reconsider. In 2007, he recast the debate in terms that were more reflective than prescriptive. “If you’re starting from scratch,” he told The New Yorker, “then a single-payer system would probably make sense. But we’ve got all these legacy systems in place, and managing the transition, as well as adjusting the culture to a different system, would be difficult to pull off.” And now that Democrats have the White House, the Senate, and the House, it is clear that a single-payer program is not a part of their agenda.

Something is going to happen, though. That much is certain. And it probably will be similar to the approach set forth in a white paper last November by Montana Senator Max Baucus, who is chairman of the Senate Committee on Finance. The plan borrows ideas from (among many others) Hillary Clinton, incoming Secretary of Health and Human Services Tom Daschle, and Obama himself. The details are vague, but the outline is clear. It achieves universal coverage by requiring Americans who do not already receive health benefits to purchase insurance from a private company. (Obama has proposed that such a mandate should cover only children, but Daschle—whom Obama has charged with overseeing the reform process—has called for the mandate to be universal.) In turn, most employers would be required either to provide benefits to all of their employees or to pay into a fund that would be used to subsidize the purchase of private insurance by those who could not afford to pay for it themselves. This approach is designed not only to assuage the concerns of the many Americans who do not want to change their present arrangements but also to keep America’s health-insurance plans—which employ half a million people, and which saw a major decline in profits in 2008—in business.

It would not be unfair to describe the Baucus approach as “market-oriented.” This may, in fact, be why it has emerged as an acceptable locus of reform. In Washington, there is little that is considered wiser or more bipartisan than a preference for markets. And that preference can even be expressed in terms that are surprisingly far from the standard Cato Institute talking points. Jill Quadagno and Brandon McKelvey, researchers at Florida State University, for instance, describe a widely held vision of so-called consumer-directed health care, which would inject an almost Naderite devotion to consumer awareness into discussions about health care. The goal, they write, “is to transform patients into informed consumers by making medical care into a commodity that is purchased in the same way as other market goods.”

This preference for markets is common, but it is not wise. The health-care system is not at all like other markets, because health, for obvious reasons, is not at all like other goods. (The demand for not dying, to give just one example, is pretty much unlimited.) And in America, market-based solutions very often end up involving the government anyway, as has been made evident most recently in the aftermath of the failed deregulation of Wall Street. Thus far, as Quadagno and McKelvey note, the consumer-directed health-care vision actually has “been implemented through obscure changes in tax law, technocratic provisions added to bills designed for other purposes, experiments with Medicaid ‘waivers’ and a new option, Medicare Advantage,” which introduces supplementary private insurance into the Medicare system. Which is to say that no matter what happens this year, the dead hand of government will continue to direct the flow of health-care dollars in the United States.

Nonetheless, Democrats clearly do not want to discuss the role of government in terms that could be understood as unfriendly to the market. “We all have to keep an open mind on all this stuff, figure out how to get to yes. Everything is on the table,” Baucus had cautioned. “The only thing that’s not on the table is a single-payer system. That’s going nowhere in this country.”

Which is why I was in Washington. Even the new president, with a near landslide victory and a huge congressional majority, sees an unbridgeable divide between what he himself has claimed to understand as the best approach and what can actually be done. I wanted to know what defined that divide, and why single payer fell on the far side.

Rationing

No one doubts that fixing the health-care system is going to require someone to make difficult choices. In 2006, Americans spent $2.1 trillion on health care—at $7,026 per person, more than any other nation—and yet we lag far behind other nations in such measures as infant mortality, life expectancy, and early detection of life-threatening illnesses. This bad bargain is irritating, of course, but, more important, it is also unsustainable. Aging baby boomers will increase their demands on the Medicare system even as the government faces the revenue shortfall that will result from their retirement. Employer-based health care, meanwhile, is increasingly unaffordable, causing many companies financial distress. And even as the cost of the system goes up, a growing number of Americans are being left out of it entirely.

The word for making such choices, so often unsaid in American politics, is “rationing.” All health-care systems, no matter how wealthy, require some form of it, because advances in technology always outpace the ability to pay for them. But there are (at least) two ways to decide who gets what in a health-care system. One of them is to let the market sort it out: those with the most money get the most care. The other is through triage: society seeks to determine, within a given budget, the most effective treatment for the greatest number of people. The difference between these two approaches is significant.

We tend to think about systems in terms of our personal motivations. But systems have their own logic. A market system may be driven by individuals or corporations seeking profits, but the primary function of the market itself is to grow. That is why growth, not profit, is the conventional measure of economic health. Similarly, a health-care system may be driven by individuals seeking to improve their own health, but the primary function of the health-care system itself is (or ought to be) to ensure the overall health of society. Within each realm, these goals can sometimes be in conflict. The success of banks at maximizing profits for a time by extending shaky loans to prospective homebuyers, for instance, has resulted in a recession (i.e., negative growth). Similarly, individuals acting on various beliefs about health care—say, that vaccination leads to autism—can cause the health of society as a whole to decline. More important, though, is that these two realms themselves be understood as independent. Societal health and economic growth are not mutually exclusive, but they are in tension, and when we confuse one with the other, problems occur.

In Overtreated, Shannon Brownlee argues that the major problem of health care in the United States is not that there is too little but that there is too much. “We know that people who don’t get enough care have a higher risk of death,” Brownlee told me. “About 20,000 Americans die prematurely each year from lack of access. But getting unnecessary care isn’t any better for you. In fact, about 30,000 Medicare recipients die each year from overtreatment. This sounds counterintuitive until you think about the fact that practically any medical treatment you can name poses some risk.” For instance, doctors regularly test prostate-specific antigen levels in men to see if they have early signs of prostate cancer. As Maggie Mahar, the author of Money-Driven Medicine, explained it to me, this sounds like due diligence, but in fact the National Cancer Institute does not recommend routine PSA testing, because the majority of older men diagnosed with this slow-growing cancer will die of something else before they experience any overt symptoms, whereas if they are treated for prostate cancer, many will experience such side effects as erectile dysfunction, incontinence, and sometimes even death. “When I was at a conference in Berlin last spring, doctors from other countries were shocked that we still do routine PSA testing,” Mahar said. Why do we do it then? “Urologists stand to gain. The prostate-testing market is worth $200 million to $300 million annually. And no doubt many urologists believe they are saving lives.”

Overtreatment, of course, is another word for growth, and it is the natural consequence of a market-driven system. A triage approach, meanwhile, would save money, both by removing some (though not all) of the incentive to overtreat and by simply eradicating the massive bureaucracy that currently is required just to figure out who is paying for what. Physicians for a National Health Program notes that “private insurance bureaucracy and paperwork consume one-third of every health care dollar” and that going to a single-payer system “would save more than $350 billion per year.”

So the mystery remained. Why is single payer off the table? At Conyers’s meeting, it was Dr. Zarr who presented what at first seemed to be the most plausible theory. “You’ve got to get rid of the middleman,” he said, “and that middleman is the private health-insurance industry. And they have got to go.”

The Insurance Sector

America’s health-insurance plans are represented in Washington by an organization that is called, plainly enough, America’s Health Insurance Plans. Its headquarters happened to be a very short walk from the Rayburn House Office Building, and I had managed to convince its president and CEO, Karen Ignagni, to spend a few minutes with me. When we sat down in AHIP’s sleek conference room, I mentioned that I had just been at a meeting on the Hill where people were discussing how to enact a single-payer system.

“Oh good,” she said. “That’s great!”

Ignagni, who is brisk and extraordinarily attentive, had worked for a time at Walter Reuther’s Committee for National Health Insurance, a background that might strike some as pretty distant, philosophically speaking, from AHIP. I asked if she was surprised to have ended up representing insurance companies. She said it was not something she had planned on. She seemed, however, to have adapted well to the role.

I explained that I had been thinking a lot about information. The profit in the current system, after all, comes not from acquiring as many customers as possible but rather from creating two classes of possible customers—good risks and bad risks—and avoiding the latter class entirely. As we get better at understanding why people get sick, we will also get better at deciding whether or not to insure them. Ultimately, the entire nation could be reduced to two perfect circles: the people who pay for insurance and don’t need it, and the people who need insurance but can’t pay for it. “I mean, asymptotically,” I said, “you will slowly approach perfect knowledge . . .”

“Which will be a terrific thing for patients, a terrific thing for clinicians,” Ignagni leaped in. “It’ll be a terrific thing in terms of actually improving health.” Which is true. Genetic medicine, everyone agrees, will likely help millions of people enjoy longer, healthier lives. But Ignagni was not addressing my concern about groups. Doctors can use evidence to achieve their ends, I proposed, which presumably are to improve the health of their patients. But insurance companies can also use it to achieve their ends, which presumably include reducing medical losses. “But that’s not true in the health-care arena,” Ignagni said, “because they passed the genetic nondiscrimination bill.”

Ignagni was referring to the Genetic Information Nondiscrimination Act, or GINA, which was made law last May. Under GINA, it is illegal for employers or insurers to deny anyone health coverage on the basis of genetic data. If people feared losing their insurance because their very genes could be understood as a preexisting condition, they might avoid seeking information that could help them stay healthy. Ignagni said that the health plans had thought about the law a great deal. They didn’t want to lose a powerful underwriting tool, but ultimately they decided that the benefit, in terms of long-term health, outweighed the cost, in terms of inefficiency of pricing. My concerns, therefore, were unfounded. “We would have, I think, a very different conversation today if the legislation hadn’t been passed,” Ignagni said, but “this perfect information that you’ve talked about is, I think, a marvelous opportunity to actually reduce health-care costs.”

Her reasoning seemed humane and, for the moment, economically sound. Genetic information still makes up only a very small proportion of what underwriters examine to determine health risks. But the trend is clear. What will health-insurance plans do as larger and larger categories of health information are determined to be off limits? As the database grows, so too will the temptation to revise GINA, to use some part of the data, perhaps, to make decisions about whom to insure and whom not to insure, just for the sake of efficiency. What else are insurance companies for, after all, other than apportioning risk? How could AHIP support a law that logically concludes in the demise of underwriting?

Ignagni’s answer was not what I expected. In fact, she echoed precisely the words I had heard Dr. Zarr chanting just a few hours earlier. “In the new market,” she said, “everybody’s going to be in. And then—and I don’t want to be an irresponsible Pollyanna about it—but if you have everybody in, you have the large numbers working for you.”

In retrospect, I should not have been surprised. The preference for markets is more often claimed than felt; the preference for profit is far more sincere, and the method by which it is achieved—competition, bribery, lobbying—is a secondary concern. After Ignagni and I met, when the Obama transition team had made clear that some form of universal health care was forthcoming, she announced that AHIP would support a law requiring private insurers to provide insurance to all people regardless of their medical condition—a form of insurance known as “guaranteed issue”—if Congress would in turn require all Americans not covered by government insurance programs to buy some form of private insurance. This combination of guaranteed issue and individual mandates would add up to a system wherein the government requires healthy people who do not want insurance to buy it anyway, in order to subsidize unhealthy people who need insurance but can’t afford it—which sounds like what most people would call “socialized medicine.”1

1 In a notably succinct press release, Rose Ann DeMoro, the executive director of the California Nurses Association/National Nurses Organizing Committee, called the AHIP proposal a “Marshall Plan for the health insurance industry” that “fully privatizes profit while socializing the health-care risk. The public systems could be bankrupted by their responsibilities to care for the sickest while guaranteeing huge new profit streams” to insurance companies, which would continue to avoid selling insurance to anyone who actually needed it. “Rather than subsidizing these industries,” she concluded, “we would be better off either letting them fail, or simply taking them over, as we have been forced to do with other obsolete sectors.”

A Preference for Invention

If the insurance companies themselves were openly endorsing a non-market solution—albeit one that required millions of new customers to buy their products—then what else could be preventing Americans from embracing single payer?

Another possibility is that Americans believe technology itself will be their salvation. If nothing changes in the next decade, at least one fifth of our economy will end up devoted to health care. And most of that money will be spent not on basic care or preventative treatment but on expensive new technologies such as thallium heart scans. One report, from the Center for Studying Health System Change, suggests that new technology may account for as much as two-thirds of spending growth.

So here was another clue. One of the major trends in U.S. health care is “evidence-based medicine,” which calls for making medical choices by comparing empirical evidence about an individual patient’s condition to a larger body of best practices. This may sound like common sense, but medicine for most of history has been imprecise, decentralized, and as much an art as a science. With extremely complex and expensive genetic and proteomic procedures increasingly defining the future of medicine, however, doctors—and insurers—will come to rely on the same industrial practices that previously made it possible to manufacture jet fighters or set up an international retail operation. At least that is the hope.

Americans put a considerable amount of faith in their nation’s industrial capacity. A 2001 study found that 45 percent of Americans “disagree that it is impossible for any government or public or private insurance to pay for all new medical treatments.” That is to say, they believe that the U.S. health-care system has the potential to pay for every single new treatment that someone invents. The only nation with a more positive outlook is France, where 51 percent believe this and where socialized medicine is considered a birthright. (The European average is 36 percent.) This apposition may not be as odd as it seems at first. Many Americans appear actively to desire socialized medicine, even by that name. In one recent survey, a 45 percent plurality of Americans claimed to prefer a system of “socialized medicine.” And another survey found that 59 percent of American physicians now support some form of national health insurance, up from 49 percent in 2002. Here was evidence, contrary to the Washington consensus, that in the American faith, markets were an easily discarded icon—our preference was for something far deeper and stranger.

Rationalizing

There is a great deal of literature available on health reform, and most of it is just as colorful as you might expect. Every once in a while, though, you come across something unusual. One recent exception is Skin in the Game, which was written by John Hammergren, the chairman, president, and CEO of McKesson, a health-services corporation in San Francisco. The title suggests that Hammergren is making another boilerplate argument for consumer-directed health care. He is a fan of evidence-based medicine, he believes that health care can be understood as a commodity, and his argument builds from the claim that people don’t currently have enough “skin in the game,” because the current structure of health-care delivery hides from them many of the costs of the decisions they make. What is fascinating about Hammergren’s argument, though, is what he proposes as the means by which consumers will be made to understand those hidden costs.

Hammergren shares with his Silicon Valley neighbors an abiding faith in the power of technology to improve the world, and that technology would apply not only to the treatment but to the system itself. A smarter system of health care would first recognize itself as a system, and then it would attempt to perfect itself. This would require a kind of scientific management of the human body, in which health providers analyze every part of the patient’s interaction with the system. The mortal coil would be, as much as possible, shuffled into a controllable digital component. Hammergren’s prescription has the flavor of the early twentieth century—what he calls the “golden age of management and operational efficiency”—when Frederick Taylor and other corporate philosophers began to equate rationality with profitability. A hopeful time.

Hammergren first argues that technology will revolutionize specific forms of treatment. Heart surgeons, for instance, will deploy “caterpillar robots” that crawl through a small incision into your heart. No rib cracking, no collapsing of the left lung, no hands inside the chest cavity. It is an expensive technology, Hammergren writes, but one that is only a few years away. Eventually, molecule-sized robots may be able to repair individual cells and even strands of DNA, with the result that people will be able to live two hundred years “without showing any signs of aging.”

But Hammergren’s real concern is much larger. The robot caterpillars will be wielded in service of what he and others call “personalized medicine,” a numbers-driven approach to treatment that tailors every decision to your individual genetic makeup. Personalized medicine “goes beyond the idea of genetic testing to an entirely new level of care, and in turn a higher quality of life,” he writes, and it will require its own entirely new system of data collection—its own aggregation of large numbers:
When my oldest daughter has her first child, I believe that baby will get a genomic profile for roughly $800. The data obtained through that profile will be stored in a central information system, called an integrated delivery network (IDN), to which primary care physicians and specialists will have access throughout the course of my grandchild’s life. Within the IDN database there will be a kind of artificial intelligence search engine—based on the principles of semantic knowledge and driven by complex algorithms—that can support physicians in their decision-making and recommendations.

My grandchildren’s doctors will know from the moment of birth the likelihood that they will develop some form of chronic condition, cancer, or other significant illness. This knowledge will shape and form their health care for the rest of their lives. Compared to today’s 40-year gap in treatment, my grandchildren will receive constant monitoring and prevention. Tapping the database’s artificial intelligence, their doctors will know which clinical interventions will be most effective, which cardiology or cancer drugs they will respond best to, and when care should be delivered.
Hammergren’s culminating vision of an integrated delivery network is simultaneously deeply idealistic—indeed hopeful—and disturbing. With all its tender humanity and seductive hubris, the passage reads as the first chapter of a cautionary tale.

The Technology Sector

McKesson, it turns out, is the eighteenth-largest corporation in the United States, and the largest corporation of any kind that is involved primarily in health care. In 2008, its various businesses—the distribution of drugs and surgical supplies, and the sale of information systems for all aspects of health care—generated nearly $94 billion in revenues. By comparison, revenues for UnitedHealth Group, the country’s largest health-insurance provider, were just over $75 billion. McKesson processes about 80 percent of all the prescriptions written in America, nearly 10 billion transactions a year—more than Amazon and eBay combined.

The company cultivates a low profile. I lived about eight blocks from its world headquarters—a thirty-eight-story black slab on Market Street—at the time of the dot-com bubble, but until I read Hammergren’s book, I had no idea the place existed. Still, when I called the company to see if I could talk to someone about the integrated delivery network, they not only were receptive to the idea but also suggested that I tour the company’s Vision Center.

Which is how, in August, I came to be standing with Tracy Webber, who oversees the Vision Center, and Randy Spratt, who is McKesson’s CIO, in front of a museum case filled with ancient bottles and advertisements. “This is where we honor our 175 years of history,” Spratt said. McKesson had entered the world as a distributor of imported worm seed, effervescent sodium phosphate, soap bark, and Russian oil, all on display. One yellowing flyer proclaimed that McKesson’s imported Russian oil “lubricates and aids excretion without harmful medicinal action.” I asked Webber, who was exceedingly friendly and well informed, if Russian oil had been a cure-all. “I’m sure,” she said.

Webber pointed to several three-foot-thick stacks of dusty Post-It-sized slips of paper that had been pierced with a wire and now hung like sausages from hooks at the top of the case. “That’s one year’s worth of prescriptions from 1910. That’s the very first pharmacy-management system.” She then pointed to what looked like the handheld scanning device that UPS drivers carry to track their deliveries. “This is the Mobile Manager 100,” she explained. McKesson was the first company to use bar-code scanning technology for distribution. With the Mobile Manager 100, McKesson packers could fill orders for drugs with 99.98 percent accuracy. In Skin in the Game, Hammergren had suggested that doctors eventually would use similar devices to keep the integrated delivery network up to date on a person’s condition—they would be the nerves at the tips of the IDN’s fingers.

Webber guided us to the next exhibit, a computer monitor that was displaying what appeared to be a simple email program. She explained that this system would allow me to interact with my physician when I had basic questions that didn’t require an actual visit.

I said that sounded useful.

“Well, physicians would never respond to you,” Webber said, “because they’re not being paid to respond to you.” Spratt explained the solution. “The trick to it was going out to the payers and getting them to reimburse the physician for an electronic visit, which we have done.”

Payers would benefit because it would cost less than an office visit?

“Right,” Spratt said. “The physicians make out because they can do ten of them in the time it took them to do one office visit. And the patients make out because of convenience. And the payers make out because it costs less to treat the patient in total.”

McKesson, it turned out, also handles about a third of the nation’s health-care claims transactions. Indeed, as we wandered into the Vision Center, it was becoming obvious that the company had at some point traded up from the distribution of cure-alls to the far more profitable business of distributing information, though sometimes it was hard to tell which business was which. Pills certainly are amenable to information metaphors—little bits of data, distributed as efficiently as possible across the country, each with a discrete task and a clear profit margin; a rush of pills through the system, all of them marked, radio-frequency identified, and tracked perpetually like the packets that flow through the Internet. The process of distributing pills, like the process of distributing data, is non-intuitive, does not require the human touch, and is in constant peril of failure due to human error.

I asked Spratt about this blurring of functions, and he nodded. “Some of the most critical processes in health care are the processes around safety,” he said. “And then there’s another set of processes that, essentially, surround the translation of you into information and then the reading of that information and the response to that by a caregiver.”

This sounded familiar. In the 1990s, the process was called “bit-from-it,” and the idea was that solid matter was too difficult to manipulate, whereas bits—information—were much easier to control. Business consultants and futurists made their fortunes by explaining to executives that bits were the future and atoms were the past. I was a little embarrassed to be using decade-old pop-business jargon, so I asked Spratt if the company had a more modern term for bit-from-it. “Oh,” he said. “We would call it adoption.”

McKesson, it was becoming clear, was serious. “Some people, fearing change, say we shouldn’t automate the process of health care because we will lose the human touch,” Hammergren had written. “I believe that the human touch can only be reclaimed by relying on automation.” Such claims seemed ambitious and even somewhat absurd in print. Spratt had been careful to point out, though, that everything at the Vision Center could be purchased and put to use today. The inventions I had seen thus far were just the beginning, a primitive sensory apparatus for an evolving system. I asked whether the information systems at the Vision Center were the prototype for an integrated delivery network.

Spratt nodded again. “We think this is the most effective way to get there,” he said. There had been many experiments with regional databases, but nothing had yet achieved large-scale adoption. The plan was to get the doctors and patients involved, such that they built the database from the bottom up with their own hands, by entering millions upon millions of queries, diagnoses, and prescriptions into various McKesson systems. “We don’t need to build giant databases in the sky,” Spratt said. The databases would simply emerge over time.

I looked at the monitor again, and then asked Spratt where the information for the system in front of us resided.

“This resides in servers that we own and operate,” he said.

Interlude: The Measure of Man

We continue to invent devices that “translate us into information,” but we sometimes forget that such measurement itself was an invention, and a revolutionary one at that.

Marcus Vitruvius Pollio, the great Roman architect and engineer, for instance, considered the structure of the human body as a matter of qualities, not quantities; he saw not numbers but rather spectacular symmetries: “If a man be placed flat on his back, with his hands and feet extended, and a pair of compasses centered at his navel, the fingers and toes of his two hands and feet will touch the circumference of a circle described therefrom.” And so from this insight Leonardo da Vinci drew his famous Vitruvian Man, a picture of proportionality and health. A perfect specimen.

But it is not the Vitruvian Man that is stored in McKesson’s servers. Proportion is not the same as measure. Proportion relates one reality to another. Measure transports reality to a virtual realm. It allows us to translate a material fact into an idea, and ideas, in their Platonic perfection, are considerably more amenable to the strictures of rational analysis than the raw grit of nature. “What else can the human mind hold besides numbers and magnitudes?” asked Johannes Kepler in 1599. “These alone we apprehend correctly, and if piety permits to say so, our comprehension is in this case of the same kind as God’s, at least insofar as we are able to understand it in this mortal life.”

It was by using this godlike ability to translate material qualities into abstract—and therefore controllable—quantities that the physician William Harvey was able to discover the circular nature of the flow of blood in humans, a discovery equivalent in physiology to Galileo’s claim of heliocentrism in cosmology. Caspar Hoffmann of the University of Altdorf reflected the beliefs of the day in a rebuke to Harvey. “Of a truth,” he chided, “you do not use your eyes or command them to be used, but instead rely on reasoning and calculation, reckoning at carefully selected moments how many pounds of blood, how many ounces, how many drachms have to be transferred from the heart into the arteries in the space of one little half hour. Truly, Harvey, you are pursuing a fact which cannot be investigated, a thing which is incalculable, inexplicable, unknowable.”

Hoffmann was a man of the past, though. Even before Harvey introduced new techniques of measurement to medicine, others had already introduced them to commerce, notably with the invention of double-entry bookkeeping, which led in time to the invention of the corporation. Indeed, the new truth, as Walter Burley of Merton College was quick to observe, was that “Every saleable item is at the same time a measured item.”

These seem like ancient insights, but our preference for invention has only amplified their relevance. And when the measure of man and the measure of markets combine, the results are positively futuristic. The IBM Institute for Business Value, for instance, envisions a new age of “just-in-time insurance” in which “maximum efficiency in risk pricing” would be achieved by segregating the components of customers’ lives into measurable “spaces” through which they journey in a given day.

Each step of the journey represents a different risk such as car-to-train-station, train-to-city-station, station-to-office, and so on. Each leg of the trip truly represents a varying amount of risk. A “pay-as-you-live” product would trade some location and time-of-day privacy data for lower insurance bills overall. And in the spirit of active risk management, the same network of sensors could also provide convenient information (such as advice on avoiding an overloaded expressway) relayed on the appropriate device such as the car audio system, a phone, and, then, in e-mail or as a phone call in the office.

Pay as you live! Just so.

The Technology Sector

As we continued our exploration of the Vision Center, it was becoming clear that McKesson had also considered the problem of transforming data back into action. At some point, the prescription would become a pill, and the pill would become a cure. A real person would really be healed or not. But the translation from the information realm to the physical realm is often highly flawed. One major breakdown is in the simple process of selecting a pill. A large hospital delivers thousands of doses a day, and each of those deliveries presents a chance for error. A 1994 study in the Journal of the American Medical Association found that patients in intensive care underwent an average of 178 “activities” per day, and that 1.7 of those activities were in error—a 1 percent failure rate, which is quite high compared to what engineers would find acceptable in assembling airplanes or nuclear power plants.

The McKesson solution, as Spratt was now explaining to me, was to remove people from the process. We were standing in front of a six-sided device that looked like the game-show cage into which contestants are locked with a vortex of swirling loot and invited to grab all they can. This was a far more precise system, however. At the center was a robot arm. And on each of the six interior walls were two dozen spindles, each hung with UPC-labeled plastic bags full of candy, which stood in for pills. The entire contraption was “an automated inventory-management tool,” Webber said—basically, an extremely reliable vending machine.

Webber pushed a button and the machine leaped to life with a series of Star Wars–type servo noises. “You see these little packets?” she asked. “We dump unit bar-coded inventory into the robot. It stocks it on the right spindle, and then it intercepts orders from the pharmacy, and picks those orders, plops them in a name board, labels it with the patient’s name, and that’s your medication for today.”

This seemed like a fairly elaborate approach to selecting an item from a bin. I asked if it was popular. “To my knowledge,” Spratt said, “we have never had one uninstalled.”

Just as a computer can be made to understand the contents of a roomful of pills, it also can be made to understand the contents of an entire hospital. Spratt pointed to another monitor, which displayed a floor plan. “This is a map that links into the systems that say, for example, ‘This guy’s meds are due,’ ‘This guy needs to be transported to get an X-ray,’ ‘This guy is due in the ICU in nine minutes,’ and so on.” It was another feeler for the future IDN. “We can take patients, we can take high-value assets like pumps or parts on wheels, and we can put tags on them that would tell you where they are in the hospital, so if I need a pump right away, I can find the closest one.”

I asked if that meant hospitals would have to tag everything with some kind of radio-frequency identification device.

“Everything of significant value,” Spratt said.

What about at home? Was there a way that people at home could be similarly virtualized for more efficient analysis and treatment?

Spratt nodded again, and Webber explained, “So the Health Buddy is our home monitoring device that would be placed in the homes of patients who are suffering from chronic ongoing diseases.” The Health Buddy looked like a clock radio. It had a very simple display—large letters and icons to make it easy for old people to comprehend—and, like a video-game console, it was capable of accepting various input devices, including a blood-pressure cuff, a scale, and a blood-glucose meter.

“That’s all transmitting back to a caseworker or care coordinator who is trying to decide how often to send a nurse into the home,” Webber explained. What sort of data does it track? “I really think something interesting might be the fact that you got out of bed,” Webber said. “You have a sensor that shows that the patient is being active and has picked the phone up.”

I asked if the Health Buddy would eventually be capable of actually treating people, for instance by taking genetic or proteomic data and simply manufacturing the appropriate prescription on the spot. “That would be very long range,” Spratt said. “But it would be, for example, very easy—we’re not there yet—to have it, with a caregiver intervening, electronically prescribe a new drug and have that delivered to you overnight.”

In the meantime, there was our next stop: the automatic pharmacy machine. It looked like a regular ATM, only with a door flap like on a candy machine instead of a bill dispenser. Webber pushed a button and a robotic voice said, “Please enter your data first.”

“One of the biggest problems is finding new pharmacists who are willing to count pills,” Spratt said. “If you have a prescription and a retail automation pharmacy system, this device here can automatically count the pills from a pretty good sized inventory of potential molecules.”

The robot voice said, “Please enter your PIN.”

Spratt continued, “So it will say, okay, you get thirty—in my case, maybe Xanax—and it’s going to dump thirty Xanax into here.”

The robot voice said, “Please touch the boxes below to select your prescriptions.” Then it said, “Please sign your name to acknowledge that you are receiving medication that has been billed to insurance.”

A Preference for Ideas

I had been, and remain, skeptical of Hammergren’s vision of data-driven salvation, but as I left the McKesson office, I did find myself unexpectedly drawn to the idea that our fallen world could be reborn within a system of our own creation. And as I considered my own strange compulsion, I began to suspect that the preference for markets and the preference for invention were just subsets of another, deeper preference—so deep perhaps that we were not even quite aware of it.

Anyone who has spent any time fighting for the health of the disembodied entity known as a corporation knows that disembodiment is itself a primary advantage. The human body is frail, after all, and subject not only to the physical laws of life—somatic decay and other factors that cause life to require death—but also to the physical laws of all material, living or not. Newton and Yeats agree that everything must fall apart eventually. Corporations, on the other hand, are inventions of the mind. They exist as agreements, brands, and statements, none of which answer to any of the laws of thermodynamics. Corporations can live forever, because they are really just ideas.

Of course, only the most outrageous West Coast transhumanists actually believe they can cast off the mortal realm of meatspace for the immortal realm of the virtual. Those of us who are not making plans to enter cryogenic storage, however, still are part of a culture predisposed to virtuality, a culture where in every realm of endeavor, from industry to politics to art, the word trumps the deed and the immaterial emotion trumps the material fact.

In our culture, though, this will to dematerialize remains, perhaps somewhat ironically, unarticulated and inchoate. This may explain our strange, indeed neurotic, sense that the needs of the “system”—whether it is the system of governance or the system of markets or the system of technological innovation—are now more important than the needs of the individuals such systems once were assumed to serve. We are a nation of closeted Pythagoreans, ashamed of and enthralled by our secret desire to assume the eternal nature of numbers, and made powerless by that shame.

Rationalization

Jonathan Simon, a sociologist who has studied the use of statistics in law enforcement, made a similar observation in a 1988 paper for Law and Society Review. “Over the last century there has been a significant growth within our society of practices that distribute costs and benefits to individuals based on statistical knowledge about the population,” he wrote. And these practices “are successful largely because they allow power to be exercised more effectively and at lower political cost.”

The market price of maintaining or slightly modifying the current system is indeed quite low; the health-care lobby—which is to say, all of the people who benefit from the current system—gave just under $150 million to Congress and the presidential candidates last year. That is a terrific bargain for them, but it does not explain why the rest of us are willing to sell our health so cheaply.

In the United States today, we can use our belief in numbers, which borders on the religious, to rationalize any amount of inequity. We can tell ourselves that we can’t have a system that guarantees health care to every American because such a system would be “inefficient.” We can tell ourselves that we must accept a world in which children suffering from post-traumatic stress don’t get any help because the numbers don’t support it. We can tell ourselves that we must trust our health to insurance companies, that markets are wiser than doctors, that we can afford every technology. We tell ourselves stories and we rationalize our prejudices, and, unless we are willing to more specifically address the “social determinants of health,” as Dr. Tsou said, we will continue to drift toward a change far more substantive than anything currently under consideration in Congress, a change suited to the few who care to exert their will. It may be a revolutionary system of corporate medical control, or a catastrophic financial collapse resulting from hubristic overtreatment, or a medical crisis stemming from some seemingly minor flaw in the heuristics of the integrated delivery network, or just a further increase in the massive inequities that already disgrace our current system. Whatever that change is, though, it will in the end be defined by the passivity of a people that has sacrificed its own, democratic power of large numbers on the altar of strange and unstated beliefs.

The Government Sector

On the third morning of the Democratic National Convention, a few hundred delegates and a dozen reporters gathered around cloth-draped dining tables in a conference room at the Denver Performing Arts Center to listen to the leaders of the Democratic Party describe how they would fix the health-care system. Andy Stern, the stubby, exuberant head of the Service Employees International Union, outlined the aim of the conclave with a convention-style chant.

Stern: What do we want?

Delegates: Health care!

Stern: First hundred days?

Delegates: Yes!

Stern: First hundred days?

Delegates: Yes!

Stern: In the first hundred days of the Obama Administration we’re gonna solve this health-care problem!

The speakers were notably specific, and they demonstrated genuine passion and intelligence. Michigan Congressman John Dingell, the longest-serving member of the House, made his way to the podium on crutches and explained that health-care reform was needed because “a man ought not die like a dog in a ditch.” He added that a man has a right to an attorney when he commits a crime but no right to a doctor when he gets cancer. His anger—and his emphasis on “a man” as the relevant object of social consideration—were unusual. And yet, he concluded, “This is no longer a matter of social concern, or of humanity, but of economic salvation,” and thereby seemed, at least to my mind, to have upended the order of priority that once had informed the conscience of all thinking people.

The room grew more and more crowded. Every speaker was receiving a standing ovation. And a theme was developing. The Democratic governors and congressmen, the labor leaders and community activists, all of the public figures on the farthest left fringes of the respectable health-care debate in the United States, knew this one thing: the U.S. was never going to get health-care reform until the big money decided that health-care reform was in its interest. And the good news—the amazing news—was that the big money had now decided it was.

Up next was Kathleen Sebelius, the governor of Kansas. Until just a few days before the convention, Sebelius had been a leading contender to be vice president of the United States. She was tall and leathery and practiced. Her father had been governor, too, and she had made her own start in electoral politics as the state insurance commissioner. In Kansas, she said, the insurance regulator could take contributions from the insurance companies he or she regulated, and she had long seen this as a preposterous conflict of interest. She had tried and failed to change the law, so she ran for insurance commissioner herself, on a platform of refusing contributions from insurance companies. She won, and within a few years was governor herself. She said that health-care reform was on the way, that we should believe it, that this was real. “We are finally at a tipping point in this country,” she said, “when we have a lot of voices in private industry calling for change!”

Then Ed Rendell, the Pennsylvania governor, with his bright orange tie and waxy block of a head, said, “If we control costs, the rest is easy.” Then Daschle, adding that “we’ve got a huge, huge cost problem.” And finally Hillary Clinton, whose husband asked America in 1993 to ask itself “whether the cost of staying on this same course isn’t greater than the cost of change.” Another standing ovation, longer and louder than all the rest. We cannot wait, she said. We have a plan—this current plan, this plan that represents the very best we can hope for—and it “is not only the moral approach, it is the economically sensible approach.” The crowd applauded again and again.

“This will be the kind of transformational change,” Clinton finally promised, “that comes once in a generation.” ■