Steven Eidman's Blog

The Official Blog of Steven Eidman


Posted by steveneidman on March 18, 2010

As Medicaid Payments Shrink, Patients Are Abandoned

By KEVIN SACK

FLINT, Mich. — Carol Y. Vliet’s cancer returned with a fury last summer, the tumors metastasizing to her brain, liver, kidneys and throat.

As she began a punishing regimen of chemotherapy and radiation, Mrs. Vliet found a measure of comfort in her monthly appointments with her primary care physician, Dr. Saed J. Sahouri, who had been monitoring her health for nearly two years.

She was devastated, therefore, when Dr. Sahouri informed her a few months later that he could no longer see her because, like a growing number of doctors, he had stopped taking patients with Medicaid.

Dr. Sahouri said that his reimbursements from Medicaid were so low — often no more than $25 per office visit — that he was losing money every time a patient walked in his exam room.

The final insult, he said, came when Michigan cut those payments by 8 percent last year to help close a gaping budget shortfall.

“My office manager was telling me to do this for a long time, and I resisted,” Dr. Sahouri said. “But after a while you realize that we’re really losing money on seeing those patients, not even breaking even. We were starting to lose more and more money, month after month.”

It has not taken long for communities like Flint to feel the downstream effects of a nationwide torrent of state cuts to Medicaid, the government insurance program for the poor and disabled. With states squeezing payments to providers even as the economy fuels explosive growth in enrollment, patients are finding it increasingly difficult to locate doctors and dentists who will accept their coverage. Inevitably, many defer care or wind up in hospital emergency rooms, which are required to take anyone in an urgent condition.

Mrs. Vliet, 53, who lives just outside Flint, has yet to find a replacement for Dr. Sahouri. “When you build a relationship, you want to stay with that doctor,” she said recently, her face gaunt from disease, and her head wrapped in a floral bandanna. “You don’t want to go from doctor to doctor to doctor and have strangers looking at you that don’t have a clue who you are.”

The inadequacy of Medicaid payments is severe enough that it has become a rare point of agreement in the health care debate between President Obama and Congressional Republicans. In a letter to Congress after their February health care meeting, Mr. Obama wrote that rates might need to rise if Democrats achieved their goal of extending Medicaid eligibility to 15 million uninsured Americans.

In 2008, Medicaid reimbursements averaged only 72 percent of the rates paid by Medicare, which are themselves typically well below those of commercial insurers, according to the Urban Institute, a research group. At 63 percent, Michigan had the sixth-lowest rate in the country, even before the recent cuts.

In Flint, Dr. Nita M. Kulkarni, an obstetrician, receives $29.42 from Medicaid for an office visit that would bring in $69.63 from Blue Cross Blue Shield of Michigan. She receives $842.16 from Medicaid for a Caesarean delivery, compared with $1,393.31 from Blue Cross.
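
To make the gap concrete, here is a minimal Python sketch, using only the dollar figures quoted above, that expresses Medicaid's payment as a share of the Blue Cross rate:

# Reimbursement figures quoted above (Flint, Mich., 2010).
rates = {
    "office visit":       (29.42, 69.63),      # (Medicaid, Blue Cross)
    "Caesarean delivery": (842.16, 1393.31),
}

for service, (medicaid, blue_cross) in rates.items():
    print(f"{service}: Medicaid pays {medicaid / blue_cross:.0%} of the Blue Cross rate")

# office visit: Medicaid pays 42% of the Blue Cross rate
# Caesarean delivery: Medicaid pays 60% of the Blue Cross rate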

If she takes too many Medicaid patients, she said, she cannot afford overhead expenses like staff salaries, the office mortgage and the malpractice insurance that will run $42,800 this year. She also said she feared being sued by Medicaid patients, who may be at higher risk for problem pregnancies because of underlying health problems.

As a result, she takes new Medicaid patients only if they are relatives or friends of existing patients. But her guilt is assuaged somewhat, she said, because her husband and office mate, Dr. Bobby B. Mukkamala, an ear, nose and throat specialist, is able to take Medicaid; he can afford to, she said, because only a modest share of his patients have it.

The states and the federal government share the cost of Medicaid, which saw a record enrollment increase of 3.3 million people last year. The program now benefits 47 million people, primarily children, pregnant women, disabled adults and nursing home residents. It falls to the states to control spending by setting limits on eligibility, benefits and provider payments within broad federal guidelines.

Michigan, like many other states, did just that last year, packaging the 8 percent reimbursement cut with the elimination of dental, vision, podiatry, hearing and chiropractic services for adults.

When Randy C. Smith showed up recently at a Hamilton Community Health Network clinic near Flint, complaining of a throbbing molar, Dr. Miriam L. Parker had to inform him that Medicaid no longer covered the root canal and crown he needed.

A landscaper who has been without work for 15 months, Mr. Smith, 46, said he could not afford the $2,000 cost. “I guess I’ll just take Tylenol or Motrin,” he said before leaving.

This year, Gov. Jennifer M. Granholm, a Democrat, has revived a proposal to impose a 3 percent tax on physician revenues. Without the tax, she has warned, the state may have to reduce payments to health care providers by 11 percent.

In Flint, the birthplace of General Motors, the collapse of automobile manufacturing has melded with the recession to drive unemployment to a staggering 27 percent. About one in four non-elderly residents of Genesee County is uninsured, and one in five depends on Medicaid. The county’s Medicaid rolls have grown by 37 percent since 2001, and the program now pays for half of all childbirths.

But surveys show the share of doctors accepting new Medicaid patients is declining. Waits for an appointment at the city’s federally subsidized health clinic, where most patients have Medicaid, have lengthened to four months from six weeks in 2008. Parents like Rebecca and Jeoffrey Curtis, who had brought their 2-year-old son, Brian, to the clinic, say they have struggled to find a pediatrician.

“I called four or five doctors and asked if they accepted our Medicaid plan,” said Ms. Curtis, a 21-year-old waitress. “It would always be, ‘No, I’m sorry.’ It kind of makes us feel like second-class citizens.”

As physicians limit their Medicaid practices, emergency rooms are seeing more patients who do not need acute care.

At Genesys Regional Medical Center, one of three area hospitals, Medicaid volume is up 14 percent over last year. At Hurley Medical Center, the city’s safety net hospital, Dr. Michael Jaggi detects the difference when advising emergency room patients to seek follow-up treatment.

“We get met with the blank stare of ‘Where do I go from here?’ ” said Dr. Jaggi, the chief of emergency medicine.

New doctors, with their mountains of medical school debt, are fleeing the state because of payment cuts and proposed taxes. Dr. Kiet A. Doan, a surgeon in Flint, said that of 72 residents he had trained at local hospitals only two had stayed in the area, and both are natives.

Access to care can be even more challenging in remote parts of the state. The MidMichigan Medical Center in Clare, about 90 miles northwest of Flint, closed its obstetrics unit last year because Medicaid reimbursements covered only 65 percent of actual costs. Two other hospitals in the region might follow suit, potentially leaving 16 contiguous counties without obstetrics.

Medicaid enrollees in Michigan’s midsection have grown accustomed to long journeys for care. This month, Shannon M. Brown of Winn skipped work to drive her 8-year-old son more than two hours for a five-minute consultation with Dr. Mukkamala. Her pediatrician could not find a specialist any closer who would take Medicaid, she said.

Later this month, she will take the predawn drive again so Dr. Mukkamala can remove her son’s tonsils and adenoids. “He’s going to have to sit in the car for three hours after his surgery,” Mrs. Brown said. “I’m not looking forward to that one.”

Posted in abortion, business, Democrats, economics, economy, Healthcare, Law, Medicaid, Obama, Politics

A Better Way to Build a Pro-Israel PAC

Posted by steveneidman on March 11, 2010

 

The Jewish Standard

Steven Eidman • Letters

Published: March 4th, 2010

Thank you for your profile of the Joint Action Committee for Political Affairs (JACPAC) and its newly elected president, Clifton resident Gail Yamner.

(see http://www.jstandard.com/content/item/pac_head_urges_greater_political_involvement/ )

There are many one-issue pro-Israel PACs that do excellent work in helping to elect candidates who are strong supporters of a close U.S.-Israel relationship.

However, many of these candidates hold views on domestic issues, such as civil and reproductive rights, gun control, and church-state issues, that are at odds with the views of the overwhelming majority of Jewish voters. JACPAC supports a bipartisan slate of candidates who are both strongly pro-Israel and whose stance on domestic issues is more moderate and tolerant.

While a contribution to a single-issue PAC might previously have helped elect a Jesse Helms or a Tom DeLay, today may go to a Jim DeMint, and tomorrow may wind up in the war chest of Sarah Palin, a contribution to JACPAC will go to candidates whose election will be good for Jewish interests in Israel and here at home.

Steven Eidman
Englewood

Posted in Democrats, Israel, Jewish Interest, National Security, Politics

Where Should You Buy Your Food: Whole Foods or Walmart?

Posted by steveneidman on March 4, 2010

The Great Grocery Smackdown

By Corby Kummer

Buy my food at Walmart? No thanks. Until recently, I had been to exactly one Walmart in my life, at the insistence of a friend I was visiting in Natchez, Mississippi, about 10 years ago. It was one of the sights, she said. Up and down the aisles we went, properly impressed by the endless rows and endless abundance. Not the produce section. I saw rows of prepackaged, plastic-trapped fruits and vegetables. I would never think of shopping there.

Not even if I could get environmentally correct food. Walmart’s move into organics was then getting under way, but it just seemed cynical—a way to grab market share while driving small stores and farmers out of business. Then, last year, the market for organic milk started to go down along with the economy, and dairy farmers in Vermont and other states, who had made big investments in organic certification, began losing contracts and selling their farms. A guaranteed large buyer of organic milk began to look more attractive. And friends started telling me I needed to look seriously at Walmart’s efforts to sell sustainably raised food.

Really? Wasn’t this greenwashing? I called Charles Fishman, the author of The Wal-Mart Effect, which entertainingly documents the market-changing (and company-destroying) effects of Walmart’s decisions. He reiterated that whatever Walmart decides to do has large repercussions—and told me that what it had decided to do since my Natchez foray was to compete with high-end supermarkets. “You won’t recognize the grocery section of a supercenter,” he said. He ordered me to get in my car and find one.

He was right. In the grocery section of the Raynham supercenter, 45 minutes south of Boston, I had trouble believing I was in a Walmart. The very reasonable-looking produce, most of it loose and nicely organized, was in black plastic bins (as in British supermarkets, where the look is common; the idea is to make the colors pop). The first thing I saw, McIntosh apples, came from the same local orchard whose apples I’d just seen in the same bags at Whole Foods. The bunched beets were from Muranaka Farm, whose beets I often buy at other markets—but these looked much fresher. The service people I could find (it wasn’t hard) were unfailingly enthusiastic, though I did wonder whether they got let out at night.

During a few days of tasting, the results were mixed. Those beets handily beat (sorry) ones I’d just bought at Whole Foods, and compared nicely with beets I’d recently bought at the farmers’ market. But packaged carrots and celery, both organic, were flavorless. Organic bananas and “tree ripened” California peaches, already out of season, were better than the ones in most supermarkets, and most of the Walmart food was cheaper—though when I went to my usual Whole Foods to compare prices for local produce, they were surprisingly similar (dry goods and dairy products were considerably less expensive at Walmart).

Walmart holding its own against Whole Foods? This called for a blind tasting.

I conspired with my contrarian friend James McWilliams, an agricultural historian at Texas State University at San Marcos and the author of the new Just Food: Where Locavores Get It Wrong and How We Can Truly Eat Responsibly. He enlisted his friends at Fino, a restaurant in Austin that pays special attention to where the food it serves comes from, as co-conspirators. I would buy two complete sets of ingredients, one at Walmart and the other at Whole Foods. The chef would prepare them as simply as possible, and serve two versions of each course, side by side on the same plate, to a group of local food experts invited to judge.

I started looking into how and why Walmart could be plausibly competing with Whole Foods, and found that its produce-buying had evolved beyond organics, to a virtually unknown program—one that could do more to encourage small and medium-size American farms than any number of well-meaning nonprofits, or the U.S. Department of Agriculture, with its new Know Your Farmer, Know Your Food campaign. Not even Fishman, who has been closely tracking Walmart’s sustainability efforts, had heard of it. “They do a lot of good things they don’t talk about,” he offered.

The program, which Walmart calls Heritage Agriculture, will encourage farms within a day’s drive of one of its warehouses to grow crops that now take days to arrive in trucks from states like Florida and California. In many cases the crops once flourished in the places where Walmart is encouraging their revival, but vanished because of Big Agriculture competition.

Ron McCormick, the senior director of local and sustainable sourcing for Walmart, told me that about three years ago he came upon pictures from the 1920s of thriving apple orchards in Rogers, Arkansas, eight miles from the company’s headquarters. Apples were once shipped from northwest Arkansas by railroad to St. Louis and Chicago. After Washington state and California took over the apple market, hardly any orchards remained. Cabbage, greens, and melons were also once staples of the local farming economy. But for decades, Arkansas’s cash crops have been tomatoes and grapes. A new initiative could diversify crops and give consumers fresher produce.

As with most Walmart programs, the clear impetus is to claim a share of consumer spending: first for organics, now for locally grown food. But buying local food is often harder than buying organic. The obstacles for both small farm and big store are many: how much a relatively small farmer can grow and how reliably, given short growing seasons; how to charge a competitive price when the farmer’s expenses are so much higher than those of industrial farms; and how to get produce from farm to warehouse.

Walmart knows all this, and knows that various nonprofit agricultural and university networks are trying to solve the same problems. In considering how to build on existing programs (and investments), Walmart talked with the local branch of the Environmental Defense Fund, which opened near the company’s Arkansas headquarters when Walmart started to look serious about green efforts, and with the Applied Sustainability Center at the University of Arkansas. The center (of which the Walmart Foundation is a chief funder) is part of a national partnership called Agile Agriculture, which includes universities such as Drake and the University of New Hampshire and nonprofits like the American Farmland Trust.* To get more locally grown produce into grocery stores and restaurants, the partnership is centralizing and streamlining distribution for farms with limited growing seasons, limited production, and limited transportation resources.

Walmart says it wants to revive local economies and communities that lost out when agriculture became centralized in large states. (The heirloom varieties beloved by foodies lost out at the same time, but so far they’re not a focus of Walmart’s program.) This would be something like bringing the once-flourishing silk and wool trades back to my hometown of Rockville, Connecticut. It’s not something you expect from Walmart, which is better known for destroying local economies than for rebuilding them.

As everyone who sells to or buys from (or, notoriously, works for) Walmart knows, price is where every consideration begins and ends. Even if the price Walmart pays for local produce is slightly higher than what it would pay large growers, savings in transport and the ability to order smaller quantities at a time can make up the difference. Contracting directly with farmers, which Walmart intends to do in the future as much as possible, can help eliminate middlemen, who sometimes misrepresent prices. Heritage produce currently accounts for only 4 to 6 percent of Walmart’s produce sales, McCormick told me (already more than a chain might spend on produce in a year, as Fishman would point out), adding that he hopes the figure will get closer to 20 percent, so the program will “go from experimental to being really viable.”

Michelle Harvey, who is in charge of working with Walmart on agriculture programs at the local Environmental Defense Fund office, summarized a long conversation with me on the sustainability efforts she thinks the company is serious about: “It’s getting harder and harder to hate Walmart.”

“We support local farmers,” read a sign at an Austin Walmart. I didn’t see any farm names listed in the produce section, but I did find plastic tubs of organic baby spinach and “spring mix” greens with modern labeling that looked like it could have come from Whole Foods. My list was simple to the point of starkness, for a fair fight. Some ingredients seemed identical to what I’d find at Whole Foods. Organic, free-range brown eggs. Promised Land all-natural, hormone-free milk. A bottle of Watkins Madagascar vanilla for panna cotta. I couldn’t find much in the way of the seasonal fruit the restaurant had told me the chef would serve with dessert. But I did find, to my surprise, a huge bin of pomegranates, so I bought those, and some Bosc pears. The sticking points were fresh goat cheese, which flummoxed the nice salespeople (we found some Alouette brand, hidden), and chicken breasts. I could find organic meat, but no breasts without “up to 12 percent natural chicken broth” added—an attempt to inject flavor and add weight. I wasn’t happy with the suppliers, either: Tyson predominated. I bought Pilgrim’s Pride, but was suspicious. The bill was $126.02.

At the flagship Whole Foods, in downtown Austin, the produce was much more varied, though the spinach and spring mix looked less vibrant. The chicken was properly dry, a fresh ivory color—and more than twice as expensive as Walmart’s. My total bill was $175.04; $20 of the extra $50 was for the meat.

Brian Stubbs, the tall, genial young manager of Fino, and Jason Donoho, the chef, were intrigued as they helped me carry bag after bag into the restaurant’s kitchen. They carefully segregated the bags on two shelves of a walk-in refrigerator. The younger cooks looked surprised by the Whole Foods kraft-paper bags, and slightly horrified by the flimsy white plastic ones from Walmart.

The next night 16 critics, bloggers, and general food lovers gathered around a long, high table at the restaurant. Stubbs passed out scoring sheets with bullets for grades of one (worst) to five (best) for each of the four courses, and lines for comments.

The first course, bowls of almonds and pieces of fried goat cheese with red-onion jam and honey, was a clear win for Walmart. The Walmart almonds were described as “aromatic,” “mellow,” “pure,” and “yummy,” the Whole Foods almonds as “raw,” though also more “natural”; they were in fact fresher, though duller in flavor. (Like the best of the food I saw at the Austin Walmart, the packaging for the almonds had a homegrown Mexican look.) The second course, mixed spring greens in a sherry vinaigrette, was another Walmart win: only a few tasters preferred the Whole Foods greens, calling them fresher and heartier-flavored. And only one noticed the little brown age spots on a few Walmart leaves, but she was a ringer—Carol Ann Sayle, a local farmer famous for her greens.

So far Walmart was ahead. But then came the chicken, served with a poached egg on a bed of spinach and golden raisins. A woman whose taste I already thought uncanny—she works as an aromatherapist—compared the broth-infused meat to something out of a hospital cafeteria: “It’s like they injected it with something to make it taste like fast food.” I thought it was salty, damp, and dismal. The spinach, though, was another story: even the most ardent brothy-breast haters thought the Walmart spinach was fresher.

Dessert was the most puzzling. I had thought that Walmart’s locally sourced milk and exotic-looking vanilla would be the gold standard, but the Whole Foods house brands slaughtered them (“Kicks A’s ass,” one taster wrote). People couldn’t find enough words to diss the Walmart panna cotta (“artificial, thin”) and praise the Whole Foods one (“like a good Christmas”). I wished I’d bought the identical Promised Land milk at Whole Foods, to see if there is in fact a difference in the branded food products that suppliers give Walmart, as there is in the case of other branded products. The pomegranate seeds, sadly, were wan, with barely any flavor, particularly compared with the garnet gems from Whole Foods. But Walmart got points from the chef, and from me, for carrying pomegranates at all.

As I had been in my own kitchen, the tasters were surprised when the results were unblinded at the end of the meal and they learned that in a number of instances they had adamantly preferred Walmart produce. And they weren’t entirely happy.

In an ideal world, people would buy their food directly from the people who grew or caught it, or grow and catch it themselves. But most people can’t do that. If there were a Walmart closer to where I live, I would probably shop there.

Most important, the vast majority of Walmarts carry a large range of affordable fresh fruits and vegetables. And Walmarts serve many “food deserts,” in large cities and rural areas—ironically including farm areas. I’m not sure I’m convinced that the world’s largest retailer is set on rebuilding local economies it had a hand in destroying, if not literally, then in effect. But I’m convinced that if it wants to, a ruthlessly well-run mechanism can bring fruits and vegetables back to land where they once flourished, and deliver them to the people who need them most.

Correction: The article originally stated, incorrectly, that the Agile Agriculture partnership included the National Sustainable Agriculture Coalition.

Posted in business, culture, economics, economy, Healthcare

Posted by steveneidman on February 26, 2010

An In-Depth Look At the Federal Budget

by Hale “Bonddad” Stewart

This week, the president announced the creation of a panel to look at the federal budget. As such, it seems appropriate to look at the federal budget in detail to get a sense of what’s there. All of the information contained in the graphs that follow is available from the CBO. All data starts in 1970 and goes through fiscal 2009.

Let’s start with a chart of government revenues and expenditures, starting in 1970:

The US has run a surplus in 4 years since 1970, or about 10% of the time. Over those 39 years we’ve had Republican and Democratic control of both the White House and Congress. This leads to a very simple conclusion: no party can make a legitimate claim to being fiscally responsible.

Above is a chart of the total deficit for each year going back to 1970. First, note (again) that only four years show a surplus. This means that for 35 years (and in fact for a longer period) the US has issued debt on a continuing basis to pay for its revenue shortfall. It also means the US — like most US corporations — has to manage its Treasury operations: the Treasury has to decide what maturity of bond to issue, how much of a particular bond to issue and when to issue it. Again, this is standard procedure from a corporate finance perspective. Currently, total US debt is approximately $12.4 trillion and total US GDP is approximately $14.4 trillion. That makes the debt/GDP ratio 86%. While that is not good, it is not fatal.

Above is a chart of total federal outlays as a percent of GDP. Notice the number has been remarkably constant since 1970, fluctuating right around 20% for most of that time.

Let’s take a look at the components of federal revenue. Personal income taxes (the top blue line) comprise the largest percentage of federal tax receipts, consistently accounting for about 45%-50% of the total. The biggest change since 1970 has occurred in social insurance taxes (the yellow line), which have increased from a little over 20% to about 35%-40% over the last 10 years. Corporate taxes (the light purple line) have consistently been responsible for about 10% of total tax receipts. Finally, note that the overall contribution of estate and gift taxes (the light blue line at the bottom of the graph) is more or less negligible on a percentage basis.

The above chart looks at federal receipts as a percent of GDP. First, note that the percentages have been fairly consistent since 1970. Personal income taxes total between 8%-10% of GDP, corporate taxes total about 2% of GDP and estate and gift taxes account for less than 1% of GDP. The only big change has been an increase in social insurance taxes, which have risen to about 6% of GDP.

The above chart breaks federal spending down into mandatory spending, discretionary spending and interest payments. Mandatory spending has increased from a little under 40% of the federal budget in 1970 to right around 60% over the last few years, while discretionary spending has decreased from right around 60% in 1970 to a little under 40%. The progression of mandatory spending is at the center of much of the budgetary concern in Washington and among the public.

Above is a chart of mandatory and discretionary spending as a percent of GDP. Interestingly enough, despite the increase in the dollar amount of discretionary spending, it has remained more or less constant on a percent-of-GDP basis; the recent spike may be the result of the extraordinary budgetary circumstances the country is currently in. Discretionary spending actually dropped until the beginning of this decade, when it started to rise again.

Finally, note that interest payments are in fact pretty much under control for now. The primary reason for this is the nearly 20-year downward trajectory in interest rates:

Above is a chart of the 10-year CMT (constant maturity Treasury). Interest rates have been dropping for about 20 years. While there is considerable debate about whether this can continue, we’ll have to wait and see how it plays out.

Finally, the chart above shows Social Security, Medicaid and Medicare as percentages of mandatory spending. The big issue here is clear: Medicare’s share of mandatory spending has been increasing for some time.
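
As a quick back-of-the-envelope check, here is a minimal Python sketch; the $12.4 trillion and $14.4 trillion figures and the 4-surpluses-in-39-years count are taken straight from the text above:

# Sanity-check the headline figures cited above (fiscal 2009).
total_debt = 12.4   # trillions of dollars, approximate
gdp = 14.4          # trillions of dollars, approximate
print(f"debt/GDP ratio: {total_debt / gdp:.0%}")                   # 86%

surplus_years, years = 4, 39   # fiscal 1970 through fiscal 2009, as counted in the text
print(f"share of years in surplus: {surplus_years / years:.0%}")   # 10%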

So, what does all of this tell us about the US budget?

1.) The total federal debt/GDP ratio and interest payments (both as a percent of total expenditures and as a percent of GDP) are manageable at current levels. All of this has been aided by a two-decade-long decrease in interest rates. It’s doubtful that will continue given the current pace of expenditures. Most importantly, given the current rate of spending and debt growth, changes will certainly have to be made once we are out of the recession. And that’s where the real political problem lies.

2.) While mandatory spending has remained constant as a percent of GDP, its increase to about 60% of the current federal budget is perhaps the biggest problem the US faces going forward. And as the percentage increase in Medicare payments indicates, medical payments are a primary reason for the problems the country faces at the federal fiscal level.

3.) The argument that the US is taxed to death is wrong. On a percent of GDP basis the US is taxed at moderate rates.

4.) I’m surprised how unimportant estate and gift taxes are to the overall scheme of things. Even before the generous estate tax exemption of the last few years (essentially exempting estates worth less than $3.5 million), estate and gift taxes were remarkably unimportant from a total-revenues perspective. It’s obvious they serve another purpose, such as the theoretical prevention of dynastic wealth transfer.

Posted in business, culture, Democrats, economics, economy, Healthcare, history, Medicaid, National Security, Obama, Politics, Polls

The Problem With Political Reporting

Posted by steveneidman on February 22, 2010

The Quest for Innocence and the Loss of Reality in Political Journalism

by Jay Rosen of PressThink

This is a post about a single line in a recent article in the New York Times: “Tea Party Lights Fuse for Rebellion on Right.”

Before I get to the line that interested me, I need to acknowledge that the investigation the Times undertook for this article is wholly admirable and exactly what we need professional journalists to be doing. Reporter David Barstow spent five months—five months!—reporting and researching the Tea Party phenomenon.

He went to their events. He talked to hundreds of people drawn into the movement. He watched what happens at their rallies and the smaller meetings where movement politics is transacted. He made himself fully literate, learning the differences between the Tea Party and the Patriot movements, reading the authors who have influenced Tea Party activists, getting to know local leaders and regional differences, building up a complex and layered portrait of a political cohort that doesn’t fit into party politics as normally understood.

This is original reporting at a very high level of commitment to public service; it is expensive, difficult, and increasingly rare in a news business suffering under economic collapse.

So I want to make it absolutely clear that I treasure this kind of journalism and indeed devoured Barstow’s report when it came online. (Although I wish it had been twice as long.) And I have no problem with his decision to confine himself to description of the Tea Party movement, rather than evaluating its goodness or badness. The first task is to understand, and that is why we need reporters willing to go out there and witness the phenomenon, interview the participants, pore over the texts and struggle with their account until they feel they have it right.

“A narrative of impending tyranny.”

As Barstow said in an interview with Columbia Journalism Review, “If you spend enough time talking to people in the movement, eventually you hear enough of the same kinds of ideas, the same kinds of concerns, and you begin to recognize what the ideology is, what the paradigm is that they’re operating in.” The key words are spend enough time and begin to recognize.

Now to the part that puzzles me:

It is a sprawling rebellion, but running through it is a narrative of impending tyranny. This narrative permeates Tea Party Web sites, Facebook pages, Twitter feeds and YouTube videos. It is a prominent theme of their favored media outlets and commentators, and it connects the disparate issues that preoccupy many Tea Party supporters — from the concern that the community organization Acorn is stealing elections to the belief that Mr. Obama is trying to control the Internet and restrict gun ownership.

Running through it is a narrative of impending tyranny… That sounds like the Tea Party movement I have observed, so the truth of the sentence is not in doubt. But what about the truth of the narrative? David Barstow is a Pulitzer Prize-winning investigative reporter for the New York Times. He ought to know whether the United States is on the verge of losing its democracy and succumbing to an authoritarian or despotic form of government. If tyranny were impending in the U.S., that would seem to be a story. The New York Times has done a lot of reporting about the Obama Administration, but it has been silent on the collapse of basic freedoms lurking just around the corner. Barstow commented on the sentence that disturbed me in his interview with CJR:

The other thing that came through was this idea of impending tyranny. You could not go to Tea Party rallies or spend time talking to people within the movement without hearing that fear expressed in myriad ways. I was struck by the number of people who had come to the point where they were literally in fear of whether or not the United States of America would continue to be a free country. I just started seeing that theme come up everywhere I went.

It kept coming up, but David… did it make any sense? Was it grounded in observable fact, the very thing that investigative reporters specialize in? Did it square (at all) with what else Barstow knows, and what the New York Times has reported about the state of politics in 2009-10? Seriously: Why is this phrase, impending tyranny, just sitting there, as if Barstow had no way of knowing whether it was crazed and manipulated or verifiable and reasonable? If we credit the observation that a great many Americans drawn to the Tea Party live in fear that the United States is about to turn into a tyranny, with rigged elections, loss of civil liberties, no more free press, a police state… can we also credit the professional attitude that refuses to say whether this fear is reality-based? I don’t see how we can.

As a matter of reported fact

Now we can predict, with a reasonable degree of confidence, what the reply would be from the reporter, his editors (who are equally involved here, as the Times is a very editor-driven newspaper) and his peers in the press. The reply is the reply that is given by the common sense of pro journalism as it is practiced in the United States. “This was a news story, an attempt to report what’s happening out there, as accurately and fairly as possible. Which is not the place for the author’s opinion.” Or: “I was trying to describe the Tea Party movement, and to understand it, which is hard enough; I’ll let others judge what to make of it.”

Sounds good, right? But this distinction, between fact and opinion, description and assessment, is not what my question is about. It may appear to be responsive, but it really isn’t. The price of liberty is eternal vigilance, but… as a matter of reported fact, is the United States actually on the verge of tyranny? That is my question. Would a fair description of the American political scene by the Washington bureau and investigative staff of the New York Times lend support to the “impending tyranny” narrative that Barstow observed as a unifying theme in the Tea Party movement?

It’s a key point, so let me state it again: Based not on a subjective assessment of the Tea Party’s viability or his opinion of its desirability but only on facts he knows about the state of politics and government since Obama’s election, is there any substantial likelihood of a tyranny replacing the American republic in the near future?

I think it’s obvious—not only to me but to Barstow and the journalist who interviewed him for CJR—that the answers are “no.” For if the answers were “yes” it would have been a huge story! No fair description of the current scene, nothing in what the Washington bureau and investigative staff of the New York Times has picked up from its reporting, would support a characterization like “impending tyranny.”

In a word, the Times editors and Barstow know this narrative is nuts, but something stops them from saying so— despite the fact that they must have spent over $100,000 on this one story. And whatever that thing is, it’s not the reluctance to voice an opinion in the news columns, but a reluctance to report a fact in the news columns, the fact that the “narrative of impending tyranny” is ungrounded in any observable reality, even though the sense of grievance within the Tea Party movement is truly felt and politically consequential.

A faltering sense of reality

My claim: We have come upon something interfering with political journalism’s “sense of reality,” as the philosopher Isaiah Berlin called it (see section 5.1). And I think I have a term for the confusing factor: a quest for innocence in reportage and dispute description. Innocence, meaning a determination not to be implicated, enlisted, or seen by the public as involved. That’s what created the pattern I’ve called “regression to a phony mean.” That’s what motivated the rise of he said, she said reporting.

I explained the quest for innocence in a 2008 essay on campaign coverage for tomdispatch.com. (It also ran in Salon.)

But the biggest advantage of horse-race journalism is that it permits reporters and pundits to play up their detachment. Focusing on the race advertises the political innocence of the press because “who’s gonna win?” is not an ideological question. By asking it you reaffirm that yours is not an ideological profession. This is experienced as pleasure by a lot of mainstream journalists. Ever noticed how spirits lift when the pundit roundtable turns from the Middle East or the looming recession to the horse race, and there’s an opportunity for sizing up the candidates? To be manifestly agenda-less is journalistic bliss. Of course, since trying to get ahead of the voters can affect how voters view the candidates, the innocence, too, is an illusion.

The quest for innocence in political journalism means the desire to be manifestly agenda-less and thus “prove” in the way you describe things that journalism is not an ideological trade. But this can get in the way of describing things! As it did in Barstow’s account. Now let’s speed up the picture and imagine how this interference in truth-telling happens routinely, many times a day over years and years of reporting on politics. What’s lost is that sense of reality Isaiah Berlin talked about. In its place is savviness, the dialect of insiders trying to persuade us that they know how things really work. Nothing is more characteristic of the savvy style than statements like “perception is often reality in politics.”

“For some reason, American political coverage is exempt.”

And in fact frustrated observers of political journalism have complained about this loss of the real. The latest to groan about it is George Packer in the New Yorker. He was commenting on how David Broder of the Washington Post, the dean emeritus of political reporters, had written a surreal column about Sarah Palin that nonetheless seemed entirely normal if you know the genre:

Broder wasn’t analyzing Palin’s positions or accusations, or the truth or falsehood of her claims, or even the nature of the emotions that she appeals to. He was reviewing a performance and giving it the thumbs up, using the familiar terminology of political journalism. This has been so characteristic of the coverage of politics for so long that it doesn’t seem in the least bit odd, and it’s hard to imagine doing it any other way.

Italics mine. Packer’s point becomes clearer when he transplants this kind of reporting to Afghanistan with the sense of reality dropped out. “Imagine Karzai’s recent inaugural address as covered by a Washington journalist,” he writes:

“Speaking at the presidential palace in Kabul, Mr. Karzai showed himself to be at the top of his game. He skillfully co-opted his Pashtun base while making a powerful appeal to the technocrats who have lately been disappointed in him, and at the same time he reassured the Afghan public that his patience with civilian casualties is wearing thin. A palace insider, who asked for anonymity in order to be able to speak candidly, said, “If Karzai can continue to signal the West that he is concerned about corruption without alienating his warlord allies, he will likely be able to defuse the perception of a weak leader and regain his image as a unifying figure who can play the role of both modernizer and nationalist.” Still, the palace insider acknowledged, tensions remain within Mr. Karzai’s own inner circle.

This sounds like politics the way our journalists narrate it, but as Packer notes, “A war or an economic collapse has a reality apart from perceptions, which imposes a pressure on reporters to find it. But for some reason, American political coverage is exempt.” Exactly. That’s the exemption Barstow was calling on when he wrote, “…running through it is a narrative of impending tyranny.” Somehow the reality that this narrative exists as a binding force within the Tea Party movement is more reportable than the fact that the movement’s binding force is a fake crisis, a delusion shared.

I leave you with a question: how the hell could this happen?

Posted in art, arts, culture, Democrats, economics, economy, Healthcare, history, Law, National Security, Obama, Politics, Polls

Posted by steveneidman on February 16, 2010

How a New Jobless Era Will Transform America

 

Image credit: Fredrik Broden

By Don Peck
How should we characterize the economic period we have now entered? After nearly two brutal years, the Great Recession appears to be over, at least technically. Yet a return to normalcy seems far off. By some measures, each recession since the 1980s has retreated more slowly than the one before it. In one sense, we never fully recovered from the last one, in 2001: the share of the civilian population with a job never returned to its previous peak before this downturn began, and incomes were stagnant throughout the decade. Still, the weakness that lingered through much of the 2000s shouldn’t be confused with the trauma of the past two years, a trauma that will remain heavy for quite some time.

The unemployment rate hit 10 percent in October, and there are good reasons to believe that by 2011, 2012, even 2014, it will have declined only a little. Late last year, the average duration of unemployment surpassed six months, the first time that has happened since 1948, when the Bureau of Labor Statistics began tracking that number. As of this writing, for every open job in the U.S., six people are actively looking for work. 

All of these figures understate the magnitude of the jobs crisis. The broadest measure of unemployment and underemployment (which includes people who want to work but have stopped actively searching for a job, along with those who want full-time jobs but can find only part-time work) reached 17.4 percent in October, which appears to be the highest figure since the 1930s. And for large swaths of society—young adults, men, minorities—that figure was much higher (among teenagers, for instance, even the narrowest measure of unemployment stood at roughly 27 percent). One recent survey showed that 44 percent of families had experienced a job loss, a reduction in hours, or a pay cut in the past year. 

There is unemployment, a brief and relatively routine transitional state that results from the rise and fall of companies in any economy, and there is unemployment—chronic, all-consuming. The former is a necessary lubricant in any engine of economic growth. The latter is a pestilence that slowly eats away at people, families, and, if it spreads widely enough, the fabric of society. Indeed, history suggests that it is perhaps society’s most noxious ill. 

The worst effects of pervasive joblessness—on family, politics, society—take time to incubate, and they show themselves only slowly. But ultimately, they leave deep marks that endure long after boom times have returned. Some of these marks are just now becoming visible, and even if the economy magically and fully recovers tomorrow, new ones will continue to appear. The longer our economic slump lasts, the deeper they’ll be. 

If it persists much longer, this era of high joblessness will likely change the life course and character of a generation of young adults—and quite possibly those of the children behind them as well. It will leave an indelible imprint on many blue-collar white men—and on white culture. It could change the nature of modern marriage, and also cripple marriage as an institution in many communities. It may already be plunging many inner cities into a kind of despair and dysfunction not seen for decades. Ultimately, it is likely to warp our politics, our culture, and the character of our society for years. 

The Long Road Ahead

 

Since last spring, when fears of economic apocalypse began to ebb, we’ve been treated to an alphabet soup of predictions about the recovery. Various economists have suggested that it might look like a V (a strong and rapid rebound), a U (slower), a W (reflecting the possibility of a double-dip recession), or, most alarming, an L (no recovery in demand or jobs for years: a lost decade). This summer, with all the good letters already taken, the former labor secretary Robert Reich wrote on his blog that the recovery might actually be shaped like an X (the imagery is elusive, but Reich’s argument was that there can be no recovery until we find an entirely new model of economic growth). 

No one knows what shape the recovery will take. The economy grew at an annual rate of 2.2 percent in the third quarter of last year, the first increase since the second quarter of 2008. If economic growth continues to pick up, substantial job growth will eventually follow. But there are many reasons to doubt the durability of the economic turnaround, and the speed with which jobs will return. 

Historically, financial crises have spawned long periods of economic malaise, and this crisis, so far, has been true to form. Despite the bailouts, many banks’ balance sheets remain weak; more than 140 banks failed in 2009. As a result, banks have kept lending standards tight, frustrating the efforts of small businesses—which have accounted for almost half of all job losses—to invest or rehire. Exports seem unlikely to provide much of a boost; although China, India, Brazil, and some other emerging markets are growing quickly again, Europe and Japan—both major markets for U.S. exports—remain weak. And in any case, exports make up only about 13 percent of total U.S. production; even if they were to grow quickly, the impact would be muted. 

Most recessions end when people start spending again, but for the foreseeable future, U.S. consumer demand is unlikely to propel strong economic growth. As of November, one in seven mortgages was delinquent, up from one in 10 a year earlier. As many as one in four houses may now be underwater, and the ratio of household debt to GDP, about 65 percent in the mid-1990s, is roughly 100 percent today. It is not merely animal spirits that are keeping people from spending freely (though those spirits are dour). Heavy debt and large losses of wealth have forced spending onto a lower path. 

So what is the engine that will pull the U.S. back onto a strong growth path? That turns out to be a hard question. The New York Times columnist Paul Krugman, who fears a lost decade, said in a lecture at the London School of Economics last summer that he has “no idea” how the economy could quickly return to strong, sustainable growth. Mark Zandi, the chief economist at Moody’s Economy.com, told the Associated Press last fall, “I think the unemployment rate will be permanently higher, or at least higher for the foreseeable future. The collective psyche has changed as a result of what we’ve been through. And we’re going to be different as a result.” 

One big reason that the economy stabilized last summer and fall is the stimulus; the Congressional Budget Office estimates that without the stimulus, growth would have been anywhere from 1.2 to 3.2 percentage points lower in the third quarter of 2009. The stimulus will continue to trickle into the economy for the next couple of years, but as a concentrated force, it’s largely spent. Christina Romer, the chair of President Obama’s Council of Economic Advisers, said last fall, “By mid-2010, fiscal stimulus will likely be contributing little to further growth,” adding that she didn’t expect unemployment to fall significantly until 2011. That prediction has since been echoed, more or less, by the Federal Reserve and Goldman Sachs. 

The economy now sits in a hole more than 10 million jobs deep—that’s the number required to get back to 5 percent unemployment, the rate we had before the recession started, and one that’s been more or less typical for a generation. And because the population is growing and new people are continually coming onto the job market, we need to produce roughly 1.5 million new jobs a year—about 125,000 a month—just to keep from sinking deeper. 

Even if the economy were to immediately begin producing 600,000 jobs a month—more than double the pace of the mid-to-late 1990s, when job growth was strong—it would take roughly two years to dig ourselves out of the hole we’re in. The economy could add jobs that fast, or even faster—job growth is theoretically limited only by labor supply, and a lot more labor is sitting idle today than usual. But the U.S. hasn’t seen that pace of sustained employment growth in more than 30 years. And given the particulars of this recession, matching idle workers with new jobs—even once economic growth picks up—seems likely to be a particularly slow and challenging process. 
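
The “roughly two years” figure follows from simple arithmetic on the numbers in the passage above; here is a minimal Python sketch (the 600,000-a-month pace is the article’s hypothetical, not a forecast):

# Arithmetic behind the "roughly two years" estimate above.
jobs_hole = 10_000_000    # jobs needed to return to ~5% unemployment
hiring_pace = 600_000     # hypothetical monthly job creation, per the text
new_entrants = 125_000    # monthly labor-force growth (~1.5 million a year)

net_gain = hiring_pace - new_entrants   # jobs that actually shrink the hole
months = jobs_hole / net_gain
print(f"{months:.0f} months, about {months / 12:.1f} years")   # 21 months, about 1.8 years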

The construction and finance industries, bloated by a decade-long housing bubble, are unlikely to regain their former share of the economy, and as a result many out-of-work finance professionals and construction workers won’t be able to simply pick up where they left off when growth returns—they’ll need to retrain and find new careers. (For different reasons, the same might be said of many media professionals and auto workers.) And even within industries that are likely to bounce back smartly, temporary layoffs have generally given way to the permanent elimination of jobs, the result of workplace restructuring. Manufacturing jobs have of course been moving overseas for decades, and still are; but recently, the outsourcing of much white-collar work has become possible. Companies that have cut domestic payrolls to the bone in this recession may choose to rebuild them in Shanghai, Guangzhou, or Bangalore, accelerating off-shoring decisions that otherwise might have occurred over many years. 

New jobs will come open in the U.S. But many will have different skill requirements than the old ones. “In a sense,” says Gary Burtless, a labor economist at the Brookings Institution, “every time someone’s laid off now, they need to start all over. They don’t even know what industry they’ll be in next.” And as a spell of unemployment lengthens, skills erode and behavior tends to change, leaving some people unqualified even for work they once did well. 

Ultimately, innovation is what allows an economy to grow quickly and create new jobs as old ones obsolesce and disappear. Typically, one salutary side effect of recessions is that they eventually spur booms in innovation. Some laid-off employees become entrepreneurs, working on ideas that have been ignored by corporate bureaucracies, while sclerotic firms in declining industries fail, making way for nimbler enterprises. But according to the economist Edmund Phelps, the innovative potential of the U.S. economy looks limited today. In a recent Harvard Business Review article, he and his co-author, Leo Tilman, argue that dynamism in the U.S. has actually been in decline for a decade; with the housing bubble fueling easy (but unsustainable) growth for much of that time, we just didn’t notice. Phelps and Tilman finger several culprits: a patent system that’s become stifling; an increasingly myopic focus among public companies on quarterly results, rather than long-term value creation; and, not least, a financial industry that for a generation has focused its talent and resources not on funding business innovation, but on proprietary trading, regulatory arbitrage, and arcane financial engineering. None of these problems is likely to disappear quickly. Phelps, who won a Nobel Prize for his work on the “natural” rate of unemployment, believes that until they do disappear, the new floor for unemployment is likely to be between 6.5 percent and 7.5 percent, even once “recovery” is complete. 

It’s likely, then, that for the next several years or more, the jobs environment will more closely resemble today’s environment than that of 2006 or 2007—or for that matter, the environment to which we were accustomed for a generation. Heidi Shierholz, an economist at the Economic Policy Institute, notes that if the recovery follows the same basic path as the last two (in 1991 and 2001), unemployment will stand at roughly 8 percent in 2014. 

“We haven’t seen anything like this before: a really deep recession combined with a really extended period, maybe as much as eight years, all told, of highly elevated unemployment,” Shierholz told me. “We’re about to see a big national experiment on stress.” 

The Recession and America’s Youth

 

“I’m definitely seeing a lot of the older generation saying, ‘Oh, this [recession] is so awful,’” Robert Sherman, a 2009 graduate of Syracuse University, told The New York Times in July. “But my generation isn’t getting as depressed and uptight.” Sherman had recently turned down a $50,000-a-year job at a consulting firm, after careful deliberation with his parents, because he hadn’t connected well with his potential bosses. Instead he was doing odd jobs and trying to get a couple of tech companies off the ground. “The economy will rebound,” he said. 

Over the past two generations, particularly among many college grads, the 20s have become a sort of netherworld between adolescence and adulthood. Job-switching is common, and with it, periods of voluntary, transitional unemployment. And as marriage and parenthood have receded farther into the future, the first years after college have become, arguably, more carefree. In this recession, the term funemployment has gained some currency among single 20-somethings, prompting a small raft of youth-culture stories in the Los Angeles Times and San Francisco Weekly, on Gawker, and in other venues.

Most of the people interviewed in these stories seem merely to be trying to stay positive and make the best of a bad situation. They note that it’s a good time to reevaluate career choices; that since joblessness is now so common among their peers, it has lost much of its stigma; and that since they don’t have mortgages or kids, they have flexibility, and in this respect, they are lucky. All of this sounds sensible enough—it is intuitive to think that youth will be spared the worst of the recession’s scars. 

But in fact a whole generation of young adults is likely to see its life chances permanently diminished by this recession. Lisa Kahn, an economist at Yale, has studied the impact of recessions on the lifetime earnings of young workers. In one recent study, she followed the career paths of white men who graduated from college between 1979 and 1989. She found that, all else equal, for every one-percentage-point increase in the national unemployment rate, the starting income of new graduates fell by as much as 7 percent; the unluckiest graduates of the decade, who emerged into the teeth of the 1981–82 recession, made roughly 25 percent less in their first year than graduates who stepped into boom times. 

But what’s truly remarkable is the persistence of the earnings gap. Five, 10, 15 years after graduation, after untold promotions and career changes spanning booms and busts, the unlucky graduates never closed the gap. Seventeen years after graduation, those who had entered the workforce during inhospitable times were still earning 10 percent less on average than those who had emerged into a more bountiful climate. When you add up all the earnings losses over the years, Kahn says, it’s as if the lucky graduates had been given a gift of about $100,000, adjusted for inflation, immediately upon graduation—or, alternatively, as if the unlucky ones had been saddled with a debt of the same size. 
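
To see how a persistent percentage penalty compounds into a six-figure sum, here is an illustrative Python sketch. The starting salary, the raise rate, and the assumption that the gap narrows linearly from 25 percent to 10 percent over 17 years are stand-ins chosen for illustration, not Kahn’s data; the point is only that the cumulative shortfall lands in the same ballpark as the figure cited above.

# Illustrative only: the salary path and gap shape are assumed, not from Kahn's study.
base_salary = 30_000              # hypothetical first-year salary of a "lucky" graduate
raise_rate = 0.03                 # hypothetical annual raise
gap_start, gap_end = 0.25, 0.10   # penalty: ~25% in year 1, ~10% by year 17 (per the text)
years = 17

shortfall = 0.0
for t in range(years):
    salary = base_salary * (1 + raise_rate) ** t
    gap = gap_start + (gap_end - gap_start) * t / (years - 1)
    shortfall += salary * gap

print(f"cumulative shortfall: ${shortfall:,.0f}")   # on the order of $110,000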

When Kahn looked more closely at the unlucky graduates at mid-career, she found some surprising characteristics. They were significantly less likely to work in professional occupations or other prestigious spheres. And they clung more tightly to their jobs: average job tenure was unusually long. People who entered the workforce during the recession “didn’t switch jobs as much, and particularly for young workers, that’s how you increase wages,” Kahn told me. This behavior may have resulted from a lingering risk aversion, born of a tough start. But a lack of opportunities may have played a larger role, she said: when you’re forced to start work in a particularly low-level job or unsexy career, it’s easy for other employers to dismiss you as having low potential. Moving up, or moving on to something different and better, becomes more difficult. 

“Graduates’ first jobs have an inordinate impact on their career path and [lifetime earnings],” wrote Austan Goolsbee, now a member of President Obama’s Council of Economic Advisers, in The New York Times in 2006. “People essentially cannot close the wage gap by working their way up the company hierarchy. While they may work their way up, the people who started above them do, too. They don’t catch up.” Recent research suggests that as much as two-thirds of real lifetime wage growth typically occurs in the first 10 years of a career. After that, as people start families and their career paths lengthen and solidify, jumping the tracks becomes harder. 

This job environment is not one in which fast-track jobs are plentiful, to say the least. According to the National Association of Colleges and Employers, job offers to graduating seniors declined 21 percent last year, and are expected to decline another 7 percent this year. Last spring, in the San Francisco Bay Area, an organization called JobNob began holding networking happy hours to try to match college graduates with start-up companies looking primarily for unpaid labor. Julie Greenberg, a co-founder of JobNob, says that at the first event, on May 7, she expected perhaps 30 people, but 300 showed up. New graduates didn’t have much of a chance; most of the people there had several years of work experience—quite a lot were 30-somethings—and some had more than one degree. JobNob has since held events for alumni of Stanford, Berkeley, and Harvard; all have been well attended (at the Harvard event, Greenberg tried to restrict attendance to 75, but about 100 people managed to get in), and all have been dominated by people with significant work experience. 

When experienced workers holding prestigious degrees are taking unpaid internships, not much is left for newly minted B.A.s. Yet if those same B.A.s don’t find purchase in the job market, they’ll soon have to compete with a fresh class of graduates—ones without white space on their résumé to explain. This is a tough squeeze to escape, and it only gets tighter over time. 

Strong evidence suggests that people who don’t find solid roots in the job market within a year or two have a particularly hard time righting themselves. In part, that’s because many of them become different—and damaged—people. Krysia Mossakowski, a sociologist at the University of Miami, has found that in young adults, long bouts of unemployment provoke long-lasting changes in behavior and mental health. “Some people say, ‘Oh, well, they’re young, they’re in and out of the workforce, so unemployment shouldn’t matter much psychologically,’” Mossakowski told me. “But that isn’t true.” 

Examining national longitudinal data, Mossakowski has found that people who were unemployed for long periods in their teens or early 20s are far more likely to develop a habit of heavy drinking (five or more drinks in one sitting) by the time they approach middle age. They are also more likely to develop depressive symptoms. Prior drinking behavior and psychological history do not explain these problems—they result from unemployment itself. And the problems are not limited to those who never find steady work; they show up quite strongly as well in people who are later working regularly. 

Forty years ago, Glen Elder, a sociologist at the University of North Carolina and a pioneer in the field of “life course” studies, found a pronounced diffidence in elderly men (though not women) who had suffered hardship as 20- and 30-somethings during the Depression. Decades later, unlike peers who had been largely spared in the 1930s, these men came across, he told me, as “beaten and withdrawn—lacking ambition, direction, confidence in themselves.” Today in Japan, according to the Japan Productivity Center for Socio-Economic Development, workers who began their careers during the “lost decade” of the 1990s and are now in their 30s make up six out of every 10 cases of depression, stress, and work-related mental disabilities reported by employers. 

A large and long-standing body of research shows that physical health tends to deteriorate during unemployment, most likely through a combination of fewer financial resources and a higher stress level. The most-recent research suggests that poor health is prevalent among the young, and endures for a lifetime. Till von Wachter, an economist at Columbia University, and Daniel Sullivan, of the Federal Reserve Bank of Chicago, recently looked at the mortality rates of men who had lost their jobs in Pennsylvania in the 1970s and ’80s. They found that particularly among men in their 40s or 50s, mortality rates rose markedly soon after a layoff. But regardless of age, all men were left with an elevated risk of dying in each year following their episode of unemployment, for the rest of their lives. And so, the younger the worker, the more pronounced the effect on his lifespan: the lives of workers who had lost their job at 30, von Wachter and Sullivan found, were shorter than those who had lost their job at 50 or 55—and more than a year and a half shorter than those who’d never lost their job at all.

Journalists and academics have thrown various labels at today’s young adults, hoping one might stick—Generation Y, Generation Next, the Net Generation, the Millennials, the Echo Boomers. All of these efforts contain an element of folly; the diversity of character within a generation is always and infinitely larger than the gap between generations. Still, the cultural and economic environment in which each generation is incubated clearly matters. It is no coincidence that the members of Generation X—painted as cynical, apathetic slackers—first emerged into the workforce in the weak job market of the early-to-mid-1980s. Nor is it a coincidence that the early members of Generation Y—labeled as optimistic, rule-following achievers—came of age during the Internet boom of the late 1990s. 

Many of today’s young adults seem temperamentally unprepared for the circumstances in which they now find themselves. Jean Twenge, an associate professor of psychology at San Diego State University, has carefully compared the attitudes of today’s young adults to those of previous generations when they were the same age. Using national survey data, she’s found that to an unprecedented degree, people who graduated from high school in the 2000s dislike the idea of work for work’s sake, and expect jobs and career to be tailored to their interests and lifestyle. Yet they also have much higher material expectations than previous generations, and believe financial success is extremely important. “There’s this idea that, ‘Yeah, I don’t want to work, but I’m still going to get all the stuff I want,’” Twenge told me. “It’s a generation in which every kid has been told, ‘You can be anything you want. You’re special.’” 

In her 2006 book, Generation Me, Twenge notes that self-esteem in children began rising sharply around 1980, and hasn’t stopped since. By 1999, according to one survey, 91 percent of teens described themselves as responsible, 74 percent as physically attractive, and 79 percent as very intelligent. (More than 40 percent of teens also expected that they would be earning $75,000 a year or more by age 30; the median salary made by a 30-year-old was $27,000 that year.) Twenge attributes the shift to broad changes in parenting styles and teaching methods, in response to the growing belief that children should always feel good about themselves, no matter what. As the years have passed, efforts to boost self-esteem—and to decouple it from performance—have become widespread. 

These efforts have succeeded in making today’s youth more confident and individualistic. But that may not benefit them in adulthood, particularly in this economic environment. Twenge writes that “self-esteem without basis encourages laziness rather than hard work,” and that “the ability to persevere and keep going” is “a much better predictor of life outcomes than self-esteem.” She worries that many young people might be inclined to simply give up in this job market. “You’d think if people are more individualistic, they’d be more independent,” she told me. “But it’s not really true. There’s an element of entitlement—they expect people to figure things out for them.” 

Ron Alsop, a former reporter for The Wall Street Journal and the author of The Trophy Kids Grow Up: How the Millennial Generation Is Shaking Up the Workplace, says a combination of entitlement and highly structured childhood has resulted in a lack of independence and entrepreneurialism in many 20-somethings. They’re used to checklists, he says, and “don’t excel at leadership or independent problem solving.” Alsop interviewed dozens of employers for his book, and concluded that unlike previous generations, Millennials, as a group, “need almost constant direction” in the workplace. “Many flounder without precise guidelines but thrive in structured situations that provide clearly defined rules.” 

All of these characteristics are worrisome, given a harsh economic environment that requires perseverance, adaptability, humility, and entrepreneurialism. Perhaps most worrisome, though, is the fatalism and lack of agency that both Twenge and Alsop discern in today’s young adults. Trained throughout childhood to disconnect performance from reward, and told repeatedly that they are destined for great things, many are quick to place blame elsewhere when something goes wrong, and inclined to believe that bad situations will sort themselves out—or will be sorted out by parents or other helpers. 

In his remarks at last year’s commencement, in May, The New York Times reported, University of Connecticut President Michael Hogan addressed the phenomenon of students’ turning down jobs, with no alternatives, because they didn’t feel the jobs were good enough. “My first word of advice is this,” he told the graduates. “Say yes. In fact, say yes as often as you can. Saying yes begins things. Saying yes is how things grow. Saying yes leads to new experiences, and new experiences will lead to knowledge and wisdom. Yes is for young people, and an attitude of yes is how you will be able to go forward in these uncertain times.” 

Larry Druckenbrod, the university’s assistant director of career services, told me last fall, “This is a group that’s done résumé building since middle school. They’ve been told they’ve been preparing to go out and do great things after college. And now they’ve been dealt a 180.” For many, that’s led to “immobilization.” Druckenbrod said that about a third of the seniors he talked to that semester were seriously looking for work; another third were planning to go to grad school. The final third, he said, were “not even engaging with the job market—these are the ones whose parents have already said, ‘Just come home and live with us.’” 

According to a recent Pew survey, 10 percent of adults younger than 35 have moved back in with their parents as a result of the recession. But that’s merely an acceleration of a trend that has been under way for a generation or more. By the middle of the aughts, for instance, the percentage of 26-year-olds living with their parents reached 20 percent, nearly double what it was in 1970. Well before the recession began, this generation of young adults was less likely to work, or at least work steadily, than other recent generations. Since 2000, the percentage of people age 16 to 24 participating in the labor force has been declining (from 66 percent to 56 percent across the decade). Increased college attendance explains only part of the shift; the rest is a puzzle. Lingering weakness in the job market since 2001 may be one cause. Twenge believes the propensity of this generation to pursue “dream” careers that are, for most people, unlikely to work out may also be partly responsible. (In 2004, a national survey found that about one out of 18 college freshmen expected to make a living as an actor, musician, or artist.) 

Whatever the reason, the fact that so many young adults weren’t firmly rooted in the workforce even before the crash is deeply worrying. It means that a very large number of young adults entered the recession already vulnerable to all the ills that joblessness produces over time. It means that for a sizeable proportion of 20- and 30-somethings, the next few years will likely be toxic. 

No young people were present at a seminar for the unemployed held on November 4 in Reading, Pennsylvania, a blue-collar city about 60 miles west of Philadelphia. The meeting was organized by a regional nonprofit, Joseph’s People, and held in the basement of the St. Catharine’s parish center. All 30 or so attendees, sitting around a U-shaped table, looked to be 40 or older. But one middle-aged man, one of the first to introduce himself to the group, said he and his wife were there on behalf of their son, Errol. “He’s so disgusted that he didn’t want to come,” the man said. “He doesn’t know what to do, and we don’t either.” 

I talked to Errol a few days later. He is 28 and has a gentle, straightforward manner. He graduated from high school in 1999 and has lived with his parents since then. He worked in a machine shop for a couple of years after school, and has also held jobs at a battery factory, a sandpaper manufacturer, and a restaurant, where he was a cook. The restaurant closed in June 2008, and apart from a few days of work through temp agencies, he hasn’t had a job since. 

He calls in to a few temp agencies each week to let them know he’s interested in working, and checks the newspaper for job listings every Sunday. Sometimes he goes into CareerLink, the local unemployment office, to see if it has any new listings. He does work around the house, or in the small machine shop he’s set up in the garage, just to fill his days, and to try to keep his skills up. 

“I was thinking about moving,” he said. “I’m just really not sure where. Other places where I traveled, I didn’t really see much of a difference with what there was here.” He’s still got a few thousand dollars in the bank, which he saved when he was working as a machinist, and is mostly living off that; he’s been trading penny stocks to try to replenish those savings. 

I asked him what he foresaw for his working life. “As far as my job position,” he said, “I really don’t know what I want to do yet. I’m not sure.” When he was little, he wanted to be a mechanic, and he did enjoy the machine trade. But now there was hardly any work to be had, and what there was paid about the same as Walmart. “I don’t think there’s any way that you can have a job that you can think you can retire off of,” he said. “I think everyone’s going to have to transfer to another job.” He said the only future he could really imagine for himself now was just moving from job to job, with no career to speak of. “That’s what I think,” he said. “I don’t want to.” 

Men and Family in a Jobless Age


In her classic sociology of the Depression, The Unemployed Man and His Family, Mirra Komarovsky vividly describes how joblessness strained—and in many cases fundamentally altered—family relationships in the 1930s. During 1935 and 1936, Komarovsky and her research team interviewed the members of 59 white middle-class families in which the husband and father had been out of work for at least a year. Her research revealed deep psychological wounds. “It is awful to be old and discarded at 40,” said one father. “A man is not a man without work.” Another said plainly, “During the depression I lost something. Maybe you call it self-respect, but in losing it I also lost the respect of my children, and I am afraid I am losing my wife.” Noted one woman of her husband, “I still love him, but he doesn’t seem as ‘big’ a man.” 

Taken together, the stories paint a picture of diminished men, bereft of familial authority. Household power—over children, spending, and daily decisions of all types—generally shifted to wives over time (and some women were happier overall as a result). Amid general anxiety, fears of pregnancy, and men’s loss of self-worth and loss of respect from their wives, sex lives withered. Socializing all but ceased as well, a casualty of poverty and embarrassment. Although some men embraced family life and drew their wife and children closer, most became distant. Children described their father as “mean,” “nasty,” or “bossy,” and didn’t want to bring friends around, for fear of what he might say. “There was less physical violence towards the wife than towards the child,” Komarovsky wrote. 

In the 70 years that have passed since the publication of The Unemployed Man and His Family, American society has become vastly more wealthy, and a more comprehensive social safety net—however frayed it may seem—now stretches beneath it. Two-earner households have become the norm, cushioning the economic blow of many layoffs. And of course, relationships between men and women have evolved. Yet when read today, large parts of Komarovsky’s book still seem disconcertingly up-to-date. All available evidence suggests that long bouts of unemployment—particularly male unemployment—still enfeeble the jobless and warp their families to a similar degree, and in many of the same ways. 

Andrew Oswald, an economist at the University of Warwick, in the U.K., and a pioneer in the field of happiness studies, says no other circumstance produces a larger decline in mental health and well-being than being involuntarily out of work for six months or more. It is the worst thing that can happen, he says, equivalent to the death of a spouse, and “a kind of bereavement” in its own right. Only a small fraction of the decline can be tied directly to losing a paycheck, Oswald says; most of it appears to be the result of a tarnished identity and a loss of self-worth. Unemployment leaves psychological scars that remain even after work is found again, and, because the happiness of husbands and the happiness of wives are usually closely related, the misery spreads throughout the home. 

Especially in middle-aged men, long accustomed to the routine of the office or factory, unemployment seems to produce a crippling disorientation. At a series of workshops for the unemployed that I attended around Philadelphia last fall, the participants were overwhelmingly male, and the men in particular described the erosion of their identities, the isolation of being jobless, and the indignities of downward mobility. 

Over lunch I spoke with one attendee, Gus Poulos, a Vietnam-era veteran who had begun his career as a refrigeration mechanic before going to night school and becoming an accountant. He is trim and powerfully built, and looks much younger than his 59 years. For seven years, until he was laid off in December 2008, he was a senior financial analyst for a local hospital. 

Poulos said that his frustration had built and built over the past year. “You apply for so many jobs and just never hear anything,” he told me. “You’re one of my few interviews. I’m just glad to have an interview with anybody, even a magazine.” Poulos said he was an optimist by nature, and had always believed that with preparation and hard work, he could overcome whatever life threw at him. But sometime in the past year, he’d lost that sense, and at times he felt aimless and adrift. “That’s never been who I am,” he said. “But now, it’s who I am.” 

Recently he’d gotten a part-time job as a cashier at Walmart, for $8.50 an hour. “They say, ‘Do you want it?’ And in my head, I thought, ‘No.’ And I raised my hand and said, ‘Yes.’” Poulos and his wife met when they were both working as supermarket cashiers, four decades earlier—it had been one of his first jobs. “Now, here I am again.” 

Poulos’s wife is still working—she’s a quality-control analyst at a food company—and that’s been a blessing. But both are feeling the strain, financial and emotional, of his situation. She commutes about 100 miles every weekday, which makes for long days. His hours at Walmart are on weekends, so he doesn’t see her much anymore and doesn’t have much of a social life. 

Some neighbors were at the Walmart a couple of weeks ago, he said, and he rang up their purchase. “Maybe they were used to seeing me in a different setting,” he said—in a suit as he left for work in the morning, or walking the dog in the neighborhood. Or “maybe they were daydreaming.” But they didn’t greet him, and he didn’t say anything. He looked down at his soup, pushing it around the bowl with his spoon for a few seconds before looking back up at me. “I know they knew me,” he said. “I’ve been in their home.” 

The weight of this recession has fallen most heavily upon men, who’ve suffered roughly three-quarters of the 8 million job losses since the beginning of 2008. Male-dominated industries (construction, finance, manufacturing) have been particularly hard-hit, while sectors that disproportionately employ women (education, health care) have held up relatively well. In November, 19.4 percent of all men in their prime working years, 25 to 54, did not have jobs, the highest figure since the Bureau of Labor Statistics began tracking the statistic in 1948. At the time of this writing, it looks possible that within the next few months, for the first time in U.S. history, women will hold a majority of the country’s jobs. 

In this respect, the recession has merely intensified a long-standing trend. Broadly speaking, the service sector, which employs relatively more women, is growing, while manufacturing, which employs relatively more men, is shrinking. The net result is that men have been contributing a smaller and smaller share of family income. 

“Traditional” marriages, in which men engage in paid work and women in homemaking, have long been in eclipse. Particularly in blue-collar families, where many husbands and wives work staggered shifts, men routinely handle a lot of the child care today. Still, the ease with which gender bends in modern marriages should not be overestimated. When men stop doing paid work—and even when they work less than their wives—marital conflict usually follows. 

Last March, the National Domestic Violence Hotline received almost half again as many calls as it had one year earlier; as was the case in the Depression, unemployed men are vastly more likely to beat their wives or children. More common than violence, though, is a sort of passive-aggressiveness. In Identity Economics, the economists George Akerlof and Rachel Kranton find that among married couples, men who aren’t working at all, despite their free time, do only 37 percent of the housework, on average. And some men, apparently in an effort to guard their masculinity, actually do less housework after becoming unemployed.

Many working women struggle with the idea of partners who aren’t breadwinners. “We’ve got this image of Archie Bunker sitting at home, grumbling and acting out,” says Kathryn Edin, a professor of public policy at Harvard, and an expert on family life. “And that does happen. But you also have women in whole communities thinking, ‘This guy’s nothing.’” Edin’s research in low-income communities shows, for instance, that most working women whose partner stayed home to watch the kids—while very happy with the quality of child care their children’s father provided—were dissatisfied with their relationship overall. “These relationships were often filled with conflict,” Edin told me. Even today, she says, men’s identities are far more defined by their work than women’s, and both men and women become extremely uncomfortable when men’s work goes away. 

The national divorce rate fell slightly in 2008, and that’s not unusual in a recession: divorce is expensive, and many couples delay it in hard times. But joblessness corrodes marriages, and makes divorce much more likely down the road. According to W. Bradford Wilcox, the director of the National Marriage Project at the University of Virginia, the gender imbalance of the job losses in this recession is particularly noteworthy, and—when combined with the depth and duration of the jobs crisis—poses “a profound challenge to marriage,” especially in lower-income communities. It may sound harsh, but in general, he says, “if men can’t make a contribution financially, they don’t have much to offer.” Two-thirds of all divorces are legally initiated by women. Wilcox believes that over the next few years, we may see a long wave of divorces, washing no small number of discarded and dispirited men back into single adulthood. 

Among couples without college degrees, says Edin, marriage has become an “increasingly fragile” institution. In many low-income communities, she fears it is being supplanted as a social norm by single motherhood and revolving-door relationships. As a rule, fewer people marry during a recession, and this one has been no exception. But “the timing of this recession coincides with a pretty significant cultural change,” Edin says: a fast-rising material threshold for marrying, but not for having children, in less affluent communities. 

Edin explains that poor and working-class couples, after seeing the ravages of divorce on their parents or within their communities, have become more hesitant to marry; they believe deeply in marriage’s sanctity, and try to guard against the possibility that theirs will end in divorce. Studies have shown that even small changes in income have significant effects on marriage rates among the poor and the lower-middle class. “It’s simply not respectable to get married if you don’t have a job—some way of illustrating to your neighbors that you have at least some grasp on some piece of the American pie,” Edin says. Increasingly, people in these communities see marriage not as a way to build savings and stability, but as “a symbol that you’ve arrived.” 

Childbearing is the opposite story. The stigma against out-of-wedlock children has by now largely dissolved in working-class communities—more than half of all new mothers without a college degree are unmarried. For both men and women in these communities, children are commonly seen as a highly desirable, relatively low-cost way to achieve meaning and bolster identity—especially when other opportunities are closed off. Christina Gibson-Davis, a public-policy professor at Duke University, recently found that among adults with no college degree, changes in income have no bearing at all on rates of childbirth. 

“We already have low marriage rates in low-income communities,” Edin told me, “including white communities. And where it’s really hitting now is in working-class urban and rural communities, where you’re just seeing astonishing growth in the rates of nonmarital childbearing. And that would all be fine and good, except these parents don’t stay together. This may be one of the most devastating impacts of the recession.” 

Many children are already suffering in this recession, for a variety of reasons. Among poor families, nutrition can be inadequate in hard times, hampering children’s mental and physical development. And regardless of social class, the stresses and distractions that afflict unemployed parents also afflict their kids, who are more likely to repeat a grade in school, and who on average earn less as adults. Children with unemployed fathers seem particularly vulnerable to psychological problems. 

But a large body of research shows that one of the worst things for children, in the long run, is an unstable family. By the time the average out-of-wedlock child has reached the age of 5, his or her mother will have had two or three significant relationships with men other than the father, and the child will typically have at least one half sibling. This kind of churning is terrible for children—heightening the risks of mental-health problems, troubles at school, teenage delinquency, and so on—and we’re likely to see more and more of it, the longer this malaise stretches on. 

“We could be headed in a direction where, among elites, marriage and family are conventional, but for substantial portions of society, life is more matriarchal,” says Wilcox. The marginalization of working-class men in family life has far-reaching consequences. “Marriage plays an important role in civilizing men. They work harder, longer, more strategically. They spend less time in bars and more time in church, less with friends and more with kin. And they’re happier and healthier.” 

Communities with large numbers of unmarried, jobless men take on an unsavory character over time. Edin’s research team spent part of last summer in Northeast and South Philadelphia, conducting in-depth interviews with residents. She says she was struck by what she saw: “These white working-class communities—once strong, vibrant, proud communities, often organized around big industries—they’re just in terrible straits. The social fabric of these places is just shredding. There’s little engagement in religious life, and the old civic organizations that people used to belong to are fading. Drugs have ravaged these communities, along with divorce, alcoholism, violence. I hang around these neighborhoods in South Philadelphia, and I think, ‘This is beginning to look like the black inner-city neighborhoods we’ve been studying for the past 20 years.’ When young men can’t transition into formal-sector jobs, they sell drugs and drink and do drugs. And it wreaks havoc on family life. They think, ‘Hey, if I’m 23 and I don’t have a baby, there’s something wrong with me.’ They’re following the pattern of their fathers in terms of the timing of childbearing, but they don’t have the jobs to support it. So their families are falling apart—and often spectacularly.” 

In his 1996 book, When Work Disappears, the Harvard sociologist William Julius Wilson connected the loss of jobs from inner cities in the 1970s to the many social ills that cropped up after that. “The consequences of high neighborhood joblessness,” he wrote, 

are more devastating than those of high neighborhood poverty. A neighborhood in which people are poor but employed is different from a neighborhood in which many people are poor and jobless. Many of today’s problems in the inner-city ghetto neighborhoods—crime, family dissolution, welfare, low levels of social organization, and so on—are fundamentally a consequence of the disappearance of work.


In the mid-20th century, most urban black men were employed, many of them in manufacturing. But beginning in the 1970s, as factories moved out of the cities or closed altogether, male unemployment began rising sharply. Between 1973 and 1987, the percentage of black men in their 20s working in manufacturing fell from roughly 37.5 percent to 20 percent. As inner cities shed manufacturing jobs, men who lived there, particularly those with limited education, had a hard time making the switch to service jobs. Service jobs and office work of course require different interpersonal skills and different standards of self-presentation from those that blue-collar work demands, and movement from one sector to the other can be jarring. What’s more, Wilson’s research shows, downwardly mobile black men often resented the new work they could find, and displayed less flexibility on the job than, for instance, first-generation immigrant workers. As a result, employers began to prefer hiring women and immigrants, and a vicious cycle of resentment, discrimination, and joblessness set in. 

It remains to be seen whether larger swaths of the country, as male joblessness persists, will eventually come to resemble the inner cities of the 1970s and ’80s. In any case, one of the great catastrophes of the past decade, and in particular of this recession, is the slippage of today’s inner cities back toward the depths of those brutal years. Urban minorities tend to be among the first fired in a recession, and the last rehired in a recovery. Overall, black unemployment stood at 15.6 percent in November; among Hispanics, that figure was 12.7 percent. Even in New York City, where the financial sector, which employs relatively few blacks, has shed tens of thousands of jobs, unemployment has increased much faster among blacks than it has among whites. 

In June 1999, the journalist Ellis Cose wrote in Newsweek that it was then “the best time ever” to be black in America. He ticked through the reasons: employment was up, murders and out-of-wedlock births down; educational attainment was rising, and poverty less common than at any time since 1967. Middle-class black couples were slowly returning to gentrifying inner-city neighborhoods. “Even for some of the most persistently unfortunate—uneducated black men between 16 and 24—jobs are opening up,” Cose wrote. 

But many of those gains are now imperiled. Late last year, unemployment among black teens ages 16 to 19 was nearly 50 percent, and the unemployment rate for black men age 20 or older was almost 17 percent. With so few jobs available, Wilson told me, “many black males will give up and drop out of the labor market, and turn more to the underground economy. And it will be very difficult for these people”—especially those who acquire criminal records—“to reenter the labor market in any significant way.” Glen Elder, the sociologist at the University of North Carolina, who’s done field work in Baltimore, said, “At a lower level of skill, if you lose a job and don’t have fathers or brothers with jobs—if you don’t have a good social network—you get drawn back into the street. There’s a sense in the kids I’ve studied that they lost everything they had, and can’t get it back.” 

In New York City, 18 percent of low-income blacks and 26 percent of low-income Hispanics reported having lost their job as a result of the recession in a July survey by the Community Service Society. More still had had their hours or wages reduced. About one in seven low-income New Yorkers often skipped meals in 2009 to save money, and one in five had had the gas, electricity, or telephone turned off. Wilson argues that once neighborhoods become socially dysfunctional, it takes a long period of unbroken good times to undo the damage—and they can backslide very quickly and steeply. “One problem that has plagued the black community over the years is resignation,” Wilson said—a self-defeating “set of beliefs about what to expect from life and how to respond,” passed from parent to child. “And I think there was sort of a feeling that norms of resignation would weaken somewhat with the Obama election. But these hard economic times could reinforce some of these norms.” 

Wilson, age 74, is a careful scholar, who chooses his words precisely and does not seem given to overstatement. But he sounded forlorn when describing the “very bleak” future he sees for the neighborhoods that he’s spent a lifetime studying. There is “no way,” he told me, “that the extremely high jobless rates we’re seeing won’t have profound consequences for the social organization of inner-city neighborhoods.” Neighborhood-specific statistics on drug addiction, family dysfunction, gang violence, and the like take time to compile. But Wilson believes that once we start getting detailed data on the conditions of inner-city life since the crash, “we’re going to see some horror stories”—and in many cases a relapse into the depths of decades past. “The point I want to emphasize,” Wilson said, “is that we should brace ourselves.” 

The Social Fabric


No one tries harder than the jobless to find silver linings in this national economic disaster. Many of the people I spoke with for this story said that unemployment, while extremely painful, had improved them in some ways: they’d become less materialistic and more financially prudent; they were using free time to volunteer more, and were enjoying that; they were more empathetic now, they said, and more aware of the struggles of others. 

In limited respects, perhaps the recession will leave society better off. At the very least, it’s awoken us from our national fever dream of easy riches and bigger houses, and put a necessary end to an era of reckless personal spending. Perhaps it will leave us humbler, and gentler toward one another, too—at least in the long run. A recent paper by the economists Paola Giuliano and Antonio Spilimbergo shows that generations that endured a recession in early adulthood became more concerned about inequality and more cognizant of the role luck plays in life. And in his book, Children of the Great Depression, Glen Elder wrote that adolescents who experienced hardship in the 1930s became especially adaptable, family-oriented adults; perhaps, as a result of this recession, today’s adolescents will be pampered less and counted on for more, and will grow into adults who feel less entitled than recent generations. 

But for the most part, these benefits seem thin, uncertain, and far off. In The Moral Consequences of Economic Growth, the economic historian Benjamin Friedman argues that both inside and outside the U.S., lengthy periods of economic stagnation or decline have almost always left society more mean-spirited and less inclusive, and have usually stopped or reversed the advance of rights and freedoms. A high level of national wealth, Friedman writes, “is no bar to a society’s retreat into rigidity and intolerance once enough of its citizens lose the sense that they are getting ahead.” When material progress falters, Friedman concludes, people become more jealous of their status relative to others. Anti-immigrant sentiment typically increases, as does conflict between races and classes; concern for the poor tends to decline. 

Social forces take time to grow strong, and time to dissipate again. Friedman told me that the phenomenon he’s studied “is not about business cycles … It’s not about people comparing where they are now to where they were a year ago.” The relevant comparisons are much broader: What opportunities are available to me, relative to those of my parents? What opportunities do my children have? What is the trajectory of my career? 

It’s been only about two years since this most recent recession started, but then again, most people hadn’t been getting ahead for a decade. In a Pew survey in the spring of 2008, more than half of all respondents said that over the past five years, they either hadn’t moved forward in life or had actually fallen backward, the most downbeat assessment that either Pew or Gallup has ever recorded, in nearly a half century of polling. Median household income in 2008 was the lowest since 1997, adjusting for inflation. “On the latest income data,” Friedman said, “we’re 11 years into a period of decline.” By the time we get out of the current downturn, we’ll likely be “up to a decade and a half. And that’s surely enough.” 

Income inequality usually falls during a recession, and the economist and happiness expert Andrew Clark says that trend typically provides some emotional salve to the poor and the middle class. (Surveys, lab experiments, and brain readings all show that, for better or worse, schadenfreude is a powerful psychological force: at any fixed level of income, people are happier when the income of others is reduced.) But income inequality hasn’t shrunk in this recession. In 2007–08, the most recent year for which data is available, it widened. 

Indeed, this period of economic weakness may reinforce class divides, and decrease opportunities to cross them—especially for young people. The research of Till von Wachter, the economist at Columbia University, suggests that not all people graduating into a recession see their life chances dimmed: those with degrees from elite universities catch up fairly quickly to where they otherwise would have been if they’d graduated in better times; it’s the masses beneath them that are left behind. Princeton’s 2009 graduating class found more jobs in financial services than in any other industry. According to Princeton’s career-services director, Beverly Hamilton-Chandler, campus visits and hiring by the big investment banks have been down, but that decline has been partly offset by an uptick in recruiting by hedge funds and boutique financial firms.

In the Internet age, it is particularly easy to see the bile that has always lurked within American society. More difficult, in the moment, is discerning precisely how these lean times are affecting society’s character. In many respects, the U.S. was more socially tolerant entering this recession than at any time in its history, and a variety of national polls on social conflict since then have shown mixed results. Signs of looming class warfare or racial conflagration are not much in evidence. But some seeds of discontent are slowly germinating. The town-hall meetings last summer and fall were contentious, often uncivil, and at times given over to inchoate outrage. One National Journal poll in October showed that whites (especially white men) were feeling particularly anxious about their future and alienated by the government. We will have to wait and see exactly how these hard times will reshape our social fabric. But they certainly will reshape it, and all the more so the longer they extend. 

A slowly sinking generation; a remorseless assault on the identity of many men; the dissolution of families and the collapse of neighborhoods; a thinning veneer of national amity—the social legacies of the Great Recession are still being written, but their breadth and depth are immense. As problems, they are enormously complex, and their solutions will be equally so. 

Of necessity, those solutions must include measures to bolster the economy in the short term, and to clear the way for faster long-term growth; to support the jobless today, and to ensure that we are creating the kinds of jobs (and the kinds of skills within the population) that can allow for a more broadly shared prosperity in the future. A few of the solutions—like more-aggressive support for the unemployed, and employer tax credits or other subsidies to get people back to work faster—are straightforward and obvious, or at least they should be. Many are not. 

At the very least, though, we should make the return to a more normal jobs environment an unflagging national priority. The stock market has rallied, the financial system has stabilized, and job losses have slowed; by the time you read this, the unemployment rate might be down a little. Yet the difference between “turning the corner” and a return to any sort of normalcy is vast. 

We are in a very deep hole, and we’ve been in it for a relatively long time already. Concerns over deficits are understandable, but in these times, our bias should be toward doing too much rather than doing too little. That implies some small risk to the government’s ability to continue borrowing in the future; and it implies somewhat higher taxes in the future too. But that seems a trade worth making. We are living through a slow-motion social catastrophe, one that could stain our culture and weaken our nation for many, many years to come. We have a civic—and indeed a moral—responsibility to do everything in our power to stop it now, before it gets even worse. 


Dead Studies 101

Posted by steveneidman on February 16, 2010

Management Secrets of the Grateful Dead

by Joshua Green

Fans of the Grateful Dead are big believers in serendipity. So a certain knowing approval greeted the news last year that the band would be donating its copious archive—four decades’ worth of commercial recordings and videotapes, press clippings, stage sets, business records, and a mountain of correspondence encompassing everything from elaborately decorated fan letters to a thank-you note for a fund-raising performance handwritten on White House stationery by President Barack Obama—to the University of California at Santa Cruz. Santa Cruz was understood to be a fitting home not only because it exemplifies the spirit of the counterculture as much as, and perhaps even more than, Berkeley and Stanford, which also bid for the archive, but because the school’s faculty includes an ethnomusicologist and composer named Fredric Lieberman, who is prominent among a curious breed in the academy: scholars who teach and study the Grateful Dead.

It’s worth noting right up front the hurdles Dead Studies faces as a field of serious inquiry. To begin with, the news that it exists at all tends to elicit grinning disbelief; a corollary challenge is the assumptions people carry about its practitioners, such as my own expectation when arranging to visit Lieberman last year that I would encounter an amiable hippie, probably of late-Boomer vintage and wearing a thinning ponytail. Rough mental image: Wavy Gravy with a Ph.D.

Lieberman is nothing of the sort. A small man with parchment skin, wisps of white hair, and large round glasses, he could have looked more professorial only by wielding a Dunhill pipe. His interest in the Grateful Dead, he explained, had arisen largely by chance. In the 1960s, he studied under the noted ethnomusicologist Charles Seeger (father of Pete Seeger) at UCLA, and came to share his mentor’s dismay at the academy’s neglect of popular and non-Western music. Lieberman went on to teach a series of classes in American vernacular music and, though he held no particular fondness for the Grateful Dead, became one of the first academics to teach the band’s music, in the early 1970s.

In 1983, the Dead’s drummer, Mickey Hart, asked Lieberman to help catalog his vast collection of instruments. When the project developed into a larger study of world percussion, Hart invited Lieberman to join him on tour. “I thought it would be interesting to treat it as an ethnomusicological field trip,” Lieberman told me. For some years, when he wasn’t teaching he traveled with the band, introducing Hart to ethnomusicologists by day and attending shows by night. If you squinted hard during any number of the Dead’s most famous shows in the 1980s and ’90s, you might have glimpsed the unlikely spectacle of an ethnomusicologist crouching in earnest concentration behind the drummer, going about his fieldwork.

Lieberman apologized for not being able to show me the archive. The whole thing was under lock and key in a Northern California warehouse whose location was a closely held secret—a precaution against overzealous fans’ plundering a hoard that many would regard as akin to Tutankhamen’s treasure. On March 5, the New-York Historical Society will open the first large-scale exhibit of material from the Dead Archive. Then, if all goes as planned, the collection will become the centerpiece of a new campus library at Santa Cruz slated to open later this year. Among other things, it is hoped that the Dead Archive will galvanize a nascent group of scholars across many disciplines who, like Lieberman, study the Grateful Dead—not just musicologists but historians, sociologists, philosophers, psychologists, and even business and management theorists. Some have risked their academic standing in the belief that the band and the larger social phenomenon that surrounds it are far more significant than is commonly understood. Lately, the world has been changing in ways that make that not so hard to believe.

One of the first academic articles on the Grateful Dead appeared in the Winter 1972 issue of the Journal of Psychedelic Drugs, a periodical for medical professionals, and drew on emergency-treatment records to compare drug use at a Grateful Dead concert with that at a Led Zeppelin concert. (Verdict: Deadheads favored LSD, Zeppelin fans alcohol.) The popular association between the Dead and a drug-fueled counterculture did little to encourage respectable academic endeavor.

As the band’s following grew, the notion that it might have something to offer scholars, particularly in the social sciences, became somewhat less far-fetched, though still not without professional risk. In the late 1980s, Rebecca G. Adams, a sociologist at the University of North Carolina at Greensboro, who studies friendships formed across distances, noticed deep bonds between Deadheads. The bonds seemed to belie the idea, then popular among leading social thinkers, that communities based on common interest, whose members do not live near each other, lack emotional and moral depth—that Deadheads might belong to what sociologists call a “lifestyle enclave,” but couldn’t possibly form meaningful relationships. Adams brought a class on tour with the Dead—an opportunity, she thought, to teach classical theory while letting students study a cutting-edge contemporary community.

She became instantly famous, among a small group of scholars, and then, suddenly, among a much larger group of people. One day, without warning, Senator Robert Byrd, the histrionic and prodigiously opinionated West Virginian, gave a speech decrying what he considered an appalling decline in the standards for higher education, and cited Adams’s class as an example. Adams had unwittingly placed herself in the crosshairs of the culture wars and was beset by, among other things, an inquiry from the president of North Carolina’s state university system. Though she survived with help from her chancellor and her department head, and though the question fell squarely within her specialty, Adams was politely discouraged from pursuing her line of inquiry. “I was advised to concentrate on the more respectable areas of my research,” she told me.

Other aspects of the band nevertheless continued to invite academic examination. Musicologists showed interest, although the band’s sprawling repertoire and tendency to improvise posed a significant challenge. Lieberman says that fully absorbing the Dead’s music could take years, and he has noted its similarities with South Indian classical music, with its complex notational system and highly formalized four-hour concerts. Engineers studied the band’s sophisticated sound system, radical at the time but widely emulated today. Even legal scholars took note, some contending that the American criminal-justice system, including the courts, unfairly profiles Deadhead defendants and has, on occasion, treated fandom as evidence of mental illness.

Oddly enough, the Dead’s influence on the business world may turn out to be a significant part of its legacy. Without intending to—while intending, in fact, to do just the opposite—the band pioneered ideas and practices that were subsequently embraced by corporate America. One was to focus intensely on its most loyal fans. It established a telephone hotline to alert them to its touring schedule ahead of any public announcement, reserved for them some of the best seats in the house, and capped the price of tickets, which the band distributed through its own mail-order house. If you lived in New York and wanted to see a show in Seattle, you didn’t have to travel there to get tickets—and you could get really good tickets, without even camping out. “The Dead were masters of creating and delivering superior customer value,” Barry Barnes, a business professor at the H. Wayne Huizenga School of Business and Entrepreneurship at Nova Southeastern University, in Florida, told me. Treating customers well may sound like common sense. But it represented a break from the top-down ethos of many organizations in the 1960s and ’70s. Only in the 1980s, faced with competition from Japan, did American CEOs and management theorists widely adopt a customer-first orientation.

As Barnes and other scholars note, the musicians who constituted the Dead were anything but naive about their business. They incorporated early on, and established a board of directors (with a rotating CEO position) consisting of the band, road crew, and other members of the Dead organization. They founded a profitable merchandising division and, peace and love notwithstanding, did not hesitate to sue those who violated their copyrights. But they weren’t greedy, and they adapted well. They famously permitted fans to tape their shows, ceding a major revenue source in potential record sales. According to Barnes, the decision was not entirely selfless: it reflected a shrewd assessment that tape sharing would widen their audience, a ban would be unenforceable, and anyone inclined to tape a show would probably spend money elsewhere, such as on merchandise or tickets. The Dead became one of the most profitable bands of all time.

It’s precisely this flexibility that Barnes believes holds the greatest lessons for business—he calls it “strategic improvisation.” It isn’t hard to spot a few of its recent applications. Giving something away and earning money on the periphery is the same idea proffered by Wired editor Chris Anderson in his recent best-selling book, Free: The Future of a Radical Price. Voluntarily or otherwise, it is becoming the blueprint for more and more companies doing business on the Internet. Today, everybody is intensely interested in understanding how communities form across distances, because that’s what happens online. Far from being a subject of controversy, Rebecca Adams’s next book on Deadhead sociology has publishers lining up.

Much of the talk about “Internet business models” presupposes that they are blindingly new and different. But the connection between the Internet and the Dead’s business model was made 15 years ago by the band’s lyricist, John Perry Barlow, who became an Internet guru. Writing in Wired in 1994, Barlow posited that in the information economy, “the best way to raise demand for your product is to give it away.” As Barlow explained to me: “What people today are beginning to realize is what became obvious to us back then—the important correlation is the one between familiarity and value, not scarcity and value. Adam Smith taught that the scarcer you make something, the more valuable it becomes. In the physical world, that works beautifully. But we couldn’t regulate [taping at] our shows, and you can’t online. The Internet doesn’t behave that way. But here’s the thing: if I give my song away to 20 people, and they give it to 20 people, pretty soon everybody knows me, and my value as a creator is dramatically enhanced. That was the value proposition with the Dead.” The Dead thrived for decades, in good times and bad. In a recession, Barnes says, strategic improvisation is more important than ever. “If you’re going to survive this economic downturn, you better be able to turn on a dime,” he says. “The Dead were exemplars.” It can be only a matter of time until Management Secrets of the Grateful Dead or some similar title is flying off the shelves of airport bookstores everywhere.

Recently, Barnes has been lecturing to business leaders about strategic improvisation. He’s been a big hit. “People are just so tired of hearing about GE and Southwest Airlines,” he admits. “They get really excited to hear about the Grateful Dead.”

Until now, scholars who studied the Dead were limited to what was available in the public domain. Barnes sought access to internal documents more than a decade ago and was turned down. When the Dead Archive opens, he and others expect to gain many new insights, because they’ll finally be able to draw on primary source material—and there’s plenty. For years, unbeknownst to just about everyone, the band’s longtime office manager obsessively stashed away everything that came into her office. The possibilities seem manifold. “From the economics folks to the anthropologists,” Barlow says, “increasing numbers of people are going to make a pilgrimage to the archive to see how this all came together.”

When a famous author or statesman donates his papers to history, the task of studying and making sense of them usually falls to some obvious discipline. That’s not quite the case here. Even with the recent renaissance, Dead scholars are few. The bulk of the expertise lies outside the academy, with ordinary Deadheads. So Santa Cruz library officials have devised a novel approach (some would call it strategic improvisation) to curating the collection. They intend to post as much of it as possible online in the hope that Deadheads—zealous social networkers that they are—will contribute their knowledge, and perhaps material of their own, to help build up the record. With the culture wars of the 1960s finally beginning to subside, sober reflection on a charged era is more feasible than it once was. Today, the Dead are more attraction than liability. The library will seek to become a haven for the study of pop culture since the 1960s, with the Dead Archive anchoring its collection.

“Revolutionaries get vilified, and then, once they get older, they just become cute,” says Steve Gimbel, who is a philosophy professor at Gettysburg College and edited the recent collection The Grateful Dead and Philosophy. “Think of Oscar Wilde. Once they’re not dangerous anymore, it’s okay to discuss them in serious ways.”


The Two Faces of Michael Mukasey

Posted by steveneidman on February 15, 2010

Michael Mukasey: Then and now

To promote his partisan fear-mongering attacks, the former Judge invokes the very arguments he once scorned

Glenn Greenwald

Former Bush Attorney General Michael Mukasey has become the leading spokesman for a Cheneyite national security attack, which relies on scaring Americans into believing that Obama is endangering their lives in those rare instances when he deviates from Bush’s Terrorism approach.  Toward that end, Mukasey has yet another fear-mongering Op-Ed, this time on today’s oh-so-liberal Washington Post Op-Ed Page (alongside Michael Gerson’s stirring tribute to the virtues of GITMO, Bill Kristol’s call for regime change in Iran, a warning from Blackstone Chairman Stephen Schwarzman to stop being so mean to banks, and a Charles Krauthammer column blaming Obama for something or other).  Mukasey specifically accuses the Obama administration of losing valuable intelligence by allowing Abdulmutallab access to a lawyer, and insists that the accused Christmas Day bomber had no constitutional rights because — despite his being detained in the U.S. — he is merely an “enemy combatant.” 

But when Mukasey was a federal judge, he made the opposite arguments.  In 2002, the Bush administration detained Jose Padilla at Chicago’s O’Hare Airport, publicly labeled him The Dirty Bomber, declared him an “enemy combatant,” transferred him to military custody, and refused to charge him or even to allow him access to a lawyer.  When a lawsuit was brought on Padilla’s behalf, Mukasey was the assigned judge, and he ordered the Bush administration to allow Padilla access to a lawyer.  When the Bush administration dithered and basically refused (asking Mukasey to reconsider), Mukasey issued a lengthy Opinion and Order threatening to impose the conditions himself and explaining that Padilla’s constitutional right to a lawyer was clear and nonnegotiable.  So resounding was Mukasey’s defense of Padilla’s right to a lawyer that, when he was initially nominated as Attorney General, many anti-Bush legal analysts, including me, cited Mukasey’s ruling in Padilla to argue that he was one of the better choices given the other right-wing alternatives.  Indeed, I analyzed his decision in Padilla at length to argue that, at least in that case, Mukasey “displayed an impressive allegiance to the rule of law and constitutional principles over fealty to claims of unlimited presidential power,” and that he “was more than willing to defy the Bush administration and not be intimidated by threats that enforcing the rule of law would prevent the President from stopping the Terrorists.” 

What’s most striking is that, in the Padilla case, Mukasey emphatically rejected the very arguments he is now making to attack Obama.  The Bush DOJ repeatedly insisted that Mukasey — by allowing Padilla access to a lawyer — would destroy their ability to interrogate him and obtain life-saving intelligence, thus endangering all Americans.  As Mukasey put it:  the Bush DOJ is “none too subtle in cautioning this court against going too far in the protection of this detainee’s rights, suggesting at one point that permitting Padilla to consult with a lawyer ‘risks that plans for future attacks will go undetected‘.”  Incredibly, that argument — which Mukasey decisively rejected back then — is exactly the one he’s now making against Obama.  Listen to what the Bush administration told Mukasey in demanding that he withdraw his order directing that Padilla be given access to a lawyer — this is what Mukasey quoted from a Bush DOJ brief and refused to embrace back then: 

DIA’s approach to interrogation is largely dependent upon creating an atmosphere of dependency and trust between the subject and the interrogator. Developing the kind of relationship of trust and dependency necessary for effective interrogations is a process that can take a significant amount of [redacted]. There are numerous examples of situations where interrogators have been unable to obtain valuable intelligence from a subject until months, or even years, after the interrogation process began. 

Anything that threatens the perceived dependency and trust between the subject and interrogator directly threatens the value of interrogation as an intelligence-gathering tool. Even seemingly minor interruptions can have profound psychological impacts on the delicate subject-interrogator relationship. Any insertion of counsel into the subject-interrogator relationship, for example — even if only for a limited duration or for a specific purpose — can undo months of work and may permanently shut down the interrogation process. Therefore, it is critical to minimize external influences on the interrogation process. . . .

Permitting Padilla any access to counsel may substantially harm our national security interests. As with most detainees, Padilla is unlikely to cooperate if he believes that an attorney will intercede in his detention. . . . Any such delay in Padilla’s case risks that plans for future attacks will go undetected during that period, and that whatever information Padilla may eventually provide will be outdated and more difficult to corroborate. 

Mukasey dismissed all of those fear-mongering claims as speculative hyperbole, and explicitly told the Bush DOJ:  “if the government had permitted Padilla to consult with counsel at the outset, this matter would have been long since decided in this court” — i.e., Mukasey told the Bush DOJ that the dilemma was its own doing because it should have allowed Padilla access to counsel from the start.  Yet in order to try to convince Americans now that Obama is endangering their lives by allowing Abdulmutallab access to counsel, Mukasey resorts to the very fear-mongering that he long ago rejected.  That’s called being a dishonest hack of the lowest order. 

More dishonestly still, Mukasey in today’s Op-Ed claims that he ordered Padilla to have access to counsel only “as a convenience to the court and not for any constitutionally based reason,” and only because Padilla (unlike Abdulmutallab) was a U.S. citizen.  Both of those excuses are blatantly and demonstrably false.  The whole legal basis for Mukasey’s ruling was that (1) he would order Padilla to have access to counsel even if he had believed Bush’s fear-mongering claims because Padilla had a constitutional right to counsel; and (2) the basis for that right is not that Padilla is a citizen, but rather, that all “persons” on U.S. soil have that right.  Just listen to what the Mukasey back then said in order to see how blatantly dishonest the Mukasey of today is (emphasis added): 

Even if the predictions [of the Bush DOJ] were reliably more certain than they in fact are, I would not be free simply to take the counsel of Admiral Jacoby’s fears, however well founded and sincere, and on that basis alone deny Padilla access to a lawyer. There is no dispute that Padilla has the right to bring this petition, and, for the reasons set forth in the Opinion, the statute makes it plain that he has the right to present facts if he chooses to do so. . . . 

Arbitrary deprivation of liberty violates the Due Process Clause, Foucha v. Louisiana, 504 U.S. 71, 80 (1992), which “applies to all ‘persons’ within the United States,” Zadvydas v. Davis, 533 U.S. 678, 693 (2001). . . . [U]nless he has the opportunity to make a submission, this court cannot do what the applicable statutes and the Due Process Clause require it to do: confirm what frankly appears likely from the Mobbs Declaration but cannot be certain if based only on the Mobbs Declaration — that Padilla’s detention is not arbitrary, and that, because his detention is not arbitrary, the President is exercising a power vouchsafed to him by the Constitution. . . . 

The Court in Hamdi took pains to point out that its holding was limited to “the specific context before us — that of the undisputed detention of a citizen during a combat operation undertaken in a foreign country and a determination by the executive that the citizen was allied with enemy forces.” Hamdi, 316 F.3d at 465.  That wise restraint is well worth following in this case by recognizing explicitly the limits of the current holding, and thereby recognizing as well the contrast between this case and Hamdi. Unlike Hamdi, Padilla was detained in this country, and initially by law enforcement officers pursuant to a material witness warrant. He was not captured on a foreign battlefield by soldiers in combat. The prospect of courts second-guessing battlefield decisions, which they have resolutely refused to do, e.g., id. at 474; cf. Stencel Aero Eng’g Corp. v. United States, 431 U.S. 666, 673 (1977), does not loom in this case. 

It’s true that this decision did not address the question of Miranda warnings, but the point is that Mukasey’s reasoning there directly negates what he is now arguing.  Based on those two findings — that (1) there was no clear evidence that allowing access to a lawyer would jeopardize intelligence-gathering and, even if there were, it wouldn’t matter, because (2) Padilla, as someone detained on U.S. soil, had a constitutional right to a lawyer — Mukasey ordered the Bush DOJ to comply with his directive in unusually strong language: 

Lest any confusion remain, this is not a suggestion or a request that Padilla be permitted to consult with counsel, and it is certainly not an invitation to conduct a further “dialogue” about whether he will be permitted to do so. It is a ruling — a determination — that he will be permitted to do so. 

Note, too, that Mukasey insisted that courts have the constitutional obligation to ensure that presidentially ordered detentions “are not arbitrary,” a claim that both the Bush administration and now the Obama administration, in some circumstances, vigorously contest. 

This entire Miranda/Abdulmutallab controversy has been rife with deliberate misconceptions from the start: 

  • the inane notion that super-dangerous Terrorists innocently believe that they’re required to spill their guts if they aren’t given Miranda warnings (recall that the premise of Bush officials, including Mukasey, is that Terrorists are so hardened and Evil that they have to be tortured to get them to speak; the very idea that they would feel compelled to answer all questions unless told they did not have to is laughable on its face);
  • the empirically false claim that defendants stop co-operating — and that interrogations must stop — once they are Mirandized (huge amounts of co-operation from the accused occur once they’ve been Mirandized and have lawyers);
  • the invented allegation that Abdulmutallab was speaking freely until he was Mirandized, at which point he stopped talking;
  • the obviously misleading suggestion that it’s easier to interrogate and convict Terrorists in a military commission system than in civilian courts (the exact opposite has been true, by far); and,
  • the dishonest implication that we somehow lost something by Mirandizing and trying Richard Reid in our civilian court system, which sentenced him to life in prison with little effort, in contrast to the debacles produced by the military commission system.  

The ignorance of media stars about these issues allows fear-mongering politicians to make these claims over and over without challenge (although see Savannah Guthrie’s impressively aggressive, well-informed and effective interrogation of Sen. Kit Bond about this case: it’s the exception that proves the rule, and illustrates what effective adversarial journalism can accomplish).  And much of this is the fault of the Obama administration:  because they themselves have embraced the Bush/Cheney policies of military commissions and indefinite detentions, they’re incapable of articulating any coherent principle for why civilian trials are needed, and are instead reduced to the pitiful spectacle of relying on a “Bush-did-it-too” defense to try to show that they’re sufficiently “tough on Terror” (as though the same administration which Obama spent two years depicting as radical, destructive and lawless is the standard-bearer for how Terrorists should be handled). 

Still, Mukasey’s dishonesty is worse than the standard political/media freak show, both because he knows better and because (as a judge) he renounced the very myths which (as a hardened right-wing partisan) he is now disseminating.  He has become a leading practitioner of the hysterical fear-mongering he once rightly scorned. 

* * * * *  

Long-time commenter DCLaw1 has rejuvenated his excellent blog, InsideOutTheBeltway, and has a typically insightful post on how the media has recycled blatant myths — grounded in sheer ignorance — about Miranda and Abdulmutallab. 


No, Mr. Walt, The Iraq War is Bush’s Fault, Not Israel’s

Posted by steveneidman on February 15, 2010

Rinse, Wash, Repeat

John B. Judis

For the last time, Stephen Walt, Israel did not send the U.S. and Britain into Iraq.

Walt, who blogs for Foreign Policy’s website, recently revived the argument, claiming in a self-congratulatory column titled “I don’t mean to say I told you so, but…” that Tony Blair’s testimony last month before Britain’s Iraq War Commission confirmed that “the Israel lobby … played a key role in the decision to invade Iraq in 2003.” I have read Blair’s testimony. I don’t find it to be proof of anything of the kind; and I don’t think Walt’s accompanying restatement of the argument is any more persuasive than the version he and Mearsheimer put forward in their book.

Walt says that Blair’s statement to the commission “reveals that concerns about Israel were part of the equation [that is, the decision to go to war] and that Israeli officials were involved in those discussions.” Here is what Walt, citing a column in the New Statesman, quotes Blair as saying about his early April 2002 meeting in Crawford, Texas, with George W. Bush:

As I recall that discussion, it was less to do with specifics about what we were going to do on Iraq or, indeed, the Middle East, because the Israel issue was a big, big issue at the time. I think, in fact, I remember, actually, there may have been conversations that we had even with Israelis, the two of us, whilst we were there. So that was a major part of all this.

Now there are at least three problems with the inferences that Walt draws from this statement. First, even if we were to grant that Blair is saying that he and Bush were talking about Israel’s role in or importance to the Iraq invasion, this certainly does not show that the Israel lobby had anything to do with the decision to go to war. Nor, secondly, does it show that the Israeli government pressured the U.S. to go to war. The “conversations” could have easily consisted of the Bush administration informing Israelis of their plans.

But these are minor objections. The real problem is that Walt does not seem to have taken the trouble to read the transcript of Blair’s testimony. If he had, he would have realized that Blair was not talking about how invading Iraq might benefit Israel, but about the conflict then occurring between Israel and the Palestinians. The second intifada had reached a new height with the Passover and Haifa suicide bombings and the beginning of the siege at the Church of the Nativity in Bethlehem, and Blair was concerned that the Bush administration was not actively pursuing the peace process. Blair wanted the administration to put the Arab-Israeli issue on a par with the threat of Iraq. The former prime minister makes this clear in other parts of his testimony. Here is an exchange between Blair and Sir Roderic Lyne:

Lyne: … Just one more point arising from Crawford, but not just from Crawford. You said–you reminded us that the Arab-Israel problem was in a very hot state at Crawford. You said you may even have had some conversations with Israelis from there, and obviously it was something that was a large part of your conversations with President Bush. I think it is right to say–indeed, Jack Straw said it–that you were relentless in trying to persuade the Americans to make more and faster progress on the Middle East peace process. Ultimately, Jack Straw said it was a matter of huge–in his evidence the other day–it was a matter of huge frustration that we weren’t able to achieve something which you had been seeking so strongly …

Blair: … I believe that resolving the Middle East–this is what I work on now–is immensely important, and I think it was difficult, and this is something I have said before on several occasions, it was difficult to persuade President Bush, and, indeed, America actually, that this was such a fundamental question …

Lyne: But surely you must have said to him, “Look, this thing is only really going to have a chance of working well if we can make this progress down the Arab-Israel track before we get there”?

Blair: Well, I was certainly saying to him, “I think this is vital,” and I mean, this was–you could describe me as a broken record through that period …

The talks at Crawford and subsequent discussions eventually led Bush to launch the “road map” for peace. In other words, Blair and Bush were not saying that they had to invade Iraq to assist or appease the Israelis. Nothing that Blair said in his testimony provides the slightest evidence that this was occurring. And it seems clear enough that the discussions Blair and Bush had with the Israelis were not about Iraq but about the peace process.

I am sorry to say that this kind of sloppy research and reasoning is typical of the way that Walt and Mearsheimer deal with the question of whether the Israel lobby influenced the decision to go to war. In their book, they claim that the U.S. would “almost certainly” not have gone to war without the influence of the Israel lobby. That’s a very strong claim, but they do not back it up either in the book or in Walt’s current blogging. Let me briefly deal with their logic here.

There are three ways in which the Israel lobby could have made itself indispensable to the decision to go to war: first, in White House-Pentagon deliberations; second, in significantly influencing the critical Congressional vote in October 2002; and third, in dramatically shaping public opinion. Their argument falls short on all these counts.

White House: To contend that the “Israel lobby” influenced the White House decision to invade—which had more or less been made by the spring of 2002 when Blair visited Crawford—Walt and Mearsheimer expand the “lobby” to include “neoconservative intellectuals” such as Paul Wolfowitz, the Deputy Secretary of Defense. They then imply that Wolfowitz and other neo-conservatives favored regime change in Iraq primarily because it would benefit Israel.  No evidence has surfaced to show that Wolfowitz was acting in this manner.  There were other neo-conservatives in the administration – such as David Wurmser and Douglas Feith – who had in the past explicitly linked regime change in Iraq to Israel’s welfare, but they were not in a decision-making capacity. Indeed, the two people outside of the President who appear most responsible for the decision to invade — Secretary of Defense Donald Rumsfeld and Vice President Dick Cheney — could not be categorized, even by Walt and Mearsheimer’s absurdly broad standards, as part of an Israel lobby.  So while it would be foolish to insist that Israel’s welfare was never discussed or mentioned in deliberations about whether to invade Iraq, there is no basis for saying that the White House decision to invade Iraq was driven by neo-conservative preoccupations with Israel’s security.

Congress: Walt cites my quoting of AIPAC head Howard Kohr’s boast that AIPAC had been “quietly lobbying” Congress to pass the war resolution in October 2002. I don’t doubt that AIPAC officials favored going to war, as did the leaders of some other pro-Israel organizations. But AIPAC did not aggressively lobby for the war resolution the way it lobbied in 1981 against the AWACS surveillance plane sale to Saudi Arabia or recently for refined petroleum sanctions on Iran. I have interviewed AIPAC people and members of other Jewish lobbying organizations on this question, and they say the same thing. It was not a make-or-break legislative priority. And there is very good circumstantial evidence to back this up. Some of AIPAC’s most dependable supporters on the Hill—such as Senators Daniel Inouye and Carl Levin and Representative Jerrold Nadler—opposed the resolution. So, yes, AIPAC probably did “quietly” make its preference known; but it can’t be credited or blamed for the outcome of the vote. And no other pro-Israel or Jewish lobby possesses comparable clout on the Hill.

Public Opinion: Did the Israel lobby have a sine qua non influence on public opinion in favor of the war? If so, one would expect that its influence would at least show up among Jewish Americans, who would be most likely to listen to their arguments. In a 2003 survey, the American Jewish Committee found that 54 percent of Jewish Americans disapproved of going to war with Iraq and only 43 percent approved. At the time, a majority of Americans approved of going to war. So, far from being a leader in pro-war sentiment, American Jews were lagging behind. Walt and Mearsheimer concede this point, but it’s important nonetheless to include it because it is the only other way in which the Israel lobby might have had a decisive effect on the decision to invade, but did not.  

There is, in other words, no basis at all for accepting Walt and Mearsheimer’s contention that, without the Israel lobby, the U.S. would likely not have invaded Iraq.  It’s not anti-Semitic to make these charges–they have quotes and anecdotes in their book–but they don’t add up to proof of any overriding influence. Nor does Walt’s use of Blair’s testimony to the Iraq War Commission. I think it’s time for Walt and Mearsheimer to put this part of their argument to rest.


America: A fearsome foursome

Posted by steveneidman on February 8, 2010

By Edward Luce

The team seen most often in the Oval Office

David Axelrod, senior adviser

A former journalist on the Chicago Tribune who quit to set up a political advertising firm, Mr Axelrod, 54, is Barack Obama’s longest-standing mentor, from his days in Chicago politics. Always at the candidate’s side during the election campaign, he is the chief defender of the Obama brand. Still a journalist at heart, he describes himself as having been “posted to Washington”.

Robert Gibbs, communications chief

The most visible face of the White House for his sardonic daily briefings. Mr Gibbs, 38, is perhaps the least likely member of the circle – he is a career Democratic press officer from Alabama who quit John Kerry’s 2004 presidential campaign and shortly afterwards went to work for Senator Obama. A constant presence during the campaign, he is also seen as a keeper of the flame.

Rahm Emanuel, chief of staff

The best story about Mr Emanuel, 50, concerns the dead fish he delivered to a pollster who displeased him. The least honey-tongued politician in Washington, he is also one of the most effective. Friends say he is relentlessly energetic, critics that he has attention deficit disorder. He has enemies but even detractors concede he may well achieve his aim of becoming the first Jewish speaker of the House of Representatives.

Valerie Jarrett, senior adviser

An old friend of the Obamas, having hired Michelle to work in Chicago politics in the early 1990s, Ms Jarrett, 53, is probably the first family’s most intimate White House confidante. A former businessperson and aide to Richard Daley, mayor of Chicago, she was briefly considered as a candidate to fill Mr Obama’s Senate seat. She was part of the circle he consulted before running for president.

At a crucial stage in the Democratic primaries in late 2007, Barack Obama rejuvenated his campaign with a barnstorming speech, in which he ended on a promise of what his victory would produce: “A nation healed. A world repaired. An America that believes again.”

Just over a year into his tenure, America’s 44th president governs a bitterly divided nation, a world increasingly hard to manage and an America that seems more disillusioned than ever with Washington’s ways. What went wrong?

Pundits, Democratic lawmakers and opinion pollsters offer a smorgasbord of reasons – from Mr Obama’s decision to devote his first year in office to healthcare reform, to the president’s inability to convince voters he can “feel their [economic] pain”, to the apparent ungovernability of today’s Washington. All may indeed have contributed to the quandary in which Mr Obama finds himself. But those around him have a more specific diagnosis – and one that is striking in its uniformity. The Obama White House is geared for campaigning rather than governing, they say.

In dozens of interviews with his closest allies and friends in Washington – most of them given unattributably in order to protect their access to the Oval Office – each observes that the president draws on the advice of a very tight circle. The inner core consists of just four people – Rahm Emanuel, the pugnacious chief of staff; David Axelrod and Valerie Jarrett, his senior advisers; and Robert Gibbs, his communications chief.

Two, Mr Emanuel and Mr Axelrod, have box-like offices within spitting distance of the Oval Office. The president, the first occupant of the White House to keep a BlackBerry, rarely holds a meeting, including on national security, without some or all of them present.

With the exception of Mr Emanuel, who was a senior Democrat in the House of Representatives, all were an integral part of Mr Obama’s brilliantly managed campaign. Apart from Mr Gibbs, who is from Alabama, all are Chicagoans – like the president. And barring Richard Nixon’s White House, few can think of an administration that has been so dominated by such a small inner circle.

“It is a very tightly knit group,” says a prominent Obama backer who has visited the White House more than 40 times in the past year. “This is a kind of ‘we few’ group … that achieved the improbable in the most unlikely election victory anyone can remember and, unsurprisingly, their bond is very deep.”

John Podesta, a former chief of staff to Bill Clinton and founder of the Center for American Progress, the most influential think-tank in Mr Obama’s Washington, says that while he believes Mr Obama does hear a range of views, including dissenting advice, problems can arise from the narrow composition of the group itself.

Among the broader circle that Mr Obama also consults are the self-effacing Peter Rouse, who was chief of staff to Tom Daschle in his time as Senate majority leader; Jim Messina, deputy chief of staff; the economics team led by Lawrence Summers and including Peter Orszag, budget director; Joe Biden, the vice-president; and Denis McDonough, deputy national security adviser. But none is part of the inner circle.

“Clearly this kind of core management approach worked for the election campaign and President Obama has extended it to the White House,” says Mr Podesta, who managed Mr Obama’s widely praised post-election transition. “It is a very tight inner circle and that has its advantages. But I would like to see the president make more use of other people in his administration, particularly his cabinet.”

This White House-centric structure has generated one overriding – and unexpected – failure. Contrary to conventional wisdom, Mr Emanuel managed the legislative aspect of the healthcare bill quite skilfully, say observers. The weak link was the failure to carry public opinion – not Capitol Hill. But for the setback in Massachusetts, which deprived the Democrats of their 60-seat supermajority in the Senate, Mr Obama would by now almost certainly have signed healthcare into law – and with it would have become a historic president.

But the normally liberal voters of Massachusetts wished otherwise. The Democrats lost the seat to a candidate, Scott Brown, who promised voters he would be the “41st [Republican] vote” in the Senate – the one that would tip the balance against healthcare. Subsequent polling bears out the view that a decisive number of Democrats switched their votes with precisely that motivation in mind.

“Historians will puzzle over the fact that Barack Obama, the best communicator of his generation, totally lost control of the narrative in his first year in office and allowed people to view something they had voted for as something they suddenly didn’t want,” says Jim Morone, America’s leading political scientist on healthcare reform. “Communication was the one thing everyone thought Obama would be able to master.”

Whatever issue arises, whether it is a failed terrorist plot in Detroit, the healthcare bill, economic doldrums or the 30,000-troop surge to Afghanistan, the White House instinctively fields Mr Axelrod or Mr Gibbs on television to explain the administration’s position. “Every event is treated like a twist in an election campaign and no one except the inner circle can be trusted to defend the president,” says an exasperated outside adviser.

Perhaps the biggest losers are the cabinet members. Kathleen Sebelius, Mr Obama’s health secretary and formerly governor of Kansas, almost never appears on television and has been largely excluded both from devising and selling the healthcare bill. Others such as Ken Salazar, the interior secretary who is a former senator for Colorado, and Janet Napolitano, head of the Department of Homeland Security and former governor of Arizona, have virtually disappeared from view.

Administration insiders say the famously irascible Mr Emanuel treats cabinet principals like minions. “I am not sure the president realises how much he is humiliating some of the big figures he spent so much trouble recruiting into his cabinet,” says the head of a presidential advisory board who visits the Oval Office frequently. “If you want people to trust you, you must first place trust in them.”

In addition to hurling frequent profanities at people within the administration, Mr Emanuel has alienated many of Mr Obama’s closest outside supporters. At a meeting of Democratic groups last August, Mr Emanuel described liberals as “f***ing retards” after one suggested they mobilise resources on healthcare reform.

“We are treated as though we are children,” says the head of a large organisation that raised millions of dollars for Mr Obama’s campaign. “Our advice is never sought. We are only told: ‘This is the message, please get it out.’ I am not sure whether the president fully realises that when the chief of staff speaks, people assume he is speaking for the president.”

The same can be observed in foreign policy. On Mr Obama’s November trip to China, members of the cabinet such as the Nobel prizewinning Steven Chu, energy secretary, were left cooling their heels while Mr Gibbs, Mr Axelrod and Ms Jarrett were constantly at the president’s side.

The White House complained bitterly about what it saw as unfairly negative media coverage of a trip dubbed Mr Obama’s “G2” visit to China. But, as journalists were keenly aware, none of Mr Obama’s inner circle had any background in China. “We were about 40 vans down in the motorcade and got barely any time with the president,” says a senior official with extensive knowledge of the region. “It was like the Obama campaign was visiting China.”

Then there are the president’s big strategic decisions. Of these, devoting the first year to healthcare is well known and remains a source of heated contention. Less understood is the collateral damage it caused to unrelated initiatives. “The whole Rahm Emanuel approach is that victory begets victory – the success of healthcare would create the momentum for cap-and-trade [on carbon emissions] and then financial sector reform,” says one close ally of Mr Obama. “But what happens if the first in the sequence is defeat?”

Insiders attribute Mr Obama’s waning enthusiasm for the Arab-Israeli peace initiative to a desire to avoid antagonising sceptical lawmakers whose support was needed on healthcare. The steam went out of his Arab-Israeli push in mid-summer, just when the healthcare bill was running into serious difficulties.

The same applies to reforming the legal apparatus in the “war on terror” – not least his pledge to close the Guantánamo Bay detention centre within a year of taking office. That promise has been abandoned.

“Rahm said: ‘We’ve got these two Boeing 747s circling that we are trying to bring down to the tarmac [healthcare and the decision on the Afghanistan troop surge] and we can’t risk a flock of f***ing Canadian geese causing them to crash,’ ” says an official who attended an Oval Office strategy meeting. The geese stood for the closure of Guantánamo.

An outside adviser adds: “I don’t understand how the president could launch healthcare reform and an Arab-Israeli peace process – two goals that have eluded US presidents for generations – without having done better scenario planning. Either would be historic. But to launch them at the same time?”

Again, close allies of the president attribute the problem to the campaign-like nucleus around Mr Obama in which all things are possible. “There is this sense after you have won such an amazing victory, when you have proved conventional wisdom wrong again and again, that you can simply do the same thing in government,” says one. “Of course, they are different skills. To be successful, presidents need to separate the stream of advice they get on policy from the stream of advice they get on politics. That still isn’t happening.”

The White House declined to answer questions on whether Mr Obama needed to broaden his circle of advisers. But some supporters say he should find a new chief of staff. Mr Emanuel has hinted that he might not stay in the job very long and is thought to have an eye on running for mayor of Chicago. Others say Mr Obama should bring in fresh blood. They point to Mr Clinton’s decision to recruit David Gergen, a veteran of previous White Houses, when the last Democratic president ran into trouble in 1993. That is credited with helping to steady the Clinton ship, after he too began with an inner circle largely carried over from his campaign.

But Mr Gergen himself disagrees. Now teaching at Harvard and commenting for CNN, Mr Gergen says members of the inner circle meet two key tests. First, they are all talented. Second, Mr Obama trusts them. “These are important attributes,” Mr Gergen says. His biggest doubt is whether Mr Obama sees any problem with the existing set-up.

“There is an old joke,” says Mr Gergen. “How many psychiatrists does it take to change a lightbulb? Only one. But the lightbulb must want to change. I don’t think President Obama wants to make any changes.”
