What the media gets wrong about guns

The shooting at Sandy Hook has brought gun policy to the forefront of our national conversation. President Obama has pledged to act aggressively on the issue, having laid out a comprehensive plan, including new weapons regulations as well as law enforcement and public awareness programs, in the hope of reducing gun violence. This will be a marquee issue in Washington and throughout the country over the next several months, and media coverage will only intensify.

That said, too few journalists have a solid understanding of guns and gun violence. Here are three major things the media gets wrong.

Better reporting on computer models could dispel some of the mysteries of climate change

Now that climate topics have been allowed back into the public arena, it’s time for the media to fill some serious gaps in the coverage of climate science. A good place to start would be to explain how computer models work. A story on the intricacies of algorithms might seem a “yawner,” but told from the point of view of a brilliant scientist, complete with compelling graphics, or, better yet, with the immersive technology of new media, stories on climate models could give non-scientists ways to evaluate the reliability of these tools as predictors of the future.

Equally important, social media and the virtual communities that websites can form can help overcome a major barrier to the public’s understanding of risk: the tendency of citizens to conform their beliefs about the societal risks of climate change to those that predominate among their peers. This herd instinct derails rational deliberation and creates an opening for persuasion — if not deliberate disinformation — by the fossil fuel industry. Online communities can provide a counter-voice to corporations. They are populated by diverse and credible thought leaders who can influence peers not just to accept ideas but to seek out confirming evidence and then take action. Because social networks enable the rapid discovery, highlighting and sharing of information, they can generate instant grassroots activist movements and crowd-sourced demonstrations.

Studies show that much public skepticism about climate stems from ignorance of the reliability of climate models. Beyond their susceptibility to garbage in, garbage out, the algorithms on which models are based have long lacked the transparency needed to promote public trust in computer decision systems. The complexity and politicization of climate models have made it difficult for the public and decision makers to put faith in them. But studies also show that the media plays a big role in that skepticism. An article in the September issue of Nature Climate Change by Karen Akerlof et al. slammed the media for failing to address the science of models and their relevance to political debate:

Little information on climate models has appeared in US newspapers over more than a decade. Indeed, we show it is declining relative to climate change. When models do appear, it is often within sceptic discourses. Using a media index from 2007, we find that model projections were frequently portrayed as likely to be inaccurate. Political opinion outlets provided more explanation than many news sources.

In other words, blogs and science websites have done a better job of explaining climate science than traditional media, as visitors to RealClimate.org, SkepticalScience.org and other science blogs can attest. But the reach of these sites and their impact on the broader public are debatable. Websites such as that of the U.S. Department of Energy’s Office of Science hold a trove of information on climate modeling but, with the exception of NASA’s laboratories, most government science sites make little effective use of data visualization. This void offers mainstream journalists an opportunity to be powerful agents in the climate learning process, to tell dramatic multimedia stories about how weather forecasts can literally save our lives and, by extension, why climate forecasts can be trusted.

Two recent events have whetted the public’s appetite for stories about computer-generated versions of reality. Weather forecasters predicted almost a week in advance that Hurricane Sandy would turn hard left out in the Atlantic and pound the northeastern shore of the United States.

This technology-driven prediction no doubt saved countless lives. In addition, some media coverage of Hurricane Sandy did much to help non-scientists understand why it is tricky to attribute specific storms to climate change, while still giving the public the big picture of how warmer ocean waters provide storms with more moisture and therefore make them bigger and more damaging.

Simultaneously, in a different domain but using the same tools of analysis and prediction, Nate Silver’s FiveThirtyEight computer model, whose results were published on his blog at The New York Times, outperformed traditional political experts by nailing the outcomes of the November national elections. How did he pull that off? A story about his statistical methods, complete with graphics, could reveal how risk analysts work in the space between the real world and theory to calculate probabilities. This would help the public become familiar with models as a source of knowledge.
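To make that concrete, here is a minimal sketch in Python of the general family of techniques such forecasters use: aggregate noisy polling margins, simulate many plausible elections, and report the fraction of simulations a candidate wins. The states, margins and vote counts below are invented for illustration; this is not Silver’s actual model or his data.

```python
import random

# Hypothetical battleground states with invented polling margins
# (Democratic lead in points), polling uncertainty and electoral votes.
STATES = {
    "State A": {"margin": 2.0, "sd": 3.0, "electors": 18},
    "State B": {"margin": -1.0, "sd": 3.5, "electors": 29},
    "State C": {"margin": 0.5, "sd": 4.0, "electors": 10},
}
SAFE_DEM, SAFE_REP = 247, 234  # electoral votes assumed not in play
TRIALS = 100_000

dem_wins = 0
for _ in range(TRIALS):
    dem_total = SAFE_DEM
    for state in STATES.values():
        # Draw one plausible Election Day result from the polling-error model.
        if random.gauss(state["margin"], state["sd"]) > 0:
            dem_total += state["electors"]
    if dem_total >= 270:  # 270 of 538 electoral votes wins
        dem_wins += 1

print(f"Simulated probability of a Democratic win: {dem_wins / TRIALS:.1%}")
```

The output is a probability rather than a single call, which is exactly the shift in thinking that coverage of such models could convey.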

Some reporters have produced text stories on climate models that are examples of clarity. Andrew Revkin, first as an environment writer for The New York Times and now as author of the Dot Earth blog in nytimes.com’s opinion section, has for many years covered how climate models relate to a large body of science, including an Oct. 30 post that placed Hurricane Sandy in the context of superstorms of the past.

David A. Fahrenthold of The Washington Post wrote “Scientists’ use of computer models to predict climate change is under attack,” a piece that opens with a baseball-statistics analogy and keeps the reader going. Holger Dambeck of Spiegel Online offered a thorough assessment of climate model accuracy in non-science language, “Modeling the Future: The Difficulties of Predicting Climate Change.” But such stories are rare and often one-dimensional.

Effort is now being spent on making scientists into better communicators, but more might be accomplished if mainstream journalists, including those who publish on heavily trafficked news websites, made themselves better acquainted with satellite technology and its impact on science. Information specialist Paul Edwards explains in his book, “A Vast Machine: Computer Models, Climate Data and the Politics of Global Warming,” how climate modeling, far from being purely theoretical, is a method that combines theory with data to meet “practical here-and-now needs.” Computer models operate within a logical framework built on many approximations from data that — unlike the data feeding weather models — can be “conspicuously sparse” yet still constitute sound science, much as a reliable statistical sample can be drawn from a large population. How statistics guide risk analysis requires better explanation for a public that must make judgments but is seldom given context by news stories. The debate over cap-and-trade policy might be Exhibit A.
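The sampling analogy is easy to demonstrate. Here is a minimal Python sketch, using invented numbers, of how a sparse random sample can recover a property of a much larger population:

```python
import random
import statistics

random.seed(42)

# A toy "population": one million simulated yearly temperature anomalies
# (degrees C). The values are invented; only the sampling principle matters.
population = [random.gauss(0.8, 0.5) for _ in range(1_000_000)]

# A "conspicuously sparse" sample: 0.1 percent of the population.
sample = random.sample(population, 1_000)

print(f"True population mean: {statistics.mean(population):.3f}")
print(f"Estimate from sample: {statistics.mean(sample):.3f}")
# The standard error of the sample mean is roughly sd/sqrt(n), here about
# 0.016 C, so the sparse sample lands within a few hundredths of a degree
# of the true mean.
```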

Depicting model-data symbiosis in such diverse fields as baseball performance, hurricane forecasts and long-range warming predictions would be ideally suited to web technology. Not only can climate models be reproduced on PCs and laptops, showing atmospheric changes over the past and into the future, but the models’ variables can be made accessible to the web user, who could then take control of the model and game the display by playing “what ifs”: How many degrees of warming by the year 2100 could be avoided by a selected energy policy? How many people would be forced to migrate if a given amount of the food supply were lost? How big would a tidal barrier need to be to protect New York City from another Sandy disaster? (If this sounds a bit like SimCity, the new version of the game due in 2013 includes climate change as part of the simulated experience.)
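As a back-of-the-envelope illustration of such a “what if” lever (and emphatically not a real climate model), here is a toy Python sketch. It assumes atmospheric CO2 rises by roughly 2 ppm a year at present, lets a policy dial that rate up or down, and converts concentration to warming with the standard logarithmic relation, using a commonly cited sensitivity of about 3 degrees C per doubling of CO2:

```python
import math

CLIMATE_SENSITIVITY = 3.0  # deg C of warming per doubling of CO2 (commonly cited)
CO2_PREINDUSTRIAL = 280.0  # ppm
CO2_TODAY = 395.0          # ppm, approximate level in 2012
BASE_RISE = 2.0            # ppm added per year at current emission rates

def warming_in_2100(annual_emissions_change):
    """Project CO2 to 2100 under a chosen policy, then convert the
    concentration to warming with dT = S * log2(CO2 / CO2_preindustrial).
    A gamelike toy, not a general circulation model."""
    co2, rise = CO2_TODAY, BASE_RISE
    for _ in range(2012, 2100):
        co2 += rise
        rise *= 1 + annual_emissions_change  # the policy lever
    return CLIMATE_SENSITIVITY * math.log2(co2 / CO2_PREINDUSTRIAL)

# The "what ifs" a reader could play with on a web page:
for label, policy in [("business as usual", 0.00), ("cut emissions 2%/yr", -0.02)]:
    print(f"{label:>20}: about {warming_in_2100(policy):.1f} C "
          f"above preindustrial by 2100")
```

Even a toy like this makes the policy lever tangible: move one parameter and the number for 2100 changes on the screen.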

This narrative approach to news, including personal diaries and anecdotes of everyday lived experience, is what Richard Sambrook, former director of BBC Global News and now a journalism professor at Cardiff University, has termed “360-degree storytelling.” Mike Hulme, a professor of climate change at the University of East Anglia, provides this description of the new public stance toward science in his book, “Why We Disagree About Climate Change”:

Citizens, far from being passive receivers of expert science, now have the capability through media communication “to actively challenge and reshape science, or even to constitute the very process of scientific communication through mass participation in simulation experiments such as ‘climateprediction.net’. New media developments are fragmenting audiences and diluting the authority of the traditional institutions of science and politics, creating many new spaces in the twenty-first century ‘agora’ … where disputation and disagreement are aired.”

Today’s media is about participation and argumentation. A new rhetoric of visualization is making science more comprehensible in our daily lives. What goes around comes around. One of the pioneering online journalism experiments in making the public aware of how technology, risk assessment and human fallibility intersect was an MSNBC.com project known as the “baggage screening game.” Players looked into a simulated X-ray screen and controlled the speed of a conveyor line of airline passenger baggage — some of which harbored lethal weapons. With you at the controls, the program monitored your speed and accuracy in detection and kept score, later making you painfully aware of missed knives and bombs. Adding to your misery was a soundtrack of passengers standing in line and complaining about your excessive scrutinizing, with calls of “Come on! Get this thing moving! We’re late!” It was hard to be impatient with TSA screeners after that.

The Case of Philip Roth vs. Wikipedia

As Wikipedia becomes an increasingly dominant part of our digital media diet, what was once anomalous has become a regular occurrence.

Someone surfing the net comes face to face with a Wikipedia article — about themselves. Or about their own work.

There’s erroneous information that needs to be fixed, but Wikipedia’s 10-year-old tangle of editing policies stands in the way, and its boisterous editing community can be fearsome.

If a person can thrust the error into the public spotlight, shaming Wikipedia’s volunteers into action can do the trick. But not without some pain.

The most recent episode?

The case of Pulitzer Prize-winning fiction writer Philip Roth.

His bestselling novel “The Human Stain” tells the story of Coleman Silk, a fictional African-American professor who passes as white and Jewish, and the trials he faces after leaving his university job in disgrace. Widely read and highly acclaimed, the book was reviewed or referenced by many famous writers, such as Michiko Kakutani and Janet Maslin of The New York Times and the noted Harvard professor Henry Louis Gates, Jr. [1] [2] [3]

The Broyard Theory

But there was a standing mystery about the novel.

After the book’s release in 2000, Roth did not elaborate on the inspiration for the Silk character. Over the years, it became the subject of speculation, with most of the literary world pointing to Anatole Broyard, a famous writer and New York Times critic who “passed” in white circles without explicitly acknowledging his African-American roots.

In 2000, Salon.com’s Charles Taylor wrote about Roth’s new book:

The thrill of gossip become literature hovers over “The Human Stain”: There’s no way Roth could have tackled this subject without thinking of Anatole Broyard, the late literary critic who passed as white for many years.

In a 2003 piece in The New York Times, Brent Staples wrote that the story of Silk as a “character who jettisons his black family to live as white” was strongly reminiscent of Mr. Broyard.

Janet Maslin wrote that the book was “seemingly prompted by the Broyard story.”

The notion was so widely held that the Broyard connection was incorporated into the Wikipedia article on “The Human Stain.”

An early 2005 version of the Wikipedia entry cited Henry Louis Gates Jr., and by March 2008, it relayed the theory from Charles Taylor’s Salon.com review.

The view was so pervasive that Wikipedia editors found more than a dozen notable citations from prominent writers and publications.

Wikipedians researching the topic came across secondary sources that drew parallels between Silk and Anatole Broyard. The references were verifiable, linkable prose from notable writers and respected publications. The core policies of Wikipedia — verifiability, using reliable sources and avoiding original research — were upheld by using reputable content as the basis for the conclusions.

Roth Explains It All

However, information from Roth in 2008 changed things.

Bloomberg News interviewed the author about his new book at the time, “Indignation.” Toward the end of the interview, he was asked a casual question about “The Human Stain”:

Hilferty: Is Coleman Silk, the black man who willfully passes as white in “The Human Stain,” based on anyone you knew?

Roth: No. There was much talk at the time that he was based on a journalist and writer named Anatole Broyard. I knew Anatole slightly, and I didn’t know he was black. Eventually there was a New Yorker article describing Anatole’s life written months and months after I had begun my book. So, no connection.

It might have been the first time Roth went on the record saying there was no connection between the fictional Silk and the real-life Broyard; it seems to be the earliest record of the denial on the Internet.

Fast forward to 2012, when, according to Roth, he read the Wikipedia article on [[The Human Stain]] for the first time and found the erroneous assertion that Anatole Broyard was the template for his main character. In August 2012, Roth’s biographer, Blake Bailey, acting as his interlocutor, tried to change the Wikipedia entry to remove the false information. It became an unexpected tussle with Wikipedia’s volunteer editors.

Unfortunately for Roth, by the rules of Wikipedia, first-hand information from the mouth of the author does not immediately change Wikipedia. The policies of verifiability and no original research prevent a direct email or phone call to Wikipedia’s governing foundation or its volunteers from being the final word.

Enter The New Yorker

Frustrated with the process, Roth wrote a long open letter in The New Yorker detailing his Wikipedia conundrum. He provided an exhaustive description of the actual inspiration for the Silk character: his friend Melvin Tumin, a Princeton sociology professor.

“The Human Stain” was inspired, rather, by an unhappy event in the life of my late friend Melvin Tumin, professor of sociology at Princeton for some thirty years.

And it is this that inspired me to write “The Human Stain”: not something that may or may not have happened in the Manhattan life of the cosmopolitan literary figure Anatole Broyard but what actually did happen in the life of Professor Melvin Tumin, sixty miles south of Manhattan in the college town of Princeton, New Jersey, where I had met Mel, his wife, Sylvia, and his two sons when I was Princeton’s writer-in-residence in the early nineteen-sixties.

Good enough. But the problem arose when Roth attempted to correct the information in Wikipedia with the help of Bailey, his biographer. He wrote:

Yet when, through an official interlocutor, I recently petitioned Wikipedia to delete this misstatement, along with two others, my interlocutor was told by the “English Wikipedia Administrator”—in a letter dated August 25th and addressed to my interlocutor—that I, Roth, was not a credible source: “I understand your point that the author is the greatest authority on their own work,” writes the Wikipedia Administrator—“but we require secondary sources.”

Thus was created the occasion for this open letter. After failing to get a change made through the usual channels, I don’t know how else to proceed.

The frustration is understandable. That someone’s first-hand knowledge about their own work could be rejected in this manner seems inane. But it’s a fundamental working process of Wikipedia, which depends on reliable (secondary) sources to vet and vouch for the information.

Because of this, Wikipedia is fundamentally a curated tertiary source — when it works, it’s a researched and verified work that points to references both original and secondary, but mostly the latter.

It’s garbage in, garbage out. It’s only as good as the verifiable sources and references it can link to.

But it is also this policy that infuriates many Wikipedia outsiders.

During the debate over Roth’s edits, one Wikipedia administrator (an experienced editor in the volunteer community) cited Wikipedia’s famous refrain:

Verifiability, not truth, is the burden.
– ChrisGualtieri (talk) 15:53, 8 September 2012 (UTC)

By design, Wikipedia’s community couldn’t use an email from an original source as the final word. Wikipedia depends on information from reliable sources in tangible form, and on the verification they provide.

Reliable sources perform the gatekeeping function familiar from academic publishing, where peer review guarantees a level of rigor and fact-checking from those with established track records.

But even with rigorous references, verifiability can be hard.

Consider Roth’s New Yorker piece, where he says:

“The Human Stain” was inspired, rather, by an unhappy event in the life of my late friend Melvin Tumin, professor of sociology at Princeton for some thirty years.

Compare that to the 2008 interview, when asked, “Is Coleman Silk, the black man who willfully passes as white in ‘The Human Stain,’ based on anyone you knew?” Roth said, “No.”

This would seem to contradict the New Yorker piece. It doesn’t make Roth dishonest. Rather, in the spoken interview Roth likely took the question narrowly, as asking whether he knew anyone who had “passed” in real life, as Silk does in the novel.

The point of all this?

Truth via verification is not easy or obvious.

Even with multiple reliable sources — a direct transcript from an interview or the words from the author himself — ferreting out the truth requires standards and deliberation.

As of this writing, Roth’s explanation about the Coleman Silk character has become the dominant one in the Wikipedia article, as it should be.

However, the erroneous speculation about Anatole Broyard was so widely held in the years before Roth’s clarification that it still gets a significant mention in the article for historical purposes. There’s still debate over how prominent it should be in the entry, given that Roth has flatly denied it.

Lessons

Roth’s New Yorker piece got the Wikipedia article fixed, but a soapbox that prominent is not a solution that scales for everyone who has a problem with Wikipedia.

After a decade of Wikipedia’s existence as the chaotic encyclopedia that “anyone can edit,” it’s ironic that its stringent standards for verifiability, and its habit of moving slowly and deliberately with information, have made those very qualities a target for criticism.

Wikipedia has been portrayed as being too loose (“Anyone can edit Wikipedia? How can I trust it?”) and too strict (“Wikipedia doesn’t consider Roth a credible source about himself? How can I trust it?”). The fact is, on balance, this yin-yang relationship serves Wikipedia well the vast majority of the time by being responsive and thorough — by being quick by nature, yet slow by design.

It continues to be one of the most visited web properties in the world (fifth, according to ComScore), while refining its policies to protect the reputations of living persons and to enforce accuracy in fast-changing articles. Most outsiders would be surprised to see how conscientious, even pedantic, Wikipedia’s editors are about getting things right, despite a mercurial volunteer community in need of a decorum upgrade and the occasional standoff with award-winning novelists.

Andrew Lih is an associate professor at the University of Southern California’s Annenberg School for Communication and Journalism, where he directs the new media program. He is the author of The Wikipedia Revolution: How a Bunch of Nobodies Created the World’s Greatest Encyclopedia (Hyperion 2009, Aurum UK 2009) and is a noted expert on online collaboration and participatory journalism. This story also appeared on his personal blog.