Better reporting on computer models could dispel some of the mysteries of climate change

Now that climate topics have been allowed back into the public arena, it’s time for the media to fill some serious gaps in the coverage of climate science. A good place to start would be explaining how computer models work. A story on the intricacies of algorithms might seem like a “yawner,” but told from the point of view of a brilliant scientist, complete with compelling graphics, or, better yet, with the immersive technology of new media, stories on climate models could give non-scientists ways to evaluate the reliability of these tools as predictors of the future.

Equally important, social media and the virtual communities that websites can form can help overcome a major barrier in the public’s perception of risk: the tendency of citizens to conform their own beliefs about the societal risks of climate change to those that predominate among their peers. This derails rational deliberation, and the herd instinct creates an opening for persuasion — if not deliberate disinformation — by the fossil fuel industry. Online communities can provide a counter-voice to corporations: they are populated by diverse and credible thought leaders who can influence peers not just to accept ideas but to seek out confirming evidence and then take action. Because social networks enable the rapid discovery, highlighting and sharing of information, they can generate instant grassroots activist movements and crowd-sourced demonstrations.

Studies show that much of the public’s skepticism about climate stems from ignorance of the reliability of climate models. Beyond their susceptibility to garbage in, garbage out, the algorithms on which models are based have long lacked the transparency needed to promote public trust in computer decision systems. The complexity and politicization of climate models have made it difficult for the public and decision makers to put faith in them. But studies also show that the media plays a big role in why the public tends to be skeptical of models. An article in the September issue of Nature Climate Change by Karen Akerlof et al. slammed the media for failing to address the science of models and their relevance to political debate:

Little information on climate models has appeared in US newspapers over more than a decade. Indeed, we show it is declining relative to climate change. When models do appear, it is often within sceptic discourses. Using a media index from 2007, we find that model projections were frequently portrayed as likely to be inaccurate. Political opinion outlets provided more explanation than many news sources.

In other words, blogs and science websites have done a better job of explaining climate science than traditional media, as visitors to RealClimate.org, SkepticalScience.org and other science blogs can attest. But the reach of these sites and their impact on the broader public are debatable. Websites such as that of the U.S. Department of Energy’s Office of Science hold a trove of information on climate modeling, but, with the exception of NASA’s laboratories, most government science sites make little effective use of data visualization. This void offers mainstream journalists an opportunity to be powerful agents in the climate learning process, to tell dramatic multimedia stories about how weather forecasts can literally save our lives and, by extension, why climate forecasts can be trusted.

Two recent events have whetted the public’s appetite for stories about computer-generated versions of reality. Weather forecasters predicted almost a week in advance that Hurricane Sandy would eventually turn hard left out in the Atlantic and pound the northeastern shore of the United States.

This technology-driven prediction no doubt saved countless lives. In addition, some media coverage of Hurricane Sandy did much to help non-scientists understand why it is tricky to attribute specific storms to climate change, while still giving the public the big picture of how warmer ocean waters provide storms with more moisture and therefore make them bigger and more damaging.
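That big-picture point can even be conveyed with back-of-the-envelope arithmetic. The sketch below uses the widely cited rule of thumb, derived from the Clausius-Clapeyron relation, that the atmosphere holds roughly 7 percent more water vapor for each degree Celsius of warming; the figures are illustrative only and make no pretense of being a storm forecast.

```python
# Back-of-the-envelope illustration of the "warmer water, wetter storms" point.
# Uses the common rule of thumb (from the Clausius-Clapeyron relation) that the
# atmosphere can hold roughly 7% more water vapor per degree Celsius of warming.
# Illustrative only; this is not a storm model.
RATE_PER_DEG_C = 0.07  # ~7% more water vapor capacity per deg C (rule of thumb)

for warming_c in (0.5, 1.0, 2.0):
    extra_moisture = (1 + RATE_PER_DEG_C) ** warming_c - 1
    print(f"+{warming_c:.1f} deg C of warming -> "
          f"~{extra_moisture:.0%} more moisture available to storms")
```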

Simultaneously, in a different domain but using the same tools of analysis and prediction, Nate Silver’s FiveThirtyEight computer model, whose results were published on his blog at The New York Times, out-performed traditional political experts by nailing the November national election outcomes. How did he pull that off? A story about his statistical methods, complete with graphics, could reveal how risk analysts work in the space between the real world and theory to calculate probabilities. This would help the public become familiar with models as a source of knowledge.
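For readers who want to see the mechanics, here is a minimal sketch of that kind of probabilistic forecast. It is emphatically not Silver’s actual FiveThirtyEight model; the states, polling margins, and error assumptions are invented for illustration. The point is simply that running many simulated elections with noisy polls turns uncertainty into a win probability.

```python
# Toy Monte Carlo election forecast -- a minimal sketch, NOT Nate Silver's
# actual FiveThirtyEight model. The states, margins, and error sizes below
# are invented purely for illustration.
import random

# Hypothetical swing states: electoral votes, polling margin for candidate A,
# and an assumed polling error (standard deviation), in percentage points.
states = {
    "State1": (29, +1.5, 3.0),
    "State2": (18, -0.5, 3.0),
    "State3": (13, +2.0, 3.0),
}
safe_votes_a = 237  # electoral votes assumed already locked in for candidate A
needed = 270

def simulate_once():
    """Draw one plausible election outcome by perturbing each poll margin."""
    votes = safe_votes_a
    for ev, margin, error in states.values():
        if random.gauss(margin, error) > 0:  # candidate A carries the state
            votes += ev
    return votes

trials = 100_000
wins = sum(simulate_once() >= needed for _ in range(trials))
print(f"Candidate A wins in {wins / trials:.1%} of simulated elections")
```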

Some reporters have produced text stories on climate models that are examples of clarity. Andrew Revkin, first as an environment writer for The New York Times and now as the author of the Dot Earth blog in nytimes.com’s opinion section, has for many years covered how climate models relate to a large body of science, including a posting on Oct. 30 that placed Hurricane Sandy in the context of superstorms of the past.

David A. Fahrenthold at The Washington Post wrote about how “Scientists’ use of computer models to predict climate change is under attack,” in a story that opens with a baseball statistics analogy and keeps the reader going. Holger Dambeck at SpiegelOnline offered a thorough assessment of climate model accuracy in non-science language, “Modeling the Future: The Difficulties of Predicting Climate Change.” But these stories are rare and often one-dimensional.

Effort is now being spent on making scientists into better communicators, but more might be accomplished if mainstream journalists, including those who publish on heavily trafficked news websites, made themselves better acquainted with satellite technology and its impact on science. Information specialist Paul Edwards explains in his book, “A Vast Machine: Computer Models, Climate Data and the Politics of Global Warming,” how climate modeling, far from being purely theoretical, is a method that combines theory with data to meet “practical here-and-now needs.” Computer models operate within a logical framework that uses many approximations from data that — unlike the data feeding weather models — can be “conspicuously sparse” yet still constitute sound science, much as a reliable statistical sample can be drawn from a large population. How statistics guide risk analysis requires better explanation for a public that must make judgments but is seldom given context by news stories. The debate over cap-and-trade policy might be Exhibit A.
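The sampling analogy Edwards invokes can be demonstrated in a few lines of code. The sketch below uses invented numbers rather than climate data; it simply shows that a sample covering a tiny fraction of a large population can still pin down the population average within a narrow confidence interval.

```python
# Minimal sketch of the sampling analogy: estimating a property of a large
# "population" from a conspicuously small sample. The numbers are invented
# for illustration; this is not a climate model.
import random
import statistics

random.seed(42)

# Pretend this is some quantity measured at a million locations.
population = [random.gauss(15.0, 5.0) for _ in range(1_000_000)]
true_mean = statistics.mean(population)

# Draw a sample of only 1,000 points -- 0.1% of the data.
sample = random.sample(population, 1_000)
sample_mean = statistics.mean(sample)
# The standard error of the mean gives a rough 95% confidence interval.
std_err = statistics.stdev(sample) / len(sample) ** 0.5

print(f"True mean:   {true_mean:.2f}")
print(f"Sample mean: {sample_mean:.2f} +/- {1.96 * std_err:.2f} (95% CI)")
```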

Depicting model-data symbiosis in such diverse fields as baseball performance, hurricane forecasts and long-range warming predictions would be ideally suited to web technology. Not only can climate models be reproduced on PCs and laptops, showing atmospheric changes over the past and into the future, but the models’ variables can also be made accessible to web users, who could then take control of the model and game the display by playing “what ifs” — how many degrees of warming by the year 2100 could be avoided by a selected energy policy, how many people would be forced to migrate if a given share of the food supply were lost, how big would a tidal barrier need to be to protect New York City from another Sandy? (If this sounds a bit like SimCity, the new version of the game due in 2013 includes climate change as part of the simulated experience.)
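To suggest how thin the interactive layer could be, here is a toy “what if” calculator of the kind a reader might drive with a slider. The linear relationship and every number in it are placeholders invented for illustration, not output from any real climate model.

```python
# Toy "what if" calculator of the kind a web reader might control with a
# slider. The relationship and all numbers are invented placeholders;
# a real climate model would be far more complex.
def projected_warming_2100(emissions_cut_pct, baseline_warming_c=4.0,
                           max_avoidable_c=2.5):
    """Return a made-up projected warming (deg C) for a given emissions cut.

    emissions_cut_pct: percentage cut relative to business as usual
    (0 = no policy, 100 = full decarbonization). For the sake of the demo,
    avoided warming is assumed to scale linearly with the cut.
    """
    avoided = max_avoidable_c * (emissions_cut_pct / 100.0)
    return baseline_warming_c - avoided

for cut in (0, 25, 50, 75, 100):
    print(f"Emissions cut {cut:3d}% -> ~{projected_warming_2100(cut):.1f} deg C by 2100")
```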

This narrative approach to news, including personal diaries and anecdotes of everyday lived experience, is what Richard Sambrook, former director of BBC Global News and now a journalism professor at Cardiff University, has termed “360 degree storytelling.” Mike Hulme, a professor of climate change at the University of East Anglia, provides this description of the new public stance toward science in his book, “Why We Disagree About Climate Change”:

Citizens, far from being passive receivers of expert science, now have the capability through media communication “to actively challenge and reshape science, or even to constitute the very process of scientific communication through mass participation in simulation experiments such as ‘climateprediction.net’. New media developments are fragmenting audiences and diluting the authority of the traditional institutions of science and politics, creating many new spaces in the twenty-first century ‘agora’ … where disputation and disagreement are aired.”

Today’s media is about participation and argumentation. A new rhetoric of visualization is making science more comprehensible in our daily lives. What goes around, comes around. One of the pioneering online journalism experiments in making the public aware of how technology, risk assessment and human fallibility can cross over was a project by MSNBC.com known as the “baggage screening game.” Players could look into a simulated radar screen and control the speed of a conveyor line of airline passenger baggage — some of which harbored lethal weapons. If you were at the controls, the program would monitor your speed and accuracy of detection and keep score, later making you painfully aware of missed knives and bombs. Adding to your misery was a soundtrack of passengers standing in line and complaining about your excessive scrutinizing, with calls of “Come on! Get this thing moving! We’re late!” It was hard to be impatient with the TSA scanners after that.

National party conventions, graphic photos, social media's bull$#!t, open data, and a world stream

Here’s a quick roundup of stories and conversations that caught our attention in the past week, the first in what will gradually become a regular series.

Convention City: For the next two weeks, we’ll be barraged with reportage from the Republican and Democratic national conventions. As MediaShift points out, much of the attention among media observers will be paid to how a variety of digital tools are deployed, much as it was during the Summer Olympics. The media industry blog has already put together a helpful list of resources for following the conventions. Meanwhile, the Washington Post has launched a new feature it’s calling The Grid, an interesting way to scan through all of its social media and reporting channels and get the latest on the RNC (and, next week, the DNC).

Instagraphic: In case you missed it (which seems impossible), Instagram moved to the center of a century-old debate this weekend following the shootings at the Empire State Building. When user @ryanstryin posted a graphic photo showing one of the victims lying in the street, it prompted a lot of reflection from both the mainstream media and the public over whether it’s appropriate to publish or share such images. We’ve had these arguments since the advent of photography – in times of war, in times of peace – about whether to publish photos of the dead and wounded or withhold them out of respect for the victims and their families. But this was a special kind of wake-up call. The media no longer make these decisions alone, now that witnesses have a publishing platform in their pocket. New media commentator and J-school prof Jeff Jarvis got a little hot under the collar defending his own decision to share the photo on his Twitter stream, and he offered a compelling argument on the side of keeping the news unfiltered. The point is, if you click this hyperlink showing a victim with blood streaming down the sidewalk (republished here by Slate), you’ve already been forewarned by the linked words. Since mainstream media still have the broadest reach, they will continue to find themselves at the center of this debate, but the audience is going to find it increasingly difficult to avoid such material. The decision will no longer be one for the “broadcaster” on whether to share, but a personal one on whether to click.

Streaming the world 60 seconds at a time. The Wall Street Journal is now asking its reporters to file microvideo reports using the social media video platform Tout. They’re calling it WorldStream. From Tampa to Syria, you can see snippets of life, the news, and everything else a reporter can capture with a mobile phone camera. A first dive leaves me with the impression that much, much work has yet to be done before WSJ’s WorldStream can be called a mature product. Rebels relaxing in a mosque in Syria might have been portrayed better with a photo, for instance. Thirty seconds watching a pan of the empty delegate center in Tampa would have been better spent reading an actual story about the convention. And I can’t help but wonder what you can expect to get out of a 60-second interview with a pol – the format seems more suited to TMZ celeb shots and gotcha journalism. It will be interesting to see how the service evolves. For now, my main impression is that we’re looking at the news equivalent of Romantic fragment poems – Coleridge’s “Kubla Khan” or Keats’ “Hyperion.” They may work artistically, but are story fragments really the best approach for an industry devoted to informing and enlightening its audience?

Social media is bull$#!t. Or so says B.J. Mendelson in the title of his new book. The former social media marketer and contributor to Mashable boosts his own contrarian view after serving the industry for years. Among the more common precepts of online journalism that Mendelson disputes: that pageviews are all-important, that Facebook really has 800 million users, and that we’ve learned much that is new about Internet marketing since Dale Carnegie’s “How to Win Friends and Influence People.” He tells journalist Ernie Smith that the biggest BS thing about social media is “the concept that what’s happening on these very different platforms, with their comparatively small and different audiences, has resonance with what’s happening with the rest of us. This false hope we’re giving people, which not coincidentally popped up around the same time the economy cratered. People needed something to believe in, and selfish and greedy marketers were ready to give that to them in the package of the myth of social media.” Incidentally, the interview is a nice display of what you can do with Jux, yet another platform for quick blogging.

The problem with open data. Is there one? Some interesting conversations on the topic this week. One started when the White House announced the selection of its “Innovation Fellows,” members of the private and nonprofit sectors and academia whose job it will be to help develop five government programs, including one on open data. That announcement sparked some backlash from conservative commentators, including Michelle Malkin, who wondered whether this isn’t really just a waste of taxpayer money. Open government reporter Alex Howard captured some of that debate, which unfolded in the social media sphere. Meanwhile, techPresident’s David Eaves reported on how a government spending scandal uncovered in the U.K. with the help of an open data project raises as many questions about how government collects and reports its data as it does about the suspect spending. So, what do you do if the government’s databases are poorly coded or managed – how do you get the government to change? And even if you discover these remarkable stories with the aid of open data sources, does that make it any easier to act? More questions like these are sure to present themselves as data journalism flowers into a discipline in its own right.

Another decade of the Internet. I leave you with a fun look back at how much the Internet has changed in the past 10 years, courtesy of this Mashable infographic. Enjoy.