ETHICAL DONORS AND COMMUNITY MEMBERS REQUIRED TO FILL THIS SPACE WITH YOUR POLITICAL SLOGANS, ADVERTISING OFFERS, WEBSITE DETAILS, CHARITY REQUESTS, LECTURE OPPORTUNITIES, EDUCATIONAL WORKSHOPS, SPIRITUAL AND/OR HEALTH ENLIGHTENMENT COURSES. AS AN IMPORTANT MEMBER OF THE GLOBAL INDEPENDENT MEDIA COMMUNITY, MIKIVERSE SCIENCE HONOURABLY REQUESTS YOUR HELP TO KEEP YOUR NEWS DIVERSE AND FREE OF CORPORATE AND GOVERNMENT SPIN AND CONTROL. FOR MORE INFORMATION ON HOW YOU MAY ASSIST, PLEASE CONTACT: themikiverse@gmail.com
Monday, March 25, 2013
RUPERT SHELDRAKE AT EU 2013—"SCIENCE SET FREE"
RUPERT SHELDRAKE - THE SCIENCE DELUSION BANNED TED TALK
Tuesday, March 19, 2013
THE BIRTH OF OPTOGENETICS
An account of the path to realizing tools for controlling brain circuits with light
By Edward S. Boyden | July 1, 2011
Blue light hits a neuron engineered to express opsin molecules on its surface, opening a channel through which ions pass into the cell—activating the neuron.
Credit: MIT McGovern Institute, Julie Pryor, Charles Jennings, Sputnik Animation, Ed Boyden
For a few years now, I’ve taught a course at MIT called “Principles of Neuroengineering.” The idea of the class is to get students thinking about how to create neurotechnology innovations—new inventions that can solve outstanding scientific questions or address unmet clinical needs. Designing neurotechnologies is difficult because of the complex properties of the brain: its inaccessibility, heterogeneity, fragility, anatomical richness, and high speed of operation. To illustrate the process, I decided to write a case study about the birth and development of an innovation with which I have been intimately involved: optogenetics—a toolset of genetically encoded molecules that, when targeted to specific neurons in the brain, allow the activity of those neurons to be driven or silenced by light.
A strategy: controlling the brain with light
As an undergraduate at MIT, I studied physics and electrical engineering and got a good deal of firsthand experience in designing methods to control complex systems. By the time I graduated, I had become quite interested in developing strategies for understanding and engineering the brain. After graduating in 1999, I traveled to Stanford to begin a PhD in neuroscience, setting up a home base in Richard Tsien’s lab. In my first year at Stanford I was fortunate enough to meet many nearby biologists willing to do collaborative experiments, ranging from attempting the assembly of complex neural circuits in vitro to behavioral experiments with rhesus macaques. For my thesis work, I joined the labs of Richard Tsien and of Jennifer Raymond in spring 2000, to study how neural circuits adapt in order to control movements of the body as the circumstances in the surrounding world change.
In parallel, I started thinking about new technologies for controlling the electrical activity of specific neuron types embedded within intact brain circuits. That spring, I discussed this problem—during brainstorming sessions that often ran late into the night—with Karl Deisseroth, then a Stanford MD-PhD student also doing research in Tsien’s lab. We started to think about delivering stretch-sensitive ion channels to specific neurons, and then tethering magnetic beads selectively to the channels, so that applying an appropriate magnetic field would result in the bead’s moving and opening the ion channel, thus activating the targeted neurons.
By late spring 2000, however, I had become fascinated by a simpler and potentially easier-to-implement approach: using naturally occurring microbial opsins, which would pump ions into or out of neurons in response to light. Opsins had been studied since the 1970s because of their fascinating biophysical properties, and for the evolutionary insights they offer into how life forms use light as an energy source or sensory cue.[1. D. Oesterhelt, W. Stoeckenius, “Rhodopsin-like protein from the purple membrane of Halobacterium halobium,” Nat New Biol, 233:149-52, 1971.] These membrane-spanning microbial molecules—proteins with seven helical domains—react to light by transporting ions across the lipid membranes of cells in which they are genetically expressed. (See the illustration above.) For this strategy to work, an opsin would have to be expressed in the neuron’s lipid membrane and, once in place, efficiently perform this ion-transport function. One reason for optimism was that bacteriorhodopsin had successfully been expressed in eukaryotic cell membranes—including those of yeast cells and frog oocytes—and had pumped ions in response to light in these heterologous expression systems. And in 1999, researchers had shown that, although many halorhodopsins might work best in the high salinity environments in which their host archaea naturally live (i.e., in very high chloride concentrations), a halorhodopsin from Natronomonas pharaonis (Halo/NpHR) functioned best at chloride levels comparable to those in the mammalian brain.[2. D. Okuno et al., "Chloride concentration dependency of the electrogenic activity of halorhodopsin," Biochemistry, 38:5422-29, 1999.]
I was intrigued by this, and in May 2000 I e-mailed the opsin pioneer Janos Lanyi, asking for a clone of the N. pharaonis halorhodopsin, for the purpose of actively controlling neurons with light. Janos kindly asked his collaborator Richard Needleman to send it to me. But the reality of graduate school was setting in: unfortunately, I had already left Stanford for the summer to take a neuroscience class at the Marine Biology Laboratory in Woods Hole. I asked Richard to send the clone to Karl. When I returned to Stanford in the fall, I was so busy learning all the skills I would need for my thesis work on motor control that the opsin project took a backseat for a while.
The channelrhodopsin collaboration
In 2002 a pioneering paper from the lab of Gero Miesenböck showed that genetic expression of a three-gene Drosophila phototransduction cascade in neurons allowed the neurons to be excited by light, and suggested that the ability to activate specific neurons with light could serve as a tool for analyzing neural circuits.[3. B.V. Zemelman et al., "Selective photostimulation of genetically chARGed neurons," Neuron, 33:15-22, 2002.] But the light-driven currents mediated by this system were slow, and this technical issue may have been a factor that limited adoption of the tool.
This paper was fresh in my mind when, in fall 2003, Karl e-mailed me to express interest in revisiting the magnetic-bead stimulation idea as a potential project that we could pursue together later—when he had his own lab, and I had finished my PhD and could join his lab as a postdoc. Karl was then a postdoctoral researcher in Robert Malenka’s lab (also at Stanford), and I was about halfway through my PhD. We explored the magnetic-bead idea between October 2003 and February 2004. Around that time I read a just-published paper by Georg Nagel, Ernst Bamberg, Peter Hegemann, and colleagues, announcing the discovery of channelrhodopsin-2 (ChR2), a light-gated cation channel, and noting that the protein could be used as a tool to depolarize cultured mammalian cells in response to light.[4. G. Nagel et al., "Channelrhodopsin-2, a directly light-gated cation-selective membrane channel," PNAS, 100:13940-45, 2003.]
In February 2004, I proposed to Karl that we contact Georg to see if they had constructs they were willing to distribute. Karl got in touch with Georg in March, obtained the construct, and inserted the gene into a neural expression vector. Georg had made several further advances by then: he had created fusion proteins of ChR2 and yellow fluorescent protein, in order to monitor ChR2 expression, and had also found a ChR2 mutant with improved kinetics. Furthermore, Georg commented that in cell culture, ChR2 appeared to require little or no chemical supplementation in order to operate (in microbial opsins, the chemical chromophore all-trans-retinal must be attached to the protein to serve as the light absorber; it appeared to exist at sufficient levels in cell culture).
Finally, we were getting the ball rolling on targetable control of specific neural types. Karl optimized the gene expression conditions, and found that neurons could indeed tolerate ChR2 expression. Throughout July, working in off-hours, I debugged the optics of the Tsien-lab rig that I had often used in the past. Late at night, around 1 a.m. on August 4, 2004, I went into the lab, put a dish of cultured neurons expressing ChR2 into the microscope, patch-clamped a glowing neuron, and triggered the program that I had written to pulse blue light at the neurons. To my amazement, the very first neuron I patched fired precise action potentials in response to blue light. That night I collected data that demonstrated all the core principles we would publish a year later in Nature Neuroscience, announcing that ChR2 could be used to depolarize neurons.[5. E.S. Boyden et al., "Millisecond-timescale, genetically targeted optical control of neural activity," Nat Neurosci, 8:1263-68, 2005.] During that long, exciting first night of experimentation in 2004, I determined that ChR2 was safely expressed and physiologically functional in neurons. The neurons tolerated expression levels of the protein that were high enough to mediate strong neural depolarizations. Even with brief pulses of blue light, lasting just a few milliseconds, the magnitude of expressed-ChR2 photocurrents was large enough to mediate single action potentials in neurons, thus enabling temporally precise driving of spike trains. Serendipity had struck—the molecule was good enough in its wild-type form to be used in neurons right away. I e-mailed Karl, “Tired, but excited.” He shot back, “This is great!!!!!”
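To make the logic of that first experiment concrete, here is a minimal simulation sketch (not code from the paper, and not the authors' analysis): a toy leaky integrate-and-fire neuron driven by an assumed ChR2-like conductance that opens only while blue light is on. Every parameter value is an illustrative guess, but the sketch shows why a brief, few-millisecond light pulse can evoke a single, precisely timed action potential.

# Toy model: millisecond blue-light pulses driving spikes via a ChR2-like conductance.
# Illustrative sketch only; all values are assumptions, not measurements from the paper.
import numpy as np

dt = 1e-4                          # time step: 0.1 ms
t = np.arange(0.0, 0.2, dt)        # simulate 200 ms

# Blue-light stimulus: 5-ms pulses delivered at 20 Hz (assumed)
light_on = (t % 50e-3) < 5e-3

# Assumed ChR2-like channel: fixed conductance while illuminated, reversal ~0 mV
g_chr2, E_chr2 = 10e-9, 0.0        # 10 nS, 0 mV

# Textbook leaky integrate-and-fire parameters (also assumed)
C, g_leak, E_leak = 100e-12, 10e-9, -70e-3
V_thresh, V_reset = -50e-3, -65e-3

V, spike_times_ms = E_leak, []
for i in range(len(t)):
    I_photo = g_chr2 * (E_chr2 - V) if light_on[i] else 0.0
    I_leak = g_leak * (E_leak - V)
    V += dt * (I_photo + I_leak) / C     # forward-Euler membrane update
    if V >= V_thresh:                    # threshold crossing: fire and reset
        spike_times_ms.append(round(t[i] * 1e3, 1))
        V = V_reset

print("spike times (ms):", spike_times_ms)

With these assumed numbers the membrane crosses threshold roughly four milliseconds after each light onset and fires exactly once per pulse, which is the temporally precise, pulse-locked spiking described above.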
Transitions and optical neural silencers
In January 2005, Karl finished his postdoc and became an assistant professor of bioengineering and psychiatry at Stanford. Feng Zhang, then a first-year graduate student in chemistry (and now an assistant professor at MIT and at the Broad Institute), joined Karl’s new lab, where he cloned ChR2 into a lentiviral vector, and produced lentivirus that greatly increased the reliability of ChR2 expression in neurons. I was still working on my PhD, and continued to perform ChR2 experiments in the Tsien lab. Indeed, about half the ChR2 experiments in our first optogenetics paper were done in Richard Tsien’s lab, and I owe him a debt of gratitude for providing an environment in which new ideas could be pursued. I regret that, in our first optogenetics paper, we did not acknowledge that many of the key experiments had been done there. When I started working in Karl’s lab in late March 2005, we carried out experiments to flesh out all the figures for our paper, which appeared in Nature Neuroscience in August 2005, a year after that exhilarating first discovery that the technique worked.
Around that same time, Guoping Feng, then leading a lab at Duke University (and now a professor at MIT), began to make the first transgenic mice expressing ChR2 in neurons.[6. B.R. Arenkiel et al., "In vivo light-induced activation of neural circuitry in transgenic mice expressing channelrhodopsin-2," Neuron, 54:205-18, 2007.]
Several other groups, including the Yawo, Herlitze, Landmesser, Nagel, Gottschalk, and Pan labs, rapidly published papers demonstrating the use of ChR2 in neurons in the months following.[7. T. Ishizuka et al., "Kinetic evaluation of photosensitivity in genetically engineered neurons expressing green algae light-gated channels," Neurosci Res, 54:85-94, 2006.],[8. X. Li et al., "Fast noninvasive activation and inhibition of neural and network activity by vertebrate rhodopsin and green algae channelrhodopsin," PNAS, 102:17816-21, 2005.],[9. G. Nagel et al., "Light activation of channelrhodopsin-2 in excitable cells of Caenorhabditis elegans triggers rapid behavioral responses," Curr Biol, 15:2279-84, 2005.],[10. A. Bi et al., "Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration," Neuron, 50:23-33, 2006.] Clearly, the idea had been in the air, with many groups chasing the use of channelrhodopsin in neurons. These papers showed, among many other groundbreaking results, that no chemicals were needed to supplement ChR2 function in the living mammalian brain.
CHANNELRHODOPSINS IN ACTION: A neuron expresses the light-gated cation channel channelrhodopsin-2 (green dots on the cell body) in its cell membrane (1). The neuron is illuminated by a brief pulse of blue light a few milliseconds long, which opens the channelrhodopsin-2 molecules (2), allowing positively charged ions to enter the cell and causing the neuron to fire an electrical pulse (3). A neural network containing different kinds of cells (pyramidal cell, basket cell, etc.), with the basket cells (small star-shaped cells) selectively sensitized to light activation. When blue light hits the neural network, the basket cells fire electrical pulses (white highlights), while the surrounding neurons are not directly affected by the light (4). The basket cells, once activated, can, however, modulate the activity in the rest of the network.
Credit: MIT McGovern Institute, Julie Pryor, Charles Jennings, Sputnik Animation, Ed Boyden
Almost immediately after I finished my PhD in October 2005, two months after our ChR2 paper came out, I began the faculty job search process. At the same time, I started a position as a postdoctoral researcher with Karl and with Mark Schnitzer at Stanford. The job-search process ended up consuming much of my time, and being on the road, I began doing bioengineering invention consulting in order to learn about other new technology areas that could be brought to bear on neuroscience. I accepted a faculty job offer from the MIT Media Lab in September 2006, and began the process of setting up a neuroengineering research group there.
Around that time, I began a collaboration with Xue Han, my then girlfriend (and a postdoctoral researcher in the lab of Richard Tsien), to revisit the original idea of using the N. pharaonis halorhodopsin to mediate optical neural silencing. Back in 2000, Karl and I had planned to pursue this jointly; there was now the potential for competition, since we were working separately. Xue and I ordered the gene to be synthesized in codon-optimized form by a DNA synthesis company, and, using the same Tsien-lab rig that had supported the channelrhodopsin paper, Xue acquired data showing that this halorhodopsin could indeed silence neural activity. Our paper[11. X. Han, E.S. Boyden, "Multiple-color optical activation, silencing, and desynchronization of neural activity, with single-spike temporal resolution," PLoS ONE, 2:e299, 2007.] appeared in the March 2007 issue of PLoS ONE; Karl’s group, working in parallel, published a paper in Nature a few weeks later, independently showing that this halorhodopsin could support light-driven silencing of neurons, and also including an impressive demonstration that it could be used to manipulate behavior in Caenorhabditis elegans.[12. F. Zhang et al., "Multimodal fast optical interrogation of neural circuitry," Nature, 446:633-39, 2007.] Later, both our groups teamed up to file a joint patent on the use of this halorhodopsin to silence neural activity. As a testament to the unanticipated side effects of following innovation where it leads you, Xue and I got married in 2009 (and she is now an assistant professor at Boston University).
I continued to survey a wide variety of microorganisms for better silencing opsins: the inexpensiveness of gene synthesis meant that it was possible to rapidly obtain genes codon-optimized for mammalian expression, and to screen them for new and interesting light-drivable neural functions. Brian Chow (now an assistant professor at the University of Pennsylvania) joined my lab at MIT as a postdoctoral researcher, and began collaborating with Xue. In 2008 they identified a new class of neural silencer, the archaerhodopsins, which were not only capable of high-amplitude neural silencing—the first such opsin that could support 100 percent shutdown of neurons in the awake, behaving animal—but also were capable of rapid recovery after having been illuminated for extended durations, unlike halorhodopsins, which took minutes to recover after long-duration illumination.[13. B.Y. Chow et al., "High-performance genetically targetable optical neural silencing by light-driven proton pumps," Nature, 463:98-102, 2010.] Interestingly, the archaerhodopsins are light-driven outward pumps, similar to bacteriorhodopsin—they hyperpolarize neurons by pumping protons out of the cells. However, the resultant pH changes are as small as those produced by channelrhodopsins (which have proton conductances a million times greater than their sodium conductances), and well within the safe range of neuronal operation. Intriguingly, we discovered that the H. salinarum bacteriorhodopsin, the very first opsin characterized in the early 1970s, was able to mediate decent optical neural silencing, suggesting that perhaps opsins could have been applied to neuroscience decades ago.
Beyond luck: systematic discovery and engineering of optogenetic tools
An essential aspect of furthering this work is the free and open distribution of these optogenetic tools, even prior to publication. To facilitate teaching people how to use these tools, our lab regularly posts white papers on our website* with details on reagents and optical hardware (a complete optogenetics setup costs as little as a few thousand dollars for all required hardware and consumables), and we have also partnered with nonprofit organizations such as Addgene and the University of North Carolina Gene Therapy Center Vector Core to distribute DNA and viruses, respectively. We regularly host visitors to observe experiments being done in our lab, seeking to encourage the community building that has been central to the development of optogenetics from the beginning.
As a case study, the birth of optogenetics offers a number of interesting insights into the blend of factors that can lead to the creation of a neurotechnological innovation. The original optogenetic tools were identified partly through serendipity, guided by a multidisciplinary convergence and a neuroscience-driven knowledge of what might make a good tool. Clearly, the original serendipity that fostered the formation of this concept, and that accompanied the initial quick try to see if it would work in nerve cells, has now given way to the systematized luck of bioengineering, with its machines and algorithms designed to optimize the chances of finding something new. Many labs, driven by genomic mining and mutagenesis, are reporting the discovery of new opsins with improved light and color sensitivities and new ionic properties. It is to be hoped, of course, that as this systematized luck accelerates, we will stumble upon more innovations that can aid in dissecting the enormous complexity of the brain—beginning the cycle of invention again.
Putting the toolbox to work
These optogenetic tools are now in use by many hundreds of neuroscience and biology labs around the world. Opsins have been used to study how neurons contribute to information processing and behavior in organisms including C. elegans, Drosophila, zebrafish, mouse, rat, and nonhuman primate. Light sources such as conventional mercury and xenon lamps, light-emitting diodes, scanning lasers, femtosecond lasers, and other common microscopy equipment suffice for in vitro use.
In vivo mammalian use of these optogenetic reagents has been greatly facilitated by the availability of inexpensive lasers with optical-fiber outputs; the free end of the optical fiber is simply inserted into the brain of the live animal when needed,[14. A.M. Aravanis et al., “An optical neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic technology,” J Neural Eng, 4:S143-56, 2007.] or coupled at the time of experimentation to an implanted optical fiber.
For mammalian systems, viruses bearing genes encoding for opsins have proven popular in experimental use, due to their ease of creation and use. These viruses achieve their specificity either by infecting only specific neurons, or by containing regulatory promoters that constrain opsin expression to certain kinds of neurons.
An increasing number of transgenic mouse lines are also now being created, in which an opsin is expressed in a given neuron type through transgenic methodologies. One popular hybrid strategy is to inject a virus that contains a Cre-activated genetic cassette encoding for the opsin into one of the burgeoning number of mice that express Cre recombinase in specific neuron types, so that the opsin will only be produced in Cre recombinase-expressing neurons. [15. D.Atasoy et al., “A FLEX switch targets Channelrhodopsin-2 to multiple cell types for imaging and long-range circuit mapping,” J Neurosci, 28:7025-30, 2008.]
In 2009, in collaboration with the labs of Robert Desimone and Ann Graybiel at MIT, we published the first use of channelrhodopsin-2 in the nonhuman primate brain, showing that it could safely and effectively mediate neuron type–specific activation in the rhesus macaque without provoking neuron death or functional immune reactions. [16. X. Han et al., “Millisecond-Timescale Optical Control of Neural Dynamics in the Nonhuman Primate Brain,” Neuron, 62:191-98, 2009.] This paper opened up a possibility of translating the technique of optical neural stimulation into the clinic as a treatment modality, although clearly much more work is required to understand this potential application of optogenetics.
Edward Boyden leads the Synthetic Neurobiology Group at MIT, where he is the Benesse Career Development Professor and associate professor of biological engineering and brain and cognitive science at the MIT Media Lab and the MIT McGovern Institute. This article is adapted from a review in F1000 Biology Reports, DOI:10.3410/B3-11 (open access at http://f1000.com/reports/b/3/11). For citation purposes, please refer to that version.
http://www.the-scientist.com/?articles.view/articleNo/30756/title/The-Birth-of-Optogenetics/
Monday, March 18, 2013
A NOBEL PRIZE FOR THE DARK SIDE
“Science today is about getting some results, framing those results in an attention-grabbing media release and basking in the glory.”
—Kerry Cue, Canberra Times, 5 October 2011
On October 4, 2011 the Nobel Prize in Physics was awarded to three astrophysicists for “THE ACCELERATING UNIVERSE.” Prof. Perlmutter of the University of California, Berkeley, was awarded half the 10m Swedish krona (US$1,456,000 or £940,000) prize, with Prof. Schmidt of the Australian National University and Prof. Riess of Johns Hopkins University’s Space Telescope Science Institute sharing the other half. The notion of an accelerating expansion of the universe is based on observations of supernovae at high redshift, known as the High-Z SN Search.
However, accelerating expansion requires a mysterious source of energy in space acting against gravity, dubbed “dark energy.” Calculations show that the energy required is equivalent to 73% of the total mass-energy of the universe! Historians will look back with disbelief and amusement at the ‘science’ of today. Following the equally mysterious ‘black holes’ and ‘dark matter,’ if we continue to discover darkness at the present rate we shall soon know nothing!
“The present boastfulness of the expounders and the gullibility of the listeners alike violate that critical spirit which is supposedly the hallmark of science.”
—Jacques Barzun, Science: The Glorious Entertainment
I attended a public lecture recently on “Cosmological Confusion… revealing the common misconceptions about the big bang, the expansion of the universe and cosmic horizons,” presented at the Australian National University by an award-winning Australian astrophysicist, Dr. Tamara Davis.
The particular interests of Dr. Davis are the mysteries posed by ‘dark matter’ and ‘dark energy,’ hence the title of this piece. The theatre was packed and the speaker animated like an excited schoolchild who has done her homework and is proud to show the class. Her first question to the packed hall was, “How many in the audience have done some physics?” It seemed the majority had. So it was depressing to listen to the questions throughout the performance and recognize that the noted cultural historian Jacques Barzun was right. Also, Halton Arp’s appraisal of the effect of modern education seemed fitting, “If you take a highly intelligent person and give them the best possible, elite education, then you will most likely wind up with an academic who is completely impervious to reality.”
Carl Linnaeus in 1758 showed characteristic academic hubris and anthropocentrism when he named our species Homo sapiens sapiens (“Sapiens” is Latin for “wise man” or “knowing man”). But it is questionable, as a recent (18th August) correspondent to Nature wrote, whether we “merit a single ‘sapiens,’ let alone the two we now bear.” To begin, big bang cosmology dismisses the physics principle of no creation from nothing. It then proceeds with the falsehood that Hubble discovered the expansion of the universe. He didn’t; he found the apparent redshift/distance relationship (actually a redshift/luminosity relationship), which to his death he did not feel was due to an expanding universe.
This misrepresentation is followed by the false assumption that the evolution of an expanding universe can be deduced from Einstein’s unphysical theory of gravity, which combines two distinct concepts, space and time, into some ‘thing’ with four dimensions called “the fabric of space-time.” I should like to know what this “fabric” is made from, and how matter can be made to shape it. Space is the concept of the relationship between objects in three orthogonal dimensions only. Time is the concept of the interval between events and has nothing to do with Einstein’s physical clocks. Clearly time has no physical dimension. David Harriman says, “A concept detached from reality can be like a runaway train, destroying everything in its path.” This is certainly true of Einstein’s theories of relativity.
Special relativity is no different to declaring that the apparent dwindling size of a departing train and the lower pitch of its whistle are due to a real shrinking of space on the train and slowing of its clocks. We know from experience that isn’t true. The farce must eventually play out like the cartoon character walking off the edge of a cliff and not falling until the realization dawns that there is no support. But how long must we wait? We are swiftly approaching the centennial of the big bang. The suspense has become tedious and it is costing us dearly. Some people are getting angry.
All of the ‘dark’ things in astronomy are artefacts of a crackpot cosmology. The ‘dark energy’ model of the universe demands that eventually all of the stars will disappear and there will be eternal darkness. In the words of Brian Schmidt, “The future for the universe appears very bleak.” He confirms my portrayal of big bang cosmology as “hope less.”
The Nobel Prize Committee had the opportunity to consider a number of rational arguments and evidence against an accelerating expanding universe:
1. General Relativity (GR) is wrong — we don’t understand gravity. Brian Schmidt mentions this possibility and labels it “heretical.” But GR must be wrong because space is not some ‘thing’ that can be warped mysteriously by the presence of matter. The math of GR explains nothing.
2. Supernovae are not understood. (Schmidt mentions this possibility too). This also should have been obvious because the theory is so complex and adjustable that it cannot predict anything. The model involving a sudden explosion of an accreting white dwarf is unverified and does not predict the link between peak luminosity and duration of supernovae type 1a ‘standard candles’ or the complex bipolar pattern of their remnants.
3. The universe is not expanding — Hubble was right. If the redshift is not simply a Doppler effect, “the region observed appears as a small, homogeneous, but insignificant portion of a universe extended indefinitely both in space and time.”
4. Concerning intrinsic redshift, Halton Arp and his colleagues long ago proved that there is, as Hubble wrote, “a new principle of nature,” to be discovered.
5. There can be no ‘dark energy’ in ‘empty space.’ E=mc² tells us that energy (E) is an intrinsic property of matter. There is no mysterious disembodied energy available to accelerate any ‘thing,’ much less accelerate the concept of space.
In failing to address these points the Nobel Committee perpetuates the lack of progress in science. We are paying untold billions of dollars for experiments meant to detect the phantoms springing endlessly from delusional theories. For example, gravitational wave telescopes are being built and continually refined in sensitivity to discover the imaginary “ripples in the fabric of space-time.” The scientists might as well be medieval scholars theorizing about the number of angels that could dance on the head of a pin. By the end of 2010, the Large Hadron Collider had already cost more than US$10 billion in its search for the mythical Higgs boson that is supposed to cause all other particles to exhibit mass! Here, once again, E=mc² shows that mass (m) is an intrinsic property of matter. It is futile to look elsewhere for a cause. In a scientific field, it is dangerous to rely on a single idea. The peril for cosmologists is clear. They have developed a monoculture: an urban myth called the big bang. Every surprising discovery must be force-fitted into the myth regardless of its absurdities. Scientists are presently so far ‘through the looking glass’ that the real universe we observe constitutes a mere 4% of their imaginary one.
The ‘Alice in Wonderland’ aspect of big bang cosmology is highlighted by the fact there is a competing ‘plasma cosmology,’ which is recognized by practical electrical engineers but unknown or dismissed by the mythmakers. Plasma cosmology deals with the dominant (>99%) form of matter in the visible universe. Plasma cosmology can demonstrate the formation and detailed rotation pattern of spiral galaxies, both by experiment and particle-in-cell computer simulation, using Maxwell’s laws of electromagnetism alone. The puny force of gravity can be ignored! Plasma cosmology can also explain the activity in the centres of galaxies without resort to the mythical dark gravitational beast — the ‘black hole.’ The Electric Universe goes further and also explains the gravitational effects observed at the center of the Milky Way in electrical terms. So much for the gravitational cosmology of the big bang! No invisible ‘dark matter’ need be conjured up and placed where needed to save the plasma model.
The most profound and important demand we must make of astrophysicists is to justify their unawareness of this freely available ‘second idea.’
‘Dark energy’ is supposed to make up 73% of the universe. The evidence interpreted in this weird way comes from comparing the redshift distances of galaxies with the brightness of their supernovae type 1a, used as a ‘standard candle.’ It was found that the supernovae in highly redshifted galaxies are fainter than expected, indicating that they are further away than previously estimated. This, in turn, implied a startling accelerating expansion of the universe, according to the big bang model. It is like throwing a ball into the air and having it accelerate upwards. So a mysterious ‘dark energy’ was invented, which fills the vacuum and works against gravity. The Douglas Adams’ “Infinite Improbability generator” type of argument was called upon to produce this ‘vacuum energy.’ The language defining vacuum energy is revealing: “Vacuum energy is an underlying background energy that exists in space even when the space is devoid of matter (free space). The concept of vacuum energy has been deduced from the concept of virtual particles, which is itself derived from the energy-time uncertainty principle.” You may notice the absurdity of the concept, given that the vacuum contains no matter, ‘background’ or otherwise, yet it is supposed to contain energy. Adams was parodying Heisenberg’s ‘uncertainty principle’ of quantum mechanics. Quantum mechanics is merely a probabilistic description of what happens at the scale of subatomic particles without any real physical understanding of cause and effect. Heisenberg was uncertain because he didn’t know what he was talking about. However, he was truthful when he wrote, “we still lack some essential feature in our image of the structure of matter.” The concept of ‘virtual particles’ winking in and out of existence defies the aforementioned first principle of physics, “Thou shalt not magically materialize nor dematerialize matter.” Calling that matter ‘virtual’ merely underscores its non-reality.
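For reference only (this is background to the claim being disputed above, not part of Thornhill's argument): the supernova comparison rests on the standard distance-modulus relation between the observed apparent magnitude m, the assumed absolute magnitude M of the 'standard candle,' and the inferred luminosity distance d_L, which in LaTeX notation reads

\mu = m - M = 5 \log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right)

The high-redshift type 1a supernovae appeared systematically fainter (larger mu) than a no-acceleration model predicts; reading that extra faintness as extra distance is what produced the accelerating-expansion claim, while reading it as intrinsic dimness is the Arp-based alternative offered below.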
Indeed, the ‘discovery’ of the acceleration of the expanding universe is an interpretation based on total ignorance of the real nature of stars and the ‘standard candle,’ the supernova type 1a. A supernova type 1a is supposed to be due to a hypothetical series of incredible events involving a white dwarf star. But as I have shown, a supernova is simply an electrical explosion of a star that draws its energy from a galactic circuit. The remarkable brilliance of a supernova, which can exceed that of its host galaxy for days or weeks, is explained by the kind of power transmission line failure that can also be seen occasionally on Earth. If such a circuit is suddenly opened, the electromagnetic energy stored in the extensive circuit is concentrated at the point where the circuit is broken, producing catastrophic arcing. Stars too can ‘open their circuit’ due to a plasma instability causing, for example, a magnetic ‘pinch off’ of the interstellar Birkeland current. The ‘standard candle’ effect and light curve is then simply due to the circuit parameters of galactic transmission lines, which power all stars.
What of the fainter and more short-lived supernovae in highly-redshifted galaxies? Arp has shown that faint, highly-redshifted objects, like quasars, are intrinsically faint because of their youth and not their distance. Quasars are ‘born’ episodically from the nucleus of active galaxies. They initially move very fast along the spin axis away from their parent. As they mature they grow brighter and slow down, as if gaining in mass. Finally they evolve into companion galaxies. The decreasing quasar redshift occurs in discrete steps which points to a process whereby protons and electrons go through a number of small, quantized (resonant) increases in mass as the electrical stress and power density within the quasar increases. The charge required comes via an electrical ‘umbilical cord,’ in the form of the parent galaxies’ nuclear jet. Based on Arp’s discovery and the electric model of galaxies and stars, both stars and supernovae type 1a are naturally dimmer, and the supernovae more short-lived, in high-redshift galaxies than in low-redshift galaxies because of the smaller galactic power density and lower mass (energy) of all subatomic particles making up the former.
But I don’t expect a Nobel Prize for this sensible explanation. Otherwise I could meet the fate of the hapless student who created the ‘Infinite Improbability generator’ in Douglas Adams’ wonderful Hitchhiker’s Guide to the Galaxy, “when just after he was awarded the Galactic Institute’s Prize for Extreme Cleverness he got lynched by a rampaging mob of respectable physicists who had finally realized that the one thing they really couldn’t stand was a smart-ass.”
The use of the title The Dark Side for Dr. Davis’ cosmology talk seems unconsciously apposite. It was Joseph Campbell who said, “We live our mythology.” And George Lucas attributes the success of his Star Wars films, which rely on a degenerate, evil ‘dark side,’ to reading Campbell’s books. The triumph of the big bang myth over common sense and logic supports Campbell’s assessment. And the showbiz appeal of Lucas’ mythic approach to storytelling is evident in the ‘dark side’ of cosmology. Scientists live their mythology too. Science’s “cosmic confusion” is self-inflicted.
The Electric Universe paradigm is distinguished by its interdisciplinary origin in explaining mythology by the use of forensic scientific techniques. It demands the lonely courage to give up familiar landmarks and beliefs. Sitting in the tame audience the other evening, listening to the professor of astrophysics, I was reminded of The Galaxy Song from Monty Python, which ends with the painfully perceptive lines, “And pray there is intelligent life somewhere up in space, ‘cause there’s bugger-all down here on Earth!”
Wal Thornhill
http://www.holoscience.com/wp/a-nobel-prize-for-the-dark-side/
Thursday, March 7, 2013
WHAT IS METAPROGRAMMING? - THE BLOG OF J.D. MOYER SYSTEMS FOR LIVING WELL
For a number of decades I’ve been interested in self-improvement via a method I like to call metaprogramming. I was first exposed to the term via John C. Lilly’s Programming and Metaprogramming in the Human Biocomputer (a summary report to Lilly’s employer at the time, The National Institute of Mental Health [NIMH]). Lilly explored the idea that all human behavior is controlled by genetic and neurological programs, and that via intense introspection, psychedelic drugs, and isolation tanks, human beings can learn to reprogram their own computers. Far out, man. As the fields of psychology, neurophysiology, and cognitive science have progressed, we’ve learned that the computer/brain analogy has its limitations. As for psychedelics, they have their limitations as well; they are so effective at disrupting rigid mental structures (opening up minds) that they can leave their heavy users a bit lacking in structure. From my own observations, what the heavy user of psychedelics stands to gain in creativity, he may lose in productivity, stability, or coherence.
Those issues aside, I still love the term metaprogramming. We are creatures of habit (programs), and one of the most effective (if not the only) ways we can modify our own behavior is by hacking our own habits. We can program our programs: thus, metaprogramming. This is a slightly different use of the term than Lilly’s; what I call metaprogramming he probably would have called self-metaprogramming (he used metaprograms to refer to higher-level programs in the human biocomputer: habits, learned knowledge, and cultural norms, as opposed to instincts and other “hardwired” behaviors).
Effective Metaprogramming
Effective metaprogramming requires a degree of self-awareness and self-observation. It also requires a forgiving attitude towards oneself; we can more clearly observe and take responsibility for our own behaviors (including the destructive ones) if we refrain from unnecessary self-flagellation.
Most importantly, effective metaprogramming requires clear targets for behavior. In my experience, coming up with these targets takes an enormous amount of time and energy. It’s hard to decide how you want to behave in every area of your life. It’s much easier to just continue on cruise control, relying on your current set of habits to carry you towards whatever fate you’re currently pointed at.
And what if you pick a target for your own behavior, implement it, and don’t like the results? Course corrections are part of the territory.
Religion (Do It Our Way)
If you don’t want to come up with your own set of behavioral guidelines, there’s always someone willing to offer (or sell) you theirs. Moses, lugging around his ten commandments, or Tony Robbins, with his DVDs.
Religion has historically offered various sets of metaprogramming tools; rules for how to behave, and in some cases, techniques and practices to help you out (like Buddhist meditation). If you decide to follow or join a religion, you have to watch out for the extra baggage. Some religions come with threats if you don’t follow the rules. The threats can be real (banishment from the group), or made up (banishment to Hell). Judaism is perhaps the exception; there are lots of rules but the main punishment for not following them (as far as I can tell) is that you simply become a less observant Jew.
I’m an atheist, more or less, and a fan of the scientific method and scientific inquiry. I also appreciate the work of the philosopher Daniel Dennett and the evolutionary biologist Richard Dawkins, both of whom have taken strong stands against organized religion. These stands are excusable insofar as they attack outmoded religious beliefs (creationism, the afterlife, the inferiority of women, and so forth) or crime (like the abuse of children by priests — Dawkins is actually trying to arrest the Pope). But religion offers much more than belief, and in some religions (like Judaism) belief matters very little. Religions offer behavioral systems, practices, rituals, myths, stories, and traditions, all of which are tremendous, irreplaceable cultural resources.
Some religions are attempting the leap into modernity. The Dalai Lama has taken an active interest in neuroscience. My wife’s rabbi is a self-proclaimed atheist. The Vatican has put out a statement suggesting that Darwinian evolution is not in conflict with the official doctrines of Catholicism (a nice PR move, but in my opinion it’s only because they don’t fully understand the principles of Darwinian evolution — Daniel Dennett called Darwin’s idea “dangerous” for good reason). In the long run, religions are institutions, and they’ll do what they have to in order to survive. The term “God” will be redefined, as necessary, to keep the pews warm and the tithing buckets full. Evolutionary biologists (with their logical, literal thinking) are tilting at windmills when they attack religion; they are no match for the nimble, poetic minds of theologians.
As much as I value religions in the abstract, I haven’t yet found one I can deal with personally. My wife finds the endless rules of Judaism to be invigorating; following them gives her real spiritual satisfaction. I find them to be bizarre and confusing (maybe this is because I’m not Jewish, but I suspect some Jews would agree with me).
Still, I have liberally borrowed from the world’s religions while devising my own metaprogramming system. Jesus’s Golden Rule. Islam’s dislike of debt. A good chunk of the Buddha’s Eightfold Path. And at least a few of the Ten Commandments.
Help Yourself
The self-help movement has been around at least as long as Dale Carnegie. Decades later, the psychedelic and cross-cultural explorations of the ’60s (Richard Alpert hanging out and dropping acid with Indian gurus, Timothy Leary dropping acid and reading The Tibetan Book of the Dead, Werner Erhard experimenting with Zen Buddhism) added fuel to the fire of the self-help movement. East meets West meets L.S.D. = Total Transformation of the Human Psyche! We all know how that turned out.
The modern self-help movement has had its share of both inspired individuals (like Tony Robbins) and charismatic but ultimately abusive figures (like the late Frederick Lenz).
I’m a fan of Robbins, for example, because his teachings are open (he does sell products and seminars, but he also gives away an enormous amount of content). Same goes for Steve Pavlina, Les Brown, and even Timothy Ferriss. All offer up their own insights and behavioral modification (metaprogramming) systems with a “try this and see if it works for you” attitude. It’s clear they are interested in spreading their message first, in making a living second, and not at all interested in controlling people or accumulating subjugates.
I’m also fascinated by the late anti-guru U.G. Krishnamurti (not to be confused with the more popular J. Krishnamurti). U.G., by all accounts, was unequivocally an enlightened being. The interesting bit was his absolute refusal to attempt to teach, pass on, or even recommend his own higher state of consciousness. Throughout his life, he refused to take on any followers or officially publish any of his writings. I’ll write about U.G. in more detail in another post.
Sinister Intentions
At the unfortunate intersection between religion and self-help lies the world of cults. Cult leaders and cult organizations can be spotted by the following attributes. Stay away!
- secret, often bizarre teachings
- brainwashing techniques (sleep deprivation, emotional trauma, isolation, sensory overload)
- enormous fees required for membership and/or access to teachings
- requirement to cut off contact from family and/or friends (nonmembers)
- coercive methods used to control members (intimidation, blackmail, even violence)
There’s nothing wrong with using somebody else’s self-improvement/behavioral modification/metaprogramming system, either ancient or modern, in whole or in part, as long as you shop around carefully. Or, you can invent your own. As a third alternative, if you are already happy with the current state of your habits (and where they are steering you in life), you may not feel compelled to bother with changing yourself.
Baby with the Bathwater
The field of self-improvement is full of half-truths, hucksters, pseudoscience, charlatans, snake oil and snake oil salesmen, bizarre beliefs, true believers, smelly hippies, narcissistic baby boomers, pitiful cases, get-rich-quick schemers, crystal wavers, cult leaders, and weird dieters, and is thus always ripe for parody (my favorite is this video parody of The Secret). A down-to-earth, rational person could be excused for steering clear of the self-improvement realm altogether.
On the other hand, the energy we invest in improving our own habits (programs), including habits of thought and perception, is probably one of the best investments we can make in our own lives. Even minor improvements can yield enormous dividends in the long run.
I’ll continue to share my thoughts about metaprogramming in this blog, including my core metaprogramming principles (not as a prescription, but rather in the spirit of open-source code sharing). As a quick preview, I’ll offer that my own principles involve the following areas:
- Maintaining a High Quality of Consciousness
- Taking Radical Responsibility for All Your Actions, and Every Aspect Of Your Life
- Creating a System of Functional Vitality
THE SCIENCE OF ENERGY AND THOUGHT. SUBCONSCIOUS MIND POWER
Sunday, March 25, 2012
A scientific approach explaining the power of thought. We have all heard it before: 'Your thoughts create your reality.' Well, new quantum physics studies support this idea.
Learn about recent research on how the mind can influence the behavior of subatomic particles and physical matter. If you enjoy the video, please pass it on to friends and family.
The power of our thoughts and feelings allows us to manifest our desires. The challenge is in harnessing our ever shifting perspectives so that we can focus upon the thoughts that can make a positive difference.
Working with our thoughts consciously allows our awareness and experience of life to unfold its potential. The key is to be open to change and express ourselves from a higher perspective on life.
Our past is but a memory and the future is in our imagination, right now in the present moment is our true point of power.
Dr. Joseph Murphy's book on the power of the subconscious mind is a practical guide to understanding and using the incredible powers you possess within you.
“By choosing your thoughts, and by selecting which emotional currents you will release and which you will reinforce, you determine the quality of your Light. You determine the effects that you will have upon others, and the nature of the experience of your life.” -Gary Zukav
http://www.knowledgeoftoday.org/2012/03/thought-definition-life-energy-power.html
Wednesday, March 6, 2013
DETECTION AND ATTRIBUTION OF CLIMATE CHANGE: A REGIONAL PERSPECTIVE
AUTHORS: Peter A. Stott, Nathan P. Gillett, Gabriele C. Hegerl, David J. Karoly, Dáithí A. Stone, Xuebin Zhang, Francis Zwiers. Article first published online: 5 MAR 2010
DOI: 10.1002/wcc.34 Copyright © 2010 John Wiley & Sons, Inc.
Abstract
The Intergovernmental Panel on Climate Change fourth assessment report, published in 2007, came to a more confident assessment of the causes of global temperature change than previous reports and concluded that ‘it is likely that there has been significant anthropogenic warming over the past 50 years averaged over each continent except Antarctica.’ Since then, warming over Antarctica has also been attributed to human influence, and further evidence has accumulated attributing a much wider range of climate changes to human activities. Such changes are broadly consistent with theoretical understanding, and climate model simulations, of how the planet is expected to respond. This paper reviews this evidence from a regional perspective to reflect a growing interest in understanding the regional effects of climate change, which can differ markedly across the globe. We set out the methodological basis for detection and attribution and discuss the spatial scales on which it is possible to make robust attribution statements. We review the evidence showing significant human-induced changes in regional temperatures, and for the effects of external forcings on changes in the hydrological cycle, the cryosphere, circulation changes, oceanic changes, and changes in extremes. We then discuss future challenges for the science of attribution. To better assess the pace of change, and to understand more about the regional changes to which societies need to adapt, we will need to refine our understanding of the effects of external forcing and internal variability. Copyright © 2010 John Wiley & Sons, Inc.
Mitchell et al. (2001) and International Ad Hoc Detection and Attribution Group (2005) give an extensive overview of detection and attribution methods. One of the most widely used, and arguably the most efficient, methods for detection and attribution is “optimal fingerprinting.” This is generalized multivariate regression that uses a maximum likelihood method (Hasselmann 1979, 1997; Allen and Tett 1999) to estimate the amplitude of externally forced signals in observations. The regression model attempts to represent the observed record y, organized as a vector in space and time, from a set of n response (signal) patterns that are concatenated in a matrix X using the linear assumption y = Xβ + u. Climate change signal patterns (also called fingerprints) are usually derived from model simulations [e.g., with a coupled general circulation model (CGCM)]. The vector β contains the scaling factors that adjust the amplitude of each of those signal patterns to best match the observed amplitude, and u is a realization of internal climate variability. Vector u is assumed to be a realization of a Gaussian random vector (see below for discussion). Long “control” simulations with CGCMs, that is, simulations without anomalous external forcing, are typically used to estimate the internal climate variability and the resulting uncertainty in the scaling factors β.
Inferences about detection and attribution in the standard approach are then based on hypothesis testing. For detection, this involves testing the null hypothesis that the amplitude of a given signal is consistent with zero (if this is not the case, it is detected); attribution is assessed using the attribution consistency test (Allen and Tett 1999; see also Hasselmann 1997), which evaluates the null hypothesis that the amplitude β is a vector of units (i.e., the model signal does not need to be rescaled to match the observations). A complete attribution assessment accounts for competing mechanisms of climate change as completely as possible, as discussed by Mitchell et al. (2001). Increasingly, Bayesian approaches are used as an alternative to the standard approach. In Bayesian approaches, inferences are based on a posterior distribution that blends evidence from the observations with independent prior information that is represented by a prior distribution [e.g., Berliner et al. 2000; Schnur and Hasselmann 2004; Lee et al. 2005; see International Ad Hoc Detection and Attribution Group (2005) for a more complete discussion and results]. Since Bayesian approaches can incorporate multiple lines of evidence and account elegantly for uncertainties in various components of the detection and attribution effort, we expect that they will be very helpful for variables with considerable observational and model uncertainty.
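To make the regression and the two hypothesis tests above concrete, here is a minimal numerical sketch in Python using synthetic data. The generalized least squares estimate of the scaling factors β, the covariance estimated from control-run segments, and the two-standard-error thresholds are illustrative assumptions for this sketch, not the implementation used in any of the cited studies.

import numpy as np

rng = np.random.default_rng(0)
p = 50            # number of space-time elements in the observation vector y
n = 2             # number of signal patterns (e.g., anthropogenic and natural)

X = rng.normal(size=(p, n))                    # model-derived fingerprints (columns of X)
beta_true = np.array([1.0, 0.6])               # "true" amplitudes used to build synthetic observations
control = rng.normal(size=(200, p))            # segments of an unforced control run
C = np.cov(control, rowvar=False) + 1e-6 * np.eye(p)   # internal-variability covariance estimate
u = rng.multivariate_normal(np.zeros(p), C)    # one realization of internal variability
y = X @ beta_true + u                          # synthetic observed record, y = X beta + u

# Generalized least squares: beta_hat = (X' C^-1 X)^-1 X' C^-1 y
A = X.T @ np.linalg.solve(C, X)
beta_hat = np.linalg.solve(A, X.T @ np.linalg.solve(C, y))
se = np.sqrt(np.diag(np.linalg.inv(A)))        # approximate uncertainty, treating C as known

for i, (b, s) in enumerate(zip(beta_hat, se)):
    detected = abs(b) > 2 * s                  # amplitude inconsistent with zero?
    unit_amplitude = abs(b - 1.0) < 2 * s      # attribution consistency: beta compatible with one?
    print(f"signal {i}: beta = {b:.2f} +/- {s:.2f}, detected={detected}, consistent with unity={unit_amplitude}")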
As we move toward detection and attribution studies on smaller spatial and temporal scales and with nontemperature variables, new challenges arise that are related to noise and uncertainty in signal patterns, dealing with non-Gaussian variables and facing data limitations. These are now discussed.
There is an increasing amount of observational evidence for changes within the ocean, both at regional and global scales (e.g., Bindoff and Church 1992; Wong et al. 1999; Wong et al. 2001; Dickson et al. 2001; Curry et al. 2003; Levitus et al. 2001; Aoki et al. 2005). Many of the observed changes in the ocean are from studies of the heat storage (Ishii et al. 2003; White et al. 2003; Willis et al. 2004; Levitus et al. 2005). These studies all show that the global heat content of the oceans has been increasing since the 1950s. For the period 1993–2003, this increase is between 0.7 and 0.86 W m⁻². The longer-term average increase in heat content (1955–98) over the 0–3000-m layer of the ocean is 0.2 W m⁻² or 0.037°C. These observed changes in ocean heat content are consistent with model-simulated changes in state-of-the-art coupled climate models, which can be detected and attributed to anthropogenic forcing (e.g., Barnett et al. 2001; Levitus et al. 2001; Reichert et al. 2002). However, total ocean heat content is affected by observational sampling uncertainty (Gregory et al. 2004). Since the ocean is a major source of uncertainty in future climate change (see Houghton et al. 2001), attempting to detect and quantify ocean climate change in variables focusing on ocean physics, such as water mass characteristics, will increase confidence in large-scale simulations of climate change in the ocean and our ability to simulate future ocean changes.
The water mass characteristics of the relatively shallow Sub-Antarctic Mode Water (SAMW) and of the subtropical gyres in the Indian and Pacific basins have been changing since the 1960s. In most studies, differences between earlier historical data (mainly from the 1960s) and more recent World Ocean Circulation Experiment (WOCE) data from the late 1980s and 1990s show that the SAMW is cooler and fresher on density surfaces (Bindoff and Church 1992; Johnson and Orsi 1997; Bindoff and McDougall 1994; Wong et al. 2001; Bindoff and McDougall 2000), indicative of a subduction of warmer waters [see Bindoff and McDougall (1994) for an explanation of this counterintuitive result]. These water mass results are supported by the strong increase in heat content in the Southern Hemisphere midlatitudes across both the Indian and Pacific Oceans during the 1993–2003 period (Willis et al. 2004). While most studies of the SAMW water mass properties have shown a cooling and freshening on density surfaces in the Indian and Pacific Oceans, the most recent repeat of the WOCE Indian Ocean section along 32°S in 2001 found a warming and salinity increase on density surfaces (indicative of subduction of cooler waters) in the shallow thermocline (Bryden et al. 2003). This result emphasizes the need to understand the processes involved in decadal oscillations in the subtropical gyres. Note, however, that the denser water masses below 300 m showed the same trend in water mass properties that had been reported earlier (Bindoff and McDougall 2000). Further evidence of the large-scale freshening and cooling of SAMW (Fig. 3) comes from an analysis of six meridional WOCE sections and three Japanese Antarctic Research Expedition sections from South Africa to 150°E. These sections were compared with historical data extending from the Subtropical Front (35°S) to the Antarctic Divergence (60°S), and from South Africa eastward to the Drake Passage. In almost all sections a cooling and freshening of SAMW has occurred, consistent with the subduction of warmer surface waters observed over the same period, summarized in Fig. 3.
The salinity minimum water in the North Pacific has freshened and in the southern parts of the Atlantic, Indian, and Pacific Oceans there has also been a corresponding freshening of the salinity minimum layer. The Atlantic freshening at depth is also supported by direct observations of a freshening of the surface waters (Curry et al. 2003). Taken together these changes in the Atlantic and North Pacific suggest a global increase in the hydrological cycle (and flux of freshwater into the oceans including melt waters from ice caps and sea ice) at high latitudes in the source regions of these two water masses (Wong et al. 1999). To the south of the Subantarctic Front, there is a very coherent pattern of warming and salinity increase on density surfaces <500 m (Fig. 3). This pattern of warming and salinity increase on isopycnals from 45°E to 90°W is consistent with the warming and/or freshening of surface waters (see Bindoff and McDougall 1994). Figure 3 summarizes the observed differences in the Southern Ocean, showing the cooling and freshening on density surfaces of SAMW north of the Subantarctic Front, freshening of Antarctic Intermediate Water, and warming and salinity increase of the Upper-Circumpolar Deep Water south of the Subantarctic Front.
These observed changes are broadly consistent with simulations of warming and changes in precipitation minus evaporation. Banks and Bindoff (2003) identified a zonal mode (or fingerprint) of difference in water mass properties in the anthropogenically forced simulation of the HadCM3 model between the 1960s and 1990s (Fig. 4). This fingerprint identified in HadCM3 is strikingly similar to the observed differences in water mass characteristics (Fig. 3) in the Southern Hemisphere. In the HadCM3 climate change simulation, the strength of the zonal mode in the Indo-Pacific Ocean tends to become stronger and increasingly significant from the 1960s onward. Its strength exceeds the 5% significance level 40% of the time, while this happens only occasionally (<5% of the time) in the 600-yr control simulation. This result suggests that the zonal signature of climate change for the Indo-Pacific basin (and Southern Ocean) is distinct from the modes of variability and suggests that the anthropogenic change can be separated from internal ocean variability. The similarity of observed and simulated water mass changes suggests that such changes can already be observed. For a quantitative detection approach, the relatively sparse sampling of ocean data needs to be emulated in models.
Results of detection and attribution studies in surface and atmospheric temperature and ocean heat content show consistently that a large part of the twentieth-century warming can be attributed to greenhouse gas forcing. We need to continue to attempt estimating the climate response to anthropogenic forcing in different components of the climate system, including the oceans, atmosphere, and cryosphere. We also need to more fully assess all the components of the climate system for their sensitivity to climate change signals and their signal-to-noise ratios for climate change, and synthesize estimates of anthropogenic signals from different climate variables. Also, detection studies are now starting to focus on spatial scales and variables that are important for climate change impacts. All these efforts raise both familiar and new questions for climate research.
For example, the detection and attribution of climate change requires long observed time series free from nonclimate-related time-dependent biases. For the analysis of extreme events it is also important that quality control routines do not weed out true extreme events. Blended remote sensing and in situ data, if quality controlled also with regard to extremes, may become very useful to overcome spatial sampling inadequacies. Since every source of data is subject to observational uncertainty, climate records that are based on different observing systems and analysis methods are important for quantifying and decreasing the uncertainty in detection and attribution results. Lessons learned from microwave satellite data, global land surface temperatures, and sea surface temperatures show that our initial estimates of uncertainty from a single dataset are often too low. Therefore, a high priority must be placed on adequate estimation of error, including time-dependent biases.
For reducing uncertainties in detection and attribution results we also need to keep improving our understanding and estimates of historical anthropogenic and natural radiative forcings, particularly those with largest uncertainties such as black carbon, effect of aerosols on clouds, or solar forcing. As the spatial scale upon which detection and attribution efforts focus decreases, forcings that are of minor importance globally, such as land use change, may become more important and need to be considered.
Furthermore, our understanding of model uncertainty needs to be improved, and more complete estimates of model error need to be included in detection and attribution approaches. Both ensembles of models with perturbed parameters (e.g., Allen and Stainforth 2002; Murphy et al. 2004) and true diversity in CGCMs used worldwide are important to sample model uncertainty. Aspects of climate change where there is a significant discrepancy between model simulation and observation, such as the magnitude of changes in annular modes or in the fingerprint of anthropogenic sea level pressure change, need to be understood.
Furthermore, different components of the climate system present their own challenges. In the oceans, it is important to exploit the signatures of climate change in water mass properties, heat and freshwater content, sea level, and other ocean tracers, such as oxygen concentration, together, to more reliably detect and attribute climate change and evaluate ocean model performance. The advantage of exploring water mass variations on density surfaces, in addition to inventories of heat and freshwater storage, is that water mass changes largely reflect changes in the surface forcing and are less prone to noise introduced by mesoscale eddies. Furthermore, water mass changes on density surfaces do not contribute to sea level rise and thus provide information about changes within the water column that is independent from sea level measurements.
In the atmosphere, detectable global precipitation changes in response to volcanism may be useful to evaluate simulated changes in the hydrological cycle even before greenhouse gas–induced changes in precipitation become detectable. Also, changes in extreme precipitation may become detectable before changes in total precipitation. Furthermore, the probability of an individual extreme event with and without greenhouse warming can be estimated to assess how much global warming contributes to changes in the risk of a particular extreme event.
1. Introduction
Evidence for an anthropogenic contribution to climate trends over the twentieth century is accumulating at a rapid pace [see Mitchell et al. (2001) and International Ad Hoc Detection and Attribution Group (2005) for detailed reviews]. The greenhouse gas signal in global surface temperature can be distinguished from internal climate variability and from the response to other forcings (such as changes in solar radiation, volcanism, and anthropogenic forcings other than greenhouse gases) for global temperature changes (e.g., Santer et al. 1996; Hegerl et al. 1997; Tett et al. 1999; Stott et al. 2001) and also for continental-scale temperature (Stott 2003; Zwiers and Zhang 2003; Karoly et al. 2003; Karoly and Braganza 2005). Evidence for anthropogenic signals is also emerging in other variables, such as sea level pressure (Gillett et al. 2003b), ocean heat content (Barnett et al. 2001; Levitus et al. 2001, 2005; Reichert et al. 2002; Wong et al. 2001), ocean salinity (Wong et al. 1999; Curry et al. 2003), and tropopause height (Santer et al. 2003b).
The goal of this paper is to discuss new directions and open questions in research toward the detection and attribution of climate change signals in key components of the climate system, and in societally relevant variables. We do not intend to provide a detailed review of present accomplishments, for which we refer the reader to International Ad Hoc Detection and Attribution Group (2005).
Detection has been defined as the process of demonstrating that an observed change is significantly different (in a statistical sense) from natural internal climate variability, by which we mean the chaotic variation of the climate system that occurs in the absence of anomalous external natural or anthropogenic forcing (Mitchell et al. 2001). Attribution of anthropogenic climate change is generally understood to require a demonstration that the detected change is consistent with simulated change driven by a combination of external forcings, including anthropogenic changes in the composition of the atmosphere and internal variability, and not consistent with alternative explanations of recent climate change that exclude important forcings [see Houghton et al. (2001) for a more thorough discussion]. This implies that all important forcing mechanisms, natural (e.g., changes in solar radiation and volcanism) and anthropogenic, should be considered in a full attribution study.
Detection and attribution provides therefore a rigorous test of the model-simulated transient change. In cases where the observed change is consistent with changes simulated in response to historical forcing, such as large-scale surface and ocean temperatures, these emerging anthropogenic signals enhance the credibility of climate model simulations of future climate change. In cases where a significant discrepancy is found between simulated and observed changes, this raises important questions about the accuracy of model simulations and the forcings used in the simulations. It may also emphasize a need to revisit uncertainty estimates for observed changes.
Beyond model evaluation, a further important application of detection and attribution studies is to obtain information on the uncertainty range of future climate change. Anthropogenic signals that have been estimated from the twentieth century can be used to extrapolate model signals into the twenty-first century and estimate uncertainty ranges based on observations (Stott and Kettleborough 2002; Allen et al. 2000). This is important since there is no guarantee that the spread of model output fully represents the uncertainty of future change. Techniques related to detection approaches can also be used to estimate key climate parameters, such as the equilibrium global temperature increase associated with CO2 doubling (“climate sensitivity”) or the heat taken up by the ocean (e.g., Forest et al. 2002) to further constrain model simulations of future climate change.
Section 2 briefly reviews methodological challenges associated with new directions in detection and attribution. Section 3 lists results and challenges in large-scale surface and atmospheric variables, while section 4 focuses on the ocean, and section 5 on impact-relevant variables. We conclude with some recommendations in section 6.
2. Methodological challenges and data requirements
a. Data requirements for detection and attribution
The observations analyzed in a detection approach should cover a long enough time period to distinguish an emerging anthropogenic signal, typically at least 20 yr, or better, 50 yr. Longer records generally allow for a more powerful detection of the anthropogenic signal from the background of natural variability, but the time period is limited by available observed data and samples for climate variability. The observed record also needs to be as homogeneous in time as possible; that is, free from artifacts due to changes in temporal sampling, instrument bias, instrument exposure or location, observing procedures, and processing algorithms.
Time-dependent biases for long time period temporal sampling (e.g., monthly, seasonal, and annual) have been addressed more frequently and effectively than biases associated with short temporal sampling (hourly and daily). However, analysis of climate extremes requires high-resolution temporal sampling. Difficulties arise from diurnal biases of temperature that are difficult to completely eliminate (see, e.g., DeGaetano 1999; Vose et al. 2003) and from corrections for short-duration precipitation integrations (hourly or less) versus longer time integrations (daily and monthly; Groisman et al. 1999).
Data availability is still limited, particularly in very high latitudes and the Tropics (see http://www.ncdc.noaa.gov/img/climate/research/2005/feb/map_prcp_02_2005_pg.gif). Also, there remains considerable data that are inaccessible in many developing and some developed countries. A U.S. program to rescue long-term data that are not electronically accessible (the U.S. Climate Data Modernization Program) is now working with other countries and the World Meteorological Organization (WMO) to help fill these gaps. It has already led to new data being made available worldwide. Supporting information about instrument status and the observing environment is critical to derive appropriate corrections for time-dependent biases. Therefore, it is very important to also maintain and rescue metadata. Detection methods may be helpful in prioritizing where observational data would be most useful to constrain model climate change fingerprints (see, e.g., Groisman et al. 2005). More needs to be done in that regard.
Reanalysis data are dynamically complete and can provide a valuable source of data for studying climate variability. However, at present, inhomogeneities in time, particularly during the time of transition to the satellite era, make these products problematic to use for detection (e.g., Chelliah and Ropelewski 2000). Limiting the analysis to the better-constrained satellite era, and analyzing data from several reanalyses, particularly more recent ones, can circumvent some of these problems (see Santer et al. 2003b; Gillett et al. 2003b), although caution is still needed.
b. Addressing error and noise in model-simulated patterns
Because CGCMs simulate natural internal variability as well as the response to external forcing, the CGCM simulated climate signals need to be averaged across an ensemble of simulations. Even then, signal estimates will contain remnants of the climate’s natural internal variability unless the ensemble size is very large. The presence of this noise in the signal may bias ordinary least squares estimates of β downward, particularly for signals that have small signal-to-noise ratios (such as signals from natural forcing or other anthropogenic forcings in the twentieth century). This can be addressed by estimating β with a total least squares algorithm (Allen and Stott 2003). Further processing of signals or fingerprints (see Santer et al. 1996; Hegerl et al. 1996) may be needed to reduce the amount of noise for variables and spatial scales that are more strongly affected by climate variability.
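The downward bias from noise in the fingerprint, and the total least squares alternative, can be illustrated with a single-signal sketch. The code below is not the Allen and Stott (2003) algorithm; it is the textbook SVD form of total least squares, with synthetic data and the simplifying assumption of equal noise levels in the fingerprint and the observations.

import numpy as np

rng = np.random.default_rng(1)
p = 500                                       # number of space-time elements
pattern = rng.normal(size=p)                  # the underlying "true" response pattern
beta_true = 1.0

x = pattern + 0.7 * rng.normal(size=p)        # noisy ensemble-mean fingerprint
y = beta_true * pattern + 0.7 * rng.normal(size=p)   # synthetic observations

beta_ols = (x @ y) / (x @ x)                  # ordinary least squares, biased toward zero

# Total least squares: smallest right singular vector of the augmented matrix [x y]
_, _, vt = np.linalg.svd(np.column_stack([x, y]), full_matrices=False)
v = vt[-1]
beta_tls = -v[0] / v[1]

print(f"OLS beta = {beta_ols:.2f}, TLS beta = {beta_tls:.2f}, true beta = {beta_true}")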
Model-simulated signals also invariably contain uncertainties associated with errors in models (such as the imperfect treatment of clouds) and forcings. Detection and attribution results are sensitive to this uncertainty, as demonstrated when results from different models and different forcing assumptions are compared (e.g., Santer et al. 1996; Hegerl et al. 2000; Allen et al. 2006). A first estimate of the combined model error and forcing uncertainty can be based on combining data from simulations forced with different estimates of radiative forcings, and simulated with different models. Gillett et al. (2002) demonstrate that such multimodel fingerprints lead to a more convincing attribution of observed warming between greenhouse gas and sulfate aerosol forcing. Taylor (K. Taylor 2005, personal communication) shows that averages from multiple models often outperform individual models in simulations of mean climate and variability.
For a complete understanding of the effects of forcing and model uncertainty, and a full representation of both uncertainties in detection and attribution approaches (as suggested by Hasselmann 1997), both forcing and model uncertainties need to be explored fully and separately. Using very large ensembles of models with perturbed parameters will improve the model error estimate (see Allen and Stainforth 2002; Murphy et al. 2004). However, if models share common errors, the estimate of model uncertainty will be biased low. It is therefore important to maintain true diversity in climate models used worldwide.
Also, appreciation of the complexities of the numerous types of anthropogenic and natural forcings is growing rapidly. Additional climate forcings have been identified recently, such as several types of aerosols, changes in land use, urbanization, and irrigation practices (e.g., Dolman et al. 2003; Bonan 1999; Charney 1975; Hahmann and Dickinson 1997). The importance of these forcings will vary between climate variables and spatial scales. For example, while land use change is thought to have a relatively small effect on globally or hemispherically averaged temperature (e.g., Matthews et al. 2004), it can have substantial effects locally (e.g., Baidya and Avissar 2002) and may therefore be important for the detection of regional climate change.
While forcing uncertainty affects the results of estimating contributions of external forcing to observed changes, detection methods can also provide help to constrain the magnitude of external forcings if their space–time signature is known (“top-down” forcing estimates; see, e.g., Anderson et al. 2003).
c. Estimates of internal climate variability
One of the primary concerns with current optimal fingerprinting techniques is related to the dependence upon models for estimates of internal variability. There are at least two prospects for improving our confidence in these estimates.
First, the paleoclimate community continues to make impressive progress in the reconstruction and interpretation of the climate record of the last 1–2 millennia (e.g., Jones and Mann 2004), although uncertainties remain (von Storch et al. 2004). However, the variability in paleoreconstructions is a convolution of internal climate variability, additional noise from proxy data, sampling uncertainty due to incomplete coverage of paleodata, and the climate response to uncertain external forcing. A comparison of this variability with unforced internal climate variability in climate models is not straightforward. One step toward such a comparison is comparing the residual variability in paleoclimatic reconstructions after removing effects from external forcing (e.g., Hegerl et al. 2006) with variability in control simulations; or, alternatively, comparing the variability in simulations of the last millennium with proxy data (e.g., Tett et al. 2006). Studies of the last millennium also help to better understand climate response to natural forcing.
Second, the CGCMs that are used for climate change research are also increasingly being used for seasonal and longer-range prediction—that is, for use in initial value problems rather than external forcing response problems. Prediction skill at seasonal to interannual time scales is not necessarily an indicator of a model’s potential skill in simulating the response to external forcing. However, prediction research provides an understanding of the circumstances under which we can make skillful seasonal to interannual forecasts, and it can help to validate the mechanisms that provide that skill, thus increasing confidence in estimates of internal variability from CGCMs. This should also provide further insights into the large-scale feedback mechanisms that determine the climate’s sensitivity to forcing, and the nature of its transient response to that forcing (e.g., Boer et al. 2004, 2005; Boer 2004), since these mechanisms are also likely an important source of predictive skill on seasonal to interannual time scales.
d. Linearity
Another, but substantially smaller, concern is the “linear” model that is used predominantly in climate change detection research. This model assumes that the responses to the various external agents (anthropogenic and natural) that are thought to have influenced the climate of the past century add linearly. There is little evidence to suggest that the response has not been additive on global scales. However, additivity may not continue to hold well on smaller space or time scales or in the future, and biogeochemical feedback mechanisms may cause nonadditive feedbacks on radiative forcing (e.g., Cox et al. 2000). A breakdown of additivity would pose a problem for the use of detection methods to constrain model projections of future climate, although it is possible to address this in the context of existing methods.
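When single-forcing and all-forcings ensembles are available, the additivity assumption can be probed directly: the all-forcings response minus the sum of the single-forcing responses should look like internal variability. A toy sketch of that check follows, with synthetic fields standing in for ensemble means and an arbitrary noise level.

import numpy as np

rng = np.random.default_rng(2)
p = 100                                        # grid points or time steps
sigma = 0.3                                    # assumed residual internal variability per point

resp_ghg = rng.normal(size=p)                  # stand-in for a greenhouse-gas-only ensemble mean
resp_nat = 0.4 * rng.normal(size=p)            # stand-in for a natural-forcings-only ensemble mean
resp_all = resp_ghg + resp_nat + sigma * rng.normal(size=p)   # stand-in for an all-forcings ensemble mean

residual = resp_all - (resp_ghg + resp_nat)
rms = np.sqrt(np.mean(residual ** 2))
print(f"RMS(ALL - GHG - NAT) = {rms:.2f}; additivity is supported if this is comparable to sigma = {sigma}")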
e. Non-Gaussian variables and extremes
A third area of concern is the extension of detection techniques so that they can be used to evaluate changes in quantities that are not inherently Gaussian, such as the detection and attribution of change in the frequency and intensity of extreme events. This will be a challenge because signal-to-noise ratios are expected to be low. There are two fundamental challenges.
The first challenge is methodological and not inherently difficult. Inferences in current optimal fingerprinting methods can be understood as based on a “likelihood function.” The form of that function, and thus the method of inference, is derived from the “link” between the climate change signals and the observations, y = Xβ + u, and the assumption that the errors are Gaussian. Research is already underway where the relationship between the observations and the signal is more complex than the simple equation above, and where the distribution function is replaced with one that is more appropriate for extremes (e.g., Kharin and Zwiers 2005; Wang et al. 2004; Zhang et al. 2005).
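To make the idea of swapping the Gaussian likelihood for an extreme-value one concrete, the sketch below fits a generalized extreme value (GEV) distribution to two synthetic samples of annual maxima and compares the fitted location parameters. The parameters and sample sizes are invented for illustration; this is not the method of Kharin and Zwiers (2005) or the other studies cited above.

from scipy.stats import genextreme

# Synthetic annual maxima for an "early" and a "late" period, with a shifted location parameter
early = genextreme.rvs(c=-0.1, loc=30.0, scale=2.0, size=50, random_state=42)
late = genextreme.rvs(c=-0.1, loc=31.5, scale=2.0, size=50, random_state=43)

c_e, loc_e, scale_e = genextreme.fit(early)    # maximum likelihood GEV fit
c_l, loc_l, scale_l = genextreme.fit(late)

print(f"fitted GEV location: early = {loc_e:.1f}, late = {loc_l:.1f}, shift = {loc_l - loc_e:.1f}")
# A detection study would compare this shift against shifts expected from internal variability alone.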
However, there are some additional and more difficult challenges in the detection of externally forced change in extremes. These include continuing challenges in resolving the scaling issues that hinder the comparison of CGCM simulated extremes with observed extremes (which will be discussed in section 5), and a lack of consensus between models on the simulation of present-day extremes (Kharin et al. 2005). Nonetheless, there is a pressing need for information in this area, and thus the detection community will increasingly venture into this area of research.
3. Large-scale change at the surface and in the atmosphere
a. Attribution of twentieth-century warming to causes
The conclusion of the third Intergovernmental Panel on Climate Change (IPCC) assessment report from detection and attribution studies was that “most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations” (Mitchell et al. 2001). This conclusion has been largely based on results using multiple regressions of observed surface air temperature onto fingerprints of greenhouse gas, sulfate aerosol or combined anthropogenic nongreenhouse gas emissions, and natural forcing (solar and/or volcanic forcing separately, or both combined). The effect of various uncertainties in detection and attribution results, such as forcing or model uncertainty as discussed above, is summarized in the term “likely.” Detection and attribution results from global surface temperature data will need to be updated with improved model versions, better estimates of forcing, and more complete estimates of uncertainty in order to better quantify and narrow the remaining uncertainty in detection results. A further issue that is being addressed but needs more work is observational uncertainty during the first half of the twentieth century (Smith and Reynolds 2003).
Progress has been made in understanding differences between surface and tropospheric temperature trends. The climate response to anthropogenic forcing in the vertical profile of temperature trends is characterized by stratospheric cooling and tropospheric warming. Such a climate response has been detected in radiosonde data since the 1960s (e.g., Santer et al. 1996; Tett et al. 1996; Allen and Tett 1999), even if only lower-tropospheric temperatures are considered (Thorne et al. 2003). Cooling of the stratosphere and warming of the troposphere leads to an increase in tropopause height, where clear anthropogenic and natural signals can be detected in a range of reanalysis data (Santer et al. 2003b).
The apparent lack of significant warming in the lower troposphere over the satellite era has raised concerns over the validity of estimates of surface warming (Christy et al. 2001; Christy and Norris 2004) or the ability of climate models to simulate the vertical coherence in temperature (e.g., Hegerl and Wallace 2002). This problem is discussed in International Ad Hoc Detection and Attribution Group (2005) and has been the subject of a U.S. Climate Change Science Program Synthesis Report (Karl et al. 2006). For understanding trends in satellite measurements of the upper troposphere, the influence of the stratosphere on that measurement needs to be considered. Recent analyses suggest that trends in surface and tropospheric temperature are consistent with how we expect them to vary according to the physics of the atmosphere if this stratospheric influence (and its temperature trends associated with stratospheric ozone depletion) as well as observational uncertainty in satellite data are considered (see Mears et al. 2003; Fu et al. 2004). The trends are also no longer inconsistent with model-simulated trends if observational uncertainty and natural forcing is considered (Santer et al. 2003a). However, the uncertainty in satellite data processing needed to be fully understood in order to yield an improved best guess and uncertainty range for satellite-derived tropospheric temperature trends (Karl et al. 2006). This example demonstrates a need for improved operation of satellite and in situ observing systems for monitoring climate.
b. Changes in global circulation and precipitation
The atmospheric circulation is driven by differential heating across the globe, and as external forcing perturbs these heating rates, it is natural to expect the atmospheric circulation to change in response (see, e.g., Palmer 1999). However, there is no widely accepted theory to describe how it is likely to change. As discussed in International Ad Hoc Detection and Attribution Group (2005), positive trends in the Northern and Southern Annular Modes have recently been observed (Hurrell 1996; Thompson et al. 2000; Thompson and Solomon 2002; Gillett et al. 2003a). The surface circulation is well-characterized by sea level pressure, which has the advantage that it is well observed and exhibits a high degree of spatial homogeneity. Gillett et al. (2003b) used detection and attribution methods to compare simulated and observed trends in sea level pressure and found a detectable response to a combined greenhouse gas and sulfate aerosol forcing using three different observational datasets and the mean simulated response from four models [the Second Hadley Centre Coupled Ocean–Atmosphere GCM (HadCM2), Third HadCM (HadCM3), CGCM1, and CGCM2; note that of these models only HadCM3 has no flux corrections]. These results have now been extended to include the 40-Yr European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-40). Although the pattern of simulated and observed change was similar, Gillett et al. (2003b) found that the magnitude of the observed sea level pressure change is substantially larger than that simulated in several climate models. Figure 1a shows changes in winter sea level pressure over the period 1958–98 from the ERA-40 dataset compared to the mean response simulated by four climate models (Fig. 1b). The simulated pattern of sea level pressure trends is similar to that from the reanalysis, but the magnitude is much smaller. This result is confirmed by Fig. 1c, which shows regression coefficients of sea level pressure change from several observed and reanalysis datasets against a multimodel mean simulated response to greenhouse gas and sulfate aerosol increases. The scaling factor (see section 2) is always significantly greater than one, indicating that the observed response is larger than that simulated by the models.
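The single-pattern scaling factor referred to here (regressing an observed change pattern onto a model-simulated one, with a null range derived from unforced noise maps) can be sketched as follows. All arrays are synthetic placeholders rather than actual sea level pressure trends, and the 2-sigma range is an illustrative choice.

import numpy as np

rng = np.random.default_rng(4)
p = 400                                        # grid points of a trend map
simulated = rng.normal(size=p)                 # multimodel-mean simulated response pattern
observed = 2.0 * simulated + rng.normal(size=p)   # synthetic "observed" trends with larger amplitude

scale = (simulated @ observed) / (simulated @ simulated)   # regression (scaling) coefficient

# Null range: regress unforced noise maps onto the same pattern
control = rng.normal(size=(100, p))
null = control @ simulated / (simulated @ simulated)
ci = 2 * null.std()

print(f"scaling factor = {scale:.2f} +/- {ci:.2f}; values significantly above 1 mean the observed change exceeds the simulated one")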
Why do climate models fail to predict the correct magnitude of sea level pressure changes? One reason may be that the studies discussed above do not include all the relevant external climate forcings. Stott et al. (2001) examine integrations of HadCM3 forced with all of the principal external forcings—greenhouse gas changes, sulfate aerosol changes, solar irradiance changes, volcanic aerosol, and stratospheric ozone depletion—and find that they do not simulate the recently observed North Atlantic Oscillation (NAO) increase. However, Gillett and Thompson (2003) examined the response to stratospheric ozone depletion in a model with high vertical resolution and found that realistic December–February trends in geopotential height over the Southern Hemisphere were simulated in response to ozone depletion. They also noted that simulation of these trends required high vertical resolution, explaining why they were not simulated by Stott et al. (2001). These results thus suggest that part of the discrepancy between simulated and observed circulation changes in the Southern Hemisphere noted by Gillett et al. (2003b) may be due to ozone depletion. However, the discrepancy over the Northern Hemisphere cannot be explained in this way.
Shindell et al. (1999) argue that the tropospheric circulation response to greenhouse gas increases is remotely forced from the stratosphere and that a high model upper boundary is necessary in order to simulate a realistic sea level pressure response to greenhouse gas increases, but their findings were not reproduced in a model with higher horizontal resolution (Gillett et al. 2002). Other authors have suggested that the North Atlantic Oscillation response to greenhouse gas increases is indirectly forced by changes in sea surface temperatures (Rodwell et al. 1999; Hoerling et al. 2001), but while some studies with prescribed sea surface temperatures are able to simulate changes in the NAO that are correlated with those that have been observed, none has yet been able to simulate the magnitude of the observed trend. Thus the reason for the difference in amplitude of the observed and simulated sea level pressure trends remains unknown.
How might we reconcile this difference between observed and simulated sea level pressure changes? First, it is important to identify and characterize sources of uncertainty in the observational datasets. Sufficiently long instrumental records of sea level pressure only exist for limited areas of the globe; thus we must either restrict our analysis to these well-observed regions or use sea level pressure derived from reanalyses. The National Centers for Environmental Prediction (NCEP) reanalysis exhibits larger negative trends in sea level pressure in the poorly observed Antarctic, which are not fully reproduced in the recent ERA-40 reanalysis, suggesting that the NCEP reanalysis trends may be overestimates there. A detection analysis applied to sea level pressure over the North Atlantic region (20°–80°N and 0°–60°W) over the period 1908–98 using the Trenberth analyses (Trenberth and Paolino 1980) indicated no better agreement between models and analysis-based sea level pressure data than over the 1958–98 period, but further analysis of historical data may help to better constrain uncertainties.
We also need to examine climate models to understand why they are in disagreement with observations, if the observed trends prove correct. For example, it is likely that the sea level pressure response is sensitive to the parameterizations used in a model. By making use of “perturbed physics” ensembles (e.g., Allen 2003b), in which physical parameterizations are systematically perturbed in a large ensemble of integrations, it may be possible to identify the model parameters to which circulation changes are most sensitive and that lead to a more realistic simulation of historical sea level pressure changes. This area of disagreement between models and observations may therefore ultimately prove useful in constraining model physics.
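As a toy illustration of how such a screening might be organized, the sketch below ranks perturbed parameters by the strength of their rank correlation with a scalar circulation diagnostic across ensemble members. The parameter names and the 100-member ensemble are invented for illustration only; a real perturbed-physics analysis would use the actual perturbed parameters, a properly defined circulation index, and methods that account for parameter interactions.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_parameter_sensitivity(params, diagnostic):
    """Rank perturbed parameters by |Spearman correlation| with a scalar
    circulation diagnostic (e.g., a simulated SLP or NAO trend).

    params     : dict of parameter name -> array of values, one per member.
    diagnostic : array of the diagnostic, one value per ensemble member.
    """
    scores = {name: abs(spearmanr(vals, diagnostic)[0])
              for name, vals in params.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Synthetic 100-member ensemble with three hypothetical parameters.
rng = np.random.default_rng(1)
params = {"entrainment_coeff": rng.uniform(0, 1, 100),
          "ice_albedo": rng.uniform(0, 1, 100),
          "gwd_strength": rng.uniform(0, 1, 100)}
# Make the diagnostic depend mostly on one parameter, plus noise.
slp_trend = 2.0 * params["gwd_strength"] + rng.normal(scale=0.5, size=100)
for name, score in rank_parameter_sensitivity(params, slp_trend):
    print(f"{name}: |rho| = {score:.2f}")
```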
As with atmospheric circulation changes, we also expect the hydrological cycle to respond to changes in external forcing of the climate system. Mitchell et al. (1987) argue that precipitation changes are controlled primarily by the energy budget of the troposphere: the latent heat of condensation being balanced by radiative cooling. Externally forced warming of the troposphere enhances the local cooling rate, thereby increasing precipitation, but this may be partly offset by a decrease in the efficiency of the cooling due to greenhouse gas increases (Allen and Ingram 2002; Yang et al. 2003; Lambert et al. 2004). Allen and Ingram (2002) demonstrate that the ensemble mean land average precipitation simulated by HadCM3 is significantly correlated with observed land average precipitation over the 1945–98 period, essentially detecting the influence of natural external forcing on precipitation. A similar result was obtained using all-forcings simulations of the Parallel Climate Model (PCM; Fig. 2; Gillett et al. 2004b). Consistent with this, Lambert et al. (2004) demonstrate that the response to shortwave forcing is detectable in observations, whereas the response to longwave forcing is not. These results therefore suggest that natural forcings such as volcanic aerosol and solar irradiance changes are likely to have had a larger influence on mean changes of total precipitation during the twentieth century than greenhouse gas changes, which is consistent with simulations of the response to volcanic forcing (Robock and Liu 1994). Gillett et al. (2004b) demonstrate that there is a detectable volcanic influence in terrestrial precipitation over the past 50 yr, using simulations of the PCM, although the model appears to underestimate the volcanic response.
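The energy-budget argument can be written schematically as
$$
L_v\,\Delta P \;\approx\; \Delta R \;\approx\; \kappa\,\Delta T \;-\; \Delta F_{\mathrm{GHG}},
$$
where $L_v$ is the latent heat of vaporization, $\Delta P$ the change in global mean precipitation, $\Delta R$ the change in net tropospheric radiative cooling, $\kappa\,\Delta T$ the enhanced cooling associated with tropospheric warming $\Delta T$ (with $\kappa$ an empirically determined sensitivity), and $\Delta F_{\mathrm{GHG}}$ the reduction in cooling efficiency caused directly by increased greenhouse gas absorption. This is only a simplified sketch of the constraint discussed by Allen and Ingram (2002), with notation chosen here for illustration; because the last term partly offsets the first, mean precipitation responds less strongly to greenhouse gas forcing than to shortwave forcings, such as volcanic or solar changes, that do not directly reduce the tropospheric cooling rate.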
Owing to the limited sensitivity of precipitation to greenhouse gas changes and the relatively small change in forcing over the observed period, Allen and Ingram (2002) argue that hydrological sensitivity (the change in mean total precipitation in response to a doubling of CO2) is not well constrained by available observations. A perfect model study that examined the detectability of precipitation changes in simulations of the PCM with natural and anthropogenic forcing also suggested that the response to greenhouse gas forcing should not yet be detectable in total mean precipitation (Ziegler et al. 2003) and that model uncertainty should make detection of annual precipitation changes difficult (Hegerl et al. 2004 note that changes in some aspects of extreme precipitation may be detectable earlier; see below). However, detection and attribution techniques are likely to be useful in examining the hydrological response to natural forcings, particularly volcanoes. In these cases, we may be able to use these techniques to answer the question of whether observed and simulated precipitation responses are consistent, and in the context of a perturbed physics ensemble, these techniques may be used to constrain model parameters by comparison with observations (to the extent that observations provide a constraint given their uncertainties). This in turn may help to constrain our predictions of future precipitation changes.
A major impediment to the
detection of anthropogenic influences on precipitation is that global
estimates of precipitation are not available, particularly before the
satellite era. Decadal changes recorded by satellite measurements of
rainfall are still uncertain. Station-based datasets over land are
incomplete, even during the past 50 yr, and are also affected by
observational uncertainties (Houghton et al. 2001).
4. Changes in the ocean
The water mass characteristics of the relatively shallow Sub-Antarctic Mode Water (SAMW) and the subtropical gyres in the Indian and Pacific basins have been changing since the 1960s. In most studies, comparisons of earlier historical data (mainly from the 1960s) with more recent World Ocean Circulation Experiment (WOCE) data from the late 1980s and 1990s show that the SAMW is cooler and fresher on density surfaces (Bindoff and Church 1992; Johnson and Orsi 1997; Bindoff and McDougall 1994; Wong et al. 2001; Bindoff and McDougall 2000), indicative of a subduction of warmer waters [see Bindoff and McDougall (1994) for an explanation of this counterintuitive result]. These water mass results are supported by the strong increase in heat content in the Southern Hemisphere midlatitudes across both the Indian and Pacific Oceans during the 1993–2003 period (Willis et al. 2004). While most studies of the SAMW water mass properties have shown a cooling and freshening on density surfaces in the Indian and Pacific Oceans, the most recent repeat of the WOCE Indian Ocean section along 32°S in 2001 found a warming and salinity increase on density surfaces (indicative of subduction of cooler waters) in the shallow thermocline (Bryden et al. 2003). This result emphasizes the need to understand the processes involved in decadal oscillations in the subtropical gyres. Note, however, that the denser water masses below 300 m showed the same trend in water mass properties that had been reported earlier (Bindoff and McDougall 2000). Further evidence of the large-scale freshening and cooling of SAMW (Fig. 3) comes from an analysis of six meridional WOCE sections and three Japanese Antarctic Research Expedition sections from South Africa to 150°E. These sections were compared with historical data extending from the Subtropical Front (35°S) to the Antarctic Divergence (60°S), and from South Africa eastward to the Drake Passage. In almost all sections a cooling and freshening of SAMW has occurred, consistent with the subduction of warmer surface waters observed over the same period, as summarized in Fig. 3.
The salinity minimum water in the North Pacific has freshened and in the southern parts of the Atlantic, Indian, and Pacific Oceans there has also been a corresponding freshening of the salinity minimum layer. The Atlantic freshening at depth is also supported by direct observations of a freshening of the surface waters (Curry et al. 2003). Taken together these changes in the Atlantic and North Pacific suggest a global increase in the hydrological cycle (and flux of freshwater into the oceans including melt waters from ice caps and sea ice) at high latitudes in the source regions of these two water masses (Wong et al. 1999). To the south of the Subantarctic Front, there is a very coherent pattern of warming and salinity increase on density surfaces <500 m (Fig. 3). This pattern of warming and salinity increase on isopycnals from 45°E to 90°W is consistent with the warming and/or freshening of surface waters (see Bindoff and McDougall 1994). Figure 3 summarizes the observed differences in the Southern Ocean, showing the cooling and freshening on density surfaces of SAMW north of the Subantarctic Front, freshening of Antarctic Intermediate Water, and warming and salinity increase of the Upper-Circumpolar Deep Water south of the Subantarctic Front.
These observed changes are broadly consistent with simulations of warming and changes in precipitation minus evaporation. Banks and Bindoff (2003) identified a zonal mode (or fingerprint) of difference in water mass properties in the anthropogenically forced simulation of the HadCM3 model between the 1960s and 1990s (Fig. 4). This fingerprint identified in HadCM3 is strikingly similar to the observed differences in water mass characteristics (Fig. 3) in the Southern Hemisphere. In the HadCM3 climate change simulation, the strength of the zonal mode in the Indo-Pacific Ocean tends to become stronger and increasingly significant from the 1960s onward. Its strength exceeds the 5% significance level 40% of the time, while this happens only occasionally (<5% of the time) in the 600-yr control simulation. This result suggests that the zonal signature of climate change for the Indo-Pacific basin (and Southern Ocean) is distinct from the modes of variability and suggests that the anthropogenic change can be separated from internal ocean variability. The similarity of observed and simulated water mass changes suggests that such changes can already be observed. For a quantitative detection approach, the relatively sparse sampling of ocean data needs to be emulated in models.
The Southern Ocean is an important source of the world’s water masses and a driver of the global thermohaline circulation. Banks and Wood (2002)
concluded from their analysis of the HadCM3 model results that the
geographic regions with the greatest signal-to-noise ratio for detecting
climate change trends were those associated with water masses that originate
in or near the Southern Ocean and have short residence times, such as SAMW.
By contrast, the North Atlantic was considered less suitable for climate
change detection because of its greater internal variability in this
model.
5. Detecting anthropogenic changes in impact-relevant variables
a. Toward detecting regional changes
Regional
and local changes in climate have a large impact on society. Recently,
it has been shown that an anthropogenic climate change signal is
detectable in continental-scale regions using surface temperature
changes over the twentieth century (Karoly et al. 2003; Stott 2003; Zwiers and Zhang 2003; Karoly and Braganza 2005).
It has also been shown that most of the observed warming over the last
50 yr in six continental-scale regions (including North America,
Eurasia, and Australia) was likely to be due to the increase in
greenhouse gases in the atmosphere (Stott 2003).
However, it becomes harder to detect climate change at decreasing
spatial scales, and scaling factors may become more model dependent.
This tendency is illustrated in Fig. 5, which is based on the approach by Zwiers and Zhang (2003).
The authors use the Canadian climate model to show that greenhouse gas
and sulfate aerosol climate change can be detected in the observed
warming in North America and Eurasia over the twentieth century. As the
spatial scales considered become smaller, it can be seen that the
uncertainty in estimated signal amplitudes (as demonstrated by the size
of the vertical bars) becomes larger, reducing the signal-to-noise ratio
in detection and attribution results (see also Stott and Tett 1998).
Since the signal-to-noise ratio depends on the local level of natural
variability and the size of the anthropogenic signal, results vary
between regions, such as between Eurasia and North America. The figure
also illustrates that most of the results hold if the variance of
internal climate variability in the control simulations is doubled [by
enhancing anomalies of the control simulation by a factor of √2;
see Fig. 5b].
This increases our confidence in the detection result, since estimates
of internal climate variability based on models are still uncertain (see
section 2).
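As a small numerical aside, scaling control-run anomalies by √2 doubles their variance, because variance scales with the square of a multiplicative factor; the snippet below simply checks this with synthetic anomalies. In a detection analysis, the inflated anomalies widen the noise distribution and hence the vertical uncertainty bars of Fig. 5b, so a detection result that survives this test is robust to an underestimate of internal variability by up to a factor of 2 in variance.

```python
import numpy as np

rng = np.random.default_rng(2)
control_anoms = rng.normal(size=(60, 300))     # synthetic control-run anomalies

inflated = np.sqrt(2.0) * control_anoms        # scale anomalies by sqrt(2)
ratio = inflated.var() / control_anoms.var()   # variance grows by the factor squared
print(f"variance ratio: {ratio:.2f}")          # ~2.00
```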
A different approach to detection of regional temperature change uses indices of area-average minimum and maximum surface temperature variations in the North American region (Karoly et al. 2003) and in the Australian region (Karoly and Braganza 2005) calculated from observations and a number of different climate models. Results show that recent climate change in those regions could not be explained by natural variability alone and was consistent with the response to anthropogenic forcing (Fig. 6).
The successful attribution of continental-scale climate change to anthropogenic forcing, as demonstrated in the results discussed above, can also be used to provide probabilistic estimates of future climate change at regional scales (in a similar manner as done for global scales; see, e.g., Stott and Kettleborough 2002).
Detection of regional climate change is very relevant for attributing impacts of climate change to external forcing. Gillett et al. (2004a) demonstrate a detectable anthropogenic influence on Canadian fire season temperature. They go on to detect the influence of anthropogenic climate change on forest area burnt, using a simple statistical model. This result links observed impacts directly to external forcing. Such an approach will become increasingly important for understanding climate change impacts, such as changes in ecosystems.
However,
the prospects of successful attribution of observed temperature change
at local scales (such as at a single station) are limited in the near
future, as the magnitude of local temperature variability, and even more
so of rainfall variability, is generally much larger than any regional
greenhouse climate change signal. The spatial scale at which a
detectable anthropogenic signal can be identified is likely to decrease
over time, as the magnitude of the projected greenhouse climate signal
increases.
b. Extreme events
Perhaps
one of the most unexpected developments in the area of climate change
detection and attribution is the recent focus on extreme climate events.
Certainly, from the perspective of climate impacts extreme weather and
climate events are very important, but until recently it was not
expected that they would exhibit detectable anthropogenic signals beyond
a shift due to changes in climate means in the near future. However,
the central Europe heat wave during the summer of 2003, which is
estimated to be a very extreme event in the context of long station
records, is consistent with hypothesized increases in temperature
variability and hence greater likelihood of extremes (Schär et al. 2004).
Results from climate model simulations suggest that the tails of the distribution of daily temperature data will change differently from seasonal mean data, suggesting that a separate detection of changes in temperature extremes is worthwhile. Figure 7 shows that two climate models simulate a stronger change in European cold winter days than in winter means, narrowing the future temperature distribution in a manner consistent with simulated changes in circulation, while the distribution of daily maximum temperature widens, leading to stronger hot extremes (Hegerl et al. 2004).
Climatological data show that the most intense precipitation occurs in warm regions (Fig. 8a). Also, higher temperatures lead to an increase in the water-holding capacity of the atmosphere, and hence to a greater proportion of total precipitation in heavy and very heavy events (Karl and Trenberth 2003). Therefore, all climate models analyzed to date show on average an increase in extreme precipitation events as global temperatures increase (Houghton et al. 2001; Semenov and Bengtsson 2002; Allen and Ingram 2002; Hegerl et al. 2003), with global increases in extreme precipitation exceeding increases in mean precipitation. Groisman et al. (1999) have demonstrated empirically, and Katz (1999) theoretically, that as mean precipitation increases, a greater proportion falls in heavy and very heavy events if the frequency of precipitation remains constant. Figure 8b illustrates that observed decadal trends in rainfall tend to show stronger changes in extreme than in mean rainfall. Although measurement uncertainties in these regional changes are considerable, the probability of 16 out of 16 regions showing stronger absolute changes in extremes than means by chance is very small. Note that this result, which applies to the 90th percentile of daily precipitation, is not inconsistent with model results that suggest that the magnitude of very rare events, such as the 20-yr extreme event, will increase almost everywhere with increasing temperature.
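A small numerical illustration of the Groisman/Katz argument, under the purely illustrative assumption that wet-day rainfall amounts follow a gamma distribution: if the mean increases through the scale parameter while the wet-day frequency is held fixed, the proportion of total rainfall falling above a fixed heavy threshold increases, so totals in heavy events grow by more than the mean does. The shape, scale, and threshold values below are arbitrary choices for the sketch, not fitted to any observations.

```python
import numpy as np

def heavy_share(scale, shape=0.8, threshold=25.0, n=2_000_000, seed=0):
    """Fraction of total wet-day rainfall falling in events above `threshold`
    (mm), for gamma-distributed daily amounts with fixed wet-day frequency."""
    rng = np.random.default_rng(seed)
    rain = rng.gamma(shape, scale, size=n)
    return rain[rain > threshold].sum() / rain.sum()

base, warmer = 8.0, 8.8    # a 10% increase in the mean via the scale parameter
share0, share1 = heavy_share(base), heavy_share(warmer)
print(f"mean rainfall up {warmer / base - 1:.0%}")
print(f"heavy-event share of total: {share0:.3f} -> {share1:.3f} "
      f"({share1 / share0 - 1:+.0%})")
```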
These findings draw attention to the necessity of closer examination of the changes in precipitation extremes and attempts to detect changes and attribute them to anthropogenic forcing. However, there are a number of difficulties to address before such a detection and attribution attempt becomes feasible.
First, as mentioned in the methods section, comparing observed and simulated changes in climate extremes requires comparing data that represent different spatial scales: while the typical global climate model grid box is on the order of one to several hundred kilometers wide, the observations are point measurements at individual stations. Therefore, a direct quantitative comparison between observed and simulated extremes is not feasible, and it is important to develop estimates of area-averaged changes in extreme precipitation (Groisman et al. 2005). A large number of stations is needed to provide reliable estimates of area-averaged precipitation (e.g., McCollum and Krajewski 1998; Osborn and Hulme 1997). Data from reanalysis projects (e.g., the ERA-40 reanalysis, Simmons et al. 2005; or the updated NCEP reanalysis, Kanamitsu et al. 2002) may be useful since they are more readily comparable to model data, but rainfall in these products is not well constrained by observations [see Kharin and Zwiers (2000) for extreme and Widmann et al. (2003) for mean rainfall]. On the other hand, if reanalysis rainfall extremes are driven by parameterizations, we might be able to learn from the success or failure of different reanalysis products about model parameterizations that improve the simulation of extreme rainfall.
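A minimal sketch of the station-to-grid-box area averaging referred to above: station values of an extreme index (here a single year's annual maximum precipitation) are binned into latitude–longitude boxes and averaged, so that the resulting field is closer in spatial scale to climate model output. The function and variable names are invented for illustration; a real analysis would also handle missing years, minimum-station requirements, and matching to the specific model grid.

```python
import numpy as np

def gridbox_average_extreme(station_lat, station_lon, station_annmax,
                            lat_edges, lon_edges):
    """Average station annual-maximum precipitation onto grid boxes so that
    it is comparable in spatial scale to climate model output.

    station_annmax       : array (n_stations,) of one year's annual maxima.
    lat_edges, lon_edges : 1D arrays of grid-box edges (increasing).
    Returns an (nlat, nlon) array, NaN where a box contains no station.
    """
    iy = np.digitize(station_lat, lat_edges) - 1
    ix = np.digitize(station_lon, lon_edges) - 1
    ny, nx = len(lat_edges) - 1, len(lon_edges) - 1
    total, count = np.zeros((ny, nx)), np.zeros((ny, nx))
    for y, x, v in zip(iy, ix, station_annmax):
        if 0 <= y < ny and 0 <= x < nx:     # drop stations outside the grid
            total[y, x] += v
            count[y, x] += 1
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Example with three hypothetical stations on a 2x2 grid:
lat_edges = np.array([40.0, 45.0, 50.0])
lon_edges = np.array([0.0, 5.0, 10.0])
grid = gridbox_average_extreme(np.array([41.0, 42.0, 47.0]),
                               np.array([1.0, 2.0, 6.0]),
                               np.array([30.0, 50.0, 80.0]),
                               lat_edges, lon_edges)
print(grid)   # box (0, 0) = mean(30, 50) = 40.0; box (1, 1) = 80.0; others NaN
```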
Today, station-based observations are the most reliable data for detection and attribution of climate change in rainfall extremes, but as the length and accuracy of remote sensing records increase, blends of these different types of data will become increasingly important for comparison with climate model simulations. Station data at daily, and possibly hourly, resolution still require additional work for integration into global datasets and for assessment of time-dependent biases caused by systematic changes in observing procedure or instruments. Fortunately, the impacts of such systematic changes in precipitation observations appear to be strongest for light precipitation measurements and affect the measurement of heavy and very heavy precipitation less (Groisman et al. 1999). Other inhomogeneities, such as changes in station location, may still affect heavy rainfall, though these are less spatially coherent.
A second difficulty in the detection of changes in extremes is that the term “climate extreme” encompasses a range of events that typically cause impacts. These range from frequent events, such as midlatitude frost days, to extremely rare and devastating events. Consequently, a large range of indices documenting extreme events has been proposed and applied (see Meehl et al. 2000; Frich et al. 2002). This varied use of indices for extremes has so far made comparison between results of model and observational studies of extremes difficult (see Houghton et al. 2001). Examples of indices of extremes include the most extreme event over a period of time, such as a year. This index may be interesting by itself (Hegerl et al. 2004) or can be used to fit an extreme value distribution that allows us to estimate extreme events with long return periods (see Zwiers and Kharin 1998; Kharin and Zwiers 2000, 2005; Wehner 2004). Other indices of extremes are defined as exceedances of a threshold, such as the 90th percentile of climatological temperature or rainfall. Threshold exceedances benefit from an extensive statistical literature on their properties. However, their application to climate variables with a strong seasonal cycle, such as temperature, leads to unanticipated problems. Thresholds that are based on estimated percentiles of climatological temperature are affected by sampling error. This error leads to systematic differences in exceedance rates between the climatological base period and the period outside it, causing substantial biases in trends in extremes (Zhang et al. 2005). These biases can be circumvented if the extremes indices are computed appropriately. This example demonstrates that indices for climate extremes must be carefully evaluated for their statistical properties, their applicability to climatologically different regions, and their robustness. Data from climate models are very valuable for testing the properties of indices, since they are abundant and relatively homogeneous.
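The base-period sampling problem noted by Zhang et al. (2005) can be demonstrated with a short Monte Carlo experiment: even for stationary data with no trend, a threshold estimated as the 90th percentile of a 30-yr base period is, on average, exceeded more often outside the base period than inside it, producing a spurious step (and hence a trend bias) at the base-period boundary. The experiment below is purely synthetic and only illustrates the qualitative effect; the actual biases depend on the index definition and the sample sizes involved.

```python
import numpy as np

def exceedance_rates(n_years=60, base_len=30, n_trials=20_000, seed=0):
    """In-base vs. out-of-base exceedance rates of a 90th-percentile threshold
    estimated from the base period, for stationary Gaussian data (no trend)."""
    rng = np.random.default_rng(seed)
    data = rng.normal(size=(n_trials, n_years))
    thresh = np.percentile(data[:, :base_len], 90, axis=1, keepdims=True)
    in_base = (data[:, :base_len] > thresh).mean()
    out_base = (data[:, base_len:] > thresh).mean()
    return in_base, out_base

inb, outb = exceedance_rates()
print(f"in-base exceedance rate:     {inb:.3f}")   # ~0.100 by construction
print(f"out-of-base exceedance rate: {outb:.3f}")  # systematically above 0.100
```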
A further consideration in the choice of indices is that indices for rarer, more extreme events will be more poorly sampled than indices of events that occur more frequently. This poorer sampling will almost certainly lead to a decrease in signal-to-noise ratio for detection. However, for extremes that occur at least once a year, this decrease in signal-to-noise ratio appears quite small for temperature or precipitation compared to seasonal mean data. If uncertainty in the spatial fingerprint of climate change in models is considered, changes in annual rainfall extremes may actually be more robustly detectable than changes in annual total rainfall (Hegerl et al. 2004; Fig. 9). This is caused by the above-mentioned stronger increases (in percent of climatological values) for extreme rainfall than for annual total rainfall, which lead to a more robustly detectable pattern of general increase in extreme rainfall. In contrast, annual total rainfall shows a model-dependent pattern of increases and decreases.
This should encourage attempts to detect changes in extremes. A first attempt was based on the Frich et al. (2002)
indices, using fingerprints from atmospheric model simulations with
fixed sea surface temperature and a bootstrap method for significance
testing (Kiktev et al. 2003).
Their results indicate that patterns of simulated and observed rainfall
extremes bear little similarity for the indices they selected, in
contrast to the similarity of trends depicted by Groisman et al. (2005). In contrast, some observed changes in temperature extremes can be detected and attributed to greenhouse gas forcing (Christidis et al. 2005).
c. Attributing individual extreme events probabilistically
A
new challenge for the detection and attribution community is
quantifying the impact of external climate forcing on the probability of
specific weather events. Detection and attribution studies to date have
tended to focus on properties of the climate system that can be
considered as deterministic. For example, the studies reviewed in Houghton et al. (2001)
that attribute large-scale temperature changes were all based on the
underlying statistical model of a deterministic change with superimposed
climate noise. The combination of observational uncertainty and natural
internal variability means that we cannot be completely sure what the
externally driven 100-yr change in global temperatures has been, but can
estimate a best guess and uncertainty range for the underlying
anthropogenic temperature change from observed trends.
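In generic form, this statistical model can be written (with notation chosen here for illustration) as
$$
y(t) \;=\; \beta\,x(t) \;+\; \varepsilon(t),
$$
where $y$ is the observed change, $x$ the model-simulated (deterministic) response to external forcing, $\varepsilon$ internal climate variability, and $\beta$ the scaling factor whose best estimate and uncertainty range are inferred from the observations.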
This distinction between the observed change in actual temperatures and the underlying change in expected temperatures is largely of academic interest when addressing global temperature trends, because the level of internal variability in 50- or 100-yr temperature trends is lower than the externally driven changes. This distinction becomes much more important when we consider changes in precipitation or extreme weather events. Nevertheless, even for these noisier variables, studies have tended to consider underlying deterministic changes in diagnostics such as expected occurrence frequency as the legitimate subject of attribution statements, rather than addressing the actual extreme events themselves. Indeed, in popular discussions of the climate change issue, it is frequently asserted that it is impossible in principle to attribute a single event in a chaotic system to external forcing.
Allen (2003a), Stone and Allen (2005), and Stott et al. (2004)
argue that quantitative attribution statements can be made regarding
individual events if they are couched in terms of the contribution of
external forcing to the risk (i.e., the probability) of an event of (or
greater than) the observed magnitude. This point is illustrated
conceptually in Fig. 10. Figure 10a
shows how the distribution of a hypothetical climate variable
(precipitation at a given location, e.g.) might alter under climate
change, with a narrower distribution changing to a broader distribution,
increasing the risk of an event exceeding a given threshold. For
assessing changes in risk, it will be necessary to account for
uncertainty in how the distribution has changed: in this case, there is a
5% chance that the risk of exceeding the threshold has actually
declined. Figure 10b (from Allen 2003a)
shows how results from such probabilistic analyses can be summarized,
showing a histogram of changes in risk resulting from the imposed
external forcing (top axis) and the fraction attributable risk (FAR) due
to that forcing (bottom axis). The FAR is an established concept in
epidemiological studies for attribution of cause and effect in
stochastic systems. It has been applied to attributing a part of the
probability of a heat wave as observed in central Europe in 2003 to
anthropogenic forcing (Stott et al. 2004).
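Following the usual epidemiological definition, the FAR compares the probability $P_1$ of exceeding the event threshold in the climate with the external forcing included against the corresponding probability $P_0$ in a climate without that forcing:
$$
\mathrm{FAR} \;=\; 1 - \frac{P_0}{P_1}.
$$
As a purely illustrative example, if the forcing is estimated to have raised the annual exceedance probability from $P_0 = 1/1000$ to $P_1 = 1/250$, the risk has quadrupled and $\mathrm{FAR} = 1 - 0.25 = 0.75$; that is, three-quarters of the risk of such an event would be attributable to the forcing. The histogram in Fig. 10b summarizes the uncertainty in such estimates.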
6. Recommendations and conclusions
The detection and attribution of climate change requires long observed time series free from nonclimate-related time-dependent biases. For the analysis of extreme events, it is also important that quality control routines do not weed out true extreme events. Blended remote sensing and in situ data, if quality controlled also with regard to extremes, may become very useful for overcoming spatial sampling inadequacies. Since every source of data is subject to observational uncertainty, climate records that are based on different observing systems and analysis methods are important for quantifying and reducing the uncertainty in detection and attribution results. Lessons learned from microwave satellite data, global land surface temperatures, and sea surface temperatures show that our initial estimates of uncertainty from a single dataset are often too low. Therefore, a high priority must be placed on adequate estimation of error, including time-dependent biases.
To reduce uncertainties in detection and attribution results, we also need to keep improving our understanding and estimates of historical anthropogenic and natural radiative forcings, particularly those with the largest uncertainties, such as black carbon, the effect of aerosols on clouds, or solar forcing. As the spatial scale upon which detection and attribution efforts focus decreases, forcings that are of minor importance globally, such as land use change, may become more important and need to be considered.
Furthermore, our understanding of model uncertainty needs to be improved, and more complete estimates of model error need to be included in detection and attribution approaches. Both ensembles of models with perturbed parameters (e.g., Allen and Stainforth 2002; Murphy et al. 2004) and true diversity in CGCMs used worldwide are important to sample model uncertainty. Aspects of climate change where there is a significant discrepancy between model simulation and observation, such as the magnitude of changes in annular modes or in the fingerprint of anthropogenic sea level pressure change, need to be understood.
Furthermore, different components of the climate system present their own challenges. In the oceans, it is important to exploit jointly the signatures of climate change in water mass properties, heat and freshwater content, sea level, and other ocean tracers, such as oxygen concentration, to more reliably detect and attribute climate change and to evaluate ocean model performance. The advantage of exploring water mass variations on density surfaces, in addition to inventories of heat and freshwater storage, is that water mass changes largely reflect changes in the surface forcing and are less prone to noise introduced by mesoscale eddies. Furthermore, water mass changes on density surfaces do not contribute to sea level rise and thus provide information about changes within the water column that is independent of sea level measurements.
In the atmosphere, detectable global precipitation changes in response to volcanism may be useful to evaluate simulated changes in the hydrological cycle even before greenhouse gas–induced changes in precipitation become detectable. Also, changes in extreme precipitation may become detectable before changes in total precipitation. Furthermore, the probability of an individual extreme event with and without greenhouse warming can be estimated to assess how much global warming contributes to changes in the risk of a particular extreme event.
We conclude that while the
anthropogenic signal continues to emerge from the background of natural
variability in more components of the climate system, and on decreasing
spatial scales, detection and attribution efforts will be vital to
provide a rigorous comparison between model-simulated and observed
change in both the atmosphere and oceans. Where climate change is
detected and attributed to external forcing, detection results can be
used to constrain uncertainties in future predictions based on observed
climate change. Where attribution fails due to discrepancies between
simulated and observed change, this provides important impetus
to revisit climate model and observational uncertainties.
Acknowledgments
GCH
was supported by NSF Grants ATM-0002206 and ATM-0296007, by NOAA Grant
NA16GP2683 and NOAA’s Office of Global Programs, by DOE in conjunction with
the Climate Change Data and Detection element, and by Duke University.
ERA-40 data used in this study have been obtained from the ECMWF data
server. We thank Jesse Kenyon and Daithi Stone for help and discussion,
and two anonymous reviewers and Tom Smith for their helpful comments.
REFERENCES
Allen, M. R., 2003a: Liability for climate change. Nature, 421, 891–892.
Allen, M. R., 2003b: Possible or probable? Nature, 425, 242.
Allen, M. R., and S. F. B. Tett, 1999: Checking for model inconsistency in optimal fingerprinting. Climate Dyn., 15, 419–434.
Allen, M. R., and W. J. Ingram, 2002: Constraints on future changes in climate and the hydrologic cycle. Nature, 419, 224–232.
Allen, M. R., and D. A. Stainforth, 2002: Towards objective probabilistic climate forecasting. Nature, 419, 228.
Allen, M. R., and P. A. Stott, 2003: Estimating signal amplitudes in optimal fingerprinting, Part I: Theory. Climate Dyn., 21, 477–491.
Allen, M. R., P. A. Stott, J. F. B. Mitchell, R. Schnur, and T. Delworth, 2000: Quantifying the uncertainty in forecasts of anthropogenic climate change. Nature, 407, 617–620.
Allen, M. R., and Coauthors, 2006: Quantifying anthropogenic influence on recent near-surface temperature. Surv. Geophys., in press.
Anderson, T. L., R. J. Charlson, S. E. Schwartz, R. Knutti, O. Boucher, H. Rodhe, and J. Heintzenberg, 2003: Climate forcing by aerosols—A hazy picture. Science, 300, 1103–1104.
Aoki, S., N. L. Bindoff, and J. A. Church, 2005: Interdecadal water mass changes in the Southern Ocean between 30°E and 160°E. Geophys. Res. Lett., 32, L07607, doi:10.1029/2004GL022220.
Baidya, R. S., and R. Avissar, 2002: Impact of land use/land cover change on regional hydrometeorology in the Amazon. J. Geophys. Res., 107, 8037, doi:10.1029/2000JD000266.
Banks, H. T., and R. A. Wood, 2002: Where to look for anthropogenic climate change in the ocean. J. Climate, 15, 879–891.
Banks, H. T., and N. L. Bindoff, 2003: Comparison of observed temperature and salinity changes in the Indo-Pacific with results from the coupled climate model HadCM3: Processes and mechanisms. J. Climate, 16, 156–166.
Barnett, T. P., D. W. Pierce, and R. Schnur, 2001: Detection of anthropogenic climate change in the world’s oceans. Science, 292, 270–274.
Berliner, L. M., R. A. Levine, and D. J. Shea, 2000: Bayesian climate change assessment. J. Climate, 13, 3805–3820.
Bindoff, N. L., and J. A. Church, 1992: Warming of the water column in the southwest Pacific Ocean. Nature, 357, 59–62.
Bindoff, N. L., and T. J. McDougall, 1994: Diagnosing climate change and ocean ventilation using hydrographic data. J. Phys. Oceanogr., 24, 1137–1152.
Bindoff, N. L., and T. J. McDougall, 2000: Decadal changes along an Indian Ocean section at 32°S and their interpretation. J. Phys. Oceanogr., 30, 1207–1222.
Boer, G. J., 2004: Long timescale potential predictability in an ensemble of coupled climate models. Climate Dyn., 23, 29–44.
Boer, G. J., B. Yu, S-J. Kim, and G. Flato, 2004: Is there observational support for an El Niño-like pattern of future global warming? Geophys. Res. Lett., 31, L06201, doi:10.1029/2003GL018722.
Boer, G. J., K. Hamilton, and W. Zhu, 2005: Climate sensitivity and climate change under strong forcing. Climate Dyn., 24, doi:10.1007/s00382-004-0500-3.
Bonan, G. B., 1999: Frost followed the plow: Impacts of deforestation on the climate of the United States. Ecol. Appl., 9, 1305–1315.
Braganza, K., D. J. Karoly, A. C. Hirst, P. Stott, R. J. Stouffer, and S. F. B. Tett, 2004: Simple indices of global climate variability and change: Part II—Attribution of climate change during the 20th century. Climate Dyn., 22, 823–838.
Bryden, H. L., E. L. McDonagh, and B. A. King, 2003: Changes in ocean water mass properties: Oscillation or trends? Science, 300, 2086–2088.
Charney, J. G., 1975: Dynamics of deserts and drought in the Sahel. Quart. J. Roy. Meteor. Soc., 101, 193–202.
Chelliah, M., and C. F. Ropelewski, 2000: Reanalysis-based tropospheric temperature estimates: Uncertainties in the context of global climate change detection. J. Climate, 13, 3187–3205.
Christidis, N., P. A. Stott, S. Brown, G. C. Hegerl, and J. Caesar, 2005: Detection of changes in temperature extremes during the second half of the 20th century. Geophys. Res. Lett., 32, L20716, doi:10.1029/2005GL023885.
Christy, J. R., and W. B. Norris, 2004: What may we conclude about tropospheric temperature trends? Geophys. Res. Lett., 31, L06211, doi:10.1029/2003GL019361.
Christy, J. R., D. E. Parker, S. J. Brown, I. Macadam, M. Stendel, and W. B. Norris, 2001: Differential trends in tropical sea surface and atmospheric temperatures. Geophys. Res. Lett., 28, 183–186.
Cox, P. M., R. A. Betts, C. D. Jones, S. A. Spall, and I. J. Tatterdell, 2000: Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model. Nature, 408, 184–187.
Curry, R., B. Dickson, and I. Yashayaev, 2003: Ocean evidence of a change in the freshwater balance of the Atlantic over the past four decades. Nature, 426, 826–829.
DeGaetano, A. T., 1999: A method to infer observation time based on day-to-day temperature variations. J. Climate, 12, 3443–3456.
Dickson, B., J. Hurrell, N. L. Bindoff, A. P. S. Wong, B. Arbic, B. Owens, S. Imawaki, and I. Yashayaev, 2001: The world during WOCE. WOCE Conference Volume, G. Siedler, J. A. Church, and J. Gould, Eds., Academic Press, 557–583.
Dolman, A. J., A. Verhagen, and C. A. Rovers, 2003: Global Environmental Change and Land Use. Kluwer Academic, 210 pp.
Easterling, D. R., G. A. Meehl, C. Parmesan, S. Changnon, T. R. Karl, and L. O. Mearns, 2000: Climate extremes: Observations, modeling and impacts. Science, 289, 2068–2074.
Forest, C. E., P. H. Stone, A. P. Sokolov, M. R. Allen, and M. D. Webster, 2002: Quantifying uncertainties in climate system properties with the use of recent observations. Science, 295, 113–117.
Frich, P., L. V. Alexander, P. Della-Marta, B. Gleason, M. Haylock, A. M. G. Klein-Tank, and T. Peterson, 2002: Observed coherent changes in climatic extremes during the second half of the twentieth century. Climate Res., 19, 193–212.
Fu, Q., C. M. Johanson, S. G. Warren, and D. J. Seidel, 2004: Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends. Nature, 429, 55–58.
Gillett, N. P., and D. W. J. Thompson, 2003: Simulation of recent Southern Hemisphere climate change. Science, 302, 273–275.
Gillett, N. P., F. W. Zwiers, A. J. Weaver, G. C. Hegerl, M. R. Allen, and P. A. Stott, 2002: Detecting anthropogenic influence with a multi-model ensemble. Geophys. Res. Lett., 29, 1970, doi:10.1029/2002GL015836.
Gillett, N. P., H. Graf, and T. Osborn, 2003a: Climate change and the NAO. The North Atlantic Oscillation, Geophys. Monogr., Vol. 134, Amer. Geophys. Union, 193–210.
Gillett, N. P., F. W. Zwiers, A. J. Weaver, and P. A. Stott, 2003b: Detection of human influence on sea level pressure. Nature, 422, 292–294.
Gillett, N. P., A. J. Weaver, F. W. Zwiers, and M. D. Flannigan, 2004a: Detecting the effect of human-induced climate change on Canadian forest fires. Geophys. Res. Lett., 31, L18211, doi:10.1029/2004GL020876.
Gillett, N. P., A. J. Weaver, F. W. Zwiers, and M. F. Wehner, 2004b: Detection of volcanic influence on global precipitation. Geophys. Res. Lett., 31, L12217, doi:10.1029/2004GL020044.
Gregory, J. M., H. T. Banks, P. A. Stott, J. A. Lowe, and M. D. Palmer, 2004: Simulated and observed decadal variability in ocean heat content. Geophys. Res. Lett., 31, L15312, doi:10.1029/2004GL020258.
Groisman, P. Ya., and Coauthors, 1999: Changes in the probability of heavy precipitation: Important indicators of climatic change. Climatic Change, 42, 243–283.
Groisman, P. Ya., R. W. Knight, D. R. Easterling, T. R. Karl, G. C. Hegerl, and V. N. Razuvaev, 2005: Trends in intense precipitation in the climate record. J. Climate, 18, 1343–1367.
Hahmann, A. N., and R. E. Dickinson, 1997: RCCM2–BATS model over tropical South America: Applications to tropical deforestation. J. Climate, 10, 1944–1964.
Hasselmann, K., 1979: On the signal-to-noise problem in atmospheric response studies. Meteorology of Tropical Oceans, D. B. Shaw, Ed., Royal Meteorological Society, 251–259.
Hasselmann, K., 1997: Multi-pattern fingerprint method for detection and attribution of climate change. Climate Dyn., 13, 601–612.
Hegerl, G. C., and J. M. Wallace, 2002: Influence of patterns of climate variability on the difference between satellite and surface temperature trends. J. Climate, 15, 2412–2428.
Hegerl, G. C., H. von Storch, K. Hasselmann, B. D. Santer, U. Cubasch, and P. D. Jones, 1996: Detecting greenhouse-gas-induced climate change with an optimal fingerprint method. J. Climate, 9, 2281–2306.
Hegerl, G. C., K. Hasselmann, U. Cubasch, J. F. B. Mitchell, E. Roeckner, R. Voss, and J. Waszkewitz, 1997: Multi-fingerprint detection and attribution of greenhouse-gas and aerosol-forced climate change. Climate Dyn., 13, 613–634.
Hegerl, G. C., P. Stott, M. Allen, J. F. B. Mitchell, S. F. B. Tett, and U. Cubasch, 2000: Detection and attribution of climate change: Sensitivity of results to climate model differences. Climate Dyn., 16, 737–754.
Hegerl, G. C., F. W. Zwiers, V. V. Kharin, and P. A. Stott, 2004: Detectability of anthropogenic changes in temperature and precipitation extremes. J. Climate, 17, 3683–3700.
Hegerl, G. C., T. Crowley, M. Allen, W. T. Hyde, H. Pollack, J. Smerdon, and E. Zorita, 2006: Detection of human influence on a new, validated 1500-year temperature reconstruction. J. Climate, in press.
Hoerling, M. P., J. W. Hurrell, and T. Y. Xu, 2001: Tropical origins for recent North Atlantic climate change. Science, 292, 90–92.
Houghton, J. T., Y. Ding, D. J. Griggs, M. Noguer, P. J. van der Linden, X. Dai, K. Maskell, and C. A. Johnson, 2001: Climate Change 2001: The Scientific Basis. Cambridge University Press, 881 pp.
Hurrell, J. W., 1996: Influence of variations in extratropical wintertime teleconnections on Northern Hemisphere temperature. Geophys. Res. Lett., 23, 655–668.
International Ad Hoc Detection and Attribution Group, 2005: Detecting and attributing external influences on the climate system: A review of recent advances. J. Climate, 18, 1291–1314.
Ishii, M., M. Kimoto, and M. Kachi, 2003: Historical subsurface temperature analysis with error estimates. Mon. Wea. Rev., 131, 51–73.
Johnson, G. C., and A. H. Orsi, 1997: Southwest Pacific Ocean water-mass changes between 1968/69 and 1990/91. J. Climate, 10, 306–316.
Jones, P. D., and M. E. Mann, 2004: Climate over past millennia. Rev. Geophys., 42, RG2002, doi:10.1029/2003RG000143.
Kanamitsu, M., W. Ebisuzaki, J. Woollen, S-K. Yang, J. J. Hnilo, M. Fiorino, and G. L. Potter, 2002: NCEP–DOE AMIP-II Reanalysis (R-2). Bull. Amer. Meteor. Soc., 83, 1631–1643.
Karl, T. R., and K. E. Trenberth, 2003: Modern global climate change. Science, 302, 1719–1723.
Karl, T. R., S. J. Hassol, C. D. Miller, and W. L. Murray, Eds., 2006: Temperature trends in the lower atmosphere: Steps for understanding and reconciling differences. Climate Change Science Program and the Subcommittee on Global Change Research Report, Washington, DC.
Karoly, D. J., and K. Braganza, 2005: Attribution of recent temperature changes in the Australian region. J. Climate, 18, 457–464.
Karoly, D. J., K. Braganza, P. A. Stott, J. M. Arblaster, G. A. Meehl, A. J. Broccoli, and D. W. Dixon, 2003: Detection of a human influence on North American climate. Science, 302, 1200–1203.
Katz, R. W., 1999: Extreme value theory for precipitation: Sensitivity analysis for climate change. Adv. Water Resour., 23, 133–139. [CrossRef] | |
Kharin, V. V., and F. W. Zwiers, 2000: Changes in the extremes in an ensemble of transient climate simulations with a coupled atmosphere–ocean GCM. J. Climate, 13, 3760–3788. [Abstract] | |
Kharin, V. V., and F. W. Zwiers, 2005: Estimating extremes in transient climate change simulations. J. Climate, 18, 1156–1173. [Abstract] | |
Kharin, V. V., F. W. Zwiers, and X. Zhang, 2005: Intercomparison of near-surface temperature and precipitation extremes in AMIP-2 simulations, reanalyses, and observations. J. Climate, 18, 5201–5223. [Abstract] | |
Kiktev, D., D. Sexton, L. Alexander, and C. Folland, 2003: Comparison of modeled and observed trends in indices of daily climate extremes. J. Climate, 16, 3560–3571. [Abstract] | |
Lambert, F. H., P. A. Stott, M. R. Allen, and M. A. Palmer, 2004: Detection and attribution of changes in 20th century land precipitation. Geophys. Res. Lett., 31.L10203, doi:10.1029/2004GL019545. [CrossRef] | |
Lee, T. C. K., F. W. Zwiers, X. Zhang, G. C. Hegerl, and M. Tsao, 2005: A Bayesian climate change detection and attribution assessment. J. Climate, 18, 2429–2440. [Abstract] | |
Levitus, S., J. Antonov, J. Wang, T. L. Delworth, K. W. Dixon, and A. J. Broccoli, 2001: Anthropogenic warming of the Earth’s climate system. Science, 292, 267–270. [CrossRef] | |
Levitus, S., J. Antonov, and T. Boyer, 2005: Warming of the world ocean, 1955–2003. Geophys. Res. Lett., 32.L02604, doi:10.1029/2004GL021592. | |
Matthews, H. D., A. J. Weaver, K. J. Meissner, N. P. Gillett, and M. Eby, 2004: Natural and anthropogenic climate change: Incorporating historical land cover change, vegetation dynamics and the global carbon cycle. Climate Dyn., 22, 461–479. [CrossRef] | |
McCollum, J. R., and W. F. Krajewski, 1998: Uncertainty of monthly rainfall estimates from rain gauges in the Global Precipitation Climatology Project. Water Resour. Res., 34, 2647–2654. [CrossRef] | |
Mears, C. A., M. C. Schabel, and F. J. Wentz, 2003: A reanalysis of the MSU channel 2 tropospheric temperature record. J. Climate, 16, 3650–3664. [Abstract] | |
Meehl, G. A., F. Zwiers, J. Evans, T. Knutson, L. Mearns, and P. Whetton, 2000: Trends in extreme weather and climate events: Issues related to modeling extremes in projections of future climate change. Bull. Amer. Meteor. Soc., 81, 427–436. [Abstract] | |
Mitchell, J. F. B., C. A. Wilson, and W. M. Cunningham, 1987: On CO2 climate sensitivity and model dependence of results. Quart. J. Roy. Meteor. Soc., 113, 293–322. [CrossRef] | |
Mitchell, J. F. B., D. J. Karoly, G. C. Hegerl, F. E. Zwiers, and J. Marengo, 2001: Detection of climate change and attribution of causes. Climate Change 2001: The Scientific Basis, J. T. Houghton et al., Eds., Cambridge University Press, 695–738. | |
Murphy, J. M., D. M. H. Sexton, D. N. Barnett, G. S. Jones, M. J. Webb, M. Collins, and D. A. Stainforth, 2004: Quantification of modelling uncertainties in a large ensemble of climate change simulations. Nature, 430, 768–772. [CrossRef] | |
Osborn, T. J., and M. Hulme, 1997: Development of a relationship between station and grid-box rainday frequencies for climate model evaluation. J. Climate, 10, 1885–1908. [Abstract] | |
Palmer, T. N., 1999: A nonlinear dynamical perspective on climate prediction. J. Climate, 12, 575–591. [Abstract] | |
Reichert, K. B., R. Schnur, and L. Bengtsson, 2002: Global ocean warming tied to anthropogenic forcing. Geophys. Res. Lett., 29.1525, doi:10.1029/2001GL013954. [CrossRef] | |
Robock, A., and Y. Liu, 1994: The volcanic signal in Goddard Institute for Space Studies three-dimensional model simulations. J. Climate, 7, 44–55. [Abstract] | |
Rodwell, M. J., D. P. Powell, and C. K. Folland, 1999: Oceanic forcing of the wintertime North Atlantic Oscillation and European climate. Nature, 398, 320–323. [CrossRef] | |
Santer, B. D., and Coauthors, 1996: A search for human influences on the thermal structure in the atmosphere. Nature, 382, 39–45. [CrossRef] | |
Santer, B. D., and Coauthors, 2003a: Influence of satellite data uncertainties on the detection of externally-forced climate change. Science, 300, 1280–1284. [CrossRef] | |
Santer, B. D., and Coauthors, 2003b: Contributions of anthropogenic and natural forcing to recent tropopause height changes. Science, 301, 479–483. [CrossRef] | |
Schär, C., P. L. Vidale, D. Lüthi, C. Frei, C. Häberli, M. A. Liniger, and C. Appenzeller, 2004: The role of increasing temperature variability in European summer heatwaves. Nature, 427.doi:10.1038/nature02300. | |
Schnur, R., and K. Hasselmann, 2004: Optimal filtering for Bayesian detection of climate change. Climate Dyn., 24.doi:10.1007/s00382-004-0456-3. | |
Semenov, V. A., and L. Bengtsson, 2002: Secular trends in daily precipitation characteristics: Greenhouse gas simulation with a coupled AOGCM. Climate Dyn., 19, 123–140. [CrossRef] | |
Shindell, D. T., R. L. Miller, G. A. Schmidt, and L. Pandolfo, 1999: Simulation of recent northern winter climate trends by greenhouse-gas forcing. Nature, 399, 452–455. [CrossRef] | |
Simmons, A. J., M. Hortal, G. Kelly, A. McNally, A. Untch, and S. Uppala, 2005: ECMWF analyses and forecasts of stratospheric winter polar vortex breakup: September 2002 in the Southern Hemisphere and related events. J. Atmos. Sci., 62, 668–689. [Abstract] | |
Smith, T. M., and R. W. Reynolds, 2003: Extended reconstruction of global sea surface temperatures based on COADS data (1854–1997). J. Climate, 16, 1495–1510. [Abstract] | |
Stone, D. A., and M. R. Allen, 2005: The end-to-end attribution problem: From emissions to impacts. Climatic Change, 71, 303–318. [CrossRef] | |
Stott, P. A., 2003: Attribution of regional-scale temperature changes to anthropogenic and natural causes. Geophys. Res. Lett., 30.1728, doi:10.1029/2003GL017324. [CrossRef] | |
Stott, P. A., and S. F. B. Tett, 1998: Scale-dependent detection of climate change. J. Climate, 11, 3282–3294. [Abstract] | |
Stott, P. A., and J. A. Kettleborough, 2002: Origins and estimates of uncertainty in predictions of 21st century temperature rise. Nature, 416, 723–726. [CrossRef] | |
Stott, P. A., S. F. B. Tett, G. S. Jones, M. R. Allen, W. J. Ingram, and J. F. B. Mitchell, 2001: Attribution of twentieth century temperature change to natural and anthropogenic causes. Climate Dyn., 17, 1–21. [CrossRef] | |
Stott, P. A., D. A. Stone, and M. R. Allen, 2004: Human contribution to the European heatwave of 2003. Nature, 432, 610–614. [CrossRef] | |
Tett, S. F. B., J. F. B. Mitchell, D. E. Parker, and M. R. Allen, 1996: Human influence on the atmospheric vertical temperature structure: Detection and observations. Science, 274, 1170–1173. [CrossRef] | |
Tett, S. F. B., P. A. Stott, M. R. Allen, W. J. Ingram, and J. F. B. Mitchell, 1999: Causes of twentieth century temperature change near the earth’s surface. Nature, 399, 569–572. [CrossRef] | |
Tett, S. F. B., and Coauthors, 2006: The impact of natural and anthropogenic forcings on climate and hydrology since 1550. Climate Dyn., in press. | |
Thompson, D. W. J., and S. Solomon, 2002: Interpretation of recent Southern Hemisphere climate change. Science, 296, 895–899. [CrossRef] | |
Thompson, D. W. J., J. M. Wallace, and G. C. Hegerl, 2000: Annular modes in the extratropical circulation. Part II: Trends. J. Climate, 13, 1018–1036. [Abstract] | |
Thorne, P. W., and Coauthors, 2003: Probable causes of late twentieth century tropospheric temperature trends. Climate Dyn., 21, 573–591. [CrossRef] | |
Trenberth, K. E., and D. A. Paolino, 1980: The Northern Hemisphere sea-level pressure data set: Trends, errors and discontinuities. Mon. Wea. Rev., 108, 855–872. [Abstract] | |
von Storch, H., E. Zorita, J. M. Jones, Y. Dimitriev, F. González-Rouco, and S. F. B. Tett, 2004: Reconstructing past climate from noisy data. Science, 306.doi:10.1126/science.1096109. [CrossRef] | |
Vose, R. S., C. N. Williams, T. C. Peterson, T. R. Karl, and D. R. Easterling, 2003: An evaluation of the time of observation bias adjustment in the U.S. Historical Climatology Network. Geophys. Res. Lett., 30.2046, doi:10.1029/2003GL018111. [CrossRef] | |
Wang, X. L., F. W. Zwiers, and V. R. Swail, 2004: North Atlantic ocean wave climate change scenarios for the twenty-first century. J. Climate, 17, 2368–2383. [Abstract] | |
Wehner, M. F., 2004: Predicted twenty-first-century changes in seasonal extreme precipitation events in the Parallel Climate Model. J. Climate, 17, 4281–4290. [Abstract] | |
White, W. B., M. D. Dettinger, and D. R. Cayan, 2003: Sources of global warming of the upper ocean on decadal period scales. J. Geophys. Res., 108.3248, doi:10.1029/2002JC001396. | |
Widmann, M., C. S. Bretherton, and E. P. Salathe, 2003: Statistical precipitation downscaling over the northwestern United States using numerically simulated precipitation as a predictor. J. Climate, 16, 799–816. [Abstract] | |
Willis, J. K., D. Roemmich, and B. Cornuelle, 2004: Interannual variability in upper-ocean heat content, temperature and thermosteric expansion on global scales. J. Geophys. Res., 109.C12036, doi:10.1029/2003JC002260. [CrossRef] | |
Wong, A., N. L. Bindoff, and J. A. Church, 1999: Large-scale freshening of intermediate waters in the Pacific and Indian Oceans. Nature, 400, 440–443. [CrossRef] | |
Wong, A., N. L. Bindoff, and J. A. Church, 2001: Freshwater and heat changes in the North and South Pacific Oceans between the 1960s and 1985–94. J. Climate, 14, 1613–1633. [Abstract] | |
Yang, F., A. Kumar, M. E. Schlesinger, and W. Wang, 2003: Intensity of hydrological cycles in warmer climates. J. Climate, 16, 2419–2423. [Abstract] | |
Zhang, X., G. Hegerl, F. W. Zwiers, and J. Kenyon, 2005: Avoiding inhomogeneity in percentile-based indices of temperature extremes. J. Climate, 18, 1641–1651. [Abstract] | |
Ziegler, A. D., J. Sheffield, E. P. Maurer, B. Nijssen, E. F. Wood, and D. P. Lettenmaier, 2003: Detection of intensification in global- and continental-scale hydrological cycles: Temporal scale of evaluation. J. Climate, 16, 535–547. [Abstract] | |
Zwiers, F. W., and V. K. Kharin, 1998: Changes in the extremes of the climate simulated by CCC GCM2 under CO2 doubling. J. Climate, 11, 2200–2222. [Abstract] | |
Zwiers, F. W., and X. Zhang, 2003: Toward regional-scale climate change detection. J. Climate, 16, 793–797. [Abstract] |
|
|
|
|
|
|
|
|
|
|
1. The word linear is used in a statistical sense in this context: it indicates linear scaling of the model-simulated space–time climate change signal. This use of the word linear does not describe the nature of the climate change signals that enter into the analysis; those signals may well evolve in a nonlinear fashion in time.
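
To make the statistical sense of "linear" concrete, here is a minimal sketch in Python. All numbers are synthetic and hypothetical (this is not code from any study discussed in the article): two model-simulated signal patterns enter the regression only through the linear term X @ beta, and generalized least squares estimates the scaling factors beta from pseudo-observations.

    # Minimal illustrative sketch of linear scaling of model-simulated signal
    # patterns (footnote 1). Data are synthetic; not the method code of any cited study.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200                                   # number of space-time elements (illustrative)
    X = np.column_stack([                     # two hypothetical signal patterns
        np.linspace(0.0, 1.0, n),             # e.g. a slowly growing "greenhouse gas" pattern
        np.sin(np.linspace(0.0, 3.0, n)),     # e.g. an oscillating "natural forcing" pattern
    ])
    beta_true = np.array([0.9, 0.4])          # scaling factors used to build the pseudo-observations
    cov = 0.05 * np.eye(n)                    # internal-variability covariance (white noise here)
    y = X @ beta_true + rng.multivariate_normal(np.zeros(n), cov)

    # Generalized least-squares estimate: beta_hat = (X' C^-1 X)^-1 X' C^-1 y.
    # Only the amplitudes beta are estimated; the space-time shape of each signal
    # comes from the model, which is what "linear" refers to above.
    cov_inv = np.linalg.inv(cov)
    beta_hat = np.linalg.solve(X.T @ cov_inv @ X, X.T @ cov_inv @ y)
    print(beta_hat)                           # close to beta_true for this synthetic example
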
2. There are fewer distributional concerns with most current applications of the optimal fingerprinting approach, regardless of whether the variable of interest is temperature, precipitation, or some other quantity. This is because almost all studies have applied the technique to data that are composed of space–time averages computed over long periods of time (e.g., a decade) and large regions (e.g., 10° × 10° or larger latitude–longitude boxes). According to the central limit theorem, these quantities should have distributions that are close to Gaussian.
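
As a quick numerical illustration of this central-limit argument (purely synthetic numbers, not tied to any dataset in the article), the sketch below draws strongly skewed daily values from a gamma distribution and shows that decade-long averages of many such values are nearly symmetric:

    # Synthetic illustration of footnote 2: individual daily values are strongly
    # skewed, but space-time averages over long periods are close to Gaussian.
    import numpy as np

    def skewness(x):
        # Sample skewness: third central moment divided by the cubed standard deviation.
        x = np.asarray(x, dtype=float)
        m, s = x.mean(), x.std()
        return ((x - m) ** 3).mean() / s ** 3

    rng = np.random.default_rng(1)
    daily = rng.gamma(shape=0.8, scale=2.0, size=(2000, 3650))  # ~10 yr of daily values per box
    decadal_means = daily.mean(axis=1)                          # one space-time average per box

    print("skewness of daily values:     %.2f" % skewness(daily.ravel()))
    print("skewness of decadal averages: %.2f" % skewness(decadal_means))
    # The averages have skewness near zero, i.e. their distribution is close to
    # Gaussian, which is why Gaussian-based fingerprint methods can be applied to them.
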