
Can we see the living brain? A review on in vivo visualization of the brain



Introduction

At first glance, the question “can we see the living brain?” sounds somewhat strange. Of course we can see the living brain; neurosurgeons see it every day. But if we look closer, beyond the “neurosurgical”, superficial, macroscopic view of the brain, the more pertinent questions arise: “How do we define the living brain versus brain death?” and “Which brain structures are we able to visualize in vivo?”

Before we proceed to the details of visualization, I would like to define “living” or “life” in a medical context. Which organ is decisive for life? As early as 2,500 years ago, this question was answered in different ways.

Aristotle, who performed vivisections of animals, was sure that the throbbing heart, which pumps warm blood to all parts of the body, was the center of life, whereas the greyish, cold brain merely had the task of cooling the hot blood.

Hippocrates, by contrast, put the brain at the center of human vitality, stating “κατα ταυτα νομιζω τον εγκεφαλον δυναμιν εχειν πλειστην εν τω ανθρωπω” (after all, I judge the brain to have the most power in man).

However, Aristotle won this dispute. Until the second half of the 20th century, life ended with the last beat of the heart, a consistent and clear-cut definition of life and death. And still, in our daily emotional life, it is the heart and not the brain that plays the central role. Even today, we are used to feeling “from the bottom of our heart” and never “from the bottom of our brain”.

From a medical perspective, the definition of death based on a conception of brain death became important only when transplantation medicine required living organs, including the heart, while the death of the individual now had to be defined as the death of his or her brain. In 1968, an ad hoc committee at Harvard Medical School advanced new criteria for determining death. It proposed that patients in irreversible coma with no discernible central nervous system activity were actually dead. Yet even today, almost fifty years later and with all our sophisticated technical investigations, the “true” definition of brain death remains tricky.

The Uniform Determination of Death Act (UDDA) is a draft state law that was approved for the United States in 1981 by the National Conference of Commissioners on Uniform State Laws, in cooperation with the American Medical Association, the American Bar Association, and the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. By the UDDA definition, an individual who has sustained either (1) irreversible cessation of circulatory and respiratory functions, or (2) irreversible cessation of all functions of the entire brain, including the brain stem (“whole brain” definition), is dead. A determination of death must be made in accordance with accepted medical standards. But the U.S. law differs substantially from all other brain death legislation in the world because it does not spell out the details of the neurologic examination or the accepted medical standards [1].

Therefore, brain death is in principle accepted by the medical world, but there is still no global consensus on diagnostic criteria [2, 3]. However, the “whole brain” definition of death has reached broad public acceptance and legal enactment in many countries. Despite this, the philosophical and ethical debate about the “whole brain” definition of death is far from being closed. As a European example of the ongoing controversy, the recent revision of the Swiss Academy of Medical Sciences guidelines for determining death can be cited [4].

Only recently, a definition of human death was developed in collaboration with the World Health Organization, namely: “the permanent loss of capacity for consciousness and all brainstem functions, as a consequence of permanent cessation of circulation or catastrophic brain injury” [5].

In summary, the living brain as the determinant of individual life is much more difficult to describe than the beating, or no longer beating, heart.

How to understand brain structure and function

The starting point of understanding the brain was the analysis of its structure and function by the pioneers of scientific anatomy in the 16th century, the most important being Andreas Vesalius [6]. Only from the late 19th century onwards did an increasingly precise picture emerge of the histological features of normal and pathological brain tissue and of the peculiarities of glial and nerve cells. All this knowledge was gained from investigations of the dead brain. Exemplary pioneers were the neurologist Alois Alzheimer and the neurosurgeon Harvey W. Cushing [7, 8].

From neuropathology we learned about the functions of brain areas by analyzing pathologies that interfere with defined functions or even abolish them.

Important techniques were developed: various cytological and histological methods for the visualization of structures and morphological patterns; histochemical and immunohistochemical methods for the visualization of chemical components such as fats, sugars or proteins; transmission electron microscopy for the visualization of subcellular structures such as organelles; scanning electron microscopy for the visualization of surface properties of cells and tissues; and molecular genetic approaches such as fluorescence in situ hybridization for the visualization of genetic mutations at the chromosomal level. However, all these techniques were, and still are, applied to samples of the dead brain or, at best, to ex vivo tissue or cells.

Visualization of the living brain

As a neuropathologist, and thus a morphologist myself, I admire the most impressive progress made by imaging techniques in unraveling various aspects of brain morphology and function. Therefore, I would like to present here a short historical overview of some imaging techniques which, on the one hand, have contributed enormously to our knowledge of the brain and, on the other hand, through their application in medicine, have led to important progress in our diagnostic and therapeutic options in clinical medicine.

Only when novel imaging techniques were developed, beginning in the second half of the 20th century, did we get a deeper impression of “whole brain” structure through computer assisted tomography (CT) and magnetic resonance imaging (MRI), of brain metabolism through functional magnetic resonance imaging (fMRI), positron emission tomography (PET) and near infrared spectroscopy (NIRS), and of the organization of white matter tracts through diffusion tensor imaging (DTI).

Novel visualization techniques were also developed for the investigation of living nerve cells or brain tissue at the microscopic level, such as optical coherence tomography (OCT), confocal laser scanning microscopy (CLSM) and stimulated emission depletion (STED) microscopy.

Computed tomography (CT)

In 1979, the Nobel Assembly of Karolinska Institutet decided to award the Nobel Prize in Physiology or Medicine jointly to Allan M. Cormack and Godfrey Newbold Hounsfield “for the development of computer assisted tomography” [9, 10].

Based on a prototype from 1969, the first commercial computer tomograph, the EMI Mark 1, was installed in 1972 in the Atkinson Morley Hospital in London. Right from the beginning it was used for examining the skull, with special emphasis on diseases of the brain. The method soon brought an enormous breakthrough in the radiological diagnosis of neurological diseases, owing to the precision and sensitivity of computed tomography.

The Nobel Assembly outlined that the basic feature of the method is that the X-ray tube, in a definite pattern of movement, permits the rays to sweep in many directions through a cross-section of the body or the organ being examined. The X-ray film is replaced by sensitive crystal detectors, and the signals emitted by the amplifiers when the detectors are struck by rays are stored and analyzed mathematically in a computer. The computer is programmed to rapidly reconstruct an image of the examined cross-section by solving a large number of equations with a corresponding number of unknowns. The image presented on the screen of the oscilloscope is drawn in a fine system of squares, a so-called matrix, in which each individual square corresponds to a part of the examined organ. Each element expresses the permeability to X-rays of the corresponding part of the organ. A fundamental peculiarity is that the image elements do not influence each other while the image is being reconstructed; in other words, there is no overlapping of elements in the image. Because the sensitivity of the crystal detectors and amplifiers is more than 100 times that of X-ray film, computed tomography can detect very subtle variations of tissue density. This means that the density resolution is exceptionally high. For all practical purposes one achieves a correct image of a thin section of organ tissue.
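To illustrate the principle of reconstruction from equations, the following toy sketch (my own illustration, not the algorithm used in the EMI scanner; all values are invented) treats a 2×2 slice of unknown attenuation values and recovers it from the sums measured along a handful of rays.

import numpy as np

# Unknown attenuation values of a 2x2 "organ slice", flattened as [a, b, c, d].
true_image = np.array([1.0, 0.2, 0.4, 0.9])

# Each row marks which squares of the matrix a given ray passes through.
rays = np.array([
    [1, 1, 0, 0],   # ray through the top row      -> a + b
    [0, 0, 1, 1],   # ray through the bottom row   -> c + d
    [1, 0, 1, 0],   # ray through the left column  -> a + c
    [0, 1, 0, 1],   # ray through the right column -> b + d
    [1, 0, 0, 1],   # one diagonal ray             -> a + d
], dtype=float)

measurements = rays @ true_image                   # what the detectors record

# Reconstruct the slice by solving the system of equations (least squares).
reconstruction, *_ = np.linalg.lstsq(rays, measurements, rcond=None)
print(reconstruction.reshape(2, 2))                # recovers the 2x2 slice

A real scanner does the same with many thousands of equations and image elements, which is why a computer is indispensable.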

Magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI)

In 2003, the Nobel Assembly of Karolinska Institutet decided to award the Nobel Prize in Physiology or Medicine jointly to Paul C. Lauterbur and Sir Peter Mansfield “for their discoveries concerning magnetic resonance imaging” [11, 12].

The Nobel Assembly outlined that the basic feature of the method is that atomic nuclei in a strong magnetic field rotate with a frequency that depends on the strength of the magnetic field. Their energy can be increased if they absorb radio waves of the same frequency (resonance). When the atomic nuclei return to their previous energy level, radio waves are emitted. These discoveries were awarded the Nobel Prize in Physics in 1952 (jointly to Felix Bloch and Edward Mills Purcell “for their development of new methods for nuclear magnetic precision measurements and discoveries in connection therewith”). During the following decades, magnetic resonance was used mainly for studies of the chemical structure of substances. At the beginning of the 1970s, Paul C. Lauterbur and Sir Peter Mansfield made pioneering contributions which later led to the applications of magnetic resonance in medical imaging.

Hannah Devlin, with additional contributions by Irene Tracey, Heidi Johansen-Berg and Stuart Clare from the Department of Clinical Neurology, University of Oxford, gave a most concise overview of the physics of MRI. The cylindrical tube of an MRI scanner houses a very powerful electromagnet. A typical research scanner has a field strength of 3 tesla (T), about 50,000 times greater than the Earth’s magnetic field. The magnetic field inside the scanner affects the magnetic nuclei of atoms. Normally atomic nuclei are randomly oriented, but under the influence of a magnetic field the nuclei become aligned with the direction of the field. When pointing in the same direction, the tiny magnetic signals from individual nuclei add up coherently, resulting in a signal that is large enough to measure. In MRI, it is the magnetic signal from hydrogen nuclei in water (H2O) that is detected. The key to MRI is that the signal from hydrogen nuclei varies in strength depending on the surroundings. This provides a means of discriminating between gray matter, white matter and cerebrospinal fluid in structural images of the brain.
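The dependence of the rotation frequency on the field strength mentioned above is the Larmor relation; the short sketch below (my own illustration, not from the cited overview) uses the approximate gyromagnetic ratio of hydrogen to show which radio frequencies are in resonance at common clinical and research field strengths.

# Larmor relation f = gamma * B for hydrogen nuclei.
GAMMA_H = 42.577e6   # gyromagnetic ratio of 1H in Hz per tesla (approximate)

def larmor_frequency_hz(field_tesla: float) -> float:
    """Precession (resonance) frequency of hydrogen nuclei at a given field strength."""
    return GAMMA_H * field_tesla

for b in (1.5, 3.0, 7.0):
    print(f"{b:.1f} T -> {larmor_frequency_hz(b) / 1e6:.1f} MHz")
# 1.5 T -> ~63.9 MHz, 3.0 T -> ~127.7 MHz, 7.0 T -> ~298.0 MHz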

The development of functional MRI in the 1990s is credited to Seiji Ogawa and Ken K. Kwong [13, 14].

fMRI works by detecting the changes in blood oxygenation and flow that occur in response to neural activity: when a brain area is more active it consumes more oxygen, and to meet this increased demand blood flow to the active area increases. Oxygen is delivered to neurons by hemoglobin in capillary red blood cells. Hemoglobin is diamagnetic when oxygenated but paramagnetic when deoxygenated. This difference in magnetic properties leads to small differences in the MR signal of blood depending on the degree of oxygenation.

Since blood oxygenation varies according to the level of neural activity, these differences can be used to detect brain activity. Deoxygenated areas show a low fMRI signal, oxygenated areas a high fMRI signal. This form of MRI is known as blood oxygenation level dependent (BOLD) imaging.
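In practice, BOLD activation is typically detected by comparing each voxel’s signal time course with a model of the expected response to a task. The following sketch is hypothetical (invented numbers, a deliberately crude haemodynamic response, simulated data) and only illustrates the idea using a simple correlation.

import numpy as np

rng = np.random.default_rng(0)
tr_s, n_vols = 2.0, 120                     # repetition time (s) and number of volumes
t = np.arange(n_vols) * tr_s

# Block design: 20 s of task alternating with 20 s of rest.
task = (np.floor(t / 20) % 2).astype(float)

# Crude haemodynamic response: delay and smooth the task boxcar.
hrf = np.exp(-(np.arange(0, 20, tr_s) - 6) ** 2 / 8)
regressor = np.convolve(task, hrf)[:n_vols]

# Simulated voxel: weak task-related BOLD signal buried in noise.
voxel = 0.5 * regressor + rng.normal(0, 1.0, n_vols)

r = np.corrcoef(regressor, voxel)[0, 1]
print(f"correlation of voxel with task regressor: r = {r:.2f}")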

fMRI has been used to map the visual, auditory and sensory regions and is moving toward higher brain functions such as cognition. fMRI is one of the most revolutionary tools for neuroscience and has become essential for the investigation of higher order cognitive functions and of brain interconnectivity and plasticity. Thus, fMRI allows for the visualization of a network of brain regions which are active when we think.

For instance, Altenmueller demonstrated that cortical activation during music processing reflects the auditory “learning biography,” the personal experiences accumulated over time. Listening to music, learning to play an instrument, formal instruction, and professional training result in multiple, in many instances multisensory, representations of music, which seem to be partly interchangeable and rapidly adaptive. In summary, as soon as we consider “real music” apart from laboratory experiments, we have to expect individually formed and quickly adaptive brain substrates, including widely distributed neuronal networks in both hemispheres [15].

But be careful! Boubela et al. showed that imaging the amygdala with functional MRI is confounded by multiple adverse factors, notably signal dropouts due to magnetic inhomogeneity and a low signal-to-noise ratio, making it difficult to obtain consistent activation patterns in this region. However, even when consistent signal changes are identified, they are likely to be due to nearby vessels, most notably the basal vein of Rosenthal. Using an accelerated fMRI sequence with a high temporal resolution (TR = 333 ms) combined with susceptibility-weighted imaging, they showed how signal changes in the amygdala region can be related to a venous origin [16]. This finding raises concerns about many conclusions that rely on fMRI evidence alone.

Near infrared spectroscopy (NIRS)

Besides fMRI, NIRS is another option for monitoring cerebral oxygenation.

In 1977, the idea of using NIRS to non-invasively measure cerebral tissue oxygenation was first introduced by Frans F. Jöbsis of Duke University [17]. He himself described that the relatively good transparency of biological materials in the near infrared region of the spectrum permits sufficient photon transmission through organs in situ for the monitoring of cellular events. Observations by infrared transillumination in the exposed heart and in the brain without surgical intervention showed that oxygen sufficiency for cytochrome a,a3 function, changes in tissue blood volume, and the average hemoglobin-oxyhemoglobin equilibrium can be recorded effectively and continuously for research and clinical purposes.

In short, the principle of NIRS is based on the fact that near-infrared light passes readily through skin and skull and is absorbed by certain biological molecules in the brain. The ability to noninvasively monitor cerebral oxygenation has made NIRS a unique tool for the assessment of brain oxygen sufficiency in health and disease.
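The conversion from absorbed light to oxygenation is commonly done with the modified Beer-Lambert law. The sketch below is a simplified illustration in which the extinction coefficients, source-detector distance and differential pathlength factor are rough, assumed numbers rather than calibrated values.

import numpy as np

# Illustrative extinction coefficients [1/(mM*cm)]: rows = wavelengths (760 nm, 850 nm),
# columns = (HbO2, HbR); deoxyhemoglobin dominates at 760 nm, oxyhemoglobin at 850 nm.
E = np.array([[0.6, 1.6],
              [1.1, 0.8]])

d_cm = 3.0     # source-detector separation in cm (assumed)
dpf = 6.0      # differential pathlength factor (tissue-dependent, assumed)

delta_od = np.array([0.010, 0.018])   # measured changes in optical density (example)

# Modified Beer-Lambert law: delta_OD = E @ delta_c * d * DPF  ->  solve for delta_c.
delta_c = np.linalg.solve(E, delta_od / (d_cm * dpf))
print(f"delta HbO2 = {delta_c[0]*1000:.3f} uM, delta HbR = {delta_c[1]*1000:.3f} uM")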

Positron emission tomography (PET)

Already in 1975, Michel Ter-Pogossian and Michael E. Phelps described a new technique for obtaining emission transaxial images of sections of organs containing positron-emitting radiopharmaceuticals [18]. The detection system is a hexagonal array of 24 NaI(Tl) detectors connected to coincidence circuits to achieve the “electronic” collimation of the annihilation photons. The image is formed by a computer-applied algorithm which provides quantitative reconstruction of the distribution of activity.

This technique allows the detection of the annihilation photons arising from a positron-emitting radionuclide used as tracer. For instance, 18F-labeled fluorodeoxyglucose (FDG) is an analogue of glucose. The concentration of the tracer indicates the metabolic activity of the brain tissue. PET can be combined with CT (PET/CT) and recently also with MRI (MR-PET). In 2007, the first MR-PET prototype was produced by Siemens.
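To make tracer concentrations measured at different times comparable, PET data are routinely corrected for the physical decay of the radionuclide. The short sketch below (my own illustration, with example numbers) does this for 18F, whose half-life is roughly 110 minutes.

import math

HALF_LIFE_F18_MIN = 109.8          # physical half-life of fluorine-18 (minutes)

def decay_correct(counts: float, minutes_since_injection: float) -> float:
    """Scale measured counts back to the activity present at injection time."""
    decay_factor = math.exp(-math.log(2) * minutes_since_injection / HALF_LIFE_F18_MIN)
    return counts / decay_factor

print(decay_correct(1.0e5, 60.0))  # counts measured 60 min after injection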

Diffusion tensor imaging (DTI)

In 1994, the group of Denis LeBihan described a new imaging modality, MR diffusion tensor imaging [19]. DTI may be used to map and characterize the three-dimensional diffusion of water as a function of spatial location. The diffusion tensor describes the magnitude, the degree of anisotropy, and the orientation of diffusion anisotropy [20]. Free molecules show random, diffusion-driven displacements and thus isotropic diffusion in all directions, whereas anisotropic diffusion is determined by areas of highly ordered mature axons. The most advanced application is certainly fiber tracking in the brain (tractography), which demonstrates connectivity patterns of the white matter and can be obtained from the diffusion anisotropy and the principal diffusion directions. By post-acquisition processing of the data for visualization, such as tensor calculation, color-encoded fractional anisotropy maps of the white matter can be established.
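As an illustration of how such fractional anisotropy maps are derived (my own sketch with an invented example tensor, not taken from the cited work), the tensor fitted in each voxel is diagonalized and its eigenvalues are combined into a single anisotropy index between 0 (isotropic diffusion) and 1 (strongly directional diffusion).

import numpy as np

# Example 3x3 diffusion tensor (units of 1e-3 mm^2/s), elongated along x,
# as might be fitted in a white-matter voxel.
D = np.array([[1.7, 0.0, 0.0],
              [0.0, 0.3, 0.0],
              [0.0, 0.0, 0.3]])

eigvals, eigvecs = np.linalg.eigh(D)          # eigenvalues and principal directions
l1, l2, l3 = sorted(eigvals, reverse=True)
md = (l1 + l2 + l3) / 3.0                      # mean diffusivity

# Fractional anisotropy from the eigenvalues.
fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
             / (l1 ** 2 + l2 ** 2 + l3 ** 2))
print(f"FA = {fa:.2f}")                        # high FA in this elongated example
print("principal diffusion direction:", eigvecs[:, np.argmax(eigvals)])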

However, the integrity and number of fibers cannot be assessed reliably. Some scientists suggest that we should rather speak of streamlines than of real fibers, since the fiber patterns are in fact reconstructions of water diffusion [21].

In 2005, the term “connectome” for the brain’s wiring was first introduced, the starting point for the Human Connectome Project [22, 23].

Optical coherence tomography (OCT)

In 1991, a technique called optical coherence tomography (OCT) was developed for noninvasive cross-sectional imaging in biological systems. OCT uses low-coherence interferometry to produce a two-dimensional image of optical scattering from internal tissue microstructures [24]. OCT is based on the interference of near-infrared radiation (900-1500 nm) reflected from living tissue, thus allowing cross-sectional visualization of microstructural morphology.
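The depth resolution OCT can reach follows from the coherence properties of the light source: for a source with a Gaussian spectrum it is set by the center wavelength and the spectral bandwidth. The sketch below is an illustration with example numbers, not values taken from the cited studies.

import math

def oct_axial_resolution_um(center_wavelength_nm: float, bandwidth_nm: float) -> float:
    """Axial (depth) resolution for a Gaussian-spectrum source:
    delta_z = 2*ln(2)/pi * lambda0^2 / delta_lambda."""
    delta_z_nm = (2 * math.log(2) / math.pi) * center_wavelength_nm ** 2 / bandwidth_nm
    return delta_z_nm / 1000.0

print(f"{oct_axial_resolution_um(840, 50):.1f} um")   # broadband 840 nm source
print(f"{oct_axial_resolution_um(1300, 60):.1f} um")  # 1300 nm source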

In 1993, OCT was introduced as a routine method for the investigation of the retina, achieving the highest depth resolution of in vivo retinal images to date. OCT visualizes the thin tissues of the retina by measuring reflections off the retinal surface with a resolution of up to 5 μm [25].

Confocal laser scanning microscopy (CLSM)

Confocal microscopy offers cellular resolution of up to 0.5 μm in depth. Confocal laser light illuminates cell structures in one focal plane. Illumination and detection light paths share a common focal (confocal) plane. Light from cell structures outside the focal plane contributes little to the image, as most of it is blocked by a pinhole. In this way depth selectivity is obtained, so that one can speak of optical sectioning of the tissue. Repetitive plane scanning along the z-axis results in a stack of 2D images which can be used to construct a 3D view of the area of interest.
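The final step, turning the scanned planes into a volume, is straightforward in code. The sketch below is hypothetical, with random data standing in for real optical sections; it stacks the 2D images and derives one simple 2D view by maximum-intensity projection along the z-axis.

import numpy as np

rng = np.random.default_rng(42)
n_slices, height, width = 30, 256, 256

# Stack of 2D optical sections (one confocal image per focal plane).
z_stack = rng.random((n_slices, height, width)).astype(np.float32)

volume = z_stack                      # the stack itself is already a 3D volume
mip = volume.max(axis=0)              # maximum-intensity projection along z

print("3D volume shape:", volume.shape)
print("projection shape:", mip.shape)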

In a preliminary ex vivo study, the use of a confocal endomicroscopic system designed for on-site neurosurgical examination of tissue samples on a separate workstation in the OR has been described [26]. The routine intraoperative use of confocal laser scanning microscopy to detect clear borders of tumor tissue or to differentiate between non-tumorous brain tissue and various tumor types is still at an experimental stage and needs more experience. However, intraoperative in vivo visualization of brain tumors by confocal laser scanning microscopy deserves further investigation and may become an option for neurosurgeons for quick differentiation between tumorous and non-tumorous tissue.

Single-molecule microscopy and stimulated emission depletion microscopy (STED)

In 2014, the Royal Swedish Academy of Sciences decided to award the Nobel Prize in Chemistry jointly to Eric Betzig, Stefan W. Hell and William E. Moerner “for the development of super-resolved fluorescence microscopy”.

The Academy outlined that Eric Betzig and William Moerner, working separately, laid the foundation for single-molecule microscopy. The method relies on the possibility to turn the fluorescence of individual molecules on and off. Scientists image the same area multiple times, letting just a few interspersed molecules glow each time. Superimposing these images yields a dense super-image resolved at the nanolevel. In 2006, Eric Betzig utilized this method for the first time [27, 28].

Using a complementary approach, Stefan Hell developed stimulated emission depletion (STED) microscopy in 2000, whereby two laser beams are utilized: one stimulates fluorescent molecules to glow, the other cancels out all fluorescence except for that in a nanometer-sized volume.

In 1873, the microscopist Ernst Abbe, working with Carl Zeiss in Jena, stipulated a physical limit for the maximum resolution of traditional optical microscopy: it could never become better than about 200 nm. The diffraction limit of resolution in light microscopy does not affect most imaging at the organ or tissue level. However, when zooming into cells, where a large number of subcellular structures are smaller than the wavelength of light, it becomes an obstacle for studying these structures in detail.
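A minimal computation of Abbe's diffraction limit, d = lambda / (2 NA), shows where the roughly 200 nm figure comes from; the wavelength and numerical aperture below are illustrative choices of my own.

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable distance for a conventional light microscope."""
    return wavelength_nm / (2.0 * numerical_aperture)

print(f"{abbe_limit_nm(550, 1.4):.0f} nm")   # green light, oil-immersion objective (~196 nm)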

By using STED microscopy, scanning over the sample yields an image with, as a number of commentators have put it, a “resolution” better than Abbe’s stipulated limit. Critically viewed, Abbe’s formula and limit remain as valid as before; STED microscopy, however, has improved the localization of stimulated fluorescent molecules embedded among others, thus reducing the blurring caused by neighbouring molecules.

Due to the achievements of Betzig, Moerner and Hell the optical microscope can now better peer into the nanoworld.

Do we really see the living brain?

The final answer has to be: yes, neurosurgeons do, on a macroscopic level; and no, all the others do not.

But what do all the others really see? Are we not like the observer in Plato’s cave allegory, who is tied up in a cave so that he can only look at one of its walls? Onto this wall the shadows of people passing the cave’s entrance are cast. The tied observer takes these shadows for reality, although they are but sensory appearances.

Therefore, we should be aware that, just as in neuropathology we are looking at reproducible staining artifacts, in basic neuroscience research and in the clinical neurosciences we are only looking at radiological, electric, magnetic, isotopic or other artificial signals which are not the living brain. In 1929, the Belgian painter René Magritte created a painting which he called “Ceci n’est pas une pipe” (this is not a pipe) (figure 1).

Figure 1

René Magritte’s painting “Ceci n’est pas une Pipe”, 1929.

The observer should realize that he is only looking at the painting of a pipe and not at the real thing. So, as cautious natural scientists, let us not forget to be very critical when looking at the “living brain” and let us avoid the treachery of images.
