


Muscle – Wikipedia

December 19th, 2016 6:43 am

Muscle is a soft tissue found in most animals. Muscle cells contain protein filaments of actin and myosin that slide past one another, producing a contraction that changes both the length and the shape of the cell. Muscles function to produce force and motion. They are primarily responsible for maintaining and changing posture, locomotion, as well as movement of internal organs, such as the contraction of the heart and the movement of food through the digestive system via peristalsis.

Muscle tissues are derived from the mesodermal layer of embryonic germ cells in a process known as myogenesis. There are three types of muscle: skeletal or striated, cardiac, and smooth. Muscle action can be classified as being either voluntary or involuntary. Cardiac and smooth muscles contract without conscious thought and are termed involuntary, whereas the skeletal muscles contract upon command.[1] Skeletal muscles in turn can be divided into fast and slow twitch fibers.

Muscles are predominantly powered by the oxidation of fats and carbohydrates, but anaerobic chemical reactions are also used, particularly by fast twitch fibers. These chemical reactions produce adenosine triphosphate (ATP) molecules that are used to power the movement of the myosin heads.[2]

The term muscle is derived from the Latin musculus meaning "little mouse" perhaps because of the shape of certain muscles or because contracting muscles look like mice moving under the skin.[3][4]

The anatomy of muscles includes gross anatomy, which comprises all the muscles of an organism, and microanatomy, which comprises the structures of a single muscle.

Muscle tissue is a soft tissue, and is one of the four fundamental types of tissue present in animals. Three types of muscle tissue are recognized in vertebrates: skeletal, cardiac, and smooth.

Cardiac and skeletal muscles are "striated" in that they contain sarcomeres that are packed into highly regular arrangements of bundles; the myofibrils of smooth muscle cells are not arranged in sarcomeres and so are not striated. While the sarcomeres in skeletal muscles are arranged in regular, parallel bundles, cardiac muscle sarcomeres connect at branching, irregular angles (called intercalated discs). Striated muscle contracts and relaxes in short, intense bursts, whereas smooth muscle sustains longer or even near-permanent contractions.

Skeletal (voluntary) muscle is further divided into two broad types: slow twitch and fast twitch fibers.

The density of mammalian skeletal muscle tissue is about 1.06 kg/liter.[8] This can be contrasted with the density of adipose tissue (fat), which is 0.9196 kg/liter.[9] This makes muscle tissue approximately 15% denser than fat tissue.
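
As a quick arithmetic check, the 15% figure follows directly from the two densities quoted above; a minimal sketch in Python:

```python
# Ratio of muscle density to fat density, using the figures from the text.
muscle_density = 1.06    # kg/L, mammalian skeletal muscle
fat_density = 0.9196     # kg/L, adipose tissue

ratio = muscle_density / fat_density
print(f"Muscle is {ratio - 1:.1%} denser than fat")   # -> 15.3% denser
```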

All muscles are derived from paraxial mesoderm. The paraxial mesoderm is divided along the embryo's length into somites, corresponding to the segmentation of the body (most obviously seen in the vertebral column).[10] Each somite has three divisions: the sclerotome (which forms vertebrae), the dermatome (which forms skin), and the myotome (which forms muscle). The myotome is divided into two sections, the epimere and hypomere, which form epaxial and hypaxial muscles, respectively. The only epaxial muscles in humans are the erector spinae and small intervertebral muscles, which are innervated by the dorsal rami of the spinal nerves. All other muscles, including those of the limbs, are hypaxial and innervated by the ventral rami of the spinal nerves.[10]

During development, myoblasts (muscle progenitor cells) either remain in the somite to form muscles associated with the vertebral column or migrate out into the body to form all other muscles. Myoblast migration is preceded by the formation of connective tissue frameworks, usually formed from the somatic lateral plate mesoderm. Myoblasts follow chemical signals to the appropriate locations, where they fuse into elongate skeletal muscle cells.[10]

Skeletal muscles are sheathed by a tough layer of connective tissue called the epimysium. The epimysium anchors muscle tissue to tendons at each end, where the epimysium becomes thicker and collagenous. It also protects muscles from friction against other muscles and bones. Within the epimysium are multiple bundles called fascicles, each of which contains 10 to 100 or more muscle fibers collectively sheathed by a perimysium. Besides surrounding each fascicle, the perimysium is a pathway for nerves and the flow of blood within the muscle. The threadlike muscle fibers are the individual muscle cells (myocytes), and each cell is encased within its own endomysium of collagen fibers. Thus, the overall muscle consists of fibers (cells) that are bundled into fascicles, which are themselves grouped together to form muscles. At each level of bundling, a collagenous membrane surrounds the bundle, and these membranes support muscle function both by resisting passive stretching of the tissue and by distributing forces applied to the muscle.[11] Scattered throughout the muscles are muscle spindles that provide sensory feedback information to the central nervous system. (This grouping structure is analogous to the organization of nerves which uses epineurium, perineurium, and endoneurium).

This same bundles-within-bundles structure is replicated within the muscle cells. Within the cells of the muscle are myofibrils, which themselves are bundles of protein filaments. The term "myofibril" should not be confused with "myofiber", which is simply another name for a muscle cell. Myofibrils are complex strands of several kinds of protein filaments organized together into repeating units called sarcomeres. The striated appearance of both skeletal and cardiac muscle results from the regular pattern of sarcomeres within their cells. Although both of these types of muscle contain sarcomeres, the fibers in cardiac muscle are typically branched to form a network. Cardiac muscle fibers are interconnected by intercalated discs,[12] giving that tissue the appearance of a syncytium.

The filaments in a sarcomere are composed of actin and myosin.

The gross anatomy of a muscle is the most important indicator of its role in the body. There is an important distinction seen between pennate muscles and other muscles. In most muscles, all the fibers are oriented in the same direction, running in a line from the origin to the insertion. However, in pennate muscles, the individual fibers are oriented at an angle relative to the line of action, attaching to the origin and insertion tendons at each end. Because the contracting fibers are pulling at an angle to the overall action of the muscle, the change in length is smaller, but this same orientation allows for more fibers (thus more force) in a muscle of a given size. Pennate muscles are usually found where their length change is less important than maximum force, such as the rectus femoris.

Skeletal muscle is arranged in discrete muscles, an example of which is the biceps brachii (biceps). The tough, fibrous epimysium of skeletal muscle is both connected to and continuous with the tendons. In turn, the tendons connect to the periosteum layer surrounding the bones, permitting the transfer of force from the muscles to the skeleton. Together, these fibrous layers, along with tendons and ligaments, constitute the deep fascia of the body.

The muscular system consists of all the muscles present in a single body. There are approximately 650 skeletal muscles in the human body,[13] but an exact number is difficult to define. The difficulty lies partly in the fact that different sources group the muscles differently and partly in that some muscles, such as palmaris longus, are not always present.

A muscular slip is a narrow length of muscle that acts to augment a larger muscle or muscles.

The muscular system is one component of the musculoskeletal system, which includes not only the muscles but also the bones, joints, tendons, and other structures that permit movement.

The three types of muscle (skeletal, cardiac and smooth) have significant differences. However, all three use the movement of actin against myosin to create contraction. In skeletal muscle, contraction is stimulated by electrical impulses transmitted by the nerves, the motoneurons (motor nerves) in particular. Cardiac and smooth muscle contractions are stimulated by internal pacemaker cells which regularly contract, and propagate contractions to other muscle cells they are in contact with. All skeletal muscle and many smooth muscle contractions are facilitated by the neurotransmitter acetylcholine.

The action a muscle generates is determined by the origin and insertion locations. The cross-sectional area of a muscle (rather than volume or length) determines the amount of force it can generate by defining the number of sarcomeres which can operate in parallel. Each skeletal muscle contains long units called myofibrils, and each myofibril is a chain of sarcomeres. Since contraction occurs at the same time for all connected sarcomeres in a muscle cell, these chains of sarcomeres shorten together, thus shortening the muscle fiber, resulting in overall length change.[14] The amount of force applied to the external environment is determined by lever mechanics, specifically the ratio of in-lever to out-lever. For example, moving the insertion point of the biceps more distally on the radius (farther from the joint of rotation) would increase the force generated during flexion (and, as a result, the maximum weight lifted in this movement), but decrease the maximum speed of flexion. Moving the insertion point proximally (closer to the joint of rotation) would result in decreased force but increased velocity. This can be most easily seen by comparing the limb of a mole to a horse: in the former, the insertion point is positioned to maximize force (for digging), while in the latter, it is positioned to maximize speed (for running).
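
The in-lever/out-lever trade-off can be made concrete with a small sketch. All numbers below (muscle force, lever arms) are hypothetical values chosen only to illustrate the ratio, not measurements from the source:

```python
# Idealized lever model of a joint. Output force scales with the ratio of
# in-lever to out-lever; output speed scales with the inverse ratio.

def output_force(muscle_force_n: float, in_lever_m: float, out_lever_m: float) -> float:
    """Static force at the end of the out-lever; ignores pennation and friction."""
    return muscle_force_n * in_lever_m / out_lever_m

# A biceps-like example: 1000 N of muscle force, hand 0.35 m from the elbow.
print(output_force(1000, 0.04, 0.35))  # insertion 4 cm from the joint -> ~114 N
print(output_force(1000, 0.06, 0.35))  # insertion moved distally to 6 cm -> ~171 N
# The distal insertion raises output force by 50%, but for the same muscle
# shortening speed the hand now moves 1.5x more slowly.
```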

Muscular activity accounts for much of the body's energy consumption. All muscle cells produce adenosine triphosphate (ATP) molecules which are used to power the movement of the myosin heads. Muscles have a short-term store of energy in the form of creatine phosphate, which is generated from ATP and can regenerate ATP when needed with creatine kinase. Muscles also keep a storage form of glucose in the form of glycogen. Glycogen can be rapidly converted to glucose when energy is required for sustained, powerful contractions. Within the voluntary skeletal muscles, the glucose molecule can be metabolized anaerobically in a process called glycolysis which produces two ATP and two lactic acid molecules in the process (note that in aerobic conditions, lactate is not formed; instead pyruvate is formed and transmitted through the citric acid cycle). Muscle cells also contain globules of fat, which are used for energy during aerobic exercise. The aerobic energy systems take longer to produce the ATP and reach peak efficiency, and require many more biochemical steps, but produce significantly more ATP than anaerobic glycolysis. Cardiac muscle, on the other hand, can readily consume any of the three macronutrients (protein, glucose and fat) aerobically without a 'warm-up' period and always extracts the maximum ATP yield from any molecule involved. The heart, liver and red blood cells will also consume lactic acid produced and excreted by skeletal muscles during exercise.
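
For a sense of scale, the two-ATP anaerobic yield can be compared with the aerobic yield per glucose molecule; the ~30 ATP aerobic figure below is a commonly cited textbook estimate, not a number from this article:

```python
# Rough comparison of ATP yield per glucose. The anaerobic figure (2 ATP)
# is from the text; the aerobic figure (~30 ATP) is an assumed modern
# textbook estimate, included here only for scale.
ANAEROBIC_ATP = 2
AEROBIC_ATP = 30

print(f"Aerobic respiration yields ~{AEROBIC_ATP // ANAEROBIC_ATP}x "
      f"more ATP per glucose than anaerobic glycolysis")
```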

At rest, skeletal muscle consumes 54.4 kJ/kg (13.0 kcal/kg) per day. This is larger than adipose tissue (fat) at 18.8 kJ/kg (4.5 kcal/kg), and bone at 9.6 kJ/kg (2.3 kcal/kg).[15]
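
To see what these per-kilogram rates imply for a whole body, they can be multiplied by tissue masses; the masses below are hypothetical values for illustration only, not figures from the source:

```python
# Resting energy consumption per tissue (kcal per kg per day), from the text.
KCAL_PER_KG_DAY = {"skeletal muscle": 13.0, "adipose tissue": 4.5, "bone": 2.3}

# Assumed tissue masses (kg) for a single adult, chosen for illustration.
masses_kg = {"skeletal muscle": 30, "adipose tissue": 15, "bone": 10}

for tissue, kg in masses_kg.items():
    print(f"{tissue}: {kg * KCAL_PER_KG_DAY[tissue]:.0f} kcal/day at rest")
# skeletal muscle: 390 kcal/day; adipose tissue: 68; bone: 23
```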

The efferent leg of the peripheral nervous system is responsible for conveying commands to the muscles and glands, and is ultimately responsible for voluntary movement. Nerves move muscles in response to voluntary and autonomic (involuntary) signals from the brain. Deep muscles, superficial muscles, muscles of the face and internal muscles all correspond with dedicated regions in the primary motor cortex of the brain, directly anterior to the central sulcus that divides the frontal and parietal lobes.

In addition, muscles react to reflexive nerve stimuli that do not always send signals all the way to the brain. In this case, the signal from the afferent fiber does not reach the brain, but produces the reflexive movement by direct connections with the efferent nerves in the spine. However, the majority of muscle activity is volitional, and the result of complex interactions between various areas of the brain.

Nerves that control skeletal muscles in mammals correspond with neuron groups along the primary motor cortex of the brain's cerebral cortex. Commands are routed through the basal ganglia and are modified by input from the cerebellum before being relayed through the pyramidal tract to the spinal cord and from there to the motor end plate at the muscles. Along the way, feedback, such as that of the extrapyramidal system, contributes signals that influence muscle tone and response.

Deeper muscles such as those involved in posture often are controlled from nuclei in the brain stem and basal ganglia.

The afferent leg of the peripheral nervous system is responsible for conveying sensory information to the brain, primarily from the sense organs like the skin. In the muscles, the muscle spindles convey information about the degree of muscle length and stretch to the central nervous system to assist in maintaining posture and joint position. The sense of where our bodies are in space is called proprioception, the perception of body awareness. More easily demonstrated than explained, proprioception is the "unconscious" awareness of where the various regions of the body are located at any one time. This can be demonstrated by anyone closing their eyes and waving their hand around. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses.

Several areas in the brain coordinate movement and position with the feedback information gained from proprioception. The cerebellum and red nucleus in particular continuously sample position against movement and make minor corrections to assure smooth motion.

The efficiency of human muscle has been measured (in the context of rowing and cycling) at 18% to 26%. The efficiency is defined as the ratio of mechanical work output to the total metabolic cost, as can be calculated from oxygen consumption. This low efficiency is the result of about 40% efficiency of generating ATP from food energy, losses in converting energy from ATP into mechanical work inside the muscle, and mechanical losses inside the body. The latter two losses are dependent on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). For an overall efficiency of 20 percent, one watt of mechanical power is equivalent to 4.3 kcal per hour. For example, one manufacturer of rowing equipment calibrates its rowing ergometer to count burned calories as equal to four times the actual mechanical work, plus 300 kcal per hour;[16] this amounts to about 20 percent efficiency at 250 watts of mechanical output. The mechanical energy output of a cyclic contraction can depend upon many factors, including activation timing, muscle strain trajectory, and rates of force rise and decay. These can be synthesized experimentally using work loop analysis.
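
The ergometer example can be re-derived from the definitions above; this short sketch lands close to the stated 20 percent:

```python
# One watt sustained for an hour is 3600 J ~= 0.86 kcal of mechanical work;
# at 20% efficiency that costs about 4.3 kcal of food energy, matching the
# figure in the text.
WATT_HOUR_KCAL = 3600 / 4184          # ~0.86 kcal of work per watt-hour

mech_kcal_h = 250 * WATT_HOUR_KCAL            # ~215 kcal/h at 250 W
metabolic_kcal_h = 4 * mech_kcal_h + 300      # manufacturer's calibration
print(f"implied efficiency: {mech_kcal_h / metabolic_kcal_h:.1%}")  # ~18.5%
```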

A display of "strength" (e.g. lifting a weight) is a result of three factors that overlap: physiological strength (muscle size, cross sectional area, available crossbridging, responses to training), neurological strength (how strong or weak is the signal that tells the muscle to contract), and mechanical strength (muscle's force angle on the lever, moment arm length, joint capabilities).

Vertebrate muscle typically produces approximately 25–33 N (5.6–7.4 lbf) of force per square centimeter of muscle cross-sectional area when isometric and at optimal length.[17] Some invertebrate muscles, such as in crab claws, have much longer sarcomeres than vertebrates, resulting in many more sites for actin and myosin to bind and thus much greater force per square centimeter at the cost of much slower speed. The force generated by a contraction can be measured non-invasively using either mechanomyography or phonomyography, be measured in vivo using tendon strain (if a prominent tendon is present), or be measured directly using more invasive methods.
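
As a rough worked example of the force-per-area rule, assuming a hypothetical 12 cm² cross-section (not a figure from the source):

```python
# Estimated maximal isometric force from cross-sectional area, using the
# 25-33 N/cm^2 range quoted above. The 12 cm^2 area is an assumed value
# for a mid-sized human muscle.
area_cm2 = 12
low, high = 25 * area_cm2, 33 * area_cm2
print(f"~{low}-{high} N at optimal length")   # ~300-396 N
```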

The strength of any given muscle, in terms of force exerted on the skeleton, depends upon length, shortening speed, cross-sectional area, pennation, sarcomere length, myosin isoforms, and neural activation of motor units. Significant reductions in muscle strength can indicate underlying pathology.

Since three factors affect muscular strength simultaneously and muscles never work individually, it is misleading to compare strength in individual muscles and state that one is the "strongest". Nevertheless, some muscles are noteworthy for their strength for different reasons.

Humans are genetically predisposed to a larger percentage of one type of muscle fiber over another. An individual born with a greater percentage of Type I muscle fibers would theoretically be better suited to endurance events, such as triathlons, distance running, and long cycling events, whereas a person born with a greater percentage of Type II muscle fibers would be more likely to excel at sprinting events such as the 100-meter dash.[citation needed]

Exercise is often recommended as a means of improving motor skills, fitness, muscle and bone strength, and joint function. Exercise has several effects upon muscles, connective tissue, bone, and the nerves that stimulate the muscles. One such effect is muscle hypertrophy, an increase in size. This is used in bodybuilding.

Various exercises require a predominance of one muscle fiber type over another. Aerobic exercise involves long, low levels of exertion in which the muscles are used at well below their maximal contraction strength for long periods of time (the most classic example being the marathon). Aerobic events, which rely primarily on the aerobic (with oxygen) system, use a higher percentage of Type I (or slow-twitch) muscle fibers, consume a mixture of fat, protein and carbohydrates for energy, consume large amounts of oxygen and produce little lactic acid. Anaerobic exercise involves short bursts of higher-intensity contractions at a much greater percentage of maximum contraction strength. Examples of anaerobic exercise include sprinting and weight lifting. The anaerobic energy delivery system uses predominantly Type II or fast-twitch muscle fibers, relies mainly on ATP or glucose for fuel, consumes relatively little oxygen, protein and fat, produces large amounts of lactic acid and cannot be sustained for as long a period as aerobic exercise. Many exercises are partially aerobic and partially anaerobic; for example, soccer and rock climbing involve a combination of both.

The presence of lactic acid has an inhibitory effect on ATP generation within the muscle; though not producing fatigue, it can inhibit or even stop performance if the intracellular concentration becomes too high. However, long-term training causes neovascularization within the muscle, increasing the ability to move waste products out of the muscles and maintain contraction. Once moved out of muscles with high concentrations within the sarcomere, lactic acid can be used by other muscles or body tissues as a source of energy, or transported to the liver where it is converted back to pyruvate. In addition to increasing the level of lactic acid, strenuous exercise causes the loss of potassium ions from muscle, increasing potassium ion concentrations in the interstitium, close to the muscle fibres. Acidification by lactic acid may allow recovery of force, so that acidosis may protect against fatigue rather than being a cause of it.[19]

Delayed onset muscle soreness is pain or discomfort that may be felt one to three days after exercising and generally subsides two to three days later. Once thought to be caused by lactic acid build-up, a more recent theory is that it is caused by tiny tears in the muscle fibers caused by eccentric contraction, or unaccustomed training levels. Since lactic acid disperses fairly rapidly, it could not explain pain experienced days after exercise.[20]

Independent of strength and performance measures, muscles can be induced to grow larger by a number of factors, including hormone signaling, developmental factors, strength training, and disease. Contrary to popular belief, the number of muscle fibres cannot be increased through exercise. Instead, muscles grow larger through a combination of muscle cell growth as new protein filaments are added along with additional mass provided by undifferentiated satellite cells alongside the existing muscle cells.[13]

Biological factors such as age and hormone levels can affect muscle hypertrophy. During puberty in males, hypertrophy occurs at an accelerated rate as the levels of growth-stimulating hormones produced by the body increase. Natural hypertrophy normally stops at full growth in the late teens. As testosterone is one of the body's major growth hormones, on average, men find hypertrophy much easier to achieve than women. Taking additional testosterone or other anabolic steroids will increase muscular hypertrophy.

Muscular, spinal and neural factors all affect muscle building. Sometimes a person may notice an increase in strength in a given muscle even though only its opposite has been subject to exercise, such as when a bodybuilder finds her left biceps stronger after completing a regimen focusing only on the right biceps. This phenomenon is called cross education.[citation needed]

Inactivity and starvation in mammals lead to atrophy of skeletal muscle, a decrease in muscle mass that may be accompanied by a smaller number and size of the muscle cells as well as lower protein content.[21] Muscle atrophy may also result from the natural aging process or from disease.

In humans, prolonged periods of immobilization, as in the cases of bed rest or astronauts flying in space, are known to result in muscle weakening and atrophy. Atrophy is of particular interest to the manned spaceflight community, because the weightlessness experienced in spaceflight results in a loss of as much as 30% of mass in some muscles.[22][23] Such consequences are also noted in small hibernating mammals like the golden-mantled ground squirrels and brown bats.[24]

During aging, there is a gradual decrease in the ability to maintain skeletal muscle function and mass, known as sarcopenia. The exact cause of sarcopenia is unknown, but it may be due to a combination of the gradual failure in the "satellite cells" that help to regenerate skeletal muscle fibers, and a decrease in sensitivity to or the availability of critical secreted growth factors that are necessary to maintain muscle mass and satellite cell survival. Sarcopenia is a normal aspect of aging and is not actually a disease state, yet it can be linked to many injuries in the elderly population as well as decreasing quality of life.[25]

There are also many diseases and conditions that cause muscle atrophy. Examples include cancer and AIDS, which induce a body wasting syndrome called cachexia. Other syndromes or conditions that can induce skeletal muscle atrophy are congestive heart disease and some diseases of the liver.

Neuromuscular diseases are those that affect the muscles and/or their nervous control. In general, problems with nervous control can cause spasticity or paralysis, depending on the location and nature of the problem. A large proportion of neurological disorders, ranging from cerebrovascular accident (stroke) and Parkinson's disease to Creutzfeldt-Jakob disease, can lead to problems with movement or motor coordination.

Symptoms of muscle diseases may include weakness, spasticity, myoclonus and myalgia. Diagnostic procedures that may reveal muscular disorders include testing creatine kinase levels in the blood and electromyography (measuring electrical activity in muscles). In some cases, muscle biopsy may be done to identify a myopathy, as well as genetic testing to identify DNA abnormalities associated with specific myopathies and dystrophies.

A non-invasive elastography technique that measures muscle noise is undergoing experimentation to provide a way of monitoring neuromuscular disease. The sound produced by a muscle comes from the shortening of actomyosin filaments along the axis of the muscle. During contraction, the muscle shortens along its longitudinal axis and expands across the transverse axis, producing vibrations at the surface.[26]

The evolutionary origin of muscle cells in metazoans is a highly debated topic. In one line of thought, scientists have argued that muscle cells evolved once and thus all animals with muscle cells have a single common ancestor. In the other line of thought, scientists argue that muscle cells evolved more than once and that any morphological or structural similarities are due to convergent evolution and to genes that predate the evolution of muscle and even the mesoderm, the germ layer from which many scientists believe true muscle cells derive.

Schmid and Seipel argue that the origin of muscle cells is a monophyletic trait that occurred concurrently with the development of the digestive and nervous systems of all animals and that this origin can be traced to a single metazoan ancestor in which muscle cells are present. They argue that the molecular and morphological features of the muscle cells in cnidaria and ctenophora are similar enough to those of bilaterians that there would be one metazoan ancestor from which muscle cells derive. In this case, Schmid and Seipel argue that the last common ancestor of bilateria, ctenophora, and cnidaria was a triploblast, an organism with three germ layers, and that diploblasty, meaning an organism with two germ layers, evolved secondarily, based on their observation of the lack of mesoderm or muscle in most cnidarians and ctenophores. By comparing the morphology of cnidarians and ctenophores to bilaterians, Schmid and Seipel were able to conclude that there were myoblast-like structures in the tentacles and gut of some species of cnidarians and in the tentacles of ctenophores. Since this is a structure unique to muscle cells, these scientists determined, based on the data collected by their peers, that this is a marker for striated muscles similar to that observed in bilaterians. The authors also remark that the status of the muscle cells found in cnidarians and ctenophores is often contested because these muscle cells originate from the ectoderm rather than the mesoderm or mesendoderm; others argue that true muscle cells originate from the endodermal portion of the mesoderm and from the endoderm. However, Schmid and Seipel counter this skepticism about whether the muscle cells found in ctenophores and cnidarians are true muscle cells by considering that cnidarians develop through both a medusa stage and a polyp stage. They observe that in the hydrozoan medusa stage there is a layer of cells that separates from the distal side of the ectoderm to form striated muscle cells in a way that seems similar to the mesoderm, and they call this third separated layer of cells the ectocodon. They also argue that not all muscle cells are derived from the mesendoderm in bilaterians, with key examples being the eye muscles of vertebrates and the muscles of spiralians, which derive from the ectodermal mesoderm rather than the endodermal mesoderm. Furthermore, Schmid and Seipel argue that since myogenesis occurs in cnidarians with the help of molecular regulatory elements found in the specification of muscle cells in bilaterians, there is evidence for a single origin of striated muscle.[27]

In contrast to this argument for a single origin of muscle cells, Steinmetz et al. argue that molecular markers such as the myosin II protein, used to determine this single origin of striated muscle, actually predate the formation of muscle cells. They use the example of the contractile elements present in the porifera, or sponges, which truly lack striated muscle yet contain this protein. Furthermore, Steinmetz et al. present evidence for a polyphyletic origin of striated muscle cell development through their analysis of morphological and molecular markers that are present in bilaterians but absent in cnidarians and ctenophores. Steinmetz et al. showed that traditional morphological and regulatory markers such as actin, the ability to couple phosphorylation of myosin side chains to higher concentrations of calcium, and other MyHC elements are present in all metazoans, not just the organisms that have been shown to have muscle cells. Thus, according to Steinmetz et al., the usage of any of these structural or regulatory elements to determine whether the muscle cells of cnidarians and ctenophores are similar enough to those of bilaterians to confirm a single lineage is questionable. Furthermore, Steinmetz et al. explain that the orthologues of the MyHC genes that have been used to hypothesize the origin of striated muscle arose through a gene duplication event that predates the first true muscle cells (meaning striated muscle), and they show that the MyHC genes are present in sponges, which have contractile elements but no true muscle cells. Steinmetz et al. also showed that this duplicated set of genes, serving both the formation of striated muscle and cell regulation and movement, had already separated into striated MyHC and non-muscle MyHC. This separation of the duplicated set of genes is shown through the localization of the striated MyHC to the contractile vacuole in sponges, while the non-muscle MyHC was more diffusely expressed during developmental cell shape and change. Steinmetz et al. found a similar pattern of localization in cnidarians, except that the cnidarian N. vectensis has this striated muscle marker present in the smooth muscle of the digestive tract. Thus, Steinmetz et al. argue that the plesiomorphic trait of the separated orthologues of MyHC cannot be used to determine the monophyly of muscle, and additionally argue that the presence of a striated muscle marker in the smooth muscle of this cnidarian shows a fundamentally different mechanism of muscle cell development and structure in cnidarians.[28]

Steinmetz et al. continue to argue for multiple origins of striated muscle in the metazoans by explaining that a key set of genes used to form the troponin complex for muscle regulation and formation in bilaterians is missing from the cnidarians and ctenophores, and that, of 47 structural and regulatory proteins observed, they were not able to find even one unique striated muscle cell protein that was expressed in both cnidarians and bilaterians. Furthermore, the Z-disc seems to have evolved differently even within bilaterians, and there is a great deal of diversity of proteins even within this clade, showing a large degree of radiation for muscle cells. Through this divergence of the Z-disc, Steinmetz et al. argue that there are only four common protein components present in all bilaterian muscle ancestors and that, of these four necessary Z-disc components, only an actin protein, which they have already argued is an uninformative marker because of its plesiomorphic state, is present in cnidarians. Through further molecular marker testing, Steinmetz et al. observe that non-bilaterians lack many regulatory and structural components necessary for bilaterian muscle formation, and they do not find any set of proteins unique to both bilaterians and cnidarians and ctenophores that is not present in earlier, more primitive animals such as the sponges and amoebozoans. Through this analysis the authors conclude that, due to the lack of elements that bilaterian muscles depend on for structure and usage, non-bilaterian muscles must be of a different origin, with a different set of regulatory and structural proteins.[28]

In another take on the argument, Andrikou and Arnone use the newly available data on gene regulatory networks to look at how the hierarchy of genes, morphogens, and other mechanisms of tissue specification diverge or remain similar among early deuterostomes and protostomes. By understanding not only which genes are present in all bilaterians but also the time and place of their deployment, Andrikou and Arnone arrive at a deeper understanding of the evolution of myogenesis.[29]

In their paper, Andrikou and Arnone argue that to truly understand the evolution of muscle cells, the function of transcriptional regulators must be understood in the context of other external and internal interactions. Through their analysis, Andrikou and Arnone found that there were conserved orthologues of the gene regulatory network in both invertebrate bilaterians and in cnidarians. They argue that having this common, general regulatory circuit allowed for a high degree of divergence from a single well-functioning network. Andrikou and Arnone found that the orthologues of genes found in vertebrates had been changed through different types of structural mutations in the invertebrate deuterostomes and protostomes, and they argue that these structural changes in the genes allowed for a large divergence of muscle function and muscle formation in these species. Andrikou and Arnone were able to recognize not only differences due to mutation in the genes found in vertebrates and invertebrates but also the integration of species-specific genes that could also cause divergence from the original gene regulatory network function. Thus, although a common muscle patterning system has been identified, they argue that this could be due to a more ancestral gene regulatory network being co-opted several times across lineages, with additional genes and mutations causing very divergent development of muscles. Thus it seems that the myogenic patterning framework may be an ancestral trait. However, Andrikou and Arnone explain that the basic muscle patterning structure must also be considered in combination with the cis-regulatory elements present at different times during development. In contrast with the high level of conservation of the gene family structure, Andrikou and Arnone found that the cis-regulatory elements were not well conserved in either time or place in the network, which could indicate a large degree of divergence in the formation of muscle cells. Through this analysis, it seems that the myogenic GRN is an ancestral GRN, with actual changes in myogenic function and structure possibly being linked to later co-option of genes at different times and places.[29]

Evolutionarily, specialized forms of skeletal and cardiac muscles predated the divergence of the vertebrate/arthropod evolutionary line.[30] This indicates that these types of muscle developed in a common ancestor sometime before 700 million years ago (mya). Vertebrate smooth muscle was found to have evolved independently from the skeletal and cardiac muscle types.


Rheumatoid Arthritis – National Library of Medicine – PubMed …

December 19th, 2016 6:42 am

Evidence reviews: Antimalarials for treating rheumatoid arthritis

Antimalarials have been used for the treatment of rheumatoid arthritis (RA) for several decades. This review found four trials, with 300 patients receiving hydroxychloroquine and 292 receiving placebo. A benefit was observed in the patients taking hydroxychloroquine compared to placebo. There was no difference between the two groups in terms of those who had to withdraw from trials due to side effects.

The purpose was to examine the effectiveness of patient education interventions on health status (pain, functional disability, psychological wellbeing and disease activity) in patients with rheumatoid arthritis (RA). Patient education had a small beneficial effect at first follow-up for disability, joint counts, patient global assessment, psychological status, and depression. At final follow-up (3–14 months) no evidence of significant benefits was found.

In rheumatoid arthritis (RA), the joints are swollen, stiff and painful. Non-steroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen are often recommended to ease the pain and swelling in the joints. Paracetamol (also known as acetaminophen) is another type of medication to relieve pain in RA.




Welcome to the Natural Medicines Research Collaboration

December 18th, 2016 2:43 am

Natural Standard has provided just what the doctor ordered - an evidence-based review to tell us what is known, and what is not. Given the clear imperative to talk with our patients about CAM, here's the evidence summary you need.

Harley Goldberg, DO, Medical Director of CAM, Kaiser Permanente

Natural Standard provides a critical and transparent review of the evidence regarding herbs and supplements. As such, it is an extremely valuable resource for both clinicians and investigators.

David Eisenberg, MD, Director, Osher Institute, Division for Research and Education in Complementary & Integrative Medicine, Harvard Medical School

The best and most authoritative web site available on herbal medicines.

The World Health Organization (WHO)

At last! An authoritative reference on the many nuances of Alternative Medicine. How to separate the good from the bad and the unknown. An extraordinary piece of work that will become the standard text in this area.

Vincent T. DeVita Jr., MD, the Amy and Joseph Perella Professor of Medicine, Yale School of Medicine; former Director, National Cancer Institute

Thank you for a great interview; and thanks so much for access to the Natural Standard website. I'm in research heaven!

Angela Hynes, author and freelance writer and editor specializing in health and fitness

Natural Standard is an AAFP-recommended resource for the development of evidence-based CME content.

American Academy of Family Physicians

"Natural Standard is like having access to the best library in the world so you don't have to look things up in ten locations!"

Jonny Bowden, PhD, CNS, author of The 150 Healthiest Foods on Earth


Psoriatic arthritis – Wikipedia

December 14th, 2016 8:42 am

Psoriatic arthritis (also arthritis psoriatica, arthropathic psoriasis or psoriatic arthropathy) is a type of inflammatory arthritis[1][2] that develops in 6 to 42% of people who have the chronic skin condition psoriasis.[3] Psoriatic arthritis is classified as a seronegative spondyloarthropathy and therefore occurs more commonly in patients with tissue type HLA-B27.

Pain, swelling, or stiffness in one or more joints is commonly present in psoriatic arthritis.[4] Psoriatic arthritis is inflammatory, and affected joints are generally red or warm to the touch.[4] Asymmetrical oligoarthritis, defined as inflammation affecting one to four joints during the first six months of disease, is present in 70% of cases. However, in 15% of cases the arthritis is symmetrical. The joints of the hand that are involved in psoriasis are the proximal interphalangeal (PIP), the distal interphalangeal (DIP), the metacarpophalangeal (MCP), and the wrist. Involvement of the distal interphalangeal joints (DIP) is a characteristic feature and is present in 15% of cases.

In addition to affecting the joints of the hands and wrists, psoriatic arthritis may affect the fingers, nails, and skin. Sausage-like swelling in the fingers or toes, known as dactylitis, may occur.[4] Psoriasis can also cause changes to the nails, such as pitting or separation from the nail bed (onycholysis),[4] hyperkeratosis under the nails, and horizontal ridging.[5] Psoriasis classically presents with scaly skin lesions, which are most commonly seen over extensor surfaces such as the scalp, natal cleft and umbilicus.

In psoriatic arthritis, pain can occur in the area of the sacrum (the lower back, above the tailbone),[4] as a result of sacroiliitis or spondylitis, which is present in 40% of cases. Pain can occur in and around the feet and ankles, especially enthesitis in the Achilles tendon (inflammation of the Achilles tendon where it inserts into the bone) or plantar fasciitis in the sole of the foot.[4]

Along with the above-noted pain and inflammation, there is extreme exhaustion that does not go away with adequate rest. The exhaustion may last for days or weeks without abatement. Psoriatic arthritis may remain mild, or may progress to more destructive joint disease. Periods of active disease, or flares, will typically alternate with periods of remission. In severe forms, psoriatic arthritis may progress to arthritis mutilans,[6] which on X-ray gives a "pencil-in-cup" appearance.

Because prolonged inflammation can lead to joint damage, early diagnosis and treatment to slow or prevent joint damage is recommended.[7]

The exact causes are not yet known, but a number of genetic associations have been identified in a genome-wide association study of psoriasis and psoriatic arthritis including HLA-B27.[8][9]

There is no definitive test to diagnose psoriatic arthritis. Symptoms of psoriatic arthritis may closely resemble other diseases, including rheumatoid arthritis. A rheumatologist (a doctor specializing in diseases affecting the joints) may use physical examinations, health history, blood tests and x-rays to accurately diagnose psoriatic arthritis.

Factors that contribute to a diagnosis of psoriatic arthritis include:

Other symptoms that are more typical of psoriatic arthritis than other forms of arthritis include inflammation in the Achilles tendon (at the back of the heel) or the Plantar fascia (bottom of the feet), and dactylitis (sausage-like swelling of the fingers or toes).[10]

Magnetic resonance image of the index finger in psoriatic arthritis (mutilans form). Shown is a T2 weighted fat suppressed sagittal image. Focal increased signal (probable erosion) is seen at the base of the middle phalanx (long thin arrow). There is synovitis at the proximal interphalangeal joint (long thick arrow) plus increased signal in the overlying soft tissues indicating oedema (short thick arrow). There is also diffuse bone oedema (short thin arrows) involving the head of the proximal phalanx and extending distally down the shaft.

Magnetic resonance images of the fingers in psoriatic arthritis. Shown are T1 weighted axial (a) pre-contrast and (b) post-contrast images exhibiting dactylitis due to flexor tenosynovitis at the second finger with enhancement and thickening of the tendon sheath (large arrow). Synovitis is seen in the fourth proximal interphalangeal joint (small arrow).

(a) T1-weighted and (b) short tau inversion recovery (STIR) magnetic resonance images of lumbar and lower thoracic spine in psoriatic arthritis. Signs of active inflammation are seen at several levels (arrows). In particular, anterior spondylitis is seen at level L1/L2 and an inflammatory Andersson lesion at the upper vertebral endplate of L3.

Magnetic resonance images of sacroiliac joints. Shown are T1-weighted semi-coronal magnetic resonance images through the sacroiliac joints (a) before and (b) after intravenous contrast injection. Enhancement is seen at the right sacroiliac joint (arrow, left side of image), indicating active sacroiliitis.

There are five main types of psoriatic arthritis: asymmetric oligoarthritis, symmetric polyarthritis, distal interphalangeal predominant arthritis, spondyloarthritis, and arthritis mutilans.

The underlying process in psoriatic arthritis is inflammation; therefore, treatments are directed at reducing and controlling inflammation. Milder cases of psoriatic arthritis may be treated with NSAIDs alone; however, there is a trend toward earlier use of disease-modifying antirheumatic drugs or biological response modifiers to prevent irreversible joint destruction.

Typically the medications first prescribed for psoriatic arthritis are NSAIDs such as ibuprofen and naproxen, followed by more potent NSAIDs like diclofenac, indomethacin, and etodolac. NSAIDs can irritate the stomach and intestine, and long-term use can lead to gastrointestinal bleeding.[11][12] Coxibs (COX-2 inhibitors), e.g. celecoxib or etoricoxib, are associated with a statistically significant 50 to 66% relative risk reduction in gastrointestinal ulcers and bleeding complications compared to traditional NSAIDs, but carry an increased rate of cardiovascular events such as myocardial infarction (MI, or heart attack) and stroke.[13][14] Both COX-2 inhibitors and other non-selective NSAIDs have potential adverse effects that include damage to the kidneys.
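
To make the relative risk reduction concrete, it can be converted to an absolute risk; the 1.5% baseline annual risk of gastrointestinal complications below is a hypothetical figure used only to illustrate the arithmetic, not a number from the source:

```python
# What a 50-66% relative risk reduction (RRR) means in absolute terms,
# given an assumed baseline risk on a traditional NSAID.
baseline_annual_risk = 0.015          # hypothetical 1.5% per year
for rrr in (0.50, 0.66):
    residual = baseline_annual_risk * (1 - rrr)
    print(f"RRR {rrr:.0%}: residual annual risk {residual:.2%}")
# RRR 50%: 0.75%; RRR 66%: 0.51%
```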

Disease-modifying antirheumatic drugs (DMARDs) are used in persistent symptomatic cases without exacerbation. Rather than just reducing pain and inflammation, this class of drugs helps limit the amount of joint damage that occurs in psoriatic arthritis. Most DMARDs act slowly and may take weeks or even months to take full effect. Drugs such as methotrexate or leflunomide are commonly prescribed; other DMARDs used to treat psoriatic arthritis include cyclosporin, azathioprine, and sulfasalazine. These immunosuppressant drugs can also reduce psoriasis skin symptoms but can lead to liver and kidney problems and an increased risk of serious infection.

The most recent class of treatment, called biological response modifiers or biologics, has been developed using recombinant DNA technology. Biologic medications are derived from living cells cultured in a laboratory. Unlike traditional DMARDs that affect the entire immune system, biologics target specific parts of the immune system. They are given by injection or intravenous (IV) infusion.

Biologics prescribed for psoriatic arthritis are TNF-α inhibitors, including infliximab, etanercept, golimumab, certolizumab pegol and adalimumab, as well as the IL-12/IL-23 inhibitor ustekinumab.

Biologics may increase the risk of minor and serious infections.[citation needed] More rarely, they may be associated with nervous system disorders, blood disorders or certain types of cancer.[citation needed]

A first-in-class treatment option for the management of psoriatic arthritis, apremilast is a small-molecule phosphodiesterase-4 (PDE4) inhibitor approved for use by the FDA in 2014. By inhibiting PDE4, an enzyme that breaks down cyclic adenosine monophosphate (cAMP), it causes cAMP levels to rise, resulting in the down-regulation of various pro-inflammatory factors, including TNF-α, and the up-regulation of the anti-inflammatory factor interleukin 10.

It is given in tablet form and taken by mouth. Side effects include headache, back pain, nausea, diarrhea, fatigue, nasopharyngitis and upper respiratory tract infections, as well as depression and weight loss.

Apremilast was patented in 2014 and is manufactured by Celgene; there is currently no generic equivalent available on the market.

A review found tentative evidence of benefit from low-level laser therapy and concluded that it could be considered for relief of pain and stiffness associated with RA.[15]

The retinoid etretinate is effective for both arthritis and skin lesions. Photochemotherapy with methoxypsoralen and long-wave ultraviolet light (PUVA) is used for severe skin lesions. Doctors may use joint injections with corticosteroids in cases where one joint is severely affected. In psoriatic arthritis patients with severe joint damage, orthopedic surgery may be implemented to correct joint destruction, usually with the use of a joint replacement. Surgery is effective for pain alleviation, correcting joint disfigurement, and reinforcing joint usefulness and strength.

Seventy percent of people who develop psoriatic arthritis first show signs of psoriasis on the skin, 15 percent develop skin psoriasis and arthritis at the same time, and 15 percent develop skin psoriasis following the onset of psoriatic arthritis.[16]

Psoriatic arthritis can develop in people who have any level severity of psoriatic skin disease, ranging from mild to very severe.[17]

Psoriatic arthritis tends to appear about 10 years after the first signs of psoriasis. For the majority of people this is between the ages of 30 and 55, but the disease can also affect children. The onset of psoriatic arthritis symptoms before symptoms of skin psoriasis is more common in children than adults.[18]

More than 80% of patients with psoriatic arthritis will have psoriatic nail lesions characterized by nail pitting, separation of the nail from the underlying nail bed (onycholysis), ridging and cracking, or, more extremely, loss of the nail itself.[18]

Men and women are equally affected by this condition. Like psoriasis, psoriatic arthritis is more common among Caucasians than Africans or Asians.[19]


Induced pluripotent stem cell – Wikipedia

December 13th, 2016 6:42 am

Induced pluripotent stem cells (also known as iPS cells or iPSCs) are a type of pluripotent stem cell that can be generated directly from adult cells. The iPSC technology was pioneered by Shinya Yamanaka's lab in Kyoto, Japan, which showed in 2006 that the introduction of four specific genes encoding transcription factors could convert adult cells into pluripotent stem cells.[1] He was awarded the 2012 Nobel Prize along with Sir John Gurdon "for the discovery that mature cells can be reprogrammed to become pluripotent."[2]

Pluripotent stem cells hold great promise in the field of regenerative medicine. Because they can propagate indefinitely, as well as give rise to every other cell type in the body (such as neurons, heart, pancreatic, and liver cells), they represent a single source of cells that could be used to replace those lost to damage or disease.

The most well-known type of pluripotent stem cell is the embryonic stem cell. However, since the generation of embryonic stem cells involves destruction (or at least manipulation) [3] of the pre-implantation stage embryo, there has been much controversy surrounding their use. Further, because embryonic stem cells can only be derived from embryos, it has so far not been feasible to create patient-matched embryonic stem cell lines.

Since iPSCs can be derived directly from adult tissues, they not only bypass the need for embryos, but can be made in a patient-matched manner, which means that each individual could have their own pluripotent stem cell line. These unlimited supplies of autologous cells could be used to generate transplants without the risk of immune rejection. While the iPSC technology has not yet advanced to a stage where therapeutic transplants have been deemed safe, iPSCs are readily being used in personalized drug discovery efforts and understanding the patient-specific basis of disease.[4]

iPSCs are typically derived by introducing products of a specific set of pluripotency-associated genes, or reprogramming factors, into a given cell type. The original set of reprogramming factors (also dubbed Yamanaka factors) are the transcription factors Oct4 (Pou5f1), Sox2, c-Myc, and Klf4. While this combination is most conventional in producing iPSCs, each of the factors can be functionally replaced by related transcription factors, miRNAs, small molecules, or even non-related genes such as lineage specifiers.

iPSC derivation is typically a slow and inefficient process, taking 1–2 weeks for mouse cells and 3–4 weeks for human cells, with efficiencies around 0.01%–0.1%. However, considerable advances have been made in improving the efficiency and the time it takes to obtain iPSCs. Upon introduction of reprogramming factors, cells begin to form colonies that resemble pluripotent stem cells, which can be isolated based on their morphology, conditions that select for their growth, or through expression of surface markers or reporter genes.
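
At these efficiencies, only a handful of colonies are expected per plate; a quick sketch, with the number of plated cells chosen purely for illustration:

```python
# Expected iPSC colony counts at the quoted 0.01%-0.1% reprogramming
# efficiencies. The 100,000 plated fibroblasts is an assumed value.
cells_plated = 100_000
for efficiency in (0.0001, 0.001):            # 0.01% and 0.1%
    colonies = cells_plated * efficiency
    print(f"{efficiency:.2%} efficiency -> ~{colonies:.0f} colonies")
# 0.01% -> ~10 colonies; 0.10% -> ~100 colonies
```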

Induced pluripotent stem cells were first generated by Shinya Yamanaka's team at Kyoto University, Japan, in 2006.[1] They hypothesized that genes important to embryonic stem cell (ESC) function might be able to induce an embryonic state in adult cells. They chose twenty-four genes previously identified as important in ESCs and used retroviruses to deliver these genes to mouse fibroblasts. The fibroblasts were engineered so that any cells reactivating the ESC-specific gene, Fbx15, could be isolated using antibiotic selection.

Upon delivery of all twenty-four factors, ESC-like colonies emerged that reactivated the Fbx15 reporter and could propagate indefinitely. To identify the genes necessary for reprogramming, the researchers removed one factor at a time from the pool of twenty-four. By this process, they identified four factors, Oct4, Sox2, cMyc, and Klf4, which were each necessary and together sufficient to generate ESC-like colonies under selection for reactivation of Fbx15.

Similar to ESCs, these iPSCs had unlimited self-renewal and were pluripotent, contributing to lineages from all three germ layers in the context of embryoid bodies, teratomas, and fetal chimeras. However, the molecular makeup of these cells, including gene expression and epigenetic marks, was somewhere between that of a fibroblast and an ESC, and the cells failed to produce viable chimeras when injected into developing embryos.

In June 2007, three separate research groups, including Yamanaka's, a Harvard/University of California, Los Angeles collaboration, and a group at MIT, published studies that substantially improved on the reprogramming approach, giving rise to iPSCs that were indistinguishable from ESCs. Unlike the first generation of iPSCs, these second-generation iPSCs produced viable chimeric mice and contributed to the mouse germline, thereby achieving the 'gold standard' for pluripotent stem cells.

These second-generation iPSCs were derived from mouse fibroblasts by retroviral-mediated expression of the same four transcription factors (Oct4, Sox2, cMyc, Klf4). However, instead of using Fbx15 to select for pluripotent cells, the researchers used Nanog, a gene that is functionally important in ESCs. By using this different strategy, the researchers created iPSCs that were functionally identical to ESCs.[5][6][7][8]

Reprogramming of human cells to iPSCs was reported in November 2007 by two independent research groups: Shinya Yamanaka of Kyoto University, Japan, who pioneered the original iPSC method, and James Thomson of the University of Wisconsin-Madison, who was the first to derive human embryonic stem cells. With the same principle used in mouse reprogramming, Yamanaka's group successfully transformed human fibroblasts into iPSCs with the same four pivotal genes, OCT4, SOX2, KLF4, and C-MYC, using a retroviral system,[9] while Thomson and colleagues used a different set of factors, OCT4, SOX2, NANOG, and LIN28, using a lentiviral system.[10]

Obtaining fibroblasts to produce iPSCs involves a skin biopsy, and there has been a push towards identifying cell types that are more easily accessible.[11][12] In 2008, iPSCs were derived from human keratinocytes, which could be obtained from a single hair pluck.[13][14] In 2010, iPSCs were derived from peripheral blood cells,[15][16] and in 2012, iPSCs were made from renal epithelial cells in the urine.[17]

Other considerations for starting cell type include mutational load (for example, skin cells may harbor more mutations due to UV exposure),[11][12] time it takes to expand the population of starting cells,[11] and the ability to differentiate into a given cell type.[18]


The generation of iPS cells is crucially dependent on the transcription factors used for the induction.

Oct-3/4 and certain products of the Sox gene family (Sox1, Sox2, Sox3, and Sox15) have been identified as crucial transcriptional regulators involved in the induction process whose absence makes induction impossible. Additional genes, however, including certain members of the Klf family (Klf1, Klf2, Klf4, and Klf5), the Myc family (c-myc, L-myc, and N-myc), Nanog, and LIN28, have been identified to increase the induction efficiency.

Although the methods pioneered by Yamanaka and others have demonstrated that adult cells can be reprogrammed to iPS cells, there are still challenges associated with this technology, notably the low efficiency of reprogramming, genomic integration of the delivered factors, the risk of tumorigenesis, and incomplete reprogramming.

[Table: key strategies and techniques used to develop iPS cells over the past half-decade; rows of similar colour grouped studies that used similar reprogramming strategies.]

One of the main strategies for avoiding the problems of low efficiency and genomic integration has been to use small compounds that can mimic the effects of transcription factors. These small molecules can compensate for a reprogramming factor that does not effectively target the genome or fails at reprogramming for another reason, and thus raise reprogramming efficiency. They also avoid the problem of genomic integration, which in some cases contributes to tumorigenesis. Key studies using this strategy were conducted in 2008. Melton et al. studied the effects of the histone deacetylase (HDAC) inhibitor valproic acid. They found that it increased reprogramming efficiency 100-fold compared to Yamanaka's original transcription-factor method.[32] The researchers proposed that this compound was mimicking the signaling usually caused by the transcription factor c-Myc. A similar type of compensation mechanism was proposed to mimic the effects of Sox2. In 2008, Ding et al. used inhibition of histone methyltransferase (HMT) with BIX-01294, in combination with the activation of calcium channels in the plasma membrane, to increase reprogramming efficiency.[33] Deng et al. of Peking University reported in July 2013 that induced pluripotent stem cells can be created without any genetic modification. They used a cocktail of seven small-molecule compounds, including DZNep, to reprogram mouse somatic cells into stem cells, which they called CiPS cells, with an efficiency of 0.2%, comparable to standard iPSC production techniques. The CiPS cells were introduced into developing mouse embryos and were found to contribute to all major cell types, proving their pluripotency.[34][35]

Ding et al. demonstrated an alternative to transcription-factor reprogramming through the use of drug-like chemicals. By studying the mesenchymal–epithelial transition (MET) process, in which fibroblasts are pushed to a stem-cell-like state, Ding's group identified two chemicals, the ALK5 inhibitor SB431542 and the MEK (mitogen-activated protein kinase kinase) inhibitor PD0325901, which were found to increase the efficiency of the classical genetic method 100-fold. Adding a third compound known to be involved in the cell-survival pathway, Thiazovivin, further increased the efficiency 200-fold. Using the combination of these three compounds also shortened the reprogramming of human fibroblasts from four weeks to two weeks.[36][37]

In April 2009, it was demonstrated that generation of iPS cells is possible without any genetic alteration of the adult cell: repeated treatment of the cells with certain proteins channeled into the cells via poly-arginine anchors was sufficient to induce pluripotency.[38] These iPSCs are termed piPSCs (protein-induced pluripotent stem cells).

Another key strategy for avoiding problems such as tumorigenesis and low throughput has been to use alternative vector forms: adenoviruses, plasmids, and naked DNA and/or protein compounds.

In 2008, Hochedlinger et al. used an adenovirus to transport the requisite four transcription factors into skin and liver cells of mice, resulting in cells identical to ESCs. The adenovirus differs from vectors such as lentiviruses and retroviruses in that it does not incorporate any of its own genes into the host genome, avoiding the potential for insertional mutagenesis.[39] In 2009, Freed et al. demonstrated successful reprogramming of human fibroblasts to iPS cells.[40] Another advantage of adenoviruses is that they need to be present only briefly for effective reprogramming to take place.

Also in 2008, Yamanaka et al. found that they could transfer the four necessary genes with a plasmid.[41] The Yamanaka group successfully reprogrammed mouse cells by transfection with two plasmid constructs carrying the reprogramming factors; the first plasmid expressed c-Myc, while the second expressed the other three factors (Oct4, Klf4, and Sox2). Although plasmid methods avoid viruses, they still require cancer-promoting genes to accomplish reprogramming, and they tend to be much less efficient than retroviral methods. Furthermore, transfected plasmids have been shown to integrate into the host genome, so they still pose a risk of insertional mutagenesis. Because non-retroviral approaches have demonstrated such low efficiency, researchers have attempted to rescue the technique with what is known as the PiggyBac transposon system. Several studies have demonstrated that this system can effectively deliver the key reprogramming factors without leaving footprint mutations in the host cell genome: the exogenous genes are re-excised after reprogramming, which eliminates the issue of insertional mutagenesis.[42]

In January 2014, two articles were published claiming that a type of pluripotent stem cell can be generated by subjecting cells to certain types of stress (a bacterial toxin, a low pH of 5.7, or physical squeezing); the resulting cells were called STAP cells, for stimulus-triggered acquisition of pluripotency.[43]

In light of difficulties other labs had in replicating the results of the surprising study, one of the co-authors called in March 2014 for the articles to be retracted.[44] On 4 June 2014, the lead author, Haruko Obokata, agreed to retract both papers[45] after a RIKEN investigation, concluded on 1 April 2014, found that she had committed research misconduct.[46]

MicroRNAs are short RNA molecules that bind to complementary sequences on messenger RNA and block expression of a gene. Measuring variations in microRNA expression in iPS cells can be used to predict their differentiation potential.[47] Addition of microRNAs can also be used to enhance iPS potential, and several mechanisms have been proposed.[47] ES-cell-specific microRNA molecules (such as miR-291, miR-294 and miR-295) enhance the efficiency of induced pluripotency by acting downstream of c-Myc.[48] MicroRNAs can also block expression of repressors of Yamanaka's four transcription factors, and there may be additional mechanisms by which they induce reprogramming even in the absence of added exogenous transcription factors.[47]

Induced pluripotent stem cells are similar to natural pluripotent stem cells, such as embryonic stem (ES) cells, in many aspects, such as the expression of certain stem cell genes and proteins, chromatin methylation patterns, doubling time, embryoid body formation, teratoma formation, viable chimera formation, and potency and differentiability, but the full extent of their relation to natural pluripotent stem cells is still being assessed.[49]

Gene expression and genome-wide H3K4me3 and H3K27me3 patterns were found to be extremely similar between ES and iPS cells.[50] The generated iPSCs were remarkably similar to naturally isolated pluripotent stem cells (such as mouse and human embryonic stem cells, mESCs and hESCs, respectively) in the following respects, confirming the identity, authenticity, and pluripotency of iPSCs relative to naturally isolated pluripotent stem cells:

Recent achievements and future tasks for safe iPSC-based cell therapy are collected in the review of Okano et al.[62]

The task of producing iPS cells continues to be challenging due to the problems mentioned above. A key tradeoff to overcome is that between efficiency and genomic integration. Most methods that do not rely on the integration of transgenes are inefficient, while those that do face the problems of incomplete reprogramming and tumorigenesis, although a vast number of techniques and methods have been attempted. Another large set of strategies is to perform a proteomic characterization of iPS cells.[63] Further studies and new strategies should generate optimal solutions to these challenges. One approach might attempt to combine the positive attributes of these strategies into an ultimately effective technique for reprogramming cells to iPS cells.

Another approach is the use of iPS cells derived from patients to identify therapeutic drugs able to rescue a phenotype. For instance, iPS cell lines derived from patients affected by ectrodactyly–ectodermal dysplasia–cleft syndrome (EEC), in which the p63 gene is mutated, display abnormal epithelial commitment that could be partially rescued by a small compound.[64]

An attractive feature of human iPS cells is the ability to derive them from adult patients to study the cellular basis of human disease. Since iPS cells are self-renewing and pluripotent, they represent a theoretically unlimited source of patient-derived cells which can be turned into any type of cell in the body. This is particularly important because many other types of human cells derived from patients tend to stop growing after a few passages in laboratory culture. iPS cells have been generated for a wide variety of human genetic diseases, including common disorders such as Down syndrome and polycystic kidney disease.[65][66] In many instances, the patient-derived iPS cells exhibit cellular defects not observed in iPS cells from healthy donors, providing insight into the pathophysiology of the disease.[67] An international collaborative project, StemBANCC, was formed in 2012 to build a collection of iPS cell lines for drug screening for a variety of diseases. Managed by the University of Oxford, the effort pooled funds and resources from 10 pharmaceutical companies and 23 universities. The goal is to generate a library of 1,500 iPS cell lines which will be used in early drug testing by providing a simulated human disease environment.[68] Furthermore, combining hiPSC technology and genetically encoded voltage and calcium indicators has provided a large-scale, high-throughput platform for cardiovascular drug safety screening.[69]

A proof of concept of using induced pluripotent stem cells (iPSCs) to generate a human organ for transplantation was reported by researchers from Japan. Human liver buds (iPSC-LBs) were grown from a mixture of three different kinds of stem cells: hepatocytes (for liver function) coaxed from iPSCs; endothelial stem cells (to form the lining of blood vessels) from umbilical cord blood; and mesenchymal stem cells (to form connective tissue). This new approach allows different cell types to self-organize into a complex organ, mimicking the process in fetal development. After growing in vitro for a few days, the liver buds were transplanted into mice, where the liver buds quickly connected with the host blood vessels and continued to grow. Most importantly, they performed regular liver functions, including metabolizing drugs and producing liver-specific proteins. Further studies will monitor the longevity of the transplanted organ in the host body (its ability to integrate or avoid rejection) and whether it will transform into tumors.[70][71] Using this method, cells from one mouse could be used to test 1,000 drug compounds to treat liver disease, and reduce animal use by up to 50,000.[72]

Embryonic cord-blood cells were induced into pluripotent stem cells using plasmid DNA. Using the cell-surface endothelial/pericytic markers CD31 and CD146, researchers identified 'vascular progenitors', high-quality multipotent vascular stem cells. After the iPS cells were injected directly into the vitreous of the damaged retina of mice, the stem cells engrafted into the retina, grew, and repaired the blood vessels.[73][74]

Labelled iPSC-derived neural stem cells (NSCs) injected into laboratory animals with brain lesions were shown to migrate to the lesions, and some motor-function improvement was observed.[75]

Although a pint of donated blood contains about two trillion red blood cells, and over 107 million blood donations are collected globally, there is still a critical need for blood for transfusion. In 2014, type O red blood cells were synthesized at the Scottish National Blood Transfusion Service from iPSCs. The cells were induced to become mesoderm, then blood cells, and then red blood cells. The final step was to make them eject their nuclei and mature properly. Type O blood can be transfused into all patients. Human clinical trials were not expected to begin before 2016.[76]

The first human clinical trial using autologous iPSCs was approved by the Japanese Ministry of Health and was to be conducted in 2014 in Kobe. However, the trial was suspended after Japan's new regenerative medicine laws came into effect in November 2014.[77] iPSCs derived from the skin cells of six patients suffering from wet age-related macular degeneration were to be reprogrammed to differentiate into retinal pigment epithelial (RPE) cells. The cell sheet would be transplanted into the affected retina where the degenerated RPE tissue had been excised. Safety and vision-restoration monitoring would last one to three years.[78][79] The benefits of using autologous iPSCs are that there is theoretically no risk of rejection and that the need to use embryonic stem cells is eliminated.[79]


Communities Voices and Insights – Washington Times

December 8th, 2016 5:45 am

Related Articles

Robert P. George writes, "If Donald Trump keeps his word, his victory over Hillary Clinton will have monumental consequences not only for the Supreme Court but for the entire federal judiciary."

Pope Francis on fake news; Jared Kushner and Israeli settlement; Trump and evangelicals

Soviet dictator Joseph Stalin had his Pulitzer Prize-winning New York Times reporter Walter Duranty to cover up his genocidal crimes. Cuban dictator Fidel Castro had his New York Times reporter Herbert Matthews to deny his Communist fanaticism. And Elon Musk has his New York Times reporter Andrew Ross Sorkin to whitewash his job-killing, crony-capitalist, multi-billion-dollar plunder of American taxpayers.

Billionaire Sheldon Adelson sanctimoniously demanding a federal monopoly on the exploitation of fashionable debaucheries is like a dog walking on his hind legs. It is done awkwardly, but you are surprised to see it done at all.

Along with the joys of the season, the holidays call us to shop at busier-than-usual stores, attend special parties and events, and travel for extended amounts of time by planes, trains and automobiles to visit family and friends.

Gary Sinise at Pearl Harbor; World Vision and Israel, by Luke Moon; Bibles in hotel rooms

I'm sure that Donald Trump and the people who will serve in his administration have high goals for how to "Make America Great Again" -- that phrase borrowed from Reagan's 1980 campaign. The rarest achievement of all, however, might be for Mr. Trump to serve the American people so splendidly that even after eight years in office, voters say, "We'll take some more of that."

The new federal initiative breaks my heart.

Fake news is an old story. It has featured in domestic politics and international affairs since the beginning of time.

Hugh Hewitt on Christians as strangers in the land; Trump and LGBT Americans; Betsy DeVos

As a modern day woman, I count the Jeep Wrangler as my favorite vehicle and there is much good I can say about the 2017 version. Of course, better to let the Wrangler speak for itself.

By Jan. 20, President Obama will be gone and President-elect Donald Trump will have the opportunity to lead America for four or possibly eight years. But there is still a month and a half left for Mr. Obama to do as much damage as he can on the way out.

General Mattis on reading; Chip and Joanna Gaines; Os Guinness on Christians as salt and light

The United States should abandon its propensity for moral sermonizing in the manner of Dickensian schoolmarms about foreign leaders in obedience to the biblical injunction that, "He who is without sin ... let him first cast a stone at her." We need to tend to our own gardens.

As winter approaches, the temperature has gotten exponentially hotter on the Crimean Peninsula and Ukraine's border with Russia.

Evangelical opinion on a Cabinet with Romney; John Heubusch authors a novel gripped by science and faith; book reading in America

By Lawrence J. Fedewa

The presidential election of 2016 has been the most dramatic in memory. Each candidate went up or down every week, shocking revelations came every few days, then a stunning victory - now this. Just when everyone thought it was over, up comes another chapter: recount petitions!

It is that special time of year, when sugar plums not only dance in our heads but also join "Auntie's" favorite pie along with tables filled with tempting delights at every turn. And if you have concerns about tipping the scales, it is for good reason.

Falwell and Liberty after Trump; Pro-Life Millennials; Ted Cruz and the Castros

A constitutional wall will block President-elect Donald Trump's mean-spirited ambition to swiftly deport up to 3 million undocumented immigrants.


HIV/AIDS research – Wikipedia

December 8th, 2016 5:45 am

HIV/AIDS research includes all medical research that attempts to prevent, treat, or cure HIV/AIDS, as well as fundamental research about the nature of HIV as an infectious agent and AIDS as the disease caused by HIV.

Examples of particular HIV/AIDS research include drug development, HIV vaccines, pre-exposure prophylaxis, and post-exposure prophylaxis.[1]

A body of scientific evidence has shown that men who are circumcised are less likely to contract HIV than men who are uncircumcised.[2] Research published in 2014 concludes that the sex hormones estrogen and progesterone selectively impact HIV transmission.[3]

"Pre-exposure prophylaxis" refers to the practice of taking some drugs before being exposed to HIV infection, and having a decreased chance of contracting HIV as a result of taking that drug. Post-exposure prophylaxis refers to taking some drugs quickly after being exposed to HIV, while the virus is in a person's body but before the virus has established itself. In both cases, the drugs would be the same as those used to treat persons with HIV, and the intent of taking the drugs would be to eradicate the virus before the person becomes irreversibly infected.

Post-exposure prophylaxis is recommended in cases of unanticipated HIV exposure, such as if a nurse somehow has blood-to-blood contact with a patient in the course of work, or if someone without HIV requests the drugs immediately after having unprotected sex with a person who might have HIV. Pre-exposure prophylaxis is sometimes an option for HIV-negative persons who feel that they are at increased risk of HIV infection, such as an HIV-negative person in a serodiscordant relationship with an HIV-positive partner.

Current research on these agents includes drug development, efficacy testing, and practice recommendations for using drugs for HIV prevention.

The within-host dynamics of HIV infection include the spread of the virus in vivo, the establishment of latency, the effects of immune response on the virus, etc.[4][5] Early studies used simple models and only considered the cell-free spreading of HIV, in which virus particles bud from an infected T cell, enter the blood or extracellular fluid, and then infect another T cell.[5] A 2015 study[4] proposes a more realistic model of HIV dynamics that also incorporates the viral cell-to-cell spreading mechanism, in which the virus is directly transmitted from one cell to another, as well as T cell activation, the cellular immune response, and immune exhaustion as the infection progresses.[4]
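The cell-free spreading picture that the early studies used is typically written as a three-variable ODE system: target T cells, infected cells, and free virions. A minimal sketch in Python; the parameter values are illustrative textbook-style numbers, not figures from the cited papers.

```python
# Basic cell-free viral dynamics model: target cells T, infected cells I,
# free virus V. Parameters are illustrative, not fitted values.
import numpy as np
from scipy.integrate import odeint

lam, d = 1e4, 0.01      # T cell production and natural death rates
beta = 2e-7             # infection rate per virion
delta = 0.7             # death rate of infected cells
p, c = 100.0, 13.0      # virion production and clearance rates

def model(y, t):
    T, I, V = y
    dT = lam - d * T - beta * T * V   # new T cells, natural death, infection
    dI = beta * T * V - delta * I     # infected cells arise and die faster
    dV = p * I - c * V                # virions bud from I and are cleared
    return [dT, dI, dV]

t = np.linspace(0, 60, 600)                       # days post-infection
T, I, V = odeint(model, [1e6, 0.0, 1e-3], t).T    # start near an uninfected steady state
print(f"peak viral load ~{V.max():.2e} at day {t[V.argmax()]:.0f}")
```

Cell-to-cell transmission, immune response, and exhaustion terms, as in the 2015 study, would add further state variables and couplings to this skeleton.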

A 2014 study with SIV found that the virus initially establishes a reservoir in the gut. The virus infection provokes an inflammatory response of Paneth cells in the intestine, helping to spread the virus by causing tissue damage. The findings offer new pointers for potential future treatments and testing (biomarkers), and help to explain the virus's resistance to antiviral therapies. The study also identified the bacterial strain Lactobacillus plantarum, which reversed the damage by rapidly reducing IL-1β (interleukin-1 beta).[6] Seeding of HIV in the body begins within a few days, during the acute phase of HIV infection.[7]

Research to improve current treatments includes decreasing the side effects of current drugs, further simplifying drug regimens to improve adherence, and determining better sequences of regimens to manage drug resistance. There is variation in the health community in what treatments doctors recommend for people with HIV. One question, for example, is when a doctor should recommend that a patient take antiretroviral drugs and which drugs to recommend. This field also includes the development of antiretroviral drugs.

Infection with human immunodeficiency virus 1 (HIV-1) is associated with clinical symptoms of accelerated aging, as evidenced by increased incidence and diversity of age-related illnesses at relatively young ages. A significant age-acceleration effect was detected in brain (7.4 years) and blood (5.2 years) tissue due to HIV-1 infection,[8] measured with a biomarker of aging known as the epigenetic clock.
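The "age acceleration" statistic behind those 7.4- and 5.2-year figures is the gap between methylation-predicted age and chronological age. A sketch of the idea in Python; the clock weights and methylation values here are random placeholders, and the real Horvath clock also applies an age-calibration transform omitted for brevity.

```python
# Toy epigenetic-clock calculation: a linear clock predicts age from CpG
# methylation levels; age acceleration is predicted minus chronological age.
import numpy as np

def dnam_age(cpg_beta: np.ndarray, weights: np.ndarray, intercept: float) -> float:
    """Linear clock: intercept plus weighted sum of CpG methylation beta values."""
    return intercept + float(cpg_beta @ weights)

rng = np.random.default_rng(0)
n_cpgs = 353                                   # the Horvath clock uses 353 CpG sites
weights = rng.normal(scale=0.5, size=n_cpgs)   # placeholder weights, not the real clock's
cpg = rng.uniform(0.0, 1.0, size=n_cpgs)       # methylation beta values lie in [0, 1]

chronological_age = 45.0
acceleration = dnam_age(cpg, weights, intercept=40.0) - chronological_age
print(f"age acceleration: {acceleration:+.1f} years")  # positive values suggest accelerated aging
```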

A long-term nonprogressor is a person who is infected with HIV, but whose body, for whatever reason, naturally controls the virus so that the infection does not progress to the AIDS stage. Such persons are of great interest to researchers, who feel that a study of their physiologies could provide a deeper understanding of the virus and disease.

An HIV vaccine is a vaccine that would be given to a person who does not have HIV, in order to confer protection against subsequent exposures to HIV, thus reducing the likelihood that the person would become infected by HIV. Currently, no effective HIV vaccine exists. Various HIV vaccines have been tested in clinical trials almost since the discovery of HIV.

Only a vaccine is thought to be able to halt the pandemic. This is because a vaccine would cost less, thus being affordable for developing countries, and would not require daily treatment.[9] However, after over 20 years of research, HIV-1 remains a difficult target for a vaccine.[9][10]

In 2003 a clinical trial in Thailand tested an HIV vaccine called RV 144. In 2009, the researchers reported that this vaccine showed some efficacy in protecting recipients from HIV infection. Results of this trial give the first supporting evidence of any vaccine being effective in lowering the risk of contracting HIV. Another possible vaccine comes from a novel gene therapy that alters the CCR5 co-receptor permanently, preventing HIV from entering cells.[11] Other vaccine trials continue worldwide.

A microbicide for sexually transmitted diseases is a gel which would be applied to the skin - perhaps a rectal microbicide for persons who engage in anal sex or a vaginal microbicide for persons who engage in vaginal sex - and if infected body fluid such as blood or semen were to touch the gel, then HIV in that fluid would be destroyed and the people having sex would be less likely to spread infection between themselves.

On March 7, 2013, the Washington University in St. Louis website published a report by Julia Evangelou Strait describing ongoing nanoparticle research showing that nanoparticles loaded with various compounds could be used to target infectious agents while leaving healthy cells unaffected. In the study detailed by this report, it was found that nanoparticles loaded with melittin, a compound found in bee venom, could deliver the agent to HIV, causing the breakdown of the outer protein envelope of the virus. This, they say, could lead to the production of a vaginal gel which could help prevent infection by disabling the virus.[12] Dr. Joshua Hood goes on to explain that beyond preventative measures in the form of a topical gel, he sees "potential for using nanoparticles with melittin as therapy for existing HIV infections, especially those that are drug-resistant. The nanoparticles could be injected intravenously and, in theory, would be able to clear HIV from the blood stream."[12]

In 2007, Timothy Ray Brown,[13] a 40-year-old HIV-positive man, also known as "the Berlin Patient", was given a stem cell transplant as part of his treatment for acute myeloid leukemia (AML).[14] A second transplant was made a year later after a relapse. The donor was chosen not only for genetic compatibility but also for being homozygous for a CCR5-Δ32 mutation that confers resistance to HIV infection.[15][16] After 20 months without antiretroviral drug treatment, it was reported that HIV levels in Brown's blood, bone marrow, and bowel were below the limit of detection.[16] The virus remained undetectable over three years after the first transplant.[14] Although the researchers and some commentators have characterized this result as a cure, others suggest that the virus may remain hidden in tissues[17] such as the brain (which acts as a viral reservoir).[18] Stem cell treatment remains investigational because of its anecdotal nature, the disease and mortality risk associated with stem cell transplants, and the difficulty of finding suitable donors.[17][19]

Complementing efforts to control viral replication, immunotherapies that may assist in the recovery of the immune system have been explored in past and ongoing trials, including IL-2 and IL-7.[20]

The failure of vaccine candidates to protect against HIV infection and progression to AIDS has led to a renewed focus on the biological mechanisms responsible for HIV latency. A limited period of therapy combining anti-retrovirals with drugs targeting the latent reservoir may one day allow for total eradication of HIV infection.[21] Researchers have discovered an abzyme that can destroy the protein gp120 CD4 binding site. This protein is common to all HIV variants as it is the attachment point for B lymphocytes and subsequent compromising of the immune system.[22]

A turning point for HIV research occurred in 2007, following the bone marrow transplant of HIV sufferer Timothy Ray Brown. Brown underwent the procedure after he developed leukaemia, and the donor of the bone marrow possessed a rare genetic mutation that caused Brown's cells to become resistant to HIV. Brown attained the title of the "Berlin Patient" in the HIV research field and is the first man to have been cured of the virus. As of April 2013, two primary approaches are being pursued in the search for an HIV cure: the first is gene therapy that aims to develop an HIV-resistant immune system for patients, and the second is being led by Danish scientists, who are conducting clinical trials to strip HIV from human DNA and have it destroyed permanently by the immune system.[23]

Two more cases with similarities to the Brown case have occurred since the 2007 discovery; however, they differ because the transplanted marrow has not been confirmed as mutated. The cases were publicized in a July 2013 CNN story that relayed the experience of two patients who had taken antiretroviral therapy for years before they developed lymphoma, a cancer of the lymph nodes. They then underwent lymphoma chemotherapy and bone marrow transplantation while remaining on an antiretroviral regimen; although they retained traces of HIV four months after the transplant, six to nine months afterwards the two patients had no detectable trace of HIV in their blood. However, the managing clinician Dr. Timothy Henrich stated at the International AIDS Society Conference in Malaysia, where the findings were presented:

It's possible, again, that the virus could return in a week, it could return in a month -- in fact, some mathematical modeling predicts that virus could even return one to two years after we stop antiretroviral therapy, so we really don't know what the long-term or full effects of stem cell transplantation and viral persistence is.[24]

In March 2016, researchers at Temple University, Philadelphia, reported that they have used genome editing to delete HIV from T cells. According to the researchers, this approach could lead to a dramatic reduction of the viral load in patient cells.[25][26]

In April 2016, Innovative Bioresearch, a privately held company owned by research scientist Jonathan Fior, reported the results of a pioneering pilot study that explored the infusion of SupT1 cells as a cell-based therapy for HIV in a humanized mouse model.[27][28] This novel cell-based therapy uses irradiated SupT1 cells as a decoy target for HIV, to prevent CD4+ T cell depletion as well as to render the virus less cytopathic. The research showed that animals treated with SupT1 cell infusion had significantly lower plasma viral load (~10-fold) and potentially preserved CD4+ T cell frequency at Week 1, with one animal showing complete suppression of viral replication and preservation of the CD4+ T cell count (no virus detected at Weeks 3 and 4). Interestingly, as also noted in a previous paper written by the same author, in vitro studies of HIV evolution showed that prolonged virus replication in the SupT1 cell line results in a less cytopathic virus with a reduced capacity for syncytium formation, a higher sensitivity to neutralization, improved replication in SupT1 cells, and impaired infection of primary CD4+ T cells.[29] According to the research, this indicates that in vivo virus replication in the infused SupT1 cells should also have a vaccination effect.[28]


Breast Cancer Research | Home page

December 8th, 2016 5:44 am

Dr. Lewis A. Chodosh is a physician-scientist who received a BS in Molecular Biophysics and Biochemistry from Yale University, an MD from Harvard Medical School, and a PhD in Biochemistry from MIT, in the laboratory of Dr. Phillip Sharp. He performed his clinical training in Internal Medicine and Endocrinology at the Massachusetts General Hospital, after which he was a postdoctoral research fellow with Dr. Philip Leder at Harvard Medical School. Dr. Chodosh joined the faculty of the University of Pennsylvania in 1994, where he is currently a Professor in the Departments of Cancer Biology, Cell & Developmental Biology, and Medicine. He serves as Chairman of the Department of Cancer Biology, Associate Director for Basic Science of the Abramson Cancer Center, and Director of Cancer Genetics for the Abramson Family Cancer Research Institute at the University of Pennsylvania. Additionally, he is on the scientific advisory board for the Harvard Nurses' Health Studies I and II.

Dr. Chodosh's research focuses on genetic, genomic and molecular approaches to understanding breast cancer susceptibility and pathogenesis.


Ageing – Wikipedia

December 7th, 2016 2:43 pm

Ageing, also spelled aging, is the process of becoming older. The term refers especially to human beings, many animals, and fungi, whereas for example bacteria, perennial plants and some simple animals are potentially immortal. In the broader sense, ageing can refer to single cells within an organism which have ceased dividing (cellular senescence) or to the population of a species (population ageing).

In humans, ageing represents the accumulation of changes in a human being over time,[1] encompassing physical, psychological, and social change. Reaction time, for example, may slow with age, while knowledge of world events and wisdom may expand. Ageing is among the greatest known risk factors for most human diseases:[2] of the roughly 150,000 people who die each day across the globe, about two thirds die from age-related causes.

The causes of ageing are unknown; current theories are assigned to the damage concept, whereby the accumulation of damage (such as DNA breaks, oxidised DNA and/or mitochondrial malfunctions)[3] may cause biological systems to fail, or to the programmed ageing concept, whereby internal processes (such as DNA telomere shortening) may cause ageing. Programmed ageing should not be confused with programmed cell death (apoptosis).

The discovery, in 1934, that calorie restriction can extend lifespan by 50% in rats has motivated research into delaying and preventing ageing.

Human beings and members of other species, especially animals, necessarily experience ageing and mortality. Fungi, too, can age.[4] In contrast, many species can be considered immortal: for example, bacteria fission to produce daughter cells, strawberry plants grow runners to produce clones of themselves, and animals in the genus Hydra have a regenerative ability with which they avoid dying of old age.

Early life forms on Earth, starting at least 3.7 billion years ago,[5] were single-celled organisms. Such single-celled organisms (prokaryotes, protozoans, algae) multiply by fissioning into daughter cells and thus do not age; they are innately immortal.[6][7]

Ageing and mortality of the individual organism became possible with the evolution of sexual reproduction,[8] which occurred with the emergence of the fungal/animal kingdoms approximately a billion years ago, and with the evolution of flowering plants 160 million years ago. The sexual organism could henceforth pass on some of its genetic material to produce new individuals and itself could become disposable with regards to the survival of its species.[8] This classic biological idea has however been perturbed recently by the discovery that the bacterium E. coli may split into distinguishable daughter cells, which opens the theoretical possibility of "age classes" among bacteria.[9]

Even within humans and other mortal species, there are cells with the potential for immortality: cancer cells which have lost the ability to die when maintained in cell culture such as the HeLa cell line,[10] and specific stem cells such as germ cells (producing ova and spermatozoa).[11] In artificial cloning, adult cells can be rejuvenated back to embryonic status and then used to grow a new tissue or animal without ageing.[12] Normal human cells however die after about 50 cell divisions in laboratory culture (the Hayflick Limit, discovered by Leonard Hayflick in 1961).[10]

A number of characteristic ageing symptoms are experienced by a majority or by a significant proportion of humans during their lifetimes.

Dementia becomes more common with age.[35] About 3% of people between the ages of 65 and 74 have dementia, 19% between 75 and 84, and nearly half of those over 85 years of age.[36] The spectrum includes mild cognitive impairment and the neurodegenerative diseases of Alzheimer's disease, cerebrovascular disease, Parkinson's disease and Lou Gehrig's disease. Furthermore, many types of memory may decline with ageing, but not semantic memory or general knowledge such as vocabulary definitions, which typically increases or remains steady until late adulthood[37] (see Ageing brain). Intelligence may decline with age, though the rate may vary depending on the type and may in fact remain steady throughout most of the lifespan, dropping suddenly only as people near the end of their lives. Individual variations in rate of cognitive decline may therefore be explained in terms of people having different lengths of life.[38] There might be changes to the brain: after 20 years of age there may be a 10% reduction each decade in the total length of the brain's myelinated axons.[39][40]

Age can result in visual impairment, whereby non-verbal communication is reduced,[41] which can lead to isolation and possible depression. Macular degeneration causes vision loss and increases with age, affecting nearly 12% of those above the age of 80.[42] This degeneration is caused by systemic changes in the circulation of waste products and by growth of abnormal vessels around the retina.[43]

A distinction can be made between "proximal ageing" (age-based effects that come about because of factors in the recent past) and "distal ageing" (age-based differences that can be traced back to a cause early in a person's life, such as childhood poliomyelitis).[38]

Ageing is among the greatest known risk factors for most human diseases.[2] Of the roughly 150,000 people who die each day across the globe, about two thirds (100,000 per day) die from age-related causes. In industrialised nations, the proportion is higher, reaching 90%.[44][45][46]

At present, researchers are only just beginning to understand the biological basis of ageing even in relatively simple and short-lived organisms such as yeast.[47] Less still is known about mammalian ageing, in part due to the much longer lives of even small mammals such as the mouse (around 3 years). A primary model organism for studying ageing is the nematode C. elegans, thanks to its short lifespan of 2–3 weeks, the ability to easily perform genetic manipulations or suppress gene activity with RNA interference, and other factors.[48] Most known mutations and RNA interference targets that extend lifespan were first discovered in C. elegans.[49]

Factors that are proposed to influence biological ageing[50] fall into two main categories, programmed and damage-related. Programmed factors follow a biological timetable, perhaps a continuation of the one that regulates childhood growth and development. This regulation would depend on changes in gene expression that affect the systems responsible for maintenance, repair and defence responses. Damage-related factors include internal and environmental assaults to living organisms that induce cumulative damage at various levels.[51]

There are three main metabolic pathways which can influence the rate of ageing:

It is likely that most of these pathways affect ageing separately, because targeting them simultaneously leads to additive increases in lifespan.[53]

The rate of ageing varies substantially across different species, and this, to a large extent, is genetically based. For example, numerous perennial plants ranging from strawberries and potatoes to willow trees typically produce clones of themselves by vegetative reproduction and are thus potentially immortal, while annual plants such as wheat and watermelons die each year and reproduce by sexual reproduction. In 2008 it was discovered that inactivation of only two genes in the annual plant Arabidopsis thaliana leads to its conversion into a potentially immortal perennial plant.[54]

Clonal immortality apart, there are certain species whose individual lifespans stand out among Earth's life-forms, including the bristlecone pine at 5,062 years[55] (however Hayflick states that the bristlecone pine has no cells older than 30 years), invertebrates like the hard clam (known as quahog in New England) at 508 years,[56] the Greenland shark at 400 years,[57] fish like the sturgeon and the rockfish, and the sea anemone[58] and lobster.[59][60] Such organisms are sometimes said to exhibit negligible senescence.[61] The genetic aspect has also been demonstrated in studies of human centenarians.

In laboratory settings, researchers have demonstrated that selected alterations in specific genes can extend lifespan quite substantially in yeast and roundworms, less so in fruit flies and less again in mice. Some of the targeted genes have homologues across species and in some cases have been associated with human longevity.[62]

Caloric restriction substantially affects lifespan in many animals, including the ability to delay or prevent many age-related diseases.[103] Typically, this involves caloric intake of 60–70% of what an ad libitum animal would consume, while still maintaining proper nutrient intake.[103] In rodents, this has been shown to increase lifespan by up to 50%;[104] similar effects occur for yeast and Drosophila.[103] No lifespan data exist for humans on a calorie-restricted diet,[76] but several reports support protection from age-related diseases.[105][106] Two major ongoing studies on rhesus monkeys initially revealed disparate results; while one study, by the University of Wisconsin, showed that caloric restriction does extend lifespan,[107] the second study, by the National Institute on Ageing (NIA), found no effects of caloric restriction on longevity.[108] Both studies nevertheless showed improvement in a number of health parameters. Notwithstanding the similarly low calorie intake, the diet composition differed between the two studies (notably a high sucrose content in the Wisconsin study), and the monkeys have different origins (India, China), initially suggesting that genetics and dietary composition, not merely a decrease in calories, are factors in longevity.[76] However, in a comparative analysis in 2014, the Wisconsin researchers found that the allegedly non-starved NIA control monkeys in fact are moderately underweight when compared with other monkey populations, and argued this was due to the NIA's apportioned feeding protocol in contrast to Wisconsin's truly unrestricted ad libitum feeding protocol.[109] They conclude that moderate calorie restriction rather than extreme calorie restriction is sufficient to produce the observed health and longevity benefits in the studied rhesus monkeys.[110]

In his book How and Why We Age, Hayflick says that caloric restriction may not be effective in humans, citing data from the Baltimore Longitudinal Study of Aging which shows that being thin does not favour longevity.[111] Similarly, it is sometimes claimed that moderate obesity in later life may improve survival, but newer research has identified confounding factors such as weight loss due to terminal disease. Once these factors are accounted for, the optimal body weight above age 65 corresponds to a leaner body mass index of 23 to 27.[112]

Alternatively, the benefits of dietary restriction can also be obtained by changing the macronutrient profile to reduce protein intake without any change in calorie level, resulting in similar increases in longevity.[113][114] Dietary protein restriction not only inhibits mTOR activity but also IGF-1 signalling, two mechanisms implicated in ageing.[74] Specifically, reducing leucine intake is sufficient to inhibit mTOR activity, achievable through reducing animal food consumption.[115][116]

The Mediterranean diet is credited with lowering the risk of heart disease and early death.[117][118] The major contributors to mortality risk reduction appear to be a higher consumption of vegetables, fish, fruits, nuts and monounsaturated fatty acids, i.e., olive oil.[119]

The amount of sleep has an impact on mortality. People who live the longest report sleeping for six to seven hours each night.[120][121] Lack of sleep (<5 hours) more than doubles the risk of death from cardiovascular disease, but too much sleep (>9 hours) is associated with a doubling of the risk of death, though not primarily from cardiovascular disease.[122] Sleeping more than 7 to 8 hours per day has been consistently associated with increased mortality, though the cause is probably other factors such as depression and socioeconomic status, which would correlate statistically.[123] Sleep monitoring of hunter-gatherer tribes from Africa and from South America has shown similar sleep patterns across continents: their average sleeping duration is 6.4 hours (with a summer/winter difference of 1 hour), afternoon naps (siestas) are uncommon, and insomnia is very rare (tenfold less than in industrial societies).[124]

Physical exercise may increase life expectancy.[125] People who participate in moderate to high levels of physical exercise have a lower mortality rate compared to individuals who are not physically active.[126] Moderate levels of exercise have been correlated with preventing ageing and improving quality of life by reducing inflammatory potential.[127] The majority of the benefits from exercise are achieved with around 3,500 metabolic equivalent (MET) minutes per week.[128] For example, climbing stairs for 10 minutes, vacuuming for 15 minutes, gardening for 20 minutes, running for 20 minutes, and walking or bicycling for 25 minutes on a daily basis would together achieve about 3,000 MET minutes a week.[128]
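The example's arithmetic is easy to reproduce: multiply each activity's minutes by its MET value and scale to a week. A sketch in Python; the MET constants are approximate compendium-style values assumed here, so the total lands in the same ballpark as the article's ~3,000 rather than matching it exactly.

```python
# Weekly MET-minute bookkeeping for the daily activities in the example.
# MET values are assumed approximations, not figures from the text.
daily_activities = {                 # activity: (minutes per day, MET value)
    "climbing stairs": (10, 8.0),
    "vacuuming":       (15, 3.5),
    "gardening":       (20, 4.0),
    "running":         (20, 8.0),
    "walking/cycling": (25, 6.0),
}

daily = sum(minutes * met for minutes, met in daily_activities.values())
weekly = 7 * daily
print(f"~{weekly:.0f} MET-minutes per week")  # on the order of the ~3,000 quoted
```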

Avoidance of chronic stress (as opposed to acute stress) is associated with a slower loss of telomeres in most but not all studies,[129][130] and with decreased cortisol levels. A chronically high cortisol level compromises the immune system, causes cardiac damage/arteriosclerosis, and is associated with facial ageing, which in turn is a marker for increased morbidity and mortality.[131][132] Stress can be countered by social connection, spirituality, and (for men more clearly than for women) married life, all of which are associated with longevity.[133][134][135]

The following drugs and interventions have been shown to retard or reverse the biological effects of ageing in animal models, but none has yet been proven to do so in humans.

Evidence in both animals and humans suggests that resveratrol may be a caloric restriction mimetic.[136]

As of 2015, metformin was under study for its potential effect on slowing ageing in the worm C. elegans and the cricket.[137] Its effect on otherwise healthy humans is unknown.[137]

Rapamycin was first shown to extend lifespan in eukaryotes in 2006 by Powers et al. who showed a dose-responsive effect of rapamycin on lifespan extension in yeast cells.[138] In a 2009 study, the lifespans of mice fed rapamycin were increased between 28 and 38% from the beginning of treatment, or 9 to 14% in total increased maximum lifespan. Of particular note, the treatment began in mice aged 20 months, the equivalent of 60 human years.[139] Rapamycin has subsequently been shown to extend mouse lifespan in several separate experiments,[140][141] and is now being tested for this purpose in nonhuman primates (the marmoset monkey).[142]

Cancer geneticist Ronald A. DePinho and his colleagues published research in mice where telomerase activity was first genetically removed. Then, after the mice had prematurely aged, they restored telomerase activity by reactivating the telomerase gene. As a result, the mice were rejuvenated: Shrivelled testes grew back to normal and the animals regained their fertility. Other organs, such as the spleen, liver, intestines and brain, recuperated from their degenerated state. "[The finding] offers the possibility that normal human ageing could be slowed by reawakening the enzyme in cells where it has stopped working" says Ronald DePinho. However, activating telomerase in humans could potentially encourage the growth of tumours.[143]

Most known genetic interventions in C. elegans increase lifespan by 1.5- to 2.5-fold. As of 2009, the record for lifespan extension in C. elegans is a single-gene mutation which increases adult survival tenfold.[49] The strong conservation of some of the mechanisms of ageing discovered in model organisms implies that they may be useful in the enhancement of human survival. However, the benefits may not be proportional; longevity gains are typically greater in C. elegans than in fruit flies, and greater in fruit flies than in mammals. One explanation for this is that mammals, being much longer-lived, already have many traits which promote lifespan.[49]

Some research effort is directed to slow ageing and extend healthy lifespan.[144][145][146]

The US National Institute on Aging currently funds an intervention testing programme, whereby investigators nominate compounds (based on specific molecular ageing theories) to be evaluated with respect to their effects on lifespan and age-related biomarkers in outbred mice.[147] Previous age-related testing in mammals has proved largely irreproducible because of small numbers of animals and lax mouse husbandry conditions. The intervention testing programme aims to address this by conducting parallel experiments at three internationally recognised mouse ageing centres: the Barshop Institute at UTHSCSA, the University of Michigan at Ann Arbor, and the Jackson Laboratory.

Several companies and organisations, such as Google's Calico, Craig Venter's Human Longevity, Gero,[148] the SENS Research Foundation, and Science for Life Extension in Russia,[149] have declared stopping or delaying ageing as their goal.

Prizes for extending lifespan and slowing ageing in mammals exist. The Methuselah Foundation offers the Mprize, and in 2014 the $1 million Palo Alto Longevity Prize was launched: a research incentive prize to encourage teams from all over the world to compete in an all-out effort to "hack the code" that regulates our health and lifespan. It was founded by Joon Yun.[150][151][152][153][154]

Different cultures express age in different ways. The age of an adult human is commonly measured in whole years since the day of birth. Arbitrary divisions set to mark periods of life may include: juvenile (via infancy, childhood, preadolescence, adolescence), early adulthood, middle adulthood, and late adulthood. More casual terms may include "teenagers," "tweens," "twentysomething", "thirtysomething", etc. as well as "vicenarian", "tricenarian", "quadragenarian", etc.

Most legal systems define a specific age for when an individual is allowed or obliged to do particular activities. These age specifications include voting age, drinking age, age of consent, age of majority, age of criminal responsibility, marriageable age, age of candidacy, and mandatory retirement age. Admission to a movie for instance, may depend on age according to a motion picture rating system. A bus fare might be discounted for the young or old. Each nation, government and non-governmental organisation has different ways of classifying age. In other words, chronological ageing may be distinguished from "social ageing" (cultural age-expectations of how people should act as they grow older) and "biological ageing" (an organism's physical state as it ages).[155]

A UNFPA report on ageing in the 21st century highlighted the need to "Develop a new rights-based culture of ageing and a change of mindset and societal attitudes towards ageing and older persons, from welfare recipients to active, contributing members of society."[156] UNFPA said that this "requires, among others, working towards the development of international human rights instruments and their translation into national laws and regulations and affirmative measures that challenge age discrimination and recognise older people as autonomous subjects."[156] Older persons make contributions to society including caregiving and volunteering. For example, "A study of Bolivian migrants who [had] moved to Spain found that 69% left their children at home, usually with grandparents. In rural China, grandparents care for 38% of children aged under five whose parents have gone to work in cities."[156]

Population ageing is the increase in the number and proportion of older people in society. Population ageing has three possible causes: migration, longer life expectancy (decreased death rate) and decreased birth rate. Ageing has a significant impact on society. Young people tend to have fewer legal privileges (if they are below the age of majority), they are more likely to push for political and social change, to develop and adopt new technologies, and to need education. Older people have different requirements from society and government, and frequently have differing values as well, such as for property and pension rights.[157]

In the 21st century, one of the most significant population trends is ageing.[158] Currently, over 11% of the world's population are people aged 60 and older, and the United Nations Population Fund (UNFPA) estimates that by 2050 that number will rise to approximately 22%.[156] Ageing has occurred due to development which has enabled better nutrition, sanitation, health care, education and economic well-being. Consequently, fertility rates have continued to decline and life expectancy has risen. Life expectancy at birth is over 80 now in 33 countries. Ageing is a "global phenomenon" that is occurring fastest in developing countries, including those with large youth populations, and poses social and economic challenges that can be overcome with "the right set of policies to equip individuals, families and societies to address these challenges and to reap its benefits."[159]

As life expectancy rises and birth rates decline in developed countries, the median age rises accordingly. According to the United Nations, this process is taking place in nearly every country in the world.[160] A rising median age can have significant social and economic implications, as the workforce gets progressively older and the number of old workers and retirees grows relative to the number of young workers. Older people generally incur more health-related costs than do younger people in the workplace and can also cost more in worker's compensation and pension liabilities.[161] In most developed countries an older workforce is somewhat inevitable. In the United States for instance, the Bureau of Labor Statistics estimates that one in four American workers will be 55 or older by 2020.[161]

Among the most urgent concerns of older persons worldwide is income security. This poses challenges for governments with ageing populations to ensure that investment in pension systems continues in order to provide economic independence and reduce poverty in old age. These challenges vary for developing and developed countries. UNFPA stated that, "Sustainability of these systems is of particular concern, particularly in developed countries, while social protection and old-age pension coverage remain a challenge for developing countries, where a large proportion of the labour force is found in the informal sector."[156]

The global economic crisis has increased financial pressure to ensure economic security and access to health care in old age. In order to alleviate this pressure, "social protection floors must be implemented in order to guarantee income security and access to essential health and social services for all older persons and provide a safety net that contributes to the postponement of disability and prevention of impoverishment in old age."[156]

It has been argued that population ageing has undermined economic development.[162] Evidence suggests that pensions, while making a difference to the well-being of older persons, also benefit entire families especially in times of crisis when there may be a shortage or loss of employment within households. A study by the Australian Government in 2003 estimated that "women between the ages of 65 and 74 years contribute A$16 billion per year in unpaid caregiving and voluntary work. Similarly, men in the same age group contributed A$10 billion per year."[156]

Due to the increasing share of the elderly in the population, health care expenditures will continue to grow relative to the economy in coming decades. This has been considered a negative phenomenon, and effective strategies such as enhancing labour productivity should be considered to deal with the negative consequences of ageing.[163]

In the field of sociology and mental health, ageing is seen in five different views: ageing as maturity, ageing as decline, ageing as a life-cycle event, ageing as generation, and ageing as survival.[164] Positive correlates with ageing often include economics, employment, marriage, children, education, and sense of control, as well as many others. The social science of ageing includes disengagement theory, activity theory, selectivity theory, and continuity theory. Retirement, a common transition faced by the elderly, may have both positive and negative consequences.[165] As cyborgs are currently on the rise, some theorists argue there is a need to develop new definitions of ageing; for instance, a bio-techno-social definition of ageing has been suggested.[166]

With age, inevitable biological changes occur that increase the risk of illness and disability. UNFPA states that:[159]

"A life-cycle approach to health care one that starts early, continues through the reproductive years and lasts into old age is essential for the physical and emotional well-being of older persons, and, indeed, all people. Public policies and programmes should additionally address the needs of older impoverished people who cannot afford health care."

Many societies in Western Europe and Japan have ageing populations. While the effects on society are complex, there is a concern about the impact on health care demand. The large number of suggestions in the literature for specific interventions to cope with the expected increase in demand for long-term care in ageing societies can be organised under four headings: improve system performance; redesign service delivery; support informal caregivers; and shift demographic parameters.[167]

However, the annual growth in national health spending is not mainly due to increasing demand from ageing populations, but rather has been driven by rising incomes, costly new medical technology, a shortage of health care workers and informational asymmetries between providers and patients.[168] A number of health problems become more prevalent as people get older. These include physical health problems as well as mental health problems, especially dementia.

It has been estimated that population ageing explains only 0.2 percentage points of the 4.3% annual growth rate in medical spending since 1970, i.e. under 5% of the total. In addition, certain reforms to the Medicare system in the United States decreased elderly spending on home health care by 12.5% per year between 1996 and 2000.[169]

Positive self-perception of health has been correlated with higher well-being and reduced mortality in the elderly.[170][171] Various reasons have been proposed for this association; people who are objectively healthy may naturally rate their health better than their ill counterparts do, though this link has been observed even in studies which controlled for socioeconomic status, psychological functioning and health status.[172] This finding is generally stronger for men than for women,[171] though the relationship is not universal across all studies and may only hold in some circumstances.[172]

As people age, subjective health remains relatively stable, even though objective health worsens.[173] In fact, perceived health improves with age when objective health is controlled for.[174] This phenomenon is known as the "paradox of ageing." It may be a result of social comparison;[175] for instance, the older people get, the more likely they are to consider themselves in better health than their same-aged peers.[176] Elderly people often attribute their functional and physical decline to the normal ageing process.[177][178]

The concept of successful ageing can be traced back to the 1950s and was popularised in the 1980s. Traditional definitions of successful ageing have emphasised absence of physical and cognitive disabilities.[179] In their 1987 article, Rowe and Kahn characterised successful ageing as involving three components: a) freedom from disease and disability, b) high cognitive and physical functioning, and c) social and productive engagement.[180]

The ancient Greek dramatist Euripides (5th century BC) describes the many-headed mythological monster Hydra as having a regenerative capacity that makes it immortal, which is the historical background to the name of the biological genus Hydra. The Book of Job (c. 6th century BC) describes human lifespan as inherently limited, drawing a comparison with the innate immortality of a felled tree, which may regrow through vegetative regeneration.[181]


Ashkenazi Jews – Wikipedia

December 7th, 2016 2:43 pm

Ashkenazi Jews (Y'hudey Ashkenaz in Ashkenazi Hebrew)

Total population: 10[1]–11.2[2] million

Regions with significant populations:
United States 5–6 million[3]
Israel 2.8 million[1][4]
Russia 194,000–500,000
Argentina 300,000
United Kingdom 260,000
Canada 240,000
France 200,000
Germany 200,000
Ukraine 150,000
Australia 120,000
South Africa 80,000
Belarus 80,000
Hungary 75,000
Chile 70,000
Belgium 30,000
Brazil 30,000
Netherlands 30,000
Moldova 30,000
Poland 25,000
Mexico 18,500
Sweden 18,000
Latvia 10,000
Romania 10,000
Austria 9,000
New Zealand 5,000
Azerbaijan 4,300
Lithuania 4,000
Czech Republic 3,000
Slovakia 3,000
Estonia 1,000

Languages: Historical: Yiddish. Modern: local languages, primarily English, Hebrew, Russian.

Religion: Judaism; some secular, irreligious.

Related ethnic groups: Sephardi Jews, Mizrahi Jews, Samaritans,[5][6][7] Kurds,[7] other Levantines (Druze, Assyrians,[5][6] Arabs[5][6][8][9]), Mediterranean groups[10][11][12][13][14]

Ashkenazi Jews, also known as Ashkenazic Jews or simply Ashkenazim (singular: Ashkenazi; also Y'hudey Ashkenaz),[15] are a Jewish diaspora population who coalesced as a distinct community in the Holy Roman Empire around the end of the first millennium.[16] The traditional diaspora language of Ashkenazi Jews is Yiddish (which incorporates several dialects), while until recently Hebrew was only used as a sacred language.

The Ashkenazim settled and established communities throughout Central and Eastern Europe, which was their primary region of concentration and residence from the Middle Ages until recent times. They subsequently evolved their own distinctive culture and diasporic identities.[17] Throughout their time in Europe, the Ashkenazim have made many important contributions to philosophy, scholarship, literature, art, music and science.[18][19][20][21]

In the late Middle Ages, the center of gravity of the Ashkenazi population shifted steadily eastward,[22] moving out of the Holy Roman Empire into the Pale of Settlement (comprising parts of present-day Belarus, Latvia, Lithuania, Moldova, Poland, Russia, and Ukraine).[23][24] In the course of the late 18th and 19th centuries, those Jews who remained in or returned to the German lands experienced a cultural reorientation; under the influence of the Haskalah and the struggle for emancipation, as well as the intellectual and cultural ferment in urban centers, they gradually abandoned the use of Yiddish, while developing new forms of Jewish religious life and cultural identity.[25]

The genocidal impact of the Holocaust (the mass murder of approximately six million Jews during World War II) devastated the Ashkenazim and their culture, affecting almost every Jewish family.[26][27] It is estimated that in the 11th century Ashkenazi Jews composed only three percent of the world's Jewish population, while at their peak in 1931 they accounted for 92 percent of the world's Jews. Immediately prior to the Holocaust, the number of Jews in the world stood at approximately 16.7 million.[28] Statistical figures vary for the contemporary demography of Ashkenazi Jews, oscillating between 10 million[1] and 11.2 million.[2] Sergio DellaPergola, in a rough calculation of Sephardic and Mizrahi Jews, implies that Ashkenazim make up less than 74% of Jews worldwide.[29] Other estimates place Ashkenazi Jews as making up about 75% of Jews worldwide.[30]

Genetic studies on Ashkenazim, researching both their paternal and maternal lineages, suggest a significant proportion of West Asian ancestry. Those studies have arrived at diverging conclusions regarding both the degree and the sources of their European ancestry, and have generally focused on the extent of the European genetic origin observed in Ashkenazi maternal lineages.[31] Ashkenazi Jews are popularly contrasted with Sephardi Jews (also called Sephardim), who are descendants of Jews from the Iberian Peninsula (though there are other groups as well). There are some differences in how the two groups pronounce certain Hebrew letters and in points of ritual.

The name Ashkenazi derives from the biblical figure of Ashkenaz, the first son of Gomer, son of Japheth, son of Noah, and a Japhetic patriarch in the Table of Nations (Genesis 10). The name of Gomer has often been linked to the ethnonym Cimmerians. Biblical Ashkenaz is usually derived from Assyrian Aškuza (cuneiform Aškuzai/Iškuzai), a people who expelled the Cimmerians from the Armenian area of the Upper Euphrates,[32] whose name is usually associated with the name of the Scythians.[33][34] The intrusive n in the Biblical name is likely due to a scribal error confusing a waw with a nun.[34][35][36]

In Jeremiah 51:27, Ashkenaz figures as one of three kingdoms in the far north, the others being Minni and Ararat, perhaps corresponding to Urartu, called on by God to resist Babylon.[36][37]

In the Yoma tractate of the Babylonian Talmud the name Gomer is rendered as Germania, which elsewhere in rabbinical literature was identified with Germanikia in northwestern Syria, but later became associated with Germania. Ashkenaz is linked to Scandza/Scanzia, viewed as the cradle of Germanic tribes, as early as a 6th-century gloss to the Historia Ecclesiastica of Eusebius.[38] In the 10th-century History of Armenia of Yovhannes Drasxanakertc'i (1.15) Ashkenaz was associated with Armenia,[39] as it was occasionally in Jewish usage, where its denotation extended at times to Adiabene, Khazaria, Crimea and areas to the east.[40] His contemporary Saadia Gaon identified Ashkenaz with the Saquliba or Slavic territories,[41] and such usage covered also the lands of tribes neighboring the Slavs, and Eastern and Central Europe.[40] In modern times, Samuel Krauss identified the Biblical "Ashkenaz" with Khazaria.[42]

Sometime in the early medieval period, the Jews of central and eastern Europe came to be called by this term.[36] In conformity with the custom of designating areas of Jewish settlement with biblical names, Spain was denominated Sefarad (Obadiah 20), France was called Tsarefat (1 Kings 17:9), and Bohemia was called the Land of Canaan.[43] By the high medieval period, Talmudic commentators like Rashi began to use Ashkenaz/Eretz Ashkenaz to designate Germany, earlier known as Loter,[36][38] where, especially in the Rhineland communities of Speyer, Worms and Mainz, the most important Jewish communities arose.[44] Rashi uses leshon Ashkenaz (Ashkenazi language) to describe German speech, and Byzantine and Syrian Jewish letters referred to the Crusaders as Ashkenazim.[38] Given the close links between the Jewish communities of France and Germany following the Carolingian unification, the term Ashkenazi came to refer to both the Jews of medieval Germany and France.[45]

Outside of their origins in ancient Israel, the history of Ashkenazim is shrouded in mystery,[46] and many theories have arisen speculating on their emergence as a distinct community of Jews.[47] The best-supported theory details a Jewish migration from Israel through what is now Italy and other parts of southern Europe.[48] The historical record attests to Jewish communities in southern Europe since pre-Christian times.[49] Many Jews were denied full Roman citizenship until 212 CE, when Emperor Caracalla granted all free peoples this privilege. Jews were required to pay a poll tax until the reign of Emperor Julian in 363. In the late Roman Empire, Jews were free to form networks of cultural and religious ties and enter into various local occupations. But after Christianity became the official religion of Rome and Constantinople in 380, Jews were increasingly marginalized.

The history of Jews in Greece goes back to at least the Archaic Era of Greece, when the classical culture of Greece was undergoing a process of formalization after the Greek Dark Age. The Greek historian Herodotus knew of the Jews, whom he called "Palestinian Syrians",[citation needed] and listed them among the levied naval forces in service of the invading Persians. While Jewish monotheism was not deeply affected by Greek polytheism, the Greek way of living was attractive to many wealthier Jews.[50] The Synagogue in the Agora of Athens is dated to the period between 267 and 396 CE. The Stobi Synagogue in Macedonia was built on the ruins of a more ancient synagogue in the 4th century, while later in the 5th century the synagogue was transformed into a Christian basilica.[51] Hellenistic Judaism thrived in Antioch and Alexandria, and many of these Greek-speaking Jews would convert to Christianity.[52] Sporadic[53] epigraphic evidence from grave site excavations, particularly in Brigetio (Szőny), Aquincum (Óbuda), Intercisa (Dunaújváros), Triccinae (Sárvár), Savaria (Szombathely) and Sopianae (Pécs) in Hungary, and Osijek in Croatia, attests to the presence of Jews after the 2nd and 3rd centuries where Roman garrisons were established.[54] There was a sufficient number of Jews in Pannonia to form communities and build a synagogue. Jewish troops were among the Syrian soldiers transferred there, and replenished from the Middle East, after 175 CE. Jews and especially Syrians came from Antioch, Tarsus and Cappadocia. Others came from Italy and the Hellenized parts of the Roman empire. The excavations suggest they first lived in isolated enclaves attached to Roman legion camps, and intermarried with other similar oriental families within the military orders of the region.[53] Raphael Patai states that later Roman writers remarked that they differed little in either customs, manner of writing, or names from the people among whom they dwelt; and it was especially difficult to differentiate Jews from the Syrians.[55][56] After Pannonia was ceded to the Huns in 433, the garrison populations were withdrawn to Italy, and only a few, enigmatic traces remain of a possible Jewish presence in the area some centuries later.[57]

No evidence has yet been found of a Jewish presence in antiquity in Germany beyond its Roman border, nor in Eastern Europe. In Gaul and Germany itself, with the possible exception of Trier and Cologne, the archeological evidence suggests at most a fleeting presence of very few Jews, primarily itinerant traders or artisans.[58] A substantial Jewish population emerged in northern Gaul by the Middle Ages,[59] but Jewish communities existed in 465 CE in Brittany, in 524 CE in Valence, and in 533 CE in Orleans.[60] Throughout this period and into the early Middle Ages, some Jews assimilated into the dominant Greek and Latin cultures, mostly through conversion to Christianity.[61][better source needed] King Dagobert I of the Franks expelled the Jews from his Merovingian kingdom in 629. Jews in former Roman territories faced new challenges as harsher anti-Jewish Church rulings were enforced.

Charlemagne's expansion of the Frankish empire around 800, including northern Italy and Rome, brought on a brief period of stability and unity in Francia. This created opportunities for Jewish merchants to settle again north of the Alps. Charlemagne granted the Jews freedoms similar to those once enjoyed under the Roman Empire. In addition, Jews from southern Italy, fleeing religious persecution, began to move into central Europe.[citation needed] Returning to Frankish lands, many Jewish merchants took up occupations in finance and commerce, including money lending, or usury. (Church legislation banned Christians from lending money in exchange for interest.) From Charlemagne's time to the present, Jewish life in northern Europe is well documented. By the 11th century, when Rashi of Troyes wrote his commentaries, Jews in what came to be known as "Ashkenaz" were known for their halakhic learning and Talmudic studies. They were criticized by Sephardim and other Jewish scholars in Islamic lands for their lack of expertise in Jewish jurisprudence (dinim) and general ignorance of Hebrew linguistics and literature.[62] Yiddish emerged as a result of Judeo-Latin language contact with various High German vernaculars in the medieval period.[63] It is a Germanic language written with Hebrew letters, and heavily influenced by Hebrew and Aramaic, with some elements of Romance and later Slavic languages.[64]

Historical records show evidence of Jewish communities north of the Alps and Pyrenees as early as the 8th and 9th century. By the 11th century Jewish settlers, moving from southern European and Middle Eastern centers, appear to have begun to settle in the north, especially along the Rhine, often in response to new economic opportunities and at the invitation of local Christian rulers. Thus Baldwin V, Count of Flanders, invited Jacob ben Yekutiel and his fellow Jews to settle in his lands; and soon after the Norman Conquest of England, William the Conqueror likewise extended a welcome to continental Jews to take up residence there. Bishop Rüdiger Huzmann called on the Jews of Mainz to relocate to Speyer. In all of these decisions, the idea that Jews had the know-how and capacity to jump-start the economy, improve revenues, and enlarge trade seems to have played a prominent role.[65] Typically Jews relocated close to the markets and churches in town centres, where, though they came under the authority of both royal and ecclesiastical powers, they were accorded administrative autonomy.[65]

In the 11th century, both Rabbinic Judaism and the culture of the Babylonian Talmud that underlies it became established in southern Italy and then spread north to Ashkenaz.[66]

The Jewish communities along the Rhine river from Cologne to Mainz were decimated in the Rhineland massacres of 1096. With the onset of the Crusades in 1095, and the expulsions from England (1290), France (1394), and parts of Germany (15th century), Jewish migration pushed eastward into Poland (10th century), Lithuania (10th century), and Russia (12th century). Over this period of several hundred years, some have suggested, Jewish economic activity was focused on trade, business management, and financial services, due to several presumed factors: Christian European prohibitions restricting certain activities by Jews, preventing certain financial activities (such as "usurious" loans)[67] between Christians, high rates of literacy, near universal male education, and ability of merchants to rely upon and trust family members living in different regions and countries.

By the 15th century, the Ashkenazi Jewish communities in Poland were the largest Jewish communities of the Diaspora.[68] This area, which eventually fell under the domination of Russia, Austria, and Prussia (Germany), would remain the main center of Ashkenazi Jewry until the Holocaust.

The answer to why there was so little assimilation of Jews in central and eastern Europe for so long would seem to lie in part in the probability that the alien surroundings in central and eastern Europe were not conducive to assimilation, though contempt did not prevent some assimilation. Furthermore, Jews lived almost exclusively in shtetls, maintained a strong system of education for males, heeded rabbinic leadership, and scorned the lifestyle of their neighbors; and all of these tendencies increased with every outbreak of antisemitism.[69]

In the first half of the 11th century, Hai Gaon refers to questions that had been addressed to him from Ashkenaz, by which he undoubtedly means Germany. Rashi in the latter half of the 11th century refers to both the language of Ashkenaz[70] and the country of Ashkenaz.[71] During the 12th century, the word appears quite frequently. In the Mahzor Vitry, the kingdom of Ashkenaz is referred to chiefly in regard to the ritual of the synagogue there, but occasionally also with regard to certain other observances.[72]

In the literature of the 13th century, references to the land and the language of Ashkenaz often occur. Examples include Solomon ben Aderet's Responsa (vol. i., No. 395); the Responsa of Asher ben Jehiel (pp. 4, 6); his Halakot (Berakot i. 12, ed. Wilna, p. 10); the work of his son Jacob ben Asher, Tur Orach Chayim (chapter 59); and the Responsa of Isaac ben Sheshet (numbers 193, 268, 270).

In the Midrash compilation, Genesis Rabbah, Rabbi Berechiah mentions Ashkenaz, Riphath, and Togarmah as German tribes or as German lands. It may correspond to a Greek word that may have existed in the Greek dialect of the Jews in Syria Palaestina, or the text is corrupted from "Germanica." This view of Berechiah is based on the Talmud (Yoma 10a; Jerusalem Talmud Megillah 71b), where Gomer, the father of Ashkenaz, is translated by Germamia, which evidently stands for Germany, and which was suggested by the similarity of the sound.

In later times, the word Ashkenaz is used to designate southern and western Germany, the ritual of which sections differs somewhat from that of eastern Germany and Poland. Thus the prayer-book of Isaiah Horowitz, and many others, give the piyyutim according to the Minhag of Ashkenaz and Poland.

According to the 16th-century mystic Rabbi Elijah of Chelm, Ashkenazi Jews lived in Jerusalem during the 11th century. The story is told that a German-speaking Jew saved the life of a young German man surnamed Dolberger. So when the knights of the First Crusade came to besiege Jerusalem, one of Dolberger's family members who was among them rescued Jews in Palestine and carried them back to Worms to repay the favor.[73] Further evidence of German communities in the holy city comes in the form of halakhic questions sent from Germany to Jerusalem during the second half of the 11th century.[74]

Material relating to the history of German Jews has been preserved in the communal accounts of certain communities on the Rhine, a Memorbuch, and a Liebesbrief, documents that are now part of the Sassoon Collection.[75] Heinrich Graetz has also added to the history of German Jewry in modern times in the abstract of his seminal work, History of the Jews, which he entitled "Volksthümliche Geschichte der Juden."

In an essay on Sephardi Jewry, Daniel Elazar of the Jerusalem Center for Public Affairs[76] summarized the demographic history of Ashkenazi Jews in the last thousand years. At the end of the 11th century, 97% of world Jewry was Sephardic and 3% Ashkenazi. By the end of the 16th century, the "Treaty on the Redemption of Captives" by Gracian of the God's Mother, a Mercedarian priest who had been imprisoned by the Turks, cites a Tunisian Jew named "Simon Escanasi", made captive on arriving at Gaeta, who aided others with money. In the mid-17th century, "Sephardim still outnumbered Ashkenazim three to two", but by the end of the 18th century, "Ashkenazim outnumbered Sephardim three to two, the result of improved living conditions in Christian Europe versus the Ottoman Muslim world."[76] By 1931, Ashkenazi Jews accounted for nearly 92% of world Jewry.[76] These figures are sheer demography, reflecting the migration patterns of Jews from Southern and Western Europe to Central and Eastern Europe.

In 1740 a family from Lithuania became the first Ashkenazi Jews to settle in the Jewish Quarter of Jerusalem.[77]

In the generations after emigration from the west, Jewish communities in places like Poland, Russia, and Belarus enjoyed a comparatively stable socio-political environment. A thriving publishing industry and the printing of hundreds of biblical commentaries precipitated the development of the Hasidic movement as well as major Jewish academic centers.[78] After two centuries of comparative tolerance in the new nations, massive westward emigration occurred in the 19th and 20th centuries in response to pogroms in the east and the economic opportunities offered in other parts of the world. Ashkenazi Jews have made up the majority of the American Jewish community since 1750.[68]

In the context of the European Enlightenment, Jewish emancipation began in 18th century France and spread throughout Western and Central Europe. Disabilities that had limited the rights of Jews since the Middle Ages were abolished, including the requirements to wear distinctive clothing, pay special taxes, and live in ghettos isolated from non-Jewish communities, and the prohibitions on certain professions. Laws were passed to integrate Jews into their host countries, forcing Ashkenazi Jews to adopt family names (they had formerly used patronymics). Newfound inclusion into public life led to cultural growth in the Haskalah, or Jewish Enlightenment, with its goal of integrating modern European values into Jewish life.[79] As a reaction to increasing antisemitism and assimilation following the emancipation, Zionism was developed in central Europe.[80] Other Jews, particularly those in the Pale of Settlement, turned to socialism. These tendencies would be united in Labor Zionism, the founding ideology of the State of Israel.

Of the estimated 8.8 million Jews living in Europe at the beginning of World War II, the majority of whom were Ashkenazi, about 6 million (more than two-thirds) were systematically murdered in the Holocaust. These included 3 million of 3.3 million Polish Jews (91%); 900,000 of 1.5 million in Ukraine (60%); and 50–90% of the Jews of other Slavic nations, Germany, Hungary, and the Baltic states, and over 25% of the Jews in France. Sephardi communities suffered similar depletions in a few countries, including Greece, the Netherlands and the former Yugoslavia.[81] As the large majority of the victims were Ashkenazi Jews, their percentage dropped from nearly 92% of world Jewry in 1931 to nearly 80% of world Jewry today.[76] The Holocaust also effectively put an end to the dynamic development of the Yiddish language in the previous decades, as the vast majority of the Jewish victims of the Holocaust, around 5 million, were Yiddish speakers.[82] Many of the surviving Ashkenazi Jews emigrated to countries such as Israel, Canada, Argentina, Australia, and the United States after the war.

Following the Holocaust, some sources place Ashkenazim today as making up approximately 83–85 percent of Jews worldwide,[83][84][85][86] while Sergio DellaPergola, in a rough calculation of Sephardic and Mizrahi Jews, implies that Ashkenazim make up a notably lower figure, less than 74%.[29] Other estimates place Ashkenazi Jews as making up about 75% of Jews worldwide.[30] Ashkenazi Jews constitute around 35–36% of Israel's total population, or 47.5% of Israel's Jewish population.[87][88]

In Israel, the term Ashkenazi is now used in a manner unrelated to its original meaning, often applied to all Jews who settled in Europe and sometimes including those whose ethnic background is actually Sephardic. Jews of any non-Ashkenazi background, including Mizrahi, Yemenite, Kurdish and others who have no connection with the Iberian Peninsula, have similarly come to be lumped together as Sephardic. Jews of mixed background are increasingly common, partly because of intermarriage between Ashkenazi and non-Ashkenazi, and partly because many do not see such historic markers as relevant to their life experiences as Jews.[89]

Religious Ashkenazi Jews living in Israel are obliged to follow the authority of the chief Ashkenazi rabbi in halakhic matters. In this respect, a religiously Ashkenazi Jew is an Israeli who is more likely to support certain religious interests in Israel, including certain political parties. These political parties result from the fact that a portion of the Israeli electorate votes for Jewish religious parties; although the electoral map changes from one election to another, there are generally several small parties associated with the interests of religious Ashkenazi Jews. The role of religious parties, including small religious parties that play important roles as coalition members, results in turn from Israel's composition as a complex society in which competing social, economic, and religious interests stand for election to the Knesset, a unicameral legislature with 120 seats.[90]

People of Ashkenazi descent constitute around 47.5% of Israeli Jews (and therefore 35–36% of Israelis).[4] They have played a prominent role in the economy, media, and politics[91] of Israel since its founding. During the first decades of Israel as a state, strong cultural conflict occurred between Sephardic and Ashkenazi Jews (mainly east European Ashkenazim). The roots of this conflict, which still exists to a much smaller extent in present-day Israeli society, are chiefly attributed to the concept of the "melting pot".[92] That is to say, all Jewish immigrants who arrived in Israel were strongly encouraged to "melt down" their own particular exilic identities within the general social "pot" in order to become Israeli.[93]


Religious Jews have Minhagim, customs, in addition to Halakha, or religious law, and different interpretations of law. Different groups of religious Jews in different geographic areas historically adopted different customs and interpretations. On certain issues, Orthodox Jews are required to follow the customs of their ancestors, and do not believe they have the option of picking and choosing. For this reason, observant Jews at times find it important for religious reasons to ascertain who their household's religious ancestors are in order to know what customs their household should follow. These times include, for example, when two Jews of different ethnic background marry, when a non-Jew converts to Judaism and determines what customs to follow for the first time, or when a lapsed or less observant Jew returns to traditional Judaism and must determine what was done in his or her family's past. In this sense, "Ashkenazic" refers both to a family ancestry and to a body of customs binding on Jews of that ancestry. Reform Judaism, which does not necessarily follow those minhagim, did nonetheless originate among Ashkenazi Jews.[94]

In a religious sense, an Ashkenazi Jew is any Jew whose family tradition and ritual follows Ashkenazi practice. Until the Ashkenazi community first began to develop in the Early Middle Ages, the centers of Jewish religious authority were in the Islamic world, at Baghdad and in Islamic Spain. Ashkenaz (Germany) was so distant geographically that it developed a minhag of its own. Ashkenazi Hebrew came to be pronounced in ways distinct from other forms of Hebrew.[95]

In this respect, the counterpart of Ashkenazi is Sephardic, since most non-Ashkenazi Orthodox Jews follow Sephardic rabbinical authorities, whether or not they are ethnically Sephardic. By tradition, a Sephardic or Mizrahi woman who marries into an Orthodox or Haredi Ashkenazi Jewish family raises her children to be Ashkenazi Jews; conversely an Ashkenazi woman who marries a Sephardi or Mizrahi man is expected to take on Sephardic practice and the children inherit a Sephardic identity, though in practice many families compromise. A convert generally follows the practice of the beth din that converted him or her. With the integration of Jews from around the world in Israel, North America, and other places, the religious definition of an Ashkenazi Jew is blurring, especially outside Orthodox Judaism.[96]

New developments in Judaism often transcend differences in religious practice between Ashkenazi and Sephardic Jews. In North American cities, social trends such as the chavurah movement, and the emergence of "post-denominational Judaism"[97][98] often bring together younger Jews of diverse ethnic backgrounds. In recent years, there has been increased interest in Kabbalah, which many Ashkenazi Jews study outside of the Yeshiva framework. Another trend is the new popularity of ecstatic worship in the Jewish Renewal movement and the Carlebach style minyan, both of which are nominally of Ashkenazi origin.[99]

Culturally, an Ashkenazi Jew can be identified by the concept of Yiddishkeit, which means "Jewishness" in the Yiddish language.[100] Yiddishkeit is specifically the Jewishness of Ashkenazi Jews.[101] Before the Haskalah and the emancipation of Jews in Europe, this meant the study of Torah and Talmud for men, and a family and communal life governed by the observance of Jewish Law for men and women. From the Rhineland to Riga to Romania, most Jews prayed in liturgical Ashkenazi Hebrew, and spoke Yiddish in their secular lives. But with modernization, Yiddishkeit now encompasses not just Orthodoxy and Hasidism, but a broad range of movements, ideologies, practices, and traditions in which Ashkenazi Jews have participated and somehow retained a sense of Jewishness. Although a far smaller number of Jews still speak Yiddish, Yiddishkeit can be identified in manners of speech, in styles of humor, in patterns of association. Broadly speaking, a Jew is one who associates culturally with Jews, supports Jewish institutions, reads Jewish books and periodicals, attends Jewish movies and theater, travels to Israel, visits historical synagogues, and so forth. It is a definition that applies to Jewish culture in general, and to Ashkenazi Yiddishkeit in particular.

As Ashkenazi Jews moved away from Europe, mostly in the form of aliyah to Israel, or immigration to North America, and other English-speaking areas such as South Africa; and Europe (particularly France) and Latin America, the geographic isolation that gave rise to Ashkenazim has given way to mixing with other cultures, and with non-Ashkenazi Jews who, similarly, are no longer isolated in distinct geographic locales. Hebrew has replaced Yiddish as the primary Jewish language for many Ashkenazi Jews, although many Hasidic and Hareidi groups continue to use Yiddish in daily life. (There are numerous Ashkenazi Jewish anglophones and Russian-speakers as well, although English and Russian are not originally Jewish languages.)

France's blended Jewish community is typical of the cultural recombination that is going on among Jews throughout the world. Although France expelled its original Jewish population in the Middle Ages, by the time of the French Revolution, there were two distinct Jewish populations. One consisted of Sephardic Jews, originally refugees from the Inquisition and concentrated in the southwest, while the other community was Ashkenazi, concentrated in formerly German Alsace, and mainly speaking a German dialect similar to Yiddish. (A third community of Provençal Jews living in Comtat Venaissin were technically outside France, and were later absorbed into the Sephardim.) The two communities were so separate and different that the National Assembly emancipated them separately in 1790 and 1791.[102]

But after emancipation, a sense of a unified French Jewry emerged, especially when France was wracked by the Dreyfus affair in the 1890s. In the 1920s and 1930s, Ashkenazi Jews from Europe arrived in large numbers as refugees from antisemitism, the Russian revolution, and the economic turmoil of the Great Depression. By the 1930s, Paris had a vibrant Yiddish culture, and many Jews were involved in diverse political movements. After the Vichy years and the Holocaust, the French Jewish population was augmented once again, first by Ashkenazi refugees from Central Europe, and later by Sephardi immigrants and refugees from North Africa, many of them francophone.

Then, in the 1990s, yet another Ashkenazi Jewish wave began to arrive from countries of the former Soviet Union and Central Europe. The result is a pluralistic Jewish community that still has some distinct elements of both Ashkenazi and Sephardic culture. But in France, it is becoming much more difficult to sort out the two, and a distinctly French Jewishness has emerged.[103]

In an ethnic sense, an Ashkenazi Jew is one whose ancestry can be traced to the Jews who settled in Central Europe. For roughly a thousand years, the Ashkenazim were a reproductively isolated population in Europe, despite living in many countries, with little inflow or outflow from migration, conversion, or intermarriage with other groups, including other Jews. Human geneticists have argued that genetic variations have been identified that show high frequencies among Ashkenazi Jews, but not in the general European population, both for patrilineal markers (Y-chromosome haplotypes) and for matrilineal markers (mitotypes).[104] Since the middle of the 20th century, many Ashkenazi Jews have intermarried, both with members of other Jewish communities and with people of other nations and faiths.[105]

A 2006 study found Ashkenazi Jews to be a clear, homogeneous genetic subgroup. Strikingly, regardless of the place of origin, Ashkenazi Jews can be grouped in the same genetic cohort; that is, regardless of whether an Ashkenazi Jew's ancestors came from Poland, Russia, Hungary, Lithuania, or any other place with a historical Jewish population, they belong to the same ethnic group. The research demonstrates the endogamy of the Jewish population in Europe and lends further credence to the idea of Ashkenazi Jews as an ethnic group. Moreover, though intermarriage among Jews of Ashkenazi descent has become increasingly common, many Haredi Jews, particularly members of Hasidic or Hareidi sects, continue to marry exclusively fellow Ashkenazi Jews. This trend keeps Ashkenazi genes prevalent and also helps researchers further study the genes of Ashkenazi Jews with relative ease. It is noteworthy that these Haredi Jews often have extremely large families.[10]

The Halakhic practices of (Orthodox) Ashkenazi Jews may differ from those of Sephardi Jews, particularly in matters of custom. Differences are noted in the Shulkhan Arukh itself, in the gloss of Moses Isserles.

The term Ashkenazi also refers to the nusach Ashkenaz (Hebrew, "liturgical tradition", or rite) used by Ashkenazi Jews in their Siddur (prayer book). A nusach is defined by a liturgical tradition's choice of prayers, order of prayers, text of prayers and melodies used in the singing of prayers. Two other major forms of nusach among Ashkenazic Jews are Nusach Sefard (not to be confused with the Sephardic ritual), which is the general Polish Hasidic nusach, and Nusach Ari, as used by Lubavitch Hasidim.

Several famous people have Ashkenazi as a surname, such as Vladimir Ashkenazy. However, most people with this surname hail from within Sephardic communities, particularly the Syrian Jewish community. The Sephardic carriers of the surname would have some Ashkenazi ancestors, since the surname was adopted by families who were initially of Ashkenazic origin, who moved to Sephardi countries and joined those communities. Ashkenazi was formally adopted as the family surname, having started off as a nickname imposed by their adopted communities. Some have shortened the name to Ash.

Relations between Ashkenazim and Sephardim have not always been warm. North African Sephardim and Berber Jews were often looked upon by Ashkenazim as second-class citizens during the first decade after the creation of Israel. This led to protest movements such as the Israeli Black Panthers, led by Saadia Marciano, a Moroccan Jew. Relations have since improved.[107] In some instances, Ashkenazi communities have accepted significant numbers of Sephardi newcomers, sometimes resulting in intermarriage.[108][109]

Ashkenazi Jews have a noted history of achievement in Western societies[110] in the fields of exact and social sciences, literature, finance, politics, media, and others. In those societies where they have been free to enter any profession, they have a record of high occupational achievement, entering professions and fields of commerce where higher education is required.[111] Ashkenazi Jews have won a large number of Nobel Prizes.[112][113] While they make up about 2% of the U.S. population,[114] 27% of United States Nobel Prize winners in the 20th century,[114] a quarter of Fields Medal winners,[115] 25% of ACM Turing Award winners,[114] half the world's chess champions,[114] including 8% of the top 100 world chess players,[116] and a quarter of Westinghouse Science Talent Search winners[115] have Ashkenazi Jewish ancestry.

Time magazine's person of the 20th century, Albert Einstein,[117] was an Ashkenazi Jew. According to a study performed by Cambridge University, 21% of Ivy League students, 25% of Turing Award winners, 23% of the wealthiest Americans, 38% of Oscar-winning film directors, and 29% of Oslo awardees are Ashkenazi Jews.[118]

Efforts to identify the origins of Ashkenazi Jews through DNA analysis began in the 1990s. Currently, there are three types of genetic origin testing: autosomal DNA (atDNA), mitochondrial DNA (mtDNA), and Y-chromosomal DNA (Y-DNA). Autosomal DNA is a mixture from an individual's entire ancestry; Y-DNA shows a male's lineage only along his strict-paternal line; mtDNA shows any person's lineage only along the strict-maternal line. Genome-wide association studies have also been employed to yield findings relevant to genetic origins.

Like most DNA studies of human migration patterns, the earliest studies on Ashkenazi Jews focused on the Y-DNA and mtDNA segments of the human genome. Both segments are unaffected by recombination (except for the ends of the Y chromosome: the pseudoautosomal regions known as PAR1 and PAR2), thus allowing the tracing of direct maternal and paternal lineages.
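Because these segments pass from parent to child essentially unchanged, tracing a lineage amounts to walking a single parental chain through a pedigree. A minimal sketch in Python, using an entirely hypothetical pedigree (the names and structure are illustrative, not from any cited study):

# Hypothetical pedigree: person -> (mother, father); None = unknown.
pedigree = {
    "proband": ("mother", "father"),
    "mother": ("grandmother_m", None),
    "father": (None, "grandfather_p"),
}

def maternal_line(person):
    # Follow mother links only: the path mtDNA takes.
    line = [person]
    while person in pedigree and pedigree[person][0] is not None:
        person = pedigree[person][0]
        line.append(person)
    return line

def paternal_line(person):
    # Follow father links only: the path the Y chromosome takes (males only).
    line = [person]
    while person in pedigree and pedigree[person][1] is not None:
        person = pedigree[person][1]
        line.append(person)
    return line

print(maternal_line("proband"))  # ['proband', 'mother', 'grandmother_m']
print(paternal_line("proband"))  # ['proband', 'father', 'grandfather_p']

Autosomal DNA, by contrast, recombines every generation, which is why atDNA reflects a mixture of all ancestral lines rather than a single chain.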

These studies revealed that Ashkenazi Jews originate from an ancient (2000 BCE - 700 BCE) population of the Middle East who had spread to Europe.[119] Ashkenazic Jews display the homogeneity of a genetic bottleneck, meaning they descend from a larger population whose numbers were greatly reduced but recovered through a few founding individuals. Although the Jewish people in general were present across a wide geographical area as described, genetic research done by Gil Atzmon of the Longevity Genes Project at Albert Einstein College of Medicine suggests "that Ashkenazim branched off from other Jews around the time of the destruction of the First Temple, 2,500 years ago ... flourished during the Roman Empire but then went through a 'severe bottleneck' as they dispersed, reducing a population of several million to just 400 families who left Northern Italy around the year 1000 for Central and eventually Eastern Europe."[120]

Various studies have arrived at diverging conclusions regarding both the degree and the sources of the non-Levantine admixture in Ashkenazim,[31] particularly with respect to the extent of the non-Levantine genetic origin observed in Ashkenazi maternal lineages, which is in contrast to the predominant Levantine genetic origin observed in Ashkenazi paternal lineages. All studies nevertheless agree that genetic overlap with the Fertile Crescent exists in both lineages, albeit at differing rates. Collectively, Ashkenazi Jews are less genetically diverse than other Jewish ethnic divisions, due to their genetic bottleneck.[121]

The majority of genetic findings to date concerning Ashkenazi Jews conclude that the male line was founded by ancestors from the Middle East.[122][123][124] Others have found a similar genetic line among Greeks and Macedonians.[citation needed]

A study of haplotypes of the Y-chromosome, published in 2000, addressed the paternal origins of Ashkenazi Jews. Hammer et al.[125] found that the Y-chromosome of Ashkenazi and Sephardic Jews contained mutations that are also common among other Middle Eastern peoples, but uncommon in the autochthonous European population. This suggested that the male ancestors of the Ashkenazi Jews could be traced mostly to the Middle East. The proportion of male genetic admixture in Ashkenazi Jews amounts to less than 0.5% per generation over an estimated 80 generations, with "relatively minor contribution of European Y chromosomes to the Ashkenazim," and a total admixture estimate "very similar to Motulsky's average estimate of 12.5%." This supported the finding that "Diaspora Jews from Europe, Northwest Africa, and the Near East resemble each other more closely than they resemble their non-Jewish neighbors." "Past research found that 50–80 percent of DNA from the Ashkenazi Y chromosome, which is used to trace the male lineage, originated in the Near East," Richards said.
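As a rough consistency check (our notation, not the study's), a constant per-generation gene flow m compounds over G generations to a cumulative admixture fraction

\[ M = 1 - (1 - m)^{G} . \]

Inverting this for the figures quoted above, a cumulative estimate of M = 12.5% over G = 80 generations corresponds to

\[ m = 1 - (1 - M)^{1/G} = 1 - 0.875^{1/80} \approx 0.0017 , \]

i.e. roughly 0.17% per generation, comfortably below the 0.5% per-generation upper bound reported.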

The population has subsequently spread out. Based on accounts such as those of Jewish historian Flavius Josephus, by the time of the destruction of the Second Temple in 70 CE, as many as six million Jews were already living in the Roman Empire, but outside Israel, mainly in Italy and Southern Europe. In contrast, only about 500,000 lived in Judea, said Ostrer, who was not involved in the new study.[126]

A 2001 study by Nebel et al. showed that both Ashkenazi and Sephardic Jewish populations share the same overall paternal Near Eastern ancestries. In comparison with data available from other relevant populations in the region, Jews were found to be more closely related to groups in the north of the Fertile Crescent. The authors also report that Eu 19 (R1a) chromosomes, which are very frequent in Central and Eastern Europeans (54–60%), occur at elevated frequency (12.7%) in Ashkenazi Jews. They hypothesized that the differences among Ashkenazi Jews could reflect low-level gene flow from surrounding European populations or genetic drift during isolation.[127] A later 2005 study by Nebel et al. found a similar level of 11.5% of male Ashkenazim belonging to R1a1a (M17+), the dominant Y-chromosome haplogroup in Central and Eastern Europeans.[128]

Before 2006, geneticists had largely attributed the ethnogenesis of most of the world's Jewish populations, including Ashkenazi Jews, to Israelite Jewish male migrants from the Middle East and "the women from each local population whom they took as wives and converted to Judaism." Thus, in 2002, in line with this model of origin, David Goldstein, now of Duke University, reported that unlike male Ashkenazi lineages, the female lineages in Ashkenazi Jewish communities "did not seem to be Middle Eastern", and that each community had its own genetic pattern and even that "in some cases the mitochondrial DNA was closely related to that of the host community." In his view this suggested "that Jewish men had arrived from the Middle East, taken wives from the host population and converted them to Judaism, after which there was no further intermarriage with non-Jews."[104]

In 2006, a study by Behar et al.,[129] based on what was at that time high-resolution analysis of haplogroup K (mtDNA), suggested that about 40% of the current Ashkenazi population is descended matrilineally from just four women, or "founder lineages", that were "likely from a Hebrew/Levantine mtDNA pool" originating in the Middle East in the 1st and 2nd centuries CE. Additionally, Behar et al. suggested that the rest of Ashkenazi mtDNA originated from about 150 women, most of whom were also likely of Middle Eastern origin.[129] In reference specifically to Haplogroup K, they suggested that although it is common throughout western Eurasia, "the observed global pattern of distribution renders very unlikely the possibility that the four aforementioned founder lineages entered the Ashkenazi mtDNA pool via gene flow from a European host population".

In 2013, however, a study of Ashkenazi mitochondrial DNA by a team led by Martin B. Richards of the University of Huddersfield in England reached different conclusions, corroborating the pre-2006 origin hypothesis. Testing was performed on the full 16,600 DNA units composing mitochondrial DNA (the 2006 Behar study had only tested 1,000 units) in all their subjects, and the study found that the four main female Ashkenazi founders had descent lines that were established in Europe 10,000 to 20,000 years in the past,[130] while most of the remaining minor founders also have a deep European ancestry. The study states that the great majority of Ashkenazi maternal lineages were not brought from the Near East (i.e., they were non-Israelite), nor were they recruited in the Caucasus (i.e., they were non-Khazar), but instead they were assimilated within Europe, primarily of Italian and Old French origins. Richards summarized the findings on the female line as such: "[N]one [of the mtDNA] came from the North Caucasus, located along the border between Europe and Asia between the Black and Caspian seas. All of our presently available studies, including my own, should thoroughly debunk one of the most questionable, but still tenacious, hypotheses: that most Ashkenazi Jews can trace their roots to the mysterious Khazar Kingdom that flourished during the ninth century in the region between the Byzantine Empire and the Persian Empire."[126] The 2013 study estimated that 80 percent of Ashkenazi maternal ancestry comes from women indigenous to Europe, and only 8 percent from the Near East, while the origin of the remainder is undetermined.[12][130] According to the study these findings "point to a significant role for the conversion of women in the formation of Ashkenazi communities."[12][13][131][132][133][134] Karl Skorecki at Technion criticized the study for perceived flaws in phylogenetic analysis. "While Costa et al have re-opened the question of the maternal origins of Ashkenazi Jewry, the phylogenetic analysis in the manuscript does not 'settle' the question."[135]

A 2014 study by Fernández et al. found that Ashkenazi Jews display a frequency of haplogroup K in their maternal DNA that suggests an ancient Near Eastern origin, similar to the results of Behar. Fernández stated that this observation clearly contradicts the results of the study led by Richards that suggested a European source for three exclusively Ashkenazi K lineages.[136]

In genetic epidemiology, a genome-wide association study (GWA study, or GWAS) is an examination of all or most of the genes (the genome) of different individuals of a particular species to see how much the genes vary from individual to individual. These techniques were originally designed for epidemiological uses, to identify genetic associations with observable traits.[137]
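At its core, a GWAS scans many markers one at a time for association between genotype and trait. The toy sketch below uses made-up data and is not from any cited study; real analyses rely on dedicated toolchains plus corrections for multiple testing and population stratification. It tests each SNP's allele counts against case/control status with a chi-square test:

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Made-up cohort: 200 individuals x 5 SNPs, genotype coded as minor-allele count (0/1/2).
genotypes = rng.integers(0, 3, size=(200, 5))
phenotype = rng.integers(0, 2, size=200)  # 1 = case, 0 = control

for snp in range(genotypes.shape[1]):
    cases = genotypes[phenotype == 1, snp]
    controls = genotypes[phenotype == 0, snp]
    # 2x2 allele-count table: rows = case/control, columns = minor/major allele.
    table = [
        [cases.sum(), 2 * cases.size - cases.sum()],
        [controls.sum(), 2 * controls.size - controls.sum()],
    ]
    chi2, p, _, _ = chi2_contingency(table)
    print(f"SNP {snp}: chi2 = {chi2:.2f}, p = {p:.3g}")

Because hundreds of thousands of markers are tested genome-wide, studies typically require a very stringent significance threshold (on the order of p < 5e-8) before calling an association.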

A 2006 study by Seldin et al. used over five thousand autosomal SNPs to demonstrate European genetic substructure. The results showed "a consistent and reproducible distinction between 'northern' and 'southern' European population groups". Most northern, central, and eastern Europeans (Finns, Swedes, English, Irish, Germans, and Ukrainians) showed >90% in the "northern" population group, while most individual participants with southern European ancestry (Italians, Greeks, Portuguese, Spaniards) showed >85% in the "southern" group. Both Ashkenazi Jews as well as Sephardic Jews showed >85% membership in the "southern" group. Referring to the Jews clustering with southern Europeans, the authors state the results were "consistent with a later Mediterranean origin of these ethnic groups".[10]

A 2007 study by Bauchet et al. found that Ashkenazi Jews were most closely clustered with Arabic North African populations when compared to global populations, and that in the European structure analysis they share similarities only with Greeks and Southern Italians, reflecting their east Mediterranean origins.[138][139]

A 2010 study on Jewish ancestry by Atzmon-Ostrer et al. stated "Two major groups were identified by principal component, phylogenetic, and identity by descent (IBD) analysis: Middle Eastern Jews and European/Syrian Jews. The IBD segment sharing and the proximity of European Jews to each other and to southern European populations suggested similar origins for European Jewry and refuted large-scale genetic contributions of Central and Eastern European and Slavic populations to the formation of Ashkenazi Jewry", as both groups (the Middle Eastern Jews and the European/Syrian Jews) shared common ancestors in the Middle East about 2,500 years ago. The study examines genetic markers spread across the entire genome and shows that the Jewish groups (Ashkenazi and non-Ashkenazi) share large swaths of DNA, indicating close relationships, and that each of the Jewish groups in the study (Iranian, Iraqi, Syrian, Italian, Turkish, Greek and Ashkenazi) has its own genetic signature but is more closely related to the other Jewish groups than to its fellow non-Jewish countrymen.[140] Atzmon's team found that the SNP markers in genetic segments of 3 million DNA letters or longer were 10 times more likely to be identical among Jews than non-Jews. Results of the analysis also tally with biblical accounts of the fate of the Jews. The study also found that with respect to non-Jewish European groups, the population most closely related to Ashkenazi Jews are modern-day Italians. The study speculated that the genetic similarity between Ashkenazi Jews and Italians may be due to inter-marriage and conversions in the time of the Roman Empire. It was also found that any two Ashkenazi Jewish participants in the study shared about as much DNA as fourth or fifth cousins.[141][142]
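The "fourth or fifth cousins" comparison can be made concrete with the standard pedigree expectation (our notation, not the study's): ignoring inbreeding, k-th cousins are expected to share a fraction

\[ \mathbb{E}[\text{IBD fraction}] = \left(\tfrac{1}{2}\right)^{2k+1} \]

of their autosomal DNA, so first cousins share about 1/8, fourth cousins about 1/512 (roughly 0.2%), and fifth cousins about 1/2048 (roughly 0.05%).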

A 2010 study by Bray et al., using SNP microarray techniques and linkage analysis, found that, when assuming Druze and Palestinian Arab populations to represent the reference world Jewry ancestor genome, between 35 and 55 percent of the modern Ashkenazi genome can possibly be of European origin, and that European "admixture is considerably higher than previous estimates by studies that used the Y chromosome" with this reference point. Assuming this reference point, the linkage disequilibrium in the Ashkenazi Jewish population was interpreted as "matches signs of interbreeding or 'admixture' between Middle Eastern and European populations".[143] On the Bray et al. tree, Ashkenazi Jews were found to be a genetically more divergent population than Russians, Orcadians, French, Basques, Italians, Sardinians and Tuscans. The study also observed that Ashkenazim are more diverse than their Middle Eastern relatives, which was counterintuitive because Ashkenazim are supposed to be a subset, not a superset, of their assumed geographical source population. Bray et al. therefore postulate that these results reflect not the population antiquity but a history of mixing between genetically distinct populations in Europe. However, it is possible that it was the relaxation of marriage prescriptions in the ancestors of Ashkenazim that drove their heterozygosity up, while the maintenance of the FBD rule (father's brother's daughter marriage) in native Middle Easterners has kept their heterozygosity values in check. The distinctiveness of the Ashkenazim found in the Bray et al. study may therefore come from their ethnic endogamy (ethnic inbreeding), which allowed them to "mine" their ancestral gene pool in the context of relative reproductive isolation from European neighbors, rather than from clan endogamy (clan inbreeding). Consequently, their higher diversity compared to Middle Easterners stems from the latter's marriage practices, not necessarily from the former's admixture with Europeans.[144]

The genome-wide genetic study carried out in 2010 by Behar et al. examined the genetic relationships among all major Jewish groups, including Ashkenazim, as well as the genetic relationship between these Jewish groups and non-Jewish ethnic populations. The study found that contemporary Jews (excluding Indian and Ethiopian Jews) have a close genetic relationship with people from the Levant. The authors explained that "the most parsimonious explanation for these observations is a common genetic origin, which is consistent with an historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant".[145]

A 2015 study by James Xue et al. suggested that 75% of the European ancestry in Ashkenazi Jews is South European, with the rest mostly East European. The time of admixture was inferred to be around 30–40 generations ago, on the eve of the Ashkenazi settlement in Eastern Europe.[146]

In the late 19th century, it was proposed that the core of today's Ashkenazi Jewry is genetically descended from a hypothetical Khazarian Jewish diaspora who had migrated westward from modern Russia and Ukraine into modern France and Germany (as opposed to the currently held theory that Jews from France and Germany migrated into Eastern Europe). The hypothesis is not corroborated by historical sources[147] and is unsubstantiated by genetics, but it is still occasionally supported by scholars who have had some success in keeping the theory in the academic consciousness.[148] The theory is associated with antisemitism[149] and anti-Zionism.[150][151]

A 2013 trans-genome study carried out by 30 geneticists from 13 universities and academies in 9 countries, assembling the largest data set available to date for the assessment of Ashkenazi Jewish genetic origins, found no evidence of Khazar origin among Ashkenazi Jews. "Thus, analysis of Ashkenazi Jews together with a large sample from the region of the Khazar Khaganate corroborates the earlier results that Ashkenazi Jews derive their ancestry primarily from populations of the Middle East and Europe, that they possess considerable shared ancestry with other Jewish populations, and that there is no indication of a significant genetic contribution either from within or from north of the Caucasus region", the authors concluded.[152]

There are many references to Ashkenazi Jews in the literature of medical and population genetics. Indeed, much awareness of "Ashkenazi Jews" as an ethnic group or category stems from the large number of genetic studies of disease, including many that are well reported in the media, that have been conducted among Jews. Jewish populations have been studied more thoroughly than most other human populations, for a variety of reasons.

The result is a form of ascertainment bias. This has sometimes created an impression that Jews are more susceptible to genetic disease than other populations.[153] Healthcare professionals are often taught to consider those of Ashkenazi descent to be at increased risk for colon cancer.[154]

Genetic counseling and genetic testing are often undertaken by couples where both partners are of Ashkenazi ancestry. Some organizations, most notably Dor Yeshorim, organize screening programs to prevent homozygosity for the genes that cause related diseases.[155][156]


Could gene therapy become biotech's growth driver in 2017 …

December 7th, 2016 2:41 pm

Despite bouncing off a 2-year low, biotech is still an unpopular sector and investors are rightfully concerned about its near-term prospects. Recent drug failures, growing pricing pressure and the potential impact of biosimilars all contribute to the negative sentiment, but the main problem is the lack of growth drivers for the remainder of 2016 (and potentially 2017).

The biotech industry relies on innovation cycles to create new revenue sources. This was the case in the 2013-2014 biotech bull market, which was driven by a wave of medical breakthroughs (PD-1, HCV, CAR/TCR, oral MS drugs, CF etc.). These waves typically involve new therapeutic approaches coupled with disruptive technologies as their enablers.

In oncology, for example, the understanding that cancer is driven by aberrant signaling coupled with advances in medicinal chemistry and antibody engineering led to the development of kinase inhibitors and monoclonal antibodies as blockers of signaling. A decade later, insights around cancer immunology gave rise to the immuno-oncology field and PD-1 inhibitors in particular, which are expected to become the biggest oncology franchise ever.

Gene therapy ticks all the boxes

While there are several hot areas in biotech, such as gene editing and the microbiome, most are still early and their applicability is unclear. Gene therapy, on the other hand, is more mature and de-risked, with dozens of clinical studies and the potential to treat (and perhaps cure) a wide range of diseases where treatment is inadequate or non-existent. The commercial upside from these programs is huge and should expand as additional indications are pursued.

As I previously discussed, the past two years saw a surge in the number of clinical-stage gene therapies, some of which have already generated impressive efficacy across multiple indications. This makes gene therapy the only truly disruptive field that is mature enough not only from a technological but also from a clinical standpoint. Importantly, most studies are conducted by companies according to industry and regulatory standards, in contrast to historical gene therapy studies that were run by academic groups.

To me, the striking thing about the results is the breadth of technologies, indications and modes of administration evaluated to date. This versatility is very important for the future of gene therapy, as it reduces overall development risk and increases the likelihood of success by allowing companies to tailor the right product for each indication. Parameters include mode of administration (local vs. systemic vs. ex vivo), tropism for the target tissue (eye, bone marrow, liver etc.), immunogenicity and onset of activity.
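Purely as an illustration of how these parameters line up across the programs discussed below (the data structure is my own, and the entries paraphrase this post rather than any company's filings):

```python
# Illustrative only: the gene-therapy program parameters named above,
# filled in with paraphrased details from this post.
from dataclasses import dataclass

@dataclass
class GeneTherapyProgram:
    company: str
    program: str
    administration: str  # local vs. systemic vs. ex vivo
    target_tissue: str   # tropism
    status: str

programs = [
    GeneTherapyProgram("Spark", "SPK-RPE65", "local", "eye (retina)", "P3 data; BLA expected"),
    GeneTherapyProgram("Spark", "SPK-9001", "systemic", "liver", "early clinical data"),
    GeneTherapyProgram("Bluebird", "Lenti-globin", "ex vivo", "bone marrow", "clinical; protocol optimization"),
    GeneTherapyProgram("Avexis", "AVXS-101", "systemic (IV)", "CNS motor neurons", "open-label P1 data"),
]

for p in programs:
    print(f"{p.company:9} {p.program:12} {p.administration:14} -> {p.target_tissue}")
```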

Building a diversified gene therapy basket

Given the early development stage and large number of technologies, I prefer to own a basket of gene therapy stocks with a focus on the more clinically validated ones: Spark (ONCE), Bluebird (BLUE) and Avexis (AVXS).

Bluebird and Spark are the furthest-along (and also the largest by market cap) gene therapy companies and should be the basis for any gene therapy portfolio. With two completely different technologies, the two companies have strong clinical proof-of-concept for their respective lead programs.

Avexis is less advanced, without a clinically validated product, but recent data for its lead program are too promising to ignore.

Spark: Clinical validation for retinal and liver indications

Spark's lead program (SPK-RPE65) will probably become the first gene therapy to get FDA approval. In October, the company reported strong P3 data in rare genetic retinal conditions caused by RPE65 mutations, the first randomized and statistically significant data set for a gene therapy. The company is expected to complete its BLA submission later in 2016, which should lead to FDA approval in 2017. Spark's second ophthalmology program, for choroideremia, is in P1 with efficacy data expected later in 2016.

Earlier this month, Spark released an encouraging update for its Hemophilia B program, SPK-9001 (partnered with Pfizer [PFE]). A single administration of SPK-9001 led to sustained and clinically meaningful production of Factor IX, the clotting factor that is dysfunctional in Hemophilia B patients. All four treated patients experienced a clinically significant increase in Factor IX activity, from <2% to 26%-41% (12% is predicted to be sufficient to minimize the incidence of bleeding events). Due to the limited follow-up (under 6 months), durability is still an open question.

Spark intends to advance its wholly owned Hemophilia A program (SPK-8011) into the clinic later in 2016, with initial data expected in H1:2017. Results in the Hemophilia B program should be viewed as a positive read-through, but Hemophilia A still presents certain technical challenges (e.g., the missing protein is severalfold larger), which required Spark to use a different vector. Hemophilia A represents a $5B opportunity, compared to $1B for Hemophilia B.

Bluebird

Despite being one of the worst biotech performers, Bluebird remains the largest and most visible gene therapy company. In contrast to most gene therapy companies, Bluebird treats patients' cells ex vivo (outside of the body) in a process that resembles stem cell transplant or adoptive cell transfer (CAR, TCR). Progenitor cells are collected from the patient, a genetic modification is integrated into their genome, and the cells are then infused back to repopulate the bone marrow. This enables Bluebird to go after hematologic diseases like beta thalassemia and sickle-cell disease (SCD), where target cells are constantly dividing.

Sentiment around Bluebird's lead program, Lenti-globin, plummeted last year after a series of disappointing results in a subset of beta-thal patients and preliminary data in SCD, which represents the more important commercial opportunity. Particularly in SCD patients, post-treatment hemoglobin levels were relatively low, and although some increase has been noted with time, it is still unclear what the maximal effect will be. Market reaction was brutal, sending shares down 75% in just over a year.

The next update for Lenti-globin is expected at ASH in December. Despite the disappointing efficacy observed in SCD and beta-thal, I am cautiously optimistic about Bluebird's efforts to optimize treatment protocols and regimens. These include specific conditioning regimens and ex-vivo treatment of cells that may improve transduction rates and hemoglobin production in patients. Some of these modifications are already being implemented in newly recruited patients, and hopefully longer follow-up will lead to higher hemoglobin levels in already-reported patients.

The only clinical update so far in 2016 was for Lenti-D in C-ALD, a rare neurological disease that affects young boys. Results demonstrated that of 17 patients treated to date (median follow-up of 16 months), all remain alive and free of major functional deterioration (defined as major functional disabilities, MFDs). The primary endpoint, defined as no MFD at 2 years, was reached for 3/3 patients with sufficient follow-up, and assuming the trend continues, Bluebird may be in a position to file for approval in H2:2017.

Lenti-D's commercial opportunity is limited (200 patients diagnosed each year in developed countries), so investors understandably focus on Lenti-globin, which is being developed for beta thal (~20k patients in developed countries) and SCD (~160k patients).

Bluebird is expected to end 2016 with ~$650M in cash. Current market cap is $1.7B.

Avexis

Avexis is developing AVXS-101 for spinal muscular atrophy type 1 (SMA1), a rapidly progressing and fatal neuromuscular disease. SMA1 is characterized by rapid deterioration in motor and neuronal function, with 50% of patients experiencing death or permanent ventilation by their first birthday. Most patients die from respiratory failure by the age of two. SMA type 2 and type 3 are also caused by SMN1 mutations and are characterized by later onset and a milder disease burden (though the unmet need is still significant in these indications). The US prevalence of SMA is about 10,000 patients, 600 of whom have SMA1.

In contrast to Bluebird and Spark, Avexis does not have conclusive proof that its therapy leads to expression of the missing protein (SMN, encoded by SMN1) in the target tissue, nor does it have randomized clinical data, but the results generated to date are simply too provocative to ignore.

At the most recent update, Avexis presented data for 15 patients who received AVXS-101 in their first months of life; 3 patients were treated with a low dose and 12 with a high dose. Strikingly, none of the children experienced an event (defined as ventilation or death), including patients who reached 2 years of age. All 9 patients with sufficient follow-up reached the age of 13.6 months without an event, in contrast to historical data showing event-free survival of 25%. AVXS-101 also led to a dose-dependent increase in motor function with a quick onset, especially at the higher dose.
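As a rough back-of-envelope (my own, not from Avexis): if the true event-free rate at that age were the historical 25%, the chance of all 9 evaluable patients remaining event-free would be roughly one in 260,000. A toy calculation:

```python
# Back-of-envelope only: probability of observing 9/9 patients event-free
# if the true event-free survival were the historical 25%. This ignores
# patient selection, dosing differences, and follow-up censoring, so it
# overstates precision; it is meant only to show why the data stand out.
historical_efs = 0.25   # historical event-free survival at ~13.6 months
n_patients = 9          # patients with sufficient follow-up

p_all_event_free = historical_efs ** n_patients
print(f"P(9/9 event-free | EFS = 25%): {p_all_event_free:.1e}")  # ~3.8e-06
```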

As with any results from an open-label study without a control arm, these data should be interpreted with caution, as they need to be corroborated by large controlled studies (expected to start next year). Still, the data point to an overwhelming benefit in a very aggressive disease. One of the most exciting aspects of this program is that it is given systemically via IV administration, which implies the treatment reaches the neurons in the CNS. Avexis plans to start a trial in SMA2 in H2:16 using intrathecal delivery (directly into the spinal canal). This decision is surprising given the results with IV administration in SMA1 and the fact that the BBB immaturity hypothesis in babies is no longer considered relevant (see this review).

AVXS-101's main competitor is Biogen's (BIIB) and Ionis' (IONS) nusinersen, an antisense molecule that needs to be injected intrathecally 3-4 times a year. As both drugs generated encouraging clinical data in small non-randomized studies, it is hard to compare them; however, AVXS-101 has the obvious advantage of being a potentially one-time IV injection. Nusinersen is in P3, with topline data expected in mid-2017.

AVXS-101 is based on an AAV9 vector developed by REGENXBIO (RGNX), which licensed the technology to Avexis. Beyond the 5%-10% royalties REGENXBIO is eligible to receive, data for AVXS-101 bode well for the company's proprietary programs in MPS-I and MPS-II, two other rare diseases with neurological involvement where BBB penetration is crucial. These programs are also based on REGENXBIO's AAV9.

Beyond AVXS-101, REGENXBIO has an impressive partnered pipeline, which includes collaborations with Voyager (VYGR), Dimension (DMTX), Baxalta and Lysogene.

Portfolio updates: Immunogen, Marinus, Esperion

June was a rough month for three of my holdings. Immunogen (IMGN) had a disappointing data set at ASCO, Marinus (MRNS) reported a P3 failure in epilepsy, and most recently, Esperion was dealt a regulatory blow by the FDA that may push development timelines back by several years. I am selling Immunogen and Marinus due to the lack of near-term catalysts, although long term their respective drugs could still prove valuable. I decided to keep Esperion, as I still find ETC-1002 very attractive and hope that PCSK9 CVOT data will soften the FDA's concerns about LDL-C reduction as an approvable endpoint.

Three additional companies with important binary readouts in the coming months are Array Biopharma (ARRY), SAGE (SAGE) and Aurinia (AUPH). Array will have P3 data for selumetinib (partnered with AstraZeneca) in KRAS+ NSCLC. SAGE will report data from a randomized P2 in postpartum depression (PPD) following a promising single-arm data set. Aurinia will report results from the AURA study in lupus nephritis patients, where there is a strong rationale for using the company's drug (voclosporin) but limited direct clinical validation.

Portfolio holdings as of July 4, 2016



Welcome to The Visible Embryo

December 7th, 2016 2:41 pm

Dec 7, 2016

Low vitamin D in newborns increases risk of MS later Babies born with low levels of vitamin D may be more likely to develop multiple sclerosis (MS) later in life than babies with higher vitamin D levels.

Dec 6, 2016

Toddlers can tell when others hold 'false beliefs' A new study finds 2.5-year-old children can answer questions about people acting on 'false beliefs', an ability most researchers believe does not develop until age 4.

Dec 5, 2016

Protein that enables our brains and muscles to talk A huge colony of receptors must be correctly positioned and functioning on muscle cells in order to receive signals from our brains. Now a protein has been identified that helps anchor those receptors, ensuring receptor formation and function.

Dec 2, 2016

Tracking development of individual blood stem cells Harvard Stem Cell Institute (HSCI) researchers use a new cell-labeling technique to trace the development of adult blood cells back to the original stem cell in bone marrow, advancing our understanding of blood development and blood diseases.

Dec 1, 2016

Having last baby after 35? Mental sharpness increases A new study finds women have better brainpower after menopause if they had their last baby after 35, used hormonal contraceptives for more than 10 years, or began their menstrual cycle before turning 13. The women were tested for verbal memory, attention, concentration, and visual perception.

Nov 30, 2016

Mouse embryos put in suspended animation for weeks Inhibiting a molecular path lets mouse blastocysts survive for weeks in the lab. Researchers have found a way to pause the development of early mouse embryos for up to a month in the lab. The finding has potential implications for assisted reproduction, regenerative medicine, aging, and even cancers.

Nov 29, 2016

Tissue damage is key for a cell to reprogram Damaged cells will send signals to neighboring cells to reprogram them back to an embryonic state. This initiates tissue repair and could have implications for treating degenerative diseases.

Nov 28, 2016

'Princess Leia' brainwaves help store memories Every night while you sleep, electrical waves of brain activity circle around each side of your brain, tracing a pattern that, were it on the surface of your head, might look like the twin hair buns of Star Wars' Princess Leia.

Nov 25, 2016

Measuring the gaze between mom and autistic baby Mothers and children with autism spectrum disorder communicate through their gaze just as all parents do. However, a new tool measuring that gaze and its impact on an infant's neurologic development reveals more.

Nov 24, 2016

Lying face up pregnant could increase risk of stillbirth Researchers at the University of Auckland have found that pregnant women who lie on their backs in the third trimester may be increasing their risk for stillbirth.

Nov 23, 2016

Mom's rheumatoid arthritis linked to epilepsy in child A new study shows a link between mothers with rheumatoid arthritis and children with epilepsy. Rheumatoid arthritis (RA), an autoimmune disease, causes our own immune system to attack our joints. It differs from osteoarthritis, caused by wear and tear on the joints.

Nov 22, 2016

A protein that points cells in the right direction In animals, the stretching of skin tissue during the growth of an embryo requires the unique CDC-42 GTPase protein. It directs the movement of migrating cells.

Nov 18, 2016

Genes for speech may not be limited to humans Vocal communication in mice is affected by the same gene needed for human speech.

Nov 17, 2016

Insulin resistance reversed by removal of Gal3 protein By removing the protein galectin-3 (Gal3), a team of investigators were able to reverse diabetic insulin resistance and glucose intolerance in mice used as models of obesity and diabetes.

Nov 16, 2016

B12 deficiency can increase risk for type 2 diabetes B12 deficiency during pregnancy may predispose a baby to metabolic problems in adulthood, such as type 2 diabetes.

Nov 15, 2016

Non-invasive prenatal test at five weeks of pregnancy? The latest developments in prenatal technology may make it possible to test for genetic disorders one month into pregnancy.

Nov 14, 2016

Heart disease, leukemia links to dysfunctional nucleus In cells, the nucleus keeps DNA protected and intact within an enveloping membrane. But a new study reveals that this containment influences how genes are expressed.

Nov 11, 2016

Blood vessels control brain growth Blood vessels play a vital role in stem cell reproduction, enabling the brain to grow and develop in the womb, reveals new research in mice.

Nov 10, 2016

Antibody protects developing fetus from Zika virus The most devastating consequence of Zika virus is the development of microcephaly, an abnormally small head, in babies infected in utero. Now, research has identified a human antibody that, in pregnant mice, prevents Zika from infecting the fetus and damaging the placenta. It also protects adult mice from Zika disease.

Nov 9, 2016

Better treatments possible for child brain cancer More than 4,000 children and teens are diagnosed with brain cancer yearly, and it kills more children than any other cancer. Researchers targeted an aggressive pediatric brain tumor, CNS-PNET, using a zebrafish model and, in about 80% of cases, eliminated the tumor using existing drugs.

Nov 8, 2016

Autism linked to mutations in mitochondrial DNA A study of 903 affected children shows inherited and spontaneous mutations increase the risk of autism spectrum disorder (ASD). The children diagnosed with autism had greater numbers of harmful mutations in their mitochondrial DNA than other family members.

Nov 7, 2016

Mother's blood test may predict birth complications DLK1 protein found in the blood of pregnant women could be developed to test the health of babies and aid in decisions on early elective deliveries, according to a study led by Queen Mary University of London.

Nov 4, 2016

Essential mouse genes give insight into human disease About a third of all genes in mammals are essential to life. Now an international, multi-institutional team describes its discovery of which genes these are and what impact they have on human development and disease.

Nov 3, 2016

Newborns given dextrose gel avoid hypoglycaemia A single dose of dextrose gel, rubbed inside a newborn baby's mouth an hour after birth, can lower the risk for developing neonatal hypoglycaemia, according to a randomized study.

Nov 2, 2016

Mitochondria divide differently than once thought For the first time a study reveals how mitochondria, the power generators found in nearly all living cells, regularly divide and multiply.

Nov 1, 2016

Customizing vitamin D may benefit pregnant women Individualized vitamin D supplements help protect pregnant women from deficiency. Tailored doses may compensate for individual risk factors and even protect bones.

Oct 31, 2016

Antibody breaks leukemia's hold In mouse models and patient cells, anti-CD98 antibody disrupts interactions between leukemia cells and surrounding blood vessels, inhibiting cancer's spread.

Oct 28, 2016

Strong, steady forces needed for cell division Biologists studying cell division have long disagreed about how much force is needed to pull chromosomes apart to form two new cells, a question fundamental to how cells divide.

Oct 27, 2016

"Fixing" energy signals to treat mitochondrial disease Restoring cellular energy signals may offset mitochondrial diseases in humans. Using existing drugs to treat lab animals, researchers have set the stage for clinical trials.

Oct 26, 2016

How eggs get the wrong number of chromosomes Twenty-four hours before ovulation, human oocytes start to divide into what will become mature eggs. Ideally, eggs include a complete set of 23 chromosomes, but the process is prone to error, especially as women age.

Oct 25, 2016

Fatal preemie disease due to mitochondrial failure A life-threatening condition preventing gut development in premature infants may be triggered by a disruption in the way the body metabolizes energy from mitochondria.

Oct 24, 2016

Zika virus spread timed to brain growth spurts Scientists from the Florida campus of The Scripps Research Institute (TSRI) are able to pinpoint the timing of the most aggressive Zika attacks on newborn mouse brains, information that could help guide treatments.

Oct 21, 2016

Short jump from single-cell to multi-cell animals Our single-celled ancestors lived about 800 million years ago. Now, new evidence suggests their leap to multi-celled organisms was not quite as mysterious as once believed.

Oct 20, 2016

Brainstem and visual cortex control our eyes A mouse study is illuminating how our brain quickly adapts and functions. Tracking mouse eye movements, researchers made an unexpected discovery: the part of the brain known to process sensory information, the visual cortex, is also key to spontaneous eye movements.

Oct 19, 2016

Embryos make sex cells in their first two weeks Producing the next generation of life is already occurring within an embryo's own first weeks. Human primordial germ cells, which give rise to sperm or egg cells, are present in embryos by their second week of development.

Oct 18, 2016

Mom's BMI may affect biological age of her baby Higher Body Mass Index (BMI) in a mother before pregnancy is associated with shorter telomere length, a biomarker for biological age, in her newborn. A baby's short telomere length means the baby's cells have shorter lifespans.

Oct 17, 2016

Two distinct cell types can initiate Crohn's disease A new discovery could lead to personalized treatment for the debilitating gastrointestinal disorder called Crohn's. There appear to be two distinct disease types: one expressed in normal colon tissue, the other in the small intestine. Detecting which type a patient has will assist in her treatment and in her desire to get pregnant or carry a pregnancy.

Oct 14, 2016

Potential treatment of newborns via amniotic fluid? A breakthrough study offers promise for therapeutic management of congenital diseases in utero using designer gene sequences.

Oct 13, 2016

Infants use their prefrontal cortex to learn Researchers have always thought the prefrontal cortex (PFC) the brain region involved in some of the highest forms of cognition and reasoning was too underdeveloped in young children, especially infants, to participate in complex cognitive tasks. A new study suggests otherwise.

Oct 12, 2016

'Amplifier' helps make connections in the fetal brain A special amplifier makes neural signals stronger in babies, then stops once neural connections are fully strengthened.

Oct 11, 2016

Neurons migrate throughout infancy A previously unrecognized stage of brain development has been found to continue long after birth. Neurons in the cerebral cortex, the outer layer of the brain, continue to migrate into the cortex throughout infancy.

Oct 10, 2016

Calcium triggers stem cells to generate bone Calcium is the main constituent of bone and is now found to play a major role in regulating its growth. This new finding may affect treatment of conditions caused by too much collagen, such as fibrosis, which thickens and scars connective tissue, as well as in diseases of too little bone growth, such as Treacher Collins Syndrome (TCS).

Oct 7, 2016

How evolution has given us 5 fingers Have you ever wondered why our hands have exactly five fingers? Dr. Marie Kmita's team has. The researchers at the Institut de recherches cliniques de Montréal and Université de Montréal have uncovered a part of this mystery.

Oct 6, 2016

New links between genes and bigger brains A number of new links between genes and brain size have been identified by United Kingdom scientists, hopefully opening up whole new avenues for understanding brain development, including diseases like dementia.

Oct 5, 2016

Progesterone in contraceptives promotes flu healing Over 100 million women are on hormonal contraceptives. All contain some form of progesterone, either alone or in combination with estrogen. Researchers found treatment with progesterone protects female mice against influenza by reducing inflammation and improving pulmonary function.

Oct 4, 2016

ZIKA in Men? "No procreation for 6 months" The Zika virus has largely spread via mosquitoes, but it can also be spread through sexual intercourse. Men who may have been exposed should wait at least six months before trying to conceive a child with a partner, regardless of whether they ever had any symptoms, say US federal health officials.

Oct 3, 2016

Genetically modified baby boy - with 3 parents New, cheap and accurate DNA-editing techniques called CRISPR-Cas9 and SNT, or single nucleic targeting, are allowing for gene modification in humans. It is not science fiction anymore. In a first, a baby boy with modified DNA has been born in Mexico to overcome a mitochondrial disease that claimed the lives of his two earlier siblings.

Sep 30, 2016

Meet the world's largest bony fish For the first time, the genome of the ocean sunfish (Mola mola), the world's largest bony fish, has been sequenced. Researchers involved in the Genome 10K (G10K) project want to collect 10,000 nonmammalian vertebrate genomes for comparative analyses. The ocean sunfish genome has now revealed several altered genes that may explain its fast growth, large size and unusual shape.

Sep 29, 2016

Genetic variations that cause skull-fusion disorders During the first year of life, the human brain doubles in size, continuing to grow through adolescence. But sometimes, the loosely connected plates of a baby's skull fuse too early, a disorder known as craniosynostosis. It can also produce facial and skull deformities, potentially damaging a young brain.

Sep 28, 2016

Heart defect genes both inside and outside the heart Congenital heart defects (CHDs) are a leading cause of birth defect-related deaths. How genetic alterations cause such defects is complicated by the fact that many of the critical genes for CHD are unknown. Those that are known often contribute only small increases in CHD risk.

Sep 27, 2016

Cesarean baby 15% more likely to become obese Cesarean-born babies are 15% more likely to become obese as children than individuals born by vaginal birth, and 64% more likely to be obese than their siblings born by vaginal birth. The increased risk may persist through adulthood. The data come from a large study from Harvard T.H. Chan School of Public Health.

Sep 26, 2016

Male primes female for reproduction - but at a cost Research has discovered that male worms, through an invisible chemical "essence," prime female worms for reproduction, but with the unfortunate side effect of also hastening their aging. The results might lead to human therapies to delay puberty or prolong fertility.

Sep 23, 2016

Why Tardigrades Are So Indestructible Tardigrades, or water bears, are microscopic animals capable of withstanding some of the most severe environmental conditions, even being "dead" for 30 years and then being restored to life! Research from Japan has now created the most accurate picture yet of the tardigrade genome and why it matters to humans.

Sep 22, 2016

Mouse bone marrow cells reduce miscarriage? Progenitor cells are like stem cells, but differentiated by a first step into one specific cell type. Research now finds the progenitor cells in bone marrow that replace worn-out cells may help placental blood vessel growth and reduce abnormal placental development, as in pre-eclampsia.


Dental caries – Wikipedia

December 7th, 2016 2:41 pm

Dental caries, also known as tooth decay, cavities, or caries, is a breakdown of teeth due to activities of bacteria.[1] The cavities may be a number of different colors from yellow to black.[2] Symptoms may include pain and difficulty with eating.[2][3] Complications may include inflammation of the tissue around the tooth, tooth loss, and infection or abscess formation.[2][4]

The cause of caries is bacterial breakdown of the hard tissues of the teeth (enamel, dentin and cementum). This occurs due to acid made from food debris or sugar on the tooth surface. Simple sugars in food are these bacteria's primary energy source, and thus a diet high in simple sugar is a risk factor. If mineral breakdown is greater than build-up from sources such as saliva, caries results. Risk factors include conditions that result in less saliva, such as diabetes mellitus, Sjögren's syndrome and some medications. Medications that decrease saliva production include antihistamines and antidepressants.[5] Caries is also associated with poverty, poor cleaning of the mouth, and receding gums resulting in exposure of the roots of the teeth.[1][6]

Prevention includes regular cleaning of the teeth, a diet low in sugar, and small amounts of fluoride.[3][5] Brushing the teeth twice per day and flossing between the teeth once a day is recommended by many.[1][5] Fluoride may be from water, salt or toothpaste among other sources.[3] Treating a mother's dental caries may decrease the risk in her children by decreasing the numbers of certain bacteria.[5] Screening can result in earlier detection.[1] Depending on the extent of destruction, various treatments can be used to restore the tooth to proper function, or the tooth may be removed.[1] There is no known method to grow back large amounts of tooth.[7] The availability of treatment is often poor in the developing world.[3] Paracetamol (acetaminophen) or ibuprofen may be taken for pain.[1]

Worldwide, approximately 2.43 billion people (36% of the population) have dental caries in their permanent teeth.[8] The World Health Organization estimates that nearly all adults have dental caries at some point in time.[3] In baby teeth it affects about 620 million people or 9% of the population.[8] They have become more common in both children and adults in recent years.[9] The disease is most common in the developed world due to greater simple sugar consumption and less common in the developing world.[1] Caries is Latin for "rottenness".[4]

A person experiencing caries may not be aware of the disease.[10] The earliest sign of a new carious lesion is the appearance of a chalky white spot on the surface of the tooth, indicating an area of demineralization of enamel. This is referred to as a white spot lesion, an incipient carious lesion or a "microcavity".[11] As the lesion continues to demineralize, it can turn brown but will eventually turn into a cavitation ("cavity"). Before the cavity forms, the process is reversible, but once a cavity forms, the lost tooth structure cannot be regenerated. A lesion that appears dark brown and shiny suggests dental caries were once present but the demineralization process has stopped, leaving a stain. Active decay is lighter in color and dull in appearance.[12]

As the enamel and dentin are destroyed, the cavity becomes more noticeable. The affected areas of the tooth change color and become soft to the touch. Once the decay passes through enamel, the dentinal tubules, which have passages to the nerve of the tooth, become exposed, resulting in pain that can be transient, temporarily worsening with exposure to heat, cold, or sweet foods and drinks.[13] A tooth weakened by extensive internal decay can sometimes suddenly fracture under normal chewing forces. When the decay has progressed enough to allow the bacteria to overwhelm the pulp tissue in the center of the tooth, a toothache can result and the pain will become more constant. Death of the pulp tissue and infection are common consequences. The tooth will no longer be sensitive to hot or cold, but can be very tender to pressure.

Dental caries can also cause bad breath and foul tastes.[14] In highly progressed cases, infection can spread from the tooth to the surrounding soft tissues. Complications such as cavernous sinus thrombosis and Ludwig angina can be life-threatening.[15][16][17]

Four things are required for caries formation: a tooth surface (enamel or dentin), caries-causing bacteria, fermentable carbohydrates (such as sucrose), and time.[18] This involves adherence of food to the teeth and acid creation by the bacteria that makes up the dental plaque.[19] However, these four criteria are not always enough to cause the disease and a sheltered environment promoting development of a cariogenic biofilm is required. The caries disease process does not have an inevitable outcome, and different individuals will be susceptible to different degrees depending on the shape of their teeth, oral hygiene habits, and the buffering capacity of their saliva. Dental caries can occur on any surface of a tooth that is exposed to the oral cavity, but not the structures that are retained within the bone.[20]

Tooth decay is caused by biofilm (dental plaque) lying on the teeth and maturing to become cariogenic (causing decay). Certain bacteria in the biofilm produce acid in the presence of fermentable carbohydrates such as sucrose, fructose, and glucose.[21][22]

Caries occur more often in people from the lower end of the socioeconomic scale than in people from the upper end.[23]

The most common bacteria associated with dental cavities are the mutans streptococci, most prominently Streptococcus mutans and Streptococcus sobrinus, and lactobacilli. However, cariogenic bacteria (the ones that can cause the disease) are present in dental plaque, but they are usually in too low concentrations to cause problems unless there is a shift in the balance.[24] This shift is driven by local environmental change, such as frequent sugar consumption or a lack of biofilm removal (toothbrushing).[25] If left untreated, the disease can lead to pain, tooth loss and infection.[26]

The mouth contains a wide variety of oral bacteria, but only a few specific species of bacteria are believed to cause dental caries: Streptococcus mutans and Lactobacillus species among them. These organisms can produce high levels of lactic acid following fermentation of dietary sugars, and are resistant to the adverse effects of low pH, properties essential for cariogenic bacteria.[21] As the cementum of root surfaces is more easily demineralized than enamel surfaces, a wider variety of bacteria can cause root caries, including Lactobacillus acidophilus, Actinomyces spp., Nocardia spp., and Streptococcus mutans. Bacteria collect around the teeth and gums in a sticky, creamy-coloured mass called plaque, which serves as a biofilm. Some sites collect plaque more commonly than others, for example sites with a low rate of salivary flow (molar fissures). Grooves on the occlusal surfaces of molar and premolar teeth provide microscopic retention sites for plaque bacteria, as do the interproximal sites. Plaque may also collect above or below the gingiva, where it is referred to as supra- or sub-gingival plaque, respectively.

These bacterial strains, most notably S. mutans, can be acquired by a child from a caretaker's kiss or through being fed premasticated food.[27]

Bacteria in a person's mouth convert glucose, fructose, and most commonly sucrose (table sugar) into acids such as lactic acid through a glycolytic process called fermentation.[22] If left in contact with the tooth, these acids may cause demineralization, which is the dissolution of its mineral content. The process is dynamic, however, as remineralization can also occur if the acid is neutralized by saliva or mouthwash. Fluoride toothpaste or dental varnish may aid remineralization.[28] If demineralization continues over time, enough mineral content may be lost so that the soft organic material left behind disintegrates, forming a cavity or hole. The impact such sugars have on the progress of dental caries is called cariogenicity. Sucrose, although a bound glucose and fructose unit, is in fact more cariogenic than a mixture of equal parts of glucose and fructose. This is due to the bacteria utilising the energy in the saccharide bond between the glucose and fructose subunits. S. mutans adheres to the biofilm on the tooth by converting sucrose into an extremely adhesive substance called dextran polysaccharide by the enzyme dextransucrase.[29]
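In textbook form (a standard summary, not taken verbatim from this article's sources), the two chemical steps described above are the fermentation of a sugar to lactic acid and the acid dissolution of the hydroxyapatite mineral:

```latex
% Fermentation of glucose to lactic acid by plaque bacteria:
\mathrm{C_6H_{12}O_6 \longrightarrow 2\,CH_3CHOHCOOH}
% Acid dissolution of enamel hydroxyapatite:
\mathrm{Ca_{10}(PO_4)_6(OH)_2 + 8\,H^+ \longrightarrow 10\,Ca^{2+} + 6\,HPO_4^{2-} + 2\,H_2O}
```

Remineralization runs the second reaction in reverse (with fluoride present, a more acid-resistant fluorapatite can form), which is why neutralizing the acid tips the balance back toward repair.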

The frequency with which teeth are exposed to cariogenic (acidic) environments affects the likelihood of caries development.[30] After meals or snacks, the bacteria in the mouth metabolize sugar, resulting in an acidic by-product that decreases pH. As time progresses, the pH returns to normal due to the buffering capacity of saliva and the dissolved mineral content of tooth surfaces. During every exposure to the acidic environment, portions of the inorganic mineral content at the surface of teeth dissolve and can remain dissolved for two hours.[31] Since teeth are vulnerable during these acidic periods, the development of dental caries relies heavily on the frequency of acid exposure.
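To illustrate this frequency dependence, here is a toy model (my own construction, not from the literature cited here): plaque pH is assumed to drop to an acidic level after each sugar exposure and recover exponentially through salivary buffering, and we tally the daily minutes spent below the critical pH of about 5.5 at which enamel begins to demineralize (mentioned later in this article). All constants are assumptions chosen for illustration.

```python
# Toy model of acid-exposure time vs. snacking frequency. The recovery
# curve and all constants are assumed, not measured.
import math

CRITICAL_PH = 5.5                # enamel demineralization threshold
RESTING_PH, DROP_PH = 7.0, 4.5   # assumed resting and post-sugar plaque pH
RECOVERY_MIN = 30.0              # assumed saliva-buffering time constant (minutes)

def minutes_below_critical(exposures_per_day: int) -> float:
    """Total minutes per day below the critical pH, one recovery per exposure."""
    # pH(t) = RESTING - (RESTING - DROP) * exp(-t / RECOVERY_MIN);
    # solve for the time t at which pH climbs back above CRITICAL_PH.
    t_cross = -RECOVERY_MIN * math.log(
        (RESTING_PH - CRITICAL_PH) / (RESTING_PH - DROP_PH)
    )
    return exposures_per_day * t_cross

for n in (3, 6, 10):  # meals/snacks per day
    print(f"{n:2d} sugar exposures/day -> {minutes_below_critical(n):5.1f} min below pH 5.5")
```

Because each exposure buys a roughly fixed acidic interval (about 15 minutes under these assumptions), total demineralization time scales with the number of exposures rather than the amount of sugar per exposure, which is the point the paragraph above makes.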

The carious process can begin within days of a tooth's erupting into the mouth if the diet is sufficiently rich in suitable carbohydrates. Evidence suggests that the introduction of fluoride treatments has slowed the process.[32] Proximal caries take an average of four years to pass through enamel in permanent teeth. Because the cementum enveloping the root surface is not nearly as durable as the enamel encasing the crown, root caries tends to progress much more rapidly than decay on other surfaces. The progression and loss of mineralization on the root surface is 2.5 times faster than caries in enamel. In very severe cases where oral hygiene is very poor and where the diet is very rich in fermentable carbohydrates, caries may cause cavities within months of tooth eruption. This can occur, for example, when children continuously drink sugary drinks from baby bottles (see later discussion).

There are certain diseases and disorders affecting teeth that may leave an individual at a greater risk for cavities.

Molar incisor hypomineralization, which is increasing in prevalence,[33] is caused by systemic factors such as high levels of dioxins or polychlorinated biphenyls (PCBs) in the mother's milk, premature birth and oxygen deprivation at birth, and certain disorders during the child's first 3 years, such as mumps, diphtheria, scarlet fever, measles, hypoparathyroidism, malnutrition, malabsorption, hypovitaminosis D, chronic respiratory diseases, or undiagnosed and untreated coeliac disease, which usually presents with mild or absent gastrointestinal symptoms.[33][34][35][36][37][38]

Amelogenesis imperfecta, which occurs in between 1 in 718 and 1 in 14,000 individuals, is a disease in which the enamel does not fully form or forms in insufficient amounts and can fall off a tooth.[39] In both cases, teeth may be left more vulnerable to decay because the enamel is not able to protect the tooth.[40]

In most people, disorders or diseases affecting teeth are not the primary cause of dental caries. Approximately 96% of tooth enamel is composed of minerals.[41] These minerals, especially hydroxyapatite, will become soluble when exposed to acidic environments. Enamel begins to demineralize at a pH of 5.5.[42] Dentin and cementum are more susceptible to caries than enamel because they have lower mineral content.[43] Thus, when root surfaces of teeth are exposed from gingival recession or periodontal disease, caries can develop more readily. Even in a healthy oral environment, however, the tooth is susceptible to dental caries.

The evidence for linking malocclusion and/or crowding to dental caries is weak;[44][45] however, the anatomy of teeth may affect the likelihood of caries formation. Where the deep developmental grooves of teeth are more numerous and exaggerated, pit and fissure caries is more likely to develop (see next section). Also, caries is more likely to develop when food is trapped between teeth.

Reduced salivary flow rate is associated with increased caries since the buffering capability of saliva is not present to counterbalance the acidic environment created by certain foods. As a result, medical conditions that reduce the amount of saliva produced by salivary glands, in particular the submandibular gland and parotid gland, are likely to lead to dry mouth and thus to widespread tooth decay. Examples include Sjögren's syndrome, diabetes mellitus, diabetes insipidus, and sarcoidosis.[46] Medications, such as antihistamines and antidepressants, can also impair salivary flow. Stimulants, most notoriously methylamphetamine, also occlude the flow of saliva to an extreme degree ("meth mouth"). Tetrahydrocannabinol (THC), the active chemical substance in cannabis, also causes a nearly complete occlusion of salivation, known in colloquial terms as "cotton mouth". Moreover, 63% of the most commonly prescribed medications in the United States list dry mouth as a known side-effect.[46] Radiation therapy of the head and neck may also damage the cells in salivary glands, somewhat increasing the likelihood of caries formation.[47][48]

Susceptibility to caries can be related to altered metabolism in the tooth, in particular to fluid flow in the dentin. Experiments on rats have shown that a high-sucrose, cariogenic diet "significantly suppresses the rate of fluid motion" in dentin.[49]

The use of tobacco may also increase the risk for caries formation. Some brands of smokeless tobacco contain high sugar content, increasing susceptibility to caries.[50] Tobacco use is a significant risk factor for periodontal disease, which can cause the gingiva to recede.[51] As the gingiva loses attachment to the teeth due to gingival recession, the root surface becomes more visible in the mouth. If this occurs, root caries is a concern since the cementum covering the roots of teeth is more easily demineralized by acids than enamel.[52] Currently, there is not enough evidence to support a causal relationship between smoking and coronal caries, but evidence does suggest a relationship between smoking and root-surface caries.[53] Exposure of children to secondhand tobacco smoke is associated with tooth decay.[54]

Intrauterine and neonatal lead exposure promote tooth decay.[55][56][57][58][59][60][61] Besides lead, ions with an electrical charge and ionic radius similar to those of bivalent calcium,[62] such as cadmium, mimic the calcium ion, and therefore exposure to them may promote tooth decay.[63]

Poverty is also a significant social determinant for oral health.[64] Dental caries have been linked with lower socio-economic status and can be considered a disease of poverty.[65]

Forms are available for caries risk assessment when treating dental cases; one such system uses the evidence-based Caries Management by Risk Assessment (CAMBRA) approach.[66] It is still unknown if the identification of high-risk individuals can lead to more effective long-term patient management that prevents caries initiation and arrests or reverses the progression of lesions.[67]

Saliva also contains iodine and epidermal growth factor (EGF). EGF is effective in promoting cellular proliferation, differentiation and survival.[68] Salivary EGF, which appears also to be regulated by dietary inorganic iodine, plays an important physiological role in the maintenance of oral (and gastro-oesophageal) tissue integrity, while iodine itself is effective in the prevention of dental caries and in oral health.[69]

Teeth are bathed in saliva and have a coating of bacteria on them (biofilm) that continually forms. The minerals in the hard tissues of the teeth (enamel, dentin and cementum) are constantly undergoing processes of demineralization and remineralisation. Dental caries results when the demineralization rate is faster than the remineralisation and there is net mineral loss. This happens when there is an ecologic shift within the dental biofilm, from a balanced population of micro-organisms to a population that produce acids and can survive in an acid environment.[70]

Enamel is a highly mineralized acellular tissue, and caries act upon it through a chemical process brought on by the acidic environment produced by bacteria. As the bacteria consume the sugar and use it for their own energy, they produce lactic acid. The effects of this process include the demineralization of crystals in the enamel, caused by acids, over time until the bacteria physically penetrate the dentin. Enamel rods, which are the basic unit of the enamel structure, run perpendicularly from the surface of the tooth to the dentin. Since demineralization of enamel by caries, in general, follows the direction of the enamel rods, the different triangular patterns between pit and fissure and smooth-surface caries develop in the enamel because the orientation of enamel rods are different in the two areas of the tooth.[71]

As the enamel loses minerals and dental caries progresses, the enamel develops several distinct zones, visible under a light microscope. From the deepest layer of the enamel to the enamel surface, the identified areas are: the translucent zone, the dark zone, the body of the lesion, and the surface zone.[72] The translucent zone is the first visible sign of caries and coincides with a one to two percent loss of minerals.[73] A slight remineralization of enamel occurs in the dark zone, which serves as an example of how the development of dental caries is an active process with alternating changes.[74] The area of greatest demineralization and destruction is in the body of the lesion itself. The surface zone remains relatively mineralized and is present until the loss of tooth structure results in a cavitation.

Unlike enamel, the dentin reacts to the progression of dental caries. After tooth formation, the ameloblasts, which produce enamel, are destroyed once enamel formation is complete and thus cannot later regenerate enamel after its destruction. On the other hand, dentin is produced continuously throughout life by odontoblasts, which reside at the border between the pulp and dentin. Since odontoblasts are present, a stimulus, such as caries, can trigger a biologic response. These defense mechanisms include the formation of sclerotic and tertiary dentin.[75]

In dentin from the deepest layer to the enamel, the distinct areas affected by caries are the advancing front, the zone of bacterial penetration, and the zone of destruction.[71] The advancing front represents a zone of demineralised dentin due to acid and has no bacteria present. The zones of bacterial penetration and destruction are the locations of invading bacteria and ultimately the decomposition of dentin. The zone of destruction has a more mixed bacterial population where proteolytic enzymes have destroyed the organic matrix. The innermost dentin caries has been reversibly attacked because the collagen matrix is not severely damaged, giving it potential for repair. The outer more superficial zone is highly infected, with proteolytic degradation of the collagen matrix, and as a result the dentin is irreversibly demineralised.[citation needed]

The structure of dentin is an arrangement of microscopic channels, called dentinal tubules, which radiate outward from the pulp chamber to the exterior cementum or enamel border.[76] The diameter of the dentinal tubules is largest near the pulp (about 2.5 μm) and smallest (about 900 nm) at the junction of dentin and enamel.[77] The carious process continues through the dentinal tubules, which are responsible for the triangular patterns resulting from the progression of caries deep into the tooth. The tubules also allow caries to progress faster.

In response, the fluid inside the tubules brings immunoglobulins from the immune system to fight the bacterial infection. At the same time, there is an increase of mineralization of the surrounding tubules.[78] This results in a constriction of the tubules, which is an attempt to slow the bacterial progression. In addition, as the acid from the bacteria demineralizes the hydroxyapatite crystals, calcium and phosphorus are released, allowing for the precipitation of more crystals which fall deeper into the dentinal tubule. These crystals form a barrier and slow the advancement of caries. After these protective responses, the dentin is considered sclerotic.

According to hydrodynamic theory, fluids within dentinal tubules are believed to be the mechanism by which pain receptors are triggered within the pulp of the tooth.[79] Since sclerotic dentin prevents the passage of such fluids, pain that would otherwise serve as a warning of the invading bacteria may not develop at first. Consequently, dental caries may progress for a long period of time without any sensitivity of the tooth, allowing for greater loss of tooth structure.[citation needed]

In response to dental caries, there may be production of more dentin toward the direction of the pulp. This new dentin is referred to as tertiary dentin.[77] Tertiary dentin is produced to protect the pulp for as long as possible from the advancing bacteria. As more tertiary dentin is produced, the size of the pulp decreases. This type of dentin has been subdivided according to the presence or absence of the original odontoblasts.[80] If the odontoblasts survive long enough to react to the dental caries, then the dentin produced is called "reactionary" dentin. If the odontoblasts are killed, the dentin produced is called "reparative" dentin.

In the case of reparative dentin, other cells are needed to assume the role of the destroyed odontoblasts. Growth factors, especially TGF-β,[80] are thought to initiate the production of reparative dentin by fibroblasts and mesenchymal cells of the pulp.[81] Reparative dentin is produced at an average rate of 1.5 μm/day, but this can increase to 3.5 μm/day. The resulting dentin contains irregularly shaped dentinal tubules that may not line up with existing dentinal tubules. This diminishes the ability of dental caries to progress within the dentinal tubules.

The incidence of cemental caries increases in older adults as gingival recession occurs from either trauma or periodontal disease. It is a chronic condition that forms a large, shallow lesion and slowly invades first the root's cementum and then dentin to cause a chronic infection of the pulp (see further discussion under classification by affected hard tissue). Because dental pain is a late finding, many lesions are not detected early, resulting in restorative challenges and increased tooth loss.[82]

The presentation of caries is highly variable. However, the risk factors and stages of development are similar. Initially it may appear as a small chalky area (smooth surface caries), which may eventually develop into a large cavitation. Sometimes caries may be directly visible. However other methods of detection such as X-rays are used for less visible areas of teeth and to judge the extent of destruction. Lasers for detecting caries allow detection without ionizing radiation and are now used for detection of interproximal decay (between the teeth). Disclosing solutions are also used during tooth restoration to minimize the chance of recurrence.[citation needed]

Primary diagnosis involves inspection of all visible tooth surfaces using a good light source, dental mirror and explorer. Dental radiographs (X-rays) may show dental caries before it is otherwise visible, in particular caries between the teeth. Large areas of dental caries are often apparent to the naked eye, but smaller lesions can be difficult to identify. Visual and tactile inspection along with radiographs are employed frequently among dentists, in particular to diagnose pit and fissure caries.[84] Early, uncavitated caries is often diagnosed by blowing air across the suspect surface, which removes moisture and changes the optical properties of the unmineralized enamel.

Some dental researchers have cautioned against the use of dental explorers to find caries,[85] in particular sharp ended explorers. In cases where a small area of tooth has begun demineralizing but has not yet cavitated, the pressure from the dental explorer could cause a cavity. Since the carious process is reversible before a cavity is present, it may be possible to arrest the caries with fluoride and remineralize the tooth surface. When a cavity is present, a restoration will be needed to replace the lost tooth structure.

At times, pit and fissure caries may be difficult to detect. Bacteria can penetrate the enamel to reach dentin, but then the outer surface may remineralize, especially if fluoride is present.[86] These caries, sometimes referred to as "hidden caries", will still be visible on X-ray radiographs, but visual examination of the tooth would show the enamel intact or minimally perforated.

The differential diagnosis for dental caries includes dental fluorosis and developmental defects of the tooth including hypomineralization of the tooth and hypoplasia of the tooth.[87]

The early carious lesion is characterized by demineralization of the tooth surface, altering the tooth's optical properties. Technology utilizing laser speckle image (LSI) techniques may provide a diagnostic aid to detect early carious lesions.[83]

Caries can be classified by location, etiology, rate of progression, and affected hard tissues.[88] These forms of classification can be used to characterize a particular case of tooth decay in order to more accurately represent the condition to others and also indicate the severity of tooth destruction. In some instances, caries is described in other ways that might indicate the cause. The G.V. Black classification is as follows:
Class I: caries affecting pit and fissure surfaces (occlusal, buccal, and lingual)
Class II: caries affecting the proximal surfaces of molars and premolars
Class III: caries affecting the proximal surfaces of anterior teeth, not involving the incisal edge
Class IV: caries affecting the proximal surfaces of anterior teeth, involving the incisal edge
Class V: caries affecting the cervical third of any tooth
Class VI: caries affecting cusp tips or incisal edges

Early childhood caries (ECC), also known as "baby bottle caries," "baby bottle tooth decay" or "bottle rot," is a pattern of decay found in young children with their deciduous (baby) teeth. The teeth most likely affected are the maxillary anterior teeth, but all teeth can be affected.[89] The name for this type of caries comes from the fact that the decay usually is a result of allowing children to fall asleep with sweetened liquids in their bottles or feeding children sweetened liquids multiple times during the day.[90]

Another pattern of decay is "rampant caries", which signifies advanced or severe decay on multiple surfaces of many teeth.[91] Rampant caries may be seen in individuals with xerostomia, poor oral hygiene, stimulant use (due to drug-induced dry mouth[92]), and/or large sugar intake. If rampant caries is a result of previous radiation to the head and neck, it may be described as radiation-induced caries. Problems can also be caused by the self-destruction of roots and whole tooth resorption when new teeth erupt or later from unknown causes.

Children at 6–12 months are at increased risk of developing dental caries. For other kids aged 12–18 months, dental caries develop on primary teeth and approximately twice yearly for permanent teeth.[93]

Temporal descriptions can be applied to caries to indicate the progression rate and previous history. "Acute" signifies a quickly developing condition, whereas "chronic" describes a condition that has taken an extended time to develop, in which thousands of meals and snacks, many causing some acid demineralization that is not remineralized, eventually result in cavities.

Recurrent caries, also described as secondary, are caries that appear at a location with a previous history of caries. This is frequently found on the margins of fillings and other dental restorations. On the other hand, incipient caries describes decay at a location that has not experienced previous decay. Arrested caries describes a lesion on a tooth that was previously demineralized but was remineralized before causing a cavitation. Fluoride treatment, as well as amorphous calcium phosphate, can help the recalcification of tooth enamel.

Depending on which hard tissues are affected, it is possible to describe caries as involving enamel, dentin, or cementum. Early in its development, caries may affect only enamel. Once the extent of decay reaches the deeper layer of dentin, the term "dentinal caries" is used. Since cementum is the hard tissue that covers the roots of teeth, it is not often affected by decay unless the roots of teeth are exposed to the mouth. Although the term "cementum caries" may be used to describe the decay on roots of teeth, very rarely does caries affect the cementum alone. Roots have a very thin layer of cementum over a large layer of dentin, and thus most caries affecting cementum also affects dentin.[citation needed]

Personal hygiene care consists of proper brushing and flossing daily. The purpose of oral hygiene is to minimize any etiologic agents of disease in the mouth. The primary focus of brushing and flossing is to remove and prevent the formation of plaque or dental biofilm. Plaque consists mostly of bacteria.[94] As the amount of bacterial plaque increases, the tooth is more vulnerable to dental caries when carbohydrates in the food are left on teeth after every meal or snack. A toothbrush can be used to remove plaque on accessible surfaces, but not between teeth or inside pits and fissures on chewing surfaces. When used correctly, dental floss removes plaque from areas that could otherwise develop proximal caries but only if the depth of sulcus has not been compromised. Other adjunct oral hygiene aids include interdental brushes, water picks, and mouthwashes.

However, oral hygiene is probably more effective at preventing gum disease (periodontal disease) than tooth decay. Food is forced inside pits and fissures under chewing pressure, leading to carbohydrate-fueled acid demineralization where the brush, fluoride toothpaste, and saliva have no access to remove trapped food, neutralize acid, or remineralize demineralized tooth, unlike on more accessible tooth surfaces. (Occlusal caries accounts for between 80 and 90% of caries in children (Weintraub, 2001).) Chewing fiber such as celery after eating forces saliva into trapped food to dilute any carbohydrate like sugar, neutralize acid, and remineralize demineralized tooth. The teeth at highest risk for carious lesions are the permanent first and second molars, due to the length of time they are in the oral cavity and the presence of complex surface anatomy.

Professional hygiene care consists of regular dental examinations and professional prophylaxis (cleaning). Sometimes, complete plaque removal is difficult, and a dentist or dental hygienist may be needed. Along with oral hygiene, radiographs may be taken at dental visits to detect possible dental caries development in high risk areas of the mouth (e.g. "bitewing" X-rays which visualize the crowns of the back teeth).

For dental health, frequency of sugar intake is more important than the amount of sugar consumed.[30] In the presence of sugar and other carbohydrates, bacteria in the mouth produce acids that can demineralize enamel, dentin, and cementum. The more frequently teeth are exposed to this environment, the more likely dental caries is to occur. Therefore, minimizing snacking is recommended, since snacking creates a continuous supply of nutrition for acid-creating bacteria in the mouth. Also, chewy and sticky foods (such as candy, cookies, potato chips, and crackers) tend to adhere to teeth longer. However, dried fruits such as raisins and fresh fruit such as apples and bananas disappear from the mouth quickly, and do not appear to be a risk factor.[95] For children, the American Dental Association and the European Academy of Paediatric Dentistry recommend limiting the frequency of consumption of drinks with sugar, and not giving baby bottles to infants during sleep (see earlier discussion).[96][97] Mothers are also recommended to avoid sharing utensils and cups with their infants to prevent transferring bacteria from the mother's mouth.[98]

It has been found that milk and certain kinds of cheese like cheddar cheese can help counter tooth decay if eaten soon after the consumption of foods potentially harmful to teeth.[30] Also, chewing gum containing xylitol (a sugar alcohol) is widely used to protect teeth in many countries now. Xylitol's effect on reducing dental biofilm is, it is presumed, due to bacteria's inability to utilize it like other sugars.[99] Chewing and stimulation of flavor receptors on the tongue are also known to increase the production and release of saliva, which contains natural buffers to prevent the lowering of pH in the mouth to the point where enamel may become demineralized.[100]

The use of dental sealants is a means of prevention.[101] A sealant is a thin plastic-like coating applied to the chewing surfaces of the molars to prevent food from being trapped inside pits and fissures. This deprives resident plaque bacteria of carbohydrate, preventing the formation of pit and fissure caries. Sealants are usually applied to children's teeth soon after the teeth erupt, but adults may also receive them if sealants were not previously placed. Sealants can wear out and fail to keep food and plaque bacteria out of pits and fissures, so they must be checked regularly by dental professionals and replaced when necessary.

Calcium, as found in food such as milk and green vegetables, is often recommended to protect against dental caries. Fluoride helps prevent decay of a tooth by binding to the hydroxyapatite crystals in enamel.[102] The incorporated fluorine makes enamel more resistant to demineralization and, thus, resistant to decay.[103] Topical fluoride is more highly recommended than systemic intake such as by tablets or drops to protect the surface of the teeth. This may include a fluoride toothpaste or mouthwash or varnish.[104] After brushing with fluoride toothpaste, rinsing should be avoided and the excess spat out.[105] Many dental professionals include application of topical fluoride solutions as part of routine visits and recommend the use of xylitol and amorphous calcium phosphate products. Silver diamine fluoride may work better than fluoride varnish to prevent cavities.[106] Water fluoridation also lowers the risk of tooth decay.[107]

An oral health assessment carried out before a child reaches the age of one may help with management of caries. The oral health assessment should include checking the child's history, a clinical examination, checking the risk of caries in the child including the state of their occlusion, and assessing how well equipped the child's parent or carer is to help the child prevent caries.[108] In order to further increase a child's cooperation in caries management, good communication by the dentist and the rest of the staff of a dental practice should be used. This communication can be improved by calling the child by their name, using eye contact and including them in any conversation about their treatment.[108]

Vaccines are also under development.[109]

Most importantly, whether the carious lesion is cavitated or noncavitated dictates the management. Clinical assessment of whether the lesion is active or arrested is also important. Noncavitated lesions can be arrested and remineralization can occur under the right conditions. However, this may require extensive changes to the diet (reduction in frequency of refined sugars), improved oral hygiene (toothbrushing twice per day with fluoride toothpaste and daily flossing), and regular application of topical fluoride. Such management of a carious lesion is termed "non-operative" since no drilling is carried out on the tooth. Non-operative treatment requires excellent understanding and motivation from the individual, otherwise the decay will continue.

Once a lesion has cavitated, especially if dentin is involved, remineralization is much more difficult and a dental restoration is usually indicated ("operative treatment"). Before a restoration can be placed, all of the decay must be removed, otherwise it will continue to progress underneath the filling. Sometimes a small amount of decay can be left if it is entombed and there is a seal that isolates the bacteria from their substrate. This can be likened to placing a glass container over a candle, which burns itself out once the oxygen is used up. Techniques such as stepwise caries removal are designed to avoid exposure of the dental pulp and to reduce the overall amount of tooth substance that requires removal before the final filling is placed. Often enamel that overlies decayed dentin must also be removed, as it is unsupported and susceptible to fracture. The modern decision-making process with regard to the activity of the lesion, and whether it is cavitated, is summarized in the table.[110]

Destroyed tooth structure does not fully regenerate, although remineralization of very small carious lesions may occur if dental hygiene is kept at an optimal level.[13] For small lesions, topical fluoride is sometimes used to encourage remineralization. For larger lesions, the progression of dental caries can be stopped by treatment. The goal of treatment is to preserve tooth structures and prevent further destruction of the tooth. Aggressive treatment of incipient carious lesions (places where there is only superficial damage to the enamel) by filling is controversial, as they may heal on their own, whereas once a filling is placed it will eventually have to be redone and the site remains vulnerable to further decay.[11]

In general, early treatment is quicker and less expensive than treatment of extensive decay. Local anesthetics, nitrous oxide ("laughing gas"), or other prescription medications may be required in some cases to relieve pain during or following treatment or to relieve anxiety during treatment.[111] A dental handpiece ("drill") is used to remove large portions of decayed material from a tooth. A spoon, a dental instrument used to carefully remove decay, is sometimes employed when the decay in dentin reaches near the pulp.[112] Some dentists remove dental caries using a laser rather than the traditional dental drill. A Cochrane review of this technique looked at Er:YAG (erbium-doped yttrium aluminium garnet), Er,Cr:YSGG (erbium, chromium: yttrium-scandium-gallium-garnet) and Nd:YAG (neodymium-doped yttrium aluminium garnet) lasers and found that although people treated with lasers (compared to a conventional dental drill) experienced less pain and had less need for dental anesthesia, overall there was little difference in caries removal.[113] Once the caries is removed, the missing tooth structure requires a dental restoration of some sort to return the tooth to function and aesthetic condition.

Restorative materials include dental amalgam, composite resin, porcelain, and gold.[114] Composite resin and porcelain can be made to match the color of a patient's natural teeth and are thus used more frequently when aesthetics are a concern. Composite restorations are not as strong as dental amalgam and gold; some dentists consider the latter as the only advisable restoration for posterior areas where chewing forces are great.[115] When the decay is too extensive, there may not be enough tooth structure remaining to allow a restorative material to be placed within the tooth. Thus, a crown may be needed. This restoration appears similar to a cap and is fitted over the remainder of the natural crown of the tooth. Crowns are often made of gold, porcelain, or porcelain fused to metal.

For children, preformed crowns are available to place over the tooth. These are usually made of metal (usually stainless steel but increasingly there are aesthetic materials). Traditionally teeth are shaved down to make room for the crown but, more recently, stainless steel crowns have been used to seal decay into the tooth and stop it progressing. This is known as the Hall Technique and works by depriving the bacteria in the decay of nutrients and making their environment less favorable for them. It is a minimally invasive method of managing decay in children and does not require local anesthetic injections in the mouth.

In certain cases, endodontic therapy may be necessary for the restoration of a tooth.[116] Endodontic therapy, also known as a "root canal", is recommended if the pulp in a tooth dies from infection by decay-causing bacteria or from trauma. In root canal therapy, the pulp of the tooth, including the nerve and vascular tissues, is removed along with decayed portions of the tooth. The canals are instrumented with endodontic files to clean and shape them, and they are then usually filled with a rubber-like material called gutta percha.[117] The tooth is filled and a crown can be placed. Upon completion of root canal therapy, the tooth is non-vital, as it is devoid of any living tissue.

An extraction can also serve as treatment for dental caries. The removal of the decayed tooth is performed if the tooth is too far destroyed from the decay process to effectively restore the tooth. Extractions are sometimes considered if the tooth lacks an opposing tooth or will probably cause further problems in the future, as may be the case for wisdom teeth.[118] Extractions may also be preferred by people unable or unwilling to undergo the expense or difficulties in restoring the tooth.

[Figure (world map; image not reproduced): legend bands of no data, <50, 50–60, 60–70, 70–80, 80–90, 90–100, 100–115, 115–130, 130–138, 138–140, 140–142, and >142; these appear to be rates of dental caries burden (disability-adjusted life years per 100,000 inhabitants) by country.]

Worldwide, approximately 2.43 billion people (36% of the population) have dental caries in their permanent teeth.[8] In baby teeth it affects about 620 million people, or 9% of the population.[8] The disease is most common in Latin American countries, countries in the Middle East, and South Asia, and least prevalent in China.[120] In the United States, dental caries is the most common chronic childhood disease, being at least five times more common than asthma.[121] It is the primary pathological cause of tooth loss in children.[122] Between 29% and 59% of adults over the age of 50 experience caries.[123]
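As a quick consistency check on these proportions, a short Python sketch using only the figures quoted above:

    PERMANENT_CASES = 2.43e9   # people with caries in permanent teeth
    PERMANENT_SHARE = 0.36     # 36% of the population
    BABY_CASES = 620e6         # people with caries in baby teeth (quoted as 9%)

    implied_population = PERMANENT_CASES / PERMANENT_SHARE
    print(f"{implied_population / 1e9:.2f} billion people")   # ~6.75 billion
    print(f"{BABY_CASES / implied_population:.1%}")           # ~9.2%, matching the 9% quoted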

The number of cases has decreased in some developed countries, and this decline is usually attributed to increasingly better oral hygiene practices and preventive measures such as fluoride treatment.[124] Nonetheless, countries that have experienced an overall decrease in cases of tooth decay continue to have a disparity in the distribution of the disease.[123] Among children in the United States and Europe, twenty percent of the population endures sixty to eighty percent of cases of dental caries.[125] A similarly skewed distribution of the disease is found throughout the world, with some children having none or very few caries and others having a high number.[123] Australia, Nepal, and Sweden (where children receive dental care paid for by the government) have a low incidence of cases of dental caries among children, whereas cases are more numerous in Costa Rica and Slovakia.[126]

The classic DMF (decay/missing/filled) index is one of the most common methods for assessing caries prevalence as well as dental treatment needs among populations. This index is based on in-field clinical examination of individuals by using a probe, mirror and cotton rolls. Because the DMF index is done without X-ray imaging, it underestimates real caries prevalence and treatment needs.[86]
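A minimal sketch of how such an index can be computed, assuming a simplified per-tooth (DMFT-style) scoring; the tooth numbers and statuses below are invented for illustration.

    # DMF(T) sketch: each examined tooth is scored D (decayed), M (missing
    # due to caries), F (filled), or S (sound); DMFT counts affected teeth.
    from collections import Counter

    def dmft(teeth: dict[str, str]) -> int:
        """Count teeth scored D, M, or F."""
        return sum(1 for status in teeth.values() if status in {"D", "M", "F"})

    # Hypothetical examination of four teeth, keyed by FDI tooth number:
    patient = {"16": "D", "21": "S", "36": "F", "46": "M"}
    print(dmft(patient))              # 3
    print(Counter(patient.values()))  # per-status breakdown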

Bacteria typically associated with dental caries have been isolated from vaginal samples from females who have bacterial vaginosis.[127]

There is a long history of dental caries. Over a million years ago, hominins such as Australopithecus suffered from cavities.[128] The largest increases in the prevalence of caries have been associated with dietary changes.[128][129] Archaeological evidence shows that tooth decay is an ancient disease dating far into prehistory. Skulls dating from a million years ago through the Neolithic period show signs of caries, including those from the Paleolithic and Mesolithic ages.[130] The increase of caries during the Neolithic period may be attributed to the increased consumption of plant foods containing carbohydrates.[131] The beginning of rice cultivation in South Asia is also believed to have caused an increase in caries, although there is also some evidence from sites in Thailand, such as Khok Phanom Di, that shows a decrease in overall percentage of dental caries with the increase in dependence on rice agriculture.[132]

A Sumerian text from 5000 BC describes a "tooth worm" as the cause of caries.[133] Evidence of this belief has also been found in India, Egypt, Japan, and China.[129] Unearthed ancient skulls show evidence of primitive dental work. In Pakistan, teeth dating from around 5500 BC to 7000 BC show nearly perfect holes from primitive dental drills.[134] The Ebers Papyrus, an Egyptian text from 1550 BC, mentions diseases of teeth.[133] During the Sargonid dynasty of Assyria, from 668 to 626 BC, writings from the king's physician specify the need to extract a tooth due to spreading inflammation.[129] In the Roman Empire, wider consumption of cooked foods led to a small increase in caries prevalence.[125] The Greco-Roman civilization, in addition to the Egyptian, had treatments for pain resulting from caries.[129]

The rate of caries remained low through the Bronze Age and Iron Age, but sharply increased during the Middle Ages.[128] Periodic increases in caries prevalence had been small in comparison to the increase around 1000 AD, when sugar cane became more accessible to the Western world. Treatment consisted mainly of herbal remedies and charms, but sometimes also included bloodletting.[135] The barber surgeons of the time provided services that included tooth extractions.[129] Trained through apprenticeships, these health providers were quite successful in ending tooth pain and likely prevented systemic spread of infections in many cases. Among Roman Catholics, prayers to Saint Apollonia, the patroness of dentistry, were meant to heal pain derived from tooth infection.[136]

There is also evidence of caries increase in North American Indians after contact with colonizing Europeans. Before colonization, North American Indians subsisted on hunter-gatherer diets, but afterwards there was a greater reliance on maize agriculture, which made these groups more susceptible to caries.[128]

During the European Age of Enlightenment, the belief that a "tooth worm" caused caries was no longer accepted in the European medical community.[137] Pierre Fauchard, known as the father of modern dentistry, was one of the first to reject the idea that worms caused tooth decay and noted that sugar was detrimental to the teeth and gingiva.[138] In 1850, another sharp increase in the prevalence of caries occurred and is believed to be a result of widespread diet changes.[129] Prior to this time, cervical caries was the most frequent type of caries, but increased availability of sugar cane, refined flour, bread, and sweetened tea corresponded with a greater number of pit and fissure caries.

In the 1890s, W.D. Miller conducted a series of studies that led him to propose an explanation for dental caries that was influential for current theories. He found that bacteria inhabited the mouth and that they produced acids that dissolved tooth structures when in the presence of fermentable carbohydrates.[139] This explanation is known as the chemoparasitic caries theory.[140] Miller's contribution, along with the research on plaque by G.V. Black and J.L. Williams, served as the foundation for the current explanation of the etiology of caries.[129] Several of the specific strains of lactobacilli were identified in 1921 by Fernando E. Rodriguez Vargas.

In 1924 in London, Killian Clarke described a spherical bacterium in chains isolated from carious lesions which he called Streptococcus mutans. Although Clarke proposed that this organism was the cause of caries, the discovery was not followed up. Later, in the 1950s in the USA, Keyes and Fitzgerald working with hamsters showed that caries was transmissible and caused by an acid-producing Streptococcus. It was not until the late 1960s that it became generally accepted that the Streptococcus isolated from hamster caries was the same as S. mutans described by Clarke.[141]

Tooth decay has been present throughout human history, from early hominids millions of years ago, to modern humans.[142] The prevalence of caries increased dramatically in the 19th century, as the Industrial Revolution made certain items, such as refined sugar and flour, readily available.[129] The diet of the newly industrialized English working class[129] then became centered on bread, jam, and sweetened tea, greatly increasing both sugar consumption and caries.

Naturalized from Latin into English (a loanword), caries in its English form originated as a mass noun meaning "rottenness",[4][143] that is, "decay". When used in that sense, it takes singular verb inflections (just like the word decay does). Thus caries was not traditionally a plural word synonymous with holes or cavities; that is, it was not the plural form of any singular form cary meaning hole or cavity. Nonetheless, the idea that it is such a plural is a reanalysis that naturally occurs to most English speakers, and the reanalyzed sense is common enough to be entered in various dictionaries and to exist in respectable usage. It still shows a hint of its reanalyzed origins in that it remains idiomatically limited to a plurale tantum sense; that is, like scissors or glasses, one speaks of caries obligately in the plural, not of one scissor, glass, or cary. (This is why one can look for a singular count-noun form of dental cary in any of a dozen major medical and general dictionaries and not find it listed.) Many still use it in the traditional sense (mass, singular), which is why they speak of carious lesions rather than just caries when they intend the plural count sense.

Cariology is the study of dental caries.

It is estimated that untreated dental caries results in worldwide productivity losses of about US$27 billion yearly.[144]

Dental caries is uncommon among companion animals.[145]

The rest is here:
Dental caries - Wikipedia

Read More...

JCI – Welcome

December 7th, 2016 2:41 pm

Myocardial infarction (MI) results in the generation of dead cells in the infarcted area. These cells are swiftly removed by phagocytes to minimize inflammation and limit expansion of the damaged area. However, the types of cells and molecules responsible for the engulfment of dead cells in the infarcted area remain largely unknown. In this study, we demonstrated that cardiac myofibroblasts, which execute tissue fibrosis by producing extracellular matrix proteins, efficiently engulf dead cells. Furthermore, we identified a population of cardiac myofibroblasts that appears in the heart after MI in humans and mice. We found that these cardiac myofibroblasts secrete milk fat globule-epidermal growth factor 8 (MFG-E8), which promotes apoptotic engulfment, and determined that serum response factor is important for MFG-E8 production in myofibroblasts. Following MFG-E8-mediated engulfment of apoptotic cells, myofibroblasts acquired anti-inflammatory properties. MFG-E8 deficiency in mice led to the accumulation of unengulfed dead cells after MI, resulting in exacerbated inflammatory responses and a substantial decrease in survival. Moreover, MFG-E8 administration into infarcted hearts restored cardiac function and morphology. MFG-E8-producing myofibroblasts mainly originated from resident cardiac fibroblasts and cells that underwent endothelial-mesenchymal transition in the heart. Together, our results reveal previously unrecognized roles of myofibroblasts in regulating apoptotic engulfment and a fundamental importance of these cells in recovery from MI.

Michio Nakaya, Kenji Watari, Mitsuru Tajima, Takeo Nakaya, Shoichi Matsuda, Hiroki Ohara, Hiroaki Nishihara, Hiroshi Yamaguchi, Akiko Hashimoto, Mitsuho Nishida, Akiomi Nagasaka, Yuma Horii, Hiroki Ono, Gentaro Iribe, Ryuji Inoue, Makoto Tsuda, Kazuhide Inoue, Akira Tanaka, Masahiko Kuroda, Shigekazu Nagata, Hitoshi Kurose

Read the rest here:
JCI - Welcome

Read More...

How Your Heart Works | HowStuffWorks

December 7th, 2016 2:40 pm

Everyone knows that the heart is a vital organ. We cannot live without our heart. However, when you get right down to it, the heart is just a pump. A complex and important one, yes, but still just a pump. As with all other pumps it can become clogged, break down and need repair. This is why it is critical that we know how the heart works. With a little knowledge about your heart and what is good or bad for it, you can significantly reduce your risk for heart disease.

Heart disease is the leading cause of death in the United States. Almost 2,000 Americans die of heart disease each day. That is one death every 44 seconds. The good news is that the death rate from heart disease has been steadily decreasing. Unfortunately, heart disease still causes sudden death and many people die before even reaching the hospital.

The heart holds a special place in our collective psyche as well. Of course the heart is synonymous with love. It has many other associations, too. Here are just a few examples:

Certainly no other bodily organ elicits this kind of response. When was the last time you had a heavy pancreas?

In this article, we will look at this important organ so that you can understand exactly what makes your heart tick.

See the original post here:
How Your Heart Works | HowStuffWorks

Read More...

Ophthalmology Medical Services – Eye Care Centers …

December 6th, 2016 3:45 am

New York Eye and Ear Infirmary Mobile Menu New York Eye and Ear Infirmary Main Navigation Eye Services Eye Faculty Practice

Our ophthalmologists provide comprehensive treatment for all eye related conditions.

Learn more

Our specialists offer expertise in the diagnosis and treatment of all retinal disorders

Learn more

Densha and Shavanne McCurchin share how Densha's treatment as an infant has shaped his life.

Read more

Learn more about our featured satellite office in Bay Ridge, Brooklyn

Visit Website

The Department of Ophthalmology at New York Eye and Ear Infirmary of Mount Sinai (NYEE) offers patients the most advanced and comprehensive treatments for all eye conditions. Our physicians are experts in managing all eye problems, including cataracts, glaucoma, age-related macular degeneration, corneal disease, retina conditions, and many other ophthalmologic disorders. We specialize in cornea and refractive surgery, eye trauma, neuro-ophthalmology, ocular immunology and uveitis, ocular oncology, oculoplastic and orbital surgery, ophthalmologic pathology, pediatric ophthalmology, and strabismus.

Search our Find a Doctor Directory for an ophthalmology expert.

Our Pediatric Ophthalmology Service has a strong reputation for quality care, accommodating approximately 5,600 patients per year.

John McKnight, MD describes his experience as a patient with the NYEE Ocular Trauma Service

As a national leader in ophthalmology, our Department continually strives to advance eye care throughout the New York metropolitan area, nationally, and internationally.

Icahn School of Medicine at Mount Sinai. All rights reserved.

Read this article:
Ophthalmology Medical Services - Eye Care Centers ...

Read More...

Ophthalmology Manhattan | New York City (NYC)

December 6th, 2016 3:45 am

In 1953, Dr. Mark Fromer's stepfather, Dr. Alfred Mamelok, first opened the 115 E. 61st Street office in New York City. In 1988, Mark Fromer, M.D. joined the practice, followed by his sister, Dr. Susan Fromer, in 2000. Over the years many advances in ophthalmology have taken place, and Fromer Eye Centers has taken an active role in bringing excellence in eye care to the New York area.

Fromer Eye Centers provides comprehensive eye care utilizing state-of-the-art modalities and treatment options. Our physicians are board-certified ophthalmologists and optometrists who have earned national reputations as top clinicians and educators. We offer treatments for cataracts, macular degeneration, diabetic retinopathy, glaucoma, corneal disorders, retinal detachments, ocular muscle disorders, pediatric ophthalmology, uveitis, and dry eye syndromes. Our specialists are trained in cosmetic and reconstructive surgery of the eyelid. Our surgeons utilize Botox and the latest fillers for facial rejuvenation. Our practice provides comprehensive examinations for eyeglasses, contact lenses, and laser vision correction.

Most of the patients seen in our offices have been referred to us by other eye care providers and physicians because of the highly specialized nature of our services. We are proud that other physicians entrust us to care for their patients.

Our doctors are on the cutting edge of the latest surgical techniques and treatment options. Our physicians lecture on regional and national levels to further the knowledge of fellow physicians.

Fromer Eye Centers is committed to the future of eye care, and to providing expert medical care with compassion.

Read more:
Ophthalmology Manhattan | New York City (NYC)

Read More...

Longevity

December 6th, 2016 3:44 am

Turning 40 involves incorporating new mantras in order to survive the day. "Don't sweat the small stuff." That is a really good one and it probably applies to all ages. "Don't cry over spilled milk," ...

When your lips are chapped, your nose is running, and you're trying to save money on your heating bill, the last thing you want to eat is a salad. But that doesn't mean you can't eat a healthy dinner....

During the past 50 years, with increased life expectancy and the impact of feminism, we've witnessed a sea change in our concepts of sexuality, motherhood, and age-appropriate behavior. Feminist Molly...

Suicide is the single biggest killer of men under the age of 45 in the UK. That introduction was my shortest ever, but it really needed to be for such a shockingly large statistic. When I first read i...

You don't need the latest yoga bralette, the fanciest juice cleanse, or a personal trainer to eat healthy, stay fit and sane, keep your house clean the natural way, and be good to the planet. If those...

I am qualified to teach both Yoga and Pilates, and though my preference, by and large, is Yoga, I am going to try and make this as non-biased as possible. I began my Yoga journey 8 years ago but Pilat...

If you're anything like me, you occasionally realize that your closet is full of scratchy, stuffy, too-small shirts, pants, and dresses that you simply never wear. Whether you Konmari it (does this s...

Yoga makes you feel and look more youthful. It literally slows the aging process by stretching the body. Muscles can be developed two ways: by building them up into hard little knots of power, which i...

Ammonia makes you cough and choke, and Comet smells like the bathrooms of Miss Hannigan's orphanage. But you want to make your home or apartment shine like the top of the Chrysler building! You just want ...

Turning 40 is a milestone. Granted, it isn't as exciting as turning 100, but if you still haven't entered menopause and men still look at you when you pass them on the street, you are in pretty good s...

If you follow any food blogs, you've most likely seen the recent storm of posts about Japanese cheesecake. Food bloggers have quickly fallen in love with it, from its name (cheesecake, what's not to l...

See the original post:
Longevity

Read More...

Longevity myths – Wikipedia

December 6th, 2016 3:44 am

This article is about myths related to the mythology of humans or other beings living to mythological ages. For validated specific supercentenarian claims by modern standards, see List of the verified oldest people. For modern, or complete, unvalidated supercentenarian claims, see Longevity claims.

Longevity myths are traditions about long-lived people (generally supercentenarians), either as individuals or groups of people, and practices that have been believed to confer longevity, but for which scientific evidence does not support the ages claimed or the reasons for the claims.[1][2] While literal interpretations of such myths may appear to indicate extraordinarily long lifespans, many scholars[3] believe such figures may be the result of incorrect translation of numbering systems through various languages, coupled with the cultural and/or symbolic significance of certain numbers.

The phrase "longevity tradition" may include "purifications, rituals, longevity practices, meditations, and alchemy"[4] that have been believed to confer greater human longevity, especially in Chinese culture.[1][2]

Modern science indicates various ways in which genetics, diet, and lifestyle affect human longevity. It also allows us to determine the age of human remains with a fair degree of precision.

The Hebrew Bible, the Torah, Joshua, Job, and 2 Chronicles mention individuals with lifespans up to the 969 years of Methuselah.

Some apologists[who?] explain these extreme ages as ancient mistranslations that converted the word "month" to "year", mistaking lunar cycles for solar ones: this would turn an age of 969 "years" into a more reasonable 969 lunar months, or 78 years of the Metonic cycle.[5]
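The arithmetic behind this reading can be checked with a short Python sketch, assuming a mean synodic (lunar) month of about 29.53 days and a solar year of 365.25 days:

    SYNODIC_MONTH_DAYS = 29.53  # mean lunar (synodic) month
    SOLAR_YEAR_DAYS = 365.25

    def lunar_months_to_years(months: float) -> float:
        """Convert a count of lunar months to solar years."""
        return months * SYNODIC_MONTH_DAYS / SOLAR_YEAR_DAYS

    # Methuselah's 969 "years" read as lunar months:
    print(f"{lunar_months_to_years(969):.1f} solar years")  # ~78.3, matching the text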

Donald Etz says that the Genesis 5 numbers were multiplied by ten by a later editor.[6] These interpretations introduce an inconsistency as the ages of the first nine patriarchs at fatherhood, ranging from 62 to 230 years in the manuscripts, would then be transformed into an implausible range such as 5 to 18 years.[7] Others say that the first list, of only 10 names for 1,656 years, may contain generational gaps, which would have been represented by the lengthy lifetimes attributed to the patriarchs.[8] Nineteenth-century critic Vincent Goehlert suggests the lifetimes "represented epochs merely, to which were given the names of the personages especially prominent in such epochs, who, in consequence of their comparatively long lives, were able to acquire an exalted influence."[9]

Those biblical scholars that teach literal interpretation give explanations for the advanced ages of the early patriarchs. In one view man was originally to have everlasting life, but as sin was introduced into the world by Adam,[10] its influence became greater with each generation and God progressively shortened man's life.[11] In a second view, before Noah's flood, a "firmament" over the earth (Genesis 1:6–8) contributed to people's advanced ages.[12]

Abraham's wife Sarah is the only woman in the Old Testament whose age is given. She was 127 (Genesis 23:1).

Chapter 2 of Falun Gong by Li Hongzhi (2001) states, "A person in Japan named Mitsu Taira lived to be 242 years old. During the Tang Dynasty in our country, there was a monk called Hui Zhao [526–815[17]] who lived to be 290 [288/289] years old. According to the county annals of Yong Tai in Fujian Province, Chen Jun was born in the first year of Zhong He time (881 AD) under the reign of Emperor Xi Zong during the Tang Dynasty. He died in the Tai Ding time of the Yuan Dynasty (1325 AD), after living for 444 years."[18]

Like Methuselah in Judaism, Bhishma among the Hindus is believed to have lived to a very advanced age and is a metaphor for immortality. His life spans four generations and considering that he fought for his great-nephews in the Mahabharata War who were themselves in their 70s and 80s, it is estimated that Bhishma must have been between 130 and 370 years old at the time of his death.

According to 19th-century scholars, Abdul Azziz al-Hafeed al-Habashi lived 673/674 Gregorian years, or 694/695 Islamic years, from 581 to 1276 of the Hijra.[23]

In Twelver Shiism, Muhammad al-Mahdi is believed to currently be in hiding (Major Occultation) and still alive.

Extreme lifespans are ascribed to the Tirthankaras. For instance, Neminatha was said to have lived for over 10,000 years before his ascension, Naminatha for over 20,000 years, Munisuvrata for over 30,000 years, Mallinatha for over 56,000 years, Aranatha for over 84,000 years, Kunthunatha for over 100,000 years, and Shantinatha even for over 700,000 years.[24]

These include claims prior to approximately 150 CE, before the fall of the Roman Empire.

The book Macrobii ("Long-livers") is a work devoted to longevity. It was attributed to the ancient Greek author Lucian, although it is now accepted that he could not have written it. Most examples given in it are lifespans of 80 to 100 years, but some are much longer:

Some early emperors of Japan ruled for more than a century, according to the tradition documented in the Kojiki, viz., Emperor Jimmu and Emperor Kōan.

The reigns of several shahs in the Shahnameh, an epic poem by Ferdowsi, are given as longer than a century:

In Roman times, Pliny wrote about longevity records from the census carried out in 74 AD under Vespasian. In one region of Italy many people allegedly lived past 100; four were said to be 130, others even older. The ancient Greek author Lucian is the presumed author of Macrobii (long-livers), a work devoted to longevity. Most of the examples Lucian gives are what would be regarded as normal long lifespans (80–100 years).

Age claims for the earliest eight Sumerian kings in the major recension of the Sumerian King List were in units and fractions of shar (3,600 years) and totaled 67 shar or 241,200 years.[30]

In the only ten-king tablet recension of this list three kings (Alalngar, [...]kidunnu, and En-men-dur-ana) are recorded as having reigned 72,000 years each.[8][31] The major recension assigns 43,200 years to the reign of En-men-lu-ana, and 36,000 years each to those of Alalngar and Dumuzid.[30]
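Since reigns in these recensions are recorded in shar, the conversion is simple multiplication; a small Python sketch, using only the 3,600-year unit given above:

    SHAR_YEARS = 3_600  # one shar, per the Sumerian King List

    def shar_to_years(shar: int) -> int:
        """Convert reign lengths recorded in shar to years."""
        return shar * SHAR_YEARS

    print(shar_to_years(67))  # 241200 -- the total quoted above for the eight kings
    print(shar_to_years(12))  # 43200  -- e.g. En-men-lu-ana's recorded reign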

The first 18 Hùng kings of Vietnam were each reported to have lived over 200 years. Their reigns lasted from 2879 BC to 258 BC.

These include longevity claims made in a country or region in the modern era, ordered alphabetically by country or region.

Deaths officially reported in Russia in 1815 listed 1,068 centenarians, including 246 supercentenarians (50 at ages 120–155 and one even older).[34] Time magazine considered that, in the Soviet Union, longevity had been elevated to a state-supported "Methuselah cult".[74] The USSR insisted on its citizens' unrivaled longevity by claiming 592 people (224 male, 368 female) over age 120 in a 15 January 1959 census[75] and 100 citizens of Russia alone aged 120 to 156 in March 1960.[76] Such later claims were fostered by Georgian-born Joseph Stalin's apparent hope that he would live long past 70.[74] Zhores A. Medvedev, who demonstrated that all 500-plus claims failed birth-record validation and other tests,[74] said Stalin "liked the idea that [other] Georgians lived to be 100".[76]

Swedish death registers contain detailed information on thousands of centenarians going back to 1749; the maximum age at death reported between 1751 and 1800 was 147.[83]

Swiss anatomist Albrecht von Haller collected examples of 62 people ages 110–120, 29 ages 120–130, and 15 ages 130–140.[85]

Cases of extreme longevity were listed by James Easton in 1799, who covered 1,712 cases documented between 66 BCE and 1799, the year of publication;[90] Charles Hulbert also edited a book containing a list of cases in 1825. Some extreme longevity claims include:

A periodical The Aesculapian Register, written by physicians and published in Philadelphia in 1824, listed a number of cases, including several purported to have lived over 130. The authors said the list was taken from the Dublin Magazine.[100]

The idea that certain diets can lead to extraordinary longevity (ages beyond 130) is not new. In 1909, Élie Metchnikoff believed that drinking goat's milk could confer extraordinary longevity. The Hunza diet, supposedly practiced in an area of northern Pakistan, has been claimed to give people the ability to live to 140 or more.[108] There has been no proof that any diet has led humans to live longer than the genetically recognized maximum;[citation needed] however, caloric restriction diets have significantly increased the lifespans of rodents.

Traditions that have been believed to confer greater human longevity include alchemy.[4]

The Fountain of Youth reputedly restores the youth of anyone who drinks of its waters. The New Testament, following older Jewish tradition, attributes healing to the Pool of Bethesda when the waters are "stirred" by an angel.[112] Herodotus attributes exceptional longevity to a fountain in the land of the Ethiopians.[113] The lore of the Alexander Romance and of Al-Khidr describes such a fountain, and stories about the philosopher's stone, universal panaceas, and the elixir of life are widespread.

After the death of Juan Ponce de León, Gonzalo Fernández de Oviedo y Valdés wrote in Historia General y Natural de las Indias (1535) that Ponce de León was looking for the waters of Bimini to cure his aging.[114]

Originally posted here:
Longevity myths - Wikipedia

Read More...

5 Symptoms of a Weakened Immune System – Step To Health

December 5th, 2016 10:48 am

Your immune system is the mechanism that your body uses to defend itself from viruses, bacteria, and many types of diseases. Sometimes it tends to get weak: a poor diet, stress, or some kind of illness can all prevent it from performing its basic functions.

Your immune system is your defense, your immune response to certain external agents that can get inside you and harm you. It is made up of a network of cells, tissues, and organs that work together to protect your body. You probably know them: these protective cells are called leukocytes, or white blood cells. They are in charge of attacking the organisms that cause sickness. These cells are found in the thymus, spleen, and bone marrow, which are called lymphatic organs.

If for whatever reason you have a lowered level of leukocytes at any given moment, you will not be able to take on those external elements that make you sick. So it is important that you are aware of certain signals, so that your doctor can quickly determine the origin of the weakness and you can address it. So, let's take a look at the signs of a weakened immune system.

It is true that fatigue can have many causes. But when it is constant, when you wake up in the morning feeling exhausted, when the smallest things leave you tired, or when changes in temperature make you feel depressed or nauseated, these are all symptoms to keep in mind.

Urinary tract infections, stomach problems, inflamed and red gums, and frequent diarrhea are all signs that your immune system is not handling the external agents that enter your body as it should. It is not producing the proper response and cannot defend you against certain viruses or bacteria.

How many colds do you tend to catch? One every month? Does your throat always hurt? Do you suddenly catch the flu? You should see your doctor so they can test your levels of white blood cells. Your immune system may not be defending you as it should.

Some people experience allergic reactions more often than others. Their bodies cannot cope with certain pollens, dust, and other environmental agents that affect the skin or mucous membranes, and their health suffers immediately. If that is the case for you, it is possible that you have a weak immune system.

We all know it: a good diet is synonymous with good health. But sometimes we only remember this when we are already experiencing a problem, when we are already sick. It is necessary to have varied and balanced nutrition at all times, rich in fruit, vegetables, and lean protein, and low in excess sugar, fat, and alcohol. Citrus fruits are always excellent for your health, so don't forget to eat oranges, mandarins, papaya, grapes, tomatoes, etc.

Get restful, restorative sleep. This is essential for keeping your immune system strong and for recovering energy and performing essential functions. Insomnia and worries, the things that keep you awake at night, are enemies of your health.

We also know this, but sometimes it slips by us. Wash your hands before eating, before handling food, after touching animals, and after getting home from outside or work. It is also important to take care of the cleanliness of your food. Wash the vegetables that you are going to cook well: submerge them in water and remove any residue. This is all essential for protecting your immune system.

Stress is not only an emotion. If it turns chronic, it can cause serious problems. Toxins accumulate in your body, weaken your immune system, and make you sick. So keep it in mind: establish priorities, learn to love yourself, find time for yourself, and do things you like to do.

See the article here:
5 Symptoms of a Weakened Immune System - Step To Health

Read More...
