


Treating arthritis with algae – Medical Xpress

August 24th, 2017 1:47 am

The new approach to treating arthritis is based on brown algae. Credit: istockphoto / Empa

Researchers at ETH Zurich, Empa and the Norwegian research institute SINTEF are pursuing a new approach to treating arthritis. This is based on a polysaccharide, a long-chain sugar molecule, originating from brown algae. When chemically modified, this "alginate" reduces oxidative stress, has an anti-inflammatory effect in cell culture tests and suppresses the immune reaction against cartilage cells, thereby combating the causes of arthritis. The research is, however, still in its infancy.

Arthritis is the most-widespread joint disease, with around 90 percent of all people over 65 being affected to varying degrees, but this degenerative disease is also widespread amongst younger people. In arthritis, the cartilage in the joint, a type of protective layer on bones that "lubricates" the joint, degenerates over time. This can be extremely painful for sufferers, because inflammatory reactions are associated with cartilage degeneration. In the later stages of the disease, bones are no longer adequately protected and can directly rub against each other.

Arthritis can affect all joints in the body, but most often affects the knee joint, hip joint and fingers. The disease has been considered incurable until now. Current treatment methods, such as anti-inflammatory drugs and painkillers, mainly address the symptoms. Often, the only remaining option is an operation to replace the affected joint with an artificial one.

Initial research results are encouraging

In laboratory tests, the team led by ETHZ researcher Marcy Zenobi-Wong and Empa researcher Katharina Maniura has now succeeded, together with SINTEF in Norway, in identifying a substance with the potential to halt cartilage degeneration in joints. This substance is the polysaccharide alginate extracted from the stems of brown algae - or more precisely cuvie (Lat. Laminaria hyperborea), which is similar to specific extracellular biomolecules in cartilage. The researchers chemically modified the alginate with sulfate groups and then added it in dissolved form to cell cultures to examine the reaction of various cell types to the modified polysaccharide. This revealed that alginate sulfate can significantly reduce oxidative stress, which is a frequent cause of cell damage or even cell death, and the more sulfate groups attached to the alginate molecule, the greater this reduction.

Alginate sulfate was also able to suppress the inflammatory reaction, again depending on the number of sulfate groups, and was able to down-regulate the expression of genes that trigger an inflammatory reaction in both human cartilage cells, known as chondrocytes, and in macrophages, the "scavenger cells" of our immune system. The algal molecules should therefore slow down cartilage degeneration. "The hope is that they can even stop this degeneration," says Empa researcher Markus Rottmar.

Further research work necessary

The alginate sulfates have so far only been tested in vitro, i.e. in the laboratory with cell cultures. However, the encouraging results mean that research will now continue. The next stage is to test the substances on animals. If this is also successful, clinical trials can then be conducted on people. These tests are, however, laborious and time-consuming. If everything were to work perfectly, it would still be a few years before arthritis patients could be treated with alginate sulfate.


More information: Anne Kerschenmeyer et al. Anti-oxidant and immune-modulatory properties of sulfated alginate derivatives on human chondrocytes and macrophages, Biomater. Sci. (2017). DOI: 10.1039/c7bm00341b


Arthritis: Experts STEP UP battle to beat agonising condition – Express.co.uk

August 24th, 2017 1:47 am


Doctors battling to find ways of stopping or easing the pain caused by the disease launched a nationwide campaign to highlight the true impact on patients.

It came as research shows arthritis costs the UK economy £2.6 billion a year.

More than 10 million people in Britain suffer from the disease, which causes swelling of the joints.

Yet while it is the leading cause of pain in the UK, campaigners say it remains largely invisible.

Dr Liam O'Toole, chief executive of Arthritis Research UK, said: "We have this sort of culture of 'suffer in silence, grin and bear it, it's what my granny used to suffer from'.

"Actually it affects all of us directly and indirectly. We all lose out from it and we want to make sure people don't suffer in silence."


Launching the campaign, Dr O'Toole added: "Today we have taken an important step in changing the way the nation sees this major public health issue."

New research shows 25 million working days are lost annually in the UK due to the two most common forms, osteoarthritis and rheumatoid arthritis.

The figure is set to rise to 25.9 million lost days at a cost of £3.4 billion to the economy by 2030, according to research by the York Health Economics Consortium at the University of York.

By 2050 the figures will increase to 27.2 million working days, costing the economy £4.7 billion a year.
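As a rough consistency check, the projections above imply only modest compound annual growth rates; the sketch below computes them, treating 2017 (the article's publication year) as the baseline for the 25 million days and £2.6 billion figures, which is an assumption rather than something the article states.

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two values `years` apart."""
    return (end / start) ** (1 / years) - 1

# Lost working days (millions); 2017 baseline year is an assumption.
print(f"days, to 2030: {cagr(25.0, 25.9, 2030 - 2017):.2%}/yr")
print(f"days, to 2050: {cagr(25.0, 27.2, 2050 - 2017):.2%}/yr")

# Cost to the economy (billions of pounds)
print(f"cost, to 2030: {cagr(2.6, 3.4, 2030 - 2017):.2%}/yr")
print(f"cost, to 2050: {cagr(2.6, 4.7, 2050 - 2017):.2%}/yr")
```

The implied growth is well under one percent a year for lost days and around two percent for costs, so the projections extrapolate a steady trend rather than assume a surge.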



The NHS and wider healthcare system will spend £10.2 billion treating the conditions this year and a total of £118.6 billion in the next decade.

Consortium director Matthew Taylor said: "Our research highlights just how significant that impact is and the fact that it's set to increase.

"It's imperative that we all understand arthritis better, so that we can take the necessary steps to help people living with it."

A separate survey showed that around three in four people with all types of arthritis say their family and social lives are compromised.

Just over half of those quizzed feel they are a nuisance to their families, while around a third of sufferers report a negative effect on physical intimacy with their partners.

While 88 per cent of sufferers describe it as a debilitating and life-restricting condition, the report reveals the condition is largely hidden from public view.

The survey found 86 per cent of people with the condition try hard not to let arthritis define them or their personality.

And even though it impacts people of all ages, 89 per cent of people living with it believe the condition is viewed by society as an old person's disease.


More than £100 million is being spent this year to develop breakthrough treatments and find a cure.

But campaigners insist more resources need to be ploughed into the issue.

Dr O'Toole said: "There is a complete mismatch between the enormous impact arthritis has on individuals, their families and society, and the attention, priority and resources society currently gives to it."

He added: "One of the root causes of this is the condition's invisibility. Change will only come if we can win acknowledgement that there is a problem in which we all have a stake."


Anne Kearl, 55, from Hampshire, has osteoarthritis.

She said the pain is "always there", adding: "When friends and colleagues can't physically see anything wrong with you, they assume you're OK, and often I let people think that rather than be honest about my arthritis."


Arthritis Drugs and Kidney Disease – HuffPost

August 24th, 2017 1:47 am

Do you suffer from arthritis and take over-the-counter (OTC) arthritis drugs? You could be at risk of adverse drug events.

You are not alone. The most commonly used OTC arthritis drugs are non-steroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen (sold as Motrin and Advil) and naproxen (sold as Aleve). The FDA estimates that in any given week 17% of adults in the United States took ibuprofen and 3.5% took naproxen. While these drugs are extremely effective in reducing the pain and inflammation associated with arthritis, they are also responsible for many adverse drug reactions and are associated with stomach ulcers, high blood pressure, heart attacks, heart failure and liver failure.

The label instructions for both ibuprofen and Aleve recommend that you consult your physician if you take these drugs for more than 10 days for arthritis pain. The labels go on to say that the drugs "temporarily relieve minor aches and pains due to ... minor pain of arthritis." I'd like to emphasize the word "temporarily" and advise everyone with arthritis pain that these drugs are not indicated for long-term use unless supervised by your physician.

According to a recent report, use of these NSAIDs increases the risk of acute kidney injury by 50%, both in the general population and in patients with chronic kidney disease (CKD). The report also estimated that the incidence of acute kidney injury in NSAID users over 50 was double that of people who did not take NSAIDs.
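A "50% increased risk" is a statement about relative, not absolute, risk; what it means in practice depends on the baseline incidence. A minimal sketch, with entirely hypothetical numbers (the baseline risk and population below are illustrative, not taken from the report):

```python
def excess_cases(baseline_risk, relative_risk, population):
    """Additional cases attributable to the exposure, given a baseline risk."""
    return (baseline_risk * relative_risk - baseline_risk) * population

# Hypothetical: 1% baseline risk of acute kidney injury among 100,000
# NSAID users, with the report's relative risk of 1.5 (a 50% increase)
print(round(excess_cases(0.01, 1.5, 100_000)))  # 500 additional cases
```

The same relative risk applied to a higher-baseline group, such as patients with CKD, translates into proportionally more excess cases, which is why the report singles them out.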

There are many forms of acute kidney injury associated with NSAIDs including kidney failure requiring dialysis, allergic disease of the kidney, worsening of underlying chronic kidney disease, worsening high blood pressure and elevation of blood potassium levels. NSAIDs also can cause interactions with other drugs that you may be taking for other diseases. Many of these adverse drug events can be life-threatening.

If you are interested in taking NSAIDs, I recommend that you consult your physician and review your medical history. You should avoid long-term use of these agents, and you should be monitored carefully when taking them over extended periods of time.

Learn more about pain medications.



Fibromyalgia, arthritis support group to meet Sept. 5 – The Bryan Times (subscription)

August 24th, 2017 1:47 am

MONTPELIER - The next meeting of the Fibromyalgia and Arthritis Support Group will be held on Tuesday, Sept. 5, at 7 p.m. at the Montpelier Senior Center, 325 N. Jonesville St.

Fibromyalgia is a common and disabling disorder affecting two to four percent of the population. Patients with fibromyalgia usually ache all over, sleep poorly, are stiff on waking, and are tired all day.

They have muscular pain throughout the body, problems thinking clearly and short-term memory loss ...

Patients who learn as much as possible about this disorder usually do better. For these reasons, and just because it is good to ...

Meetings are held on the first Tuesday of every month. For more information, call Esther at 419-636-5313.

sharon@bryantimes.com


Oral Contraceptives Tied to Lower Rheumatoid Arthritis Risk – New York Times

August 24th, 2017 1:47 am


Taking oral contraceptives may reduce the risk for rheumatoid arthritis, a new study has found.

The exact cause of rheumatoid arthritis is unclear, but since it is about three times more common in women than in men, some have suggested hormonal factors might be involved.

Swedish researchers studied 2,641 women with the disease and 4,251 healthy controls. They did blood tests and collected health and behavioral data, including information about their reproductive history, breast-feeding and use of contraception. The study, in the Annals of the Rheumatic Diseases, followed them for eight years.

Women in the study had used oral contraceptives for an average of seven years. Overall, after adjusting for age, alcohol consumption, smoking and other factors, current users were 15 percent less likely, and past users 13 percent less likely, than those who had never used oral contraceptives to develop rheumatoid arthritis. Users with positive blood tests for the antibody called ACPA, a predictor of rheumatoid arthritis, reduced their risk by 16 percent.
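Figures like "15 percent less likely" are the usual shorthand for adjusted odds ratios below 1. The odds ratios below (roughly 0.85, 0.87 and 0.84) are reconstructed from the reported percentages rather than taken from the paper, so treat them as illustrative:

```python
def percent_less_likely(odds_ratio):
    """Express an odds ratio below 1 as a 'percent less likely' figure."""
    return round((1 - odds_ratio) * 100)

# Hypothetical odds ratios consistent with the reported reductions
print(percent_less_likely(0.85))  # 15 (current users)
print(percent_less_likely(0.87))  # 13 (past users)
print(percent_less_likely(0.84))  # 16 (ACPA-positive users)
```

Odds ratios approximate relative risks when the outcome is rare, which is why case-control results like these can be translated directly into "percent less likely" statements.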

Although some smaller studies have found a link between women who had breast-fed their babies and a lower risk of rheumatoid arthritis, this study found none.

"If you're already using oral contraception, you don't need to stop if you have a family history of R.A. or are diagnosed with the disease," said the lead author, Cecilia Orellana, a postdoctoral fellow at the Karolinska Institute in Stockholm. "But we are not recommending you start them as a preventive. We can't overlook the other potential side effects of the drugs on other conditions."


Embryonic stem cell – Wikipedia

August 24th, 2017 1:47 am

Embryonic stem cells (ES cells) are pluripotent stem cells derived from the inner cell mass of a blastocyst, an early-stage preimplantation embryo.[1][2] Human embryos reach the blastocyst stage 4-5 days post fertilization, at which time they consist of 50-150 cells. Isolating the embryoblast or inner cell mass (ICM) results in destruction of the blastocyst, which raises ethical issues, including whether or not embryos at the pre-implantation stage should be considered to have the same moral or legal status as embryos in the post-implantation stage of development.[3][4]

Human ES cells measure approximately 14 μm while mouse ES cells are closer to 8 μm.[5]

Embryonic stem cells, derived from blastocyst-stage early mammalian embryos, are distinguished by their ability to differentiate into any cell type and by their ability to propagate. Their properties include a normal karyotype, high telomerase activity, and remarkable long-term proliferative potential.[6]

Embryonic stem cells of the inner cell mass are pluripotent, that is, they are able to differentiate to generate primitive ectoderm, which ultimately differentiates during gastrulation into all derivatives of the three primary germ layers: ectoderm, endoderm, and mesoderm. These include each of the more than 220 cell types in the adult body. Pluripotency distinguishes embryonic stem cells from adult stem cells found in adults; while embryonic stem cells can generate all cell types in the body, adult stem cells are multipotent and can produce only a limited number of cell types. If the pluripotent differentiation potential of embryonic stem cells could be harnessed in vitro, it might be a means of deriving cell or tissue types virtually to order. This would provide a radical new treatment approach to a wide variety of conditions where age, disease, or trauma has led to tissue damage or dysfunction.

In 2012, the Nobel Prize in Physiology or Medicine was awarded jointly to John B. Gurdon and Shinya Yamanaka for the discovery that mature cells can be reprogrammed to become pluripotent.[7]

Additionally, under defined conditions, embryonic stem cells are capable of propagating themselves indefinitely in an undifferentiated state and have the capacity when provided with the appropriate signals to differentiate, presumably via the formation of precursor cells, to almost all mature cell phenotypes.[8] This allows embryonic stem cells to be employed as useful tools for both research and regenerative medicine, because they can produce limitless numbers of themselves for continued research or clinical use.

Because of their plasticity and potentially unlimited capacity for self-renewal, embryonic stem cell therapies have been proposed for regenerative medicine and tissue replacement after injury or disease. Diseases that could potentially be treated by pluripotent stem cells include a number of blood and immune-system related genetic diseases, cancers and disorders; juvenile diabetes; Parkinson's disease; blindness; and spinal cord injuries. Besides the ethical concerns of stem cell therapy (see stem cell controversy), there is a technical problem of graft-versus-host disease associated with allogeneic stem cell transplantation. However, these problems of histocompatibility may be solved using autologous adult stem cells, therapeutic cloning, stem cell banks or, more recently, reprogramming of somatic cells with defined factors (e.g. induced pluripotent stem cells). Embryonic stem cells also provide hope that it will be possible to overcome the problem of donor tissue shortage by making the cells immunocompatible with the recipient. Other potential uses of embryonic stem cells include investigation of early human development, study of genetic disease, and use as in vitro systems for toxicology testing.[6]

According to a 2002 article in PNAS, "Human embryonic stem cells have the potential to differentiate into various cell types, and, thus, may be useful as a source of cells for transplantation or tissue engineering."[9]

Current research focuses on differentiating ES cells into a variety of cell types for eventual use as cell replacement therapies (CRTs). Some of the cell types that have been or are currently being developed include cardiomyocytes (CM), neurons, hepatocytes, bone marrow cells, islet cells and endothelial cells.[10] However, the derivation of such cell types from ES cells is not without obstacles, and hence current research is focused on overcoming these barriers. For example, studies are underway to differentiate ES cells into tissue-specific CMs and to eradicate the immature properties that distinguish them from adult CMs.[11]

Besides becoming, in the future, an important alternative to organ transplants, ES cells are also being used in the field of toxicology and as cellular screens to uncover new chemical entities (NCEs) that can be developed as small-molecule drugs. Studies have shown that cardiomyocytes derived from ES cells are validated in vitro models for testing drug responses and predicting toxicity profiles.[10] ES-derived cardiomyocytes have been shown to respond to pharmacological stimuli and hence can be used to assess cardiotoxicity such as torsades de pointes.[12]

ES-derived hepatocytes are also useful models that could be used in the preclinical stages of drug discovery. However, the development of hepatocytes from ES has proven to be challenging and this hinders the ability to test drug metabolism. Therefore, current research is focusing on establishing fully functional ES-derived hepatocytes with stable phase I and II enzyme activity.[13]

Researchers have also differentiated ES cells into dopamine-producing cells in the hope that these neurons could be used in the treatment of Parkinson's disease.[14][15] Recently, the derivation of ESC lines after somatic cell nuclear transfer (SCNT) of olfactory ensheathing cells (OECs) into a healthy oocyte has been proposed for neurodegenerative diseases.[16] ES cells have also been differentiated into natural killer (NK) cells and bone tissue.[17] Studies involving ES cells are also underway to provide an alternative treatment for diabetes. For example, D'Amour et al. were able to differentiate ES cells into insulin-producing cells,[18] and researchers at Harvard University were able to produce large quantities of pancreatic beta cells from ES cells.[19]

Several new studies have started to address the problem of modeling genetic disease. This has been done either by genetically manipulating the cells or, more recently, by deriving diseased cell lines identified by preimplantation genetic diagnosis (PGD). This approach may well prove invaluable for studying disorders such as fragile X syndrome, cystic fibrosis, and other genetic maladies that have no reliable model system.

Yury Verlinsky, a Russian-American medical researcher who specialized in embryo and cellular genetics (genetic cytology), developed prenatal diagnosis testing methods to determine genetic and chromosomal disorders a month and a half earlier than standard amniocentesis. The techniques are now used by many pregnant women and prospective parents, especially those couples with a history of genetic abnormalities or where the woman is over the age of 35, when the risk of genetically related disorders is higher. In addition, by allowing parents to select an embryo without genetic disorders, they have the potential of saving the lives of siblings that already had similar disorders and diseases using cells from the disease free offspring.[20]

Scientists have discovered a new technique for deriving human embryonic stem cells (ESCs). Normal ESC lines have been established from different sources of embryonic material, including morulae and whole blastocysts. These findings allow researchers to construct ESC lines from embryos that carry different genetic abnormalities, making it possible to recognize which molecular-level mechanisms are blocked and could impede disease progression. ESC lines originating from embryos with genetic and chromosomal abnormalities provide the data necessary to understand the pathways of genetic defects.[21]

A carrier has one defective copy of a gene and one normal copy, and only one of the two copies is passed on through each egg cell. By selecting embryonic stem cells derived from egg cells that carry the normal copy, researchers can pursue treatments for a variety of diseases. To test this theory, Dr. McLaughlin and several of his colleagues examined whether parthenogenetic embryonic stem cells could be used in a mouse model of thalassemia intermedia, an inherited blood disorder in which a lack of hemoglobin leads to anemia. The mouse model used had one defective gene copy. Embryonic stem cells from unfertilized eggs of the diseased mice were gathered, and those stem cell lines that contained only healthy hemoglobin genes were identified. The healthy embryonic stem cell lines were then converted into cells that were transplanted into the carrier mice. After five weeks, test results showed that these carrier mice had a normal blood cell count and normal hemoglobin levels.[22]

Differentiated somatic cells and ES cells use different strategies for dealing with DNA damage. For instance, human foreskin fibroblasts, one type of somatic cell, use non-homologous end joining (NHEJ), an error-prone DNA repair process, as the primary pathway for repairing double-strand breaks (DSBs) during all cell cycle stages.[23] Because of its error-prone nature, NHEJ tends to produce mutations in a cell's clonal descendants.

ES cells use a different strategy to deal with DSBs.[24] Because ES cells give rise to all of the cell types of an organism including the cells of the germ line, mutations arising in ES cells due to faulty DNA repair are a more serious problem than in differentiated somatic cells. Consequently, robust mechanisms are needed in ES cells to repair DNA damage accurately, and if repair fails, to remove those cells with un-repaired DNA damage. Thus, mouse ES cells predominantly use high fidelity homologous recombinational repair (HRR) to repair DSBs.[24] This type of repair depends on the interaction of the two sister chromosomes formed during S phase and present together during the G2 phase of the cell cycle. HRR can accurately repair DSBs in one sister chromosome by using intact information from the other sister chromosome. Cells in the G1 phase of the cell cycle (i.e. after metaphase/cell division but prior to the next round of replication) have only one copy of each chromosome (i.e. sister chromosomes aren't present). Mouse ES cells lack a G1 checkpoint and do not undergo cell cycle arrest upon acquiring DNA damage.[25] Rather, they undergo programmed cell death (apoptosis) in response to DNA damage.[26] Apoptosis can be used as a fail-safe strategy to remove cells with un-repaired DNA damage in order to avoid mutation and progression to cancer.[27] Consistent with this strategy, mouse ES cells have a mutation frequency about 100-fold lower than that of isogenic mouse somatic cells.[28]

The major concern with the possible transplantation of ESCs into patients as therapies is their ability to form tumors, including teratomas.[29] Safety issues prompted the FDA to place a hold on the first ESC clinical trial (see below); however, no tumors were observed.

The main strategy to enhance the safety of ESC for potential clinical use is to differentiate the ESC into specific cell types (e.g. neurons, muscle, liver cells) that have reduced or eliminated ability to cause tumors. Following differentiation, the cells are subjected to sorting by flow cytometry for further purification. ESC are predicted to be inherently safer than IPS cells because they are not genetically modified with genes such as c-Myc that are linked to cancer. Nonetheless, ESC express very high levels of the iPS inducing genes and these genes including Myc are essential for ESC self-renewal and pluripotency,[30] and potential strategies to improve safety by eliminating c-Myc expression are unlikely to preserve the cells' "stemness". However, N-myc and L-myc have been identified to induce iPS cells instead of c-myc with similar efficiency.[31][32]

In 1964, Lewis Kleinsmith and G. Barry Pierce Jr. isolated a single type of cell from a teratocarcinoma, a tumor now known to be derived from a germ cell.[33] These cells isolated from the teratocarcinoma replicated and grew in cell culture as a stem cell and are now known as embryonal carcinoma (EC) cells.[34] Although similarities in morphology and differentiating potential (pluripotency) led to the use of EC cells as the in vitro model for early mouse development,[35] EC cells harbor genetic mutations and often abnormal karyotypes that accumulated during the development of the teratocarcinoma. These genetic aberrations further emphasized the need to be able to culture pluripotent cells directly from the inner cell mass.

In 1981, embryonic stem cells (ES cells) were first derived independently from mouse embryos by two groups. Martin Evans and Matthew Kaufman from the Department of Genetics, University of Cambridge published first in July, revealing a new technique for culturing mouse embryos in the uterus to allow for an increase in cell number, allowing for the derivation of ES cells from these embryos.[36] Gail R. Martin, from the Department of Anatomy, University of California, San Francisco, published her paper in December and coined the term "embryonic stem cell".[37] She showed that embryos could be cultured in vitro and that ES cells could be derived from these embryos. In 1998, a breakthrough occurred when researchers led by James Thomson at the University of Wisconsin-Madison first developed a technique to isolate and grow human embryonic stem cells in cell culture.[38]

On January 23, 2009, Phase I clinical trials for transplantation of oligodendrocytes (a cell type of the brain and spinal cord) derived from human ES cells into spinal cord-injured individuals received approval from the U.S. Food and Drug Administration (FDA), making it the world's first human trial of ES cells.[39] The study leading to this scientific advancement was conducted by Hans Keirstead and colleagues at the University of California, Irvine and supported by Geron Corporation of Menlo Park, CA, founded by Michael D. West, PhD. A previous experiment had shown an improvement in locomotor recovery in spinal cord-injured rats after a 7-day delayed transplantation of human ES cells that had been pushed into an oligodendrocytic lineage.[40] The phase I clinical study was designed to enroll about eight to ten paraplegics who had sustained their injuries no longer than two weeks before the trial began, since the cells must be injected before scar tissue is able to form. The researchers emphasized that the injections were not expected to fully cure the patients and restore all mobility. Based on the results of the rodent trials, researchers speculated that restoration of myelin sheaths and an increase in mobility might occur. This first trial was primarily designed to test the safety of these procedures and, if everything went well, it was hoped that it would lead to future studies involving people with more severe disabilities.[41] The trial was put on hold in August 2009 due to FDA concerns regarding a small number of microscopic cysts found in several treated rat models, but the hold was lifted on July 30, 2010.[42]

In October 2010 researchers enrolled and administered ESTs to the first patient at Shepherd Center in Atlanta.[43] The makers of the stem cell therapy, Geron Corporation, estimated that it would take several months for the stem cells to replicate and for the GRNOPC1 therapy to be evaluated for success or failure.

In November 2011 Geron announced it was halting the trial and dropping out of stem cell research for financial reasons, but would continue to monitor existing patients, and was attempting to find a partner that could continue their research.[44] In 2013 BioTime (NYSEMKT:BTX), led by CEO Dr. Michael D. West, acquired all of Geron's stem cell assets, with the stated intention of restarting Geron's embryonic stem cell-based clinical trial for spinal cord injury research.[45]

BioTime company Asterias Biotherapeutics (NYSE MKT: AST) was granted a $14.3 million Strategic Partnership Award by the California Institute for Regenerative Medicine (CIRM) to re-initiate the world's first embryonic stem cell-based human clinical trial, for spinal cord injury. Supported by California public funds, CIRM is the largest funder of stem cell-related research and development in the world.[46]

The award provides funding for Asterias to reinitiate clinical development of AST-OPC1 in subjects with spinal cord injury and to expand clinical testing of escalating doses in the target population intended for future pivotal trials.[47]

AST-OPC1 is a population of cells derived from human embryonic stem cells (hESCs) that contains oligodendrocyte progenitor cells (OPCs). OPCs and their mature derivatives called oligodendrocytes provide critical functional support for nerve cells in the spinal cord and brain. Asterias recently presented the results from phase 1 clinical trial testing of a low dose of AST-OPC1 in patients with neurologically-complete thoracic spinal cord injury. The results showed that AST-OPC1 was successfully delivered to the injured spinal cord site. Patients followed 2-3 years after AST-OPC1 administration showed no evidence of serious adverse events associated with the cells in detailed follow-up assessments including frequent neurological exams and MRIs. Immune monitoring of subjects through one year post-transplantation showed no evidence of antibody-based or cellular immune responses to AST-OPC1. In four of the five subjects, serial MRI scans performed throughout the 2-3 year follow-up period indicate that reduced spinal cord cavitation may have occurred and that AST-OPC1 may have had some positive effects in reducing spinal cord tissue deterioration. There was no unexpected neurological degeneration or improvement in the five subjects in the trial as evaluated by the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) exam.[48]

The Strategic Partnership III grant from CIRM will provide funding to Asterias to support the next clinical trial of AST-OPC1 in subjects with spinal cord injury, and for Asterias' product development efforts to refine and scale manufacturing methods to support later-stage trials and eventually commercialization. CIRM funding will be conditional on FDA approval for the trial, completion of a definitive agreement between Asterias and CIRM, and Asterias' continued progress toward the achievement of certain pre-defined project milestones.[49]

In vitro fertilization generates multiple embryos. Surplus embryos are not clinically used or are unsuitable for implantation into the patient, and may therefore be donated with the donor's consent. Human embryonic stem cells can be derived from these donated embryos, or they can be extracted from cloned embryos created using a cell from a patient and a donated egg.[50] The inner cell mass (the cells of interest) from the blastocyst-stage embryo is separated from the trophectoderm, the cells that would differentiate into extra-embryonic tissue. Immunosurgery, a process in which antibodies are bound to the trophectoderm and removed with another solution, and mechanical dissection are performed to achieve separation. The resulting inner cell mass cells are plated onto feeder cells that supply support. The inner cell mass cells attach and expand further to form a human embryonic cell line, which remains undifferentiated. These cells are fed daily and are enzymatically or mechanically separated every four to seven days. For differentiation to occur, the human embryonic stem cell line is removed from the supporting cells to form embryoid bodies, is co-cultured with a serum containing the necessary signals, or is grafted into a three-dimensional scaffold.[51]

Embryonic stem cells are derived from the inner cell mass of the early embryo, which is harvested from the donor mother animal. Martin Evans and Matthew Kaufman reported a technique that delays embryo implantation, allowing the inner cell mass to increase. This process involves removing the donor mother's ovaries and dosing her with progesterone, changing the hormone environment and causing the embryos to remain free in the uterus. After four to six days of this intrauterine culture, the embryos are harvested and grown in in vitro culture until the inner cell mass forms egg-cylinder-like structures, which are dissociated into single cells and plated on fibroblasts treated with mitomycin C (to prevent fibroblast mitosis). Clonal cell lines are created by growing up a single cell. Evans and Kaufman showed that the cells grown out from these cultures could form teratomas and embryoid bodies and differentiate in vitro, all of which indicate that the cells are pluripotent.[36]

Gail Martin derived and cultured her ES cells differently. She removed the embryos from the donor mother at approximately 76 hours after copulation and cultured them overnight in a medium containing serum. The following day, she removed the inner cell mass from the late blastocyst using microsurgery. The extracted inner cell mass was cultured on fibroblasts treated with mitomycin C in a medium containing serum and conditioned by ES cells. After approximately one week, colonies of cells grew out. These cells grew in culture and demonstrated pluripotent characteristics: the ability to form teratomas, differentiate in vitro, and form embryoid bodies. Martin referred to these cells as ES cells.[37]

It is now known that the feeder cells provide leukemia inhibitory factor (LIF) and serum provides bone morphogenetic proteins (BMPs) that are necessary to prevent ES cells from differentiating.[52][53] These factors are extremely important for the efficiency of deriving ES cells. Furthermore, it has been demonstrated that different mouse strains have different efficiencies for isolating ES cells.[54] Current uses for mouse ES cells include the generation of transgenic mice, including knockout mice. For human treatment, there is a need for patient-specific pluripotent cells. Generation of human ES cells is more difficult and faces ethical issues. So, in addition to human ES cell research, many groups are focused on the generation of induced pluripotent stem cells (iPS cells).[55]

On August 23, 2006, the online edition of the scientific journal Nature published a letter by Dr. Robert Lanza (medical director of Advanced Cell Technology in Worcester, MA) stating that his team had found a way to extract embryonic stem cells without destroying the actual embryo.[56] This technical achievement would potentially enable scientists to work with new lines of embryonic stem cells derived using public funding in the USA, where federal funding was at the time limited to research using embryonic stem cell lines derived prior to August 2001. In March 2009, the limitation was lifted.[57]

The iPSC technology was pioneered by Shinya Yamanaka's lab in Kyoto, Japan, which showed in 2006 that the introduction of four specific genes encoding transcription factors could convert adult cells into pluripotent stem cells.[58] Yamanaka was awarded the 2012 Nobel Prize along with Sir John Gurdon "for the discovery that mature cells can be reprogrammed to become pluripotent."[59]

In 2007 it was shown that pluripotent stem cells highly similar to embryonic stem cells can be generated by the delivery of three genes (Oct4, Sox2, and Klf4) to differentiated cells.[60] The delivery of these genes "reprograms" differentiated cells into pluripotent stem cells, allowing for the generation of pluripotent stem cells without the embryo. Because ethical concerns regarding embryonic stem cells typically are about their derivation from terminated embryos, it is believed that reprogramming to these "induced pluripotent stem cells" (iPS cells) may be less controversial. Both human and mouse cells can be reprogrammed by this methodology, generating both human pluripotent stem cells and mouse pluripotent stem cells without an embryo.[61]

This may enable the generation of patient-specific ES cell lines that could potentially be used for cell replacement therapies. In addition, this will allow the generation of ES cell lines from patients with a variety of genetic diseases and will provide invaluable models to study those diseases.

However, in a first indication that induced pluripotent stem cell (iPS) technology can rapidly lead to new cures, it was used by a research team headed by Rudolf Jaenisch of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, to cure mice of sickle cell anemia, as reported in Science journal's online edition on December 6, 2007.[62][63]

On January 16, 2008, a California-based company, Stemagen, announced that it had created the first mature cloned human embryos from single skin cells taken from adults. These embryos can be harvested for patient-matched embryonic stem cells.[64]

The online edition of Nature Medicine published a study on January 24, 2005, which stated that the human embryonic stem cells available for federally funded research are contaminated with non-human molecules from the culture medium used to grow the cells.[65] It is a common technique to use mouse cells and other animal cells to maintain the pluripotency of actively dividing stem cells. The problem was discovered when non-human sialic acid in the growth medium was found to compromise the potential uses of the embryonic stem cells in humans, according to scientists at the University of California, San Diego.[66]

However, a study published in the online edition of The Lancet on March 8, 2005 detailed a new stem cell line that was derived from human embryos under completely cell- and serum-free conditions. After more than six months of undifferentiated proliferation, these cells demonstrated the potential to form derivatives of all three embryonic germ layers both in vitro and in teratomas. These properties were also successfully maintained (for more than 30 passages) with the established stem cell lines.[67]

Read more:
Embryonic stem cell - Wikipedia


How processing health data has become increasingly problematic – Irish Times

August 24th, 2017 1:47 am

Almost four and a half years ago, then minister for health Dr James Reilly ordered the Health Service Executive not to destroy more than one million blood samples taken from newborn children in the Republic between 1984 and 2002.

The heel-prick tests, known as Guthrie tests, are carried out on all babies to screen for genetic conditions.

The decision to destroy the cards with the blood samples on them came after it emerged that those taken before July 1st, 2011, were being retained without consent, and therefore in breach of national and EU data protection law.

The Royal College of Physicians said at the time that there was an explosion in molecular genetics, with new knowledge being added every day, and that the museum-piece cards could prove even more valuable in the future.

The Irish Heart Foundation, which had campaigned to save the cards, said some 1,400 families that had lost a member through sudden adult death syndrome would, as a result, be able to get a genetic diagnosis to see if they were at risk.

The debate over those cards and the legality of retaining them still rumbles on, as do ethical questions about the privacy of highly sensitive medical data obtained for one purpose and whether it should ever be used for another without the consent of the original subject, in the absence of a legal exemption.

Meanwhile, medical and scientific researchers are closely watching the new EU General Data Protection Regulation (GDPR) and what it might mean for them and their work after it takes effect next May.

While the regulation allows certain exemptions for processing special categories of data, including genetic and biometric data, the Irish legislation hasn't been written yet and researchers are waiting to see what it will mean for their work.

In some cases, they are worried about what the new law will mean for historic datasets and longitudinal studies, and whether they will have to delete them on the grounds that they will not have the appropriate standards of explicit consent after May 2018 to retain them.

Even in just a few years, the medical, legal, ethical and social dilemmas involved in processing health data, including biological samples obtained from patients or research study volunteers, have become vastly more complex.

The ethical issues that arise around areas such as stem cell research, embryo research and reproductive cloning, genome sequencing, gene editing and population-scale biobanks are huge.

Opportunities for uncovering the causes of disease, for resolving fertility issues, for fixing genetic conditions, for treating cancers, are within the grasp of scientists and researchers, but there is still no international consensus on many issues.

Concerns are evolving too in light of new models for funding research, such as venture capital-backed projects where highly sensitive data used for research, and effectively a permanent record, may ultimately end up being used by or sold for profit to companies or other third parties anywhere in the world.

Researchers are still uncertain what exactly the GDPR will mean for them in terms of the exemptions from data protection legislation that will apply to so-called special categories of data including genetic and biometric health data used for research.

At a recent event in Dublin, the Irish Platform for Patients Organisations, Science and Industry (IPPOSI) explored the concerns about data protection, consent and the forthcoming regulation.

IPPOSI chief executive Dr Derick Mitchell told the event: "Patients are aware that the altruistic benefit of being involved in research far outweighs the risks, but they do expect that they will be consulted on the use of their data."

He said empowerment of the data owner was fundamental to the forthcoming changes in the law, and the event explored a model of so-called dynamic consent to allow people to consent to have their data used for research, possibly allowing broad consent at the outset and opt-outs at a later stage where they did not agree to new uses. The legal jury is still out on whether such a model is even possible.

Dr Mitchell said a national response was required to GDPR and not just for health research.

He hoped that guidance on the question of consent for processing of personal data, expected later in the year from the independent body representing all of the EU's national data protection authorities, would be a step forward.

"But I think the real crux is the code of conduct, and each institution in effect will have to develop their own code of conduct as to how they approach data protection from the beginning of projects rather than having it as a kind of tick-box exercise at the end of a project," he said.

Dr Mitchell said the explicit consent referred to in the EU regulation had very real consequences for the continuation of large-scale population biobanks, for example.

There was also an ethical argument going on as to whether a person's consent could be said to be informed if they did not know what the research project might ultimately examine.

Prof Jane Grimson, a member of the e-health Ireland committee and a former director of Health Information in the Health Information and Quality Authority, said the potential of health data and research had to be balanced with a patients right to privacy.

Ownership of patient records was critical, she said.

"I think the way we are moving now is much more towards electronic health records that will be owned and controlled by the individual. It's their information and they should be in control of who has a right to see information and the information (that is used in research)."

Ethics research committees were critical and needed to operate to a very high standard to ensure the trust of people, she added.

"It's an absolute minefield but I really think that ethics committees are critical."

Prof Orla Sheils, director of the Trinity Translational Medicine Institute and director of medical ethics at the School of Medicine, TCD, said she believed GDPR would have immediate consequences for data already being processed by researchers. It was a very grey area.

"The difficulty with that is that if data has been collected over a long period of time, a person may not want to be reminded of the time that they were ill. That's the balance you are trying to find there. So the way to get around that is to try to give people enough options up front to decide: yes, I want to gift my sample, and provided the research that's going to be done is ethical and has been approved, that's okay by me."

Prof Sheils, who sits on the St James's and Tallaght hospital research ethics committee, said all research involving humans had to be approved by such a committee.

"It's never an ethical issue if the answer is easy," she said.

"Like everything else in life, it's about finding that happy balance that people are comfortable with," she said.

"There is never really a right answer when there is an ethical dilemma. What I always say to students is that you are hoping for the least bad option."

Cathal Ryan said the new EU regulation would bring harmonisation, transparency and accountability to a very dense and complex area. The regulation was very pragmatic and the code of conduct within it would act as a form of self-regulation, with the additional oversight of an independent monitoring body. But he said adequate transparency on data protection in the sector had been lacking.

"If there is an erosion of trust, if the health sector doesn't treat an individual's data in the right way, there will be problems."

Go here to read the rest:
How processing health data has become increasingly problematic - Irish Times


Another voice: On gun violence research, California sets an example for the nation – Tampabay.com

August 24th, 2017 1:47 am

California has never been reluctant to take the lead on critical issues facing the nation. When federal funding was cut off for embryonic stem cell research, it created its own state program. It adopted standards for vehicle emissions and fuel efficiency that have been emulated by other states, and it has doubled down on a commitment to climate change policy in the face of disinterest, if not outright hostility, from the Trump administration. But perhaps nothing is more welcome than California's decision to advance the science of gun violence prevention with the establishment of the country's first publicly funded research center.

The Firearm Violence Research Center, launched last month at the University of California at Davis with a $5 million appropriation from the state, aims to find effective ways to prevent firearm violence through scientific investigation and understanding. Located at the university's Sacramento campus, the research institute will augment the work of Garen Wintemute, an emergency room physician and nationally recognized expert on the epidemiology of firearm violence who serves as its director.

California's decision to be at the forefront of research on gun violence as a public health issue stands in contrast to the dismal abdication of the federal government. Legislation passed by Congress in 1996 barring the Centers for Disease Control and Prevention from spending any funds "to advocate or promote gun control" made the agency skittish about conducting research. Scientific investigation has been key in devising lifesaving solutions to other public health issues, such as automobile safety and swimming pool safety, so the dearth of research into firearms, a leading cause of death for Americans under the age of 65, is intolerable.

Indeed, it is instructive that the lawmaker who successfully carried the National Rifle Association's water in getting the restrictive rider through Congress eventually came to have a change of heart. The late Jay Dickey, a Republican from Arkansas whose amendment led to the scarcity of gun research, joined forces with Mark Rosenberg, former director of the CDC's National Center for Injury Prevention and Control, to advocate federally funded gun research as well as to champion, in the face of predictable opposition from the gun lobby, the establishment of California's center.

Wintemute stressed that the center is not about validating predetermined political agendas but rather, as he told the Los Angeles Times, "understanding the problem of firearm violence that cuts across pro-gun and anti-gun boundaries." In wanting to confront the problem of gun violence with sound data about causes, consequences and effective solutions, California once again sets a good example.

Another voice: On gun violence research, California sets an example for the nation 08/23/17 [Last modified: Wednesday, August 23, 2017 3:40pm]

More here:
Another voice: On gun violence research, California sets an example for the nation - Tampabay.com


Company fined after ‘wilful blindness’ led to employee’s hand being … – Devon Live

August 24th, 2017 1:47 am

A textiles company has been fined £300,000 after its "wilful blindness" led to a 21-year-old having to have his left hand partially amputated.

Heathcoat Fabrics, based in Tiverton, admitted contravening health and safety regulations by failing to prevent access to the dangerous parts of the L-Stenter mangle at its plant in Westexe.

Exeter Magistrates Court on Tuesday heard that an investigation was launched by the Health and Safety Executive after an incident that occurred in the factory on August 23 which led to Anthony Seward, an employee with the company, suffering a serious crush injury to his left hand.

Prosecuting, Mr Mannell said that a light curtain, which automatically stops the rollers on the L-Stenter mangle machine when the beam of light is broken, had been installed for the machine in 2009, but it had broken down in January 2014. Replacement parts had been ordered but, rather than these being installed, a risk assessment was undertaken which concluded that the use of an emergency stop-cord would be sufficient as a safety measure.

On August 23, 2016, Mr Seward was preparing and cleaning the L-Stenter mangle for the night shift when, not realising that the rollers were on, his left hand became entangled in the machine.

Mr Mannell said: "The stenter had been used for two years and seven months without a light curtain as they felt that the stop-cord would be sufficient to prevent the risk of injury, but they failed to appreciate what could happen due to a lack of concentration or other factors when someone entered the danger zone. They were wrong that a stop-wire would be enough to reduce the risk of injury."

Mr Seward suffered severe crush injuries to four fingers on his left hand and he was flown by Air Ambulance to Bristol.

Explaining his injuries to the court, he said that he currently has no use of his left hand, he has to go back and forth to Bristol twice a week, he is not expected to regain the full use of his hand, and he may require an amputation.

He said that he was a retained firefighter but as a result of the injuries, this was no longer a career option.

Mr Mannell said: "This case is about the fact that they failed to reinstate the light curtain to stop access to the danger zone. They knew this was a risk as they had installed it as a control measure prior to the incident happening."

"The fact that they had assessed the risk and had put in the control of the light curtain previously shows how avoidable and preventable this accident was. What they did instead was inadequate and resulted in these very serious injuries."

"Their wilful blindness to the risk that was in place meant that the controls did not reach industrial standard."

Mitigating on behalf of Heathcoat Fabrics, Mr Christopher Ducann said: "This was a complete tragedy as to what had happened and it is truly regrettable. This was an avoidable accident and to that extent, we fully apologise for it."

He said that although it was not entirely clear as to the circumstances that led to the injury occurring, it was irrelevant as the law was about the risk of injury occurring.

He added: "This is a company with no previous convictions and it is a matter of considerable shame and embarrassment that they are in this court today. Within a matter of days, they took steps to prevent this happening again and they have fully co-operated with the investigation."

Sentencing the company, District Judge Stephen Nicholls said that it was clear that Mr Seward had suffered a considerable injury.

He fined Heathcoat Fabrics Ltd £300,000, and also ordered the company to pay costs of £2,862.30 and a victim surcharge of £170.

Heathcoat Fabrics Ltd pleaded guilty to the charge of contravening a health and safety regulation in that between 18 January 2014 and 24 August 2016, being an employer within the meaning of the Health & Safety at Work etc Act 1974 ("the Act"), it contravened regulation 11(1) of the Provision and Use of Work Equipment Regulations 1998 in that "you failed to take effective measures to prevent access to dangerous parts of the L-Stenter mangle and in particular, its mangle rollers, whereby you are guilty of an offence contrary to Section 33(1)(c) of the Act".

Speaking after the sentence, Cameron Harvie, managing director of Heathcoat Fabrics, said in a statement: "Heathcoat Fabrics deeply regrets the incident which resulted in today's hearing. As the Court has today acknowledged, the Company takes health and safety seriously and has an established track record in safety performance."

"In the aftermath of the incident, we have taken the opportunity to further review and improve our existing safety systems. We have co-operated fully with the HSE in its investigation into the incident."

The court also heard that Mr Seward was pursuing a civil action against the company.

Founded in 1808, Heathcoat Fabrics describes itself as a leading supplier of engineered textile solutions, offering everything from off-the-shelf fabrics to bespoke products, which it designs, develops, tests and delivers to many of the world's leading companies across the continents.

Original post:
Company fined after 'wilful blindness' led to employee's hand being ... - Devon Live


Centrelink worker faked blindness for disability pension – Northern Star

August 24th, 2017 1:47 am

A FORMER Northern Rivers resident has been charged with defrauding the Commonwealth by pretending she was blind to obtain a disability pension.

And it was an inside job - as Rebecca Teece, now 35, was working for Centrelink at the time.

Teece, also known as Rebecca O'Grady, is facing four counts of obtaining financial advantage by deception over her alleged use of fake medical reports and fake names to claim eligibility for pension payments between 2012 and 2015.

Teece worked at Centrelink offices in Coffs Harbour and then Pottsville when the alleged fraud took place.

In August 2012, Teece allegedly lodged a fake report from an ophthalmologist called Dr D. Gregor to justify a claim for a blindness disability pension. She was working in North Boambee Valley at the time.

As a result of the alleged deception, Teece received payments between December 17, 2012, and May 1, 2015.

Two years later, while working in Pottsville, Teece is alleged to have used a fake name, Rachel Lewis, to lodge another fraudulent claim for a disability pension.

Court papers allege Teece made the claim between October 30, 2014, and May 1, 2015, and as a result received payments between November 28, 2014, and May 1, 2015.

During this period Teece allegedly struck a third time, this time between March 5 and 9, 2015.

On this occasion she made and then approved her own claim for a carer's payment under the fictitious name of Margereet Lewis.

She is alleged to have done this twice.

Teece was served with a court attendance notice on January 30 this year and the matter was mentioned in Lismore Local Court on Tuesday this week.

It was adjourned to September 19 to return to Lismore Local Court.

Magistrate David Heilpern said no further adjournments would be allowed.

See original here:
Centrelink worker faked blindness for disability pension - Northern Star


What Happens to Your Eyes If You Look Directly at the Sun During a Solar Eclipse? – TIME

August 24th, 2017 1:47 am

For the first time in U.S. history, a solar eclipse will travel exclusively across America, enabling millions of people to watch the moon block out the sun on Aug. 21. (Watch TIME's livestream of the total eclipse beginning at 12 p.m. ET on Monday.) But those who watch this rare celestial event in person need to take precautions, because staring right at the sun can quickly harm your eyes.

"Looking directly at the sun is unsafe except during the brief total phase of a solar eclipse (totality), when the moon entirely blocks the sun's bright face, which will happen only within the narrow path of totality," NASA explains on its website. "The only safe way to look directly at the uneclipsed or partially eclipsed sun is through special-purpose solar filters, such as 'eclipse glasses.'"

The path of totality, which is about 70 miles wide, is viewable from parts of 14 states, as shown on this solar eclipse map, and only lasts a maximum of two minutes and 40 seconds, according to NASA. Before and after the total solar eclipse, those in its path will see a partial eclipse, in which the moon only partly blocks the sun. The rest of the country will also see a partial eclipse, so essentially everyone needs to prepare themselves to view the eclipse safely.


Here's what you need to know about why a solar eclipse hurts your eyes and how to protect your eyes effectively:

According to experts, viewing the sun with your naked eye during the eclipse can burn your retina, damaging the images your brain can view. This phenomenon, known as "eclipse blindness," can cause temporary or permanent vision impairment, and in worst-case scenarios can lead to legal blindness, which entails significant loss of vision.

"If people look without the proper protection [at the sun], they run the risk of injuring their eyes. And if they get an injury, depending on how often and how long they look at the sun without the protection, they do have a substantial risk of developing a permanent loss of vision," said Dr. B. Ralph Chou, president of the Royal Astronomical Society of Canada and a former optometry professor. It is not possible to go completely blind from looking at the eclipse, Chou said, because the injury is limited to the central part of your visual field.

There are no immediate symptoms or pain associated with the damage (the retina doesn't have any pain receptors), so it's hard to know at the time if you've actually been afflicted with eclipse blindness. If you look at the sun unfiltered, you may immediately notice a dazzle effect, or a glare, the way you would from any bright object, but that doesn't necessarily mean your retina is damaged. According to Chou, symptoms generally begin occurring 12 hours after viewing the eclipse, when people wake up in the morning and notice their vision has been altered.

"They can't see faces in the mirror, they can't read the newspaper or the smartphone display, they're having trouble looking at road signs, and basically they've got this center spot in their vision that is intensely blurred," Chou said.

There are no remedies to effectively mitigate the injury, said Chou, aside from waiting and seeing if the patient regains vision. This does happen, but not until at least three months after the injury.

Yes. People have hurt their eyes by watching the sun during a solar eclipse unfiltered. However, it is a relatively rare occurrence. Although Chou said there is no definitive data on the number of people afflicted with eclipse blindness, he noted that after a solar eclipse crossed Britain in 1999, ophthalmologists reported 70 instances of eye injuries, and the majority of those people had viewed the eclipse unfiltered. In Canada, 20 cases were reported following the total solar eclipse of 1979. Of the cases reported over the years, Chou said half the people afflicted completely recovered their vision over the course of the following year.

"It's a fact that for individual practitioners, they are not seeing that many [cases] overall," Chou said. "It's only if you start looking at large populations in the hundreds of millions that you start adding up into significant numbers."

To ensure your experience is injury-free, listen to NASA's advisory and buy eclipse glasses, which block approximately 99.99% of light rays. But also make sure to follow NASA's instructions for using these glasses. When the glasses are on, NASA says, it is imperative that you don't look at the sun through an unfiltered camera lens, telescope, or binoculars.

Additionally, make sure that the brand of glasses you buy has been verified to meet the international safety standard, something Chou emphasized as critical to injury prevention. The American Astronomical Society has released a list of manufacturers selling these glasses that meet this standard. NASA also suggests you inspect your filter before putting it on, and discard it if it has any scratches or damages.

"If you don't try to sneak a peek without the filter," says Chou, "then you should not run any risk of being hurt."

Continue reading here:
What Happens to Your Eyes If You Look Directly at the Sun During a Solar Eclipse? - TIME


Blindness set to triple globally by 2050 – InDaily – InDaily

August 24th, 2017 1:47 am

Blindness affects 36 million people globally, with the greatest burden in developing countries, a global investigation has found.

Forecasts predict that there will be almost 115 million cases of blindness and 588 million people with moderate to severe vision impairment in 2050 (up from figures of 36 million and 217 million today, respectively).

Worldwide, there are an estimated 36 million people who are blind, and this is set to grow to almost 115 million people by 2050.

The greatest burden will be found in developing countries in Asia and sub-Saharan Africa, according to a study published in The Lancet Global Health journal.

"With the number of people with vision impairment accelerating, we must take action to increase our current treatment efforts at global, regional and country levels," says lead author Professor Rupert Bourne of Anglia Ruskin University, UK.

"Investing in these treatments has previously reaped considerable benefits, including improved quality of life, and economic benefits as people remain in work."

Although rates of blindness and vision impairment have gone down in recent years, as the world population ages, the number of cases has increased. The new estimates highlight the need to scale up efforts to alleviate vision impairment to help improve quality of life, and educational and economic opportunities globally.

"Even mild visual impairment can significantly impact a person's life, for example reducing their independence in many countries, as it often means people are barred from driving, as well as reducing educational and economic opportunities," Professor Bourne says.

The greatest number of people who are blind reside in south, east and southeast Asia, while rates of blindness among older adults are highest in eastern and western sub-Saharan Africa and south Asia.

The study analysed the prevalence of blindness and vision impairment in 188 countries between 1990 and 2015, as well as providing projections for 2020 and 2050.

The study, funded by the Brien Holden Vision Institute, involved researchers from Anglia Ruskin University, the University of Melbourne, University of New South Wales, University of Auckland and Flinders University.

It is the first study to include figures on presbyopia, a condition that affects one's ability to read, is associated with ageing, and can be treated with eyeglasses. It finds that almost 1,095 million people aged over 35 are affected by the condition, including almost 667 million people over 50.

The researchers estimate that the crude prevalence of blindness globally declined from 0.75% in 1990 to 0.48% in 2015, while the rate of moderate to severe vision impairment fell from 3.83% to 2.90%. This is likely to be a result of socio-economic development, targeted public health programs, and greater access to eye health services.
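As a consistency check, the crude prevalence rates can be multiplied back against the world population to reproduce the headline case counts. A minimal sketch, assuming a 2015 world population of roughly 7.35 billion (a figure not stated in the article):

```python
# Cross-check: crude prevalence x world population ~= headline case count.
WORLD_POP_2015 = 7.35e9  # assumed 2015 world population (not from the article)

blind = 0.0048 * WORLD_POP_2015       # 0.48% crude prevalence of blindness
mod_severe = 0.0290 * WORLD_POP_2015  # 2.90% moderate-to-severe impairment

print(blind / 1e6)       # about 35 million, consistent with the cited 36 million
print(mod_severe / 1e6)  # about 213 million, consistent with the cited 217 million
```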

However, because most vision impairment is a result of ageing, the number of people affected has increased globally as the population continues to grow and age, rising from 30.6 million blind people in 1990 to 36 million in 2015, and from 160 million to 217 million people with moderate to severe vision impairment.

In addition, the study projections suggest that prevalence rates could see an upturn by 2020 (to 0.50% for blindness and 3.06% for vision impairment). They also predict further increases in the number of cases by 2050 if treatment is not improved: almost 115 million cases of blindness, and 588 million people with moderate to severe vision impairment.

The areas most affected include developing regions: for example, 11.7 million people who are blind lived in south Asia in 2015, 6.2 million in east Asia, and 3.5 million in Southeast Asia. The same three regions were also home to the most people with moderate or severe vision impairment (61.2 million in south Asia, 52.9 million in east Asia, and 20.8 million in Southeast Asia).

Rates of blindness and vision impairment varied by region. In 2015, in western and eastern sub-Saharan Africa and south Asia the prevalence of blindness was more than 4%, while it was 0.5% or less in all high income regions (high income Asia Pacific, western Europe, Australasia, northern America, central Europe and eastern Europe).

While moderate to severe vision impairment rates were highest in south Asia, north Africa, the Middle East, and western and central sub-Saharan Africa, rates were lowest in the high-income regions.

To counter the growing numbers of cases of blindness and vision impairment, the researchers note the importance of investing in treatments. They also note that, between 1990 and 2010, when investments were made in treatments for vision impairment, prevalence of blindness reduced.

"Interventions for vision impairment provide some of the largest returns on investment, and are some of the most easily implemented interventions in developing regions because they are cheap, require little infrastructure and countries recover their costs as people enter back into the workforce," Professor Bourne says.

Read more:
Blindness set to triple globally by 2050 - InDaily - InDaily

Read More...

Heritability of IQ – Wikipedia

August 24th, 2017 1:46 am

Research on heritability of IQ infers, from the similarity of IQ in closely related persons, the proportion of variance of IQ among individuals in a study population that is associated with genetic variation within that population. This provides a maximum estimate of genetic versus environmental influence for phenotypic variation in IQ in that population. "Heritability", in this sense, "refers to the genetic contribution to variance within a population and in a specific environment".[1] In other words, heritability is a mathematical estimate that indicates how much of a trait's variation can be attributed to genes. There has been significant controversy in the academic community about the heritability of IQ since research on the issue began in the late nineteenth century.[2] Intelligence in the normal range is a polygenic trait, meaning it is influenced by more than one gene.[3][4]

The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late teens and adults.[5][6] In simpler terms, IQ goes from being weakly correlated with genetics in children to being strongly correlated with genetics in late teens and adults. The heritability of IQ increases with age, reaches an asymptote at 18–20 years of age, and continues at that level well into adulthood.[7] Recent studies suggest that family and parenting characteristics are not significant contributors to variation in IQ scores;[8] however, poor prenatal environment, malnutrition and disease can have deleterious effects.[9][10]

"Heritability" is defined as the proportion of variance in a trait which is attributable to genetic variation within a defined population in a specific environment.[1] Heritability takes a value ranging from 0 to 1; a heritability of 1 indicates that all variation in the trait in question is genetic in origin and a heritability of 0 indicates that none of the variation is genetic. The determination of many traits can be considered primarily genetic under similar environmental backgrounds. For example, a 2006 study found that adult height has a heritability estimated at 0.80 when looking only at the height variation within families where the environment should be very similar.[11] Other traits have lower heritabilities, which indicate a relatively larger environmental influence. For example, a twin study on the heritability of depression in men calculated it as 0.29, while it was 0.42 for women in the same study.[12] Contrary to popular[citation needed] belief, two parents of higher IQ will not necessarily produce offspring of equal or higher intelligence. In fact, according to the concept of regression toward the mean, parents whose IQ is at either extreme are more likely to produce offspring with IQ closer to the mean (or average).[13][14]

There are a number of points to consider when interpreting heritability.

Various studies have found the heritability of IQ to be between 0.7 and 0.8 in adults and 0.45 in childhood in the United States.[6][18][19] It may seem reasonable to expect that genetic influences on traits like IQ should become less important as one gains experiences with age. However, that the opposite occurs is well documented. Heritability measures in infancy are as low as 0.2, around 0.4 in middle childhood, and as high as 0.8 in adulthood.[7] One proposed explanation is that people with different genes tend to seek out different environments that reinforce the effects of those genes.[6] The brain undergoes morphological changes in development which suggests that age-related physical changes could also contribute to this effect.[20]

A 1994 article in Behavior Genetics based on a study of Swedish monozygotic and dizygotic twins found the heritability of the sample to be as high as 0.80 in general cognitive ability; however, it also varies by trait, with 0.60 for verbal tests, 0.50 for spatial and speed-of-processing tests, and 0.40 for memory tests. In contrast, studies of other populations estimate an average heritability of 0.50 for general cognitive ability.[18]

In 2006, The New York Times Magazine listed about three quarters as a figure held by the majority of studies.[21]

There are some family effects on the IQ of children, accounting for up to a quarter of the variance. However, adoption studies show that by adulthood adoptive siblings are no more similar in IQ than strangers,[22] while adult full siblings show an IQ correlation of 0.24. However, some studies of twins reared apart (e.g. Bouchard, 1990) find a significant shared environmental influence of at least 10% going into late adulthood.[19] Judith Rich Harris suggests that this might be due to biasing assumptions in the methodology of the classical twin and adoption studies.[23]

There are aspects of environments that family members have in common (for example, characteristics of the home). This shared family environment accounts for 0.25-0.35 of the variation in IQ in childhood. By late adolescence it is quite low (zero in some studies). There is a similar effect for several other psychological traits. These studies have not looked into the effects of extreme environments such as in abusive families.[6][22][24][25]

The American Psychological Association's report "Intelligence: Knowns and Unknowns" (1995) states that there is no doubt that normal child development requires a certain minimum level of responsible care. Severely deprived, neglectful, or abusive environments must have negative effects on a great many aspects of development, including intellectual aspects. Beyond that minimum, however, the role of family experience is in serious dispute. There is no doubt that such variables as resources of the home and parents' use of language are correlated with children's IQ scores, but such correlations may be mediated by genetic as well as (or instead of) environmental factors. But how much of that variance in IQ results from differences between families, as contrasted with the varying experiences of different children in the same family? Recent twin and adoption studies suggest that while the effect of the shared family environment is substantial in early childhood, it becomes quite small by late adolescence. These findings suggest that differences in the life styles of families, whatever their importance may be for many aspects of children's lives, make little long-term difference for the skills measured by intelligence tests.

Although parents treat their children differently, such differential treatment explains only a small amount of non-shared environmental influence. One suggestion is that children react differently to the same environment due to different genes. More likely influences may be the impact of peers and other experiences outside the family.[6][24] For example, siblings growing up in the same household may have different friends and teachers and even contract different illnesses. This factor may be one of the reasons why IQ score correlations between siblings decrease as they get older.[26]

Certain single-gene genetic disorders can severely affect intelligence. Phenylketonuria is an example,[27] with publications demonstrating the capacity of phenylketonuria to produce a reduction of 10 IQ points on average.[28] Meta-analyses have found that environmental factors, such as iodine deficiency, can result in large reductions in average IQ; iodine deficiency has been shown to produce a reduction of 12.5 IQ points on average.[29]

The APA report "Intelligence: Knowns and Unknowns" (1995) also stated that:

"We should note, however, that low-income and non-white families are poorly represented in existing adoption studies as well as in most twin samples. Thus it is not yet clear whether these studies apply to the population as a whole. It remains possible that, across the full range of income and ethnicity, between-family differences have more lasting consequences for psychometric intelligence."[6]

A study (1999) by Capron and Duyme of French children adopted between the ages of four and six examined the influence of socioeconomic status (SES). The children's IQs initially averaged 77, putting them near retardation. Most were abused or neglected as infants, then shunted from one foster home or institution to the next. Nine years after adoption, when they were on average 14 years old, they retook the IQ tests, and all of them did better. The amount they improved was directly related to the adopting family's socioeconomic status. "Children adopted by farmers and laborers had average IQ scores of 85.5; those placed with middle-class families had average scores of 92. The average IQ scores of youngsters placed in well-to-do homes climbed more than 20 points, to 98."[21][30]

Stoolmiller (1999) argued that the range of environments in previous adoption studies were restricted. Adopting families tend to be more similar on, for example, socio-economic status than the general population, which suggests a possible underestimation of the role of the shared family environment in previous studies. Corrections for range restriction to adoption studies indicated that socio-economic status could account for as much as 50% of the variance in IQ.[31]

On the other hand, the effect of this was examined by Matt McGue and colleagues (2007), who wrote that "restriction in range in parent disinhibitory psychopathology and family socio-economic status had no effect on adoptive-sibling correlations [in] IQ".[32]

Turkheimer and colleagues (2003) argued that the proportions of IQ variance attributable to genes and environment vary with socioeconomic status. In a study of seven-year-old twins, they found that in impoverished families 60% of the variance in early childhood IQ was accounted for by the shared family environment and the contribution of genes was close to zero, while in affluent families the result was almost exactly the reverse.[33]

In contrast to Turkheimer (2003), a study by Nagoshi and Johnson (2005) concluded that the heritability of IQ did not vary as a function of parental socioeconomic status in the 949 families of Caucasian and 400 families of Japanese ancestry who took part in the Hawaii Family Study of Cognition.[34]

Asbury and colleagues (2005) studied the effect of environmental risk factors on verbal and non-verbal ability in a nationally representative sample of 4-year-old British twins. There was no statistically significant interaction for non-verbal ability, but the heritability of verbal ability was found to be higher in low-SES and high-risk environments.[35]

Harden and colleagues (2007) investigated adolescents, most 17 years old, and found that, among higher income families, genetic influences accounted for approximately 55% of the variance in cognitive aptitude and shared environmental influences about 35%. Among lower income families, the proportions were in the reverse direction, 39% genetic and 45% shared environment.[36]

Rushton and Jensen (2010) criticized many of these studies for being done on children or adolescents. They argued that heritability increases during childhood and adolescence, and even increases greatly between 16–20 years of age and adulthood, so one should be cautious drawing conclusions regarding the role of genetics from studies where the participants are not adults. Furthermore, the studies typically did not examine whether IQ gains due to adoption were on the general intelligence factor (g). When the studies by Capron and Duyme were re-examined, IQ gains from being adopted into high-SES homes were on non-g factors. By contrast, the adopted children's g mainly depended on their biological parents' SES, which implied that g is more difficult to change environmentally.[17] The most cited adoption projects that sought to estimate the heritability of IQ were those of Texas,[37] Colorado[38] and Minnesota,[39] all started in the 1970s. These studies showed that while the adoptive parents' IQ does correlate with adoptees' IQ in early life, by the time the adoptees reach adolescence the correlation has faded and disappeared. The correlation with the biological parents seemed to explain most of the variation.

A 2011 study by Tucker-Drob and colleagues reported that at age 2, genes accounted for approximately 50% of the variation in mental ability for children being raised in high socioeconomic status families, but genes accounted for negligible variation in mental ability for children being raised in low socioeconomic status families. This gene-environment interaction was not apparent at age 10 months, suggesting that the effect emerges over the course of early development.[40]

A 2012 study based on a representative sample of twins from the United Kingdom, with longitudinal data on IQ from age two to age fourteen, did not find evidence for lower heritability in low-SES families. However, the study indicated that the effects of shared family environment on IQ were generally greater in low-SES families than in high-SES families, resulting in greater variance in IQ in low-SES families. The authors noted that previous research had produced inconsistent results on whether or not SES moderates the heritability of IQ. They suggested three explanations for the inconsistency. First, some studies may have lacked statistical power to detect interactions. Second, the age range investigated has varied between studies. Third, the effect of SES may vary in different demographics and different countries.[41]

A 2017 King's College London study suggests that genes account for nearly 50 per cent of the differences in whether or not children are socially mobile.[42]

A meta-analysis by Devlin and colleagues (1997) of 212 previous studies evaluated an alternative model for environmental influence and found that it fits the data better than the 'family-environments' model commonly used. The shared maternal (fetal) environment effects, often assumed to be negligible, account for 20% of covariance between twins and 5% between siblings, and the effects of genes are correspondingly reduced, with two measures of heritability being less than 50%. They argue that the shared maternal environment may explain the striking correlation between the IQs of twins, especially those of adult twins that were reared apart.[2] IQ heritability increases during early childhood, but whether it stabilizes thereafter remains unclear.[2] These results have two implications: a new model may be required regarding the influence of genes and environment on cognitive function; and interventions aimed at improving the prenatal environment could lead to a significant boost in the population's IQ.[2]

Bouchard and McGue reviewed the literature in 2003, arguing that Devlin's conclusions about the magnitude of heritability are not substantially different from previous reports and that their conclusions regarding prenatal effects stand in contradiction to many previous reports.[43] They write that:

Chipuer et al. and Loehlin conclude that the postnatal rather than the prenatal environment is most important. The Devlin et al. (1997a) conclusion that the prenatal environment contributes to twin IQ similarity is especially remarkable given the existence of an extensive empirical literature on prenatal effects. Price (1950), in a comprehensive review published over 50 years ago, argued that almost all MZ twin prenatal effects produced differences rather than similarities. As of 1950 the literature on the topic was so large that the entire bibliography was not published. It was finally published in 1978 with an additional 260 references. At that time Price reiterated his earlier conclusion (Price, 1978). Research subsequent to the 1978 review largely reinforces Price's hypothesis (Bryan, 1993; Macdonald et al., 1993; Hall and Lopez-Rangel, 1996; see also Martin et al., 1997, box 2; Machin, 1996).[43]

Dickens and Flynn (2001) argued that the "heritability" figure includes both a direct effect of the genotype on IQ and also indirect effects where the genotype changes the environment, in turn affecting IQ. That is, those with a higher IQ tend to seek out stimulating environments that further increase IQ. The direct effect can initially have been very small but feedback loops can create large differences in IQ. In their model an environmental stimulus can have a very large effect on IQ, even in adults, but this effect also decays over time unless the stimulus continues. This model could be adapted to include possible factors, like nutrition in early childhood, that may cause permanent effects.

The Flynn effect is the increase in average intelligence test scores by about 0.3% annually, resulting in the average person today scoring 15 points higher in IQ compared to the generation 50 years ago.[44] This effect can be explained by a generally more stimulating environment for all people. The authors suggest that programs aiming to increase IQ would be most likely to produce long-term IQ gains if they taught children how to replicate outside the program the kinds of cognitively demanding experiences that produce IQ gains while they are in the program and motivate them to persist in that replication long after they have left the program.[45][46] Most of the improvements have allowed for better abstract reasoning, spatial relations, and comprehension. Some scientists have suggested that such enhancements are due to better nutrition, better parenting and schooling, as well as exclusion of the least intelligent, genetically inferior, people from reproduction. However, Flynn and a group of other scientists share the viewpoint that modern life implies solving many abstract problems which leads to a rise in their IQ scores.[44]
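The two Flynn-effect figures quoted above are mutually consistent: compounding a baseline score of 100 by about 0.3% per year for 50 years yields a gain of roughly 16 points, in line with the 15 points cited. A quick arithmetic sketch:

```python
# Compound a baseline IQ of 100 by 0.3% per year for 50 years (Flynn effect).
mean_iq = 100.0
for _ in range(50):
    mean_iq *= 1.003

gain = mean_iq - 100.0
print(gain)  # about 16 points, in line with the ~15 points cited
```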

More recent research has illuminated genetic factors underlying IQ stability and change. Genome-wide association studies have demonstrated that the genes involved in intelligence remain fairly stable over time.[47] Specifically, in terms of IQ stability, "genetic factors mediated phenotypic stability throughout this entire period [age 0 to 16], whereas most age-to-age instability appeared to be due to non-shared environmental influences".[48][49] These findings have been replicated extensively and observed in the United Kingdom,[50] the United States,[48][51] and the Netherlands.[52][53][54][55] Additionally, researchers have shown that naturalistic changes in IQ occur in individuals at variable times.[56]

Spatial ability has been shown to be unifactorial (a single score accounts well for all spatial abilities), and is 69% heritable in a sample of 1,367 twins aged 19 through 21.[57] Further, only 8% of spatial ability can be accounted for by shared environmental factors such as school and family.[58] Of the genetically determined portion of spatial ability, 24% is shared with verbal ability (general intelligence) and 43% is specific to spatial ability alone.[59]

A 2009 review article identified over 50 genetic polymorphisms that have been reported to be associated with cognitive ability in various studies, but noted that the discovery of small effect sizes and lack of replication have characterized this research so far.[60] Another study attempted to replicate 12 reported associations between specific genetic variants and general cognitive ability in three large datasets, but found that only one of the genotypes was significantly associated with general intelligence in one of the samples, a result expected by chance alone. The authors concluded that most reported genetic associations with general intelligence are probably false positives brought about by inadequate sample sizes. Arguing that common genetic variants explain much of the variation in general intelligence, they suggested that the effects of individual variants are so small that very large samples are required to reliably detect them.[61] Genetic diversity within individuals is heavily correlated with IQ.[62]

A novel molecular genetic method for estimating heritability calculates the overall genetic similarity (as indexed by the cumulative effects of all genotyped single nucleotide polymorphisms) between all pairs of individuals in a sample of unrelated individuals and then correlates this genetic similarity with phenotypic similarity across all the pairs. A study using this method estimated that the lower bounds for the narrow-sense heritability of crystallized and fluid intelligence are 40% and 51%, respectively. A replication study in an independent sample confirmed these results, reporting a heritability estimate of 47%.[63] These findings are compatible with the view that a large number of genes, each with only a small effect, contribute to differences in intelligence.[61]
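The logic of this molecular method can be illustrated with a toy simulation (an illustrative sketch only, not the actual GREML/GCTA software; the sample size, SNP count, effect distribution, and the simple Haseman-Elston-style regression used here are all invented for brevity): simulate unrelated individuals whose trait is the sum of many small additive SNP effects plus noise, then regress pairwise phenotypic products on pairwise genome-wide genetic similarity.

```python
import random

random.seed(1)
N, M, H2 = 250, 200, 0.5  # individuals, SNPs, simulated heritability (invented)

# Biallelic SNPs with random allele frequencies and small additive effects.
freqs = [random.uniform(0.1, 0.9) for _ in range(M)]
effects = [random.gauss(0, 1) for _ in range(M)]
genos = [[sum(random.random() < p for _ in range(2)) for p in freqs]
         for _ in range(N)]

# Standardize each genotype: mean 0, variance 1 under Hardy-Weinberg.
Z = [[(g - 2 * p) / (2 * p * (1 - p)) ** 0.5 for g, p in zip(row, freqs)]
     for row in genos]

# Genetic values, rescaled so they contribute exactly H2 of the variance.
raw = [sum(z * b for z, b in zip(row, effects)) for row in Z]
mu = sum(raw) / N
var_g = sum((r - mu) ** 2 for r in raw) / N
scale = (H2 / var_g) ** 0.5
pheno = [(r - mu) * scale + random.gauss(0, (1 - H2) ** 0.5) for r in raw]

# Standardize the phenotype.
mp = sum(pheno) / N
sd = (sum((y - mp) ** 2 for y in pheno) / N) ** 0.5
Y = [(y - mp) / sd for y in pheno]

# Regress pairwise phenotypic products on pairwise genetic similarity:
# the slope estimates narrow-sense heritability.
xs, ys = [], []
for i in range(N):
    for j in range(i + 1, N):
        sim = sum(a * b for a, b in zip(Z[i], Z[j])) / M
        xs.append(sim)
        ys.append(Y[i] * Y[j])
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 2))  # noisy estimate of the simulated H2 of 0.5
```

The estimate is noisy at this toy scale, which mirrors the point made above: reliably detecting many tiny effects requires very large samples.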

The relative influence of genetics and environment for a trait can be calculated by measuring how strongly traits covary in people of a given genetic (unrelated, siblings, fraternal twins, or identical twins) and environmental (reared in the same family or not) relationship. One method is to consider identical twins reared apart, with any similarities which exist between such twin pairs attributed to genotype. In terms of correlation statistics, this means that theoretically the correlation of test scores between monozygotic twins would be 1.00 if genetics alone accounted for variation in IQ scores; likewise, siblings and dizygotic twins share on average half of their alleles, and the correlation of their scores would be 0.50 if IQ were affected by genes alone (or greater if, as is undoubtedly the case, there is a positive correlation between the IQs of spouses in the parental generation). Practically, however, the upper bound of these correlations is given by the reliability of the test, which is 0.90 to 0.95 for typical IQ tests.[64]

If there is biological inheritance of IQ, then the relatives of a person with a high IQ should exhibit a comparably high IQ with a much higher probability than the general population. In 1982, Bouchard and McGue reviewed such correlations reported in 111 original studies in the United States. The mean correlation of IQ scores between monozygotic twins was 0.86, between siblings, 0.47, between half-siblings, 0.31, and between cousins, 0.15.[65]
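These correlations are the raw material for the classic Falconer estimate of heritability, h2 = 2 * (r_MZ - r_DZ). A rough sketch using the Bouchard and McGue figures, treating the sibling correlation as a stand-in for dizygotic twins (a simplification that ignores shared-environment and assortative-mating corrections):

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's formula: h2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Correlations from the Bouchard and McGue (1982) review; the sibling
# correlation (0.47) stands in for dizygotic twins here.
print(round(falconer_h2(0.86, 0.47), 2))  # 0.78
```

The resulting 0.78 sits at the upper end of the adult heritability range (0.7 to 0.8) quoted earlier, as expected for a method that attributes the whole MZ-DZ gap to additive genetic effects.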

The 2006 edition of Assessing Adolescent and Adult Intelligence by Alan S. Kaufman and Elizabeth O. Lichtenberger reports correlations of 0.86 for identical twins raised together compared to 0.76 for those raised apart and 0.47 for siblings.[66] These numbers are not necessarily static. When comparing pre-1963 to late-1970s data, researchers DeFries and Plomin found that the IQ correlation between parent and child living together fell significantly, from 0.50 to 0.35. The opposite occurred for fraternal twins.[67]


Although IQ differences between individuals are shown to have a large hereditary component, it does not follow that mean group-level disparities (between-group differences) in IQ necessarily have a genetic basis. The Flynn effect is one example where there is a large difference between groups (past and present) with little or no genetic difference. An analogy, attributed to Richard Lewontin,[70] illustrates this point:

Suppose two handfuls are taken from a sack containing a genetically diverse variety of corn, and each is grown under carefully controlled and standardized conditions, except that one batch is lacking in certain nutrients that are supplied to the other. After several weeks, the plants are measured. There is variability of growth within each batch, due to the genetic variability of the corn. Given that the growing conditions are closely controlled, nearly all the variation in the height of the plants within a batch will be due to differences in their genes. Thus, within populations, heritabilities will be very high. Nevertheless, the difference between the two groups is due entirely to an environmental factor: differential nutrition. Lewontin didn't go so far as to have the one set of pots painted white and the other set black, but you get the idea. The point of the example, in any case, is that the causes of between-group differences may in principle be quite different from the causes of within-group variation.[71]
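Lewontin's corn analogy is straightforward to simulate; a minimal sketch with invented numbers (a genetic spread with standard deviation 10 and a 30-unit nutrient deficit):

```python
import random

random.seed(0)

def grow_batch(n, nutrient_penalty):
    # Genetic potential drawn from the same diverse stock for both batches;
    # growing conditions are uniform except for the nutrient deficit.
    return [random.gauss(100, 10) - nutrient_penalty for _ in range(n)]

full = grow_batch(1000, nutrient_penalty=0)       # complete nutrients
deprived = grow_batch(1000, nutrient_penalty=30)  # same genetics, poorer soil

mean_full = sum(full) / len(full)
mean_dep = sum(deprived) / len(deprived)

# Within each batch, all variation is genetic (conditions are uniform);
# the entire between-batch gap is the 30-unit environmental deficit.
print(round(mean_full - mean_dep, 1))  # close to 30
```

Heritability within each batch is near 100%, yet the between-batch difference is entirely environmental, which is the point of the analogy.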

Arthur Jensen has written in agreement that this is technically correct, but he has also stated that a high heritability increases the probability that genetics play a role in average group differences.[72][73]

See the original post:
Heritability of IQ - Wikipedia

Read More...

Genesis and Genetics | We look at Genetics in Genesis

August 24th, 2017 1:46 am

One lingering mystery concerning Noah's ark is: How many animals were on board? Since DNA has a very good reputation for solving mysteries in the courtroom, now it's time to unleash its powers and reveal Noah's passenger list.

As we look about the earth we see a multitude of animals reproducing after their kind, each retaining their distinction as a kind/species. How does this happen? Two things are required for kinds/species to remain distinct:

(1) They must have the desire (instincts coded in their DNA) to mate with their own kind/species and

(2) They must have the ability (compatible DNA) to produce viable offspring like themselves.

These two requirements are the basis for both the Biblical and secular scientific definition of species/kinds. The words species and kinds are synonyms, but usually species is used by the secular scientific community and kinds is used by the Biblical community. Nonetheless, both words should define the same creatures, and our conclusion is that they do. Our position is as follows:

Fundamentally, all of the species currently defined by modern science were on the Ark

Consider humans: we have the desire and ability to produce more humans like ourselves. We know that we cannot produce a pig or a chimpanzee because we do not have the genetic ability in our DNA to do so.

Next, consider the great horned owl: they desire to mate with other great horned owls and they have the ability to produce other great horned owls. However, their DNA does not produce the desire or the ability to create a bluebird, a barn owl, or even an eagle owl, which is in the same genus as the great horned owl.

We wrote a technical paper, "The Genetics of Kinds: Ravens, Owls, and Doves", and found that not one of the owl kinds/species we examined could possibly produce any other owl kind/species. That is also true for the ravens and doves. They differ from one another by too much genetic information. We also wrote a technical paper, "A Study of Biblical Kinds Using 62 Species of Mice", which showed that the various species/kinds of mouse DNA differ from one another by significant amounts, with distinct DNA gaps between the kinds/species. It would be impossible to bridge these gaps by means of any natural process.

Our study of the mouse was very interesting in that we found that there are more than one hundred mouse kinds/species and they all remain distinct. How do they do it? They have been magnificently designed with the desire and ability to reproduce after their kinds. Here are a few facts: They can read each other's genetics like a barcode (Ref 1). They mate only with their own species (Ref 2). They don't breed with close relatives (Ref 3), and the males do not mate with underage females (Ref 4). All of this is coded in the DNA, and not only does it preserve their distinctiveness, but it also maintains good genetic health. You may read all about it, get all of the references, and gain access to all of the DNA sequences at: A Study of Biblical Kinds Using 62 Species of Mice.

If only a few kinds had been on the Ark, there would only be a few kinds now. The scriptures are clear: every kind was created (Genesis 1); every kind was loaded on the Ark (Genesis 6:19-20); and every kind disembarked from the Ark (Genesis 8:17-20). The kinds were distinct and remain distinct.

Our conclusion would necessitate that on the order of 6,000 amphibian, 10,000 bird, 6,000 mammal, and 8,000 reptile kinds/species were aboard the Ark. Accounting for pairs, sevens of clean animals, and those that have gone extinct since the flood, the total number aboard the Ark would be on the order of 100,000. This would be no problem for the very large Ark with all of the animals in Biblical deep sleep (Ref 5).

As we look at this glorious creation, we see that the kinds are distinct. They are distinct because they have both the desire and ability to mate with their own kind and produce offspring of like kind. God always does things right, and in order to replenish the earth properly, He gave every kind a berth on the Ark. All of the passengers were peacefully asleep being transported to a new world filled with adventure and hope.

Key words:

Animals of the Ark, Species on the Ark, Kinds on the Ark, Noah's Ark, species vs. kinds, and DNA Noah's Ark

Additional Suggested Reading:

Noah's Ark: A Fresh Look

Noah's Ark: Hermetically Sealed and Safe

References:

1. Beynon, R.J. and Hurst, J.L., 2003. Multiple roles of major urinary proteins in the house mouse, Mus domesticus. Biochem Soc Trans 31(Pt 1):142-146. PMID: 12546672.

2. Lane, R.P., Young, J., Newman, T., and Trask, B.J., 2004. Species specificity in rodent pheromone receptor repertoires. Genome Res 14:603-608.

3. Sherborne, A.L., Thom, M.D., Paterson, S., Jury, F., Ollier, W.E.R., Stockley, P., Beynon, R.J. and Hurst, J.L., 2007. The Genetic Basis of Inbreeding Avoidance in House Mice. Current Biology 17:2061-2066.

4. Ferrero, D.M., Moeller, L.M., Osakada, T., Horio, N., Li, Q., Roy, D.S., Cichy, A., Spehr, M., Touhara, K. and Liberles, S.D., 2013. A juvenile mouse pheromone inhibits sexual behaviour through the vomeronasal system. Nature. DOI: 10.1038/nature12579.

5. http://www.genesisandgenetics.org/2013/07/20/122/

Continued here:
Genesis and Genetics | We look at Genetics in Genesis

Read More...

Study reveals white nationalists’ reactions when genetics test results challenge their identity – UCLA Newsroom

August 24th, 2017 1:46 am

A new study by UCLA researchers reveals the range of reactions, from rejection to reinterpretation to acceptance, after white nationalists learn that DNA ancestry test results indicate they may not be as "white" or European as they previously thought.

The study, "When Genetics Challenges a Racist's Identity: Genetic Ancestry Testing Among White Nationalists," is the work of UCLA researchers Aaron Panofsky and Joan Donovan, who presented their findings at the annual meeting of the American Sociological Association held Aug. 14, 2017, in Montreal, Canada.

"Upon receiving genetic evidence of non-white or non-European ancestry, those posting online expend considerable energy to repair identities by rejecting or reinterpreting genetic ancestry testing results," said the researchers, who studied discussion threads on the topic posted on the white nationalist online forum Stormfront.


In their study, Panofsky, an associate professor with appointments in Public Policy at the UCLA Luskin School of Public Affairs, the Institute for Society and Genetics, and Sociology, and Donovan looked at more than 3,000 posts in 70 discussion threads on topics related to "test reveals." These included posts by individuals who revealed results of non-white/non-European ancestry on Stormfront, a website that requires members to be white or European with non-Jewish ancestry. Responses also included the comments on those test results.

Panofsky and Donovan, a postdoctoral fellow at the Institute for Society and Genetics, report that while ancestry tests promote the capacity to reveal one's genetic ties to ethnic groups, ancient populations, historical migrations and even famous historical figures, this opportunity to "know thyself" can come with significant risks.

Panofsky points out that, based on white nationalists' responses upon learning their test results, there is no reason to believe that they would give up their racial ideology, and, more importantly, that genetic information cannot be relied on to change the views of white nationalists.

In addition, Panofsky said that, as a group, white nationalists appear to have a combination of sophisticated and unsophisticated methods of interpreting the data from statistical and genetic viewpoints, as well as on their own historical reasoning or reinterpretation.

"In this framework, the repair strategy is not to reject scientific or historical knowledge, but to educate oneself to understand the construction of [genetic test] results and to explain those results in alternate terms," the researchers conclude.

In parsing responses to genetic ancestry test results posted on Stormfront, Panofsky and Donovan created a decision tree consisting of "good news" responses, confirming white identity, and "bad news" responses, revealing non-white or non-European ancestry.

Good news served a confirming purpose and was well received, but bad news elicited rejection of the test results. Alternatives to the rejected results included championing traditional methods, citing family history or using a "mirror test," whereby individuals evaluated their outward appearance as a gauge of racial identity.

"Many of the responses to bad news are about how to repair the damage, rather than latching onto the ideology of Stormfront," Panofsky said. "Even though they have that idea of purity, they help people explain away or dismiss the result."

The researchers also found that some who reject unfavorable genetic test results interpret them as the product of companies with an anti-white bias, or Jewish ownership invested in sowing racial doubt and confusion among whites. They also attribute a small percentage of non-white or non-European markers as being part of a multicultural conspiracy, according to the study.

Another way the posters dealt with bad news, Panofsky and Donovan reported, was to discount indications of non-white ancestry as statistical error or "noise," engaging in scientific reinterpretation of the results.

The findings also indicate that white nationalists are using genetic ancestry test results to rethink the boundaries of whiteness. Panofsky and Donovan point out that a great deal of discussion on Stormfront focuses on what are the genetic markers of legitimate whiteness or European-ness, and how to think about white nationalism in an era of genetic ancestry testing.

Go here to read the rest:
Study reveals white nationalists' reactions when genetics test results challenge their identity - UCLA Newsroom

Read More...

UCLA Researchers Study Reveals White Nationalists’ Reactions When Genetics Test Results Challenge Their Identity – Sierra Sun Times

August 24th, 2017 1:46 am

August 23, 2017 - By Stan Paul - A new study by UCLA researchers reveals the range of reactions, from rejection to reinterpretation to acceptance, after white nationalists learn that DNA ancestry test results indicate they may not be as "white" or European as they previously thought.

Source: UCLA

See more here:
UCLA Researchers Study Reveals White Nationalists' Reactions When Genetics Test Results Challenge Their Identity - Sierra Sun Times

Read More...

Sycamores investigate genetics behind congenital heart defects – Indiana Statesman

August 24th, 2017 1:46 am

When Katy Neese and Olivia Sacopulos jumped into their research this summer, they did it with all heart: mice hearts, that is.

Neese and Sacopulos used the Summer Undergraduate Research Experience at Indiana State University to conduct preliminary research on Forkhead box (FOX) gene expression; FOX genes encode transcription factor proteins that switch other genes on or off as the heart forms. It's a critical process the body has to get right for proper heart development. But when some Forkhead genes are mutated or dysfunctional, they fail to produce proteins that can correctly turn other genes on or off. The result is a congenital heart defect.

The project by Neese and Sacopulos used two approaches: further analysis of previously published gene expression microarray data, and the collection of FOX gene expression analyses curated in genomic databases and published in the scientific literature.

They charted their findings to see when and where FOX genes are expressed or unexpressed using the Mouse Genome Informatics Database as a primary resource for a spectrum of genetic, genomic and biological data, which archives bioinformatic and experimental data of the mouse as an experimental model system for understanding human biology and disease.

"The comparison of the curated gene expression databases validates the microarray dataset by identifying seven FOX genes with known expression during heart development," said Neese, a junior biology major with a medical lab specialization from Martinsville, Ind. "Several of the FOX genes that are significantly changed in the heart according to the microarray dataset have not been characterized using conventional gene expression analysis techniques."

Sacopulos curated a list of FOX genes, looking at all 44 of the genes, and used the Mouse Genome Informatics database to see the expression of each gene during different stages of development. She found that 22 genes were expressed, 22 were undefined and 11 were not expressed, and she used the data to validate Neese's findings.

"My part involved coding and using the Bioconductor package to pull out the statistically significant FOX genes and create a heat map to show when the genes are expressed," said Sacopulos, a junior biology major from Terre Haute. "If we can determine whether heart defects are caused by the genes, there may be a way to correct the problem."
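The workflow Sacopulos describes, filtering a microarray table down to statistically significant FOX genes and arranging their values for a heat map, can be sketched as follows. This is an illustrative Python reconstruction, not the actual pipeline (the study used R/Bioconductor); the gene names, stages, expression values and p-value cutoff are hypothetical placeholders, not data from the project.

```python
# Hypothetical gene-by-stage expression table: log2 fold-change of each FOX
# gene in heart tissue versus a reference, across assumed embryonic stages.
stages = ["E8.5", "E10.5", "E12.5", "E14.5"]
log2_fold_change = {
    "Foxc1": [2.1, 1.8, 0.9, 0.3],
    "Foxc2": [1.9, 1.2, 0.4, 0.1],
    "Foxp1": [0.2, 0.1, 0.0, 0.1],
    "Foxh1": [1.5, 0.6, 0.2, 0.0],
}
# Placeholder p-values for the change in each gene's expression.
p_values = {"Foxc1": 0.001, "Foxc2": 0.004, "Foxp1": 0.60, "Foxh1": 0.02}

# Keep only the genes that pass the significance cutoff, as the screen did.
significant = sorted(g for g, p in p_values.items() if p < 0.05)

# Rows = genes, columns = stages: the matrix a heat map would be drawn from.
heatmap = [log2_fold_change[g] for g in significant]
for gene, row in zip(significant, heatmap):
    print(gene, row)
```

A plotting library would then render `heatmap` as the color grid, with gene names on one axis and developmental stages on the other.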

Their preliminary findings will ultimately aid Indiana State biology instructor Kristopher Schwab with his research on the FOX genes functions when cardiac muscular tissue is formed, particularly during embryonic development.

"Research exposes students to new areas of science, allowing them to explore the language, concepts and tools of research. Rather than just reading from a book, the research allowed them to get involved in the process and learn at a greater depth by going through data analysis, hypothesizing and investigating," Schwab said. "Once they have that basic skill set, they can transfer it to other areas of science and apply it."

Read more:
Sycamores investigate genetics behind congenital heart defects - Indiana Statesman

Read More...

Oxford Genetics secures investment; expands UK facility and eyes US market – BioPharma-Reporter.com

August 24th, 2017 1:46 am

Oxford Genetics will expand its bioproduction services in the UK and target the US market through an office in Boston after receiving a £7.5m ($9.6m) investment.

The investment comes from existing investor Mercia Technologies PLC and from Invesco Perpetual, and will help the bioprocessing support firm expand its global presence and increase its DNA, protein, viral and cell line service offerings.

The UK extension adds another floor to the firm's building in Oxford, which will be fitted out to increase capacity across the firm's entire service offering, allowing the segregation of material flow and the isolation of individual projects, a spokesperson from Oxford Genetics told us.

"This will allow us to continue to exceed regulatory requirements and provide quality assurance for our clients. We will also add more analytical, purification and process development equipment, for instance small-scale bioreactors, to enable us to fully support our clients from research up to the point of GMP bioproduction."

The 6,000 sq ft extension is expected to be ready by November, and will include cell line engineering capabilities, viral vector production and purification suites, high-throughput robotic screening systems and process development facilities.

The US expansion, meanwhile, will see the firm open an office in Boston to target the large US market.

"A US office is integral because it is the single largest market for our technologies and services," we were told. "There has been a significant increase in the demand for our viral expression systems and cell line development for virus production."

The firm, founded in 2011, licenses its technology platforms on a non-exclusive basis to all biopharma and according to the spokesperson has had tremendous interest from firms looking for bioproduction optimisation solutions.

"We have already begun to sign licenses and collaboration deals. The latter agreements are particularly interesting since they are allowing our collaborators accelerated access to some of our virus production platform technologies, which will fully mature over the next 18 months."

In the past year, Oxford Genetics has benefitted from several funding awards, including £1.6m and £1m grants, both from Innovate UK, to explore computational and synthetic biology approaches for optimising mammalian biomanufacturing processes, and to overcome the inefficient and costly scale-up of viral vector production, respectively.

See the original post:
Oxford Genetics secures investment; expands UK facility and eyes US market - BioPharma-Reporter.com

Read More...

H. pylori May Increase Risk of Stomach Cancer By Turning On Subset of Stem Cells – MedicalResearch.com (blog)

August 24th, 2017 1:46 am

MedicalResearch.com Interview with: Michael Sigal, PhD, clinical scientist at the Charité Universitätsmedizin Berlin and investigator at the Max Planck Institute for Infection Biology

MedicalResearch.com: What is the background for this study? What are the main findings?

Response: We have previously found that H. pylori can colonize gastric glands and that in colonized glands the epithelial turnover was increased. We wanted to characterize the mechanisms that control gland turnover in the stomach.

We found that Axin2, a classic Wnt target gene, marks two different subpopulations of cells with stem cell properties, one of which is Lgr5-positive and the other Lgr5-negative. Both populations are affected by R-spondin 3, which is produced in myofibroblasts right beneath the stem cell compartment. R-spondin is crucial for stem cell signaling, and knockout of R-spondin 3 in myofibroblasts results in loss of Lgr5 and Axin2 expression. Once we increased the bioavailability of R-spondin, so that it could also interact with cells outside of the stem cell compartment, we noticed that the number of Axin2-positive stem cells dramatically increased. Of interest, only the Lgr5-negative cells expanded in number and proliferated more, while the Lgr5-positive cells remained silenced.

Infection with Helicobacter pylori leads to an expansion of Axin2-positive cells, which is driven by increased expression of R-spondin 3. Expansion of the long-lived stem cell pool could be an explanation for how H. pylori infection increases the risk for gastric cancer.

MedicalResearch.com: What should clinicians and patients take away from your report?

Response: It is interesting how different cell types of the tissue communicate with each other, and we show an example, including the molecular details, of how this communication is organized and how it dynamically adapts to infection.

MedicalResearch.com: What recommendations do you have for future research as a result of this study?

Response: I think it will be very important to address the question of how and why different subpopulations of stem cells can have differential responses to the same stimulus. What is the molecular basis for this? And what are the consequences?

No financial conflicts of interest.

MedicalResearch.com: Thank you for your contribution to the MedicalResearch.com community.

Citation:

Michael Sigal, Catriona Y. Logan, Marta Kapalczynska, Hans-Joachim Mollenkopf, Hilmar Berger, Bertram Wiedenmann, Roeland Nusse, Manuel R. Amieva, Thomas F. Meyer. Stromal R-spondin orchestrates gastric epithelial stem cells and gland homeostasis. Nature, 2017; DOI: 10.1038/nature23642

Note: Content is not intended as medical advice. Please consult your health care provider regarding your specific medical condition and questions.

Link:
H. pylori May Increase Risk of Stomach Cancer By Turning On Subset of Stem Cells - MedicalResearch.com (blog)

Read More...

Behind the Lyme disease debate: ‘No silver bullet’ for confounding ailment – The Independent

August 24th, 2017 1:46 am

Editor's note: This is the first installment in a three-part series exploring the debate over Lyme disease diagnosis and treatment, and how it has affected Rhode Island and those with the disease locally.

North Kingstown resident Joe Russo used to be a healthy, active man.

Along with providing for himself through full-time employment, Russo was an avid hunter, always searching for game during his downtime. He also rode bicycles up and down his neighborhood.

Then, more than a decade ago, things began to change. He continued his normal daily routine but had weird symptoms that would come and go, or just didn't feel right. He would seek medical treatment to find out what was wrong, but the doctors only said he was either anxious, depressed or had allergies.

"So, you try all the things they tell you and you keep trucking," he said.

The symptoms would subside, he added, but months later he would experience "these other flares and weird stuff," like vertigo, dizzy spells or sweats.

As the years passed, Russo did not get better. He sought testing for everything under the sun (cancer and heart issues, among others) to get an answer for why his condition was deteriorating. However, the doctors at the time failed to test for another possibility: Lyme disease.

Russo ultimately saw another doctor and received a confirmed diagnosis of the tick-borne disease. Relieved at finally having an answer, he went back to his primary care physician to plan a course of action.

But the response he got was unexpected.

"When I went back to [my physician] with the positive lab [results], telling her, 'OK, we have an answer,' she dismissed me," he said. "Our relationship was very good. She was always caring. She tested me for everything, but once she found out I had Lyme, she wanted nothing to do with me."

Russo said he inquired in writing to seek an explanation for the dismissal, but received no response.

"She left me with no reason," he said.

He added, "That was humbling and it actually hurt. A doctor can dismiss you with this and there's nothing you can do about it."

***

Lyme disease is an extremely complex illness with no known cure. It can produce an array of symptoms, such as fatigue, joint pain, arthritis, muscle aches, palsy, tingling and twitching muscles and other neurological symptoms. The most common symptom is a rash, often in a bull's-eye pattern.

Dr. Jim Gloor, a North Kingstown-based physician who has specialized in Lyme disease treatment for more than three decades, said the Lyme-causing bacterium is one of the most complex bacterial cells. Typically, he said, a person's immune system is supposed to sort out what is foreign and what is not inside the body. But the Lyme-causing bacterium is so effective at evading the immune system that it overheats the immune "engine," causing a sufferer's health to go haywire.

"It's like when you race an old Chevy down the highway at 100 mph over a bumpy road, something's going to break," he said. "The immune system goes haywire and we have fallout from it that ends up being other conditions that are autoimmune, which is a nice way of saying, 'We don't know exactly why the immune system went sour.'"

Gloor said the medical community has yet to fully understand the immune system and autoimmunity. Because of that, he said, a lot more research needs to be done, particularly because Lyme disease is so sophisticated.

"It confuses and confounds the immune system, and it confounds the body in general," he said.

The complexity has caused confusion over how Lyme disease is diagnosed, and led to intense debate locally and nationally about treatments and whether the disease can be designated as chronic.

Dr. Utpala Bandy, a state epidemiologist and medical director for the Rhode Island Department of Health's Division of Preparedness, Infectious Disease and Emergency Medical Services, said July 26 that while the state does not develop, produce, approve or endorse any guidelines on treating Lyme disease, two main sets of guidelines are in place nationally for physicians to follow.

One comes from the Infectious Diseases Society of America, or IDSA, in conjunction with the Centers for Disease Control and Prevention, and the other from the International Lyme and Associated Diseases Society, or ILADS.

The organizations have conflicting views of Lyme disease treatment and diagnoses of long-term Lyme disease.

According to the state Department of Health's Jan. 31, 2016, report on Lyme disease, the IDSA/CDC guidelines indicate patients should take the antibiotic doxycycline for several Lyme disease symptoms, including the bull's-eye rash, palsy, cardiac disease, arthritis without neurological involvement and recurrent arthritis after a single course of antibiotics.

IDSA, according to its 2006 report "Clinical Assessment, Treatment, and Prevention of Lyme Disease, Human Granulocytic Anaplasmosis, and Babesiosis: Clinical Practice Guidelines," also maintains there is no convincing evidence for the existence of a chronic Borrelia burgdorferi infection (the bacterium that causes Lyme disease) in patients who have received the recommended treatments. The report states antibiotic therapy is not recommended for patients with "chronic Lyme disease" lasting longer than six months.

In June, CDC published in its Morbidity and Mortality Weekly Report an article that also warns of serious risks for Lyme patients receiving antibiotic and intravenous treatment, even resulting in serious harm, including death.

The introduction to the article, titled "Serious Bacterial Infections Acquired During Treatment of Patients Given a Diagnosis of Chronic Lyme Disease - United States," reads: "The term 'chronic Lyme disease' is used by some health care providers as a diagnosis for various constitutional, musculoskeletal, and neuropsychiatric symptoms. Patients with a diagnosis of chronic Lyme disease have been provided a wide range of medications as treatment, including long courses of intravenous (IV) antibiotics. Studies have not shown that such treatments lead to substantial long-term improvement for patients, and they can be harmful. This report describes cases of septic shock, osteomyelitis, Clostridium difficile colitis, and paraspinal abscess resulting from treatments for chronic Lyme disease. Patients, clinicians, and public health practitioners should be aware that treatments for chronic Lyme disease can carry serious risks."

Other treatments, including IV infusions of hydrogen peroxide, immunoglobulin therapy, hyperbaric oxygen therapy, electromagnetic frequency treatments, garlic supplements, colloidal silver, and stem cell transplants, are also presented as being risky and not shown to produce favorable results.

The article further states: "At least five randomized, placebo-controlled studies have shown that prolonged courses of IV antibiotics in particular do not substantially improve long-term outcome for patients with a diagnosis of chronic Lyme disease and can result in serious harm, including death." Five specific cases are cited to support the article's assertions.

In contrast, ILADS states on its website that it is "impossible to state a meaningful success rate for the prevention of Lyme disease by a single 200 mg dose of doxycycline," and advises that clinicians should not use a single doxycycline dose for treatment due to "very low-quality evidence."

"An optimal treatment regimen has not yet been determined," ILADS states, adding that it is too early to standardize restrictive protocols. ILADS recommends patient goals and values regarding treatment options be identified and strongly considered during a shared decision-making process.

ILADS President Dr. Samuel Shor pushed back against the CDC's June 16 report in a June 23 post on the National Center for Biotechnology Information's website. He notes one National Institutes of Health-supported study in which 37 patients suspected of having active neuroborreliosis (a central nervous system disorder caused by Lyme disease) received 10 weeks of IV ceftriaxone, or two grams a day of the antibiotic. "Pain and physical functioning improved at 12 [weeks] and was sustained at 24 weeks," Shor states.

Shor also points to another NIH-supported study in which 55 patients who felt they had an active infection of the main Lyme disease bacterium and had experienced severe fatigue for longer than six months received 28 days of IV ceftriaxone. According to Shor, "a significant improvement in fatigue was sustained at 6 months."

While there are conflicting viewpoints regarding how Lyme disease should be approached and treated, one local advocate says the state sides with the IDSA/CDC guidelines, which he believes are outdated.

***

Lane Poor, a North Kingstown resident, was first diagnosed with Lyme disease in 1983 and still feels the effects of the disease. He has traveled around the state for years making residents aware of the disease and its harmful effects especially if it goes undetected.

In an interview in June, he said he believes the state can't accept the fact that Lyme disease is a major problem in Rhode Island, mainly because it is complicated and messy and there is no silver bullet in terms of treatment. He said the disease can result in multiple infections following the initial infection from a tick bite, and doctors are not used to dealing with multiple infections.

"They're used to it as, 'It's one disease and you have that one thing that will kill the one disease.' That doesn't happen with Lyme," he said. "Lyme disease, you can get other infections. It is so hard for doctors to identify what's going on because you've got three, four different infections. It's a compound disease, and we're not used to treating compound diseases."

Poor also believes there are many more Lyme disease cases in Rhode Island than the Department of Health has reported. In 2014, Rhode Island, at 86 cases per 100,000 people, had the fourth-highest Lyme disease rate in the U.S. Bandy said the department processes approximately 3,000 test reports for surveillance every summer, and of that amount, approximately 900 new Lyme disease cases are determined annually.

Poor, though, thinks Rhode Islands rate may be 10 times higher than the official figure.

"Massachusetts has got an infection rate of about 800 people per 100,000, and RIDOH says we have 80," he said. "We're neighbors with Massachusetts."
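As a back-of-the-envelope check on the figures quoted in this section, the health department's roughly 900 annual cases are consistent with the 86-per-100,000 rate cited earlier. The Rhode Island population value below is an assumption (roughly 1.05 million in 2014), not a number from the article.

```python
# Convert the Department of Health's annual case count into a rate per
# 100,000 residents. Population is an assumed round figure for 2014.
ri_population = 1_050_000
annual_cases = 900  # approximate new Lyme disease cases per year, per RIDOH

rate_per_100k = annual_cases / ri_population * 100_000
print(round(rate_per_100k, 1))  # ~85.7, in line with the 86 per 100,000 cited
```

Poor's claim of a tenfold undercount would put the true rate near the ~800 per 100,000 he attributes to Massachusetts.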

Both Poor and Gloor are critical of the CDC's recommended two-tier testing regimen. CDC suggests doctors first order a test known as an enzyme-linked immunosorbent assay, or ELISA, to screen for Lyme disease and then confirm it with a western blot, a procedure used to detect specific proteins.
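The two-tier sequence described above reduces to a small piece of decision logic: a negative screen ends testing, while any other screen result is checked against the confirmatory blot. The sketch below is purely illustrative (the function name and labels are hypothetical, and this is not clinical guidance).

```python
# Minimal sketch of the CDC two-tier testing sequence described above.
def two_tier_result(elisa, western_blot=None):
    """elisa: 'negative', 'equivocal' or 'positive'; western_blot is only
    consulted when the first-tier screen is not negative."""
    if elisa == "negative":
        return "negative"          # first tier negative: testing stops here
    if western_blot == "positive":
        return "positive"          # second tier confirms the screen
    return "negative"              # screen not confirmed by the blot

print(two_tier_result("negative"))              # negative (no blot ordered)
print(two_tier_result("positive", "positive"))  # positive (confirmed)
print(two_tier_result("equivocal", "negative")) # negative (not confirmed)
```

The criticism that follows targets exactly the first branch: if the ELISA screen lacks sensitivity, a patient can be ruled "negative" before the western blot is ever run.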

However, according to the advocacy group LymeDisease.org, a 2002 study by Massachusetts physician Dr. Samuel Donta revealed 52 percent of patients with chronic Lyme disease are considered negative by the ELISA test even though the western blot shows a positive result. The College of American Pathologists, in a 1997 study, found that ELISA tests "do not have adequate sensitivity" to be used for screening purposes.

Gloor said in about half of his cases, the tests are negative across the board.

LymeDisease.org states a quality test can help a doctor assess the diseases severity, estimate a patients prognosis, monitor the diseases progression, detect relapses or adjust therapies.

"Unfortunately, a test with this capability does not exist for Lyme disease," according to LymeDisease.org.

Donta also said the tests are indirect and do not indicate whether the Lyme disease bacterium is still present. Until a proven test is developed, he said, "clinical judgment and observations ... should be how one proceeds to treat such patients."

Bandy said physicians and patients can do whatever they want as long as it is a shared decision-making process.

But Lyme disease specialists in Rhode Island and beyond have had their methods questioned by health officials.

***

Over more than five years, Gloor was called to appear before the Department of Healths Board of Medical Licensure and Discipline multiple times to answer allegations from several complainants regarding his regimens for treating long-term Lyme disease, mostly with antibiotics and intravenous medicines.

The complaints, according to a 2014 consent decree filed with the health department, which was obtained by The Independent, were received by the board in 2012 and 2013, and the panel questioned the reasonableness of the diagnosis and treatment plans.

While the board found no wrongdoing regarding Gloor's regimens in 2014, the decree indicates he was penalized for failing to properly document diagnoses in the assessment and plan sections of certain medical charts and with respect to the legibility of patient records. He was ordered to pay a $1,935 administrative fee to the health department and was placed on a monitoring period for one year.

In December 2016, Gloor was again cited by the Board of Medical Licensure and Discipline for poor record keeping, according to a second consent decree filed with the health department, which also indicated Gloor's handwriting was "barely legible" and his script "so large there is only room for the barest information."

Gloor agreed to a reprimand and paid an administrative fee of $500. He additionally agreed to complete eight hours of continuing education regarding record keeping, health department spokeswoman Annemarie Beardsworth said in an email July 24.

Poor was sharply critical of the health department's dealings with Gloor, and indicated other local physicians had been treated similarly. He said the consent decrees amount to health officials serving as "judge, jury and executioner all in one thing."

"Doctors are scared stiff they're going to be brought before the board for treating Lyme disease, so they don't [treat it]," he added. "A lot of doctors have refused to have anything to do with it."

Gloor said the issues with the health department have taken a toll emotionally and mentally, and have had a significant negative impact on his practice. He was forced to move from his former office at Wickford Junction to a new space on Post Road.

"This tied me up for years," he said. "My practice went to hell."

nk@independentri.com

Source:
Behind the Lyme disease debate: 'No silver bullet' for confounding ailment - The Independent



