
Biotechnology at UMBC

October 27th, 2016 5:44 am

UMBC Biotechnology Graduate Programs

The Masters in Professional Studies in Biotechnology prepares science professionals to fill management and leadership roles in biotechnology-related companies or agencies.

UMBC's Biotechnology curriculum addresses the changing needs of the biotechnology industry through experiential learning, providing advanced instruction in the life sciences alongside coursework in regulatory affairs, leadership, management, and financial management for life science-oriented businesses.

Global challenges in human health, food security, sustainable industrial production and environmental protection continue to fuel the biosciences industry, creating new opportunities within the four primary subsectors:

UMBC's Biotechnology Graduate Program and its strong academic programs in the life sciences are led by a distinguished faculty of nearly fifty members spanning the departments of:

This established academic and research expertise in the biosciences provides a foundation for programs in biotechnology management and biochemical regulatory engineering.

Over the past decade, the industry has added nearly 111,000 new, high-paying jobs, a 7.4 percent expansion of its employment base, according to the latest Battelle/BIO report.

Economic output of the bioscience industry has expanded significantly, growing 17 percent since 2007, nearly twice the nominal output growth of the national private sector.
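
As a rough arithmetic check, the quoted statistics pin down two implied figures; the calculation below is illustrative and the derived numbers are not taken from the report itself:

```python
# Back-of-the-envelope check of the Battelle/BIO figures quoted above;
# illustrative arithmetic only, not numbers from the report itself.
new_jobs = 111_000        # jobs added over the past decade
share_of_base = 0.074     # stated as 7.4% of the employment base

implied_base = new_jobs / share_of_base
print(f"implied prior employment base: ~{implied_base:,.0f} jobs")  # ~1,500,000

bio_growth = 0.17         # 17% bioscience output growth since 2007
# "nearly twice the national private sector nominal output growth"
implied_national_growth = bio_growth / 2
print(f"implied national private-sector growth: ~{implied_national_growth:.1%}")  # ~8.5%
```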

UMBC Division of Professional Studies 1000 Hilltop Circle, Sherman Hall East 4th Floor, Baltimore, MD 21250 410-455-2336 dps@umbc.edu

Link:
Biotechnology at UMBC

Read More...

1. What is agricultural biotechnology? – GreenFacts

October 27th, 2016 5:44 am

Broadly speaking, biotechnology is any technique that uses living organisms or substances from these organisms to make or modify a product for a practical purpose (Box 2). Biotechnology can be applied to all classes of organism - from viruses and bacteria to plants and animals - and it is becoming a major feature of modern medicine, agriculture and industry. Modern agricultural biotechnology includes a range of tools that scientists employ to understand and manipulate the genetic make-up of organisms for use in the production or processing of agricultural products.

Some applications of biotechnology, such as fermentation and brewing, have been used for millennia. Other applications are newer but also well established. For example, micro-organisms have been used for decades as living factories for the production of life-saving antibiotics including penicillin, from the fungus Penicillium, and streptomycin from the bacterium Streptomyces. Modern detergents rely on enzymes produced via biotechnology, hard cheese production largely relies on rennet produced by biotech yeast and human insulin for diabetics is now produced using biotechnology.

Biotechnology is being used to address problems in all areas of agricultural production and processing. This includes plant breeding to raise and stabilize yields; to improve resistance to pests, diseases and abiotic stresses such as drought and cold; and to enhance the nutritional content of foods. Biotechnology is being used to develop low-cost disease-free planting materials for crops such as cassava, banana and potato and is creating new tools for the diagnosis and treatment of plant and animal diseases and for the measurement and conservation of genetic resources. Biotechnology is being used to speed up breeding programmes for plants, livestock and fish and to extend the range of traits that can be addressed. Animal feeds and feeding practices are being changed by biotechnology to improve animal nutrition and to reduce environmental waste. Biotechnology is used in disease diagnostics and for the production of vaccines against animal diseases.

Clearly, biotechnology is more than genetic engineering. Indeed, some of the least controversial aspects of agricultural biotechnology are potentially the most powerful and the most beneficial for the poor. Genomics, for example, is revolutionizing our understanding of the ways genes, cells, organisms and ecosystems function and is opening new horizons for marker-assisted breeding and genetic resource management. At the same time, genetic engineering is a very powerful tool whose role should be carefully evaluated. It is important to understand how biotechnology - particularly genetic engineering - complements and extends other approaches if sensible decisions are to be made about its use.

This chapter provides a brief description of current and emerging uses of biotechnology in crops, livestock, fisheries and forestry with a view to understanding the technologies themselves and the ways they complement and extend other approaches. It should be emphasized that the tools of biotechnology are just that: tools, not ends in themselves. As with any tool, they must be assessed within the context in which they are being used.

Read the rest here:
1. What is agricultural biotechnology? - GreenFacts

Read More...

Biotechnology Industry Salaries, Bonuses and Benefits …

October 27th, 2016 5:44 am

What are some average salaries for jobs in the Biotechnology industry? These pages list all of the job titles in the Biotechnology industry for which we have salary information. If you know the pay grade of the job you are searching for, you can narrow down this list to view only Biotechnology industry jobs that pay less than $30K, $30K-$50K, $50K-$80K, $80K-$100K, or more than $100K. If you are unsure how much your Biotechnology industry job pays, you can either browse all Biotechnology industry salaries below or search all salaries.
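
As an illustration of the pay-band filter this page describes, here is a minimal sketch of the bucketing it implies; the function name and return labels are hypothetical, not the site's actual code:

```python
# Hypothetical helper that maps a salary to the pay bands named above;
# band boundaries follow the page's own groupings.
def pay_band(salary: float) -> str:
    if salary < 30_000:
        return "less than $30K"
    if salary < 50_000:
        return "$30K-$50K"
    if salary < 80_000:
        return "$50K-$80K"
    if salary <= 100_000:
        return "$80K-$100K"
    return "more than $100K"

print(pay_band(72_500))  # -> $50K-$80K
```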


The rest is here:
Biotechnology Industry Salaries, Bonuses and Benefits ...

Read More...

Houston Integrative Medicine – Home – Houston, TX

October 26th, 2016 1:45 am

The inferior physician treats gross disease

The mediocre physician treats disease just manifesting

The superior physician treats before there is a disease

-- (Yellow Emperor's Inner Canon)

Our Mission

The Center for Primary Care and Integrative Medicine is a primary care clinic that applies both Eastern and Western medical modalities to provide the most effective patient care. Our practice is founded on a few underlying principles:

First, we strongly believe in the value of preventative care, a concept grounded in traditional Chinese medicine. Brian Carter's Pulse of Oriental Medicine states that the traditional Chinese doctor's job was to keep the village from getting sick, and the villagers in return would make sure his needs were met. Once they became sick, they were unable to take care of the doctor; it therefore only made sense for him to keep them well. Our role is to keep you well before any signs of disease surface. By keeping mind, body and spirit in balance, maintaining appropriate nutrient levels and exercising a positive lifestyle, one proactively takes care of oneself.

Second, we believe in natural healing. The body has an innate ability to heal itself; we simply assist you on your journey toward wellness. While Western medication is effective at treating many illnesses, it can also act as a double-edged sword: the chemicals in pills and other drugs can have many potentially harmful side effects. Our doctors take a comprehensive look at your medical concerns and prescribe the healthiest solution, individualized for your needs.

Third, the Center for Primary Care and Integrative Medicine seeks to reduce the increasingly prevalent abuse of narcotics. The United States consumes 60% of the world's narcotics, and these are increasingly prescribed unnecessarily, with adverse effects on the patient's body. This is not to say that medications or narcotics are bad, but we should reduce their use as much as possible without compromising pain control. Today, more and more people are turning to natural methods of healing. The Center for Primary Care and Integrative Medicine incorporates the best of conventional and alternative medicine to provide the highest quality of care possible.

While preventing chronic disease has been the main focus of our practice, we also emphasize helping patients who already suffer from chronic diseases recover actively. In addition to regular cardiopulmonary rehab, we offer Taichi, massage, and acupuncture to help patients with a variety of chronic conditions (e.g., chronic congestive heart failure, COPD, Parkinson's disease) improve their functional status. Studies have shown that acupuncture and Taichi can favorably affect heart rate variability and thus decrease post-myocardial-infarct mortality. Taichi-based cardiac rehabilitation was associated with an increase in peak oxygen consumption, a marker of functional capacity, in patients with recent MI. Acupuncture has been shown to reduce interleukin-17 (IL-17, an inflammation marker) in asthmatic patients and to increase 6-minute walking distance and quality of life in COPD patients. Taichi and scalp acupuncture effectively slow disease progression in Parkinson's disease patients and improve quality of life.

Last, but not least, we strive to reduce the cost of medicine for both individuals and the nation. Health care costs have been rising for years and remain a focus of worldwide discussion. National health expenditures doubled over the past decade, from $1.3 trillion in 2000 to $2.6 trillion in 2010. Total health care expenditures grew at an annual rate of 4.4 percent in 2008, outpacing inflation and the growth in national income. Indeed, we are a nation providing the best "sick" care. If we replaced "sick" care with preventative medicine, we would be a healthier and wealthier nation. Spending on new medical technology and prescription drugs has been cited as a leading contributor to the increase in overall health costs. The Center for Primary Care and Integrative Medicine focuses on prevention and treatment of chronic diseases such as hypertension, diabetes, obesity, and chronic pain. Integrative medicine has been known to be highly effective in the treatment of such illnesses. In addition, the Center offers consults to patients who want to learn Taichi and yoga to improve well-being.
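
As a quick illustrative calculation (ours, not the source's), a doubling of expenditures over ten years implies an average annual growth rate well above the 4.4 percent single-year figure cited for 2008:

```python
# Illustrative: average annual growth implied by expenditures doubling
# from $1.3T (2000) to $2.6T (2010), versus the 4.4% rate cited for 2008.
start, end, years = 1.3e12, 2.6e12, 10
cagr = (end / start) ** (1 / years) - 1
print(f"implied average annual growth: {cagr:.1%}")  # ~7.2%, well above 4.4%
```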

The Center for Primary Care and Integrative Medicine has also been actively collaborating with world-renowned institutes to explore the mechanisms underlying the effects of acupuncture, ethnopharmacology, and the application of traditional Chinese medicine to health maintenance.

More:
Houston Integrative Medicine - Home - Houston, TX

Read More...

Stemaid : Embryonic Stem-cells

October 26th, 2016 1:42 am

What is Stem Cell Therapy?

Stem Cell Therapy (SCT) provides the body with stem cells in the location where they are most needed, in order to assist in the healing and regeneration of its existing cells. Contact us to find a doctor/clinic near you who can provide you with therapy using Stemaid Embryonic Stem-Cells.

About Us

Stemaid provides embryonic stem cells and unique protocols to doctors in order to help their patients who face major health conditions, as well as individuals who simply wish to stay young and healthy. Over the past five years of development, we have conducted research into major diseases related to lung, kidney, liver and heart failure; we have helped people who suffered stroke or brain injury to walk or talk again; we have removed all traces of tumor in multiple cancer patients; and we have achieved significant results in fighting aging and its physical markers. You may see a more detailed and referenced list of these successes here.

Embryonic stem cell treatments have not been approved by the FDA. For this reason, we are located abroad. If you are interested in our technology or would like us to put you in touch with the clinic nearest you that can provide you with stem cells, please contact us here.

Call us toll-free on 1-844-STEMAID

Original post:
Stemaid : Embryonic Stem-cells

Read More...

Stem cell – Wikipedia

October 26th, 2016 1:42 am

Stem cells are undifferentiated biological cells that can differentiate into specialized cells and can divide (through mitosis) to produce more stem cells. They are found in multicellular organisms. In mammals, there are two broad types of stem cells: embryonic stem cells, which are isolated from the inner cell mass of blastocysts, and adult stem cells, which are found in various tissues. In adult organisms, stem cells and progenitor cells act as a repair system for the body, replenishing adult tissues. In a developing embryo, stem cells can differentiate into all the specialized cells (ectoderm, endoderm and mesoderm; see induced pluripotent stem cells) but also maintain the normal turnover of regenerative organs, such as blood, skin, or intestinal tissues.

There are three known accessible sources of autologous adult stem cells in humans:

Stem cells can also be taken from umbilical cord blood just after birth. Of all stem cell types, autologous harvesting involves the least risk. By definition, autologous cells are obtained from one's own body, just as one may bank his or her own blood for elective surgical procedures.

Adult stem cells are frequently used in various medical therapies (e.g., bone marrow transplantation). Stem cells can now be artificially grown and transformed (differentiated) into specialized cell types with characteristics consistent with cells of various tissues such as muscles or nerves. Embryonic cell lines and autologous embryonic stem cells generated through somatic cell nuclear transfer or dedifferentiation have also been proposed as promising candidates for future therapies.[1] Research into stem cells grew out of findings by Ernest A. McCulloch and James E. Till at the University of Toronto in the 1960s.[2][3]

The classical definition of a stem cell requires that it possess two properties:

Two mechanisms exist to ensure that a stem cell population is maintained:

Potency specifies the differentiation potential (the potential to differentiate into different cell types) of the stem cell.[4]

In practice, stem cells are identified by whether they can regenerate tissue. For example, the defining test for bone marrow or hematopoietic stem cells (HSCs) is the ability to transplant the cells and save an individual without HSCs. This demonstrates that the cells can produce new blood cells over a long term. It should also be possible to isolate stem cells from the transplanted individual, which can themselves be transplanted into another individual without HSCs, demonstrating that the stem cell was able to self-renew.

Properties of stem cells can be illustrated in vitro, using methods such as clonogenic assays, in which single cells are assessed for their ability to differentiate and self-renew.[7][8] Stem cells can also be isolated by their possession of a distinctive set of cell surface markers. However, in vitro culture conditions can alter the behavior of cells, making it unclear whether the cells will behave in a similar manner in vivo. There is considerable debate as to whether some proposed adult cell populations are truly stem cells.[citation needed]

Embryonic stem (ES) cells are the cells of the inner cell mass of a blastocyst, an early-stage embryo.[9] Human embryos reach the blastocyst stage 4–5 days post fertilization, at which time they consist of 50–150 cells. ES cells are pluripotent and give rise during development to all derivatives of the three primary germ layers: ectoderm, endoderm and mesoderm. In other words, they can develop into each of the more than 200 cell types of the adult body when given sufficient and necessary stimulation for a specific cell type. They do not contribute to the extra-embryonic membranes or the placenta.

During embryonic development these inner cell mass cells continuously divide and become more specialized. For example, a portion of the ectoderm in the dorsal part of the embryo specializes as 'neurectoderm', which will become the future central nervous system.[10] Later in development, neurulation causes the neurectoderm to form the neural tube. At the neural tube stage, the anterior portion undergoes encephalization to generate or 'pattern' the basic form of the brain. At this stage of development, the principal cell type of the CNS is considered a neural stem cell. These neural stem cells are pluripotent, as they can generate a large diversity of many different neuron types, each with unique gene expression, morphological, and functional characteristics. The process of generating neurons from stem cells is called neurogenesis. One prominent example of a neural stem cell is the radial glial cell, so named because it has a distinctive bipolar morphology with highly elongated processes spanning the thickness of the neural tube wall, and because historically it shared some glial characteristics, most notably the expression of glial fibrillary acidic protein (GFAP).[11][12] The radial glial cell is the primary neural stem cell of the developing vertebrate CNS, and its cell body resides in the ventricular zone, adjacent to the developing ventricular system. Neural stem cells are committed to the neuronal lineages (neurons, astrocytes, and oligodendrocytes), and thus their potency is restricted.[10]

Nearly all research to date has made use of mouse embryonic stem cells (mES) or human embryonic stem cells (hES) derived from the early inner cell mass. Both have the essential stem cell characteristics, yet they require very different environments in order to maintain an undifferentiated state. Mouse ES cells are grown on a layer of gelatin as an extracellular matrix (for support) and require the presence of leukemia inhibitory factor (LIF). Human ES cells are grown on a feeder layer of mouse embryonic fibroblasts (MEFs) and require the presence of basic fibroblast growth factor (bFGF or FGF-2).[13] Without optimal culture conditions or genetic manipulation,[14] embryonic stem cells will rapidly differentiate.

A human embryonic stem cell is also defined by the expression of several transcription factors and cell surface proteins. The transcription factors Oct-4, Nanog, and Sox2 form the core regulatory network that ensures the suppression of genes that lead to differentiation and the maintenance of pluripotency.[15] The cell surface antigens most commonly used to identify hES cells are the glycolipids stage-specific embryonic antigens 3 and 4 and the keratan sulfate antigens Tra-1-60 and Tra-1-81. By using human embryonic stem cells to produce specialized cells like nerve cells or heart cells in the lab, scientists can gain access to adult human cells without taking tissue from patients. They can then study these specialized adult cells in detail to try to catch complications of diseases, or to study cells' reactions to potentially new drugs. The molecular definition of a stem cell includes many more proteins and continues to be a topic of research.[16]

There are currently no approved treatments using embryonic stem cells. The first human trial was approved by the US Food and Drug Administration in January 2009.[17] However, the human trial was not initiated until October 13, 2010, in Atlanta, for spinal cord injury research. On November 14, 2011, the company conducting the trial (Geron Corporation) announced that it would discontinue further development of its stem cell programs.[18] ES cells, being pluripotent, require specific signals for correct differentiation; if injected directly into another body, ES cells will differentiate into many different types of cells, causing a teratoma. Differentiating ES cells into usable cells while avoiding transplant rejection are among the hurdles that embryonic stem cell researchers still face.[19] Due to ethical considerations, many nations currently have moratoria or limitations on either human ES cell research or the production of new human ES cell lines. Because of their combined abilities of unlimited expansion and pluripotency, embryonic stem cells remain a theoretically potential source for regenerative medicine and tissue replacement after injury or disease.

Human embryonic stem cell colony on mouse embryonic fibroblast feeder layer

The primitive stem cells located in the organs of fetuses are referred to as fetal stem cells.[20] There are two types of fetal stem cells:

Adult stem cells, also called somatic (from Greek σωματικός, "of the body") stem cells, are stem cells which maintain and repair the tissue in which they are found.[22] They can be found in children as well as adults.[23]

Pluripotent adult stem cells are rare and generally small in number, but they can be found in umbilical cord blood and other tissues.[24] Bone marrow is a rich source of adult stem cells,[25] which have been used in treating several conditions including liver cirrhosis,[26] chronic limb ischemia[27] and end-stage heart failure.[28] The quantity of bone marrow stem cells declines with age and is greater in males than females during reproductive years.[29] Much adult stem cell research to date has aimed to characterize their potency and self-renewal capabilities.[30] DNA damage accumulates with age in both stem cells and the cells that comprise the stem cell environment. This accumulation is considered to be responsible, at least in part, for increasing stem cell dysfunction with aging (see DNA damage theory of aging).[31]

Most adult stem cells are lineage-restricted (multipotent) and are generally referred to by their tissue origin (mesenchymal stem cell, adipose-derived stem cell, endothelial stem cell, dental pulp stem cell, etc.).[32][33]

Adult stem cell treatments have been successfully used for many years to treat leukemia and related bone/blood cancers through bone marrow transplants.[34] Adult stem cells are also used in veterinary medicine to treat tendon and ligament injuries in horses.[35]

The use of adult stem cells in research and therapy is not as controversial as the use of embryonic stem cells, because the production of adult stem cells does not require the destruction of an embryo. Additionally, in instances where adult stem cells are obtained from the intended recipient (an autograft), the risk of rejection is essentially non-existent. Consequently, more US government funding is being provided for adult stem cell research.[36]

Multipotent stem cells are also found in amniotic fluid. These stem cells are very active, expand extensively without feeders and are not tumorigenic. Amniotic stem cells are multipotent and can differentiate into cells of adipogenic, osteogenic, myogenic, endothelial, hepatic and also neuronal lineages.[37] Amniotic stem cells are a topic of active research.

Use of stem cells from amniotic fluid overcomes the ethical objections to using human embryos as a source of cells. Roman Catholic teaching forbids the use of embryonic stem cells in experimentation; accordingly, the Vatican newspaper "Osservatore Romano" called amniotic stem cells "the future of medicine".[38]

It is possible to collect amniotic stem cells for donors or for autologous use: the first US amniotic stem cell bank[39][40] was opened in 2009 in Medford, MA, by Biocell Center Corporation,[41][42][43] which collaborates with various hospitals and universities all over the world.[44]

These are not adult stem cells, but rather adult cells (e.g. epithelial cells) reprogrammed to give rise to pluripotent capabilities. Using genetic reprogramming with protein transcription factors, pluripotent stem cells equivalent to embryonic stem cells have been derived from human adult skin tissue.[45][46][47] Shinya Yamanaka and his colleagues at Kyoto University used the transcription factors Oct3/4, Sox2, c-Myc, and Klf4[45] in their experiments on human facial skin cells. Junying Yu, James Thomson, and their colleagues at the University of Wisconsin–Madison used a different set of factors, Oct4, Sox2, Nanog and Lin28,[45] and carried out their experiments using cells from human foreskin.

As a result of the success of these experiments, Ian Wilmut, who helped create the first cloned animal Dolly the Sheep, has announced that he will abandon somatic cell nuclear transfer as an avenue of research.[48]

Frozen blood samples can be used as a source of induced pluripotent stem cells, opening a new avenue for obtaining the valued cells.[49]

To ensure self-renewal, stem cells undergo two types of cell division (see Stem cell division and differentiation diagram). Symmetric division gives rise to two identical daughter cells both endowed with stem cell properties. Asymmetric division, on the other hand, produces only one stem cell and a progenitor cell with limited self-renewal potential. Progenitors can go through several rounds of cell division before terminally differentiating into a mature cell. It is possible that the molecular distinction between symmetric and asymmetric divisions lies in differential segregation of cell membrane proteins (such as receptors) between the daughter cells.[50]

An alternative theory is that stem cells remain undifferentiated due to environmental cues in their particular niche. Stem cells differentiate when they leave that niche or no longer receive those signals. Studies in Drosophila germarium have identified the signals decapentaplegic and adherens junctions that prevent germarium stem cells from differentiating.[51][52]

Stem cell therapy is the use of stem cells to treat or prevent a disease or condition. Bone marrow transplant is a form of stem cell therapy that has been used for many years without controversy. No stem cell therapies other than bone marrow transplant are widely used.[53][54]

Stem cell treatments may require immunosuppression because of a requirement for radiation before the transplant to remove the person's previous cells, or because the patient's immune system may target the stem cells. One approach to avoid the second possibility is to use stem cells from the same patient who is being treated.

Pluripotency in certain stem cells could also make it difficult to obtain a specific cell type. It is also difficult to obtain the exact cell type needed, because not all cells in a population differentiate uniformly. Undifferentiated cells can create tissues other than desired types.[55]

Some stem cells form tumors after transplantation;[56] pluripotency is linked to tumor formation, especially in embryonic stem cells, fetal proper stem cells, and induced pluripotent stem cells. Fetal proper stem cells form tumors despite multipotency.[citation needed]

Some of the fundamental patents covering human embryonic stem cells are owned by the Wisconsin Alumni Research Foundation (WARF): patents 5,843,780, 6,200,806, and 7,029,913, invented by James A. Thomson. WARF does not enforce these patents against academic scientists, but does enforce them against companies.[57]

In 2006, a request for the US Patent and Trademark Office (USPTO) to re-examine the three patents was filed by the Public Patent Foundation on behalf of its client, the non-profit patent-watchdog group Consumer Watchdog (formerly the Foundation for Taxpayer and Consumer Rights).[57] In the re-examination process, which involves several rounds of discussion between the USPTO and the parties, the USPTO initially agreed with Consumer Watchdog and rejected all the claims in all three patents.[58] In response, however, WARF amended the claims of all three patents to make them narrower, and in 2008 the USPTO found the amended claims in all three patents to be patentable. The decision on one of the patents (7,029,913) was appealable, while the decisions on the other two were not.[59][60] Consumer Watchdog appealed the granting of the '913 patent to the USPTO's Board of Patent Appeals and Interferences (BPAI), which granted the appeal, and in 2010 the BPAI decided that the amended claims of the '913 patent were not patentable.[61] However, WARF was able to re-open prosecution of the case and did so, amending the claims of the '913 patent again to make them narrower; in January 2013 the amended claims were allowed.[62]

In July 2013, Consumer Watchdog announced that it would appeal the decision to allow the claims of the '913 patent to the US Court of Appeals for the Federal Circuit (CAFC), the federal appeals court that hears patent cases.[63] At a hearing in December 2013, the CAFC raised the question of whether Consumer Watchdog had legal standing to appeal; the case could not proceed until that issue was resolved.[64]

Diseases and conditions where stem cell treatment is being investigated include:

Research is underway to develop various sources for stem cells, and to apply stem cell treatments for neurodegenerative diseases and conditions, diabetes, heart disease, and other conditions.[80]

In more recent years, with the ability of scientists to isolate and culture embryonic stem cells, and with scientists' growing ability to create stem cells using somatic cell nuclear transfer and techniques to create induced pluripotent stem cells, controversy has crept in, both related to abortion politics and to human cloning.

Hepatotoxicity and drug-induced liver injury account for a substantial number of failures of new drugs in development and market withdrawal, highlighting the need for screening assays such as stem cell-derived hepatocyte-like cells, that are capable of detecting toxicity early in the drug development process.[81]

Link:
Stem cell - Wikipedia

Read More...

Stem Cell Therapy for COPD

October 26th, 2016 1:42 am

The results of a quick and dirty research project on stem cell research studies.....

If I Google "stem cell clinical trials", I get several hundred thousand results... So, changing to "clinical trials stem cell COPD", I come up with several sites that claim to be experiencing fabulous success in stem cell therapy for COPD patients. Further study of these sites reveals that they all make a point that their procedures are not approved by the FDA, and that the only verifiable positive results are anecdotal, that is, from the statements of their own patients. All well and good, if the statements are true, and if the reports of their patients are not just the result of the placebo effect brought on by their desperate hope that the trials did in fact work.

In addition to these sites, there is one from the American Lung Association with quite a bit of information on the possibilities of stem cell therapy. Included in the ALA site is a link to:

http://www.clinicaltrials.gov/

which takes me to a listing of the hundreds of clinical trials currently under consideration, recruiting, or underway. There are also a few that have been completed. I urge you to take a look at the site. Once there, I searched for "stem cells COPD" and came up with a list of 18 trials in various stages, most of which actually had something to do with COPD.
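
The same search can be repeated programmatically: ClinicalTrials.gov exposes a JSON API. Below is a minimal sketch assuming its v2 endpoint and documented parameter and field names; verify these against the current API documentation before relying on them.

```python
# Minimal sketch: query ClinicalTrials.gov for stem-cell COPD trials.
# Assumes the v2 JSON API; endpoint, parameter and field names should be
# checked against the current documentation.
import requests

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={"query.cond": "COPD", "query.term": "stem cells", "pageSize": 50},
    timeout=30,
)
resp.raise_for_status()

for study in resp.json().get("studies", []):
    ident = study["protocolSection"]["identificationModule"]
    status = study["protocolSection"]["statusModule"]["overallStatus"]
    print(ident["nctId"], status, ident.get("briefTitle", ""))
```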

On the surface, the listing appeared to be just that: information from the various institutions that are seriously looking into the value of stem cells in the treatment of lung disease. And, hopefully, most of them are legitimate.

So, reading through the list, it appears that there are some trials going on, most of them in the US, having to do with stem cell therapy for COPD. However, further digging in at least a couple of the sites revealed the following:

One of the companies, on its web site, appears to me to claim that they are presently administering stem cell treatments, and that they have had success in relieving the symptoms and (at least so far), improving the prognosis of COPD patients. Again to me, that casts a bit of a cloud on the validity of the supposed trial.

Another of the companies talks a lot about various stem cell treatments, but does not mention anything related to COPD. So far, so good...but then, there it was! The information that they have a clinic in Mexico that deals in stem cell therapy for COPD.

Believe me when I say that nothing would make me happier than to discover that someone, somewhere, was having success in healing COPD patients, whether it was from stem cells or from dancing around them dressed in feathers. I fully realize the desperation of someone with a chronic disease. I have been there. However, I totally detest anyone who would take advantage of that desperation to extract money from the patients or their families.

Please be careful.

Uncle Jim

View original post here:
Stem Cell Therapy for COPD

Read More...

Stem Cell Therapy Treatment at Allure Medical Spa in Michigan

October 26th, 2016 1:42 am

Stem Cell Therapy in Michigan

Thank you for visiting. Many people have been awaiting a practical way to get stem cells for various conditions. This site is intended to offer information so you can learn about current options, understand what stem cells are, and determine whether this stem cell therapy is for you.

The term "stem cells" refers to cells in your body that lie dormant, designed to regenerate or repair diseased tissues. "Stem cell therapy" refers to isolating stem cells and deploying them into your body with the intention of regenerating the tissues they are designed to repair.

Your stem cells are your body's natural healing cells. They are recruited by chemical signals emitted by damaged tissues to repair and regenerate your damaged cells. Stem cells derived from your own tissues may well be the next major advance in medicine. Allure Medical Spa has the technology to produce a solution rich with your own stem cells. Under investigational protocols, these can be deployed to treat a number of degenerative conditions and diseases.

At this time, the cost of experimental stem cell treatments is not covered by insurance companies. We believe that our research is university quality. We are patient funded and have no source of grants or pharmaceutical company funding. Although we are a for-profit organization, our goal is not to patent stem cell technology for corporate profit but rather to learn the medical potential of these cells and contribute to the science of regenerative medicine. We have set our fees very reasonably to lower the threshold of access to stem cell medicine. Our fee includes harvesting, isolating, and deploying your own cells. Also, under special conditions, your stem cells may be cryogenically stored for future treatments.

Read more:
Stem Cell Therapy Treatment at Allure Medical Spa in Michigan

Read More...

Stem Cell Therapy for Knee Injuries and Arthritis – StemCell ARTS

October 26th, 2016 1:42 am

Utilizing your own stem cells to help the healing process of injured or degenerated joints

The human body is made of billions of specialized cells that form specific organs like the brain, skin, muscles, tendons, ligaments, joints, and bone. Each day these cells go through a degenerative and regenerative process. As older cells die, new cells are born from stem cells, which have the unique capability of creating multiple types of other cells. However, when tissues are injured, the degenerative process exceeds the regenerative process, resulting in structures that become weaker, painful and less functional. While there are several types of stem cells, those that are best at promoting musculoskeletal healing (tendon, ligament, cartilage and bone) are found in bone marrow. These mesenchymal stem cells, or MSCs, are essential to successful patient outcomes, and at StemCell ARTS we utilize the patented Regenexx Stem Cell Protocol, which is capable of yielding much higher concentrations of these important cells.

Most Commonly Treated Knee Conditions and Injuries

Below is a list of the most common knee injuries and conditions that we treat with stem cell or platelet procedures. This is not an all-inclusive list.

Knee Patient Outcome Data

This outcome analysis of Regenexx bone-marrow-derived stem cell treatments is part of the Regenexx data download of patients who were tracked in the Regenexx advanced patient registry.

Regenexx has published more data on stem cell safety in peer-reviewed medical research for orthopedic applications than any other group worldwide. This is a report of 1,591 patients treated and 1,949 procedures performed with the Regenexx Stem Cell Procedure. Based on our analysis of this treatment registry data, the Regenexx Stem Cell Procedure is about as safe as any typical injection procedure, which is consistent with what we see every day in the clinic.


These non-surgical stem cell injection procedures happen within a single day and may offer a viable alternative for those who are facing surgery or even joint replacement. Patients are typically able to return to normal activity following the procedure and are able to avoid the painful and lengthy rehabilitation periods that are typically required to help restore strength, mobility and range-of-motion following invasive joint surgeries. Lastly, patients are far less vulnerable to the risks of surgeries, such as infection and blood clots.

Modern techniques in today's medicine allow us to withdraw stem cells from bone marrow, concentrate them through a lab process and then re-inject them precisely into the injured tissues in other areas of the body using advanced imaging guidance. Through fluoroscopy and MSK ultrasound, we're able to ensure the cells are being introduced into the exact area of need. When the stem cells are re-injected, they enhance the natural repair process of degenerated and injured tendons, ligaments, and arthritic joints, turning the tables on the natural breakdown process that occurs from aging, overuse and injury.

If you are suffering from a joint injury or a degenerative condition such as osteoarthritis, you may be a good candidate for a stem cell procedure. Please complete the form below and we will immediately send you an email with additional information and next steps in determining whether you're a candidate for these advanced stem cell procedures.

Go here to see the original:
Stem Cell Therapy for Knee Injuries and Arthritis - StemCell ARTS

Read More...

Stem Cell Therapy and Regenerative Medicine

October 26th, 2016 1:42 am

Mayo Clin Proc. 2009 Oct; 84(10): 859–861.

Regenerative Medicine Institute, National Centre for Biomedical Engineering Science, National University of Ireland, Galway

Stem cell therapy has recently progressed from the preclinical to the early clinical trial arena for a variety of disease states. Two review articles published in the current issue of Mayo Clinic Proceedings address the use of stem cells for cardiac repair and bone disorders.1,2 These articles provide state-of-the-art information regarding 2 important aspects of an exciting topic with wide-ranging therapeutic potential in a manner relevant to the Proceedings' core audience of practicing clinicians. Stem cell therapy is potentially applicable to all subspecialties of medicine, but both articles stress that caution is required in interpreting the current role of these technologies in medical practice.

The clinical need for new therapies for cardiac repair is obvious and particularly relevant to conditions such as heart failure, ischemic cardiomyopathy, and myocardial infarction (MI). Studies using cell therapies in humans with these conditions are performed rapidly after demonstration of efficacy in animal models. This progression has occurred without a clear understanding of the basic science underpinning this technology.

Most patients enrolled in clinical studies of cardiac repair using stem cell therapy have had an MI. The clinical rationale for stem cell therapy for MI is to restore cardiac function and thus prevent left ventricular remodeling that can lead to heart failure. Gersh et al1 report that these studies have demonstrated safety, with only modest improvement in cardiac function. Recent meta-analyses have confirmed modest improvements in left ventricular ejection fraction (LVEF) associated with cell therapy after MI.3,4 The findings of some studies have suggested that patients with the most severe MIs benefit the most, but a recent publication of the REGENT trial has shown no benefit from cell therapy, even in patients with LVEF of less than 40%.5 The REGENT trial may have been limited by inadequate power to detect a difference between the study and control groups, but contradictory results have also been observed in previous studies of intracoronary delivery of bone marrow-derived progenitor cells (ASTAMI and REPAIR-AMI).6,7 Substantial progress has been made in understanding the potential of cell therapy in cardiovascular disease, but there is still a dearth of crucial information, such as the optimal cell type; mode of processing of cells; and dose, mode, and timing of cell delivery. Most studies have used unfractionated or mononuclear bone marrow cells that were injected via catheters into the infarct-related artery within a few days of the MI. These limitations may be responsible for the inconsistent outcomes reported in human studies. It would appear that, in patients with preserved LVEF after MI, stem cell therapy provides no benefit, but those with large MIs and reduced LVEF may benefit. However, the modest efficacy outcomes are probably related to poor engraftment and retention of the injected cells in myocardium, issues that require additional preclinical experiments. Future studies should focus on patients with the largest infarcts and on methods to enhance engraftment of stem cells at the site of injury.

See also pages 876 and 893

In another study in the Proceedings, Undale et al2 review the therapeutic potential of stem cell therapy for bone repair and metabolic bone disease. This field is at an earlier stage than cell therapy for cardiac repair in that the numbers of patients studied are lower. These authors review human studies in nonunion of fractures, osteogenesis imperfecta, and hypophosphatasia. In contrast with most studies of cardiac repair in which mixed cell populations have been used, a single cell type, mesenchymal stem cells (MSCs), has been used in studies of bone repair. Although the nature of MSCs is beyond the scope of this editorial, this cell type has considerable potential for treatment of musculoskeletal disorders due to its ability to differentiate to bone and cartilage. In addition, MSCs can be expanded easily in culture and have immunosuppressive properties, which raises the possibility of allogeneic off-the-shelf treatments. Potential problems include culture expansion-induced karyotypic abnormalities, but this has not been observed in all studies.8,9

The current status of adult stem cell therapy could be summarized as having shown enormous potential in preclinical animal studies without the same degree of positive results in early human studies. This may be due to the fact that stem cells, despite their demonstrated resistance to hypoxia,10 have low survival rates at the disease site. Indeed, the relationship between therapeutic effect and numbers of cells administered is highlighted in the review by Undale et al. Genetic modification of stem cells and the use of biomaterial scaffolds to promote engraftment and enhance persistence at the disease site in animal models have augmented the therapeutic effect.11,12

Before stem cell therapy for tissue repair applications can progress, several important topics must be addressed thoroughly. First, the therapeutic mechanism of action needs to be defined. The early assumption was that differentiation of the transplanted cells gave rise to cells with a local phenotype that reconstituted or rebuilt damaged tissue, but little evidence supports this theory. It seems more likely that the concept of engineered tissue is not central to the mode of action and that the repair response depends rather on a dynamic and complex signaling network between the transplanted cells and host cells. This involves secretion of paracrine factors by the transplanted cells, and expression of these factors may be stimulated by the injured host environment.

Second, wide-ranging toxicology studies are needed to enhance our confidence in the use of cellular therapies. Although these therapies are generally considered safe, data on the long-term effects of cell transplant are still lacking. The possibility of tumorigenicity has been raised in a number of studies. For allogeneic transplant, these issues become even more important.

Third, proper standardization and characterization of transplanted cell preparations have not yet been achieved. This is a serious impediment to meaningful interpretation of the results of preclinical and early clinical studies. The issues of heterogeneity and phenotypic changes associated with expansion of MSCs must be addressed more satisfactorily before we can understand the full therapeutic potential of these cells.

Stem cell therapies have not yet become a routine component of clinical practice, but practicing physicians may be asked for advice by patients seeking cures for conditions for which conventional medicine offers no solution. Substantial numbers of patients are pursuing experimental stem cell treatments and in many cases are incurring considerable expense. Both review articles in this issue of Mayo Clinic Proceedings emphasize that stem cell research is at an early stage and that patients should be discouraged from undergoing a form of treatment whose safety and efficacy have not yet been proven.

As previously mentioned, it is vitally important to understand the mechanism underlying the potential benefits of stem cell administration so that new therapeutic paradigms may evolve. A large body of evidence suggests that the cell per se may not be required and that the mechanism of effect is paracrine in nature.13 For instance, MSCs secrete proangiogenic and cytoprotective factors that may be responsible for their therapeutic benefit.14 These paracrine factors may also activate host endogenous stem cells. Understanding the host-stem cell interaction may allow identification of novel therapeutic factors or pathways that can be modulated without the need for cell delivery.

Compared with the concept of paracrine effects, there is less evidence of therapeutic benefit related to differentiation of transplanted adult stem cells to host tissue, but this approach may be important in certain disease states. Future areas of research may focus on the need for differentiation vs paracrine effects to afford a specific therapeutic outcome. If therapeutic benefit depends on differentiation rather than paracrine effects, embryonic stem cells or the recently developed induced pluripotent stem cells may be the optimal choice.15 Although induced pluripotent stem cells lack the ethical problems associated with embryonic stem cells, they have substantial regulatory hurdles to surmount before introduction to the clinical realm because of the factors required for their generation and the risks of teratogenicity.

Stem cells may be considered one of the available tools in the evolving area of regenerative medicine. The goal of regenerative medicine is to promote organ repair and regeneration, thus obviating the need for replacement. Stem cell therapy may participate in this process via paracrine mechanisms or differentiation into native tissues. The target disease will probably influence which of these mechanisms is more important. Successful translation to the clinical realm will require an understanding of disease pathogenesis and stem cell biology and partnership with other disciplines such as medical device technology, biomaterials science, gene therapy, and transplantation immunology. Advanced hybrid technologies arising from such partnerships will represent the next generation of regenerative therapeutics and will assist in overcoming current barriers to clinical translation, such as poor rates of stem cell engraftment and persistence.

Stem cell therapies have demonstrated therapeutic efficacy and benefit in preclinical models, but results in clinical studies have not been impressive. For this reason, stem cell therapies remain in the realm of experimental medicine. The debate continues as to whether clinical trials are justified in the absence of a more complete understanding of the biology underpinning stem cell therapies. Basic science studies to understand the mechanism of effect and the biology of stem cell differentiation must continue.

However, carefully planned and ethically approved clinical trials resulting from a robust preclinical pathway are necessary to advance the field. This will require a programmatic approach that involves partnerships of clinicians, academics, industry, and regulatory authorities with a focus on understanding basic biology that informs a tight linkage between preclinical and clinical studies. Rather than suggesting that clinical trials are premature, such trials should be encouraged as part of multidisciplinary programs in regenerative medicine.

Articles from Mayo Clinic Proceedings are provided here courtesy of The Mayo Foundation for Medical Education and Research

See original here:
Stem Cell Therapy and Regenerative Medicine

Read More...

Types of stem cells and their current uses | Europe’s stem …

October 26th, 2016 1:42 am

Types of stem cells

Not all stem cells come from an early embryo. In fact, we have stem cells in our bodies all our lives. One way to think about stem cells is to divide them into three categories:

You can read in detail about the properties of these different types of stem cells and current research work in our other fact sheets. Here, we compare the progress made towards therapies for patients using different stem cell types, and the challenges or limitations that still need to be addressed.

Embryonic stem cells (ESCs) have unlimited potential to produce the specialised cells of the body, which suggests enormous possibilities for disease research and for providing new therapies. Human ESCs were first grown in the lab in 1998. Recently, human ESCs that meet the strict quality requirements for use in patients have been produced. These clinical-grade human ESCs have been approved for use in a very small number of early clinical trials. One example is a clinical trial carried out by The London Project to Cure Blindness, using ESCs to produce a particular type of eye cell for treatment of patients with age-related macular degeneration. The biotechnology company ACT is also using human ESCs to make cells for patients with an eye disease: Stargardt's macular dystrophy.

Current challenges facing ESC research include ethical considerations and the need to ensure that ESCs fully differentiate into the required specialised cells before transplantation into patients. If the initial clinical trials are successful in terms of safety and patient benefit, ESC research may soon begin to deliver its first clinical applications.

Many tissues in the human body are maintained and repaired throughout life by stem cells. These tissue stem cells are very different from embryonic stem cells.

Blood and skin stem cells: therapy pioneers

Stem cell therapy has been in routine use since the 1970s! Bone marrow transplants are able to replace a patient's diseased blood system for life, thanks to the properties of blood stem cells. Many thousands of patients benefit from this kind of treatment every year, although some do suffer from complications: the donor's immune cells sometimes attack the patient's tissues (graft-versus-host disease, or GVHD), and there is a risk of infection during the treatment because the patient's own bone marrow cells must be killed with chemotherapy before the transplant can take place.

Skin stem cells have been used since the 1980s to grow sheets of new skin in the lab for severe burn patients. However, the new skin has no hair follicles, sweat glands or sebaceous (oil) glands, so the technique is far from perfect and further research is needed to improve it. Currently, the technique is mainly used to save the lives of patients who have third degree burns over very large areas of their bodies and is only carried out in a few clinical centres.

Cord blood stem cells

Cord blood stem cells can be harvested from the umbilical cord of a baby after birth. The cells can be frozen (cryopreserved) in cell banks and are currently used to treat children with cancerous blood disorders such as leukaemia, as well as genetic blood diseases like Fanconi anaemia. Treatment of adults has so far been more challenging, but adults have been successfully treated with double cord transplants. The most commonly held view is that success in adults is restricted by the number of cells that can be obtained from one umbilical cord, but immune response may also play a role. One advantage of cord blood transplants is that they appear to be less likely than conventional bone marrow transplants to be rejected by the immune system, or to result in a reaction such as graft-versus-host disease. Nevertheless, cord blood must still be matched to the patient to be successful.

There are limitations to the types of disease that can be treated: cord blood stem cells can only be used to make new blood cells for blood disease therapies. Although some studies have suggested cord blood may contain stem cells that can produce other types of specialised cells not related to the blood, none of this research has yet been widely reproduced and confirmed. No therapies for non-blood-related diseases have yet been developed using blood stem cells from either cord blood or the adult bone marrow.

Mesenchymal stem cells

Mesenchymal stem cells (MSCs) are found in the bone marrow and are responsible for bone and cartilage repair. They also produce fat cells. Early research suggested that MSCs could differentiate into many other types of cells, but it is now clear that this is not the case. MSCs, like all tissue stem cells, are not pluripotent but multipotent: they can make a limited number of cell types, but NOT all types of cells of the body. Claims have also been made that MSCs can be obtained from a wide variety of tissues in addition to bone marrow. These claims have not been confirmed, and scientists are still debating the exact nature of cells obtained from these other tissues.

No treatments using mesenchymal stem cells are yet proven. Some clinical trials are investigating the safety and effectiveness of MSC treatments for repairing bone or cartilage. Other trials are investigating whether MSCs might help repair blood vessel damage linked to heart attacks or diseases such as critical limb ischaemia, but it is not yet clear whether these treatments will be effective. MSCs do not themselves produce blood vessel cells but might support other cells to repair damage. Indeed MSCs appear to play a crucial role in supporting blood stem cells.

Several claims have been made that MSCs can avoid detection by the immune system and that MSCs taken from one person can be transplanted into another with little or no risk of rejection by the body. The results of other studies have not supported these claims. It has also been suggested that MSCs may be able to affect immune responses in the body to reduce inflammation and help treat transplant rejection or autoimmune diseases. Again, this has yet to be conclusively proven but is an area of ongoing investigation.

Stem cells in the eye

Clinical studies in patients have shown that tissue stem cells taken from an area of the eye called the limbus can be used to repair damage to the cornea, the transparent layer at the front of the eye. If the cornea is severely damaged, for example by a chemical burn, limbal stem cells can be taken from the patient, multiplied in the lab and transplanted back onto the patient's damaged eye(s) to restore sight. However, this can only help patients who have some undamaged limbal stem cells remaining in one of their eyes. The treatment has been shown to be safe and effective in early-stage trials. Further studies with larger numbers of patients must now be carried out before this therapy can be approved by regulatory authorities for widespread use in Europe.

A relatively recent breakthrough in stem cell research is the discovery that specialised adult cells can be reprogrammed into cells that behave like embryonic stem cells, termed induced pluripotent stem cells (iPSCs). The generation of iPSCs has huge implications for disease research and drug development. For example, researchers have generated brain cells from iPSCs made from skin samples belonging to patients with neurological disorders such as Down syndrome or Parkinson's disease. These lab-grown brain cells show signs of the patients' diseases. This has implications both for understanding how the diseases actually happen (researchers can watch the process in a dish) and for searching for and testing new drugs. Such studies give a taste of the wide range of disease research being carried out around the world using iPSCs.

The discovery of iPSCs also raised hopes that cells could be made from a patients own skin in order to treat their disease, avoiding the risk of immune rejection. However, use of iPSCs in cell therapy is theoretical at the moment. The technology is very new and the reprogramming process is not yet well understood. Scientists need to find ways to produce iPSCs safely. Current techniques involve genetic modification, which can sometimes result in the cells forming tumours. The cells must also be shown to completely and reproducibly differentiate into the required types of specialised cells to meet standards suitable for use in patients.

Stem cells are important tools for disease research and offer great potential for use in the clinic. Some adult stem cell sources are currently used for therapy, although they have limitations. The first clinical trials using cells made from embryonic stem cells are just beginning. Meanwhile, induced pluripotent stem cells are already of great use in research, but a lot of work is needed before they can be considered for use in the clinic. An additional avenue of current research is transdifferentiation: converting one type of specialised cell directly into another.

All these different research approaches are important if stem cell research is to achieve its potential for delivering therapies for many debilitating diseases.

See the article here:
Types of stem cells and their current uses | Europe's stem ...

Read More...

Animal Biotechnology | Bioscience Topics | About Bioscience

October 25th, 2016 10:40 am

Related Links http://www.bbsrc.ac.uk

The Biotechnology and Biological Sciences Research Council (BBSRC) is the United Kingdom's principal funder of basic and strategic biological research. To deliver its mission, the BBSRC supports research and training in universities and research centers and promotes knowledge transfer from research to applications in business, industry and policy, and public engagement in the biosciences. The site contains extensive articles on the ethical and social issues involved in animal biotechnology.

The Department of Agriculture (USDA) provides leadership on food, agriculture, natural resources and related issues through public policy, the best available science and efficient management. The National Institute of Food and Agriculture is part of the USDA; its site contains information about the science behind animal biotechnology and a glossary of terms. Related topics also are searchable, including animal breeding, genetics and many others.

The Pew Initiative on Food and Biotechnology is an independent, objective source of information on agricultural biotechnology. Funded by a grant from the Pew Charitable Trusts to the University of Richmond, it advocates neither for nor against agricultural biotechnology. Instead, the initiative is committed to providing information and encouraging dialogue so consumers and policy-makers can make their own informed decisions.

Animal biotechnology is the use of science and engineering to modify living organisms. The goal is to make products, to improve animals and to develop microorganisms for specific agricultural uses.

Examples of animal biotechnology include creating transgenic animals (animals with one or more genes introduced by human intervention), using gene knockout technology to make animals with a specific inactivated gene and producing nearly identical animals by somatic cell nuclear transfer (or cloning).

The animal biotechnology in use today is built on a long history. Some of the earliest biotechnology consisted of traditional breeding techniques dating back to 5000 B.C.E. Such techniques include crossing diverse strains of animals (known as hybridizing) to produce greater genetic variety. The offspring from these crosses then are bred selectively to produce the greatest number of desirable traits. For example, female horses have been bred with male donkeys to produce mules, and male horses have been bred with female donkeys to produce hinnies, for use as work animals, for the past 3,000 years. This method continues to be used today.

The modern era of biotechnology began in 1953, when American biochemist James Watson and British biophysicist Francis Crick presented their double-helix model of DNA. That was followed by Swiss microbiologist Werner Arber's discovery in the 1960s of special enzymes, called restriction enzymes, in bacteria. These enzymes cut the DNA strands of any organism at precise points. In 1973, American geneticist Stanley Cohen and American biochemist Herbert Boyer removed a specific gene from one bacterium and inserted it into another using restriction enzymes. That event marked the beginning of recombinant DNA technology, or genetic engineering. In 1977, genes from other organisms were transferred to bacteria, an achievement that led eventually to the first transfer of a human gene.

Animal biotechnology in use today is based on the science of genetic engineering. Under the umbrella of genetic engineering exist other technologies, such as transgenics and cloning, that also are used in animal biotechnology.

Transgenics (also known as recombinant DNA) is the transferal of a specific gene from one organism to another. Gene splicing is used to introduce one or more genes of an organism into a second organism. A transgenic animal is created once the second organism incorporates the new DNA into its own genetic material.

In gene splicing, DNA cannot be transferred directly from its original organism, the donor, to the recipient organism, or the host. Instead, the donor DNA must be cut and pasted, or recombined, into a compatible fragment of DNA from a vector, an organism that can carry the donor DNA into the host. The host organism often is a rapidly multiplying microorganism such as a harmless bacterium, which serves as a factory where the recombined DNA can be duplicated in large quantities. The subsequently produced protein then can be removed from the host and used as a genetically engineered product in humans, other animals, plants, bacteria or viruses. The donor DNA can also be introduced directly into an organism by techniques such as injection through the cell walls of plants or into the fertilized egg of an animal.
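
To make the cut-and-paste step concrete, here is a minimal Python sketch that treats DNA as a plain string and splices donor DNA into a vector at a restriction site. The EcoRI recognition sequence GAATTC (with the cut after the first G) is real; the plasmid and gene sequences below are invented for illustration, and the model ignores double strands, sticky ends and the ligase step.

```python
# Toy model of gene splicing: cut a vector at a restriction site and
# paste donor DNA into the gap. DNA is modelled as a plain string of
# bases; real cloning involves double strands, sticky ends and ligase.

ECORI_SITE = "GAATTC"  # EcoRI recognition sequence
CUT_OFFSET = 1         # EcoRI cuts between the G and the first A

def splice(vector: str, donor: str, site: str = ECORI_SITE,
           offset: int = CUT_OFFSET) -> str:
    """Insert donor DNA into the vector at its first restriction site."""
    pos = vector.find(site)
    if pos == -1:
        raise ValueError("vector carries no recognition site for this enzyme")
    cut = pos + offset
    return vector[:cut] + donor + vector[cut:]

# Hypothetical sequences, for illustration only.
plasmid = "ATGCCGAATTCGGTA"
gene = "AAATTTCCCGGG"
print(splice(plasmid, gene))  # -> ATGCCGAAATTTCCCGGGAATTCGGTA
```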

This transferring of genes alters the characteristics of the organism by changing its protein makeup. Proteins, including enzymes and hormones, perform many vital functions in organisms. Individual genes direct an animals characteristics through the production of proteins.

Scientists use reproductive cloning techniques to produce multiple copies of mammals that are nearly identical copies of other animals, including transgenic animals, genetically superior animals and animals that produce high quantities of milk or have some other desirable trait. To date, cattle, sheep, pigs, goats, horses, mules, cats, rats and mice have been cloned, beginning with the first cloned animal, a sheep named Dolly, in 1996.

Reproductive cloning begins with somatic cell nuclear transfer (SCNT). In SCNT, scientists remove the nucleus from an egg cell (oocyte) and replace it with a nucleus from a donor adult somatic cell, which is any cell in the body except for an oocyte or sperm. For reproductive cloning, the embryo is implanted into a uterus of a surrogate female, where it can develop into a live being.

In addition to the use of transgenics and cloning, scientists can use gene knockout technology to inactivate, or knock out, a specific gene. It is this technology that creates a possible source of replacement organs for humans. The process of transplanting cells, tissues or organs from one species to another is referred to as xenotransplantation. Currently, the pig is the major animal being considered as a viable organ donor to humans. Unfortunately, pig cells and human cells are not immunologically compatible. Pigs, like almost all mammals, have markers on their cells that enable the human immune system to recognize them as foreign and reject them. Genetic engineering is used to knock out the pig gene responsible for the protein that forms the marker on the pig cells.

Animal biotechnology has many potential uses. Since the early 1980s, transgenic animals have been created with increased growth rates, enhanced lean muscle mass, enhanced resistance to disease or improved use of dietary phosphorus to lessen the environmental impacts of animal manure. Transgenic poultry, swine, goats and cattle that generate large quantities of human proteins in eggs, milk, blood or urine also have been produced, with the goal of using these products as human pharmaceuticals. Human pharmaceutical proteins include enzymes, clotting factors, albumin and antibodies. The major factor limiting the widespread use of transgenic animals in agricultural production systems is their relatively inefficient production rate (a success rate of less than 10 percent).

A specific example of these applications of animal biotechnology is the transfer of the growth hormone gene of rainbow trout directly into carp eggs. The resulting transgenic carp produce both carp and rainbow trout growth hormones and grow to be one-third larger than normal carp. Another example is the use of genetically engineered bacteria to clone large quantities of the gene responsible for a cattle growth hormone. The hormone is extracted from the bacteria, purified and injected into dairy cows, increasing their milk production by 10 to 15 percent. That growth hormone is called bovine somatotropin or BST.

Another major application of animal biotechnology is the use of animal organs in humans. Pigs currently are used to supply heart valves for insertion into humans, but they also are being considered as a potential solution to the severe shortage in human organs available for transplant procedures.

While predicting the future is inherently risky, some things can be said with certainty about the future of animal biotechnology. The government agencies involved in the regulation of animal biotechnology, mainly the Food and Drug Administration (FDA), likely will rule on pending policies and establish processes for the commercial uses of products created through the technology. In fact, as of March 2006, the FDA was expected to rule in the next few months on whether to approve meat and dairy products from cloned animals for sale to the public. If these animals and animal products are approved for human consumption, several companies reportedly are ready to sell milk, and perhaps meat, from cloned animals, most likely cattle and swine. It also is expected that technologies will continue to be developed in the field, with much hope for advances in the use of animal organs in human transplant operations.

The potential benefits of animal biotechnology are numerous and include enhanced nutritional content of food for human consumption; a more abundant, cheaper and varied food supply; agricultural land-use savings; a decrease in the number of animals needed for the food supply; improved health of animals and humans; development of new, low-cost disease treatments for humans; and increased understanding of human disease.

Yet despite these potential benefits, several areas of concern exist around the use of biotechnology in animals. To date, a majority of the American public is uncomfortable with genetic modifications to animals.

According to a survey conducted by the Pew Initiative on Food and Biotechnology, 58 percent of those polled said they opposed scientific research on the genetic engineering of animals. And in a Gallup poll conducted in May 2004, 64 percent of Americans polled said they thought it was morally wrong to clone animals.

Concerns surrounding the use of animal biotechnology include the unknown potential health effects to humans from food products created by transgenic or cloned animals, the potential effects on the environment and the effects on animal welfare.

Before animal biotechnology will be used widely by animal agriculture production systems, additional research will be needed to determine if the benefits of animal biotechnology outweigh these potential risks.

The main question posed about the safety of food produced through animal biotechnology for human consumption is, "Is it safe to eat?" But answering that question isn't simple. Other questions must be answered first, such as, "What substances expressed as a result of the genetic modification are likely to remain in the food?" Despite these questions, the National Academies of Science (NAS) released a report titled Animal Biotechnology: Science-Based Concerns stating that the overall concern level for food safety was determined to be low. Specifically, the report listed three specific food concerns: allergens, bioactivity and the toxicity of unintended expression products.

The potential for new allergens to be expressed in the process of creating foods from genetically modified animals is a real and valid concern, because the process introduces new proteins. While food allergens are not a new issue, the difficulty comes in how to anticipate these adequately, because they can only be detected once a person is exposed and experiences a reaction.

Another food safety issue, bioactivity, asks, "Will putting a functional protein like a growth hormone in an animal affect the person who consumes food from that animal?" As of May 2006, scientists could not say for sure whether such proteins will.

Finally, concern exists about the toxicity of unintended expression products in the animal biotechnology process. While the risk is considered low, no data are available. The NAS report stated that it still needs to be proven that the nutritional profile of these foods does not change and that no unintended and potentially harmful expression products appear.

Another major concern surrounding the use of animal biotechnology is the potential for negative impact to the environment. These potential harms include the alteration of the ecologic balance regarding feed sources and predators, the introduction of transgenic animals that alter the health of existing animal populations and the disruption of reproduction patterns and their success.

To assess the risk of these environmental harms, many more questions must be answered, such as: What is the possibility the altered animal will enter the environment? Will the animal's introduction change the ecological system? Will the animal become established in the environment? And will it interact with and affect the success of other animals in the new community? Because of the many uncertainties involved, such an assessment is challenging to make.

To illustrate a potential environmental harm, consider that if transgenic salmon with genes engineered to accelerate growth were released into the natural environment, they could compete more successfully for food and mates than wild salmon. Thus, there also is concern that genetically engineered organisms will escape and reproduce in the natural environment. It is feared existing species could be eliminated, thus upsetting the natural balance of organisms.

The regulation of animal biotechnology currently is performed under existing government agencies. To date, no new regulations or laws have been enacted to deal with animal biotechnology and related issues. The main governing body for animal biotechnology and its products is the FDA. Specifically, these products fall under the new animal drug provisions of the Food, Drug, and Cosmetic Act (FDCA). In this use, the introduced genetic construct is considered the drug. This lack of concrete regulatory guidance has produced many questions, especially because the process for bringing genetically engineered animals to market remains unknown.

Currently, the only genetically engineered animal on the market is the GloFish, a transgenic aquarium fish engineered to glow in the dark. It has not been subject to regulation by the FDA, however, because it is not believed to be a threat to the environment.

Many people question the use of an agency that was designed specifically for drugs to regulate live animals. The agency's strict confidentiality provisions and lack of an environmental mandate in the FDCA also are concerns. It still is unclear how the agency's provisions will be interpreted for animals and how multiple agencies will work together in the regulatory system.

When animals are genetically engineered for biomedical research purposes (as pigs are, for example, in organ transplantation studies), their care and use is carefully regulated by the Department of Agriculture. In addition, if federal funds are used to support the research, the work further is regulated by the Public Health Service Policy on Humane Care and Use of Laboratory Animals.

Whether products generated from genetically engineered animals should be labeled is yet another controversy surrounding animal biotechnology. Those opposed to mandatory labeling say it violates the government's traditional focus on regulating products, not processes. If a product of animal biotechnology has been proven scientifically by the FDA to be safe for human consumption and the environment and not materially different from similar products produced via conventional means, these individuals say it is unfair and without scientific rationale to single out that product for labeling solely because of the process by which it was made.

On the other hand, those in favor of mandatory labeling argue labeling is a consumer right-to-know issue. They say consumers need full information about products in the marketplace, including the processes used to make those products, not for food safety or scientific reasons but so they can make choices in line with their personal ethics.

On average, it takes seven to nine years and an investment of about $55 million to develop, test and market a new genetically engineered product. Consequently, nearly all researchers involved in animal biotechnology are protecting their investments and intellectual property through the patent system. In 1988, the first patent was issued on a transgenic animal, a strain of laboratory mice whose cells were engineered to contain a cancer-predisposing gene. Some people, however, are opposed ethically to the patenting of life forms, because it makes organisms the property of companies. Other people are concerned about its impact on small farmers. Those opposed to using the patent system for animal biotechnology have suggested using breed registries to protect intellectual property.

Ethical and social considerations surrounding animal biotechnology are of significant importance. This especially is true because researchers and developers worry the future market success of any products derived from cloned or genetically engineered animals will depend partly on the public's acceptance of those products.

Animal biotechnology clearly has its skeptics as well as its outright opponents. Strict opponents think there is something fundamentally immoral about the processes of transgenics and cloning. They liken it to "playing God." Moreover, they often oppose animal biotechnology on the grounds that it is unnatural. Its processes, they say, go against nature and, in some cases, cross natural species boundaries.

Still others question the need to genetically engineer animals. Some wonder if it is done so companies can increase profits and agricultural production. They believe a compelling need should exist for the genetic modification of animals and that we should not use animals only for our own wants and needs. And yet others believe it is unethical to stifle technology with the potential to save human lives.

While the field of ethics presents more questions than it answers, it is clear animal biotechnology creates much discussion and debate among scientists, researchers and the American public. Two main areas of debate focus on the welfare of animals involved and the religious issues related to animal biotechnology.

Perhaps the most controversy and debate regarding animal biotechnology surrounds the animals themselves. While it has been noted that animals might, in fact, benefit from the use of animal biotechnology (through improved health, for example), the majority of discussion is about the known and unknown potential negative impacts on animal welfare through the process.

For example, calves and lambs produced through in vitro fertilization or cloning tend to have higher birth weights and longer gestation periods, which leads to difficult births that often require cesarean sections. In addition, some of the biotechnology techniques in use today are extremely inefficient at producing fetuses that survive. Of the transgenic animals that do survive, many do not express the inserted gene properly, often resulting in anatomical, physiological or behavioral abnormalities. There also is a concern that proteins designed to produce a pharmaceutical product in the animals milk might find their way to other parts of the animals body, possibly causing adverse effects.

Animal telos is a concept derived from Aristotle and refers to an animal's fundamental nature. Disagreement exists as to whether it is ethical to change an animal's telos through transgenesis. For example, is it ethical to create genetically modified chickens that can tolerate living in small cages? Those opposed to the concept say it is a clear sign we have gone too far in changing that animal.

Those unopposed to changing an animal's telos, however, argue it could benefit animals by fitting them for living conditions for which they are not naturally suited. In this way, scientists could create animals that feel no pain.

Religion plays a crucial part in the way some people view animal biotechnology. For some people, these technologies are considered blasphemous. In effect, God has created a perfect, natural order, they say, and it is sinful to try to improve that order by manipulating the basic ingredient of all life, DNA. Some religions place great importance on the integrity of species, and as a result, those religions' followers strongly oppose any effort to change animals through genetic modification.

Not all religious believers make these assertions, however, and different believers of the same religion might hold differing views on the subject. For example, Christians do not oppose animal biotechnology unanimously. In fact, some Christians support animal biotechnology, saying the Bible teaches humanity's dominion over nature. Some modern theologians even see biotechnology as a challenging, positive opportunity for us to work with God as co-creators.

Transgenic animals can pose problems for some religious groups. For example, Muslims, Sikhs and Hindus are forbidden to eat certain foods. Such religious requirements raise basic questions about the identity of animals and their genetic makeup. If, for example, a small amount of genetic material from a fish is introduced into a melon (in order to allow it to grow in lower temperatures), does that melon become "fishy" in any meaningful sense? Some would argue all organisms share common genetic material, so the melon would not contain any of the fish's identity. Others, however, believe the transferred genes are exactly what make the animal distinctive; therefore, the melon would be forbidden to be eaten as well.

Follow this link:
Animal Biotechnology | Bioscience Topics | About Bioscience

Read More...

History of biotechnology – Wikipedia

October 21st, 2016 6:41 pm

Biotechnology is the application of scientific and engineering principles to the processing of materials by biological agents to provide goods and services.[1] From its inception, biotechnology has maintained a close relationship with society. Although now most often associated with the development of drugs, historically biotechnology has been principally associated with food, addressing such issues as malnutrition and famine. The history of biotechnology begins with zymotechnology, which commenced with a focus on brewing techniques for beer. By World War I, however, zymotechnology would expand to tackle larger industrial issues, and the potential of industrial fermentation gave rise to biotechnology. However, both the single-cell protein and gasohol projects failed to progress due to varying issues including public resistance, a changing economic scene, and shifts in political power.

Yet the formation of a new field, genetic engineering, would soon bring biotechnology to the forefront of science in society, and the intimate relationship between the scientific community, the public, and the government would ensue. These debates gained exposure in 1975 at the Asilomar Conference, where Joshua Lederberg was the most outspoken supporter for this emerging field in biotechnology. By as early as 1978, with the development of synthetic human insulin, Lederberg's claims would prove valid, and the biotechnology industry grew rapidly. Each new scientific advance became a media event designed to capture public support, and by the 1980s, biotechnology grew into a promising real industry. In 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA), but this number would skyrocket to over 125 by the end of the 1990s.

The field of genetic engineering remains a heated topic of discussion in today's society with the advent of gene therapy, stem cell research, cloning, and genetically modified food. While it seems only natural nowadays to link pharmaceutical drugs as solutions to health and societal problems, this relationship of biotechnology serving social needs began centuries ago.

Biotechnology arose from the field of zymotechnology or zymurgy, which began as a search for a better understanding of industrial fermentation, particularly beer. Beer was an important industrial, and not just social, commodity. In late 19th-century Germany, brewing contributed as much to the gross national product as steel, and taxes on alcohol proved to be significant sources of revenue to the government.[2] In the 1860s, institutes and remunerative consultancies were dedicated to the technology of brewing. The most famous was the private Carlsberg Institute, founded in 1875, which employed Emil Christian Hansen, who pioneered the pure yeast process for the reliable production of consistent beer. Less well known were private consultancies that advised the brewing industry. One of these, the Zymotechnic Institute, was established in Chicago by the German-born chemist John Ewald Siebel.

The heyday and expansion of zymotechnology came in World War I in response to industrial needs to support the war. Max Delbrück grew yeast on an immense scale during the war to meet 60 percent of Germany's animal feed needs.[2] Compounds of another fermentation product, lactic acid, made up for a lack of hydraulic fluid, glycerol. On the Allied side, the Russian chemist Chaim Weizmann used starch to eliminate Britain's shortage of acetone, a key raw material for cordite, by fermenting maize to acetone.[3] The industrial potential of fermentation was outgrowing its traditional home in brewing, and "zymotechnology" soon gave way to "biotechnology."

With food shortages spreading and resources fading, some dreamed of a new industrial solution. The Hungarian Károly Ereky coined the word "biotechnology" in Hungary during 1919 to describe a technology based on converting raw materials into a more useful product. He built a slaughterhouse for a thousand pigs and also a fattening farm with space for 50,000 pigs, raising over 100,000 pigs a year. The enterprise was enormous, becoming one of the largest and most profitable meat and fat operations in the world. In a book entitled Biotechnologie, Ereky further developed a theme that would be reiterated through the 20th century: biotechnology could provide solutions to societal crises, such as food and energy shortages. For Ereky, the term "biotechnologie" indicated the process by which raw materials could be biologically upgraded into socially useful products.[4]

This catchword spread quickly after the First World War, as "biotechnology" entered German dictionaries and was taken up abroad by business-hungry private consultancies as far away as the United States. In Chicago, for example, the coming of prohibition at the end of World War I encouraged biological industries to create opportunities for new fermentation products, in particular a market for nonalcoholic drinks. Emil Siebel, the son of the founder of the Zymotechnic Institute, broke away from his father's company to establish his own called the "Bureau of Biotechnology," which specifically offered expertise in fermented nonalcoholic drinks.[1]

The belief that the needs of an industrial society could be met by fermenting agricultural waste was an important ingredient of the "chemurgic movement."[4] Fermentation-based processes generated products of ever-growing utility. In the 1940s, penicillin was the most dramatic. While it was discovered in England, it was produced industrially in the U.S. using a deep fermentation process originally developed in Peoria, Illinois.[5] The enormous profits and the public expectations penicillin engendered caused a radical shift in the standing of the pharmaceutical industry. Doctors used the phrase "miracle drug", and the historian of its wartime use, David Adams, has suggested that to the public penicillin represented the perfect health that went together with the car and the dream house of wartime American advertising.[2] Beginning in the 1950s, fermentation technology also became advanced enough to produce steroids on industrially significant scales.[6] Of particular importance was the improved semisynthesis of cortisone, which simplified the old 31-step synthesis to 11 steps.[7] This advance was estimated to reduce the cost of the drug by 70%, making the medicine inexpensive and available.[8] Today biotechnology still plays a central role in the production of these compounds and likely will for years to come.[9][10]

Even greater expectations of biotechnology were raised during the 1960s by a process that grew single-cell protein. When the so-called protein gap threatened world hunger, producing food locally by growing it from waste seemed to offer a solution. It was the possibilities of growing microorganisms on oil that captured the imagination of scientists, policy makers, and commerce.[1] Major companies such as British Petroleum (BP) staked their futures on it. In 1962, BP built a pilot plant at Cap de Lavera in Southern France to publicize its product, Toprina.[1] Initial research work at Lavera was done by Alfred Champagnat.[11] In 1963, construction started on BP's second pilot plant at Grangemouth Oil Refinery in Britain.[11]

As there was no well-accepted term to describe the new foods, in 1966 the term "single-cell protein" (SCP) was coined at MIT to provide an acceptable and exciting new title, avoiding the unpleasant connotations of microbial or bacterial.[1]

The "food from oil" idea became quite popular by the 1970s, when facilities for growing yeast fed by n-paraffins were built in a number of countries. The Soviets were particularly enthusiastic, opening large "BVK" (belkovo-vitaminny kontsentrat, i.e., "protein-vitamin concentrate") plants next to their oil refineries in Kstovo (1973) [12][13] and Kirishi (1974).[citation needed]

By the late 1970s, however, the cultural climate had completely changed, as the growth in SCP interest had taken place against a shifting economic and cultural scene. First, the price of oil rose catastrophically in 1974, so that its cost per barrel was five times greater than it had been two years earlier. Second, despite continuing hunger around the world, anticipated demand also began to shift from humans to animals. The program had begun with the vision of growing food for Third World people, yet the product was instead launched as an animal food for the developed world. The rapidly rising demand for animal feed made that market appear economically more attractive. The ultimate downfall of the SCP project, however, came from public resistance.[1]

This was particularly vocal in Japan, where production came closest to fruition. For all their enthusiasm for innovation and traditional interest in microbiologically produced foods, the Japanese were the first to ban the production of single-cell proteins. The Japanese ultimately were unable to separate the idea of their new "natural" foods from the far-from-natural connotation of oil.[1] These arguments were made against a background of suspicion of heavy industry, in which anxiety over minute traces of petroleum was expressed. Thus, public resistance to an unnatural product led to the end of the SCP project as an attempt to solve world hunger.

Also, in 1989 in the USSR, public environmental concerns led the government to close down (or convert to different technologies) all eight paraffin-fed yeast plants that the Soviet Ministry of Microbiological Industry operated by that time.[citation needed]

In the late 1970s, biotechnology offered another possible solution to a societal crisis. The escalation in the price of oil in 1974 increased the cost of the Western world's energy tenfold.[1] In response, the U.S. government promoted the production of gasohol, gasoline with 10 percent alcohol added, as an answer to the energy crisis.[2] In 1979, when the Soviet Union sent troops to Afghanistan, the Carter administration cut off its supplies of agricultural produce in retaliation, creating an agricultural surplus in the U.S. As a result, fermenting the agricultural surpluses to synthesize fuel seemed to be an economical solution to the shortage of oil threatened by the Iran-Iraq war. Before the new direction could be taken, however, the political wind changed again: the Reagan administration came to power in January 1981 and, with the declining oil prices of the 1980s, ended support for the gasohol industry before it was born.[1]

Biotechnology seemed to be the solution for major social problems, including world hunger and energy crises. In the 1960s, radical measures would be needed to meet world starvation, and biotechnology seemed to provide an answer. However, the solutions proved to be too expensive and socially unacceptable, and solving world hunger through SCP food was dismissed. In the 1970s, the food crisis was succeeded by the energy crisis, and here too, biotechnology seemed to provide an answer. But once again, costs proved prohibitive as oil prices slumped in the 1980s. Thus, in practice, the implications of biotechnology were not fully realized in these situations. But this would soon change with the rise of genetic engineering.

The origins of biotechnology culminated with the birth of genetic engineering. There were two key events that have come to be seen as scientific breakthroughs beginning the era that would unite genetics with biotechnology. One was the 1953 discovery of the structure of DNA, by Watson and Crick, and the other was the 1973 discovery by Cohen and Boyer of a recombinant DNA technique by which a section of DNA was cut from the plasmid of an E. coli bacterium and transferred into the DNA of another.[14] This approach could, in principle, enable bacteria to adopt the genes and produce proteins of other organisms, including humans. Popularly referred to as "genetic engineering," it came to be defined as the basis of new biotechnology.

Genetic engineering proved to be a topic that thrust biotechnology into the public scene, and the interaction between scientists, politicians, and the public defined the work that was accomplished in this area. Technical developments during this time were revolutionary and at times frightening. In December 1967, the first heart transplant by Christiaan Barnard reminded the public that the physical identity of a person was becoming increasingly problematic. While poetic imagination had always seen the heart at the center of the soul, now there was the prospect of individuals being defined by other people's hearts.[1] During the same month, Arthur Kornberg announced that he had managed to biochemically replicate a viral gene. "Life had been synthesized," said the head of the National Institutes of Health.[1] Genetic engineering was now on the scientific agenda, as it was becoming possible to identify genetic characteristics with diseases such as beta thalassemia and sickle-cell anemia.

Responses to scientific achievements were colored by cultural skepticism. Scientists and their expertise were looked upon with suspicion. In 1968, an immensely popular work, The Biological Time Bomb, was written by the British journalist Gordon Rattray Taylor. The author's preface saw Kornberg's discovery of replicating a viral gene as a route to lethal doomsday bugs. The publisher's blurb for the book warned that within ten years, "You may marry a semi-artificial man or woman... choose your children's sex... tune out pain... change your memories... and live to be 150, if the scientific revolution doesn't destroy us first."[1] The book ended with a chapter called "The Future If Any." While it is rare for current science to be represented in the movies, in this period of "Star Trek", science fiction and science fact seemed to be converging. "Cloning" became a popular word in the media. Woody Allen satirized the cloning of a person from a nose in his 1973 movie Sleeper, and cloning Adolf Hitler from surviving cells was the theme of the 1976 novel by Ira Levin, The Boys from Brazil.[1]

In response to these public concerns, scientists, industry, and governments increasingly linked the power of recombinant DNA to the immensely practical functions that biotechnology promised. One of the key scientific figures that attempted to highlight the promising aspects of genetic engineering was Joshua Lederberg, a Stanford professor and Nobel laureate. While in the 1960s "genetic engineering" described eugenics and work involving the manipulation of the human genome, Lederberg stressed research that would involve microbes instead.[1] Lederberg emphasized the importance of focusing on curing living people. Lederberg's 1963 paper, "Biological Future of Man" suggested that, while molecular biology might one day make it possible to change the human genotype, "what we have overlooked is euphenics, the engineering of human development."[1] Lederberg constructed the word "euphenics" to emphasize changing the phenotype after conception rather than the genotype which would affect future generations.

With the discovery of recombinant DNA by Cohen and Boyer in 1973, the idea that genetic engineering would have major human and societal consequences was born. In July 1974, a group of eminent molecular biologists headed by Paul Berg wrote to Science suggesting that the consequences of this work were so potentially destructive that there should be a pause until its implications had been thought through.[1] This suggestion was explored at a meeting in February 1975 at California's Monterey Peninsula, forever immortalized by the location, Asilomar. Its historic outcome was an unprecedented call for a halt in research until it could be regulated in such a way that the public need not be anxious, and it led to a 16-month moratorium until National Institutes of Health (NIH) guidelines were established.

Joshua Lederberg was the leading exception in emphasizing, as he had for years, the potential benefits. At Asilomar, in an atmosphere favoring control and regulation, he circulated a paper countering the pessimism and fears of misuses with the benefits conferred by successful use. He described "an early chance for a technology of untold importance for diagnostic and therapeutic medicine: the ready production of an unlimited variety of human proteins. Analogous applications may be foreseen in fermentation process for cheaply manufacturing essential nutrients, and in the improvement of microbes for the production of antibiotics and of special industrial chemicals."[1] In June 1976, the 16-month moratorium on research expired with the Director's Advisory Committee (DAC) publication of the NIH guidelines of good practice. They defined the risks of certain kinds of experiments and the appropriate physical conditions for their pursuit, as well as a list of things too dangerous to perform at all. Moreover, modified organisms were not to be tested outside the confines of a laboratory or allowed into the environment.[14]

Atypical as Lederberg was at Asilomar, his optimistic vision of genetic engineering would soon lead to the development of the biotechnology industry. Over the next two years, as public concern over the dangers of recombinant DNA research grew, so too did interest in its technical and practical applications. Curing genetic diseases remained in the realms of science fiction, but it appeared that producing simple human proteins could be good business. Insulin, one of the smaller, best characterized and understood proteins, had been used in treating type 1 diabetes for a half century. It had been extracted from animals in a chemically slightly different form from the human product. Yet, if one could produce synthetic human insulin, one could meet an existing demand with a product whose approval would be relatively easy to obtain from regulators. In the period 1975 to 1977, synthetic "human" insulin represented the aspirations for new products that could be made with the new biotechnology. Microbial production of synthetic human insulin was finally announced in September 1978 by a startup company, Genentech.[15] That company did not commercialize the product itself; instead, it licensed the production method to Eli Lilly and Company. 1978 also saw the first application for a patent on a gene, the gene which produces human growth hormone, by the University of California, thus introducing the legal principle that genes could be patented. Since that filing, almost 20% of the more than 20,000 genes in human DNA have been patented.[citation needed]

The radical shift in the connotation of "genetic engineering" from an emphasis on the inherited characteristics of people to the commercial production of proteins and therapeutic drugs was nurtured by Joshua Lederberg. His broad concerns since the 1960s had been stimulated by enthusiasm for science and its potential medical benefits. Countering calls for strict regulation, he expressed a vision of potential utility. Against a belief that new techniques would entail unmentionable and uncontrollable consequences for humanity and the environment, a growing consensus on the economic value of recombinant DNA emerged.[citation needed]

With ancestral roots in industrial microbiology that date back centuries, the new biotechnology industry grew rapidly beginning in the mid-1970s. Each new scientific advance became a media event designed to capture investment confidence and public support.[15] Although market expectations and social benefits of new products were frequently overstated, many people were prepared to see genetic engineering as the next great advance in technological progress. By the 1980s, biotechnology characterized a nascent real industry, providing titles for emerging trade organizations such as the Biotechnology Industry Organization (BIO).

The main focus of attention after insulin was the potential profit makers in the pharmaceutical industry: human growth hormone and what promised to be a miraculous cure for viral diseases, interferon. Cancer was a central target in the 1970s because increasingly the disease was linked to viruses.[14] By 1980, a new company, Biogen, had produced interferon through recombinant DNA. The emergence of interferon and the possibility of curing cancer raised money in the community for research and increased the enthusiasm of an otherwise uncertain and tentative society. Moreover, to the 1970s plight of cancer was added AIDS in the 1980s, offering an enormous potential market for a successful therapy, and more immediately, a market for diagnostic tests based on monoclonal antibodies.[16] By 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA): synthetic insulin, human growth hormone, hepatitis B vaccine, alpha-interferon, and tissue plasminogen activator (TPa), for lysis of blood clots. By the end of the 1990s, however, 125 more genetically engineered drugs would be approved.[16]

Genetic engineering also reached the agricultural front. There has been tremendous progress since the market introduction of the genetically engineered Flavr Savr tomato in 1994.[16] Ernst and Young reported that in 1998, 30% of the U.S. soybean crop was expected to be from genetically engineered seeds. In 1998, about 30% of the US cotton and corn crops were also expected to be products of genetic engineering.[16]

Genetic engineering in biotechnology stimulated hopes for both therapeutic proteins, drugs and biological organisms themselves, such as seeds, pesticides, engineered yeasts, and modified human cells for treating genetic diseases. From the perspective of its commercial promoters, scientific breakthroughs, industrial commitment, and official support were finally coming together, and biotechnology became a normal part of business. No longer were the proponents for the economic and technological significance of biotechnology the iconoclasts.[1] Their message had finally become accepted and incorporated into the policies of governments and industry.

According to Burrill and Company, an industry investment bank, over $350 billion has been invested in biotech since the emergence of the industry, and global revenues rose from $23 billion in 2000 to more than $50 billion in 2005. The greatest growth has been in Latin America but all regions of the world have shown strong growth trends. By 2007 and into 2008, though, a downturn in the fortunes of biotech emerged, at least in the United Kingdom, as the result of declining investment in the face of failure of biotech pipelines to deliver and a consequent downturn in return on investment.[17]
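
As a rough check on the revenue figures above, the implied compound annual growth rate over 2000-2005 can be worked out directly (assuming the $23 billion and $50 billion endpoints and the five-year interval):

```latex
\[
\text{CAGR} = \left(\frac{R_{2005}}{R_{2000}}\right)^{1/5} - 1
            = \left(\frac{50}{23}\right)^{1/5} - 1 \approx 0.168
\]
```

That is roughly 17 percent per year, which puts the scale of the later downturn into perspective.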

Excerpt from:
History of biotechnology - Wikipedia

Read More...

Veterinary medicine – Wikipedia

October 20th, 2016 7:45 pm

"Animal hospital" redirects here. For the BBC television show, see Animal Hospital.

Veterinary medicine is the branch of medicine that deals with the prevention, diagnosis and treatment of disease, disorder and injury in non-human animals. The scope of veterinary medicine is wide, covering all animal species, both domesticated and wild, with a wide range of conditions which can affect different species.

Veterinary medicine is widely practiced, both with and without professional supervision. Professional care is most often led by a veterinary physician (also known as a vet, veterinary surgeon or veterinarian), but also by paraveterinary workers such as veterinary nurses or technicians. This can be augmented by other paraprofessionals with specific specialisms such as animal physiotherapy or dentistry, and species relevant roles such as farriers.

Veterinary science helps human health through the monitoring and control of zoonotic disease (infectious disease transmitted from non-human animals to humans), food safety, and indirectly through human applications of basic medical research. Veterinary professionals also help to maintain the food supply through livestock health monitoring and treatment, and support mental health by keeping pets healthy and long-lived. Veterinary scientists often collaborate with epidemiologists and other health or natural scientists, depending on the type of work. Ethically, veterinarians are usually obliged to look after animal welfare.

The Egyptian Papyrus of Kahun (1900 BCE) and Vedic literature in ancient India offer some of the first written records of veterinary medicine (see also Shalihotra). The edicts of Asoka, the first Buddhist emperor of India, read: "Everywhere King Piyadasi (Asoka) made two kinds of medicine available, medicine for people and medicine for animals. Where there were no healing herbs for people and animals, he ordered that they be bought and planted."

The first attempts to organize and regulate the practice of treating animals tended to focus on horses because of their economic significance. In the Middle Ages from around 475 CE, farriers combined their work in horseshoeing with the more general task of "horse doctoring". In 1356, the Lord Mayor of London, concerned at the poor standard of care given to horses in the city, requested that all farriers operating within a seven-mile radius of the City of London form a "fellowship" to regulate and improve their practices. This ultimately led to the establishment of the Worshipful Company of Farriers in 1674.[3]

Meanwhile, Carlo Ruini's book Anatomia del Cavallo (Anatomy of the Horse) was published in 1598. It was the first comprehensive treatise on the anatomy of a non-human species.[4]

The first veterinary college was founded in Lyon, France in 1762 by Claude Bourgelat.[5] According to Lupton, after observing the devastation being caused by cattle plague to the French herds, Bourgelat devoted his time to seeking out a remedy. "This resulted in his founding a veterinary college in Lyon in 1761, from which establishment he dispatched students to combat the disease; in a short time, the plague was stayed and the health of stock restored, through the assistance rendered to agriculture by veterinary science and art."[6]

The Odiham Agricultural Society was founded in 1783 in England to promote agriculture and industry,[7] and played an important role in the foundation of the veterinary profession in Britain. A founding member, Thomas Burgess, began to take up the cause of animal welfare and campaign for the more humane treatment of sick animals.[8] A 1785 Society meeting resolved to "promote the study of Farriery upon rational scientific principles."

The physician James Clark wrote a treatise entitled Prevention of Disease in which he argued for the professionalization of the veterinary trade, and the establishment of veterinary colleges. This was finally achieved in 1790, through the campaigning of Granville Penn, who persuaded the Frenchman, Benoit Vial de St. Bel, to accept the professorship of the newly established Veterinary College in London.[7] The Royal College of Veterinary Surgeons was established by royal charter in 1844. Veterinary science came of age in the late 19th century, with notable contributions from Sir John McFadyean, credited by many as having been the founder of modern veterinary research.[9]

In the United States, the first schools were established in the early 19th century in Boston, New York and Philadelphia. In 1879, Iowa Agricultural College became the first land grant college to establish a school of veterinary medicine.[10]

Veterinary care and management is usually led by a veterinary physician (usually called a vet, veterinary surgeon or veterinarian). This role is the equivalent of a doctor in human medicine, and usually involves post-graduate study and qualification.

In many countries, the local nomenclature for a vet is a protected term, meaning that people without the prerequisite qualifications and/or registration are not able to use the title, and in many cases, the activities that may be undertaken by a vet (such as animal treatment or surgery) are restricted to those people who are registered as a vet. For instance, in the United Kingdom, as in other jurisdictions, animal treatment may only be performed by registered vets (with a few designated exceptions, such as paraveterinary workers), and it is illegal for any person who is not registered to call themselves a vet or perform any treatment.

Most vets work in clinical settings, treating animals directly. These vets may be involved in a general practice, treating animals of all types; may be specialized in a specific group of animals such as companion animals, livestock, laboratory animals, zoo animals or horses; or may specialize in a narrow medical discipline such as surgery, dermatology, laboratory animal medicine, or internal medicine.

As with healthcare professionals, vets face ethical decisions about the care of their patients. Current debates within the profession include the ethics of purely cosmetic procedures on animals, such as declawing of cats, docking of tails, cropping of ears and debarking of dogs.

Paraveterinary workers, including veterinary nurses, technicians and assistants, either assist vets in their work, or may work within their own scope of practice, depending on skills and qualifications, including in some cases, performing minor surgery.

The role of paraveterinary workers is less homogeneous globally than that of a vet, and qualification levels, and the associated skill mix, vary widely.

A number of professions exist within the scope of veterinary medicine, but which may not necessarily be performed by vets or veterinary nurses. This includes those performing roles which are also found in human medicine, such as practitioners dealing with musculoskeletal disorders, including osteopaths, chiropractors and physiotherapists.

There are also roles which are specific to animals, but which have parallels in human society, such as animal grooming and animal massage.

Some roles are specific to a species or group of animals, such as farriers, who are involved in the shoeing of horses, and in many cases have a major role to play in ensuring the medical fitness of the horse.

Exotic veterinary care covers the treatment, diagnosis and care of nontraditional, non-domesticated animals. An exotic animal can be briefly described as one that is not normally domesticated or owned; hence, "exotic". Veterinary research and study address this form of treatment and care only on a smaller scale, owing to the limited demand and resources available for this field of work.

Veterinary research includes research on prevention, control, diagnosis, and treatment of diseases of animals and on the basic biology, welfare, and care of animals. Veterinary research transcends species boundaries and includes the study of spontaneously occurring and experimentally induced models of both human and animal disease and research at human-animal interfaces, such as food safety, wildlife and ecosystem health, zoonotic diseases, and public policy.[11]

As in human medicine, randomized controlled trials are fundamental in veterinary medicine to establish the effectiveness of a treatment.[12] However, clinical veterinary research lags far behind human medical research, with fewer randomized controlled trials, which are of lower quality and mostly focused on research animals.[13] One possible improvement is the creation of networks that include private veterinary practices in randomized controlled trials.
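The assignment step of such a trial is easy to illustrate. Below is a minimal, hypothetical sketch of permuted-block randomization, one standard scheme for keeping treatment arms balanced when each participating practice enrols only a few animals; all names and parameters are invented for illustration and are not from the cited studies.

```python
import random

def block_randomize(patient_ids, block_size=4, arms=("treatment", "control"), seed=42):
    """Assign patients to trial arms in balanced, shuffled blocks.

    Permuted-block randomization keeps the arms balanced even if
    recruitment stops early -- useful when each participating
    veterinary practice enrols only a handful of animals.
    """
    rng = random.Random(seed)
    assignments = {}
    for start in range(0, len(patient_ids), block_size):
        block = patient_ids[start:start + block_size]
        # Each block contains an equal mix of arms, shuffled independently.
        labels = list((arms * (block_size // len(arms)))[:len(block)])
        rng.shuffle(labels)
        assignments.update(zip(block, labels))
    return assignments

if __name__ == "__main__":
    dogs = [f"dog-{i:03d}" for i in range(10)]
    for pid, arm in block_randomize(dogs).items():
        print(pid, arm)
```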

Read the original:
Veterinary medicine - Wikipedia

Read More...

Nanomedicine – Wikipedia

October 20th, 2016 7:43 pm

Nanomedicine is the medical application of nanotechnology.[1] Nanomedicine ranges from the medical applications of nanomaterials and biological devices, to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology such as biological machines. Current problems for nanomedicine involve understanding the issues related to toxicity and environmental impact of nanoscale materials (materials whose structure is on the scale of nanometers, i.e. billionths of a meter).

Functionalities can be added to nanomaterials by interfacing them with biological molecules or structures. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications. Thus far, the integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug delivery vehicles.

Nanomedicine seeks to deliver a valuable set of research tools and clinically useful devices in the near future.[2][3] The National Nanotechnology Initiative expects new commercial applications in the pharmaceutical industry that may include advanced drug delivery systems, new therapies, and in vivo imaging.[4] Nanomedicine research is receiving funding from the US National Institutes of Health, including the funding in 2005 of a five-year plan to set up four nanomedicine centers.

Nanomedicine sales reached $16 billion in 2015, with a minimum of $3.8 billion in nanotechnology R&D being invested every year. Global funding for emerging nanotechnology increased by 45% per year in recent years, with product sales exceeding $1 trillion in 2013.[5] As the nanomedicine industry continues to grow, it is expected to have a significant impact on the economy.

Nanotechnology has provided the possibility of delivering drugs to specific cells using nanoparticles.

The overall drug consumption and side-effects may be lowered significantly by depositing the active agent in the morbid region only, and in no higher dose than needed. Targeted drug delivery is intended to reduce the side effects of drugs with concomitant decreases in consumption and treatment expenses. Drug delivery focuses on maximizing bioavailability both at specific places in the body and over a period of time. This can potentially be achieved by molecular targeting by nanoengineered devices.[6][7] More than $65 billion is wasted each year due to poor bioavailability.[citation needed] A benefit of using the nanoscale for medical technologies is that smaller devices are less invasive and can possibly be implanted inside the body, and biochemical reaction times are much shorter. These devices are faster and more sensitive than typical drug delivery systems.[8] The efficacy of drug delivery through nanomedicine is largely based upon: a) efficient encapsulation of the drugs, b) successful delivery of the drug to the targeted region of the body, and c) successful release of the drug.[citation needed]
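Criterion (a) is commonly quantified with two standard figures of merit from the drug-delivery literature, encapsulation efficiency and drug loading. The sketch below uses invented numbers purely for illustration.

```python
def encapsulation_efficiency(drug_encapsulated_mg, drug_added_mg):
    """Percentage of the drug initially added that ends up inside the carrier."""
    return 100.0 * drug_encapsulated_mg / drug_added_mg

def drug_loading(drug_encapsulated_mg, nanoparticle_mass_mg):
    """Drug mass as a percentage of the total loaded-carrier mass."""
    return 100.0 * drug_encapsulated_mg / nanoparticle_mass_mg

# Illustrative example: 8 mg of a 10 mg drug charge is encapsulated
# in 100 mg of nanoparticles.
print(encapsulation_efficiency(8, 10))  # 80.0 (%)
print(drug_loading(8, 100))             # 8.0 (%)
```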

Drug delivery systems, such as lipid-[9] or polymer-based nanoparticles,[10] can be designed to improve the pharmacokinetics and biodistribution of the drug.[11][12][13] However, the pharmacokinetics and pharmacodynamics of nanomedicine are highly variable among different patients.[14] When designed to avoid the body's defence mechanisms,[15] nanoparticles have beneficial properties that can be used to improve drug delivery. Complex drug delivery mechanisms are being developed, including the ability to get drugs through cell membranes and into cell cytoplasm. Triggered response is one way for drug molecules to be used more efficiently: drugs are placed in the body and activate only on encountering a particular signal. For example, a drug with poor solubility can be replaced by a drug delivery system in which both hydrophilic and hydrophobic environments exist, improving the solubility.[16] Drug delivery systems may also be able to prevent tissue damage through regulated drug release; reduce drug clearance rates; or lower the volume of distribution and reduce the effect on non-target tissue. However, the biodistribution of these nanoparticles is still imperfect due to the host's complex reactions to nano- and microsized materials[15] and the difficulty of targeting specific organs in the body. Nevertheless, much work is ongoing to optimize and better understand the potential and limitations of nanoparticulate systems. While advancing research shows that targeting and distribution can be augmented by nanoparticles, the dangers of nanotoxicity become an important next step in further understanding their medical uses.[17]

Nanoparticles can be used in combination therapy for decreasing antibiotic resistance or for their antimicrobial properties.[18][19][20] Nanoparticles might also be used to circumvent multidrug resistance (MDR) mechanisms.[21]

Two forms of nanomedicine that have already been tested in mice and are awaiting human trials are the use of gold nanoshells to help diagnose and treat cancer,[22] and the use of liposomes as vaccine adjuvants and as vehicles for drug transport.[23][24] Similarly, drug detoxification is another application of nanomedicine which has shown promising results in rats.[25] Advances in lipid nanotechnology have also been instrumental in engineering medical nanodevices and novel drug delivery systems, as well as in developing sensing applications.[26] Other examples include dendrimers and nanoporous materials, as well as block co-polymers, which form micelles for drug encapsulation.[10]

Polymeric nanoparticles are a competing technology to lipidic nanoparticles (based mainly on phospholipids). There is an additional risk of toxicity associated with polymers not widely studied or understood. The major advantages of polymers are stability, lower cost and predictable characterisation. However, in the patient's body this very stability (slow degradation) is a negative factor. Phospholipids, on the other hand, are membrane lipids (already present in the body and surrounding each cell), have GRAS (Generally Recognised As Safe) status from the FDA and are derived from natural sources without any complex chemistry involved. They are not metabolised but rather absorbed by the body, and the degradation products are themselves nutrients (fats or micronutrients).[citation needed]

Proteins and peptides exert multiple biological actions in the human body and have been identified as showing great promise for the treatment of various diseases and disorders. These macromolecules are called biopharmaceuticals. Targeted and/or controlled delivery of these biopharmaceuticals using nanomaterials like nanoparticles and dendrimers is an emerging field called nanobiopharmaceutics, and these products are called nanobiopharmaceuticals.[citation needed]

Another highly efficient system for microRNA delivery, for example, is nanoparticles formed by the self-assembly of two different microRNAs deregulated in cancer.[27]

Another vision is based on small electromechanical systems; nanoelectromechanical systems are being investigated for the active release of drugs. Some potentially important applications include cancer treatment with iron nanoparticles or gold shells. Nanotechnology is also opening up new opportunities in implantable delivery systems, which are often preferable to the use of injectable drugs, because the latter frequently display first-order kinetics (the blood concentration rises rapidly, but drops exponentially over time). This rapid rise may cause difficulties with toxicity, and drug efficacy can diminish as the drug concentration falls below the targeted range.[citation needed]
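To see why first-order kinetics is a problem for injectables, consider the standard one-compartment decay model, C(t) = C0 * exp(-kt). The sketch below uses invented numbers purely for illustration; an implantable zero-order system would instead release drug at a constant rate, holding the concentration inside the therapeutic window.

```python
import math

def first_order_concentration(c0, k, t):
    """One-compartment model: after an injected bolus the blood
    concentration decays exponentially, C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

# Illustrative numbers only: initial concentration 10 (arbitrary units),
# elimination rate constant k = 0.3 per hour.
c0, k = 10.0, 0.3
for t in range(0, 25, 6):
    print(f"t = {t:2d} h  C = {first_order_concentration(c0, k, t):5.2f}")
```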

Some nanotechnology-based drugs that are commercially available or in human clinical trials include:

Existing and potential drug nanocarriers have been reviewed.[38][39][40][41]

Nanoparticles have a high surface-area-to-volume ratio. This allows many functional groups to be attached to a nanoparticle, which can seek out and bind to certain tumor cells. Additionally, the small size of nanoparticles (10 to 100 nanometers) allows them to preferentially accumulate at tumor sites (because tumors lack an effective lymphatic drainage system).[42] Limitations of conventional cancer chemotherapy include drug resistance, lack of selectivity, and lack of solubility. Nanoparticles have the potential to overcome these problems.[43]
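The surface-area-to-volume claim is simple geometry: for a sphere the ratio reduces to 3/r, so shrinking the radius a thousandfold raises the ratio a thousandfold. A minimal sketch:

```python
def surface_to_volume_ratio(radius_m):
    """For a sphere, (4*pi*r**2) / ((4/3)*pi*r**3) simplifies to 3/r."""
    return 3.0 / radius_m

# A 50 nm nanoparticle versus a 50 micrometre microparticle:
nano, micro = 50e-9, 50e-6
print(surface_to_volume_ratio(nano) / surface_to_volume_ratio(micro))  # ~1000.0
```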

In photodynamic therapy, a particle is placed within the body and is illuminated with light from the outside. The light gets absorbed by the particle, and if the particle is metal, energy from the light will heat the particle and surrounding tissue. Light may also be used to produce high-energy oxygen molecules which will chemically react with and destroy most organic molecules that are next to them (like tumors). This therapy is appealing for many reasons: unlike chemotherapy, it does not leave a "toxic trail" of reactive molecules throughout the body, because it acts only where the light is shone and the particles are present. Photodynamic therapy has potential as a noninvasive procedure for dealing with diseases, growths and tumors. Kanzius RF therapy is one example of such a therapy (nanoparticle hyperthermia).[citation needed] Also, gold nanoparticles have the potential to join numerous therapeutic functions into a single platform, by targeting specific tumor cells, tissues and organs.[44][45]

In vivo imaging is another area where tools and devices are being developed. Using nanoparticle contrast agents, images from modalities such as ultrasound and MRI show a favorable distribution of the agent and improved contrast. This might be accomplished by self-assembled biocompatible nanodevices that will detect, evaluate, treat and report to the clinical doctor automatically.[citation needed]

The small size of nanoparticles endows them with properties that can be very useful in oncology, particularly in imaging. Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites. Nanoparticles of cadmium selenide (quantum dots) glow when exposed to ultraviolet light. When injected, they seep into cancer tumors. The surgeon can see the glowing tumor, and use it as a guide for more accurate tumor removal. These nanoparticles are much brighter than organic dyes and only need one light source for excitation. This means that the use of fluorescent quantum dots could produce a higher contrast image and at a lower cost than today's organic dyes used as contrast media. The downside, however, is that quantum dots are usually made of quite toxic elements.[citation needed]

Tracking movement can help determine how well drugs are being distributed or how substances are metabolized. It is difficult to track a small group of cells throughout the body, so scientists used to dye the cells. These dyes needed to be excited by light of a certain wavelength in order for them to light up. While different color dyes absorb different frequencies of light, there was a need for as many light sources as cells. A way around this problem is with luminescent tags. These tags are quantum dots attached to proteins that penetrate cell membranes. The dots can be random in size, can be made of bio-inert material, and they demonstrate the nanoscale property that color is size-dependent. As a result, sizes are selected so that the frequency of light used to make a group of quantum dots fluoresce is an even multiple of the frequency required to make another group incandesce. Then both groups can be lit with a single light source. Researchers have also found ways to insert nanoparticles[46] into the affected parts of the body so that those parts glow, showing tumor growth or shrinkage, as well as organ trouble.[47]

Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology. Magnetic nanoparticles, bound to a suitable antibody, are used to label specific molecules, structures or microorganisms. Gold nanoparticles tagged with short segments of DNA can be used for the detection of genetic sequences in a sample. Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots into polymeric microbeads. Nanopore technology for analysis of nucleic acids converts strings of nucleotides directly into electronic signatures.[citation needed]

Sensor test chips containing thousands of nanowires, able to detect proteins and other biomarkers left behind by cancer cells, could enable the detection and diagnosis of cancer in the early stages from a few drops of a patient's blood.[48] Nanotechnology is also helping to advance the use of arthroscopes, which are pencil-sized devices used in surgeries with lights and cameras so surgeons can operate through smaller incisions. The smaller the incision, the faster the healing time, which is better for patients. Work is also under way to make an arthroscope smaller than a strand of hair.[49]

Research on nanoelectronics-based cancer diagnostics could lead to tests that can be done in pharmacies. The results promise to be highly accurate and the product promises to be inexpensive. Such tests could take a very small amount of blood and detect cancer anywhere in the body in about five minutes, with a sensitivity a thousand times better than that of a conventional laboratory test. The devices are built with nanowires to detect cancer proteins; each nanowire detector is primed to be sensitive to a different cancer marker. The biggest advantage of the nanowire detectors is that they could test for anywhere from ten to one hundred similar medical conditions without adding cost to the testing device.[50] Nanotechnology has also helped to personalize oncology for the detection, diagnosis, and treatment of cancer: treatment can now be tailored to each individual's tumor for better performance. Researchers have found ways to target the specific part of the body that is affected by cancer.[51]

Magnetic microparticles are proven research instruments for the separation of cells and proteins from complex media. The technology is available under the name Magnetic-activated cell sorting or Dynabeads, among others. More recently it was shown in animal models that magnetic nanoparticles can be used for the removal of various noxious compounds, including toxins, pathogens, and proteins, from whole blood in an extracorporeal circuit similar to dialysis.[52][53] In contrast to dialysis, which works on the principle of size-related diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane, purification with nanoparticles allows specific targeting of substances. Additionally, larger compounds which are commonly not dialyzable can be removed.[citation needed]

The purification process is based on functionalized iron oxide or carbon-coated metal nanoparticles with ferromagnetic or superparamagnetic properties.[54] Binding agents such as proteins,[53] antibodies,[52] antibiotics,[55] or synthetic ligands[56] are covalently linked to the particle surface. These binding agents are able to interact with target species, forming an agglomerate. Applying an external magnetic field gradient exerts a force on the nanoparticles, so the particles can be separated from the bulk fluid, thereby cleaning it of the contaminants.[57][58]
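The separation step can be made quantitative. A common first-order textbook approximation for the force on a small, unsaturated magnetizable sphere is F ≈ (χV/μ0) · B · (dB/dx), ignoring the susceptibility of the surrounding fluid. The sketch below plugs in purely illustrative values to give a feel for the magnitudes involved.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def magnetic_force(chi, radius_m, b_tesla, grad_b_tesla_per_m):
    """First-order estimate of the force on a small magnetizable sphere:
    F ~ (chi * V / mu_0) * B * dB/dx, valid well below saturation."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return (chi * volume / MU_0) * b_tesla * grad_b_tesla_per_m

# Illustrative values only: a 50 nm-diameter particle (r = 25 nm) with
# volume susceptibility ~1, in a 1 T field with a 100 T/m gradient.
print(f"{magnetic_force(1.0, 25e-9, 1.0, 100.0):.2e} N")  # ~5e-15 N
```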

The small size (<100 nm) and large surface area of functionalized nanomagnets lead to advantageous properties compared to hemoperfusion, a clinically used technique for the purification of blood that is based on surface adsorption. These advantages are high loading and accessibility of the binding agents, high selectivity towards the target compound, fast diffusion, small hydrodynamic resistance, and low dosage.[59]

This approach offers new therapeutic possibilities for the treatment of systemic infections such as sepsis by directly removing the pathogen. It can also be used to selectively remove cytokines or endotoxins,[55] or for the dialysis of compounds which are not accessible by traditional dialysis methods. However, the technology is still in a preclinical phase, and the first clinical trials are not expected before 2017.[60]

Nanotechnology may be used as part of tissue engineering to help reproduce, repair or reshape damaged tissue using suitable nanomaterial-based scaffolds and growth factors. Tissue engineering, if successful, may replace conventional treatments like organ transplants or artificial implants. Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles to the polymer matrix at low concentrations (~0.2 weight%) leads to significant improvements in the compressive and flexural mechanical properties of the nanocomposites.[61][62] Potentially, these nanocomposites may be used as novel, mechanically strong, lightweight composites for bone implants.[citation needed]

For example, a flesh welder was demonstrated to fuse two pieces of chicken meat into a single piece using a suspension of gold-coated nanoshells activated by an infrared laser. This could be used to weld arteries during surgery.[63] Another example is nanonephrology, the use of nanomedicine on the kidney.

Neuro-electronic interfacing is a visionary goal dealing with the construction of nanodevices that would permit computers to be joined and linked to the nervous system. This idea requires the building of a molecular structure that would permit control and detection of nerve impulses by an external computer. A refuelable strategy implies energy is refilled continuously or periodically with external sonic, chemical, tethered, magnetic, or biological electrical sources, while a nonrefuelable strategy implies that all power is drawn from internal energy storage which would stop when all energy is drained. A nanoscale enzymatic biofuel cell for self-powered nanodevices has been developed that uses glucose from biofluids including human blood and watermelons.[64] One limitation of this innovation is that electrical interference, leakage or overheating from power consumption is possible. The wiring of the structure is extremely difficult because the components must be positioned precisely within the nervous system. The structures that will provide the interface must also be compatible with the body's immune system.[65]

Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots, are far beyond current capabilities.[1][65][66][67] Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as-yet hypothetical molecular machines, in his 1986 book Engines of Creation, with the first technical discussion of medical nanorobots by Robert Freitas appearing in 1999.[1] Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030.[68] According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines (see nanotechnology). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[69]

Read more from the original source:
Nanomedicine - Wikipedia

Read More...

Longevity – Wikipedia

October 20th, 2016 7:43 pm

The word "longevity" is sometimes used as a synonym for "life expectancy" in demography - however, the term "longevity" is sometimes meant to refer only to especially long-lived members of a population, whereas "life expectancy" is always defined statistically as the average number of years remaining at a given age. For example, a population's life expectancy at birth is the same as the average age at death for all people born in the same year (in the case of cohorts). Longevity is best thought of as a term for general audiences meaning 'typical length of life' and specific statistical definitions should be clarified when necessary.

Reflections on longevity have usually gone beyond acknowledging the brevity of human life and have included thinking about methods to extend life. Longevity has been a topic not only for the scientific community but also for writers of travel, science fiction, and utopian novels.

There are many difficulties in authenticating the longest human life span ever by modern verification standards, owing to inaccurate or incomplete birth statistics. Fiction, legend, and folklore have proposed or claimed life spans in the past or future vastly longer than those verified by modern standards, and longevity narratives and unverified longevity claims frequently assert that such life spans exist in the present.

A life annuity is a form of longevity insurance.

Various factors contribute to an individual's longevity. Significant factors in life expectancy include gender, genetics, access to health care, hygiene, diet and nutrition, exercise, lifestyle, and crime rates. Below is a list of life expectancies in different types of countries:[3]

Population longevities are increasing as life expectancies around the world grow:[1][4]

The Gerontology Research Group validates current longevity records by modern standards, and maintains a list of supercentenarians; many other unvalidated longevity claims exist. Record-holding individuals include:[citation needed]

Evidence-based studies indicate that longevity is based on two major factors, genetics and lifestyle choices.[5]

Twin studies have estimated that approximately 20–30% of the variation in human lifespan can be attributed to genetics, with the rest due to individual behaviors and environmental factors which can be modified.[6] Although over 200 gene variants have been associated with longevity according to a US–Belgian–UK research database of human genetic variants,[7] these explain only a small fraction of the heritability.[8] A 2012 study found that even modest amounts of leisure-time physical exercise can extend life expectancy by as much as 4.5 years.[9]

Lymphoblastoid cell lines established from blood samples of centenarians have significantly higher activity of the DNA repair protein PARP (poly ADP ribose polymerase) than cell lines from younger (20 to 70 year old) individuals.[10] The lymphocytic cells of centenarians have characteristics typical of cells from young people, both in their capability of priming the repair mechanism after sublethal oxidative DNA damage induced by H2O2 and in their PARP gene expression.[11] These findings suggest that elevated PARP gene expression contributes to the longevity of centenarians, consistent with the DNA damage theory of aging.[12]

A study of the regions of the world known as blue zones, where people commonly live active lives past 100 years of age, speculated that longevity is related to a healthy social and family life, not smoking, eating a plant-based diet, frequent consumption of legumes and nuts, and engaging in regular physical activity.[13] In a cohort study, the combination of a plant-based diet, normal BMI, and not smoking accounted for differences of up to 15 years in life expectancy.[14] Korean court records going back to 1392 indicate that the average lifespan of eunuchs was 70.0 ± 1.76 years, which was 14.4–19.1 years longer than the lifespan of non-castrated men of similar socio-economic status.[15] The Alameda County Study hypothesized three additional lifestyle characteristics that promote longevity: limiting alcohol consumption, sleeping 7 to 8 hours per night, and not snacking (eating between meals), although the study found the association between these characteristics and mortality is "weak at best".[16] There are, however, many other possible factors potentially affecting longevity, including the impact of high peer competition, which is typically experienced in large cities.[17]

In preindustrial times, deaths at young and middle age were more common than they are today. This is not due to genetics, but to environmental factors such as disease, accidents, and malnutrition, which were not generally treatable with pre-20th-century medicine. Deaths from childbirth were common in women, and many children did not live past infancy. In addition, most people who did attain old age were likely to die quickly from the above-mentioned untreatable health problems. Despite this, there are many examples of pre-20th-century individuals attaining lifespans of 75 years or greater, including Benjamin Franklin, Thomas Jefferson, John Adams, Cato the Elder, Thomas Hobbes, Eric of Pomerania, Christopher Polhem, and Michelangelo. This was also true for poorer people such as peasants and laborers. Genealogists will almost certainly find ancestors living to their 70s, 80s and even 90s several hundred years ago.

For example, an 1871 census in the UK (the first of its kind, but personal data from other censuses dates back to 1841 and numerical data back to 1801) found the average male life expectancy to be 44, but if infant mortality is subtracted, males who lived to adulthood averaged 75 years. Present life expectancy in the UK is 77 years for males and 81 for females, while the United States averages 74 for males and 80 for females.
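The arithmetic behind that gap is just a weighted average: a high rate of very early deaths drags the unconditional mean far below the mean for those who reach adulthood. The numbers below are invented purely to echo the 1871 pattern; they are not taken from the census.

```python
def mean_age_at_death(fraction_dying_in_infancy, infant_age, adult_mean):
    """Overall cohort mean as a weighted average of early deaths and
    deaths among those who survive to adulthood."""
    p = fraction_dying_in_infancy
    return p * infant_age + (1 - p) * adult_mean

# Hypothetical illustration: if 40% of a cohort dies around age 1 and
# survivors average 73 years, the overall mean is only about 44.
print(mean_age_at_death(0.40, 1, 73))  # ~44.2
```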

Studies have shown that black American males have the shortest lifespans of any group of people in the US, averaging only 69 years (Asian-American females average the longest).[18] This reflects overall poorer health and greater prevalence of heart disease, obesity, diabetes, and cancer among black American men.

Women normally outlive men, and this was as true in pre-industrial times as today. Theories for this include smaller bodies (and thus less stress on the heart), a stronger immune system (since testosterone acts as an immunosuppressant), and less tendency to engage in physically dangerous activities.

There is a current debate as to whether or not the pursuit of longevity is a worthwhile health care goal for the United States. Bioethicist Ezekiel Emanuel, who is also one of the architects of ObamaCare, has stated that the pursuit of longevity via the compression of morbidity explanation is a "fantasy" and that life is not worth living after age 75; therefore longevity should not be a goal of health care policy.[19] This has been refuted by neurosurgeon Miguel Faria, who states that life can be worthwhile in healthy old age; that the compression of morbidity is a real phenomenon; that longevity should be pursued in association with quality of life.[20] Faria has discussed how longevity in association with leading healthy lifestyles can lead to the postponement of senescence as well as happiness and wisdom in old age.[21]

All biological organisms have a limited longevity, and different species of animals and plants have different potentials for longevity. The misrepair-accumulation aging theory[22][23] suggests that the potential longevity of an organism is related to its structural complexity.[24] Limited longevity is due to the limited structural complexity of the organism. If a species of organism had too high a structural complexity, most of its individuals would die before the reproductive age, and the species could not survive. This theory suggests that limited structural complexity and limited longevity are essential for the survival of a species.

Longevity traditions are traditions about long-lived people (generally supercentenarians), and practices that have been believed to confer longevity.[25][26] A comparison and contrast of "longevity in antiquity" (such as the Sumerian King List, the genealogies of Genesis, and the Persian Shahnameh) with "longevity in historical times" (common-era cases through twentieth-century news reports) is elaborated in detail in Lucian Boia's 2004 book Forever Young: A Cultural History of Longevity from Antiquity to the Present and other sources.[27]

The Fountain of Youth reputedly restores the youth of anyone who drinks of its waters. The New Testament, following older Jewish tradition, attributes healing to the Pool of Bethesda when the waters are "stirred" by an angel.[28] After the death of Juan Ponce de León, Gonzalo Fernández de Oviedo y Valdés wrote in Historia General y Natural de las Indias (1535) that Ponce de León was looking for the waters of Bimini to cure his aging.[29] Traditions that have been believed to confer greater human longevity also include alchemy,[30] such as that attributed to Nicolas Flamel. In the modern era, the Okinawa diet has some reputation of linkage to exceptionally high ages.[31]

More recent longevity claims are subcategorized by many editions of Guinness World Records into four groups: "In late life, very old people often tend to advance their ages at the rate of about 17 years per decade .... Several celebrated super-centenarians (over 110 years) are believed to have been double lives (father and son, relations with the same names or successive bearers of a title) .... A number of instances have been commercially sponsored, while a fourth category of recent claims are those made for political ends ...."[32] The estimate of 17 years per decade was corroborated by the 1901 and 1911 British censuses.[32] Mazess and Forman also discovered in 1978 that inhabitants of Vilcabamba, Ecuador, claimed excessive longevity by using their fathers' and grandfathers' baptismal entries.[32][33] Time magazine considered that longevity had been elevated by the Soviet Union to a state-supported "Methuselah cult".[34] Robert Ripley regularly reported supercentenarian claims in Ripley's Believe It or Not!, usually citing his own reputation as a fact-checker to claim reliability.[35]

The U.S. Census Bureau view on the future of longevity is that life expectancy in the United States will be in the mid-80s by 2050 (up from 77.85 in 2006) and will top out eventually in the low 90s, barring major scientific advances that can change the rate of human aging itself, as opposed to merely treating the effects of aging as is done today. The Census Bureau also predicted that the United States would have 5.3 million people aged over 100 in 2100. The United Nations has also made projections far out into the future, up to 2300, at which point it projects that life expectancies in most developed countries will be between 100 and 106 years and still rising, though more and more slowly than before. These projections also suggest that life expectancies in poor countries will still be less than those in rich countries in 2300, in some cases by as much as 20 years. The UN itself mentioned that gaps in life expectancy so far in the future may well not exist, especially since the exchange of technology between rich and poor countries and the industrialization and development of poor countries may cause their life expectancies to converge fully with those of rich countries long before that point, similarly to the way life expectancies between rich and poor countries have already been converging over the last 60 years as better medicine, technology, and living conditions became accessible to many people in poor countries. The UN has warned that these projections are uncertain, and cautions that any change or advancement in medical technology could invalidate such projections.[36]

Recent increases in the rates of lifestyle diseases, such as obesity, diabetes, hypertension, and heart disease, may eventually slow or reverse this trend toward increasing life expectancy in the developed world, but have not yet done so. The average age of the US population is getting higher[37] and these diseases show up in older people.[38]

Jennifer Couzin-Frankel examined how much mortality from various causes would have to drop in order to boost life expectancy and concluded that most of the past increases in life expectancy occurred because of improved survival rates for young people. She states that it seems unlikely that life expectancy at birth will ever exceed 85 years.[39] Michio Kaku argues that genetic engineering, nanotechnology and future breakthroughs will accelerate the rate of life expectancy increase indefinitely.[40] Already genetic engineering has allowed the life expectancy of certain primates to be doubled, and for human skin cells in labs to divide and live indefinitely without becoming cancerous.[41]

However, since 1840, record life expectancy has risen linearly for men and women, albeit more slowly for men. For women the increase has been almost three months per year; for men, almost 2.7 months per year. In light of this steady increase, without any sign of limitation, the suggestion that life expectancy will top out must be treated with caution. Scientists Oeppen and Vaupel observe that experts who assert that "life expectancy is approaching a ceiling ... have repeatedly been proven wrong." It is thought that life expectancy for women has increased more dramatically owing to the considerable advances in medicine related to childbirth.[42]
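Taken at face value, such a linear trend invites simple extrapolation. The sketch below is illustrative only: the baseline record value is an assumption, the three-months-per-year rate is the one quoted above, and the result is not a forecast.

```python
def extrapolate_record(le_base, base_year, target_year, months_per_year=3):
    """Linear extrapolation of record life expectancy at a rate of
    roughly three months gained per calendar year."""
    return le_base + (target_year - base_year) * months_per_year / 12.0

# Assumed baseline for illustration: a record of ~85 years around 2000.
print(extrapolate_record(85.0, 2000, 2050))  # 97.5
```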

Mice have been genetically engineered to live twice as long as ordinary mice. Drugs such as deprenyl are part of the prescribing pharmacopoeia of veterinarians specifically to increase mammal lifespan. A number of research chemicals that increase the lifespan of various species have been described in the scientific literature.

Some argue that molecular nanotechnology will greatly extend human life spans. If the rate of increase of life span can be raised with these technologies to twelve months of increase per year, this is defined as effective biological immortality and is the goal of radical life extension.

Currently living:

Non-living:

Certain exotic organisms do not seem to be subject to aging and can live indefinitely; examples include tardigrades and hydras. That is not to say that these organisms cannot die, merely that they die only as a result of disease or injury rather than age-related deterioration (and that they are not subject to the Hayflick limit).


Here is the original post:
Longevity - Wikipedia

Read More...

Alternative medicine – Wikipedia

October 20th, 2016 7:42 pm

Alternative or fringe medicine is any practice claimed to have the healing effects of medicine that is proven not to work, has no scientific evidence showing that it works, or is solely harmful.[n 1][n 2][n 3] Alternative medicine is not a part of medicine[n 1][n 4][n 5][n 6] or of science-based healthcare systems.[1][2][4] It consists of a wide variety of practices, products, and therapies, ranging from those that are biologically plausible but not well tested to those with known harmful and toxic effects.[n 4][5][6][7][8][9] Despite the significant costs of testing alternative medicine, including $2.5 billion spent by the United States government, almost none has shown any effectiveness beyond that of false treatments (placebo).[10][11] Perceived effects of alternative medicine are caused by the placebo effect, decreased effects of functional treatment (and thus also decreased side-effects), and regression toward the mean, where spontaneous improvement is credited to alternative therapies.

Complementary medicine or integrative medicine is the use of alternative medicine together with functional medical treatment, in a belief that it "complements" (improves the efficacy of) the treatment.[n 7][13][14][15][16] However, significant drug interactions caused by alternative therapies may instead negatively influence the treatment, making treatments less effective, notably in cancer therapy.[17][18] CAM is an abbreviation of complementary and alternative medicine.[19][20] It has also been called sCAM or SCAM for "so-called complementary and alternative medicine" or "supplements and complementary and alternative medicine".[21][22] Holistic health or holistic medicine is a similar concept that claims to take into account the "whole" person, including spirituality, in its treatments. Due to its many names, the field has been criticized for intense rebranding of what are essentially the same practices: as soon as one name is declared synonymous with quackery, a new one is chosen.

Alternative medical diagnoses and treatments are not included in the science-based treatments taught in medical schools, and are not used in medical practice where treatments are based on scientific knowledge. Alternative therapies are either unproven, disproved, or impossible to prove,[n 8][5][13][24][25] and are often based on religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, or fraud.[5][26][6][13] Regulation and licensing of alternative medicine and health care providers varies between and within countries. Marketing alternative therapies as treating or preventing cancer is illegal in many countries including the United States and most parts of the European Union.

Alternative medicine has been criticized for being based on misleading statements, quackery, pseudoscience, antiscience, fraud, or poor scientific methodology. Promoting alternative medicine has been called dangerous and unethical.[n 9][28] Testing alternative medicine that has no scientific basis has been called a waste of scarce medical research resources.[29][30] Critics have said "there is really no such thing as alternative medicine, just medicine that works and medicine that doesn't",[31] and the problem is not only that it does not work, but that the "underlying logic is magical, childish or downright absurd".[32] It has also been argued that the very concept of an alternative medicine that works is paradoxical, as any treatment proven to work is simply "medicine".[33]

Alternative medicine consists of a wide range of health care practices, products, and therapies. The shared feature is a claim to heal that is not based on the scientific method. Alternative medicine practices are diverse in their foundations and methodologies.[1] Alternative medicine practices may be classified by their cultural origins or by the types of beliefs upon which they are based.[5][26][1][13] Methods may incorporate or be based on traditional medicinal practices of a particular culture, folk knowledge, superstition, spiritual beliefs, belief in supernatural energies (antiscience), pseudoscience, errors in reasoning, propaganda, fraud, new or different concepts of health and disease, and any bases other than being proven by scientific methods.[5][26][6][13] Different cultures may have their own unique traditional or belief-based practices, developed recently or over thousands of years, ranging from specific practices to entire systems of practices.

Alternative medicine, such as using naturopathy or homeopathy in place of conventional medicine, is based on belief systems not grounded in science.[1]

Homeopathy is a system developed in a belief that a substance that causes the symptoms of a disease in healthy people cures similar symptoms in sick people.[n 10] It was developed before knowledge of atoms and molecules, and of basic chemistry, which shows that repeated dilution as practiced in homeopathy produces only water, and that homeopathy is scientifically implausible.[36][37][38][39] Homeopathy is considered quackery in the medical community.[40]

Naturopathic medicine is based on a belief that the body heals itself using a supernatural vital energy that guides bodily processes,[41] a view in conflict with the paradigm of evidence-based medicine.[42] Many naturopaths have opposed vaccination,[43] and "scientific evidence does not support claims that naturopathic medicine can cure cancer or any other disease".[44]

Alternative medical systems may be based on traditional medicine practices, such as traditional Chinese medicine, Ayurveda in India, or practices of other cultures around the world.[1]

Traditional Chinese medicine is a combination of traditional practices and beliefs developed over thousands of years in China, together with modifications made by the Communist party. Common practices include herbal medicine, acupuncture (insertion of needles in the body at specified points), massage (Tui na), exercise (qigong), and dietary therapy. The practices are based on belief in a supernatural energy called qi, considerations of Chinese astrology and Chinese numerology, traditional use of herbs and other substances found in China, a belief that the tongue contains a map of the body that reflects changes in the body, and an incorrect model of the anatomy and physiology of internal organs.[5][45][46][47][48][49]

The Chinese Communist Party Chairman Mao Zedong, in response to a lack of modern medical practitioners, revived acupuncture and had its theory rewritten to adhere to the political, economic, and logistic necessities of providing for the medical needs of China's population.[50][page needed] In the 1950s the "history" and theory of traditional Chinese medicine was rewritten as communist propaganda, at Mao's insistence, to correct the supposed "bourgeois thought of Western doctors of medicine". Acupuncture gained attention in the United States when President Richard Nixon visited China in 1972, and the delegation was shown a patient undergoing major surgery while fully awake, ostensibly receiving acupuncture rather than anesthesia. Later it was found that the patients selected for the surgery had both a high pain tolerance and received heavy indoctrination before the operation; these demonstration cases were also frequently receiving morphine surreptitiously through an intravenous drip that observers were told contained only fluids and nutrients.[45] Cochrane reviews found acupuncture is not effective for a wide range of conditions.[52] A systematic review of systematic reviews found that, for reducing pain, real acupuncture was no better than sham acupuncture.[53] However, other reviews have found that acupuncture successfully reduces chronic pain, whereas sham acupuncture was not found to be better than placebo or no-acupuncture groups.[54]

Ayurvedic medicine is a traditional medicine of India. Ayurveda believes in the existence of three elemental substances, the doshas (called Vata, Pitta and Kapha), and states that a balance of the doshas results in health, while imbalance results in disease. Such disease-inducing imbalances can be adjusted and balanced using traditional herbs, minerals and heavy metals. Ayurveda stresses the use of plant-based medicines and treatments, with some animal products, and added minerals including sulfur, arsenic, lead and copper sulfate.[citation needed]

Safety concerns have been raised about Ayurveda, with two U.S. studies finding about 20 percent of Ayurvedic Indian-manufactured patent medicines contained toxic levels of heavy metals such as lead, mercury and arsenic. Other concerns include the use of herbs containing toxic compounds and the lack of quality control in Ayurvedic facilities. Incidents of heavy metal poisoning have been attributed to the use of these compounds in the United States.[8][57][58][59]

Bases of belief may include belief in existence of supernatural energies undetected by the science of physics, as in biofields, or in belief in properties of the energies of physics that are inconsistent with the laws of physics, as in energy medicine.[1]

Biofield therapies are intended to influence energy fields that, it is purported, surround and penetrate the body.[1] Writers such as the astrophysicist and advocate of scientific skepticism Carl Sagan (1934–1996) have described the lack of empirical evidence to support the existence of the putative energy fields on which these therapies are predicated.

Acupuncture is a component of traditional Chinese medicine. Proponents of acupuncture believe that a supernatural energy called qi flows through the universe and through the body, and helps propel the blood, and that blockage of this energy leads to disease.[46] They believe that inserting needles in various parts of the body, determined by astrological calculations, can restore balance to the blocked flows and thereby cure disease.[46]

Chiropractic was developed in the belief that manipulating the spine affects the flow of a supernatural vital energy and thereby affects health and disease.

In the western version of Japanese Reiki, practitioners place their palms on the patient near Chakras that they believe are centers of supernatural energies, and believe that these supernatural energies can transfer from the practitioner's palms to heal the patient.

Bioelectromagnetic-based therapies use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields, in an unconventional manner.[1] Magnetic healing does not claim the existence of supernatural energies, but asserts that magnets can be used to defy the laws of physics to influence health and disease.

Mind-body medicine takes a holistic approach to health that explores the interconnection between the mind, body, and spirit. It works under the premise that the mind can affect "bodily functions and symptoms".[1] Mind-body medicine includes healing claims made in yoga, meditation, deep-breathing exercises, guided imagery, hypnotherapy, progressive relaxation, qi gong, and tai chi.[1]

Yoga, a method of traditional stretches, exercises, and meditations in Hinduism, may also be classified as an energy medicine insofar as its healing effects are believed to be due to a healing "life energy" that is absorbed into the body through the breath, and is thereby believed to treat a wide variety of illnesses and complaints.[61]

Since the 1990s, tai chi (t'ai chi ch'uan) classes that purely emphasise health have become popular in hospitals, clinics, as well as community and senior centers. This has occurred as the baby boomers generation has aged and the art's reputation as a low-stress training method for seniors has become better known.[62][63] There has been some divergence between those that say they practice t'ai chi ch'uan primarily for self-defence, those that practice it for its aesthetic appeal (see wushu below), and those that are more interested in its benefits to physical and mental health.

Qigong, chi kung, or chi gung, is a practice of aligning body, breath, and mind for health, meditation, and martial arts training. With roots in traditional Chinese medicine, philosophy, and martial arts, qigong is traditionally viewed as a practice to cultivate and balance qi (chi) or what has been translated as "life energy".[64]

Substance-based practices use substances found in nature such as herbs, foods, non-vitamin supplements and megavitamins, animal and fungal products, and minerals, including the use of these products in traditional medical practices that may also incorporate other methods.[1][11][65] Examples include healing claims for non-vitamin supplements, fish oil, omega-3 fatty acids, glucosamine, echinacea, flaxseed oil, and ginseng.[66] Herbal medicine, or phytotherapy, includes not just the use of plant products, but may also include the use of animal and mineral products.[11] It is among the most commercially successful branches of alternative medicine, and includes the tablets, powders and elixirs that are sold as "nutritional supplements".[11] Only a very small percentage of these have been shown to have any efficacy, and there is little regulation as to standards and safety of their contents.[11] This may include the use of known toxic substances, such as the poison lead in traditional Chinese medicine.[66]

Manipulative and body-based practices feature the manipulation or movement of body parts, such as is done in bodywork and chiropractic manipulation.

Osteopathic manipulative medicine, also known as osteopathic manipulative treatment, is a core set of techniques of osteopathy and osteopathic medicine distinguishing these fields from mainstream medicine.[67]

Religion-based healing practices, such as the use of prayer and the laying on of hands in Christian faith healing, and shamanism, rely on belief in divine or spiritual intervention for healing.

Shamanism is a practice of many cultures around the world, in which a practitioner reaches an altered state of consciousness in order to encounter and interact with the spirit world or channel supernatural energies in the belief they can heal.[68]

Some alternative medicine practices may be based on pseudoscience, ignorance, or flawed reasoning.[69] This can lead to fraud.[5]

Practitioners of electricity and magnetism based healing methods may deliberately exploit a patient's ignorance of physics to defraud them.[13]

"Alternative medicine" is a loosely defined set of products, practices, and theories that are believed or perceived by their users to have the healing effects of medicine,[n 2][n 4] but whose effectiveness has not been clearly established using scientific methods,[n 2][n 3][5][6][23][25] whose theory and practice is not part of biomedicine,[n 4][n 1][n 5][n 6] or whose theories or practices are directly contradicted by scientific evidence or scientific principles used in biomedicine.[5][26][6] "Biomedicine" is that part of medical science that applies principles of biology, physiology, molecular biology, biophysics, and other natural sciences to clinical practice, using scientific methods to establish the effectiveness of that practice. Alternative medicine is a diverse group of medical and health care systems, practices, and products that originate outside of biomedicine,[n 1] are not considered part of biomedicine,[1] are not widely used by the biomedical healthcare professions,[74] and are not taught as skills practiced in biomedicine.[74] Unlike biomedicine,[n 1] an alternative medicine product or practice does not originate from the sciences or from using scientific methodology, but may instead be based on testimonials, religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, fraud, or other unscientific sources.[n 3][5][6][13] The expression "alternative medicine" refers to a diverse range of related and unrelated products, practices, and theories, originating from widely varying sources, cultures, theories, and belief systems, and ranging from biologically plausible practices and products and practices with some evidence, to practices and theories that are directly contradicted by basic science or clear evidence, and products that have proven to be ineffective or even toxic and harmful.[n 4][7][8]

Alternative medicine, complementary medicine, holistic medicine, natural medicine, unorthodox medicine, fringe medicine, unconventional medicine, and new age medicine are used interchangeably as having the same meaning (are synonyms) in some contexts,[75][76][77] but may have different meanings in other contexts, for example, unorthodox medicine may refer to biomedicine that is different from what is commonly practiced, and fringe medicine may refer to biomedicine that is based on fringe science, which may be scientifically valid but is not mainstream.

The meaning of the term "alternative" in the expression "alternative medicine" is not that it is an effective alternative to medical science, although some alternative medicine promoters may use the loose terminology to give the appearance of effectiveness.[5] Marcia Angell stated that "alternative medicine" is "a new name for snake oil. There's medicine that works and medicine that doesn't work."[78] Loose terminology may also be used to suggest that a dichotomy exists when it does not, e.g., the use of the expressions "western medicine" and "eastern medicine" to suggest that the difference is a cultural difference between the Asiatic east and the European west, rather than a difference between evidence-based medicine and treatments that don't work.[5]

"Complementary medicine" refers to use of alternative medical treatments alongside conventional medicine, in the belief that it increases the effectiveness of the science-based medicine.[79][80][81] An example of "complementary medicine" is use of acupuncture (sticking needles in the body to influence the flow of a supernatural energy), along with using science-based medicine, in the belief that the acupuncture increases the effectiveness or "complements" the science-based medicine.[81] "CAM" is an abbreviation for "complementary and alternative medicine".

The expression "Integrative medicine" (or "integrated medicine") is used in two different ways. One use refers to a belief that medicine based on science can be "integrated" with practices that are not. Another use refers only to a combination of alternative medical treatments with conventional treatments that have some scientific proof of efficacy, in which case it is identical with CAM.[16] "holistic medicine" (or holistic health) is an alternative medicine practice that claims to treat the "whole person" and not just the illness.

"Traditional medicine" and "folk medicine" refer to prescientific practices of a culture, not to what is traditionally practiced in cultures where medical science dominates. "Eastern medicine" typically refers to prescientific traditional medicines of Asia. "Western medicine", when referring to modern practice, typically refers to medical science, and not to alternative medicines practiced in the west (Europe and the Americas). "Western medicine", "biomedicine", "mainstream medicine", "medical science", "science-based medicine", "evidence-based medicine", "conventional medicine", "standard medicine", "orthodox medicine", "allopathic medicine", "dominant health system", and "medicine", are sometimes used interchangeably as having the same meaning, when contrasted with alternative medicine, but these terms may have different meanings in some contexts, e.g., some practices in medical science are not supported by rigorous scientific testing so "medical science" is not strictly identical with "science-based medicine", and "standard medical care" may refer to "best practice" when contrasted with other biomedicine that is less used or less recommended.[n 11][84]

Prominent members of the science[31][85] and biomedical science community[24] assert that it is not meaningful to define an alternative medicine that is separate from a conventional medicine, that the expressions "conventional medicine", "alternative medicine", "complementary medicine", "integrative medicine", and "holistic medicine" do not refer to anything at all.[24][31][85][86] Their criticisms of trying to make such artificial definitions include: "There's no such thing as conventional or alternative or complementary or integrative or holistic medicine. There's only medicine that works and medicine that doesn't;"[24][31][85] "By definition, alternative medicine has either not been proved to work, or been proved not to work. You know what they call alternative medicine that's been proved to work? Medicine;"[33] "There cannot be two kinds of medicine: conventional and alternative. There is only medicine that has been adequately tested and medicine that has not, medicine that works and medicine that may or may not work. Once a treatment has been tested rigorously, it no longer matters whether it was considered alternative at the outset. If it is found to be reasonably safe and effective, it will be accepted;"[24] and "There is no alternative medicine. There is only scientifically proven, evidence-based medicine supported by solid data or unproven medicine, for which scientific evidence is lacking."[86]

Others in both the biomedical and CAM communities point out that CAM cannot be precisely defined because of the diversity of theories and practices it includes, and because the boundaries between CAM and biomedicine overlap, are porous, and change. The expression "complementary and alternative medicine" (CAM) resists easy definition because the health systems and practices it refers to are diffuse, and its boundaries poorly defined.[7][n 12] Healthcare practices categorized as alternative may differ in their historical origin, theoretical basis, diagnostic technique, therapeutic practice and in their relationship to the medical mainstream. Some alternative therapies, including traditional Chinese medicine (TCM) and Ayurveda, have ancient origins in East or South Asia and constitute complete alternative medical systems;[91] others, such as homeopathy and chiropractic, have origins in Europe or the United States and emerged in the eighteenth and nineteenth centuries. Some, such as osteopathy and chiropractic, employ manipulative physical methods of treatment; others, such as meditation and prayer, are based on mind-body interventions. Treatments considered alternative in one location may be considered conventional in another.[94] Thus, chiropractic is not considered alternative in Denmark and likewise osteopathic medicine is no longer thought of as an alternative therapy in the United States.[94]

One common feature of all definitions of alternative medicine is its designation as "other than" conventional medicine. For example, the widely referenced descriptive definition of complementary and alternative medicine devised by the US National Center for Complementary and Integrative Health (NCCIH) of the National Institutes of Health (NIH) states that it is "a group of diverse medical and health care systems, practices, and products that are not generally considered part of conventional medicine."[1] However, even if an alternative practice comes to be used by conventional medical practitioners, it does not necessarily follow that either it or its practitioners would no longer be considered alternative.[n 13]

Some definitions seek to specify alternative medicine in terms of its social and political marginality to mainstream healthcare.[99] This can refer to the lack of support that alternative therapies receive from the medical establishment and related bodies regarding access to research funding, sympathetic coverage in the medical press, or inclusion in the standard medical curriculum.[99] In 1993, the British Medical Association (BMA), one among many professional organizations that have attempted to define alternative medicine, stated that it[n 14] referred to "...those forms of treatment which are not widely used by the conventional healthcare professions, and the skills of which are not taught as part of the undergraduate curriculum of conventional medical and paramedical healthcare courses."[74] In a US context, an influential definition coined in 1993 by the Harvard-based physician,[100] David M. Eisenberg,[101] characterized alternative medicine "as interventions neither taught widely in medical schools nor generally available in US hospitals".[102] These descriptive definitions are inadequate in the present day, when some conventional doctors offer alternative medical treatments and CAM introductory courses or modules can be offered as part of standard undergraduate medical training;[103] alternative medicine is taught in more than half of US medical schools, and US health insurers are increasingly willing to provide reimbursement for CAM therapies. In 1999, 7.7% of US hospitals reported using some form of CAM therapy; this proportion had risen to 37.7% by 2008.[105]

An expert panel at a conference hosted in 1995 by the US Office for Alternative Medicine (OAM),[106][n 15] devised a theoretical definition[106] of alternative medicine as "a broad domain of healing resources... other than those intrinsic to the politically dominant health system of a particular society or culture in a given historical period."[107] This definition has been widely adopted by CAM researchers,[106] cited by official government bodies such as the UK Department of Health,[108] attributed as the definition used by the Cochrane Collaboration,[109] and, with some modification, was preferred in the 2005 consensus report of the US Institute of Medicine, Complementary and Alternative Medicine in the United States.[n 4]

The 1995 OAM conference definition, an expansion of Eisenberg's 1993 formulation, is silent regarding questions of the medical effectiveness of alternative therapies.[110] Its proponents hold that it thus avoids relativism about differing forms of medical knowledge and, while it is an essentially political definition, this should not imply that the dominance of mainstream biomedicine is solely due to political forces.[110] According to this definition, alternative and mainstream medicine can only be differentiated with reference to what is "intrinsic to the politically dominant health system of a particular society or culture".[111] However, there is neither a reliable method to distinguish between cultures and subcultures, nor to attribute them as dominant or subordinate, nor any accepted criteria to determine the dominance of a cultural entity.[111] If the culture of a politically dominant healthcare system is held to be equivalent to the perspectives of those charged with the medical management of leading healthcare institutions and programs, the definition fails to recognize the potential for division either within such an elite or between a healthcare elite and the wider population.[111]

Normative definitions distinguish alternative medicine from the biomedical mainstream in its provision of therapies that are unproven, unvalidated, or ineffective and support of theories with no recognized scientific basis. These definitions characterize practices as constituting alternative medicine when, used independently or in place of evidence-based medicine, they are put forward as having the healing effects of medicine, but are not based on evidence gathered with the scientific method.[1][13][24][79][80][113] Exemplifying this perspective, a 1998 editorial co-authored by Marcia Angell, a former editor of the New England Journal of Medicine, argued that there cannot be two kinds of medicine, only medicine that has been adequately tested and medicine that has not.[24]

This line of division has been subject to criticism, however, as not all forms of standard medical practice have adequately demonstrated evidence of benefit,[n 1][84] and it is also unlikely in most instances that conventional therapies, if proven to be ineffective, would ever be classified as CAM.[106]

Public information websites maintained by the governments of the US and of the UK make a distinction between "alternative medicine" and "complementary medicine", but mention that these two overlap. The National Center for Complementary and Integrative Health (NCCIH) of the National Institutes of Health (NIH) (a part of the US Department of Health and Human Services) states that "alternative medicine" refers to using a non-mainstream approach in place of conventional medicine and that "complementary medicine" generally refers to using a non-mainstream approach together with conventional medicine, and comments that the boundaries between complementary and conventional medicine overlap and change with time.[1]

The National Health Service (NHS) website NHS Choices (owned by the UK Department of Health), adopting the terminology of NCCIH, states that when a treatment is used alongside conventional treatments, to help a patient cope with a health condition, and not as an alternative to conventional treatment, this use of treatments can be called "complementary medicine"; but when a treatment is used instead of conventional medicine, with the intention of treating or curing a health condition, the use can be called "alternative medicine".[115]

Similarly, the public information website maintained by the National Health and Medical Research Council (NHMRC) of the Commonwealth of Australia uses the acronym "CAM" for a wide range of health care practices, therapies, procedures and devices not within the domain of conventional medicine. In the Australian context this is stated to include acupuncture; aromatherapy; chiropractic; homeopathy; massage; meditation and relaxation therapies; naturopathy; osteopathy; reflexology; traditional Chinese medicine; and the use of vitamin supplements.[116]

The Danish National Board of Health's "Council for Alternative Medicine" (Sundhedsstyrelsens Råd for Alternativ Behandling (SRAB)), an independent institution under the National Board of Health (Danish: Sundhedsstyrelsen), uses the term "alternative medicine" for treatments performed by therapists who are not authorized healthcare professionals, as well as treatments performed by authorized healthcare professionals using methods mainly employed outside the healthcare system.

In General Guidelines for Methodologies on Research and Evaluation of Traditional Medicine, published in 2000 by the World Health Organization (WHO), complementary and alternative medicine were defined as a broad set of health care practices that are not part of that country's own tradition and are not integrated into the dominant health care system.[118] Some herbal therapies are mainstream in Europe but are alternative in the US.[120]

The history of alternative medicine may refer to the history of a group of diverse medical practices that were collectively promoted as "alternative medicine" beginning in the 1970s, to the collection of individual histories of members of that group, or to the history of western medical practices that were labeled "irregular practices" by the western medical establishment.[5][121][122][123][124] It includes the histories of complementary medicine and of integrative medicine. Before the 1970s, western practitioners that were not part of the increasingly science-based medical establishment were referred to as "irregular practitioners", and were dismissed by the medical establishment as unscientific and as practicing quackery.[121][122] Until the 1970s, irregular practice became increasingly marginalized as quackery and fraud, as western medicine increasingly incorporated scientific methods and discoveries and had a corresponding increase in the success of its treatments.[123] In the 1970s, irregular practices were grouped with traditional practices of nonwestern cultures and with other unproven or disproven practices that were not part of biomedicine, with the entire group collectively marketed and promoted under the single expression "alternative medicine".[5][121][122][123][125]

Use of alternative medicine in the west began to rise following the counterculture movement of the 1960s, as part of the rising new age movement of the 1970s.[5][126][127] This was due to misleading mass marketing of "alternative medicine" as an effective "alternative" to biomedicine, changing social attitudes about not using chemicals and challenging the establishment and authority of any kind, sensitivity to giving equal measure to the beliefs and practices of other cultures (cultural relativism), and growing frustration and desperation by patients about the limitations and side effects of science-based medicine.[5][122][123][124][125][127][128] At the same time, in 1975, the American Medical Association, which played the central role in fighting quackery in the United States, abolished its quackery committee and closed down its Department of Investigation.[121]:xxi[128] By the early to mid 1970s the expression "alternative medicine" came into widespread use, and the expression became mass marketed as a collection of "natural" and effective treatment "alternatives" to science-based biomedicine.[5][128][129][130] By 1983, mass marketing of "alternative medicine" was so pervasive that the British Medical Journal (BMJ) pointed to "an apparently endless stream of books, articles, and radio and television programmes urge on the public the virtues of (alternative medicine) treatments ranging from meditation to drilling a hole in the skull to let in more oxygen".[128] In this 1983 article, the BMJ wrote, "one of the few growth industries in contemporary Britain is alternative medicine", noting that by 1983, "33% of patients with rheumatoid arthritis and 39% of those with backache admitted to having consulted an alternative practitioner".[128]

By about 1990, the American alternative medicine industry had grown to $27 billion per year, with polls showing 30% of Americans were using it.[127][131] Moreover, polls showed that Americans made more visits for alternative therapies than the total number of visits to primary care doctors, and American out-of-pocket spending (non-insurance spending) on alternative medicine was about equal to spending on biomedical doctors.[121]:172 In 1991, Time magazine ran a cover story, "The New Age of Alternative Medicine: Why New Age Medicine Is Catching On".[127][131] In 1993, the New England Journal of Medicine reported one in three Americans as using alternative medicine.[127] In 1993, the Public Broadcasting System ran a Bill Moyers special, Healing and the Mind, with Moyers commenting that "...people by the tens of millions are using alternative medicine. If established medicine does not understand that, they are going to lose their clients."[127]

Another period of explosive growth began in the 1990s, when senior-level political figures began promoting alternative medicine, investing large sums of government medical research funds into testing alternative medicine, including testing of scientifically implausible treatments, and relaxing government regulation of alternative medicine products as compared to biomedical products.[5][121]:xxi[122][123][124][125][132][133] Beginning with a 1991 appropriation of $2 million for funding alternative medicine research, federal spending grew to a cumulative total of about $2.5 billion by 2009, with 50% of Americans using alternative medicine by 2013.[10][134]

In 1991, pointing to a need for testing because of the widespread use of alternative medicine without authoritative information on its efficacy, United States Senator Tom Harkin used $2 million of his discretionary funds to create the Office for the Study of Unconventional Medical Practices (OSUMP), later renamed the Office of Alternative Medicine (OAM).[121]:170[135][136] The OAM was created within the National Institutes of Health (NIH), the scientifically prestigious primary agency of the United States government responsible for biomedical and health-related research.[121]:170[135][136] Its mandate was to investigate, evaluate, and validate effective alternative medicine treatments, and to alert the public as to the results of testing their efficacy.[131][135][136][137]

Sen. Harkin had become convinced his allergies were cured by taking bee pollen pills, and was urged to make the spending by two of his influential constituents, Berkley Bedell and Frank Wiewel.[131][135][136] Bedell, a longtime friend of Sen. Harkin, was a former member of the United States House of Representatives who believed that alternative medicine had twice cured him of diseases after mainstream medicine had failed, claiming that cow's milk colostrum cured his Lyme disease, and that an herbal derivative from camphor had prevented post-surgical recurrence of his prostate cancer.[121][131] Wiewel was a promoter of unproven cancer treatments involving a mixture of blood sera that the Food and Drug Administration had banned from being imported.[131] Both Bedell and Wiewel became members of the advisory panel for the OAM. The company that sold the bee pollen was later fined by the Federal Trade Commission for making false health claims about their bee-pollen products reversing the aging process, curing allergies, and helping with weight loss.[138]

In 1993, Britain's Prince Charles, who claimed that homeopathy and other alternative medicines were effective alternatives to biomedicine, established the Foundation for Integrated Health (FIH) as a charity to explore "how safe, proven complementary therapies can work in conjunction with mainstream medicine".[139] The FIH received government funding through grants from Britain's Department of Health.[139]

In 1994, Sen. Harkin (D) and Senator Orrin Hatch (R) introduced the Dietary Supplement Health and Education Act (DSHEA).[140][141] The act reduced the authority of the FDA to monitor products sold as "natural" treatments.[140] Labeling standards were reduced to allow health claims for supplements based only on unconfirmed preliminary studies that were not subjected to scientific peer review, and the act made it more difficult for the FDA to promptly seize products or demand proof of safety where there was evidence of a product being dangerous.[141] The act became known as "The 1993 Snake Oil Protection Act" following a New York Times editorial under that name.[140]

Senator Harkin complained about the "unbendable rules of randomized clinical trials", citing his use of bee pollen to treat his allergies, which he claimed was effective even though it was biologically implausible and efficacy was not established using scientific methods.[135][142] Sen. Harkin asserted that claims for alternative medicine efficacy should be allowed without conventional scientific testing, even when the treatments are biologically implausible: "It is not necessary for the scientific community to understand the process before the American public can benefit from these therapies."[140] Following passage of the act, sales rose from about $4 billion in 1994 to $20 billion by the end of 2000, even as evidence of the products' lack of efficacy or harmful effects grew.[140] Senator Harkin came into open public conflict with the first OAM Director, Joseph M. Jacobs, and with OAM board members from the scientific and biomedical community.[136] Jacobs' insistence on rigorous scientific methodology caused friction with Senator Harkin.[135][142][143] Dr. Jacobs publicly criticized the increasing political resistance to the use of scientific methodology, and another OAM board member complained that "nonsense has trickled down to every aspect of this office".[135][142] In 1994, Senator Harkin appeared on television with cancer patients who blamed Dr. Jacobs for blocking their access to untested cancer treatment, leading Jacobs to resign in frustration.[135][142]

In 1995, Wayne Jonas, a promoter of homeopathy and political ally of Senator Harkin, became the director of the OAM, and continued in that role until 1999.[144] In 1997, the OAM budget was increased from $12 million to $20 million annually.[145] From 1990 to 1997, use of alternative medicine in the US increased by 25%, with a corresponding 50% increase in expenditures.[146] The OAM drew increasing criticism from eminent members of the scientific community, who wrote letters to the Senate Appropriations Committee whenever renewal of OAM funding came up for discussion.[121]:175 Nobel laureate Paul Berg wrote that the prestigious NIH should not be degraded to act as a cover for quackery, calling the OAM "an embarrassment to serious scientists."[121]:175[145] The president of the American Physical Society wrote complaining that the government was spending money on testing products and practices that "violate basic laws of physics and more clearly resemble witchcraft".[121]:175[145] In 1998, the President of the North Carolina Medical Association publicly called for shutting down the OAM.[147]

In 1998, NIH director and Nobel laureate Harold Varmus came into conflict with Senator Harkin by pushing for more NIH control of alternative medicine research.[148] The NIH Director placed the OAM under stricter scientific NIH control.[145][148] Senator Harkin responded by elevating the OAM into an independent NIH "center", just short of being its own "institute", renamed the National Center for Complementary and Alternative Medicine (NCCAM). NCCAM had a mandate to promote a more rigorous and scientific approach to the study of alternative medicine, research training and career development, outreach, and "integration". In 1999, the NCCAM budget was increased from $20 million to $50 million.[147][148] The United States Congress approved the appropriations without dissent. In 2000, the budget was increased to about $68 million, in 2001 to $90 million, in 2002 to $104 million, and in 2003 to $113 million.[147]

In 2004, modifications of the European Parliament's 2001 Directive 2001/83/EC, regulating all medicinal products, were made with the expectation of influencing development of the European market for alternative medicine products.[149] Regulation of alternative medicine in Europe was loosened with "a simplified registration procedure" for traditional herbal medicinal products.[149][150] Plausible "efficacy" for traditional medicine was redefined to be based on long-term popularity and testimonials ("the pharmacological effects or efficacy of the medicinal product are plausible on the basis of long-standing use and experience."), without scientific testing.[149][150] The Committee on Herbal Medicinal Products (HMPC) was created within the European Medicines Agency (EMEA) in London. A special working group was established for homeopathic remedies under the Heads of Medicines Agencies.[149]

Through 2004, alternative medicine that was traditional to Germany continued to be a regular part of the health care system, including homeopathy and anthroposophic medicine.[149] The German Medicines Act mandated that science-based medical authorities consider the "particular characteristics" of complementary and alternative medicines.[149] By 2004, homeopathy had grown to be the most used alternative therapy in France, growing from 16% of the population using homeopathic medicine in 1982, to 29% by 1987, 36% by 1992, and 62% of French mothers using homeopathic medicines by 2004, with 94.5% of French pharmacists advising pregnant women to use homeopathic remedies.[151] As of 2004, 100 million people in India depended solely on traditional German homeopathic remedies for their medical care.[152] As of 2010, homeopathic remedies continued to be the leading alternative treatment used by European physicians.[151] By 2005, sales of homeopathic remedies and anthroposophical medicine had grown to 930 million euros, a 60% increase from 1995.[151][153]

In 2008, London's The Times published a letter from Edzard Ernst that asked the FIH to recall two guides promoting alternative medicine, saying: "the majority of alternative therapies appear to be clinically ineffective, and many are downright dangerous." In 2010, Britain's FIH closed after allegations of fraud and money laundering led to arrests of its officials.[139]

In 2009, after a history of 17 years of government testing and spending of nearly $2.5 billion on research had produced almost no clearly proven efficacy of alternative therapies, Senator Harkin complained, "One of the purposes of this center was to investigate and validate alternative approaches. Quite frankly, I must say publicly that it has fallen short. I think quite frankly that in this center and in the office previously before it, most of its focus has been on disproving things rather than seeking out and approving."[148][154][155] Members of the scientific community criticized this comment as showing Senator Harkin did not understand the basics of scientific inquiry, which tests hypotheses, but never intentionally attempts to "validate approaches".[148] Members of the scientific and biomedical communities complained that after a history of 17 years of being tested, at a cost of over $2.5 billion on testing scientifically and biologically implausible practices, almost no alternative therapy had shown clear efficacy.[10] In 2009, the NCCAM's budget was increased to about $122 million.[148] Overall NIH funding for CAM research increased to $300 million by 2009.[148] By 2009, Americans were spending $34 billion annually on CAM.[156]

Since 2009, according to Art. 118a of the Swiss Federal Constitution, the Swiss Confederation and the Cantons of Switzerland shall within the scope of their powers ensure that consideration is given to complementary medicine.[157]

In 2012, the Journal of the American Medical Association (JAMA) published a criticism that study after study had been funded by NCCAM, but "failed to prove that complementary or alternative therapies are anything more than placebos".[158] The JAMA criticism pointed to large wasting of research money on testing scientifically implausible treatments, citing "NCCAM officials spending $374,000 to find that inhaling lemon and lavender scents does not promote wound healing; $750,000 to find that prayer does not cure AIDS or hasten recovery from breast-reconstruction surgery; $390,000 to find that ancient Indian remedies do not control type 2 diabetes; $700,000 to find that magnets do not treat arthritis, carpal tunnel syndrome, or migraine headaches; and $406,000 to find that coffee enemas do not cure pancreatic cancer."[158] It was pointed out that negative results from testing were generally ignored by the public, and that people continue to "believe what they want to believe, arguing that it does not matter what the data show: They know what works for them".[158] Continued increasing use of CAM products was also blamed on the lack of FDA ability to regulate alternative products, where negative studies do not result in FDA warnings or FDA-mandated changes in labeling, so that few consumers are aware that the claims of many supplements have been found to be unsupported.[158]

By 2013, 50% of Americans were using CAM.[134] As of 2013, CAM medicinal products in Europe continued to be exempted from the documented efficacy standards required of other medicinal products.[159]

In 2014 the NCCAM was renamed to the National Center for Complementary and Integrative Health (NCCIH) with a new charter requiring that 12 of the 18 council members shall be selected with a preference to selecting leading representatives of complementary and alternative medicine, 9 of the members must be licensed practitioners of alternative medicine, 6 members must be general public leaders in the fields of public policy, law, health policy, economics, and management, and 3 members must represent the interests of individual consumers of complementary and alternative medicine.[160]

Much of what is now categorized as alternative medicine was developed as independent, complete medical systems. These were developed long before biomedicine and use of scientific methods. Each system was developed in relatively isolated regions of the world where there was little or no medical contact with pre-scientific western medicine, or with each other's systems. Examples are traditional Chinese medicine and the Ayurvedic medicine of India.

Other alternative medicine practices, such as homeopathy, were developed in western Europe and in opposition to western medicine, at a time when western medicine was based on unscientific theories that were dogmatically imposed by western religious authorities. Homeopathy was developed prior to discovery of the basic principles of chemistry, which proved homeopathic remedies contained nothing but water. But homeopathy, with its remedies made of water, was harmless compared to the unscientific and dangerous orthodox western medicine practiced at that time, which included use of toxins and draining of blood, often resulting in permanent disfigurement or death.[122]

Other alternative practices such as chiropractic and osteopathic manipulative medicine were developed in the United States at a time when western medicine was beginning to incorporate scientific methods and theories, but the biomedical model was not yet totally dominant. Practices such as chiropractic and osteopathic medicine, each considered irregular by the western medical establishment, also opposed each other, both rhetorically and politically, with licensing legislation. Osteopathic practitioners added the courses and training of biomedicine to their licensing, and holders of the Doctor of Osteopathic Medicine degree gradually diminished use of the field's unscientific origins. Without the original nonscientific practices and theories, osteopathic medicine is now considered the same as biomedicine.

Further information: Rise of modern medicine

Until the 1970s, western practitioners that were not part of the medical establishment were referred to as "irregular practitioners", and were dismissed by the medical establishment as unscientific and as practicing quackery.[122] Irregular practice became increasingly marginalized as quackery and fraud, as western medicine increasingly incorporated scientific methods and discoveries and had a corresponding increase in the success of its treatments.

Dating from the 1970s, medical professionals, sociologists, anthropologists and other commentators noted the increasing visibility of a wide variety of health practices that had neither derived directly from nor been verified by biomedical science.[161] Since that time, those who have analyzed this trend have deliberated over the most apt language with which to describe this emergent health field.[161] A variety of terms have been used, including heterodox, irregular, fringe and alternative medicine, while others, particularly medical commentators, have been satisfied to label them as instances of quackery.[161] The most persistent term has been alternative medicine, but its use is problematic as it assumes a value-laden dichotomy between a medical fringe, implicitly of borderline acceptability at best, and a privileged medical orthodoxy, associated with validated medico-scientific norms.[162] The use of the category of alternative medicine has also been criticized as it cannot be studied as an independent entity but must be understood in terms of a regionally and temporally specific medical orthodoxy.[163] Its use can also be misleading as it may erroneously imply that a real medical alternative exists.[164] As with near-synonymous expressions, such as unorthodox, complementary, marginal, or quackery, these linguistic devices have served, in the context of processes of professionalisation and market competition, to establish the authority of official medicine and police the boundary between it and its unconventional rivals.[162]

An early instance of the influence of this modern, or western, scientific medicine outside Europe and North America is Peking Union Medical College.[165][n 16][n 17]

From a historical perspective, the emergence of alternative medicine, if not the term itself, is typically dated to the 19th century.[166] This is despite the fact that there are variants of Western non-conventional medicine that arose in the late eighteenth century or earlier and some non-Western medical traditions, currently considered alternative in the West and elsewhere, which boast extended historical pedigrees.[162] Alternative medical systems, however, can only be said to exist when there is an identifiable, regularized and authoritative standard medical practice, such as arose in the West during the nineteenth century, to which they can function as an alternative.

During the late eighteenth and nineteenth centuries regular and irregular medical practitioners became more clearly differentiated throughout much of Europe and,[168] as the nineteenth century progressed, most Western states converged in the creation of legally delimited and semi-protected medical markets.[169] It is at this point that an "official" medicine, created in cooperation with the state and employing a scientific rhetoric of legitimacy, emerges as a recognizable entity and that the concept of alternative medicine as a historical category becomes tenable.[170]

As part of this process, professional adherents of mainstream medicine in countries such as Germany, France, and Britain increasingly invoked the scientific basis of their discipline as a means of engendering internal professional unity and of external differentiation in the face of sustained market competition from homeopaths, naturopaths, mesmerists and other nonconventional medical practitioners, finally achieving a degree of imperfect dominance through alliance with the state and the passage of regulatory legislation.[162][164] In the US the Johns Hopkins University School of Medicine, based in Baltimore, Maryland, opened in 1893, with William H. Welch and William Osler among the founding physicians, and was the first medical school devoted to teaching "German scientific medicine".[171]

Buttressed by increased authority arising from significant advances in the medical sciences of the late 19th century onwards, including development and application of the germ theory of disease by the chemist Louis Pasteur and the surgeon Joseph Lister, of microbiology co-founded by Robert Koch (in 1885 appointed professor of hygiene at the University of Berlin), and of the use of X-rays (Röntgen rays), the 1910 Flexner Report called upon American medical schools to follow the model of the Johns Hopkins School of Medicine, and adhere to mainstream science in their teaching and research. This was in a belief, mentioned in the Report's introduction, that the preliminary and professional training then prevailing in medical schools should be reformed in view of the new means for diagnosing and combating disease made available by the sciences on which medicine depended.[n 18][173]

Putative medical practices at the time that later became known as "alternative medicine" included homeopathy (founded in Germany in the early 19th century) and chiropractic (founded in North America in the late 19th century). These conflicted in principle with the developments in medical science upon which the Flexner reforms were based, and they have not become compatible with further advances of medical science, such as those listed in the Timeline of medicine and medical technology, 1900–1999 and 2000–present; nor have Ayurveda, acupuncture or other kinds of alternative medicine.

At the same time "Tropical medicine" was being developed as a specialist branch of western medicine in research establishments such as Liverpool School of Tropical Medicine founded in 1898 by Alfred Lewis Jones, London School of Hygiene & Tropical Medicine, founded in 1899 by Patrick Manson and Tulane University School of Public Health and Tropical Medicine, instituted in 1912. A distinction was being made between western scientific medicine and indigenous systems. An example is given by an official report about indigenous systems of medicine in India, including Ayurveda, submitted by Mohammad Usman of Madras and others in 1923. This stated that the first question the Committee considered was "to decide whether the indigenous systems of medicine were scientific or not".[174][175]

By the later twentieth century the term 'alternative medicine' entered public discourse,[n 19][178] but it was not always being used with the same meaning by all parties. Arnold S. Relman remarked in 1998 that in the best kind of medical practice, all proposed treatments must be tested objectively, and that in the end there will only be treatments that pass and those that do not, those that are proven worthwhile and those that are not. He asked, 'Can there be any reasonable "alternative"?'[179] But also in 1998 the then Surgeon General of the United States, David Satcher,[180] issued public information about eight common alternative treatments (including acupuncture, holistic medicine, and massage), together with information about common diseases and conditions, on nutrition, diet, and lifestyle changes, and about helping consumers to decipher fraud and quackery, and to find healthcare centers and doctors who practiced alternative medicine.[181]

By 1990, approximately 60 million Americans had used one or more complementary or alternative therapies to address health issues, according to a nationwide survey in the US published in 1993 by David Eisenberg.[182] A study published in the November 11, 1998 issue of the Journal of the American Medical Association reported that 42% of Americans had used complementary and alternative therapies, up from 34% in 1990.[146] However, despite the growth in patient demand for complementary medicine, most of the early alternative/complementary medical centers failed.[183]

Mainly as a result of reforms following the Flexner Report of 1910,[184] medical education in established medical schools in the US has generally not included alternative medicine as a teaching topic.[n 20] Typically, their teaching is based on current practice and scientific knowledge about: anatomy, physiology, histology, embryology, neuroanatomy, pathology, pharmacology, microbiology and immunology.[186] Medical schools' teaching includes such topics as doctor-patient communication, ethics, the art of medicine,[187] and engaging in complex clinical reasoning (medical decision-making).[188] Writing in 2002, Snyderman and Weil remarked that by the early twentieth century the Flexner model had helped to create the 20th-century academic health center, in which education, research, and practice were inseparable. While this had much improved medical practice by defining with increasing certainty the pathophysiological basis of disease, a single-minded focus on the pathophysiological had diverted much of mainstream American medicine from clinical conditions that were not well understood in mechanistic terms, and were not effectively treated by conventional therapies.[189]

By 2001 some form of CAM training was being offered by at least 75 out of 125 medical schools in the US.[190] Exceptionally, the School of Medicine of the University of Maryland, Baltimore includes a research institute for integrative medicine (a member entity of the Cochrane Collaboration).[191][192] Medical schools are responsible for conferring medical degrees, but a physician typically may not legally practice medicine until licensed by the local government authority. Licensed physicians in the US who have attended one of the established medical schools there have usually graduated with a Doctor of Medicine (MD) degree.[193] All states require that applicants for MD licensure be graduates of an approved medical school and complete the United States Medical Licensing Exam (USMLE).[193]

The British Medical Association, in its publication Complementary Medicine, New Approach to Good Practice (1993), gave as a working definition of non-conventional therapies (including acupuncture, chiropractic and homeopathy): "...those forms of treatment which are not widely used by the orthodox health-care professions, and the skills of which are not part of the undergraduate curriculum of orthodox medical and paramedical health-care courses." By 2000 some medical schools in the UK were offering CAM familiarisation courses to undergraduate medical students while some were also offering modules specifically on CAM.[195]

The Cochrane Collaboration Complementary Medicine Field explains its "Scope and Topics" by giving a broad and general definition for complementary medicine as including practices and ideas outside the domain of conventional medicine in several countries, and defined by its users as preventing or treating illness, or promoting health and well-being, and which complement mainstream medicine in three ways: by contributing to a common whole, by satisfying a demand not met by conventional practices, and by diversifying the conceptual framework of medicine.[196]

Proponents of an evidence-base for medicine[n 21][198][199][200][201] such as the Cochrane Collaboration (founded in 1993 and from 2011 providing input for WHO resolutions) take a position that all systematic reviews of treatments, whether "mainstream" or "alternative", ought to be held to the current standards of scientific method.[192] In a study titled Development and classification of an operational definition of complementary and alternative medicine for the Cochrane Collaboration (2011) it was proposed that indicators that a therapy is accepted include government licensing of practitioners, coverage by health insurance, statements of approval by government agencies, and recommendation as part of a practice guideline; and that if something is currently a standard, accepted therapy, then it is not likely to be widely considered as CAM.[106]

That alternative medicine has been on the rise "in countries where Western science and scientific method generally are accepted as the major foundations for healthcare, and 'evidence-based' practice is the dominant paradigm" was described as an "enigma" in the Medical Journal of Australia.[202]

Critics in the US say the expression is deceptive because it implies there is an effective alternative to science-based medicine, and that complementary is deceptive because it implies that the treatment increases the effectiveness of (complements) science-based medicine, while alternative medicines that have been tested nearly always have no measurable positive effect compared to a placebo.[5][203][204][205]

Some opponents, focused upon health fraud, misinformation, and quackery as public health problems in the US, are highly critical of alternative medicine, notably Wallace Sampson and Paul Kurtz, founders of Scientific Review of Alternative Medicine, and Stephen Barrett, co-founder of The National Council Against Health Fraud and webmaster of Quackwatch.[206] Grounds stated in the US and elsewhere for opposing alternative medicine include that it is unproven or disproven, and that its use may delay or displace effective conventional treatment.

Paul Offit proposed that "alternative medicine becomes quackery" in four ways: by recommending against conventional therapies that are helpful; by promoting potentially harmful therapies without adequate warning; by draining patients' bank accounts; and by promoting "magical thinking".[85]

A United States government agency, the National Center for Complementary and Integrative Health (NCCIH), created its own classification system for branches of complementary and alternative medicine that divides them into five major groups. These groups have some overlap, and distinguish two types of energy medicine: veritable, which involves scientifically observable energy (including magnet therapy, colorpuncture and light therapy), and putative, which invokes physically undetectable or unverifiable energy.[215]

Alternative medicine practices and beliefs are diverse in their foundations and methodologies. The wide range of treatments and practices referred to as alternative medicine includes some stemming from nineteenth century North America, such as chiropractic and naturopathy, others, mentioned by Jütte, that originated in eighteenth- and nineteenth-century Germany, such as homeopathy and hydropathy,[164] and some that have originated in China or India, while African, Caribbean, Pacific Island, Native American, and other regional cultures have traditional medical systems as diverse as their diversity of cultures.[1]

Examples of CAM as a broader term for unorthodox treatment and diagnosis of illnesses, disease, infections, etc.[216] include yoga, acupuncture, aromatherapy, chiropractic, herbalism, homeopathy, hypnotherapy, massage, osteopathy, reflexology, relaxation therapies, spiritual healing and tai chi.[216] CAM differs from conventional medicine in that it is normally private and not covered by health insurance; it is paid out of pocket by the patient, making it expensive, and its use tends to be concentrated among upper-class or more educated people.[146][216]

The NCCIH classification system is as follows:

Alternative therapies based on electricity or magnetism use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields in an unconventional manner rather than claiming the existence of imponderable or supernatural energies.[1]

Substance-based practices use substances found in nature, such as herbs, foods, non-vitamin supplements and megavitamins, and minerals, and include traditional herbal remedies with herbs specific to the regions in which the cultural practices arose.[1] Nonvitamin supplements include fish oil, omega-3 fatty acids, glucosamine, echinacea, flaxseed oil or pills, and ginseng, when used under a claim of healing effects.[66]

Mind-body interventions, working under the premise that the mind can affect "bodily functions and symptoms",[1] include healing claims made in hypnotherapy,[217] and in guided imagery, meditation, progressive relaxation, qi gong, tai chi and yoga.[1] Meditation practices including mantra meditation, mindfulness meditation, yoga, tai chi, and qi gong have many uncertainties. According to an AHRQ review, the available evidence on meditation practices through September 2005 is of poor methodological quality and definite conclusions on the effects of meditation in healthcare cannot be made using existing research.[218][219]

Naturopathy is based on a belief in vitalism, which posits that a special energy called vital energy or vital force guides bodily processes such as metabolism, reproduction, growth, and adaptation.[41] The term was coined in 1895[220] by John Scheel and popularized by Benedict Lust, the "father of U.S. naturopathy".[221] Today, naturopathy is primarily practiced in the United States and Canada.[222] Naturopaths in unregulated jurisdictions may use the Naturopathic Doctor designation or other titles regardless of level of education.[223]

Read more from the original source:
Alternative medicine - Wikipedia

Read More...

Genetics – Wikipedia

October 20th, 2016 7:41 pm

This article is about the general scientific term. For the scientific journal, see Genetics (journal).

Genetics is the study of genes, genetic variation, and heredity in living organisms.[1][2] It is generally considered a field of biology, but it intersects frequently with many of the life sciences and is strongly linked with the study of information systems.

The father of genetics is Gregor Mendel, a late 19th-century scientist and Augustinian friar. Mendel studied 'trait inheritance', patterns in the way traits were handed down from parents to offspring. He observed that organisms (pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous definition of what is referred to as a gene.

Trait inheritance and molecular inheritance mechanisms of genes are still primary principles of genetics in the 21st century, but modern genetics has expanded beyond inheritance to studying the function and behavior of genes. Gene structure and function, variation, and distribution are studied within the context of the cell, the organism (e.g. dominance) and within the context of a population. Genetics has given rise to a number of sub-fields including epigenetics and population genetics. Organisms studied within the broad field span the domain of life, including bacteria, plants, animals, and humans.

Genetic processes work in combination with an organism's environment and experiences to influence development and behavior, often referred to as nature versus nurture. The intra- or extra-cellular environment of a cell or organism may switch gene transcription on or off. A classic example is two seeds of genetically identical corn, one placed in a temperate climate and one in an arid climate. While the average height of the two corn stalks may be genetically determined to be equal, the one in the arid climate only grows to half the height of the one in the temperate climate due to lack of water and nutrients in its environment.

The word genetics stems from the Ancient Greek genetikos meaning "genitive"/"generative", which in turn derives from genesis meaning "origin".[3][4][5]

The observation that living things inherit traits from their parents has been used since prehistoric times to improve crop plants and animals through selective breeding.[6] The modern science of genetics, seeking to understand this process, began with the work of Gregor Mendel in the mid-19th century.[7]

Imre Festetics, a Hungarian noble who lived in Kőszeg before Mendel, was the first to use the word "genetics". He described several rules of genetic inheritance in his work The genetic laws of Nature (Die genetischen Gesätze der Natur, 1819). His second law is the same as what Mendel published. In his third law, he developed the basic principles of mutation (he can be considered a forerunner of Hugo de Vries).[8]

Other theories of inheritance preceded Mendel's work. A popular theory during Mendel's time was the concept of blending inheritance: the idea that individuals inherit a smooth blend of traits from their parents.[9] Mendel's work provided examples where traits were definitely not blended after hybridization, showing that traits are produced by combinations of distinct genes rather than a continuous blend. Blending of traits in the progeny is now explained by the action of multiple genes with quantitative effects. Another theory that had some support at that time was the inheritance of acquired characteristics: the belief that individuals inherit traits strengthened by use in their parents. This theory (commonly associated with Jean-Baptiste Lamarck) is now known to be wrong; the experiences of individuals do not affect the genes they pass to their children,[10] although evidence in the field of epigenetics has revived some aspects of Lamarck's theory.[11] Other theories included the pangenesis of Charles Darwin (which had both acquired and inherited aspects) and Francis Galton's reformulation of pangenesis as both particulate and inherited.[12]

Modern genetics started with Gregor Johann Mendel, a scientist and Augustinian friar who studied the nature of inheritance in plants. In his paper "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), presented in 1865 to the Naturforschender Verein (Society for Research in Nature) in Brünn, Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically.[13] Although this pattern of inheritance could only be observed for a few traits, Mendel's work suggested that heredity was particulate, not acquired, and that the inheritance patterns of many traits could be explained through simple rules and ratios.

The importance of Mendel's work did not gain wide understanding until the 1890s, after his death, when other scientists working on similar problems re-discovered his research. William Bateson, a proponent of Mendel's work, coined the word genetics in 1905.[14][15] (The adjective genetic, derived from the Greek word genesis, "origin", predates the noun and was first used in a biological sense in 1860.)[16] Bateson both acted as a mentor and was aided significantly by the work of women scientists from Newnham College at Cambridge, specifically the work of Becky Saunders, Nora Darwin Barlow, and Muriel Wheldale Onslow.[17] Bateson popularized the usage of the word genetics to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London, England, in 1906.[18]

After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance. In 1911, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white eye mutation in fruit flies.[19] In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome.[20]

Although genes were known to exist on chromosomes, chromosomes are composed of both protein and DNA, and scientists did not know which of the two was responsible for inheritance. In 1928, Frederick Griffith discovered the phenomenon of transformation (see Griffith's experiment): dead bacteria could transfer genetic material to "transform" other still-living bacteria. Sixteen years later, in 1944, the Avery–MacLeod–McCarty experiment identified DNA as the molecule responsible for transformation.[21] The role of the nucleus as the repository of genetic information in eukaryotes had been established by Hämmerling in 1943 in his work on the single-celled alga Acetabularia.[22] The Hershey–Chase experiment in 1952 confirmed that DNA (rather than protein) is the genetic material of the viruses that infect bacteria, providing further evidence that DNA is the molecule responsible for inheritance.[23]

James Watson and Francis Crick determined the structure of DNA in 1953, using the X-ray crystallography work of Rosalind Franklin and Maurice Wilkins that indicated DNA had a helical structure (i.e., shaped like a corkscrew).[24][25] Their double-helix model had two strands of DNA with the nucleotides pointing inward, each matching a complementary nucleotide on the other strand to form what looks like rungs on a twisted ladder.[26] This structure showed that genetic information exists in the sequence of nucleotides on each strand of DNA. The structure also suggested a simple method for replication: if the strands are separated, new partner strands can be reconstructed for each based on the sequence of the old strand. This property is what gives DNA its semi-conservative nature where one strand of new DNA is from an original parent strand.[27]

Although the structure of DNA showed how inheritance works, it was still not known how DNA influences the behavior of cells. In the following years, scientists tried to understand how DNA controls the process of protein production.[28] It was discovered that the cell uses DNA as a template to create matching messenger RNA, molecules with nucleotides very similar to DNA. The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide sequences and amino acid sequences is known as the genetic code.[29]
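
To make the idea of the genetic code concrete, here is a minimal, self-contained Python sketch of codon-by-codon translation. The codon assignments shown are real, but the table is deliberately truncated (a full table has 64 entries), and the function name is invented for this illustration:

```python
# Partial codon table (illustrative subset of the 64-entry genetic code).
CODON_TABLE = {
    "AUG": "Met",  # start codon, also encodes methionine
    "UUU": "Phe", "UUC": "Phe",
    "GGC": "Gly", "GCU": "Ala",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA sequence three nucleotides (one codon) at a time,
    mapping each codon to its amino acid until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

The triplet structure is why a single inserted or deleted nucleotide can shift the reading frame of everything downstream.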

With the newfound molecular understanding of inheritance came an explosion of research.[30] A notable theory came from Tomoko Ohta in 1973, with her amendment to the neutral theory of molecular evolution through publication of the nearly neutral theory of molecular evolution. In this theory, Ohta stressed the importance of natural selection and the environment to the rate at which genetic evolution occurs.[31] One important development was chain-termination DNA sequencing in 1977 by Frederick Sanger. This technology allows scientists to read the nucleotide sequence of a DNA molecule.[32] In 1983, Kary Banks Mullis developed the polymerase chain reaction, providing a quick way to isolate and amplify a specific section of DNA from a mixture.[33] The efforts of the Human Genome Project, Department of Energy, NIH, and parallel private efforts by Celera Genomics led to the sequencing of the human genome in 2003.[34]

At its most fundamental level, inheritance in organisms occurs by passing discrete heritable units, called genes, from parents to progeny.[35] This property was first observed by Gregor Mendel, who studied the segregation of heritable traits in pea plants.[13][36] In his experiments studying the trait for flower color, Mendel observed that the flowers of each pea plant were either purple or white, but never an intermediate between the two colors. These different, discrete versions of the same gene are called alleles.

In the case of the pea, which is a diploid species, each individual plant has two copies of each gene, one copy inherited from each parent.[37] Many species, including humans, have this pattern of inheritance. Diploid organisms with two copies of the same allele of a given gene are called homozygous at that gene locus, while organisms with two different alleles of a given gene are called heterozygous.

The set of alleles for a given organism is called its genotype, while the observable traits of the organism are called its phenotype. When organisms are heterozygous at a gene, often one allele is called dominant as its qualities dominate the phenotype of the organism, while the other allele is called recessive as its qualities recede and are not observed. Some alleles do not have complete dominance and instead have incomplete dominance by expressing an intermediate phenotype, or codominance by expressing both alleles at once.[38]
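
A compact way to see complete dominance is as a rule mapping genotype to phenotype. The sketch below is a hypothetical illustration using the pea-flower colors from the text; the function and parameter names are invented for this example, and it deliberately ignores incomplete dominance and codominance:

```python
# Hypothetical illustration: phenotype under complete dominance.
# Convention: uppercase letters are dominant alleles, lowercase are recessive.
def phenotype(genotype: str, dominant: str = "purple", recessive: str = "white") -> str:
    # The recessive phenotype appears only when both alleles are recessive
    # (i.e., the genotype is entirely lowercase, e.g. "aa").
    return recessive if genotype == genotype.lower() else dominant

print(phenotype("AA"), phenotype("Aa"), phenotype("aa"))  # purple purple white
```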

When a pair of organisms reproduce sexually, their offspring randomly inherit one of the two alleles from each parent. These observations of discrete inheritance and the segregation of alleles are collectively known as Mendel's first law or the Law of Segregation.

Geneticists use diagrams and symbols to describe inheritance. A gene is represented by one or a few letters. Often a "+" symbol is used to mark the usual, non-mutant allele for a gene.[39]

In fertilization and breeding experiments (and especially when discussing Mendel's laws) the parents are referred to as the "P" generation and the offspring as the "F1" (first filial) generation. When the F1 offspring mate with each other, the offspring are called the "F2" (second filial) generation. One of the common diagrams used to predict the result of cross-breeding is the Punnett square.
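
As a rough illustration of what a Punnett square computes, the following Python sketch (a hypothetical helper, not from any standard genetics library) enumerates the offspring genotypes of a single-gene cross by pairing every allele of one parent with every allele of the other:

```python
from collections import Counter
from itertools import product

def punnett_square(parent1: str, parent2: str) -> Counter:
    """Cross two single-gene genotypes (e.g. 'Aa' x 'Aa'): each cell of the
    square pairs one allele from each parent."""
    offspring = Counter()
    for a1, a2 in product(parent1, parent2):
        # Sort the pair so 'aA' and 'Aa' count as the same genotype.
        offspring["".join(sorted(a1 + a2))] += 1
    return offspring

# An F1 x F1 monohybrid cross of two heterozygotes gives the classic
# 1:2:1 genotype ratio predicted by the Law of Segregation.
print(punnett_square("Aa", "Aa"))  # Counter({'Aa': 2, 'AA': 1, 'aa': 1})
```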

When studying human genetic diseases, geneticists often use pedigree charts to represent the inheritance of traits.[40] These charts map the inheritance of a trait in a family tree.

Organisms have thousands of genes, and in sexually reproducing organisms these genes generally assort independently of each other. This means that the inheritance of an allele for yellow or green pea color is unrelated to the inheritance of alleles for white or purple flowers. This phenomenon, known as "Mendel's second law" or the "Law of independent assortment", means that the alleles of different genes get shuffled between parents to form offspring with many different combinations. (Some genes do not assort independently, demonstrating genetic linkage, a topic discussed later in this article.)

Often different genes can interact in a way that influences the same trait. In the Blue-eyed Mary (Omphalodes verna), for example, there exists a gene with alleles that determine the color of flowers: blue or magenta. Another gene, however, controls whether the flowers have color at all or are white. When a plant has two copies of this white allele, its flowers are white, regardless of whether the first gene has blue or magenta alleles. This interaction between genes is called epistasis, with the second gene epistatic to the first.[41]

Many traits are not discrete features (e.g. purple or white flowers) but are instead continuous features (e.g. human height and skin color). These complex traits are products of many genes.[42] The influence of these genes is mediated, to varying degrees, by the environment an organism has experienced. The degree to which an organism's genes contribute to a complex trait is called heritability.[43] Measurement of the heritability of a trait is relative: in a more variable environment, the environment has a bigger influence on the total variation of the trait. For example, human height is a trait with complex causes. It has a heritability of 89% in the United States. In Nigeria, however, where people experience a more variable access to good nutrition and health care, height has a heritability of only 62%.[44]
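
The arithmetic behind heritability can be made concrete with toy numbers. In the sketch below, every value is invented; it computes broad-sense heritability as the share of total phenotypic variance that is attributable to genetic variance.

import statistics

# Invented genetic values and environmental deviations for five individuals (cm).
genetic = [170, 175, 180, 165, 172]
environment = [3.0, -3.0, 1.0, -1.0, 0.0]
phenotype = [g + e for g, e in zip(genetic, environment)]

# H^2 = Var(G) / Var(P); a more variable environment inflates Var(P)
# and lowers the ratio, as in the United States vs. Nigeria height example.
h2 = statistics.pvariance(genetic) / statistics.pvariance(phenotype)
print(f"broad-sense heritability: {h2:.2f}")  # ~0.86 for these toy numbers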

The molecular basis for genes is deoxyribonucleic acid (DNA). DNA is composed of a chain of nucleotides, of which there are four types: adenine (A), cytosine (C), guanine (G), and thymine (T). Genetic information exists in the sequence of these nucleotides, and genes exist as stretches of sequence along the DNA chain.[45] Viruses are the only exception to this rule: sometimes viruses use the very similar molecule RNA instead of DNA as their genetic material.[46] Viruses cannot reproduce without a host and are unaffected by many genetic processes, and so tend not to be considered living organisms.

DNA normally exists as a double-stranded molecule, coiled into the shape of a double helix. Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G. Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its partner strand. This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by splitting the strands and using each strand as a template for synthesis of a new partner strand.[47]
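
The base-pairing rule translates directly into code. Below is a minimal sketch, not from the source, that derives the partner strand of a short sequence:

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    # The two strands are antiparallel, so the partner strand reads in reverse.
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGCGT"))  # ACGCAT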

Genes are arranged linearly along long chains of DNA base-pair sequences. In bacteria, each cell usually contains a single circular genophore, while eukaryotic organisms (such as plants and animals) have their DNA arranged in multiple linear chromosomes. These DNA strands are often extremely long; the largest human chromosome, for example, is about 247 million base pairs in length.[48] The DNA of a chromosome is associated with structural proteins that organize, compact and control access to the DNA, forming a material called chromatin; in eukaryotes, chromatin is usually composed of nucleosomes, segments of DNA wound around cores of histone proteins.[49] The full set of hereditary material in an organism (usually the combined DNA sequences of all chromosomes) is called the genome.

While haploid organisms have only one copy of each chromosome, most animals and many plants are diploid, containing two of each chromosome and thus two copies of every gene.[37] The two alleles for a gene are located on identical loci of the two homologous chromosomes, each allele inherited from a different parent.

Many species have so-called sex chromosomes that determine the sex of each organism.[50] In humans and many other animals, the Y chromosome contains the gene that triggers the development of male characteristics. In evolution, this chromosome has lost most of its content and also most of its genes, while the X chromosome is similar to the other chromosomes and contains many genes. The X and Y chromosomes form a strongly heterogeneous pair.

When cells divide, their full genome is copied and each daughter cell inherits one copy. This process, called mitosis, is the simplest form of reproduction and is the basis for asexual reproduction. Asexual reproduction can also occur in multicellular organisms, producing offspring that inherit their genome from a single parent. Offspring that are genetically identical to their parents are called clones.

Eukaryotic organisms often use sexual reproduction to generate offspring that contain a mixture of genetic material inherited from two different parents. The process of sexual reproduction alternates between forms that contain single copies of the genome (haploid) and double copies (diploid).[37] Haploid cells fuse and combine genetic material to create a diploid cell with paired chromosomes. Diploid organisms form haploids by dividing, without replicating their DNA, to create daughter cells that randomly inherit one of each pair of chromosomes. Most animals and many plants are diploid for most of their lifespan, with the haploid form reduced to single cell gametes such as sperm or eggs.

Although they do not use the haploid/diploid method of sexual reproduction, bacteria have many methods of acquiring new genetic information. Some bacteria can undergo conjugation, transferring a small circular piece of DNA to another bacterium.[51] Bacteria can also take up raw DNA fragments found in the environment and integrate them into their genomes, a phenomenon known as transformation.[52] These processes result in horizontal gene transfer, transmitting fragments of genetic information between organisms that would be otherwise unrelated.

The diploid nature of chromosomes allows for genes on different chromosomes to assort independently or be separated from their homologous pair during sexual reproduction wherein haploid gametes are formed. In this way new combinations of genes can occur in the offspring of a mating pair. Genes on the same chromosome would theoretically never recombine. However, they do via the cellular process of chromosomal crossover. During crossover, chromosomes exchange stretches of DNA, effectively shuffling the gene alleles between the chromosomes.[53] This process of chromosomal crossover generally occurs during meiosis, a series of cell divisions that creates haploid cells.

The first cytological demonstration of crossing over was performed by Harriet Creighton and Barbara McClintock in 1931. Their research and experiments on corn provided cytological evidence for the genetic theory that linked genes on paired chromosomes do in fact exchange places from one homolog to the other.

The probability of chromosomal crossover occurring between two given points on the chromosome is related to the distance between the points. For an arbitrarily long distance, the probability of crossover is high enough that the inheritance of the genes is effectively uncorrelated.[54] For genes that are closer together, however, the lower probability of crossover means that the genes demonstrate genetic linkage; alleles for the two genes tend to be inherited together. The amounts of linkage between a series of genes can be combined to form a linear linkage map that roughly describes the arrangement of the genes along the chromosome.[55]
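
The logic behind a linkage map can be sketched numerically. In the hypothetical test cross below (all counts invented), the fraction of recombinant offspring approximates the map distance between two genes:

# Offspring counts from a hypothetical two-gene test cross.
parental = {"AB": 430, "ab": 420}
recombinant = {"Ab": 75, "aB": 75}

total = sum(parental.values()) + sum(recombinant.values())
rf = sum(recombinant.values()) / total
# rf well below 0.5 indicates linkage; 1% recombination is roughly 1 centimorgan.
print(f"recombination frequency = {rf:.1%}, roughly {rf * 100:.0f} cM apart")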

Genes generally express their functional effect through the production of proteins, which are complex molecules responsible for most functions in the cell. Proteins are made up of one or more polypeptide chains, each of which is composed of a sequence of amino acids, and the DNA sequence of a gene (through an RNA intermediate) is used to produce a specific amino acid sequence. This process begins with the production of an RNA molecule with a sequence matching the gene's DNA sequence, a process called transcription.

This messenger RNA molecule is then used to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds either to one of the twenty possible amino acids in a protein or an instruction to end the amino acid sequence; this correspondence is called the genetic code.[56] The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNAa phenomenon Francis Crick called the central dogma of molecular biology.[57]
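
The codon-to-amino-acid lookup can be demonstrated with a toy translator. This sketch is not from the source; it reads DNA coding-strand codons for simplicity (in the mRNA itself, U appears in place of T) and includes only a handful of the 64 real codons.

# Toy subset of the genetic code, keyed by DNA coding-strand codons.
CODON_TABLE = {
    "ATG": "Met", "GAG": "Glu", "GTG": "Val", "AAA": "Lys",
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def translate(coding_seq: str) -> list:
    amino_acids = []
    for i in range(0, len(coding_seq) - 2, 3):      # read in steps of three
        residue = CODON_TABLE.get(coding_seq[i:i + 3], "?")
        if residue == "Stop":                       # a stop codon ends the chain
            break
        amino_acids.append(residue)
    return amino_acids

print(translate("ATGGAGAAATAA"))  # ['Met', 'Glu', 'Lys']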

The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their functions.[58][59] Some are simple structural molecules, like the fibers formed by the protein collagen. Proteins can bind to other proteins and simple molecules, sometimes acting as enzymes by facilitating chemical reactions within the bound molecules (without changing the structure of the protein itself). Protein structure is dynamic; the protein hemoglobin bends into slightly different forms as it facilitates the capture, transport, and release of oxygen molecules within mammalian blood.

A single nucleotide difference within DNA can cause a change in the amino acid sequence of a protein. Because protein structures are the result of their amino acid sequences, some changes can dramatically change the properties of a protein by destabilizing the structure or changing the surface of the protein in a way that changes its interaction with other proteins and molecules. For example, sickle-cell anemia is a human genetic disease that results from a single base difference within the coding region for the β-globin section of hemoglobin, causing a single amino acid change that changes hemoglobin's physical properties.[60] Sickle-cell versions of hemoglobin stick to themselves, stacking to form fibers that distort the shape of red blood cells carrying the protein. These sickle-shaped cells no longer flow smoothly through blood vessels, having a tendency to clog or degrade, causing the medical problems associated with this disease.
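
The sickle-cell substitution itself fits the toy CODON_TABLE from the translation sketch above: the mutation changes the sixth codon of the β-globin gene from GAG to GTG, a single A-to-T change that swaps glutamate for valine.

# Single-base change in the sixth beta-globin codon (GAG -> GTG).
normal, mutant = "GAG", "GTG"
print(CODON_TABLE[normal], "->", CODON_TABLE[mutant])  # Glu -> Val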

Some DNA sequences are transcribed into RNA but are not translated into protein productssuch RNA molecules are called non-coding RNA. In some cases, these products fold into structures which are involved in critical cell functions (e.g. ribosomal RNA and transfer RNA). RNA can also have regulatory effects through hybridization interactions with other RNA molecules (e.g. microRNA).

Although genes contain all the information an organism uses to function, the environment plays an important role in determining the ultimate phenotypes an organism displays. This is the complementary relationship often referred to as "nature and nurture". The phenotype of an organism depends on the interaction of genes and the environment. An interesting example is the coat coloration of the Siamese cat. In this case, the body temperature of the cat plays the role of the environment. The cat's genes code for dark hair, and thus the hair-producing cells in the cat make cellular proteins resulting in dark hair. But these dark hair-producing proteins are sensitive to temperature (i.e. have a mutation causing temperature-sensitivity) and denature in higher-temperature environments, failing to produce dark-hair pigment in areas where the cat has a higher body temperature. In a low-temperature environment, however, the protein's structure is stable and produces dark-hair pigment normally. The protein remains functional in areas of skin that are colder, such as the legs, ears, tail, and face, so the cat has dark hair at its extremities.[61]

Environment plays a major role in effects of the human genetic disease phenylketonuria.[62] The mutation that causes phenylketonuria disrupts the ability of the body to break down the amino acid phenylalanine, causing a toxic build-up of an intermediate molecule that, in turn, causes severe symptoms of progressive mental retardation and seizures. However, if someone with the phenylketonuria mutation follows a strict diet that avoids this amino acid, they remain normal and healthy.

A popular method for determining how genes and environment ("nature and nurture") contribute to a phenotype involves studying identical and fraternal twins, or other siblings of multiple births.[63] Because identical siblings come from the same zygote, they are genetically the same. Fraternal twins are as genetically different from one another as normal siblings. By comparing how often a certain disorder occurs in a pair of identical twins to how often it occurs in a pair of fraternal twins, scientists can determine whether that disorder is caused by genetic or postnatal environmental factors, that is, whether it has "nature" or "nurture" causes. One famous example is the multiple birth study of the Genain quadruplets, who were identical quadruplets all diagnosed with schizophrenia.[64] However, such tests cannot separate genetic factors from environmental factors affecting fetal development.

The genome of a given organism contains thousands of genes, but not all these genes need to be active at any given moment. A gene is expressed when it is being transcribed into mRNA and there exist many cellular methods of controlling the expression of genes such that proteins are produced only when needed by the cell. Transcription factors are regulatory proteins that bind to DNA, either promoting or inhibiting the transcription of a gene.[65] Within the genome of Escherichia coli bacteria, for example, there exists a series of genes necessary for the synthesis of the amino acid tryptophan. However, when tryptophan is already available to the cell, these genes for tryptophan synthesis are no longer needed. The presence of tryptophan directly affects the activity of the genes: tryptophan molecules bind to the tryptophan repressor (a transcription factor), changing the repressor's structure such that the repressor binds to the genes. The tryptophan repressor blocks the transcription and expression of the genes, thereby creating negative feedback regulation of the tryptophan synthesis process.[66]
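
The feedback loop can be caricatured with a few lines of arithmetic. In the sketch below every rate constant is invented; the point is only that synthesis falls as the product accumulates, so the level settles at a steady state instead of growing without bound.

# Toy negative-feedback model of tryptophan synthesis (invented constants).
trp = 0.0
for step in range(6):
    bound_repressor = trp / (trp + 10.0)   # more trp, more active repressor
    synthesis = 5.0 * (1.0 - bound_repressor)
    trp += synthesis - 0.5 * trp           # production minus consumption/decay
    print(f"step {step}: trp level = {trp:.2f}")   # rises, then levels off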

Differences in gene expression are especially clear within multicellular organisms, where cells all contain the same genome but have very different structures and behaviors due to the expression of different sets of genes. All the cells in a multicellular organism derive from a single cell, differentiating into variant cell types in response to external and intercellular signals and gradually establishing different patterns of gene expression to create different behaviors. As no single gene is responsible for the development of structures within multicellular organisms, these patterns arise from the complex interactions between many cells.

Within eukaryotes, there exist structural features of chromatin that influence the transcription of genes, often in the form of modifications to DNA and chromatin that are stably inherited by daughter cells.[67] These features are called "epigenetic" because they exist "on top" of the DNA sequence and are inherited from one cell generation to the next. Because of epigenetic features, different cell types grown within the same medium can retain very different properties. Although epigenetic features are generally dynamic over the course of development, some, like the phenomenon of paramutation, have multigenerational inheritance and exist as rare exceptions to the general rule of DNA as the basis for inheritance.[68]

During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low (1 error in every 10–100 million bases) due to the "proofreading" ability of DNA polymerases.[69][70] Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure.[71] Chemical damage to DNA occurs naturally as well and cells use DNA repair mechanisms to repair mismatches and breaks. The repair does not, however, always restore the original sequence.
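
To put the cited error rate in perspective, a quick calculation (assuming a human genome of roughly 3.2 billion base pairs, a standard approximation) gives the expected number of uncorrected errors per genome replication:

genome_size = 3.2e9                 # approximate human genome, in base pairs
for rate in (1 / 10e6, 1 / 100e6):  # the cited range: 1 per 10-100 million bases
    print(f"{rate:.0e} errors/base -> ~{genome_size * rate:.0f} new errors per replication")
# Prints ~320 and ~32, i.e. a few dozen to a few hundred mutations per copy.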

In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations.[72] Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment; this makes some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence: duplications, inversions, deletions of entire regions, or the accidental exchange of whole parts of sequences between different chromosomes (chromosomal translocation).

Mutations alter an organism's genotype and occasionally this causes different phenotypes to appear. Most mutations have little effect on an organism's phenotype, health, or reproductive fitness.[73] Mutations that do have an effect are usually deleterious, but occasionally some can be beneficial.[74] Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, about 70 percent of these mutations will be harmful with the remainder being either neutral or weakly beneficial.[75]

Population genetics studies the distribution of genetic differences within populations and how these distributions change over time.[76] Changes in the frequency of an allele in a population are mainly influenced by natural selection, where a given allele provides a selective or reproductive advantage to the organism,[77] as well as other factors such as mutation, genetic drift, genetic draft,[78] artificial selection, and migration.[79]
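
Genetic drift, one of the non-selective forces listed above, is easy to simulate. The following Wright-Fisher-style sketch (population size, starting frequency, and seed are arbitrary choices) resamples the 2N allele copies of each generation from the previous one:

import random

def drift(freq: float, pop_size: int, generations: int) -> float:
    # Each of the 2N copies in the next generation is drawn independently
    # from the current allele frequency (Wright-Fisher resampling).
    for _ in range(generations):
        copies = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
    return freq

random.seed(1)
print(drift(0.5, pop_size=50, generations=100))  # often far from 0.5, or fixed at 0 or 1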

Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment.[80] New species are formed through the process of speciation, often caused by geographical separations that prevent populations from exchanging genes with each other.[81] The application of genetic principles to the study of population biology and evolution is known as the "modern synthesis".

By comparing the homology between different species' genomes, it is possible to calculate the evolutionary distance between them and when they may have diverged. Genetic comparisons are generally considered a more accurate method of characterizing the relatedness between species than the comparison of phenotypic characteristics. The evolutionary distances between species can be used to form evolutionary trees; these trees represent the common descent and divergence of species over time, although they do not show the transfer of genetic material between unrelated species (known as horizontal gene transfer and most common in bacteria).[82]
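
A crude version of such a comparison can be computed from two aligned sequences. The sketch below (sequences invented) counts differing sites and applies the Jukes-Cantor correction, which accounts for multiple substitutions hitting the same site:

import math

def jc_distance(seq1: str, seq2: str) -> float:
    # p = observed proportion of differing sites (formula valid for p < 0.75).
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1 - 4 * p / 3)

print(f"{jc_distance('ATGCCGTA', 'ATGACGTT'):.3f}")  # ~0.304 substitutions per site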

Although geneticists originally studied inheritance in a wide range of organisms, researchers began to specialize in studying the genetics of a particular subset of organisms. The fact that significant research already existed for a given organism would encourage new researchers to choose it for further study, and so eventually a few model organisms became the basis for most genetics research.[83] Common research topics in model organism genetics include the study of gene regulation and the involvement of genes in development and cancer.

Organisms were chosen, in part, for convenience: short generation times and easy genetic manipulation made some organisms popular genetics research tools. Widely used model organisms include the gut bacterium Escherichia coli, the plant Arabidopsis thaliana, baker's yeast (Saccharomyces cerevisiae), the nematode Caenorhabditis elegans, the common fruit fly (Drosophila melanogaster), and the common house mouse (Mus musculus).

Medical genetics seeks to understand how genetic variation relates to human health and disease.[84] When searching for an unknown gene that may be involved in a disease, researchers commonly use genetic linkage and genetic pedigree charts to find the location on the genome associated with the disease. At the population level, researchers take advantage of Mendelian randomization to look for locations in the genome that are associated with diseases, a method especially useful for multigenic traits not clearly defined by a single gene.[85] Once a candidate gene is found, further research is often done on the corresponding (orthologous) gene in model organisms. In addition to studying genetic diseases, the increased availability of genotyping methods has led to the field of pharmacogenetics: the study of how genotype can affect drug responses.[86]

Individuals differ in their inherited tendency to develop cancer,[87] and cancer is a genetic disease.[88] The process of cancer development in the body is a combination of events. Mutations occasionally occur within cells in the body as they divide. Although these mutations will not be inherited by any offspring, they can affect the behavior of cells, sometimes causing them to grow and divide more frequently. There are biological mechanisms that attempt to stop this process; signals are given to inappropriately dividing cells that should trigger cell death, but sometimes additional mutations occur that cause cells to ignore these messages. An internal process of natural selection occurs within the body and eventually mutations accumulate within cells to promote their own growth, creating a cancerous tumor that grows and invades various tissues of the body.

Normally, a cell divides only in response to signals called growth factors and stops growing once in contact with surrounding cells and in response to growth-inhibitory signals. It usually then divides a limited number of times and dies, staying within the epithelium where it is unable to migrate to other organs. To become a cancer cell, a cell has to accumulate mutations in a number of genes (3–7) that allow it to bypass this regulation: it no longer needs growth factors to divide; it continues growing when in contact with neighboring cells and ignores inhibitory signals; it keeps growing indefinitely and is immortal; and it escapes from the epithelium, from where it may ultimately leave the primary tumor, cross the endothelium of a blood vessel, be transported by the bloodstream, and colonize a new organ, forming a deadly metastasis. Although there are some genetic predispositions in a small fraction of cancers, the major fraction is due to a set of new genetic mutations that originally appear and accumulate in one or a small number of cells that will divide to form the tumor and are not transmitted to the progeny (somatic mutations). The most frequent mutations are loss of function of the p53 protein, a tumor suppressor, or of the p53 pathway, and gain-of-function mutations in the ras proteins, or in other oncogenes.

DNA can be manipulated in the laboratory. Restriction enzymes are commonly used enzymes that cut DNA at specific sequences, producing predictable fragments of DNA.[89] DNA fragments can be visualized through use of gel electrophoresis, which separates fragments according to their length.
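
A restriction digest is straightforward to mimic in code. The sketch below is illustrative (the input sequence is invented); it uses the real EcoRI recognition site GAATTC, which is cut after the first base, and reports the fragment lengths that a gel would separate:

def digest(dna: str, site: str = "GAATTC", cut_offset: int = 1) -> list:
    # Record a cut position at every occurrence of the recognition site.
    cuts, start = [], 0
    while (pos := dna.find(site, start)) != -1:
        cuts.append(pos + cut_offset)
        start = pos + 1
    # Split the sequence at the recorded positions.
    fragments, prev = [], 0
    for cut in cuts:
        fragments.append(dna[prev:cut])
        prev = cut
    fragments.append(dna[prev:])
    return fragments

pieces = digest("AAGAATTCGGGGAATTCTT")
print([len(p) for p in pieces])  # [3, 9, 7]: the band lengths on a gel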

The use of ligation enzymes allows DNA fragments to be connected. By binding ("ligating") fragments of DNA together from different sources, researchers can create recombinant DNA, the DNA often associated with genetically modified organisms. Recombinant DNA is commonly used in the context of plasmids: short circular DNA molecules with a few genes on them. In the process known as molecular cloning, researchers can amplify the DNA fragments by inserting plasmids into bacteria and then culturing them on plates of agar (to isolate clones of bacteria cells). ("Cloning" can also refer to the various means of creating cloned ("clonal") organisms.)

DNA can also be amplified using a procedure called the polymerase chain reaction (PCR).[90] By using specific short sequences of DNA, PCR can isolate and exponentially amplify a targeted region of DNA. Because it can amplify from extremely small amounts of DNA, PCR is also often used to detect the presence of specific DNA sequences.
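
The exponential amplification is simple doubling arithmetic: assuming ideal efficiency (which real reactions only approach), the copy number after n cycles is the initial count times 2 to the power n.

initial_copies = 10
for cycles in (10, 20, 30):
    print(f"{cycles} cycles: ~{initial_copies * 2 ** cycles:,} copies")
# 30 cycles turn 10 template molecules into roughly 10 billion copies.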

DNA sequencing, one of the most fundamental technologies developed to study genetics, allows researchers to determine the sequence of nucleotides in DNA fragments. The technique of chain-termination sequencing, developed in 1977 by a team led by Frederick Sanger, is still routinely used to sequence DNA fragments.[91] Using this technology, researchers have been able to study the molecular sequences associated with many human diseases.

As sequencing has become less expensive, researchers have sequenced the genomes of many organisms, using a process called genome assembly, which utilizes computational tools to stitch together sequences from many different fragments.[92] These technologies were used to sequence the human genome in the Human Genome Project completed in 2003.[34] New high-throughput sequencing technologies are dramatically lowering the cost of DNA sequencing, with many researchers hoping to bring the cost of resequencing a human genome down to a thousand dollars.[93]
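
Genome assembly can be caricatured with a greedy overlap merge. The toy sketch below (reads invented, and vastly simpler than a real assembler) repeatedly joins the pair of reads sharing the longest suffix-prefix overlap:

def overlap(a: str, b: str) -> int:
    # Length of the longest suffix of a that equals a prefix of b.
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(reads: list) -> str:
    reads = list(reads)
    while len(reads) > 1:
        # Pick the best-overlapping ordered pair and merge it.
        k, a, b = max((overlap(x, y), x, y)
                      for x in reads for y in reads if x != y)
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[k:])   # keep the shared overlap only once
    return reads[0]

print(assemble(["GGCTA", "CTAAG", "AAGTT"]))  # GGCTAAGTT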

Next generation sequencing (or high-throughput sequencing) came about due to the ever-increasing demand for low-cost sequencing. These sequencing technologies allow the production of potentially millions of sequences concurrently.[94][95] The large amount of sequence data available has created the field of genomics, research that uses computational tools to search for and analyze patterns in the full genomes of organisms. Genomics can also be considered a subfield of bioinformatics, which uses computational approaches to analyze large sets of biological data. A problem common to these fields of research is how to manage and share data concerning human subjects and personally identifiable information. See also genomics data sharing.

On 19 March 2015, a leading group of biologists urged a worldwide ban on the clinical use of methods, particularly CRISPR and zinc-finger technologies, to edit the human genome in a way that can be inherited.[96][97][98][99] In April 2015, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.[100][101]

See the original post here:
Genetics - Wikipedia

Read More...

Biotechnology – Wikipedia

October 20th, 2016 7:41 pm

"Bioscience" redirects here. For the scientific journal, see BioScience. For life sciences generally, see life science.

Biotechnology is the use of living systems and organisms to develop or make products, or "any technological application that uses biological systems, living organisms or derivatives thereof, to make or modify products or processes for specific use" (UN Convention on Biological Diversity, Art. 2).[1] Depending on the tools and applications, it often overlaps with the (related) fields of bioengineering, biomedical engineering, biomanufacturing, molecular engineering, etc.

For thousands of years, humankind has used biotechnology in agriculture, food production, and medicine.[2] The term is largely believed to have been coined in 1919 by Hungarian engineer Károly Ereky. In the late 20th and early 21st century, biotechnology has expanded to include new and diverse sciences such as genomics, recombinant gene techniques, applied immunology, and development of pharmaceutical therapies and diagnostic tests.[2]

The wide concept of "biotech" or "biotechnology" encompasses a wide range of procedures for modifying living organisms according to human purposes, going back to the domestication of animals, the cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. Modern usage also includes genetic engineering as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learning about the science of life and the improvement of the value of materials and organisms such as pharmaceuticals, crops, and livestock.[3] According to the European Federation of Biotechnology, biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services.[4] Biotechnology also draws on the pure biological sciences (animal cell culture, biochemistry, cell biology, embryology, genetics, microbiology, and molecular biology). In many instances, it is also dependent on knowledge and methods from outside the sphere of biology.

Conversely, modern biological sciences (including even concepts such as molecular ecology) are intimately entwined with and heavily dependent on the methods developed through biotechnology and what is commonly thought of as the life sciences industry. In the laboratory, biotechnology research and development uses bioinformatics for exploration, extraction, exploitation, and production from living organisms and any source of biomass by means of biochemical engineering. High value-added products can be planned (reproduced by biosynthesis, for example), forecasted, formulated, developed, manufactured, and marketed with the aim of sustainable operations (recouping the substantial initial investment in R&D) and of gaining durable patent rights (exclusive rights for sales, contingent on prior national and international approval based on results from animal and human experiments, especially in the pharmaceutical branch of biotechnology, to prevent undetected side effects or safety concerns).[5][6][7]

By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells, and molecules. It can be considered the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals.[8] Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical and/or chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic engineering.

Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise.

Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the best-suited crops, those with the highest yields, to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants, one of the first forms of biotechnology.

These processes were also used in the early fermentation of beer.[9] They were introduced in early Mesopotamia, Egypt, China, and India, and still use the same basic biological methods. In brewing, enzymes in malted grain convert starch into sugar, and specific yeasts are then added to produce beer; in this process, carbohydrates in the grain are broken down into alcohols such as ethanol. Later, other cultures developed the process of lactic acid fermentation, which allowed the fermentation and preservation of other forms of food, such as soy sauce. Fermentation was also used in this time period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still the first known use of biotechnology to convert a food source into another form.

Before the time of Charles Darwin's work and life, animal and plant scientists had already used selective breeding. Darwin added to that body of work with his scientific observations about the ability of selective breeding to change species. These accounts contributed to Darwin's theory of natural selection.[10]

For thousands of years, humans have used selective breeding to improve production of crops and livestock to use them for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops.[11]

In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process: the fermentation of corn starch by Clostridium acetobutylicum to produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.[12]

Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the antibacterial effects of the mold Penicillium. His work led to the purification of the antibiotic compound formed by the mold by Howard Florey, Ernst Boris Chain, and Norman Heatley to form what we today know as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in humans.[11]

The field of modern biotechnology is generally thought of as having been born in 1971 when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. The commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when the United States Supreme Court ruled that a genetically modified microorganism could be patented in the case of Diamond v. Chakrabarty.[13] Indian-born Ananda Chakrabarty, working for General Electric, had modified a bacterium (of the Pseudomonas genus) capable of breaking down crude oil, which he proposed to use in treating oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire organelles between strains of the Pseudomonas bacterium.)

Revenue in the industry is expected to grow by 12.9% in 2008. Another factor influencing the biotechnology sector's success is improved intellectual property rights legislation (and enforcement) worldwide, as well as strengthened demand for medical and pharmaceutical products to cope with an ageing, and ailing, U.S. population.[14]

Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeansthe main inputs into biofuelsby developing genetically modified seeds which are resistant to pests and drought. By boosting farm productivity, biotechnology plays a crucial role in ensuring that biofuel production targets are met.[15]

Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non-food (industrial) uses of crops and other products (e.g. biodegradable plastics, vegetable oil, biofuels), and environmental uses.

For example, one application of biotechnology is the directed use of organisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and also to produce biological weapons.

A series of derived terms have been coined to identify several branches of biotechnology; "white biotechnology," the industrial branch discussed below, is one example.

The investment and economic output of all of these types of applied biotechnologies is termed the "bioeconomy".

In medicine, modern biotechnology finds applications in areas such as pharmaceutical drug discovery and production, pharmacogenomics, and genetic testing (or genetic screening).

Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses how genetic makeup affects an individual's response to drugs.[17] It deals with the influence of genetic variation on drug response in patients by correlating gene expression or single-nucleotide polymorphisms with a drug's efficacy or toxicity.[18] By doing so, pharmacogenomics aims to develop rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse effects.[19] Such approaches promise the advent of "personalized medicine"; in which drugs and drug combinations are optimized for each individual's unique genetic makeup.[20][21]

Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology: biopharmaceutics. Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic human insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreases of abattoir animals (cattle and/or pigs). The resulting genetically engineered bacterium enabled the production of vast quantities of synthetic human insulin at relatively low cost.[22][23] Biotechnology has also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic science (for example, through the Human Genome Project) has also dramatically improved our understanding of biology and, as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well.[23]

Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child's parentage (genetic mother and father) or in general a person's ancestry. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins.[24] Most of the time, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. As of 2011 several hundred genetic tests were in use.[25][26] Since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling.

Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture, the DNA of which has been modified with genetic engineering techniques. In most cases the aim is to introduce a new trait to the plant which does not occur naturally in the species.

Examples in food crops include resistance to certain pests,[27] diseases,[28] stressful environmental conditions,[29] resistance to chemical treatments (e.g. resistance to a herbicide[30]), reduction of spoilage,[31] or improving the nutrient profile of the crop.[32] Examples in non-food crops include production of pharmaceutical agents,[33] biofuels,[34] and other industrially useful goods,[35] as well as bioremediation.[36][37]

Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops had increased by a factor of 94, from 17,000 square kilometers (4,200,000 acres) to 1,600,000 km2 (395 million acres).[38] 10% of the world's crop lands were planted with GM crops in 2010.[38] As of 2011, 11 different transgenic crops were grown commercially on 395 million acres (160 million hectares) in 29 countries such as the USA, Brazil, Argentina, India, Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines, Myanmar, Burkina Faso, Mexico and Spain.[38]

Genetically modified foods are foods produced from organisms that have had specific changes introduced into their DNA with the methods of genetic engineering. These techniques have allowed for the introduction of new crop traits as well as a far greater control over a food's genetic structure than previously afforded by methods such as selective breeding and mutation breeding.[39] Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed ripening tomato.[40] To date, most genetic modification of foods has primarily focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cotton seed oil. These have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. GM livestock have also been developed experimentally, although as of November 2013 none were on the market.[41]

There is a scientific consensus[42][43][44][45] that currently available food derived from GM crops poses no greater risk to human health than conventional food,[46][47][48][49][50] but that each GM food needs to be tested on a case-by-case basis before introduction.[51][52][53] Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe.[54][55][56][57] The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.[58][59][60][61]

GM crops also provide a number of ecological benefits, if not used in excess.[62] However, opponents have objected to GM crops per se on several grounds, including environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to address the world's food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law.

Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of biotechnology for industrial purposes, including industrial fermentation. It includes the practice of using cells such as micro-organisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels.[63] In doing so, biotechnology uses renewable raw materials and may contribute to lowering greenhouse gas emissions and moving away from a petrochemical-based economy.[64]

The environment can be affected by biotechnologies, both positively and adversely. Vallero and others have argued that the difference between beneficial biotechnology (e.g. bioremediation to clean up an oil spill or hazardous chemical leak) versus the adverse effects stemming from biotechnological enterprises (e.g. flow of genetic material from transgenic organisms into wild strains) can be seen as applications and implications, respectively.[65] Cleaning up environmental wastes is an example of an application of environmental biotechnology; whereas loss of biodiversity or loss of containment of a harmful microbe are examples of environmental implications of biotechnology.

The regulation of genetic engineering concerns approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology, and the development and release of genetically modified organisms (GMO), including genetically modified crops and genetically modified fish. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the USA and Europe.[66] Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety.[67] The European Union differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing.[68] The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops. Depending on the coexistence regulations, incentives for the cultivation of GM crops differ.[69]

In 1988, after prompting from the United States Congress, the National Institute of General Medical Sciences (National Institutes of Health) (NIGMS) instituted a funding mechanism for biotechnology training. Universities nationwide compete for these funds to establish Biotechnology Training Programs (BTPs). Each successful application is generally funded for five years then must be competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, then stipend, tuition and health insurance support is provided for two or three years during the course of their Ph.D. thesis work. Nineteen institutions offer NIGMS supported BTPs.[70] Biotechnology training is also offered at the undergraduate level and in community colleges.

The literature on biodiversity and GE food/feed consumption has sometimes resulted in animated debate regarding the suitability of the experimental designs, the choice of the statistical methods or the public accessibility of data. Such debate, even if positive and part of the natural process of review by the scientific community, has frequently been distorted by the media and often used politically and inappropriately in anti-GE crops campaigns.

Domingo, José L.; Bordonaba, Jordi Giné (2011). "A literature review on the safety assessment of genetically modified plants" (PDF). Environment International. 37: 734–742. doi:10.1016/j.envint.2011.01.003. In spite of this, the number of studies specifically focused on safety assessment of GM plants is still limited. However, it is important to remark that for the first time, a certain equilibrium in the number of research groups suggesting, on the basis of their studies, that a number of varieties of GM products (mainly maize and soybeans) are as safe and nutritious as the respective conventional non-GM plant, and those raising still serious concerns, was observed. Moreover, it is worth mentioning that most of the studies demonstrating that GM foods are as nutritional and safe as those obtained by conventional breeding, have been performed by biotechnology companies or associates, which are also responsible of commercializing these GM plants. Anyhow, this represents a notable advance in comparison with the lack of studies published in recent years in scientific journals by those companies.

Krimsky, Sheldon (2015). "An Illusory Consensus behind GMO Health Assessment" (PDF). Science, Technology, & Human Values: 1–32. doi:10.1177/0162243915598381. I began this article with the testimonials from respected scientists that there is literally no scientific controversy over the health effects of GMOs. My investigation into the scientific literature tells another story.

And contrast:

Panchin, Alexander Y.; Tuzhikov, Alexander I. (January 14, 2016). "Published GMO studies find no evidence of harm when corrected for multiple comparisons". Critical Reviews in Biotechnology. doi:10.3109/07388551.2015.1130684. ISSN 0738-8551. Here, we show that a number of articles, some of which have strongly and negatively influenced the public opinion on GM crops and even provoked political actions, such as GMO embargo, share common flaws in the statistical evaluation of the data. Having accounted for these flaws, we conclude that the data presented in these articles does not provide any substantial evidence of GMO harm.

The presented articles suggesting possible harm of GMOs received high public attention. However, despite their claims, they actually weaken the evidence for the harm and lack of substantial equivalency of studied GMOs. We emphasize that with over 1783 published articles on GMOs over the last 10 years it is expected that some of them should have reported undesired differences between GMOs and conventional crops even if no such differences exist in reality.

and

Yang, Y.T.; Chen, B. (2016). "Governing GMOs in the USA: science, law and public health". Journal of the Science of Food and Agriculture. 96: 1851–1855. doi:10.1002/jsfa.7523. It is therefore not surprising that efforts to require labeling and to ban GMOs have been a growing political issue in the USA (citing Domingo and Bordonaba, 2011).

Overall, a broad scientific consensus holds that currently marketed GM food poses no greater risk than conventional food... Major national and international science and medical associations have stated that no adverse human health effects related to GMO food have been reported or substantiated in peer-reviewed literature to date.

Despite various concerns, today, the American Association for the Advancement of Science, the World Health Organization, and many independent international science organizations agree that GMOs are just as safe as other foods. Compared with conventional breeding techniques, genetic engineering is far more precise and, in most cases, less likely to create an unexpected outcome.

Pinholster, Ginger (October 25, 2012). "AAAS Board of Directors: Legally Mandating GM Food Labels Could "Mislead and Falsely Alarm Consumers"". American Association for the Advancement of Science. Retrieved February 8, 2016.

"REPORT 2 OF THE COUNCIL ON SCIENCE AND PUBLIC HEALTH (A-12): Labeling of Bioengineered Foods" (PDF). American Medical Association. 2012. Retrieved March 19, 2016. Bioengineered foods have been consumed for close to 20 years, and during that time, no overt consequences on human health have been reported and/or substantiated in the peer-reviewed literature.

GM foods currently available on the international market have passed safety assessments and are not likely to present risks for human health. In addition, no effects on human health have been shown as a result of the consumption of such foods by the general population in the countries where they have been approved. Continuous application of safety assessments based on the Codex Alimentarius principles and, where appropriate, adequate post market monitoring, should form the basis for ensuring the safety of GM foods.

"Genetically modified foods and health: a second interim statement" (PDF). British Medical Association. March 2004. Retrieved March 21, 2016. In our view, the potential for GM foods to cause harmful health effects is very small and many of the concerns expressed apply with equal vigour to conventionally derived foods. However, safety concerns cannot, as yet, be dismissed completely on the basis of information currently available.

When seeking to optimise the balance between benefits and risks, it is prudent to err on the side of caution and, above all, learn from accumulating knowledge and experience. Any new technology such as genetic modification must be examined for possible benefits and risks to human health and the environment. As with all novel foods, safety assessments in relation to GM foods must be made on a case-by-case basis.

Members of the GM jury project were briefed on various aspects of genetic modification by a diverse group of acknowledged experts in the relevant subjects. The GM jury reached the conclusion that the sale of GM foods currently available should be halted and the moratorium on commercial growth of GM crops should be continued. These conclusions were based on the precautionary principle and lack of evidence of any benefit. The Jury expressed concern over the impact of GM crops on farming, the environment, food safety and other potential health effects.

The Royal Society review (2002) concluded that the risks to human health associated with the use of specific viral DNA sequences in GM plants are negligible, and while calling for caution in the introduction of potential allergens into food crops, stressed the absence of evidence that commercially available GM foods cause clinical allergic manifestations. The BMA shares the view that there is no robust evidence to prove that GM foods are unsafe but we endorse the call for further research and surveillance to provide convincing evidence of safety and benefit.

See more here:
Biotechnology - Wikipedia

Read More...

Arthritis – Wikipedia

October 19th, 2016 3:40 pm

Arthritis is a term often used to mean any disorder that affects joints.[1] Symptoms generally include joint pain and stiffness.[1] Other symptoms may include redness, warmth, swelling, and decreased range of motion of the affected joints.[1][2] In some types other organs are also affected.[3] Onset can be gradual or sudden.[4]

There are over 100 types of arthritis.[5][4] The most common forms are osteoarthritis (degenerative joint disease) and rheumatoid arthritis. Osteoarthritis usually occurs with age and affects the fingers, knees, and hips. Rheumatoid arthritis is an autoimmune disorder that often affects the hands and feet.[3] Other types include gout, lupus, fibromyalgia, and septic arthritis.[3][6] They are all types of rheumatic disease.[1]

Treatment may include resting the joint and alternating between applying ice and heat. Weight loss and exercise may also be useful.[3] Pain medications such as ibuprofen and acetaminophen (paracetamol) may be used.[7] In some cases, a joint replacement may be useful.[3]

Osteoarthritis affects more than 3.8% of people while rheumatoid arthritis affects about 0.24% of people.[8] Gout affects about 1 to 2% of the Western population at some point in their lives.[9] In Australia and the United States more than 20% of people have a type of arthritis.[6][10] Overall the disease becomes more common with age.[6] Arthritis is a common reason that people miss work and can result in a decreased quality of life.[7] The term is from Greek arthro- meaning joint and -itis meaning inflammation.[11]

There are several diseases where joint pain is primary, and is considered the main feature. Generally, when a person has "arthritis" it means that they have one of these diseases.

Joint pain can also be a symptom of other diseases; in this case, the arthritis is considered to be secondary to the main disease.

An undifferentiated arthritis is an arthritis that does not fit into well-known clinical disease categories, possibly being an early stage of a definite rheumatic disease.[16]

Disability due to musculoskeletal disorders increased by 45% from 1990 to 2010. Of these, osteoarthritis is the fastest increasing major health condition.[17] Among the many reports on the increased prevalence of musculoskeletal conditions, data from Africa are lacking and underestimated. A systematic review assessed the prevalence of arthritis in Africa and included twenty population-based and seven hospital-based studies.[18] The majority of studies, twelve, were from South Africa. Nine studies were well-conducted, eleven studies were of moderate quality, and seven studies were conducted poorly.

Pain, which can vary in severity, is a common symptom in virtually all types of arthritis. Other symptoms include swelling, joint stiffness and aching around the joint(s). Arthritic disorders such as lupus and rheumatoid arthritis can also affect other organs in the body, leading to a variety of symptoms.[20]

It is common in advanced arthritis for significant secondary changes to occur. For example, arthritic symptoms might make it difficult for a person to move around and/or exercise, which can lead to secondary effects such as muscle weakness, loss of flexibility and decreased aerobic fitness.

These changes, in addition to the primary symptoms, can have a huge impact on quality of life.

Arthritis is the most common cause of disability in the United States. More than 20 million individuals with arthritis have severe limitations in function on a daily basis.[21] Absenteeism and frequent visits to the physician are common in individuals who have arthritis. Arthritis can make it very difficult for individuals to be physically active, and some become homebound.

It is estimated that the total cost of arthritis is close to $100 billion, of which almost 50% is from lost earnings. Each year, arthritis results in nearly 1 million hospitalizations and close to 45 million outpatient visits to health care centers.[22]

Decreased mobility, in combination with the above symptoms, can make it difficult for an individual to remain physically active, contributing to an increased risk of obesity, high cholesterol or vulnerability to heart disease.[23] People with arthritis are also at increased risk of depression, which may be a response to numerous factors, including fear of worsening symptoms.[24]

Diagnosis is made by clinical examination by an appropriate health professional and may be supported by other tests, such as radiology and blood tests, depending on the type of suspected arthritis.[25] All arthritides potentially feature pain, and pain patterns may differ depending on the type of arthritis and its location. Rheumatoid arthritis is generally worse in the morning and associated with stiffness; in the early stages, patients often have no symptoms after a morning shower. Osteoarthritis, on the other hand, tends to be worse after exercise. In the elderly and in children, pain might not be the main presenting feature: the elderly patient simply moves less, while the infantile patient refuses to use the affected limb.

Elements of the history of the disorder guide diagnosis. Important features are speed and time of onset, pattern of joint involvement, symmetry of symptoms, early morning stiffness, tenderness, gelling or locking with inactivity, aggravating and relieving factors, and other systemic symptoms. Physical examination may confirm the diagnosis, or may indicate systemic disease. Radiographs are often used to follow progression or help assess severity.

Blood tests and X-rays of the affected joints often are performed to make the diagnosis. Screening blood tests are indicated if certain arthritides are suspected. These might include: rheumatoid factor, antinuclear factor (ANF), extractable nuclear antigen, and specific antibodies.

Osteoarthritis is the most common form of arthritis.[26] It can affect both the larger and the smaller joints of the body, including the hands, wrists, feet, back, hip, and knee. The disease is essentially one acquired from daily wear and tear of the joint; however, osteoarthritis can also occur as a result of injury. In recent years, some joint or limb deformities, such as knock-knee or acetabular overcoverage or dysplasia, have also been considered as a predisposing factor for knee or hip osteoarthritis. Osteoarthritis begins in the cartilage and eventually causes the two opposing bones to erode into each other. The condition starts with minor pain during physical activity, but soon the pain can be continuous and even occur while in a state of rest. The pain can be debilitating and prevent one from doing some activities. Osteoarthritis typically affects the weight-bearing joints, such as the back, knee and hip. Unlike rheumatoid arthritis, osteoarthritis is most commonly a disease of the elderly. More than 30 percent of women have some degree of osteoarthritis by age 65. Risk factors for osteoarthritis include prior joint trauma, obesity, and a sedentary lifestyle.

Rheumatoid arthritis (RA) is a disorder in which the body's own immune system starts to attack body tissues. The attack is directed not only at the joint but at many other parts of the body. In rheumatoid arthritis, most damage occurs to the joint lining and cartilage, which eventually results in erosion of the two opposing bones. RA often affects joints in the fingers, wrists, knees and elbows, is symmetrical (appears on both sides of the body) and, if untreated, can lead to severe deformity within a few years. RA occurs mostly in people aged 20 and above. In children, the disorder can present with a skin rash, fever, pain, disability and limitations in daily activities. With earlier diagnosis and aggressive treatment, many individuals can lead a better quality of life than if the disease had gone undiagnosed long after onset. The drugs used to treat RA range from corticosteroids to monoclonal antibodies given intravenously. Treatments also include analgesics such as NSAIDs and disease-modifying antirheumatic drugs (DMARDs); in rare cases, surgery may be required to replace joints, but there is no cure for the disease.[27]

Rheumatoid arthritis is driven in part by an adaptive immune response involving CD4+ T helper (Th) cells, specifically Th17 cells; treatment with DMARDs is designed to suppress this response.[28] Th17 cells are present in higher quantities at the site of bone destruction in joints and produce inflammatory cytokines such as interleukin-17 (IL-17).[29]

Bone erosion is a central feature of rheumatoid arthritis. Bone continuously undergoes remodeling through the actions of bone-resorbing osteoclasts and bone-forming osteoblasts. One of the main triggers of bone erosion in the joints in rheumatoid arthritis is inflammation of the synovium, caused in part by the production of pro-inflammatory cytokines and of receptor activator of nuclear factor kappa B ligand (RANKL), a cell-surface protein present on Th17 cells and osteoblasts.[29] Osteoclast activity can be directly induced by osteoblasts through the RANK/RANKL mechanism.[30]

Lupus is a common collagen vascular disorder that can present with severe arthritis. Other features of lupus include a skin rash, extreme photosensitivity, hair loss, kidney problems, lung fibrosis and constant joint pain.[31]

Gout is caused by deposition of uric acid crystals in the joint, causing inflammation. There is also an uncommon form of gouty arthritis caused by the formation of rhomboid crystals of calcium pyrophosphate, known as pseudogout. In the early stages, gouty arthritis usually occurs in one joint, but with time it can occur in many joints and be quite crippling. The joints in gout can often become swollen and lose function. Gouty arthritis can become particularly painful and potentially debilitating when gout cannot be treated successfully.[32] When uric acid levels and gout symptoms cannot be controlled with standard gout medicines that decrease the production of uric acid (e.g., allopurinol, febuxostat) or increase its elimination from the body through the kidneys (e.g., probenecid), the condition is referred to as refractory chronic gout (RCG).[33]

Infectious arthritis is another severe form of arthritis. It presents with sudden onset of chills, fever and joint pain. The condition is caused by bacteria that have spread from elsewhere in the body. Infectious arthritis must be diagnosed and treated promptly to prevent irreversible joint damage.[37]

Psoriasis can develop into psoriatic arthritis. With psoriatic arthritis, most individuals develop the skin problem first and then the arthritis. The typical features are continuous joint pain, stiffness and swelling. The disease recurs, with periods of remission, but there is no cure for the disorder. A small percentage of patients develop a severe, painful and destructive form of arthritis which destroys the small joints of the hands and can lead to permanent disability and loss of hand function.[38]

There is no known cure for either rheumatoid or osteoarthritis. Treatment options vary depending on the type of arthritis and include physical therapy, lifestyle changes (including exercise and weight control), orthopedic bracing, and medications. Joint replacement surgery may be required in eroding forms of arthritis. Medications can help reduce inflammation in the joint which decreases pain. Moreover, by decreasing inflammation, the joint damage may be slowed.

In general, studies have shown that physical exercise of the affected joint can noticeably improve long-term pain relief. Furthermore, exercise of the arthritic joint is encouraged to maintain the health of the particular joint and the overall body of the person.[39]

Individuals with arthritis can benefit from both physical and occupational therapy. In arthritis the joints become stiff and the range of movement can be limited. Physical therapy has been shown to significantly improve function, decrease pain, and delay need for surgical intervention in advanced cases.[40] Exercise prescribed by a physical therapist has been shown to be more effective than medications in treating osteoarthritis of the knee. Exercise often focuses on improving muscle strength, endurance and flexibility. In some cases, exercises may be designed to train balance. Occupational therapy can provide assistance with activities as well as equipment.

There are several types of medications that are used for the treatment of arthritis. Treatment typically begins with medications that have the fewest side effects with further medications being added if insufficiently effective.[41]

Depending on the type of arthritis, the medications given may differ. For example, the first-line treatment for osteoarthritis is acetaminophen (paracetamol), while for inflammatory arthritis it involves non-steroidal anti-inflammatory drugs (NSAIDs) like ibuprofen. Opioids and NSAIDs are less well tolerated than acetaminophen.[42]

Rheumatoid arthritis (RA) is autoimmune, so in addition to pain medications and anti-inflammatory drugs, it is treated with a further category of drug, the disease-modifying antirheumatic drugs (DMARDs); methotrexate is one example. These drugs act on the immune system and slow down the progression of RA.

A number of rheumasurgical interventions have been incorporated in the treatment of arthritis since the 1950s. Arthroscopic surgery for osteoarthritis of the knee provides no additional benefit to optimized physical and medical therapy.[43]

A Cochrane review in 2000 concluded that transcutaneous electrical nerve stimulation (TENS) for knee osteoarthritis was more effective for pain control than placebo.[44] Low-level laser therapy may be considered for relief of pain and stiffness associated with arthritis.[45] Evidence of benefit is tentative.[46][47]

Pulsed electromagnetic field therapy has tentative evidence supporting improved functioning but no evidence of improved pain in osteoarthritis.[48] The FDA has not approved PEMF for the treatment of arthritis. In Canada, PEMF devices are legally licensed by Health Canada for the treatment of pain associated with arthritic conditions.

Arthritis is predominantly a disease of the elderly, but children can also be affected. More than 70% of individuals in North America affected by arthritis are over the age of 65. Arthritis is more common in women than men at all ages and affects all races, ethnic groups and cultures. In the United States, a CDC survey based on data from 2007–2009 showed that 22.2% (49.9 million) of adults aged 18 years and over had self-reported doctor-diagnosed arthritis, and 9.4% (21.1 million, or 42.4% of those with arthritis) had arthritis-attributable activity limitation (AAAL). With an aging population, this number is expected to increase.[49]
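As a quick consistency check, the quoted 42.4% can be recovered from the absolute counts above, or from the two population percentages. The short Python sketch below is illustrative only: the variable names are ours, and the inputs are the rounded values from the text, so the computed share comes out at roughly 42.3% rather than exactly 42.4%.

    # Consistency check of the CDC 2007-2009 survey figures quoted above.
    # Inputs are the rounded values from the text, so the computed share
    # differs slightly from the quoted 42.4%.
    adults_with_arthritis_m = 49.9  # millions; 22.2% of U.S. adults
    adults_with_aaal_m = 21.1       # millions; arthritis-attributable activity limitation

    share_with_aaal = adults_with_aaal_m / adults_with_arthritis_m
    print(f"AAAL share among adults with arthritis: {share_with_aaal:.1%}")  # ~42.3%

    # Cross-check using the population-level percentages alone:
    print(f"Cross-check from percentages: {9.4 / 22.2:.1%}")  # ~42.3%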

While evidence of primary ankle osteoarthritis has been discovered in dinosaurs,[50] the first known traces of human arthritis date back as far as 4500 BC. In early reports, arthritis was frequently referred to as the most common ailment of prehistoric peoples.[51] It was noted in skeletal remains of Native Americans found in Tennessee and parts of what is now Olathe, Kansas. Evidence of arthritis has been found throughout history, from Ötzi, a mummy (circa 3000 BC) found along the border of modern Italy and Austria, to Egyptian mummies circa 2590 BC.[52]

In 1715, William Musgrave published the second edition of his most important medical work, De arthritide symptomatica, which concerned arthritis and its effects.[53]

Continue reading here:
Arthritis - Wikipedia

Read More...


