


Posts Tagged ‘european’

Outlook Therapeutics® Reports Financial Results for First Quarter Fiscal Year 2025 and Provides Corporate Update

Saturday, February 15th, 2025

ISELIN, N.J., Feb. 14, 2025 (GLOBE NEWSWIRE) -- Outlook Therapeutics, Inc. (Nasdaq: OTLK), a biopharmaceutical company that achieved regulatory approval in the European Union (EU) and the United Kingdom (UK) for the first authorized use of an ophthalmic formulation of bevacizumab for the treatment of wet age-related macular degeneration (wet AMD), today announced financial results for the first quarter of fiscal year 2025 and provided a corporate update.


Genetic Engineering – The Definitive Guide | Biology Dictionary

Monday, January 27th, 2025

Definition

Genetic engineering or genetic modification is a field of genetics that alters the DNA of an organism by changing or replacing specific genes. Used in the agricultural, industrial, chemical, pharmaceutical, and medical sectors, genetic engineering can be applied to the production of brewing yeasts, cancer therapies, and genetically-modified crops and livestock, among countless other options. The only criterion is that the modified product is or once was a living organism that contains DNA.

Examples of genetic engineering are listed according to sector in this article, where each sector applies DNA modification with a different goal. As the human genome contains between 20,000 and 25,000 genes and as these genes can extend from just a few hundred base pairs to over 2 million, the scope of genetic engineering is huge. However, there are lots of ethical questions that concern how far this kind of research should go and what applications are acceptable.

The chemical industry uses genetic engineering when it produces modified live microorganisms for chemical production. It is not possible to genetically engineer a chemical or material like an acid or a steel bar, as they do not contain DNA; however, bacteria that produce acid, for example, can be genetically modified.

Natural chemical compounds are essential for the existence of life. These have been mimicked over the years by man-made (synthetic) copies. One example of genetic engineering in today's chemical industry is an enzyme called protease. Protease engineering is the foundation of genetic modification in laundry detergent manufacturing.

Proteases are enzymes found in every living organism; their function is to catalyze (speed up) the breakdown of ester and peptide bonds that are found in many types of laundry stains. Protease genes give cells the manufacturing instructions for protease production inside the cell (protein synthesis). By manipulating these genes, we can change the ultimate form of the protease and some of its characteristics.
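The gene-to-protein relationship described above can be sketched computationally: a DNA coding sequence is read in three-letter codons, each mapped to an amino acid, so changing a codon in the gene changes the resulting protein. The sketch below is a toy illustration only; the codon table is a small invented subset of the real genetic code, not a full translation tool.

```python
# Toy sketch of protein synthesis: translate a DNA coding sequence into an
# amino-acid string codon by codon. Only a handful of codons are included
# here for illustration; the real genetic code has 64.
CODON_TABLE = {
    "ATG": "M",  # methionine (start codon)
    "TCT": "S",  # serine (the catalytic residue of many proteases)
    "GGT": "G",  # glycine
    "CAT": "H",  # histidine
    "TAA": "*",  # stop codon
}

def translate(dna: str) -> str:
    """Read codons until a stop codon, returning the amino-acid sequence."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTCTGGTCATTAA"))  # -> MSGH
```

Swapping the second codon (TCT, serine) for GGT (glycine) changes the output to "MGGH", mirroring how editing a protease gene changes the enzyme's form and characteristics.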

Earlier detergent manufacturers did not have access to genetic-engineering technology, but even then researchers were able to improve proteases by selecting and producing the best strains. With genetic engineering, these enzymes can be further improved for even whiter whites. Once the gene for protease production was decoded, it was possible to extract and modify it. Many modifications have been made that improve stain-removing results in varying pH and water temperature levels, for example.

Other genetic engineering examples in the chemical industry include less environmentally-damaging wastewater management. This involves modifying the genes of the many types of bacteria that digest waste so that they do not leave behind similarly harmful byproducts. Another example is manufacturing biodegradable plastics using genetically-modified strains of cyanobacteria.

Genetic engineering examples relating to crop production are often used to tell us why not to buy or eat GM foods; however, a growing population without the time, space, or often the knowledge to produce crops at home means we need to use our agricultural land more efficiently. At the same time, it is important not to reduce natural habitats around the world. Genetically-modified (GM) crops are an answer in the form of increased crop yield on a smaller plot. Genetically modifying a crop concentrates on increased resistance to disease, increased fiber and nutrient content, or increased yield (preferably a combination of all three). If we can obtain all the minerals and vitamins we need from a super-tomato that grows very quickly without needing pesticides or fertilizers, and will even grow in drought conditions, then the topic of GM crops suddenly looks very attractive indeed.

A lot of negative public comment has made genetically modified crops unpopular; many GM crops, even when legally grown, can't find a big enough market. This means that farmers are rarely willing to take the financial risk of growing them.

There is no scientific evidence that a GM crop is dangerous to eat in comparison with a non-GM crop, but genetic engineering is quite new and we can't say for sure whether the long-term effects are harmful to humans or to the animals that eat them (and that we might then eat in our burgers). The only GM crop grown legally in the European Union (EU) is MON 810 maize. Production of this maize in the EU might also be banned in the future. Federal law in the US is strict concerning GM testing, but the production, sale, and consumption of GM crops are legal.

Genetic engineering examples in livestock rearing should always mention one Food and Drug Administration restriction that has recently been lifted. The import, sale, and raising of GM salmon eggs used to be banned in the US, although this wasn't due to fears that eating these fish could be dangerous to our health; the ban was due to labeling laws. It has now been lifted.

In AquAdvantage salmon, scientists combined the genes of Chinook salmon and the rather ugly ocean pout to produce a continuously-growing salmon (salmon usually grow seasonally) that requires fewer calories than wild or farmed alternatives. The company has spent twenty years testing this new food source; arguments against GM salmon's use are usually based on the fact that twenty years is not very long in the average human lifespan.

While genetically modified beef is hard to find, it is still possible that your pot roast once ate GM feed. It might also, when alive, have been injected with genetically-engineered recombinant bovine growth hormone (rBGH). This hormone is also injected into dairy cows. It has been reported that milk from rBGH-treated cows contains higher levels of IGF-1, a hormone that seems to increase breast, prostate, colon, and lung cancer risk in humans. This is just one of the reasons why GM products are so controversial. But studies have also shown that the use of GM feeds improves animal health and often means that farmers do not need to inject antibiotics and hormones into their livestock; as these chemicals can pass into the bloodstream of the people who eat the livestock or drink their milk, this can be a doubly positive result. The jury is still out.

GM chicken is not available in your local supermarket (yet), but chickens fed with GM feeds are often labeled as such. So it is the digested residue of various genetically modified crops, and not a genetically modified bird, that is roasting in the oven.

Genetically modified chicken eggs are being studied as a future source of natural chemical compounds. Female chickens can be genetically engineered to produce eggs that contain larger amounts of certain proteins. These proteins are commonly used in the manufacturing processes of pharmaceutical drugs. Future drug prices could be much more affordable thanks to genetic modification technology.

Genetic engineering examples in cancer therapy are already starting to show very positive results. The chicken egg makes an appearance here, too. In this field of genetic engineering, bacterial genes that produce particular proteins are modified. These proteins (you might have heard of the very heavily studied Cas9 protein) help to destroy invading viruses. This type of protein also supports a mechanism that alerts the immune response in humans. As this response is often suppressed by cancer cells, Cas9 might be able to help the body to recognize and then fight cancer. Cas9 is already being studied and trialed for genetic disorders such as sickle cell disease and cystic fibrosis.

Hereditary diseases and disorders might become a thing of the past thanks to genetic engineering; there is just one problem: the ethical use of human embryos for research purposes.

Embryological genetic engineering is legal in some countries, and these countries attract a lot of criticism. But when He Jiankui edited the genes of twin embryos and then had them implanted in a woman who gave birth to these genetically-modified children, there was a worldwide outcry and He was subsequently jailed. Not only are the long-term effects of genetic engineering unknown, but any changes might carry through to subsequent generations or continue to change without the natural control that is evolution. For people who believe that life begins at conception or consider an embryo a living, conscious person, there are even more ethical arguments.

Many parents who undergo the process of in vitro fertilization (IVF) are offered the option of pre-implantation genetic diagnosis (PGD). This checks the DNA of the fertilized egg before it is inserted into the womb. The aim is to screen for possible genetic mutations. The parents are allowed to discard faulty eggs. Many believe that this is very wrong, as we have not agreed on what counts as an undesired mutation. A genetic fault that causes miscarriage would be acceptable, perhaps. But what about gender, hereditary mental illness, or eye color? In recent years, several fertility clinics in India have been called out for promising male offspring to couples, for example. This is not an example of genetic engineering, but many groups fear that certain physiological choices may edge their way into genetic engineering without being controlled. Today, genetic modification in humans follows practically the same ethical arguments as abortion.

The pros and cons of genetic engineering are not at all clear-cut. In the field of human genetic modification, our personal beliefs affect how this technology will develop and move forward. In countries where the law states that human life begins at week 24, the genetic engineering of embryos not carried to term is more likely to be accepted. This ethical question is part of what is known as the fetal personhood argument and is the main reason why genetic engineering in humans is meeting so much resistance.

In an agricultural setting, the public's fears concern the long-term effects of eating GM foods. These fears stop farmers from producing modified crops, as they might not be able to sell them and, in many countries, it is unlawful to grow them. Personal issues are often opinions; the actual pros and cons concern the results of long-term scientific research. Unfortunately, genome editing is a new technology and we do not have any data that covers more than a few years, certainly nothing that covers the lifetimes of one or more generations.

Any list of genetic engineering's pros should start with the fact that this field has allowed us to learn so much more about our genes and the genes of other organisms. It is thanks to genetic engineering that we are learning how the entire range of DNA-containing organisms, from bacteria to humans, works.

Genetic engineering has given us fresh and unexpected knowledge about how certain illnesses develop. The field has also provided targeted therapies that can cure or at least relieve these diseases. This technology can improve not only the action of pharmaceuticals but also, as in the case of GM chicken eggs, the efficiency and cost of their production.

The combination of a growing global population and the need to maintain a very unstable ratio of agricultural land to natural habitats has led to the development of genetically-engineered crops. These crops are designed to have a greater yield, use fewer nutrients to grow, and require less acreage or fewer chemicals (herbicides and pesticides). Scientists can even improve taste, nutritional values, colors, and shapes.

Genetically-modified bacteria help to produce bio-fuels from genetically-modified crops. Bio-fuels reduce the effects of fossil fuel pollution. Cyanobacteria help us to produce biodegradable plastics and other GM microorganisms break down our waste. Genetic modification is strongly linked to our ecology and future.

And we use less of the earth's resources when our livestock grows more quickly. When beef cattle grow to full size in one year instead of two or three, that is up to two years off every animal's carbon footprint. When bovine genes are modified to fight disease, our milk and meat have less antibiotic and hormone residue. Genetic engineering means less pressure to turn important, disappearing natural ecosystems into food-production factories.

The cons are mainly based on the lack of long-term studies into the effects of genetic engineering, both on an organism and on the organisms that eat it, and maybe even those that live alongside it. As with all new but potentially damaging technology, we just don't have enough data.

Another factor is that, although we have decoded the human genome, we do not know everything we need to about every function in the human body. For example, the gut microbiome is a relatively recent hot topic. Scientists now accept that bacteria in the gut directly affect the brain, a view that was rare ten years ago. But exactly how the neurotransmitters of the brain interact with chemicals in the digestive tract is still a mystery. Examples like this mean that many people argue we should not try to fix something if we don't know exactly how it works, what the long-term effects will be, or whether it is actually broken in the first place.

There are other hurdles, of course. Before knowing whether genetic engineering can safely eliminate a fatal disorder forever, we have to figure out if it is right to change the DNA of embryos, let them grow and be born, and then research their lives from birth to old age (and maybe their children and grandchildren, too) so that we can ensure the new cure is safe.


See the original post here:
Genetic Engineering - The Definitive Guide | Biology Dictionary


The therapeutic potential of stem cells – PMC

Thursday, December 19th, 2024

Abstract

In recent years, there has been an explosion of interest in stem cells, not just within the scientific and medical communities but also among politicians, religious groups and ethicists. Here, we summarize the different types of stem cells that have been described: their origins in embryonic and adult tissues and their differentiation potential in vivo and in culture. We review some current clinical applications of stem cells, highlighting the problems encountered when going from proof-of-principle in the laboratory to widespread clinical practice. While some of the key genetic and epigenetic factors that determine stem cell properties have been identified, there is still much to be learned about how these factors interact. There is a growing realization of the importance of environmental factors in regulating stem cell behaviour and this is being explored by imaging stem cells in vivo and recreating artificial niches in vitro. New therapies, based on stem cell transplantation or endogenous stem cells, are emerging areas, as is drug discovery based on patient-specific pluripotent cells and cancer stem cells. What makes stem cell research so exciting is its tremendous potential to benefit human health and the opportunities for interdisciplinary research that it presents.

Keywords: adult stem cells, ES cells, iPS cells, cell-based therapies, drug discovery

The human body comprises over 200 different cell types that are organized into tissues and organs to provide all the functions required for viability and reproduction. Historically, biologists have been interested primarily in the events that occur prior to birth. The second half of the twentieth century was a golden era for developmental biology, since the key regulatory pathways that control specification and morphogenesis of tissues were defined at the molecular level (Arias 2008). The origins of stem cell research lie in a desire to understand how tissues are maintained in adult life, rather than how different cell types arise in the embryo. An interest in adult tissues fell, historically, within the remit of pathologists and thus tended to be considered in the context of disease, particularly cancer.

It was appreciated long ago that within a given tissue there is cellular heterogeneity: in some tissues, such as the blood, skin and intestinal epithelium, the differentiated cells have a short lifespan and are unable to self-renew. This led to the concept that such tissues are maintained by stem cells, defined as cells with extensive renewal capacity and the ability to generate daughter cells that undergo further differentiation (Lajtha 1979). Such cells generate only the differentiated lineages appropriate for the tissue in which they reside and are thus referred to as multipotent or unipotent (figure 1).

Origin of stem cells. Cells are described as pluripotent if they can form all the cell types of the adult organism. If, in addition, they can form the extraembryonic tissues of the embryo, they are described as totipotent. Multipotent stem cells have the ability to form all the differentiated cell types of a given tissue. In some cases, a tissue contains only one differentiated lineage and the stem cells that maintain the lineage are described as unipotent. Postnatal spermatogonial stem cells, which are unipotent in vivo but pluripotent in culture, are not shown (Jaenisch & Young 2008). CNS, central nervous system; ICM, inner cell mass.

In the early days of stem cell research, a distinction was generally made between three types of tissue: those, such as epidermis, with rapid turnover of differentiated cells; those, such as brain, in which there appeared to be no self-renewal; and those, such as liver, in which cells divided to give two daughter cells that were functionally equivalent (Leblond 1964; Hall & Watt 1989). While it remains true that different adult tissues differ in terms of the proportion of proliferative cells and the nature of the differentiation compartment, in recent years it has become apparent that some tissues that appeared to lack self-renewal ability do indeed contain stem cells (Zhao et al. 2008) and others contain a previously unrecognized cellular heterogeneity (Zaret & Grompe 2008). That is not to say that all tissues are maintained by stem cells; for example, in the pancreas, there is evidence against the existence of a distinct stem cell compartment (Dor et al. 2004).

One reason why it took so long for stem cells to become a well-established research field is that in the early years too much time and energy were expended in trying to define stem cells and in arguing about whether or not a particular cell was truly a stem cell (Watt 1999). Additional putative characteristics of stem cells, such as rarity, capacity for asymmetric division or tendency to divide infrequently, were incorporated into the definition, so that if a cell did not exhibit these additional properties it tended to be excluded from the stem cell list. Some researchers still remain anxious about the definitions and try to hedge their bets by describing a cell as a stem/progenitor cell. However, this is not useful. The use of the term progenitor, or transit amplifying, cell should be reserved for a cell that has left the stem cell compartment but still retains the ability to undergo cell division and further differentiation (Potten & Loeffler 2008).

Looking back at some of the early collections of reviews written as the proceedings of stem cell conferences, one regularly finds articles on the topic of cancer stem cells (McCulloch et al. 1988). However, these cells have only recently received widespread attention (Reya et al. 2001; Clarke et al. 2006; Dick 2008). The concept is very similar to the concept of normal tissue stem cells, namely that cells in tumours are heterogeneous, with only some, the cancer stem cells, or tumour initiating cells, being capable of tumour maintenance or regrowth following chemotherapy. The cancer stem cell concept is important because it suggests new approaches to anti-cancer therapies (figure 2).

The cancer stem cell hypothesis. The upper tumour is shown as comprising a uniform population of cells, while the lower tumour contains both cancer stem cells and more differentiated cells. Successful or unsuccessful chemotherapy is interpreted according to the behaviour of cells within the tumour.

As in the case of tissue stem cells, it is important that cancer stem cell research is not sidetracked by arguments about definitions. It is quite likely that in some tumours all the cells are functionally equivalent, and there is no doubt that tumour cells, like normal stem cells, can behave differently under different assay conditions (Quintana et al. 2008). The oncogene dogma (Hahn & Weinberg 2002), that tumours arise through step-wise accumulation of oncogenic mutations, does not adequately account for cellular heterogeneity, and markers of stem cells in specific cancers have already been described (Singh et al. 2004; Barabé et al. 2007; O'Brien et al. 2007). While the (rediscovered) cancer stem cell field is currently in its infancy, it is already evident that a cancer stem cell is not necessarily a normal stem cell that has acquired oncogenic mutations. Indeed, there is experimental evidence that cancer initiating cells can be genetically altered progenitor cells (Clarke et al. 2006).

In addition to adult tissue stem cells, stem cells can be isolated from pre-implantation mouse and human embryos and maintained in culture as undifferentiated cells (figure 1). Such embryonic stem (ES) cells have the ability to generate all the differentiated cells of the adult and are thus described as being pluripotent (figure 1). Mouse ES cells are derived from the inner cell mass of the blastocyst, and following their discovery in 1981 (Evans & Kaufman 1981; Martin 1981) have been used for gene targeting, revolutionizing the field of mouse genetics. In 1998, it was first reported that stem cells could be derived from human blastocysts (Thomson et al. 1998), opening up great opportunities for stem cell-based therapies, but also provoking controversy because the cells are derived from spare in vitro fertilization embryos that have the potential to produce a human being. It is interesting to note that, just as research on adult tissue stem cells is intimately linked to research on disease states, particularly cancer, the same is true for ES cells. Many years before the development of ES cells, the in vitro differentiation of cells derived from teratocarcinomas, known as embryonal carcinoma cells, provided an important model for studying lineage selection (Andrews et al. 2005).

Blastocysts are not the only source of pluripotent ES cells (figure 1). Pluripotent epiblast stem cells, known as epiSC, can be derived from the post-implantation epiblast of mouse embryos (Brons et al. 2007; Tesar et al. 2007). Recent gene expression profiling studies suggest that human ES cells are more similar to epiSC than to mouse ES cells (Tesar et al. 2007). Pluripotent stem cells can also be derived from primordial germ cells (EG cells), progenitors of adult gametes, which diverge from the somatic lineage at late embryonic to early foetal development (Kerr et al. 2006).

Although in the past the tendency has been to describe ES cells as pluripotent and adult stem cells as having a more restricted range of differentiation options, adult cells can, in some circumstances, produce progeny that differentiate across the three primary germ layers (ectoderm, mesoderm and endoderm). Adult cells can be reprogrammed to a pluripotent state by transfer of the adult nucleus into the cytoplasm of an oocyte (Gurdon et al. 1958; Gurdon & Melton 2008) or by fusion with a pluripotent cell (Miller & Ruddle 1976). The most famous example of cloning by transfer of a somatic nucleus into an oocyte is the creation of Dolly the sheep (Wilmut et al. 1997). While the process remains inefficient, it has found some unexpected applications, such as cloning endangered species and domestic pets.

A flurry of reports almost 10 years ago suggested that adult cells from many tissues could differentiate into other cell types if placed in a new tissue environment. Such studies are now largely discredited, although there are still some bona fide examples of transdifferentiation of adult cells, such as occurs when blood cells fuse with hepatocytes during repair of damaged liver (Anderson et al. 2001; Jaenisch & Young 2008). In addition, it has been known for many years that adult urodele amphibians can regenerate limbs or the eye lens following injury; this involves dedifferentiation and subsequent transdifferentiation steps (Brockes & Kumar 2005).

The early studies involving somatic nuclear transfer indicated that adult cells can be reprogrammed to pluripotency. However, the mechanistic and practical applications of inducing pluripotency in adult cells have only become apparent in the last 2 or 3 years, with the emergence of a new research area: induced pluripotent stem cells (iPS cells). The original report demonstrated that retrovirus-mediated transduction of mouse fibroblasts with four transcription factors (Oct-3/4, Sox2, KLF4 and c-Myc; figure 1) that are highly expressed in ES cells could induce the fibroblasts to become pluripotent (Takahashi & Yamanaka 2006). Since then, rapid progress has been made: iPS cells can be generated from adult human cells (Takahashi et al. 2007; Yu et al. 2007; Park et al. 2008a); cells from a range of tissues can be reprogrammed (Aasen et al. 2008; Aoi et al. 2008); and iPS cells can be generated from patients with specific diseases (Dimos et al. 2008; Park et al. 2008b). The number of transcription factors required to generate iPS cells has been reduced (Kim et al. 2008); the efficiency of iPS cell generation increased (Wernig et al. 2007); and techniques devised that obviate the need for retroviral vectors (Okita et al. 2008; Stadtfeld et al. 2008). These latter developments are very important for future clinical applications, since the early mice generated from iPS cells developed tumours at high frequency (Takahashi & Yamanaka 2006; Yamanaka 2007). Without a doubt, this is currently the most exciting and rapidly moving area of stem cell research.

In all the publicity that surrounds embryonic and iPS cells, people tend to forget that stem cell-based therapies are already in clinical use and have been for decades. It is instructive to think about these treatments, because they provide important caveats about the journey from proof-of-principle in the laboratory to real patient benefit in the clinic. These caveats include efficacy, patient safety, government legislation and the costs and potential profits involved in patient treatment.

Haemopoietic stem cell transplantation is the oldest stem cell therapy and is the treatment that is most widely available (Perry & Linch 1996; Austin et al. 2008). The stem cells come from bone marrow, peripheral blood or cord blood. For some applications, the patient's own cells are engrafted. However, allogeneic stem cell transplantation is now a common procedure for the treatment of bone marrow failure and haematological malignancies, such as leukaemia. Donor stem cells are used to reconstitute immune function in such patients following radiation and/or chemotherapy. In the UK, the regulatory framework put in place for bone marrow transplantation now has an extended remit, covering the use of other tissues and organs (Austin et al. 2008).

Advances in immunology research greatly increased the utility of bone marrow transplantation, allowing allograft donors to be screened for the best match in order to prevent rejection and graft-versus-host disease (Perry & Linch 1996). It is worth remembering that organ transplantation programmes have also depended on an understanding of immune rejection, and drugs are available to provide effective long-term immunosuppression for recipients of donor organs. Thus, while it is obviously desirable for new stem cell treatments to involve the patient's own cells, it is certainly not essential.

Two major advantages of haemopoietic stem cell therapy are that there is no need to expand the cells in culture or to reconstitute a multicellular tissue architecture prior to transplantation. These hurdles have been overcome to generate cultured epidermis to provide autologous grafts for patients with full-thickness wounds, such as third-degree burns. Proof-of-principle was established in the mid-1970s, with clinical and commercial applications following rapidly (Green 2008). Using a similar approach, limbal stem cells have been used successfully to restore vision in patients suffering from chemical destruction of the cornea (De Luca et al. 2006).

Ex vivo expansion of human epidermal and corneal stem cells frequently involves culture on a feeder layer of mouse fibroblastic cells in medium containing bovine serum. While it would obviously be preferable to avoid animal products, there has been no evidence over the past 30 years that exposure to them has had adverse effects on patients receiving the grafts. The ongoing challenges posed by epithelial stem cell treatments include improved functionality of the graft (e.g. through generation of epidermal hair follicles) and improved surfaces on which to culture the cells and apply them to the patients. The need to optimize stem cell delivery is leading to close interactions between the stem cell community and bioengineers. In a recent example, a patient's trachea was repaired by transplanting a new tissue constructed in culture from donor decellularized trachea seeded with the patient's own bone marrow cells that had been differentiated into cartilage cells (Macchiarini et al. 2008).

Whereas haemopoietic stem cell therapies are widely available, treatments involving cultured epidermis and cornea are not. In countries where cultured epithelial grafts are available, the number of potential patients is relatively small and the treatment costly. Commercial organizations that sell cultured epidermis for grafting have found that it is not particularly profitable, while in countries with publicly funded healthcare the need to set up a dedicated laboratory to generate the grafts tends to make the financial cost-benefit ratio too high (Green 2008).

Clinical studies over the last 10 years suggest that stem cell transplantation also has potential as a therapy for neurodegenerative diseases. Clinical trials have involved grafting brain tissue from aborted foetuses into patients with Parkinson's disease and Huntington's disease (Dunnett et al. 2001; Wright & Barker 2007). While some successes have been noted, the outcomes have not been uniform and further clinical trials will involve more refined patient selection, in an attempt to predict who will benefit and who will not. Obviously, aside from the opposition in many quarters to using foetal material, there are practical challenges associated with availability and uniformity of the grafted cells and so therapies with pure populations of stem cells are an important, and achievable (Conti et al. 2005; Lowell et al. 2006), goal.

No consideration of currently available stem cell therapies is complete without reference to gene therapy. Here, there have been some major achievements, including the successful treatment of children with X-linked severe combined immunodeficiency. However, the entire gene therapy field stalled when several of the children developed leukaemia as a result of integration of the therapeutic retroviral vector close to the LMO2 oncogene locus (Gaspar & Thrasher 2005; Pike-Overzet et al. 2007). Clinical trials have since restarted, and in an interesting example of combined gene/stem cell therapy, a patient with an epidermal blistering disorder received an autologous graft of cultured epidermis in which the defective gene had been corrected ex vivo (Mavilio et al. 2006).

These are just some examples of treatments involving stem cells that are already in the clinic. They show how the field of stem cell transplantation is interlinked with the fields of gene therapy and bioengineering, and how it has benefited from progress in other fields, such as immunology. Stem cells undoubtedly offer tremendous potential to treat many human diseases and to repair tissue damage resulting from injury or ageing. The danger, of course, lies in the potentially lethal cocktail of desperate patients, enthusiastic scientists, ambitious clinicians and commercial pressures (Lau et al. 2008). Internationally agreed, and enforced, regulations are essential in order to protect patients from the dangers of stem cell tourism, whereby treatments that have not been approved in one country are freely available in another (Hyun et al. 2008).

Three questions in stem cell research are being hotly pursued at present. What are the core genetic and epigenetic regulators of stem cells? What are the extrinsic, environmental factors that influence stem cell renewal and differentiation? And how can the answers to the first two questions be harnessed for clinical benefit?

Considerable progress has already been made in defining the transcriptional circuitry and epigenetic modifications associated with pluripotency (Jaenisch & Young 2008). This research area is moving very rapidly as a result of tremendous advances in DNA sequencing technology, bioinformatics and computational biology. Chromatin immunoprecipitation combined with microarray hybridization or DNA sequencing (Mathur et al. 2008) is being used to identify transcription factor-binding sites, and bioinformatics techniques have been developed to allow integration of data obtained by the different approaches. It is clear that pluripotency is also subject to complex epigenetic regulation, and high throughput genome-scale DNA methylation profiling has been developed for epigenetic profiling of ES cells and other cell types (Meissner et al. 2008).
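The statistical core of calling a transcription factor-binding site from such ChIP data can be sketched in a toy form (illustrative only; real peak callers use far richer background models, and every function name and threshold below is invented for the example): a genomic window is flagged when its ChIP read count is improbably high under a Poisson background estimated from the input control.

```python
import math

def poisson_sf(k, lam):
    """P(X > k) for X ~ Poisson(lam), via the complement of the CDF."""
    term, cdf = math.exp(-lam), 0.0   # term starts at P(X = 0)
    for i in range(k + 1):
        cdf += term
        term *= lam / (i + 1)         # advance P(X = i) -> P(X = i + 1)
    return max(0.0, 1.0 - cdf)

def enriched_windows(chip, control, scale=1.0, alpha=1e-3):
    """Indices of windows whose ChIP read count is improbably high
    under a Poisson model whose rate is the (scaled) control count."""
    hits = []
    for i, (c, b) in enumerate(zip(chip, control)):
        lam = max(b * scale, 0.5)         # floor the background rate
        if poisson_sf(c - 1, lam) < alpha:  # P(X >= c) under background
            hits.append(i)
    return hits
```

Here the third window of `enriched_windows([3, 5, 120, 4], [4, 5, 6, 3])` would be reported as a candidate site, since 120 reads against a background rate of 6 is vanishingly unlikely by chance.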

Oct4, Nanog and Sox2 are core transcription factors that maintain pluripotency of ES cells. These factors bind to their own promoters, forming an autoregulatory loop. They occupy overlapping sets of target genes, one set being actively expressed and the other, comprising genes that positively regulate lineage selection, being actively silenced (Jaenisch & Young 2008; Mathur et al. 2008; Silva & Smith 2008). Nanog stabilizes pluripotency by limiting the frequency with which cells commit to differentiation (Chambers et al. 2007; Torres & Watt 2008). The core pluripotency transcription factors also regulate, again positively and negatively, the microRNAs that are involved in controlling ES cell self-renewal and differentiation (Marson et al. 2008).
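The logic of such an autoregulatory loop can be caricatured with a one-variable toy model (the Hill form and all parameter values are assumptions for illustration, not measured properties of Oct4, Nanog or Sox2): a factor that activates its own transcription and decays at a constant rate settles into one of two stable states, mirroring a pluripotency network that is self-sustaining yet switchable.

```python
def simulate(x0, steps=20000, dt=0.01, a=2.0, K=1.0, n=4, d=1.0):
    """Euler-integrate dx/dt = a*x**n / (K**n + x**n) - d*x:
    positive autoregulation (Hill activation) plus linear decay."""
    x = x0
    for _ in range(steps):
        x += dt * (a * x**n / (K**n + x**n) - d * x)
    return x
```

With these illustrative parameters the system has stable fixed points near 0 ("off") and about 1.84 ("on"), separated by an unstable point at 1: trajectories starting below the threshold silence the factor, while those starting above it lock it on.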

As the basic mechanisms that maintain the pluripotent state of ES cells are delineated, there is considerable interest in understanding how pluripotency is re-established in adult stem cells. It appears that some cell types are more readily reprogrammed to iPS cells than others (Aasen et al. 2008; Aoi et al. 2008), and it is interesting to speculate that this reflects differences in endogenous expression of the genes required for reprogramming or in responsiveness to overexpression of those genes (Hochedlinger et al. 2005; Markoulaki et al. 2009). Another emerging area of investigation is the relationship between the epigenome of pluripotent stem cells and cancer cells (Meissner et al. 2008).

Initial attempts at defining stemness by comparing the transcriptional profiles of ES cells, neural and haemopoietic stem cells (Ivanova et al. 2002; Ramalho-Santos et al. 2002) have paved the way for more refined comparisons. For example, by comparing the gene expression profiles of adult neural stem cells, ES-derived and iPS-derived neural stem cells and brain tumour stem cells, it should be possible both to validate the use of ES-derived stem cells for brain repair and to establish the cell of origin of brain tumour initiating cells. Furthermore, it is anticipated that new therapeutic targets will be identified from molecular profiling studies of different stem cell populations.
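Such profile comparisons ultimately reduce to a similarity computation between expression vectors. A minimal sketch (the profile names and values used below are hypothetical; real studies use genome-wide vectors together with clustering and batch correction, not a single correlation):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def closest_profile(query, references):
    """Name of the reference profile most correlated with `query`."""
    return max(references, key=lambda name: pearson(query, references[name]))
```

For instance, given hypothetical four-gene profiles `{"adult_NSC": [9.1, 2.0, 7.5, 0.3], "tumour_SC": [8.8, 6.9, 1.2, 5.4]}`, a query sample resembling the first pattern would be assigned to `adult_NSC`.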

As gene expression profiling becomes more sophisticated, the question of "what is a stem cell?" can be addressed in new ways. Several studies have used single cell expression microarrays to identify new stem cell markers (Jensen & Watt 2006). Stem cells are well known to exhibit different proliferative and differentiation properties in culture, during tissue injury and in normal tissue homeostasis, raising the question of which elements of the stem cell phenotype are hard-wired versus a response to environmental conditions.

One of the growing trends in stem cell research is the contribution of mathematical modelling. This is illustrated in the concept of transcriptional noise: the hypothesis that intercellular variability is a manifestation of noise in gene expression levels, rather than stable phenotypic variation (Chang et al. 2008). Studies with clonal populations of haemopoietic progenitor cells have shown that slow fluctuations in protein levels can produce cellular heterogeneity that is sufficient to affect whether a given cell will differentiate along the myeloid or erythroid lineage (Chang et al. 2008). Mathematical approaches are also used increasingly to model observed differences in cell behaviour in vivo. In studies of adult mouse interfollicular epidermis, it is observed that cells can divide to produce two undifferentiated cells, two differentiated cells or one of each (figure 3); it turns out that this can be explained in terms of the stochastic behaviour of a single population of cells rather than by invoking the existence of discrete types of stem and progenitor cell (Clayton et al. 2007).
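The stochastic single-progenitor idea can be illustrated with a short Monte Carlo sketch (a caricature of the Clayton et al. model: the value of the symmetric-division probability `r` is arbitrary here, and shedding of differentiated cells from the tissue is ignored):

```python
import random

def simulate_clone(r=0.1, max_divisions=100, rng=random):
    """Grow one clone from a single progenitor. Each division of a
    progenitor yields two progenitors (prob r), two differentiated
    cells (prob r) or one of each (prob 1 - 2r); the clone stops
    growing if no progenitors remain."""
    progenitors, differentiated = 1, 0
    for _ in range(max_divisions):
        if progenitors == 0:          # clone has gone extinct
            break
        u = rng.random()
        if u < r:
            progenitors += 1          # symmetric duplication
        elif u < 2 * r:
            progenitors -= 1          # symmetric differentiation
            differentiated += 2
        else:
            differentiated += 1       # asymmetric division
    return progenitors + differentiated   # total clone size

def clone_sizes(n_clones=2000, r=0.1, seed=1):
    """Sizes of many independent clones, for a seeded reproducible run."""
    rng = random.Random(seed)
    return [simulate_clone(r=r, rng=rng) for _ in range(n_clones)]
```

Because every division adds exactly one cell whatever the outcome, clone size simply records how long the clone retains a progenitor; the broad, skewed distribution of sizes that emerges from identical starting cells is the behaviour the stochastic model explains without invoking distinct stem and progenitor populations.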

The stem cell niche. Stem cells (S) are shown dividing symmetrically to produce two stem cells (1) or two differentiated cells (D) (2), or undergoing asymmetric division to produce one stem cell and one differentiated cell (3). Under some circumstances, a differentiated cell can re-enter the niche and become a stem cell (4). Different components of the stem cell niche are illustrated: extracellular matrix (ECM), cells in close proximity to stem cells (niche cells), secreted factors (such as growth factors) and physical factors (such as oxygen tension, stiffness and stretch).

There is strong evidence that the behaviour of stem cells is strongly affected by their local environment or niche (figure 3). Some aspects of the stem cell environment that are known to influence self-renewal and stem cell fate are adhesion to extracellular matrix proteins, direct contact with neighbouring cells, exposure to secreted factors and physical factors, such as oxygen tension and shear stress (Watt & Hogan 2000; Morrison & Spradling 2008). It is important to identify the environmental signals that control stem cell expansion and differentiation in order to harness those signals to optimize delivery of stem cell therapies.

Considerable progress has been made in directing ES cells to differentiate along specific lineages in vitro (Conti et al. 2005; Lowell et al. 2006; Izumi et al. 2007) and there are many in vitro and murine models of lineage selection by adult tissue stem cells (e.g. Watt & Collins 2008). It is clear that in many contexts the Erk and Akt pathways are key regulators of cell proliferation and survival, while pathways that were originally defined through their effects in embryonic development, such as Wnt, Notch and Shh, are reused in adult tissues to influence stem cell renewal and lineage selection. Furthermore, these core pathways are frequently deregulated in cancer (Reya et al. 2001; Watt & Collins 2008). In investigating how differentiation is controlled, it is not only the signalling pathways themselves that need to be considered, but also the timing, level and duration of a particular signal, as these variables profoundly influence cellular responses (Silva-Vargas et al. 2005). A further issue is the extent to which directed ES cell differentiation in vitro recapitulates the events that occur during normal embryogenesis and whether this affects the functionality of the differentiated cells (Izumi et al. 2007).

For a more complete definition of the stem cell niche, researchers are taking two opposite and complementary approaches: recreating the niche in vitro at the single cell level and observing stem cells in vivo. In vivo tracking of cells is possible because of advances in high-resolution confocal microscopy and two-photon imaging, which have greatly increased the sensitivity of detecting cells and the depth of the tissue at which they can be observed. Studies of green fluorescent protein-labelled haemopoietic stem cells have shown that their relationship with the bone marrow niche, comprising blood vessels, osteoblasts and the inner bone surface, differs in normal, irradiated and c-Kit-receptor-deficient mice (Lo Celso et al. 2009; Xie et al. 2009). In a different approach, in vivo bioluminescence imaging of luciferase-tagged muscle stem cells has been used to reveal their role in muscle repair in a way that is impossible when relying on retrospective analysis of fixed tissue (Sacco et al. 2008).

The advantage of recreating the stem cell niche in vitro is that it is possible to precisely control individual aspects of the niche and measure responses at the single cell level. Artificial niches are constructed by plating cells on micropatterned surfaces or capturing them in three-dimensional hydrogel matrices. In this way, parameters such as cell spreading and substrate mechanics can be precisely controlled (Watt et al. 1988; Théry et al. 2005; Chen 2008). Cells can be exposed to specific combinations of soluble factors or to tethered recombinant adhesive proteins. Cell behaviour can be monitored in real time by time-lapse microscopy, and activation of specific signalling pathways can be viewed using fluorescence resonance energy transfer probes and fluorescent reporters of transcriptional activity. It is also possible to recover cells from the in vitro environment, transplant them in vivo and monitor their subsequent behaviour. One of the exciting aspects of the reductionist approach to studying the niche is that it is highly interdisciplinary, bringing together stem cell researchers and bioengineers, and also offering opportunities for interactions with chemists, physicists and materials scientists.

Almost every day there are reports in the media of new stem cell therapies. There is no doubt that stem cells have the potential to treat many human afflictions, including ageing, cancer, diabetes, blindness and neurodegeneration. Nevertheless, it is essential to be realistic about the time and steps required to take new therapies into the clinic: it is exciting to be able to induce ES cells to differentiate into cardiomyocytes in a culture dish, but that is only one very small step towards effecting cardiac repair. The overriding concerns for any new treatment are the same: efficacy, safety and affordability.

In January 2009, the US Food and Drug Administration approved the first clinical trial involving human ES cells, just over 10 years after they were first isolated. In this trial, the safety of ES cell-derived oligodendrocytes in repair of spinal cord injury will be evaluated (http://www.geron.com). There are a large number of human ES cell lines now in existence and banking of clinical grade cells is underway, offering the opportunity for optimal immunological matching of donors and recipients. Nevertheless, one of the attractions of transplanting iPS cells is that the patient's own cells can be used, obviating the need for immunosuppression. Discovering how the pluripotent state can be efficiently and stably induced and maintained by treating cells with pharmacologically active compounds rather than by genetic manipulation is an important goal (Silva et al. 2008).

An alternative strategy to stem cell transplantation is to stimulate a patient's endogenous stem cells to divide or differentiate, as happens naturally during skin wound healing. It has recently been shown that pancreatic exocrine cells in adult mice can be reprogrammed to become functional, insulin-producing beta cells by expression of transcription factors that regulate pancreatic development (Zhou et al. 2008). The idea of repairing tissue through a process of cellular reprogramming in situ is an attractive paradigm to be explored further.

A range of biomaterials are already in clinical use for tissue repair, in particular to repair defects in cartilage and bone (Kamitakahara et al. 2008). These can be considered as practical applications of our knowledge of the stem cell microenvironment. Advances in tissue engineering and materials science offer new opportunities to manipulate the stem cell niche and either facilitate expansion/differentiation of endogenous stem cells or deliver exogenous cells. Resorbable scaffolds can be exploited for controlled delivery and release of small molecules, growth factors and peptides. Conversely, scaffolds can be designed that are able to capture unwanted tissue debris that might impede repair. Hydrogels that can undergo controlled sol-gel transitions could be used to release stem cells once they have integrated within the target tissue.

Although most of the new clinical applications of stem cells have a long lead time, applications of stem cells in drug discovery are available immediately. Adult tissue stem cells, ES cells and iPS cells can all be used to screen for compounds that stimulate self-renewal or promote specific differentiation programmes. Finding drugs that selectively target cancer stem cells offers the potential to develop cancer treatments that are not only more effective, but also cause less collateral damage to the patient's normal tissues than drugs currently in use. In addition, patient-specific iPS cells provide a new tool to identify underlying disease mechanisms. Thus stem cell-based assays are already enhancing drug discovery efforts.

Amid all the hype surrounding stem cells, there are strong grounds for believing that over the next 50 years our understanding of stem cells will revolutionize medicine. One of the most exciting aspects of working in the stem cell field is that it is truly multidisciplinary and translational. It brings together biologists, clinicians and researchers across the physical sciences and mathematics, and it fosters partnerships between academics and the biotech and pharmaceutical industries. In contrast to the golden era of developmental biology, one of stem cell research's defining characteristics is the motivation to benefit human health.

We thank all members of our lab, past and present, for their energy, fearlessness and intellectual curiosity in the pursuit of stem cells. We are grateful to Cancer Research UK, the Wellcome Trust, MRC and European Union for financial support and to members of the Cambridge Stem Cell Initiative for sharing their ideas.

More here:
The therapeutic potential of stem cells - PMC


The Progression of Regenerative Medicine and its Impact on Therapy …

Friday, September 13th, 2024

Clin Transl Sci. 2020 May; 13(3): 440–450.

1 Division of Cardiac Surgery, University of Ottawa Heart Institute, Ottawa, Ontario, Canada

2 School of Human Kinetics, University of Ottawa, Ottawa, Canada

3 Department of Cellular & Molecular Medicine, University of Ottawa, Ottawa, Canada

Received 2019 Nov 6; Accepted 2019 Nov 7.

Despite regenerative medicine (RM) being one of the hottest topics in biotechnology for the past 3 decades, it is generally acknowledged that the field's performance at the bedside has been somewhat disappointing. This may be linked to the novelty of these technologies and their disruptive nature, which has brought an increasing level of complexity to translation. Therefore, we look at how the historical development of the RM field has changed the translational strategy. Specifically, we explore how the pursuit of such novel regenerative therapies has changed the way experts aim to translate their ideas into clinical applications, and then identify areas that need to be corrected or reinforced in order for these therapies to eventually be incorporated into the standard of care. This is then linked to a discussion of the preclinical and postclinical challenges remaining today, which offer insights that can contribute to the future progression of RM.

In 1954, Dr. Joseph Murray performed the first transplant in a human when he transferred a kidney from one identical twin to another.1 This successful procedure, which would go on to have a profound impact on medical history, was the culmination of >50 years of transplantation and grafting research. In the following years, organ replacement became more widespread but also led to a plateau in terms of landmark successes.1 The technology was working, but limitations were already being encountered; the most prominent of them being the lack of organ availability and the increasing need from the aging population.2 During the same time period, chronic diseases were on the rise and the associated process of tissue degeneration was becoming evident. Additionally, the available clinical interventions were merely capable of treating symptoms, rather than curing the disease, and, therefore, once a loss of tissue function occurred, it was nearly impossible to regain.3 Overall, the coupling of all these factors that took place in the 1960s and 1970s created urgency for disruptive technologies and led to the creation of tissue engineering (TE).

TE can be described as a field that applies the principles of engineering and life sciences toward the development of biological substitutes that restore, maintain, or improve tissue function or a whole organ.4 TE is considered to be under the umbrella of regenerative medicine (RM) and, according to Dr. Heather Greenwood et al., regenerative medicine is "an emerging interdisciplinary field of research and clinical applications focused on the repair, replacement or regeneration of cells, tissues or organs to restore impaired function resulting from any cause, including congenital defects, diseases, trauma and aging."5 It uses a combination of technological approaches that moves it beyond traditional transplantation and replacement therapies. These approaches may include, but are not limited to, the use of soluble molecules, gene therapy, stem cell transplantation, tissue engineering, and the reprogramming of cell and tissue types.3, 6, 7 A summary of the recent history of RM is presented in the figure below.

A summary timeline of the recent history of regenerative medicine (RM). Selected milestones in the development of RM are presented starting from the 1950s all the way up to the present day.

Although RM may have seemed novel, the principles of regeneration are as old as humanity and are found in its many cultures.8 A common example used is the tale of Prometheus, which appeared in the 8th century BCE. Prometheus, an immortal Titan in Greek mythology, stole fire and gave it to humanity for them to use, defying the gods in consequence. As punishment, Zeus decreed that he was to be bound to a rock where an eagle would feast on his liver every day and said liver would regenerate itself every night, leading to a continuous loop of torture.9 RM came about at the time it did, not only because of the combining factors mentioned above, but also because researchers had been successfully keeping tissue alive in vitro and understanding the biological processes involved in regeneration and degeneration. Consequently, possible therapeutic outcomes came to fruition. Since the arrival of TE and RM, strides made on the benchside have been ever increasing, with now >280,000 search results on PubMed relating to regeneration. Discoveries and advances made by cell/molecular biologists, engineers, clinicians, and many more led to a paradigm shift from treatment-based to cure-based therapies.10 In addition to Greenwood's definition, RM's arsenal now contains controlled release matrices, scaffolds, and bioreactors.5, 8 Despite this impressive profile on the benchside, RM has so far underperformed in terms of clinical applications (i.e., poor therapy translation).8 Simply put, a disappointing number of discoveries are making it through clinical trials and onto the market.11 Although some experts say that the field is reaching a critical mass in terms of potential therapies and that we will soon see results, others, like Dr. Harper Jr. from the Mayo Clinic in Minnesota, say that "the transformative power of RM is well recognized, but the complexity of translating isn't."7, 8, 12

This brings us to the subject matter of the present paper: RM and translation. The goals of this historical review are twofold. The first is to understand how RM, over the past 50 years or so, has changed the way discoveries/new technologies are transferred to the clinic. How has the translational strategy changed in response to these new therapies? The second is to identify challenges that have led to RM's modest performance at the bedside. Some articles have already documented these but have focused on the clinical and postclinical factors; whereas those will be briefly discussed here, the focus will be on preclinical factors.13 To accomplish these objectives, we will begin by summarizing the historical development of RM (which has been extensively documented by other works2, 3, 14, 15), followed by a detailed look at the definition of translational medicine (TM). With this background information established, we then look at the various preclinical and clinical impacts of RM on TM, as well as some of its effects on the private sector. Limiting factors of the field are then described, again focusing on those that are preclinical. This endeavor was initiated via a librarian-assisted literature search for original research and historical documentation of the field of RM and other related subjects. The documents were then screened for relevance and the analyzed information was categorized into the themes discussed below. Conclusions were then drawn based on the interplay among these themes.

As mentioned, the idea of regeneration first started in myths and legends. This is logical because, as Drs. Himanshu Kaul and Yiannis Ventikos put it, myths shape ideas, and ideas then shape technologies.8 In addition to the tale of Prometheus, there are many others. For example, there is the Hindu myth of Raktabeej, whose blood drops could each form a clone of himself, or the Indian story of the birth of the Kaurava brothers, where pieces of flesh were grown in pots and treated with herbs to grow full-sized humans.8 The idea of regeneration has persisted throughout history and started to become a possibility in the early 1900s when scientists like Alexis Carrel (who invented the technique of cell culture) were finally able to keep cells and tissues alive outside of the body. This allowed them to study the mechanisms of cell renewal, regulation, and repair.8 In addition, studying regeneration goes hand-in-hand with developmental biology. Seminal work in experimental embryology began in the 1820s with the detailed description of the differentiation of embryonic germ layers.16 An increased understanding of basic embryological mechanisms led to Hans Spemann's Nobel Prize for his theory of embryonic induction, a field that was further elaborated by his students and others, advancing it toward the possibility of cloning and demonstrating how development and regeneration are intimately linked.16 Before this era, the study of regeneration was done through the study of animals, with scientists studying the phenomena in serpents, snails, and crustaceans, for example.17, 18 However, the modern study of regeneration is said to have started with Abraham Trembley's study of the hydra, which showed that it was possible for an entire organism to regenerate from its cut appendage.19 The 18th century on through to the 19th century is also when scientists became intrigued by the amphibian newts and axolotls for their astonishing regenerative capabilities, which are still used today as the "gold standard" models for studying regeneration along with certain fish, such as the zebrafish.20

Now, although the term RM as we know it today would only be coined in 1999 by William Haseltine, the field itself started in the late 1970s in the form of TE (pioneered by Drs. Joseph Vacanti and Robert Langer) in the city of Boston.2, 14, 21 To address the need for novel therapies, biomedical engineers, material scientists, and biologists at Harvard and MIT started working on regenerating parts of the largest and simplest organ of the human body: the skin. In 1979, the first cell-based TE product appeared and was named Epicel.15 Developed by Dr. Howard Green et al., this technology consisted of isolating keratinocytes from a skin biopsy and having them proliferate outside of the body to make cell sheets that were then used as an autologous treatment for burn patients.15 Another famous product (this time allogeneic), developed in 1981, was Apligraf, a composite skin invention capable of rebuilding both the dermis and epidermis of skin wounds.15 With these two therapies and many more being created, TE in the 1980s was booming. At the time, researchers were also developing therapies for cartilage regeneration.

Once the 1990s came around, TE strategies were combined with stem cells (which had just been discovered) to create RM.3, 8 At that time, RM was a hot topic. After the first products for skin were commercialized, scientists became more enthused and started trying other tissues.15 Startup companies were popping up left and right, private funding was abnormally high, and public hype was gaining lots of traction. However, governments were not so quick to fund this research and took their time before making decisions, whereas private investors saw this field as very promising and thought it was their ticket to the top.14 Given that 90% of the funding of RM came from the private sector, this greatly influenced the direction of the research and its timeframe.14 People were simply trying to copy tissue formation rather than understanding it, so as to make the development process quicker.3 As a result, many of the technologies that initially looked promising failed in clinical trials or on the market.

These disappointing results coupled with the dot-com crash meant that by the end of 2002, the capital value of the industry was reduced by 90%, the workforce by 80%, and out of the 20 US Food and Drug Administration (FDA) products with clinical trials, only 4 were approved and none had any success.22 This phenomenon has been extensively studied and, according to Lysaght and Hazlehurst, five factors contributed to the industry crash22:

1. The products were not much better than the existing treatment options, so making the switch was not worth it for clinicians.

2. Even if the science was good, low-cost manufacturing procedures did not exist.

3. The approval process for these novel therapies was unrealistically challenging and the regulatory cost was too high.

4. Companies lacked the skill to market their new products.

5. The reimbursement strategies were unclear.

Despite these events, the industry had 89 firms survive the crash and stem cell research was not affected. In fact, from 2000 to 2004, the number of companies increased but the number of jobs decreased, which means investors were supporting research in basic and applied science with smaller firms that were lower risk, and by 2004, the field was dominated by startup companies.22 Before the crash, RM was primarily happening in the United States, but in 2004, other countries like the United Kingdom and Japan started catching up.22 The industry slowly started growing again. In 2006, the first engineered tissue (bladder) was implanted, and by 2008, commercial successes were being achieved.3, 10 As an example, hematopoietic stem cell transplants were approved and are now a curative treatment for blood disorders and other immunodeficiencies.7 Now, the RM field had ironically regenerated itself.3 It has gained increased governmental attention (federal funding has increased) and has been recognized as being at the forefront of health care.7, 22 There is once again intense media coverage that is raising public expectations.23 The number and variety of clinical trials is also increasing everywhere.23 According to Allied Market Research, RM is predicted to be worth US $67.5 billion by 2020.10

Unfortunately, regardless of these seemingly cheerful notes, the fact remains that cell therapies remain experimental, except for the aforementioned hematopoietic stem cell treatments.13 The market for RM is still small and will remain so until RM proves that its therapies are better and cheaper than the existing ones.15 Yet, the pressure for clinical translation is increasing through the needs of the population, investors that are eager to make a return on their investments, and scientists who believe that these technologies are the future.23 Moreover, there has been a growing appreciation of the magnitude and complexity of the obstacles the field is facing, but it remains to be seen how they will be solved; although initial steps have already been taken, which will be discussed further below.

Now that we have established the background for RM, there needs to be a proper understanding of TM before conclusions on how the two are related can be drawn, which is the purpose of the following section.

The European Society for Translational Medicine (EUSTM) has defined TM as an interdisciplinary branch of the biomedical field supported by three main pillars: benchside, bedside, and community. The goals of TM are to combine disciplines, resources, expertise, and techniques within these pillars to promote enhancements in prevention, diagnosis, and therapies.24 TM's goals can be split into two categories: T1 and T2. T1 is to apply research from bench to bedside and back, whereas T2 is to help move successful new therapies from a research context to an everyday clinical context.25 In other words, TM is a medical practice explicitly devoted to helping basic research attain clinical application. Conceptual medical research, preclinical studies, clinical trials, and implementation of research findings are all included within TM.26

Between basic science and the clinic is an area that is popularly referred to as the "valley of death."25 This gap is fraught with not only scientific obstacles (like an unknown molecular mechanism), but social and economic ones as well. This is where many novel ideas die and, consequently, companies are wary of going through this valley for fear of wasted financial resources.25 For these reasons, many of the approved drugs we get now are derivatives of others that have been previously approved.25 This is the area that TM seeks to impact, to be the bridge between idea and cure, and to act as a catalyst to increase the efficiency between laboratory and clinic.25, 26 The term "bench to bedside and back" is commonly used. The cost of development for a therapy is very high (estimated at US $800 million to $2.6 billion for a drug) because of increasing regulatory demands and the complexity of clinical trials, among other factors. TM aims to streamline the early development stages to reduce the time and cost of development.24

What will be important to note for the discussion below is that TM focuses more on the pathophysiological mechanisms of a disease and/or treatment and favors a trial-and-error method rather than an evidence-based method. Dr. Miriam Solomon argues in her book chapter entitled "What is Translational Medicine?" that most medical innovations proceed unpredictably, with interdisciplinary teams and with shifts from laboratory to patient and back again, and that freedom of trial-and-error is what will lead to more therapeutic translation.25 Furthermore, for years, TM did not have any technical suggestions for improving translation, only two broad categories that were claimed to be essential for translatability: improving research infrastructure and broadening the goals of inquiry. This discrepancy has since been identified and efforts have been made to address it. For example, the EUSTM provided a textbook called Translational Medicine: Tools and Techniques as an initiative to provide concise knowledge to the field's stakeholders.24

Presently, TM has attracted considerable attention, with substantial funding and numerous institutions and journals committed to its cause.25, 27 But before this, its arrival had to be incited. TM emerged in the late 1990s to offer hope in response to the shortcomings of evidence-based medicine and basic science research, such as the unsatisfactory results of the Human Genome Project.25 There were growing concerns that the explosion of biomedical research was not being translated in a manner proportionate with the expenditures and the growing needs of patients.27 The research had ignored what it took to properly disseminate new ideas.25 The difficulties of translation from bench to bedside have always been known, but what is different with TM is the emphasis now placed on translation and the recognition of how difficult and multifaceted translating technologies is.25 Over the past 20 years, the role, power, and research volume of the field have increased, and TM is now a top priority for the scientific community.26 TM is also often used as a common justification for research funding, conveying to politicians and taxpayers the message that research activities ultimately serve the public, which is also why it appeals to today's generation of students who want to work on big, real-world problems and make a meaningful difference.28, 29

As already mentioned, RM therapies are proving difficult to translate to the clinic.11 Although the basic research discoveries are never ceasing (books such as New Perspectives in Regeneration by Drs. Heber-Katz and Stocum30 and articles such as "Tissue Engineering and Regenerative Medicine: Past, Present, and Future" by Dr. António Salgado et al.31 provide comprehensive summaries of these advancements), therapy approval is practically nonexistent.30, 31, 32 This may be due, in part, to a tendency for people to blame the lack of translation of their technologies on extrinsic factors, thus removing responsibility.11 Additionally, the failures are not being studied. For example, stem cell research looks good in small animals but often fails in larger ones and then does not progress beyond phase II or III clinical trials because no benefits are found, and historically we have not been exploring why.11, 32 Consequently, the next therapies that are developed are improved by guesses rather than through a better understanding of the disease in question (Figure).11

The negative feedback cycle currently present in most discovery and development processes of regenerative medicine. This cycle obstructs progression of the field.

RM has the potential to impact not only the quality of healthcare but also the economy, because the costs that could be avoided with curative therapies are immense.33 For this reason, analyzing the impact of RM on the translational strategy over time can help identify aspects that should be encouraged or discouraged to drastically improve translation. Reflecting on this history can not only help us avoid past mistakes but also aid in redirecting the field to a once-productive path.34 In the following section, the preclinical impact of RM on TM is discussed, focusing on the shift from evidence-based medicine to trial and error, the role of the basic scientist, and the emergence of the multidisciplinary approach. Clinical impact is also covered, concentrating on regulatory modifications. Last, changes in the private sector are considered as the shift in business models is detailed.

Because the RM field essentially comprises new ideas on cell renewal and tissue healing, it is logical that most of its impact would be on the preclinical side, as this is where ideas are tested, fine-tuned, and developed. Coincidentally, it is also where the translational strategy begins. Considering certain aspects early in the developmental process, such as realistic applications and ease of use, can help facilitate translation. RM's influence on TM can thus be separated into the three themes below.

Before the late 20th century, the majority of medical research was done using evidence-based medicine. This is a systematic approach to solving a clinical problem that integrates the best available research evidence with clinical signs, patient values, and individual clinical experience, all to support scientific decision making and research progression.35 As such, evidence-based medicine favors clinical trials and does not allow for much tinkering; only that which possesses high-quality clinical evidence is to be pursued. This has its limitations, as it devalues mechanistic reasoning and both in vitro and animal studies. Therefore, evidence-based medicine may have played a role in RM's downfall in the early 2000s. TE in the 1990s was using evidence-based medicine and was simply trying to copy tissue formation rather than trying to understand it.3 That most of the funding was coming from the private sector probably did not help either: investors saw TE as an opportunity for quick returns on their investments, so therapies were rushed to clinical trials, which led to inconsistent results.14, 25, 32

Evidence-based medicine also obscured the need for different methods of discovery. After RM's decline and the emergence of the idea of TM, a trial-and-error method was adopted. This technique favors a team effort and mechanistic reasoning, and seeks to change the social structure of research.25 Although clinical trials are still deemed important, the trial-and-error method recognizes that an idea needs first to be explored and should not necessarily require the confirmation of a hypothesis.11, 25 This new method is based more on facts and has stimulated a more informed dialogue among stakeholders (whereas the confirmation or refutation of a hypothesis cannot always be made relevant to people outside the field). This, in turn, can help the regulatory agencies reduce the burden on their review boards in the evaluation and acceptance of novel strategies.11 Therefore, the failures of RM helped highlight the boundaries of evidence-based medicine and, combined with the rising emphasis placed on TM in the 1990s, assisted in defining the trial-and-error method.

Another thing that has changed with the historical development of RM is the role of the basic scientist. Please see Figure for a summary of the differences between the traditional and modern scientist discussed in this review. Traditionally, basic scientists have worked with a discovery mindset, but without noticeable regard for potential therapeutic applications. It has been noted that RM has made us realize how important it is to take the practical and industrial aspects (like cost, for example) into account even at the basic research level.7, 14 The needs of the end users must be considered during the developmental phase if RM is to establish a proper foothold within the market.15 In view of this, over the past two decades, medical philosophy has changed to encourage basic scientists to communicate more with clinicians and vice versa. Experts like Barry Coller, MD, Vice President for Medical Affairs and Physician-in-Chief at the Rockefeller University Medical Center, have identified various skills that a basic scientist must possess if translational research is to be improved.26, 28 Additionally, other researchers have commented that more and more basic scientists are motivated to have an impact on global health, and this passion can be a source of inspiration that helps fuel interdisciplinary cooperation.28 Efforts have also been made to familiarize basic scientists with regulatory requirements; for example, the FDA publishes guidance documents with recommendations on how to address them.36 Despite this, much remains to be done, as there is still a lack of TM professionals and the current research environment hampers cooperation between experts (e.g., specialization is still encouraged, and achievement awards are individualized).26, 28

A comparison between the traditional and modern scientist. Although traditional scientists are more hypothesis-driven and rigid in terms of research methodology, applying the concepts shown above can generate the modern scientist, who is better suited to the translation of regenerative therapies. RM, regenerative medicine.

An additional point that can be argued is that because RM got basic scientists more involved in the translational process, it has consequently made them more realistic.37 As already mentioned, early RM therapies comprised complex cell therapies that were not fully understood. From 2004 onward, the field diversified to include research into simpler acellular products.38 Other avenues, such as induced pluripotent stem cells, endogenous repair, nanotechnology, and regenerative pharmacology, are also being explored.37, 39, 40, 41 Increasingly, experts are trying to spread this message; for instance, in the field of cardiology, Dr. Mark Sussman, a world-renowned cardiac researcher, and his colleague Dr. Kathleen Broughton at the San Diego State University Heart Institute and the Integrated Regenerative Research Institute recently stated that "[a]fter over a decade of myocardial regenerative research studies, the initial optimism and enthusiasm that fueled rapid and widespread adoption of cellular therapies for heart failure has given way to more pragmatic, realistic, and achievable goals."9

The last preclinical impact of RM to be discussed is the arrival of the multidisciplinary approach. This now-widespread notion holds that to improve translation and accelerate technology development, it is better to have a team composed of experts from multiple disciplines, because various backgrounds and schools of thought can be combined, with each contributing to a project in a different way.25, 39 What has surely incited its evolution is that RM inherently requires contributions from biologists, chemists, engineers, and medical professionals. This need has led to the formation of institutions that house all the required expertise under the same roof (such centers have increased in number since 2003), which promotes more teamwork between laboratories and clinics.28 Dr. Jennifer Hobin et al.28 state that bringing dissimilar research expertise together in close proximity is key to creating an environment that facilitates collaboration. In addition, it could be said that these collaborative environments help minimize the flaws of medical specialization, which arose in the second half of the 19th century, when the ideological view that the human body can be categorized, combined with the rapid arrival of new medical technologies, led to the specialization of medical practice and, in turn, to the segregation of medical professionals from each other and from the patient.42 Coincidentally, if one recalls the definition of TM, it, along with the trial-and-error method, suggests that improved research infrastructures and team efforts can facilitate the translation of therapies.

We now look at the influence that RM has had on the clinical side of therapy development. Before the subject is discussed, it is important to note that the reason clinical research has been affected is the uniqueness of RM therapies. Their novelty fits neither the current regulatory process nor established clinical trial practice, and although the latter has yet to adapt, the regulatory sector has attempted over the years to facilitate the journey from bench to bedside.7, 43, 44

Initially, when RM was in its infancy, its therapies were regulated by criteria originally developed for drugs; as we have seen, this was identified as a factor that led to its downfall. Now, in 2019, several regulatory changes have been implemented to rectify this. Input from other countries has helped. As mentioned above, RM started in the United States, but after the crash, other countries like the United Kingdom and Japan caught up, and their less stringent regulatory procedures have allowed them to better adapt the framework for these new therapies.22 In 2007, the European Union passed the Advanced Therapy Medicinal Products Regulation, which defined regenerative therapies, categorized them, and provided them with separate regulatory criteria for advanced approval.13, 43 In 2014, public pressure and researcher demands led Japan to enact three new laws: the Regenerative Medicine Promotion Act; the Pharmaceuticals, Medical Devices, and Other Therapeutic Products Act; and the Act on the Safety of Regenerative Medicine. These unprecedented national policies now help therapies gain accelerated and conditional approval to better conduct clinical trials and to better meet the demands of patients.7, 13, 44, 45 During this time, the United States has not stood idle. In 2012, the US Congress passed the FDA Safety and Innovation Act (FDASIA), which expanded the existing Accelerated Approval Pathway to include breakthrough therapies, a category created for new emerging technologies, including regenerative strategies.13, 46 Drs. Celia Witten, Richard McFarland, and Stephanie Simek provide a well-written overview of the FDA's efforts to accommodate RM.36 By and large, it is safe to say that RM has spurred a drastic change in traditional regulatory pathways, not only to better manage these novel therapies but also to put more weight on efficient translation.

It is also important to discuss changes in the private sector, because manufacturing and marketing are, and will remain, among the greatest obstacles facing RM, and, once again, the novelty of the field is responsible. Although the bulk of the problems remain, there has nonetheless been a change in business strategies that is worth appreciating.

Throughout its history, RM research has been carried out by academic research institutions or small and medium-sized enterprises.23, 47 With this in mind, the business model used in the health industry varies depending on the type of company. The royalty model is the one primarily used by biotech companies.8, 14 Here, a business will develop a therapy up to the clinical stage and then hand it off to a company with more resources (usually a pharmaceutical one) that can carry out the larger-scale studies. With this model, biotech companies make money simply through royalties, which carries both pros and cons (Figure).

A comparison of both the royalty and integrated business models used by private companies in the biomedical industry. The pros and cons are listed with the assumption that they are for a startup company in regenerative medicine.

Because the market for regenerative therapies is not currently big enough for the royalty model, startups have had to shift to an integrated model in which the discovery, development, approval, and manufacturing of a new therapy are all done internally (which is unusual for small startups).8 Using this strategy, companies can reap all the rewards but obviously also assume all the risk.

The market for regenerative therapies has so far been small enough that smaller firms do not have to manufacture large quantities of their products (as they do in the pharmaceutical industry), and they can start making money more quickly.8 Whether the business model will change again as the market grows, or whether the original startups will grow in proportion, remains to be seen.14 What is to be highlighted here is that those who seek to commercialize regenerative therapies have had to shift to an integrated business model (which was not previously the norm for smaller ventures), and this has affected translation by giving them more influence over how their therapies are developed, marketed, and manufactured.

Having detailed RM's relationship with the translational strategy and the aspects that changed in conjunction with the field's development, the remainder of the review summarizes the challenges that are contributing to RM's modest performance in the clinic.

With increased funding and a growing number of committed institutions, many countries have become increasingly invested in RM's success. For example, the US Department of Health and Human Services recognizes RM as being at the forefront of healthcare.7 Likewise, the UK government has identified RM as a field in which it can become a global leader and that will generate significant economic returns.44 The literature indicates that RM is reaching a critical mass and is on the verge of a significant clinical transition. The optimism is as high as it has ever been, and the rush to succeed in clinical trials is equally felt.23 However, the bottom line is that clinical and market performance is still very poor. Given that a gold standard for treatment in RM remains elusive, clinicians are often ill-informed about current applications, and studies on safety and efficacy are lacking.23, 44, 48, 49 The National Institutes of Health estimates that 80-90% of potential therapies run into problems during the preclinical phase.28 Naturally, scientists have offered various explanations for these results, such as deficiencies in translational science and poor research practices in the clinical sciences.50 Shockingly, a 2004 analysis identified 101 articles by basic scientists that clearly promised a product with major clinical application, and yet 20 years later, only 5 were licensed and only 1 had a major impact.50 Therefore, it is easily deducible that many challenges still lie ahead. The perceived risk-benefit ratio remains high and, as a consequence, clinical trials have been proceeding with caution.13, 23, 33 Numerous reviews have been published on these challenges, but with an emphasis on those relating to the clinical phase.11, 13, 22, 51 Although these will be summarized below, the present study highlights the identification and analysis of the preclinical challenges. Please see Figure for a summary of the preclinical and clinical obstacles discussed herein.

Summary of the preclinical and clinical challenges discussed. Even though preclinical obstacles to the translation of regenerative medicine therapies are more elusive, they are just as significant as their clinical counterparts.

To begin, a possible explanation for the preclinical obstacles being underrepresented in the literature is the pliability of the phase itself. Whereas the clinical phase is composed of numerous subphases and strict protocols, preclinical research is much less structured, with less oversight. Although rigorous scientific method is applied to the experiments themselves, which usually consist of in vitro followed by in vivo work, the basic scientist has more flexibility regarding experimental organization, structure, and backtracking, thus making explicit challenges harder to recognize.

Some researchers have nevertheless attempted to do so. For example, Dr. Jennifer Hobin et al. have identified the three major risks associated with RM technologies as tumorigenicity, immunogenicity, and risks involved with the implantation procedure.13 The first two relate to arguably the largest preclinical challenge, which has been identified as the need for a better understanding of the mechanism of action.12 Although the difficulty of identifying a mechanism is appreciated in the scientific community, it is imperative that improvements in this area are made, as it will affect application and manufacturing decisions. Hence, greater emphasis on identifying the mechanism(s) of action will need to be adopted by basic scientists who are looking to develop a technology.

Another significant preclinical challenge is the lack of translation streamlining for basic scientists. Although basic scientists have become more involved in the translational process and more pragmatic over the years, there is, in general, still a lack of incentive and available resources to help a scientist translate their research. Academic faculty members are given tenure and promotion based on funding success (grants) and intellectual contributions (publications).28 Thus, researchers who have received money to conduct research and publish their work on a promising new therapy might stop short of translation, as there may be no additional recognizable accomplishment or motivation for such an endeavor. For example, Jennifer Hobin et al. described the case of Dr. Daria Mochly-Rosen of Stanford University's Translational Research Program, who sought help for an interesting idea for a heart rate regulation therapy.13 She was turned down by numerous companies that found the clinical challenges too daunting, and her colleagues offered no support but rather discouraged her from pursuing the idea, saying that it would not be worthwhile for her career.

Last, a very important preclinical challenge that has gained recognition over the past few years is the lack of appropriate preclinical testing models. It is often reported that novel therapies do well in the laboratory but then fail in larger animal studies or clinical trials. This is partly due to a lack of mechanistic insight, but also to a shortage of appropriate in vitro, in vivo, and ex vivo models.9, 36 With properly validated preclinical models, we would be better able to gauge the performance of novel therapies and predict their future clinical success; instead, we are misidentifying the potential of therapies. Notably, the lack of appropriate models also contributes to the difficulty in obtaining reliable data on the underlying mechanism(s) of action of RM therapies, as differences may exist between the preclinical and clinical settings.

As far as clinical challenges go, they are numerous. Stem cell trials in particular have received criticism for a perceived lack of rigor and of controlled trials.23 Related to this, a potent point that has arisen over the past few years is the absence of long-term follow-up studies in clinical trials, which are clearly necessary to establish the safety and efficacy of these interventions.13, 33 Unfortunately, they are costly and time-consuming. Efforts are nonetheless being made to overcome these obstacles. For example, in 2015, the Mayo Clinic released an RM build-out perspective offering a blueprint for "the discovery, translation and application of regenerative medicine therapies for accelerated adoption into standard of care."7 Institutions such as Canada's Centre for Commercialization of Regenerative Medicine have been launched to help researchers mitigate the risks of cell therapy development by offering technical as well as business services.12, 51 Experts are also stepping up; for example, Drs. Arnold Caplan and Michael West proposed a new regulatory pathway that incorporates large post-market studies into clinical trials.33

In terms of manufacturing, it is difficult to engage industry because the technology needed to produce RM therapies at an industrial level does not yet exist; scale-out and automated production methods for the manufacturing of regenerative therapies are needed.7, 10, 12, 23, 52, 53 This challenge stems from the complexity and natural intrinsic variation of the biological components, which makes long-term stability difficult to achieve and increases manufacturing costs.13, 44 If RM therapies could establish their superiority over conventional treatments, this would potentially alleviate costs and increase the likelihood of reimbursement, but that remains to be seen.13 A hot topic at the moment is the choice between autologous- and allogeneic-based products, which would entail either a decentralized or a centralized manufacturing model, respectively (although hybrid models have been proposed).7, 23, 54 Autologous products, being patient-specific, have the advantages of smaller startup costs, simpler regulations, and point-of-care processing.47 Allogeneic products, for their part, are more suitable for an off-the-shelf product and a scale-out model, and quality controls can be applied in bulk.47, 54 Dr. Yves Bayon et al.51 provide a thorough description of this topic while indicating areas that have been identified for improvement.

As mentioned above, regulatory challenges are what have been most addressed thus far, through scientific and public pressure. Moving forward, the goal identified by expert think-tank sessions is to harmonize RM-specific regulations across agencies and countries.7, 36 Reimbursement is the last of the regulatory challenges to be considered. For RM treatments to become broadly available, reimbursement is a necessity, and both public and private healthcare systems need to determine how regulations will be modified for the disruptive therapies coming down the pipeline.13, 23, 44

RM has had an undeniable influence on the process of bench-to-bedside research. Preclinically, it has helped identify the limitations of evidence-based medicine and contributed to the paradigm shift toward the trial-and-error method. Likewise, the field has changed its mindset, and the basic scientist is adopting new responsibilities, becoming more motivated, pragmatic, and involved in TM, rivaling researchers in the applied sciences. The multidisciplinary approach has also been promoted by RM over the years, and institutions dedicated to fostering collaborative RM research have increased in number. Clinically, regulatory pathways that were developed for drugs and biomedical devices, and which had been in place for decades, have been adapted to aid RM's disruptive technologies, leading to new guidelines that favor translation. In the private sector, the novel nature of RM therapies has led startup companies to use an alternative business model that gives them top-to-bottom authority over the development of their products; it remains to be seen whether the business strategy in place will be sufficient as the industry grows.

If the translation of RM therapies is to be improved, many of the challenges to overcome lie in the early stages of therapy development, such as identifying the mechanism(s) of action, validating preclinical experimental models, and incentivizing translational research for basic scientists. In the later stages, regulatory changes have been made, but much still needs to be addressed. This includes the adoption of clinical trials that are more rigorous and include long-term follow-up studies, the development of appropriate manufacturing technology, the synchronization of regulatory agencies, and a clear plan for reimbursement strategies. Once again, these challenges have been discussed in greater detail in previous works.2, 3, 7, 12, 13, 15, 22, 23, 26, 31, 38, 44, 48, 51, 52 While the field may be at a tipping point with many challenges remaining, the fact that translation has been influenced in a positive way gives promise to the future progression of RM therapies.

This work was supported by a Collaborative Research Grant from the Canadian Institutes of Health Research (CIHR) and the Natural Sciences and Engineering Research Council (NSERC; CPG158280 to E.J.S.), and the Hetenyi Memorial Studentship from the University of Ottawa (to E.J.).

All authors declared no competing interest for this work.

