| Olga Fink
Assistant Professor, Intelligent Maintenance and Operations Systems, Swiss Federal Institute of Technology in Lausanne
| Thomas Hartung
Professor, Bloomberg School of Public Health, Johns Hopkins University
| Sang Yup Lee
Senior Vice-President, Research; Distinguished Professor, Korea Advanced Institute of Science and Technology
| Andrew Maynard
Professor, School for the Future of Innovation in Society, Arizona State University
Breakthroughs in artificial intelligence (AI) – such as deep learning, generative AI and other foundation models – enable scientists to make discoveries that would have been near-impossible otherwise and accelerate the rate of scientific discovery more broadly.
Over the past few years, there has been a transformation in how AI is used in scientific discoveries. From DeepMind’s AlphaFold – an AI system that accurately predicts the 3D structures of proteins – to discovering a new family of antibiotics and materials for more efficient batteries, the world is on the cusp of an AI-driven revolution in how new knowledge is discovered and used.1,2,3 According to a recent report from the United States President’s Council of Advisors on Science and Technology, “AI has the potential to transform every scientific discipline and many aspects of the way we conduct science”.4
While AI has been used in research for many years, recent advances in deep learning, generative AI and foundation models are transformative. Scientists are building and using large language models to mine scientific literature, working with AI chatbots to brainstorm new hypotheses, creating AI models capable of analysing vast amounts of scientific data, and using deep learning to make discoveries. They are also exploring how AI and robotics can be integrated with lab-based methods to accelerate research in innovative ways.
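As a simple illustration of the literature-mining idea – reduced here to keyword-based retrieval rather than the large language models described above – the sketch below ranks a handful of made-up abstracts against a research query. The abstracts and query are placeholders, not real publications.

```python
# A minimal sketch of literature retrieval: TF-IDF similarity as a stand-in
# for the embedding- or LLM-based mining described above. The abstracts and
# query are illustrative placeholders, not real publications.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Deep learning model predicts protein folding from amino acid sequence.",
    "Graph neural networks screen candidate solid electrolytes for batteries.",
    "Generative chemistry model proposes novel antibiotic scaffolds.",
    "Survey of reinforcement learning for robotic laboratory automation.",
]

query = "AI-guided discovery of new antibiotic compounds"

# Vectorize the corpus and the query in the same term space.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts)
query_vec = vectorizer.transform([query])

# Rank abstracts by cosine similarity to the query.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {abstracts[idx]}")
```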
As a result, AI is emerging as a transformative general-purpose technology in scientific research that can unearth discoveries that would otherwise have remained hidden. At the current rate of innovation, these capabilities are likely to lead to advances across a broad range of fields.
Scientists predict that general-purpose AI will transform every part of the scientific discovery process over the next few years. Researchers can draw on past findings to envision new possibilities – AI allows connections to be made and inferences to be drawn that lie beyond the capacity of human minds alone.
Ethical considerations and challenges remain – the extent of the risk to individual privacy, autonomy and identity and the possibility of societal disruptions caused by these powerful technologies are not yet fully known.5 Additionally, environmental impacts resulting from the energy consumption and resource extraction needed to sustain AI growth must also be considered.
Equally, more research is needed to manage the impact of the technology effectively.6 For example, tackling inherent biases in datasets and enhancing the reliability of model-generated content is crucial to scientific integrity. Ensuring ethical data use and safeguarding research subject privacy require stringent security measures. Navigating intellectual property rights, particularly ownership and copyright of model-generated content, is essential to a collaborative environment and must be addressed.
Read more: For more expert analysis, visit the AI for scientific discovery transformation map.
| Olga Fink
Assistant Professor, Intelligent Maintenance and Operations Systems, Swiss Federal Institute of Technology in Lausanne
| Lisette van Gemert-Pijnen
Professor, Persuasive Health Technology, University of Twente
| Dongwon Lee
Professor, Pennsylvania State University
| Andrew Maynard
Professor, School for the Future of Innovation in Society, Arizona State University
| Bastiaan van Schijndel
Innovation Manager, ZORGTTP
Access to increasingly large datasets – especially when using AI – transforms research, discovery and innovation. However, concerns around privacy, security and data sovereignty limit the degree to which high-value data can be shared and used nationally and globally. An emerging and powerful suite of technologies makes it possible to share and use sensitive data in ways that address these concerns.
In recent years, there has been growing interest in “synthetic data”.7 These data replicate the patterns and trends in sensitive datasets but do not contain specific information that could be linked to individuals or compromise organizations or governments. Powered by advances in AI, synthetic data removes many of the restrictions to working with sensitive data and opens new possibilities in global data sharing and collaborative research on biological phenomena, health-related studies, training AI models and more. Even so, synthetic data generated at a national level can still expose aggregate patterns, such as health trends in the source nation, and these concerns will need to be addressed.
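The following is a minimal sketch of the underlying idea: a simple generative model is fitted to an illustrative “sensitive” numeric dataset and then sampled to produce records that preserve aggregate patterns without corresponding to any real individual. Production systems rely on far richer models and formal privacy evaluation; the columns and values below are assumptions for illustration only.

```python
# A minimal sketch of synthetic data generation: fit a simple generative model
# to (illustrative) sensitive numeric records and sample new records that
# preserve aggregate patterns but correspond to no real individual. Production
# systems use far richer models and formal privacy checks; this is a toy.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in "sensitive" dataset: columns could be age and systolic blood pressure.
real = np.column_stack([
    rng.normal(52, 12, 500),    # age
    rng.normal(128, 15, 500),   # blood pressure
])

# Fit a mixture model that captures the joint distribution ...
model = GaussianMixture(n_components=3, random_state=0).fit(real)

# ... then sample synthetic records from it.
synthetic, _ = model.sample(500)

print("real means     :", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```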
There has also been renewed interest in homomorphic encryption, a concept dating back to the 1970s.8,9 Rather than recreate datasets with the same characteristics as the raw data, homomorphic encryption allows encrypted data to be analysed without the raw data being directly accessible. While promising, such encryption requires significantly more energy and time to achieve a secure result.
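To make the idea concrete, the sketch below implements a toy Paillier-style additively homomorphic scheme: two encrypted values are combined by multiplying their ciphertexts, and the result decrypts to their sum without the analyst ever seeing the raw inputs. The key sizes here are deliberately tiny and offer no real security.

```python
# A toy Paillier-style additively homomorphic scheme, for illustration only:
# two values are encrypted, the ciphertexts are multiplied, and the result
# decrypts to their sum -- the analyst never sees the raw values.
from math import gcd, lcm

p, q = 104729, 104723            # small primes; real keys use ~1024-bit primes
n = p * q
n_sq = n * n
lam = lcm(p - 1, q - 1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)

def encrypt(m, r):
    assert 0 <= m < n and gcd(r, n) == 1
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

c1 = encrypt(42, r=12345)
c2 = encrypt(58, r=67890)

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c_sum = (c1 * c2) % n_sq
print(decrypt(c_sum))            # -> 100
```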
As advances in AI transform the value of data, techniques like synthetic data generation and homomorphic encryption are predicted to enable sharing and access to data while ensuring privacy, security and data sovereignty. Within health-related research, in particular, access to data in ways that don’t compromise the rights and safety of individuals and communities is already showing promise for accelerating advances in disease detection, treatment and prevention.10
Effective data sharing and utilization technologies that protect privacy, security and data sovereignty are essential if the emerging potential of AI is to be realized. Yet, despite their potential, synthetic data and homomorphic encryption have several limitations. These include poor representation of potentially significant edge cases or outliers in the case of synthetic data and the potential ability to infer or reconstruct sensitive data despite the de-identification inherent in both techniques. Further work on the technologies and the use policies surrounding them will be necessary to ensure their success.11
Read more: For more expert analysis, visit the privacy-enhancing technologies transformation map.
| Mohamed-Slim Alouini
Al-Khwarizmi Distinguished Professor, Electrical and Computer Engineering, King Abdullah University of Science and Technology
| Joseph Costantine
Associate Professor, Electrical and Computer Engineering, American University of Beirut
| Marco Di Renzo
CNRS Research Director, Laboratory of Signals and Systems (L2S), Paris-Saclay University
| Javier Garcia-Martinez
Professor, Chemistry and Director, Molecular Nanotechnology Lab, University of Alicante
Global demand for higher data rates, lower latency and energy-efficient connectivity is skyrocketing.12 The highly anticipated launch of 6G by 2030 is expected to intensify this pressure even further. To meet these challenges, future networks will need to be engineered for enhanced capacity and connectivity and with a strong focus on environmental sustainability. Enter reconfigurable intelligent surfaces (RIS), platforms that use metamaterials, smart algorithms and advanced signal processing to turn ordinary walls and surfaces into intelligent components for wireless communication.
Akin to the idea of “smart mirrors”, RIS enable precise focusing and control of electromagnetic waves, reducing interference and the need for high transmission power. Equally, RIS are highly adaptive and can dynamically adjust their configuration according to real-time demands. This adaptability enables efficient use of resources and enhances energy efficiency in wireless networks.13,14,15
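A minimal numerical sketch of this “smart mirror” behaviour is shown below: each simulated RIS element applies a phase shift that aligns its cascaded base-station-to-user channel so that reflections add coherently at the receiver, boosting received power roughly with the square of the number of elements. The channels are random placeholders; real deployments must first estimate them.

```python
# A minimal sketch of RIS phase control: each element cancels the phase of its
# cascaded channel so reflections add coherently at the receiver.
# Channels here are random placeholders; real systems must first estimate them.
import numpy as np

rng = np.random.default_rng(1)
N = 64                                   # number of RIS elements

h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # BS -> RIS
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # RIS -> user

# Without control: random element phases.
theta_rand = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
gain_rand = np.abs(np.sum(h * theta_rand * g)) ** 2

# With control: theta_n = exp(-j * (angle(h_n) + angle(g_n))),
# the classic coherent phase-alignment choice.
theta_opt = np.exp(-1j * (np.angle(h) + np.angle(g)))
gain_opt = np.abs(np.sum(h * theta_opt * g)) ** 2

print(f"random phases : {gain_rand:.1f}")
print(f"aligned phases: {gain_opt:.1f}   (~N^2 scaling of received power)")
```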
The development of hardware platforms and a surge in experimental initiatives in the field of RIS have drawn considerable interest from telecommunication stakeholders keen on exploring its potential for next-generation wireless networks. A significant milestone was the effective integration of RIS into existing wireless networks. Several RIS platforms have showcased the technology’s impressive capabilities from a hardware perspective.16
The growth of RIS is likely to have a broad impact across several industrial sectors.17 For example, tailored radio wave propagation in smart factories can ensure reliable communication in a highly complex environment. In the internet of things (IoT), which otherwise demands considerable energy, RIS allow sensors to transmit data with minimal power. For vehicular networks, RIS enhance safety by enabling robust communications between vehicles and infrastructure. And in agricultural settings, RIS are a promising, low-energy and cost-efficient way to improve coverage.18
Market intelligence reports suggest RIS are on the cusp of exponential adoption and growth. Several companies, including Rohde & Schwarz, Huawei, ZTE, Intel and Samsung, are investing in RIS, sending a strong signal that the technology will be central to the telecommunications landscape in the coming years.19
Before this happens, however, several outstanding challenges will have to be addressed, including high hardware costs and the need for clear standards and regulations on the secure and ethical use of the technology.20
Read more: For more expert analysis, visit the RIS transformation map.
| Mohamed-Slim Alouini
Al-Khwarizmi Distinguished Professor of Electrical and Computer Engineering, King Abdullah University of Science and Technology
| Mariette DiChristina
Dean and Professor of the Practice in Journalism, Boston University College of Communication
High altitude platform stations (HAPS) operate at stratospheric altitudes, approximately 20 kilometres above Earth. Typically taking the form of balloons, airships, or fixed-wing aircraft, they offer a stable platform for observation and communication and can operate for months. Advances in solar panel efficiency, battery energy density, lightweight composite materials, autonomous avionics and antennas, coupled with the expansion of frequency bands and new aviation standards, make HAPS viable in the near term. HAPS can deliver connectivity, coverage and performance enhancements that neither satellites nor terrestrial towers can match, particularly in areas with difficult terrains such as mountains, jungles or deserts.21
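A back-of-envelope comparison helps explain the appeal: using the standard free-space path loss formula, the sketch below contrasts a HAPS link at roughly 20 kilometres with a low-Earth-orbit satellite link at an assumed 550 kilometres. The carrier frequency and altitudes are illustrative, not taken from any specific system.

```python
# A back-of-envelope comparison of a HAPS link (~20 km altitude) with a LEO
# satellite link (~550 km, an assumed representative altitude) using the
# standard free-space path loss formula. Frequency is illustrative.
import math

C = 3e8  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

f = 2e9                      # 2 GHz carrier, illustrative
d_haps, d_leo = 20e3, 550e3  # slant range approximated by altitude

print(f"HAPS path loss : {fspl_db(d_haps, f):.1f} dB")
print(f"LEO  path loss : {fspl_db(d_leo, f):.1f} dB")
print(f"HAPS one-way delay : {d_haps / C * 1e3:.3f} ms")
print(f"LEO  one-way delay : {d_leo / C * 1e3:.3f} ms")
```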
Access to the connected world serves as a bridge to the future, creating pathways to prosperity and new educational possibilities as well as strengthening the fabric of social connectivity. Yet, according to the International Telecommunication Union (ITU), about one-third of people worldwide remain offline, with women and older adults disproportionately affected.22 A key component in addressing this challenge is better infrastructure.
HAPS could improve connectivity for communities underserved by traditional communications infrastructure, particularly in remote areas. The COVID-19 pandemic highlighted the critical nature of internet access, revealing how disparities in connectivity perpetuate socioeconomic inequalities. By bridging this digital divide, HAPS technology could enable access to educational, healthcare and economic opportunities.
In addition to providing internet access, these adaptable platforms can play an important role in various critical applications, from supporting disaster management to enhancing broadband coverage and environmental monitoring. The ability of HAPS to quickly deploy and adapt to changing conditions could make them an invaluable tool in managing emergencies, where timely information and communication can save lives.23
Investment in HAPS from aerospace engineering leaders has driven advances in materials, propulsion systems and solar cell technology.24 HAPS are now economically viable for commercial, real-world deployment. Organizations with extensive knowledge and resources for developing reliable, long-endurance HAPS have aided their evolution and their role in the future of communications infrastructure.
Industry examples include the Airbus Zephyr, Thales’ Stratobus and Boeing Aurora projects. Lower latency, reduced costs, higher capacity, easy hardware upgrades and faster deployment are attractive commercial propositions. The market size was valued at $783.3 million in 2023 and is expected to grow at a compound annual growth rate of 10.4% from 2023 to 2033.25
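For context, compounding the quoted figures gives a rough sense of scale; the end value below is implied arithmetic, not a cited forecast.

```python
# A back-of-envelope compounding of the figures quoted above (a $783.3 million
# 2023 market growing at 10.4% a year to 2033); the resulting end value is an
# implied estimate, not a cited forecast.
base_2023_musd = 783.3
cagr = 0.104
years = 10

value_2033_musd = base_2023_musd * (1 + cagr) ** years
print(f"Implied 2033 market size: ${value_2033_musd:,.0f} million")
# -> roughly $2,100 million (~$2.1 billion)
```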
However, HAPS, operating at stratospheric altitudes for extremely long durations, are different from traditional crewed aircraft in several ways, and current regulatory frameworks are not fit for purpose. Organizations such as the International Civil Aviation Organization (ICAO) are actively discussing new policies and guidance to enable the responsible deployment of HAPS.26
Read more: For more expert analysis, visit the HAPS transformation map.
| Mohamed-Slim Alouini
Al-Khwarizmi Distinguished Professor, Electrical and Computer Engineering, King Abdullah University of Science and Technology
| Joseph Costantine
Associate Professor, Electrical and Computer Engineering, American University of Beirut
| Christos Masouros
Professor, Signal Processing and Wireless Communications, University College London
Decades of separate development in sensing and communications technologies have resulted in a surplus of devices with overlapping functions, leading to device congestion, spectrum inefficiency and financial loss.27 Integrated sensing and communications (ISAC) addresses this by bringing sensing and communication capabilities into a single system, facilitating simultaneous data collection and transmission. This integration optimizes hardware, energy and cost efficiency while also enabling novel applications beyond conventional communication paradigms.28
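The sketch below illustrates the core idea under simplified, assumed signal parameters: a single transmitted waveform carries communication symbols while the echo of that same waveform, correlated against the known transmission, yields an estimate of the range to a reflector.

```python
# A minimal sketch of the ISAC idea: one transmitted waveform simultaneously
# carries data (QPSK symbols) and, by correlating the echo with the known
# transmission, acts as a sensor that estimates the range to a reflector.
# Signal parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
C = 3e8
fs = 100e6                                   # sample rate: 100 MHz
n_sym = 4096

# Communication payload: random QPSK symbols (these carry the data).
bits = rng.integers(0, 2, size=(n_sym, 2))
tx = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Sensing: the same waveform bounces off a target 300 m away and comes back.
true_range = 300.0
delay_samples = int(round(2 * true_range / C * fs))
rx = np.zeros(n_sym + delay_samples, dtype=complex)
rx[delay_samples:] += 0.2 * tx                               # attenuated echo
rx += 0.05 * (rng.normal(size=rx.size) + 1j * rng.normal(size=rx.size))

# Matched filtering: cross-correlate the received signal with the known
# transmission; the correlation peak reveals the round-trip delay.
corr = np.correlate(rx, tx, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))
est_range = est_delay / fs * C / 2
print(f"estimated range: {est_range:.1f} m (true {true_range} m)")
```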
ISAC makes wireless networks environment-aware, enabling capabilities like localization, environment mapping and infrastructure monitoring. Examples of this include environmental monitoring systems that use sensors and data analytics to monitor air and water quality, soil moisture and weather conditions. These systems help in smart agriculture, environmental conservation and urban planning. Additionally, smart grids integrate sensors and communication technologies into power grids, enhancing efficiency and reliability while enabling the monitoring of electricity consumption and generation.29,30
The adoption of ISAC also promises to render device utilization more sustainable. Potential benefits include reduced energy and silicon consumption alongside improved options for device reuse, recycling or repurposing.31
Optical-wireless ISAC technology represents a particularly exciting advancement. By integrating sensing and communication capabilities, lighting and display systems can seamlessly become part of the wireless ecosystem. Illuminated surfaces can serve as network nodes, facilitating communication and sensing without electromagnetic interference. This is especially advantageous in sensitive environments such as smart healthcare and industrial manufacturing.32
However, the realization of ISAC’s potential hinges on surmounting technical hurdles, establishing communication standards and ensuring network-level coordination. Its success will be gauged by its adoption across various industries, from connected cars to e-health.33 This underscores the imperative for ongoing innovation and collaboration in this field.
Read more: For more expert analysis, visit the ISAC transformation map.
| Carlo Ratti
Professor, Urban Technologies, Massachusetts Institute of Technology
| Landry Signe
Professor, Thunderbird School of Global Management, Arizona State University
| Izuru Takewaki
President and Professor, Kyoto Arts and Crafts University; Professor Emeritus of Architectural Engineering, Kyoto University
As major tech platforms search for utility in the metaverse, one industry stands poised for transformation: construction. Immersive, AI-driven reality tools for the built world allow designers and construction professionals to check the congruence between the physical and the digital, ensuring accuracy and safety and advancing sustainability.
Construction is one of the world’s largest and most impactful industries, contributing around 40% of global carbon dioxide (CO2) emissions.34 Despite its immense footprint, the industry has been slow to embrace the digital revolution. However, immersive technology holds the promise of transforming this landscape.
Immersive design experiences help anticipate the challenges that could evolve during construction by testing hypotheses, identifying potential errors and providing solutions before construction starts. Virtual prototyping and experimentation increase accuracy. Digital twins, already in widespread industrial use, could be used to simulate outcomes of far more complex proposals for urban development projects, better develop infrastructure and serve constituents, and allow greater efficiency and effectiveness. Crucially, this would streamline the construction process from design to implementation, allowing waste to be identified and eliminated, improving both efficiency and sustainability.35
Equally, for an industry that is booming, a skills and labour shortage is emerging, to the point where supply is now critically low. In the US alone, the national trade association Associated Builders and Contractors estimates that in 2025 the industry will need to bring in nearly 454,000 new workers on top of normal hiring to meet demand.36 The metaverse has the potential to mitigate these shortages by creating immersive learning and training environments, regardless of location, for professionals in the architecture, engineering and construction industries.37
The metaverse also stands to improve efficiencies in upkeep and inspection. A Japanese construction company, for example, estimates that nationwide, one million hours are spent simply travelling to inspections.38 If the metaverse provides robust and reliable remote inspection capabilities, these million hours could be reallocated towards other critical work.
Arguably, the next leap forward in this field will be the incorporation of generative AI, with text-to-building information modelling possibly converting textual prompts directly into detailed, three-dimensional building models, encompassing construction specifications, safety information and other metadata.39
Risks remain, including threats to privacy and constraints on access to energy, especially in the developing world, but a proactive and collaborative approach can encourage innovation while keeping it inclusive and safe. By narrowing the gap between conceptualization and implementation, the technology might also render obsolete some of the most technical professional roles in the design field, calling for new training paths and upskilling programmes.
Read more: For more expert analysis, visit the immersive technology transformation map.
| Mine Orlu
Professor, Pharmaceutics, University College London
| Wilfried Weber
Scientific Director, Leibniz Institute for New Materials
As global temperatures rise, the need for cooling solutions is set to soar. The International Energy Agency (IEA) estimates that the global energy demand for space cooling will more than triple over the next 30 years, accounting for about 37% of global electricity demand growth by 2050.40 Elastocaloric heat pumps are an innovative technology that could cut the energy required for heating and cooling several times over.41
The potential impact of elastocaloric heat pumps, particularly in the context of heightening demand for cool air, is substantial. A US Department of Energy study ranks them as the most promising alternative to current systems.42 The heart of this technology is elastocaloric materials, which emit heat when subjected to mechanical stress and cool down when the stress is relaxed. This allows them to operate on a continuous stress and relaxation cycle. The added benefit of elastocaloric heat pumps is that they do not rely on refrigerant gases, which are potentially damaging to the environment. Instead, they make use of widely available metals like nickel and titanium.
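The size of the effect can be estimated with the standard caloric-material relation ΔT ≈ TΔs/cp. The sketch below uses representative literature-style values for a nickel-titanium alloy; the numbers should be read as illustrative assumptions rather than measured properties of any particular device.

```python
# A rough estimate of the adiabatic temperature change in an elastocaloric
# material, dT ~= T * ds / c_p, using representative literature-style values
# for a nickel-titanium alloy (the numbers below are assumptions for
# illustration, not measured properties of a specific product).
T = 300.0   # operating temperature, K
ds = 35.0   # entropy change of the stress-induced transformation, J/(kg*K)
cp = 500.0  # specific heat capacity, J/(kg*K)

delta_T = T * ds / cp
print(f"Adiabatic temperature change: ~{delta_T:.0f} K per load/unload step")
# -> ~21 K, the order of magnitude reported for NiTi-based devices
```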
Taken together, elastocaloric technology can significantly reduce the environmental impact of meeting emerging energy requirements for temperature control. Socially, this technology can enhance access to cooling in regions with limited or no grid-based electricity, thereby improving quality of life and addressing a key aspect of climate change impact.43
Research and development in the field is advancing quickly, with the rate of scientific publications doubling every 22 months. The surge in patent applications, with automotive and cooling industries taking the lead, underscores the growing commercial interest in this technology. On the technological side, there has been steady improvement in materials and device designs; new prototypes are able to demonstrate what elastocaloric heat pumps can achieve. Similarly, universities and businesses have introduced several functional elastocaloric heat pump models, exploring the use of complementary materials and innovative production techniques.44
Scaling elastocaloric heat pumps involves overcoming some big hurdles. These pumps need materials that can endure millions of cycles of being stretched and relaxed without breaking down – a challenge being tackled by experimenting with different metal alloys and manufacturing techniques. Engineers are also working on systems that use hydraulics to efficiently transfer energy when squeezing or stretching the materials, triggering the heating or cooling effect.45
Additionally, for these heat pumps to become widely available, the production of these materials needs to scale up significantly to align with the constantly increasing demand for cooling that has been forecast in the face of global warming. However, with growing commercial interest and technological innovation, the future looks promising for the widespread adoption of elastocaloric heat pumps, ushering in a new era of efficient and environmentally friendly cooling solutions.
Read more: For more expert analysis, visit the elastocalorics transformation map.
| Sang Yup Lee
Senior Vice-President, Research; Distinguished Professor, Korea Advanced Institute of Science and Technology
| Hailong Li
Professor, School of Energy Science and Technology, Central South University
| Wilfried Weber
Scientific Director, Leibniz Institute for New Materials
| Zequn Yang
Associate Professor, School of Energy Science and Technology, Central South University
Amid the urgency of climate change, a silent revolution brews: microorganisms are being used to capture greenhouse gases from air or exhaust gases and convert them into high-value products. To drive this process, the organisms use sunlight or chemical energy such as hydrogen. Engineering the organisms promises a wide palette of sustainable products while simultaneously reducing global warming.
Microbial carbon capture is emerging as a promising strategy to control atmospheric CO2 and mitigate global warming.46 Simultaneously, it can produce various products with significant market potential, such as fuels, fertilizers and animal feed. To achieve this, researchers are developing microorganisms – including bacteria and microalgae – that use sunlight or sustainable chemical energy to absorb and transform gases.
There are two main designs for microbial carbon capture. The first uses photobioreactors, in which photosynthetic organisms such as cyanobacteria and microalgae capture CO2, using sunlight to process CO2-laden gas bubbled through a bath containing the organisms. In the second, microorganisms capture CO2 using energy from sources such as hydrogen, organic waste streams or other chemicals derived from CO2 with renewable energy.47 Whether they use sunlight or chemicals for energy, both approaches rely on engineered organisms that convert CO2 into new products, such as biodiesel or protein-rich animal feed.48 The product value of each system varies significantly, and the choice between them depends on the specific needs and capabilities of the implementing company, such as available resources. It also means that, once implemented, companies could generate new products for the market instead of paying between $50 and $100 per tonne of CO2 to offset their emissions.
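A back-of-envelope estimate gives a sense of scale: the sketch below uses the common stoichiometric rule of thumb that roughly 1.8 kilograms of CO2 are fixed per kilogram of dry microalgal biomass, with productivity and reactor volume chosen purely for illustration.

```python
# A back-of-envelope estimate of how much CO2 a photobioreactor fixes, using
# the common stoichiometric rule of thumb that ~1.8 kg of CO2 is consumed per
# kg of dry microalgal biomass (biomass is roughly 50% carbon). Productivity
# and reactor volume below are illustrative assumptions.
productivity_g_per_l_day = 0.5   # dry biomass productivity
volume_m3 = 100.0                # reactor working volume
co2_per_biomass = 1.8            # kg CO2 fixed per kg dry biomass

biomass_kg_per_day = productivity_g_per_l_day * volume_m3 * 1000 / 1000
co2_kg_per_day = biomass_kg_per_day * co2_per_biomass
co2_t_per_year = co2_kg_per_day * 365 / 1000

print(f"Biomass: {biomass_kg_per_day:.0f} kg/day")
print(f"CO2 captured: ~{co2_t_per_year:.0f} tonnes/year")
# At the $50-100/tonne offset prices cited above, that is roughly
# $1,600-3,300/year of avoided offsets for this (small) illustrative reactor.
```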
The technology is driven by organizations specializing in cell modification to boost specific substance production.49 Following a series of successful demonstrations and proofs-of-concept, microbial carbon capture is now ready to transition from pilot to full-scale production. By 2022, global investment in the technology had already reached $6.4 billion, highlighting its readiness to be brought to market.50 Companies such as Seambiotic in Israel, Alga Energy in Spain and Bio Process Algae in the US have deployed pilot-scale facilities to explore the commercial viability of microbial carbon capture systems.
Despite significant progress, microbial carbon capture systems still face challenges that hinder their widespread adoption and commercialization. Firstly, most microorganisms are adapted to low-temperature conditions and are less effective at capturing CO2 from hot industrial exhaust gases, so additional, energy-consuming cooling facilities are needed. Optimization requires improving microbial tolerance to industrial exhaust temperatures, as well as resistance to acidic impurities.51 Secondly, existing microbial carbon capture systems are still very expensive.52 However, the high value of the products could offset at least part of this cost. Lastly, production sites need an abundance of sunlight and access to renewable or clean energy, which is not guaranteed across all global regions.53 Only when these challenges are overcome will the full potential of the technology be realized as part of the global effort to achieve a net-zero emission world.
Read more: For more expert analysis, visit the microbial carbon capture transformation map.
| Mariette DiChristina
Dean, Boston University College of Communication
| Javier Garcia-Martinez
Professor, Chemistry and Director, Molecular Nanotechnology Lab, University of Alicante
Alternative livestock feeds offer sustainable solutions to address the growing demand for protein in animal agriculture. These feeds, sourced from insects, single-cell proteins, algae and food waste, provide viable alternatives to traditional ingredients like soy, maize and wheat.54
Feed alternatives offer substantial sustainability improvements. Currently, nearly 80% of soy production is used as animal feed, leading to significant negative environmental consequences.55 This demand drives deforestation, biodiversity loss, over-fertilization and greenhouse gas emissions from land-use changes. Transitioning to alternative livestock feeds could mitigate these challenges and promote more environmentally sustainable practices in animal agriculture.
A further advantage of alternative animal feed is the diversity and nutritional value it adds, which can play a critical role in protecting animal welfare. It can offer a broader range of nutrients than conventional feeds, improving animal health and well-being and, potentially, the quality of the produce itself.56 For instance, insects can be produced on an industrial scale to yield high-quality protein, while single-cell proteins or algae can supply essential proteins and fats for several species of animals. Additionally, capturing human food waste or using ingredients like algae, azolla, chickpeas and orange pulp are emerging as promising alternatives.57
The cost-benefit of these alternative sources is also a key factor. They are often cheaper to produce and obtain. The use of black soldier fly larvae (BSFL) is an example; studies show that adding BSFL into animal diets can reduce the costs associated with feed. This is primarily because BSFL can be cultivated from organic waste, reducing the need for traditional, more expensive feed ingredients like fish meal or soybean meal.58
The market for alternative ingredients to feed livestock is vibrant, and multiple companies worldwide have now successfully introduced quality alternative options.59 In 2023, the global animal feed alternative protein market was valued at $3.96 billion. It is projected to significantly grow in value over the next decade, increasing to $8.2 billion by 2033.60
Alternative animal feed is, however, more than a one-size-fits-all solution. Its feasibility varies based on local availability, manufacturing costs and environmental and social conditions. Other challenges, including environmental regulations, ethical concerns and competition, remain. Sustainable feed resources are increasingly competing with sustainable fuel production, for example. This competition could limit the availability of livestock feeds, potentially driving up prices and hindering widespread adoption. The future success of the alternative animal feed industry depends on its ability to navigate these challenges and adapt to the demand for more sustainable and efficient feed options.
Read more: For more expert analysis, visit the alternative livestock feeds transformation map.
| David K Cooper
Senior Investigator, Center for Transplantation Sciences, Massachusetts General Hospital, Harvard Medical School
| Emanuele Cozzi
Professor, Transplantation Immunology, Padua University Hospital
| Geoffrey Ling
Professor, Neurology, Johns Hopkins Hospital
| Bernard Meyerson
Chief Innovation Officer Emeritus, IBM
Organ transplantation, a significant advancement in medicine during the latter half of the 20th century, has continued to progress. This ongoing evolution was underscored by a remarkable milestone in March 2024: the first successful transplantation of a non-human (pig) kidney into a living human recipient.61 This progress is driven by fundamental enablers such as our ability to understand and precisely edit the genome.
Organ transplants save lives – but the need far outstrips the available donor pool. In the US alone, more than 100,000 patients are awaiting an organ transplant, and yet only approximately 30,000 organs will become available this year.62
To meet this need, for more than three decades, steady progress has been made in the science dealing with the transplantation of organs from animals into humans. Thanks to technology like CRISPR-Cas9, it is now possible to create multiple genetic manipulations in a single pig to overcome the immunological (rejection) barrier. These include inserting genes that may impact the function of the transplanted pig organ and deleting genes for viruses that might infect the patient who receives a pig graft. While some pigs have undergone as many as 69 gene edits, the majority have approximately 10 gene edits.63
This ability to understand and precisely edit the genome, coupled with novel immunosuppressive drug regimens, has enabled the survival of non-human primates with life-supporting pig kidneys or hearts for periods now extending months or even years in the case of kidney transplantation.
Furthermore, understanding genomes offers much more than organs for transplantation. Over one million patients in the US have type 1 diabetes (juvenile diabetes), and an estimated 30 million have type 2 diabetes, which could be cured by the transplantation of pig pancreatic islet cells (which produce insulin).64 There are over one million patients in the US with debilitating Parkinson’s disease; implanting specialized pig cells could improve their condition.65
If “xenotransplantation”, or the transplantation of organs from animals into humans, becomes a common form of therapy, it would impact not only the quality of life of millions of patients but could also bring about changes in the healthcare economy. For example, there could be significant reductions in the number of staff involved in dialysis programmes and an increase in those involved in all aspects of organ and cell transplantation, including pig breeding. Although xenotransplantation will initially be expensive, it might soon prove less costly than maintaining a patient on long-term dialysis or a patient with heart failure who requires frequent emergency admissions to the hospital.
Progress in the laboratory has been sufficiently encouraging for the US Food and Drug Administration (FDA) to approve pig heart transplants in two living patients (in 2022 and 2023) and a pig kidney transplant in one patient (in 2024).66,67,68 Although the recipients of all three transplants sadly passed away after the procedures, the trajectory seen in human organ transplantation indicates that survival rates will improve significantly as research progresses and techniques advance.
Xenotransplantation raises ethical considerations that need further exploration, ideally by various leaders in policy, business and societal spaces. In addition, a vast amount of data still needs to be acquired from initial patient trials to ensure the efficacy of treatments is maximized. However, solid prior learnings from established transplant technology, combined with the increasing capability and dropping costs of gene-editing techniques, indicate good reasons to be optimistic regarding the future of interspecies transplants to prevent the needless loss of hundreds of thousands of human lives each year. How quickly these changes in healthcare and industry occur will also depend on how regulatory authorities and society respond to this new therapy field.
Read more: For more expert analysis, visit the genomics for transplants transformation map.