
Introduction to DeSci

How the Science of the Future is being born before our eyes

“[DeSci] transformed my research impact from a low-impact virology article every other year to saving the lives and limbs of actual human beings” – Jessica Sacher, Phage Directory co-founder

In a previous article, one of the very first published on Resolving Pharma, we looked at the problems posed by the centralizing role of scientific publishers, which, in addition to raising financial and ethical issues, acts as a brake on innovation and scientific research. Beyond making this observation, we proposed ways of changing this model, mainly using NFTs and the Blockchain. For several months now, thanks to the popularization of Web3 and DAOs, initiatives have been emerging around the world in favour of a science that facilitates collective intelligence, redesigns the methods of funding and scientific publication and, ultimately, considerably shortens the path between the laboratory and patients. It is time to explore this revolution, still in its infancy, which is called DeSci, for Decentralized Science.

The necessary emergence of DeSci

One story that illustrates the inefficiencies of current science is often taken as an example in the DeSci world: that of Katalin Kariko, a Hungarian biochemist who, from the 1990s onwards, carried out numerous research projects (on in vitro-transcribed messenger RNA) which, a few decades later, would be at the origin of several vaccines against Covid-19. Despite the innovative aspects of Kariko's research, she was unable to obtain the grants necessary to pursue her projects because of political rivalry: the University of Pennsylvania, where she was based, had chosen to prioritize research on therapeutics targeting DNA directly. This lack of resources led to a lack of publications, and Kariko was demoted in the hierarchy of her research unit. This example shows the deleterious consequences of a centralized organization of funding allocation (dominated by public institutions and private foundations) and of scientists' reputations (controlled by scientific publishers).

How many researchers spend more time looking for funding than working on research topics? How many applications do they have to fill in to access funding? How many promising but too risky, or unconventional, research projects are abandoned for lack of funding? How many universities pay scientific publishers a fortune to access the scientific knowledge they themselves have helped to establish? How many results, sometimes perverted by the publication logic of scientific journals, turn out to be non-reproducible? With all the barriers to data exchange related to scientific publication, is science still the collective intelligence enterprise it should be? How many scientific advances that can be industrialized and patented will not reach the market because of the lack of solid and financed entrepreneurial structures to support them (although considerable progress has been made in recent decades to enable researchers to create their own start-ups)? 

DeSci, which we could define as a way of organizing Science that, by relying on Web3 technologies and tools, allows anyone to fund and take part in research and its valorization in exchange for a return on investment or remuneration, proposes to address all of the problems mentioned above.

This article will first look at the technical foundations of Decentralized Science and then explore some cases in which decentralization could improve Science efficiency.

Understanding Web3, DAOs and Decentralized Science

In the early days of the Web, there were very high barriers to entry for users wishing to post information: before blogs, forums and social networks, one had to be able to write the code for one’s website or pay someone to do it in order to share content. 

With the advent of blogs and social networks, as we mentioned, Web2 took on a different face: expression became considerably easier. On the other hand, this came with a great deal of centralization: social networking platforms now own the content that their users publish and exploit it commercially (mostly through advertising revenue) without paying them a cent.

Web3 is a new version of the Internet that introduces the notion of ownership thanks to the Blockchain. Whereas Web2 was built on centralized infrastructures, Web3 records data exchanges on a blockchain, where they can generate remuneration in cryptocurrencies that carry financial value and, in certain cases, confer decision-making power on the platforms their holders contribute to. Web3 is therefore a way of marking the ownership of content and of easily rewarding a user's actions. It is without doubt the most creative version of the Internet to date.

Finally, we cannot talk about Web3 without talking about Decentralized Autonomous Organizations (DAOs). These organizations are described by Vitalik Buterin, the iconic co-founder of the Ethereum blockchain, as “entities that live on the Internet and have an autonomous existence, while relying on individuals to perform the tasks it cannot do itself”. In a more down-to-earth way, they are virtual assemblies whose rules of governance are automated and transparently recorded in a blockchain, enabling their members to act collectively, without a central authority or trusted third party, and to take decisions according to rules defined and recorded in smart contracts. Their aim is to make collective decision-making and action simpler, more secure, transparent and tamper-proof. DAOs have not yet revealed their full potential, but they have already shown that they can operate as decentralized and efficient investment funds, companies or charities. In recent months, science DAOs have emerged, based on two major technological innovations.
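In practice, these governance rules live in smart contracts on-chain. As a purely illustrative sketch (written in Python rather than a contract language, with invented names and numbers), a token-weighted DAO vote with a quorum rule might look like this:

```python
from collections import defaultdict

def tally_votes(votes, quorum_tokens):
    """Count token-weighted votes and apply a quorum rule.

    votes: list of (member, token_weight, choice) tuples.
    Returns the winning choice, or None if quorum is not reached.
    """
    totals = defaultdict(int)
    cast = 0
    for member, weight, choice in votes:
        totals[choice] += weight
        cast += weight
    if cast < quorum_tokens:
        return None  # not enough participation: the proposal fails
    return max(totals, key=totals.get)

# Example: three members vote on a funding proposal.
votes = [("alice", 50, "yes"), ("bob", 30, "no"), ("carol", 40, "yes")]
print(tally_votes(votes, quorum_tokens=100))  # "yes" (90 tokens vs 30)
```

A real DAO encodes the same logic in a smart contract, so that the tally and the quorum check are executed and recorded without any trusted third party.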

The technological concepts on which DeSci relies:

To understand the inner workings of DeSci and especially its immense and revolutionary potential, it is important to clarify two concepts, which are rather uncommon in the large and growing Web3 domain, but which lie at the heart of a number of DeSci projects:

  • IP-NFTs: The concept of IP-NFTs was developed by the teams of the company Molecule (their interview can be found on Resolving Pharma). It is a meeting point between IP (intellectual property) and NFTs (non-fungible tokens): it allows scientific research to be tokenized, meaning that a representation of a research project is placed on the Blockchain in the form of an exchangeable NFT. A legal agreement is automatically made between the investors (buyers of the NFT) and the scientist or institution conducting the research. The owners of the NFT are entitled to remuneration from licensing the intellectual property resulting from the research or from creating a start-up based on that intellectual property.

Figure 1 – Operating diagram of the IP-NFT developed by Molecule

  • Data-NFTs: Many Blockchain projects are concerned with data ownership, but one of the most successful is Ocean Protocol. A data-NFT represents a copyright (or an exclusive licence) registered on the Blockchain and relating to a dataset. A user can thus exploit their data in several ways: by charging other users for temporary licences, by selling their datasets, or by pooling them with other datasets in a “Data Union”.

These two concepts make it possible to make intellectual property liquid, and thus to create new models of financing and collaboration. To take a simple example, a researcher can present a project and raise funds from investors even before a patent is filed. In exchange, the investors have an IP-NFT that allows them to benefit from a certain percentage of the intellectual property and revenues that will potentially be generated by the innovation. 
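To make the example concrete (the function and figures below are an invented toy sketch, not Molecule's implementation), licensing revenue could be split pro-rata among the holders of fractions of an IP-NFT:

```python
def split_revenue(revenue, holdings):
    """Distribute licensing revenue pro-rata among fractional IP-NFT holders.

    holdings: dict mapping holder -> number of fractions held.
    Returns a dict holder -> payout, rounded to cents.
    """
    total = sum(holdings.values())
    return {h: round(revenue * n / total, 2) for h, n in holdings.items()}

# Hypothetical example: $100,000 of licensing income, 1,000 fractions issued.
payouts = split_revenue(100_000, {"investor_a": 600, "investor_b": 300, "researcher": 100})
print(payouts)  # {'investor_a': 60000.0, 'investor_b': 30000.0, 'researcher': 10000.0}
```

On-chain, such a split would be enforced automatically by the smart contract attached to the IP-NFT, which is what makes the intellectual property liquid.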

Let’s now turn to some DeSci examples.

Transforming scientific reviewing

When researchers want to communicate to the scientific community, they write an article and submit it to scientific publishers. If the publishers accept the research topic, they will seek out other researchers who verify the scientific validity of the article, and a process of exchange with the authors ensues: this is called peer-reviewing. The researchers taking part in this process are not paid by the publishers and are mainly motivated by their scientific curiosity.

This system, as it is currently organized – centrally – gives rise to several problems:

  • It takes a long time: in some journals, several months elapse between the first submission of an article and its final publication. This avoidable delay can be very damaging to the progress of science (we will come back to this later in the article!). Moreover, given the inflation in the number of scientific articles and journals, a system based on volunteer reviewers is not equipped to last.
  • The article is subject to the biases of the editor as well as the reviewers, all in an opaque process, which makes the outcome extremely uncertain. Studies have shown that, when previously published papers were resubmitted with the authors' names and institutions changed, 89% of them were rejected (without the reviewers noticing that the papers had already been published).
  • The entire process is usually opaque and unavailable to the final reader of the paper.

Peer-reviewing in Decentralized Science will be entirely different. Several publications have demonstrated the possibility of using thematic scientific DAOs to make the whole process more efficient, fair and transparent. We can thus imagine that decentralization could play a role in different aspects: 

  • The choice of reviewers would no longer depend solely on the editor, but could be approved collectively.
  • Exchanges around the article could be recorded on the blockchain and thus be freely accessible.
  • Several remuneration systems, financial or otherwise, could be imagined in order to attract quality reviewers. Each reviewer could, for instance, earn tokens registering them in a reputation system (see below), granting them a say in the DAO's decision-making and eligibility for grant competitions.
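As a rough illustration of how the exchanges around an article could be made public and tamper-evident (a much-simplified stand-in for an actual blockchain, with invented names), each comment can be chained to the previous one by a hash:

```python
import hashlib
import json

def append_review(log, reviewer, comment):
    """Append a review comment to a tamper-evident, hash-chained log.

    Each entry commits to the previous one, as a blockchain would, so the
    whole exchange around an article can be audited end to end.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"reviewer": reviewer, "comment": comment, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return log

def verify(log):
    """Recompute every link; any edited entry breaks the chain."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("reviewer", "comment", "prev")}
        if e["prev"] != prev or e["hash"] != hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
append_review(log, "reviewer_1", "Methods section lacks controls.")
append_review(log, "author", "Added control experiments in v2.")
print(verify(log))  # True
```

A public chain adds what this sketch lacks: the log is replicated across many nodes, so no single party can rewrite it.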

Decentralized peer-reviewing systems are still in their infancy and, however promising they may be, there are still many challenges to be overcome, starting with interoperability between different DAOs.

Creating a new reputation system

The main value brought about by the centralized system of science is its reputation system. Why do you want to access prestigious schools and universities, and why are you sometimes prepared to go into debt over many years to do so? Having the name of a particular university on your CV will make it easier for you to access professional opportunities. In a way, companies have delegated part of their recruitment to schools and universities. Another reputation system, which we mentioned earlier in this article, is that of scientific publishers. Isn't the quality of a researcher measured by the number of articles he or she has managed to publish in prestigious journals?

Despite their prohibitive cost (which allows scientific publishers to be one of the highest gross-margin industries in the world – hard to do otherwise when you are selling something you get for free!), these systems suffer from serious flaws. Does being accepted into a university and graduating accurately reflect the involvement you had during your studies and the skills you acquired through various experiences at the intersection of the academic and professional worlds? Is a scientist's reputation proportional to his or her involvement in the ecosystem? Jorge Hirsch, the inventor of the H-index, which aims to quantify the productivity and scientific impact of a researcher according to the level of citation of his or her publications, has himself questioned the relevance of this indicator. Peer reviews, the quality of courses taught, the mentoring of young researchers and the real impact of science on society are not considered by the current system.

Within the framework of DeSci, one can imagine a Blockchain-based system that traces and authenticates a researcher's actions – not just published articles – and rewards them with non-tradable reputation tokens. The main challenges for this reputation system will be its transversality, its interoperability and its adoption by different DAOs. These tokens could be used to participate in votes (on the organization of conferences, the choice of articles, etc.) and could themselves be allocated by voting mechanisms (for example, students who have taken a course could decide collectively how many tokens to award the professor).
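A minimal sketch of that last idea, assuming a simple in-memory ledger (names and amounts are invented) and using the median of the votes so that a single extreme vote cannot skew the award:

```python
from statistics import median

def award_reputation(ledger, recipient, votes):
    """Credit non-transferable reputation tokens based on a collective vote.

    votes: token amounts proposed by each student; the median is used so
    that one outlier cannot inflate or crush the award.
    """
    amount = int(median(votes))
    ledger[recipient] = ledger.get(recipient, 0) + amount
    return amount

ledger = {}
# Five students vote on how many tokens the professor's course deserves.
award_reputation(ledger, "prof_smith", [10, 12, 11, 50, 9])
print(ledger)  # {'prof_smith': 11} - the outlier vote of 50 is ignored
```

In a DeSci deployment the ledger would be a non-transferable token balance on-chain, so the reputation cannot be bought or sold, only earned.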

Transforming the codes of scientific publication to bring out collective intelligence

Science is a collective and international work in which, currently, as a researcher, you can only communicate with other research teams around the world through:

  • Publications in which you cannot give access to all the data generated by your research and experiments (it is estimated that about 80% of the data is not published, which contributes to the crisis of scientific reproducibility)
  • Publications that other researchers cannot access without paying the scientific publishers (in the case of Open Science, it is the research team behind the publication that pays the publisher so that readers can access the article for free)
  • Publications which, because of their form and the problems linked to their access, make it very difficult to use Machine Learning algorithms which could accelerate research 
  • Finally, scientific publications which, because of the length of the editorial approval process, only reflect the state of your research with a delay of several months. Recent health crises such as COVID-19 have shown us how important it can be to have quality data available quickly.

The Internet has enabled a major transformation in the way we communicate. Compared to letters, which took weeks to reach their recipients in past centuries, e-mail and instant messaging allow us to communicate more often and, above all, to send shorter messages as soon as we obtain the information they contain, without necessarily aggregating it into a complex form. Only scientific communication, even though most of it now happens via the Internet, resists this trend – to the benefit of scientific publishers and traditional forms of communication, but above all at the expense of the progress of science, and of patients in the case of biomedical research.

How, under these conditions, can we create the collective intelligence necessary for scientific progress? The company thinks it has the solution: micro-publications, consisting of a title designed to be easily exploited by an NLP algorithm, a single figure, a brief description and links giving access to all the protocols and data generated. 

Figure 2 – Structure of a micro-publication

This idea of micro-publications, even if not directly linked to the Blockchain, will – since it allows rapid and easy sharing of information – be a remarkable tool for collective intelligence, and certainly the scientific communication modality best suited to the coming era of Decentralized Science. The objective is not to replace traditional publications but to imagine a new way of doing science, in which the narrative of an innovation is built collectively over successive experiments rather than after several years of work by a single research team. Contradictory voices will be expressed and a consensus found, not fundamentally modifying the classic model of science but making it more efficient.
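The structure described above can be sketched as a simple data model (the field names and placeholder links are our own, not the authors'):

```python
from dataclasses import dataclass, field

@dataclass
class MicroPublication:
    """Minimal model of a micro-publication: an NLP-friendly title,
    a single figure, a brief description, and links to the full
    protocols and raw data."""
    title: str
    figure: str                 # URI of the single figure
    description: str            # brief narrative, a few sentences
    data_links: list = field(default_factory=list)  # protocols and datasets

mp = MicroPublication(
    title="Compound X inhibits enzyme Y at 10 uM in HeLa cells",
    figure="ipfs://<figure-hash>",
    description="Dose-response across three replicates; full protocol linked.",
    data_links=["ipfs://<dataset-hash>"],
)
print(mp.title)
```

The machine-readable title is the key design choice: an NLP pipeline can aggregate thousands of such records without parsing free-form PDFs.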

Facilitating the financing of innovation and the creation of biotechnology start-ups

Today, the financing of innovation, particularly in health, faces a double problem: 

  • From the point of view of scientists and entrepreneurs: despite the development of numerous funding ecosystems, non-dilutive grants and the maturation of venture capital funds, the issue of fundraising remains essential and problematic for most projects. Many projects do not survive the so-called “Valley of Death”, the period before the start of clinical studies, during which raising funds is particularly complicated. 
  • On the investor side: it is particularly difficult for an individual to participate in the financing of research and biotech companies in a satisfactory way:
      • It is possible to be a Business Angel and enter early into the capital of a promising start-up, but this is not accessible to everyone, as a certain amount of capital is required (and even more so if one wishes to diversify one's investments to smooth out risk).
      • It is possible to invest in listed biotech companies on the stock market, but the expectation of gain is then much lower, as the companies are already mature and their results consolidated.
      • It is possible to fund research through charities, but in this case no return on investment is possible and no control over the funded projects can be exercised.
      • It is possible to invest through crowdfunding sites, but here again there are structural problems: the choice of companies is limited, and the investors are generally in the position of lenders rather than shareholders: they do not really own shares in the company and are remunerated at a predefined annual rate.

These days, one of the pharmaceutical industry's most fashionable mantras is to put the patient at the center of therapeutics; shouldn't we also, for the sake of consistency, allow patients to be at the center of the systems for financing and developing those therapeutics?

DeSci will allow everyone – patients, relatives of patients or simply (crypto)investors wishing to have a positive impact on the world – to easily finance drug development projects at any stage, from a researcher's academic work to an established company, via IP-NFT, data-NFT or company tokenization systems.

This system of tokenization of assets also makes it possible to generate additional income, both for the investor and for the project seeking to be financed:

  • The “Lombard loan” mechanisms present in DeFi will also allow investors to generate other types of income from their shares in projects. DeFi has brought collateralized loans back into fashion: a borrower can deposit digital assets (cryptocurrencies, but also NFTs or tokenized real assets such as companies or real estate) in exchange for another asset, representing a fraction of the deposited value in order to protect the lender, which they can then invest through various mechanisms specific to Decentralized Finance (not developed in this article). In a classic private equity system, the money invested in a start-up is locked until a possible exit and generates no returns other than those expected from the increase in the company's value; in the new decentralized system, part of the money invested can, in parallel, be placed in the crypto equivalent of a savings account (to simplify – this site is not dedicated to Decentralized Finance!).
  • Furthermore, biotech projects, whether already incorporated or not, can generate additional revenue by taking advantage of the liquidity of these assets (which does not exist in the traditional financing system): it is quite possible to apply a fee of a few percent to each transaction of an IP-NFT or a data-NFT.
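A back-of-the-envelope sketch of the two mechanisms above, with invented parameter values (a 50% loan-to-value ratio and a 2% transfer fee are assumptions for illustration, not figures from any protocol):

```python
def max_borrow(collateral_value, loan_to_value=0.5):
    """Amount a holder can borrow against a tokenized asset.

    The loan-to-value ratio keeps the loan well below the collateral's
    worth, protecting the lender against a fall in its price.
    """
    return collateral_value * loan_to_value

def transfer_fee(sale_price, fee_rate=0.02):
    """Fee skimmed on each IP-NFT / data-NFT transaction and routed
    back to the project (the few-percent fee described above)."""
    return sale_price * fee_rate

print(max_borrow(10_000))    # 5000.0 borrowable against a $10k stake
print(transfer_fee(10_000))  # 200.0 returned to the project per resale
```

The contrast with private equity is visible in the first function: the stake keeps working as collateral instead of sitting locked until an exit.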

We are in a world where it is sometimes easier to sell a picture of a monkey for $3 or $4 million than to raise that amount to fight a deadly disease. It’s time to understand this and pull the right levers to get the money where it is – sometimes far off the beaten track. 

Conclusion: a nascent community, a lot of work and great ambitions

Despite the high-potential initiatives presented in this article, and the growing involvement of a scientific community throughout the world, DeSci is still young and still has to structure itself, and several challenges remain. One of the main ones, apart from aspects related to the regulatory framework, will undoubtedly be education in the broadest sense, which current projects do not yet address. By using Web3 tools to reinvent the way a high-level curriculum can be built and financed (tomorrow you will be paid to take online courses – yes!), DeSci will give itself the means to attract the most creative and entrepreneurial minds of its time, in the same way that large incubators and investment funds such as Y Combinator or Techstars have relied on education to create or accelerate some of the most impressive companies of recent years. DeSci collaborative universities have yet to emerge, and the connection between Ed3 (education and learning in the Web3 era) and DeSci has yet to be built.

Figure 3 – Presentation of the embryonic DeSci ecosystem at the ETH Denver conference, February 17, 2022 (in the last 3 months, the burgeoning ecosystem has grown considerably with other projects)

Web3 and DAOs have the great advantage of allowing people to be rewarded with equity, or its equivalent, for contributing their skills or financial resources to a project at any stage of its development. Thus, in a decentralized world where skills and research materials are at hand, and where the interests of the individuals involved in a project are better aligned, the time between the emergence of an idea and its execution is significantly shorter than in a centralized world. This model, which can reinvent not only work but also what a company is, applies to all fields but is particularly relevant where collective intelligence matters and where advanced expertise of various kinds is needed, as in scientific research.

In the same way that we can reasonably expect Bitcoin to become increasingly important in the international monetary system in the coming years and decades, we can expect DeSci, given its intrinsic characteristics and qualities, to grow in importance in the face of what we may soon call “TradSci” (traditionally organized Science). By aligning the interests of its different actors, DeSci may well constitute the most successful and viable large-scale, long-term tool of Collective Intelligence that Homo sapiens has ever had. Whether it is the fight against global warming, the conquest of space, the eradication of diseases or the extension of human longevity, DeSci will probably be the catalyst for the next decades of scientific innovation and, in so doing, will positively impact your life. Don't miss the opportunity to be one of the first to take part!


Credits for the illustration of the article:
  • Background: @UltraRareBio @jocelynnpearl and danielyse_, Designed by @katie_koczera
  • Editing: Resolving Pharma



Oligonucleotides and Machine Learning Tools

Today, oligonucleotides – short DNA or RNA molecules – are essential tools in molecular biology projects, but also in therapeutics and diagnostics. In 2021, around ten antisense therapies were authorised on the market, and many more are in clinical trials.

The recent Covid-19 crisis has also brought PCR tests to public attention; these tests use small sequences of about 20 nucleotides to amplify and detect genetic material. Oligos have been so successful that, since their synthesis was automated, their market has grown steadily: it is estimated that it will reach $14 billion by 2026.

Oligonucleotides have an elegance in their simplicity. In the 1950s, Watson and Crick described the double helix that makes up our genetic code, and the way the bases adenine/thymine and cytosine/guanine pair up. Thanks to this property, antisense therapies can target virtually our entire genome and regulate its expression. Diseases that are difficult to treat, such as spinal muscular atrophy or Duchenne muscular dystrophy, now benefit from therapeutic options (1).

This article does not aim to restate the history of oligonucleotides in the clinic (many reviews are already available in the literature (2), (3), (4)), but to provide a quick overview of what has been developed in this area with a Machine Learning flavour.

We hope that this article will inspire some researchers, and that others may find in it new ideas for research and exploration. At a time when Artificial Intelligence has reached a certain maturity, it is particularly interesting to exploit it to streamline decision-making in R&D projects.

This list is not exhaustive, and if you have a project or article to share with us, please contact us at We will be happy to discuss it and include it in this article.

Using Deep Learning to design PCR primers

As the Covid-19 health crisis has shown, diagnosing the population is essential to control and evaluate a pandemic. With two primers of about twenty nucleotides, a specific sequence can be amplified and detected even at a very low level (the PCR technique is capable of detecting as few as 3 copies of a sequence of interest (5)).

A group from Utrecht University in the Netherlands (6) has developed a CNN (Convolutional Neural Network, a type of neural network particularly effective at image recognition) capable of revealing areas of exclusivity in a genome, which allows highly specific primers to be designed for the target of interest. In their case, they analysed more than 500 genomes of viruses from the Coronavirus family to train the algorithm to classify the different genomes. The primers designed by the model showed efficiency similar to the sequences used in practice. This tool could help develop PCR diagnostics with greater efficiency and speed.
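The network itself is beyond the scope of a short example, but the standard preprocessing step such a CNN relies on – one-hot encoding of the genome sequence into a numerical matrix – can be sketched as follows (an illustrative sketch, not the authors' code):

```python
def one_hot(seq):
    """Encode a DNA sequence as the 4-channel matrix a CNN ingests:
    one row per position, one column per base (A, C, G, T)."""
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
    return [table[base] for base in seq.upper()]

matrix = one_hot("ACGT")
print(matrix)  # [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

Convolutional filters sliding over such matrices detect short sequence motifs in the same way image filters detect edges, which is what makes CNNs a natural fit for genomes.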

Predicting the penetration power of an oligonucleotide

Many peptides improve the penetration of oligonucleotides into cells. These are called CPPs, for Cell-Penetrating Peptides: small sequences of fewer than 30 amino acids. Using a random-forest model, a team from MIT (7) was able to predict the activity of CPPs for phosphorodiamidate morpholino oligonucleotides (PMOs). Although the scope of this model is limited (there are many chemical modifications to date, and PMOs cover only a small fraction of them), it could be extended to larger chemical families. The model was, for example, able to predict whether a CPP would improve the penetration of an oligonucleotide into cells by a factor of three, a prediction confirmed experimentally.
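As an illustration of the kind of input such tree-based models consume (these descriptors are generic examples, not the MIT team's actual feature set), simple physicochemical features can be computed directly from the peptide sequence:

```python
def cpp_features(peptide):
    """Simple physicochemical descriptors of the kind used to train
    tree-based CPP activity models (illustrative feature set)."""
    positives = set("KR")          # lysine and arginine carry positive charge
    hydrophobic = set("AVLIMFWY")  # common hydrophobic residues
    return {
        "length": len(peptide),
        "net_positive": sum(aa in positives for aa in peptide),
        "hydrophobic_frac": round(
            sum(aa in hydrophobic for aa in peptide) / len(peptide), 2
        ),
    }

# A poly-arginine peptide, a classic cationic CPP.
print(cpp_features("RRRRRRRR"))
```

A trained forest then splits on thresholds of such features (charge, length, hydrophobicity) to separate active from inactive peptide-PMO conjugates.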

Optimising therapeutic oligonucleotides

Although oligonucleotides are known to be only weakly immunogenic (8), they do not escape the toxicity associated with all therapies. “Everything is poison, nothing is poison: it is the dose that makes the poison.” – Paracelsus

This last parameter is key to the future of a drug during its development. A Danish group (9) has developed a model capable of predicting the hepatotoxicity of a nucleotide sequence in mouse models. Here again, “only” unmodified and LNA-modified (Locked Nucleic Acid, a chemical modification that stabilises the hybridisation of the therapeutic oligonucleotide to its target) oligonucleotides were analysed. It would be interesting to widen the chemical space studied and thus extend the possibilities of the algorithm; in time, it is this type of model that will reduce attrition in the development of new drugs. From another perspective (10), a model has been developed for optimising the structure of LNA gapmers. Gapmers are hybrid oligonucleotide sequences with two chemically modified ends, resistant to degrading enzymes, and an unmodified central part that can be degraded once hybridised to its target: it is this final cleavage that generates the desired therapeutic effect. Using their model, the researchers were able to predict the gapmer design with the best pharmacological profile.
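To make the design space concrete (the wing and gap lengths below are illustrative parameters, not those of the study), a gapmer can be described by the lengths of its two modified wings and its central gap, and the candidate designs for a given sequence enumerated:

```python
def gapmer_designs(seq, min_wing=2, max_wing=5, min_gap=6):
    """Enumerate (left wing, gap, right wing) splits of an antisense
    sequence: chemically modified wings flank a central DNA gap that
    RNase H can cleave. This is the design space such models rank."""
    designs = []
    for left in range(min_wing, max_wing + 1):
        for right in range(min_wing, max_wing + 1):
            gap = len(seq) - left - right
            if gap >= min_gap:
                designs.append((seq[:left], seq[left:left + gap], seq[-right:]))
    return designs

designs = gapmer_designs("ACGTACGTACGTACGT")  # a 16-mer example
print(len(designs))  # 16 candidate designs
```

A predictive model then scores each candidate on potency and toxicity instead of synthesising and testing all of them, which is where the attrition savings come from.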

Accelerating the discovery of new aptamers

Also known as “chemical antibodies”, aptamers are DNA or RNA sequences capable of recognising and binding a particular target with an affinity comparable to that of a monoclonal antibody. Excellent reviews on the subject are available here (11) or here (12). In the clinic, pegaptanib was the first aptamer approved for use; the compound is indicated for certain forms of AMD (age-related macular degeneration).

Current discovery methods, based on SELEX (Systematic Evolution of Ligands by EXponential enrichment), have made it possible to generate aptamers against targets of therapeutic and diagnostic interest, such as nucleolin or thrombin. Although the potential of the technology is attractive, discovering new sequence/target pairs is difficult and time-consuming. To boost the search for new candidates, an American team (13) trained an algorithm to optimise an aptamer and reduce the size of its sequence while maintaining, or even increasing, its affinity for its target. They proved experimentally that the aptamer generated by the algorithm had higher affinity than the reference candidate while being 70% shorter. The interest here is to keep the experimental part (SELEX) and combine it with these in silico tools to accelerate the optimisation of new candidates.
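As a very rough caricature of the truncation problem (the scoring function below is a toy stand-in, not a trained model), one can search contiguous sub-sequences, shortest first, for the smallest one a predictor still accepts:

```python
def shortest_active(seq, predict_affinity, threshold):
    """Search contiguous truncations of an aptamer, shortest first, and
    return the first one the (stand-in) affinity model still accepts."""
    n = len(seq)
    for length in range(5, n + 1):        # try short candidates first
        for start in range(n - length + 1):
            cand = seq[start:start + length]
            if predict_affinity(cand) >= threshold:
                return cand
    return seq  # no truncation passes: keep the full sequence

# Toy stand-in scorer: fraction of G/C bases. A real model would be
# trained on SELEX enrichment data, not a hand-written rule.
toy_model = lambda s: (s.count("G") + s.count("C")) / len(s)
print(shortest_active("ATATGGGCCGTATAT", toy_model, 0.8))  # "TGGGC"
```

The real work lies in learning `predict_affinity` from SELEX rounds; once such a model exists, the search over truncations is cheap and purely in silico.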

There is no doubt that the future of oligonucleotides is promising, and their versatility is such that they can be found in completely different fields, ranging from DNA-based nanotechnology to CRISPR/Cas technology. These two areas alone could each be the subject of a dedicated article, so vast and exciting are their research horizons.

In our case, we hope that this short article has given you some new ideas and concepts, and inspired you to learn more about oligonucleotides and machine learning.

  1. Bizot F, Vulin A, Goyenvalle A. Current Status of Antisense Oligonucleotide-Based Therapy in Neuromuscular Disorders. Drugs. 2020 Sep;80(14):1397–415.
  2. Roberts TC, Langer R, Wood MJA. Advances in oligonucleotide drug delivery. Nat Rev Drug Discov. 2020 Oct;19(10):673–94.
  3. Shen X, Corey DR. Chemistry, mechanism and clinical status of antisense oligonucleotides and duplex RNAs. Nucleic Acids Res. 2018 Feb 28;46(4):1584–600.
  4. Crooke ST, Liang X-H, Baker BF, Crooke RM. Antisense technology: A review. J Biol Chem [Internet]. 2021 Jan 1 [cited 2021 Jun 28];296. Available from:
  5. Bustin SA, Benes V, Garson JA, Hellemans J, Huggett J, Kubista M, et al. The MIQE Guidelines: Minimum Information for Publication of Quantitative Real-Time PCR Experiments. Clin Chem. 2009 Apr 1;55(4):611–22.
  6. Lopez-Rincon A, Tonda A, Mendoza-Maldonado L, Mulders DGJC, Molenkamp R, Perez-Romero CA, et al. Classification and specific primer design for accurate detection of SARS-CoV-2 using deep learning. Sci Rep. 2021 Jan 13;11(1):947.
  7. Wolfe JM, Fadzen CM, Choo Z-N, Holden RL, Yao M, Hanson GJ, et al. Machine Learning To Predict Cell-Penetrating Peptides for Antisense Delivery. ACS Cent Sci. 2018 Apr 25;4(4):512–20.
  8. Stebbins CC, Petrillo M, Stevenson LF. Immunogenicity for antisense oligonucleotides: a risk-based assessment. Bioanalysis. 2019 Nov 1;11(21):1913–6.
  9. Hagedorn PH, Yakimov V, Ottosen S, Kammler S, Nielsen NF, Høg AM, et al. Hepatotoxic Potential of Therapeutic Oligonucleotides Can Be Predicted from Their Sequence and Modification Pattern. Nucleic Acid Ther. 2013 Oct 1;23(5):302–10.
  10. Papargyri N, Pontoppidan M, Andersen MR, Koch T, Hagedorn PH. Chemical Diversity of Locked Nucleic Acid-Modified Antisense Oligonucleotides Allows Optimization of Pharmaceutical Properties. Mol Ther – Nucleic Acids. 2020 Mar 6;19:706–17.
  11. Zhou J, Rossi J. Aptamers as targeted therapeutics: current potential and challenges. Nat Rev Drug Discov. 2017 Mar;16(3):181–202.
  12. Recent Progress in Aptamer Discoveries and Modifications for Therapeutic Applications | ACS Applied Materials & Interfaces [Internet]. [cited 2021 Jul 25]. Available from:
  13. Bashir A, Yang Q, Wang J, Hoyer S, Chou W, McLean C, et al. Machine learning guided aptamer refinement and discovery. Nat Commun. 2021 Apr 22;12(1):2366.


Blockchain, Mobile Apps: will technology solve the problem of counterfeit drugs?

« Fighting counterfeit drugs is only the start of what blockchain could achieve through creating [pharmaceutical] ‘digital trust’.»

Andreas Schindler, Blockchain Expert

20% of the medicines circulating in the world are counterfeit; most of them either do not contain the right active substance or do not contain it in the right quantity. Representing 200 billion dollars per year, this traffic – 10 to 20 times more profitable for organized crime than heroin – causes the death of hundreds of thousands of people every year, the majority of whom are children whose parents believe they are treating them with real medicine. To fight this scourge, laboratories and international health authorities must form a united front, in which technology could be the keystone.

The problem of counterfeit drugs

It is an almost invisible scourge whose contours are difficult to define, a low-key global epidemic which provokes neither lockdowns nor massive vaccination campaigns, but which nevertheless kills hundreds of thousands of patients every year. Counterfeit medicines, defined by the WHO as “medicines that are fraudulently manufactured, mislabeled, of poor quality, conceal the details or identity of the source, and do not meet defined standards”, generally concern serious diseases such as AIDS, tuberculosis or malaria, and are responsible for the death of approximately 300,000 children under the age of 5 from pneumonia and malaria. In fact, the general term “counterfeit drugs” covers very different products: some contain no active ingredient, some contain active ingredients different from what is indicated on the label, and others contain the indicated Active Pharmaceutical Ingredient (API) in the wrong quantity. Beyond their responsibility for countless human tragedies, counterfeit medicines also store up future problems by increasing antibiotic resistance in regions of the world where health systems are already failing and will probably not be able to cope with this new challenge.

The traffic is also an economic and political problem for countries. Beyond public health considerations, this 200-billion-dollar-a-year trade feeds organized crime networks and imposes a very high cost on health systems. For the pharmaceutical industry, the problems it causes are numerous: a loss of roughly 20% of worldwide sales revenue; an erosion of patients' trust – even though patients usually do not know that the counterfeit drugs they take are not the originals; and, finally, considerable expenditure on fighting the counterfeits.

Initiatives against counterfeit drugs

Counterfeit medicines are usually distributed through highly complex networks, which makes it particularly difficult to curb their spread. In its “Guide for the development of measures to eliminate counterfeit medicines”, the WHO identifies various legal, social and political initiatives that States can put in place to limit the spread of counterfeit medicines. While these recommendations are relevant, they are particularly difficult to implement in regions of the world where countries have few resources and where structures are plagued by endemic corruption. In this article, we will therefore focus on solutions implemented by private companies: start-ups specialized in the fight against counterfeit drugs, and large pharmaceutical companies.

One of the methods used by various start-ups – such as PharmaSecure, based in India, or Sproxil, based in Nigeria and actively collaborating with that country's government – is to take advantage of the populations' widespread access to smartphones to let them identify counterfeit drug boxes, according to the following model: drug manufacturers work with these start-ups to place codes (numerical codes or QR codes) inside the box or on the packaging of the drug, concealed under a surface that must be scratched or removed. Patients can download a free app and scan these codes to verify that the medication is authentic; the apps also give patients advice on their treatments. They function as a trusted third party, certifying to the patient – the final consumer of the drug – that no one has fraudulently substituted the legitimate manufacturer's product.

Figure 1 – Model for drug authenticity verification using mobile apps
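The scratch-code model in Figure 1 can be sketched as a server-side registry of one-time codes. This is a deliberately simplified, hypothetical design (real systems rely on signed codes and hardened back-ends); all names and messages are invented for illustration.

```python
import secrets

class CodeRegistry:
    """Toy server-side registry of one-time verification codes (hypothetical design)."""

    def __init__(self):
        self._codes = {}  # code -> [product, already_verified?]

    def issue(self, product):
        """Manufacturer side: generate the code printed under the scratch surface."""
        code = secrets.token_hex(8)
        self._codes[code] = [product, False]
        return code

    def verify(self, code):
        """Patient side: scan the code with the app and query the registry."""
        entry = self._codes.get(code)
        if entry is None:
            return "UNKNOWN CODE - possible counterfeit"
        if entry[1]:
            return "CODE ALREADY USED - possible counterfeit"
        entry[1] = True
        return "GENUINE: " + entry[0]

registry = CodeRegistry()
code = registry.issue("Amoxicillin 500 mg")
first = registry.verify(code)    # genuine on first scan
second = registry.verify(code)   # a reused code is suspicious
unknown = registry.verify("0000")
```

Marking codes as used on first scan is what makes copied codes detectable: a counterfeiter who reprints a genuine code will trigger the “already used” warning for one of the two buyers.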

The system described above works in much the same way as serialization, whose implementation began several years ago and which is described in Delegated Regulation (EU) 2016/161 – except that the verification is performed by the patient rather than by the pharmacist.

Other mobile apps, such as CheckFake and DrugSafe, are developing a different verification system, taking advantage of the smartphone's camera to check that the shape, content and color of drug packaging comply with the original. Finally, another category of mobile apps analyses the shape and color of the drugs themselves, in order to identify which tablets they are and to certify that they are authentic.

These different solutions have a number of qualities, in particular their ease of deployment and of use by patients all over the world. On the other hand, they have the disadvantage of being locked in a speed race with counterfeiters, who are pushed to produce ever more realistic counterfeits. Moreover, these technologies can hardly be applied to other circuits, such as securing the entire supply chain or tracking the circuit of drugs within hospitals. This is why many large pharmaceutical groups, such as Merck or Novartis, are betting on a different technology: the Blockchain. Explanations.

Presentation of the Blockchain technology

Blockchain is a technology conceived in 2008, on which cryptocurrencies have been built ever since. It is a cryptographically secured technology for storing and transmitting information without a centralized control body: its main objective is to allow a computer protocol to act as a vector of trust between different actors without an intermediary third party. The Blockchain mechanism allows the participating actors to reach a unanimous agreement on the content of the data and to prevent its subsequent falsification. The historical consensus method is so-called “proof of work”: a number of actors provide computing power to validate the arrival of new information. In the context of cryptocurrencies, these actors are called miners: very powerful, energy-hungry computing machines are all given the same complex mathematical problem to solve at the same time, and the first to succeed validates the transaction and is paid for it. Each of the participants, called “nodes”, therefore holds an updated copy of the ledger that is the Blockchain. The way to corrupt a proof-of-work blockchain is to gather enough computing power to carry out a so-called “51% attack”, i.e., to steer the consensus towards a falsification of the chain – double spending in particular. In practice, this attack is hardly conceivable on blockchains such as Bitcoin, as the computing power required would be phenomenal (perhaps one day the quantum computer will make what we currently consider to be cryptography obsolete, but that is another debate…). Other validation techniques now exist, such as proof of stake or proof of storage; they were essentially designed to address the scalability and energy-sustainability issues of blockchains.

Figure 2 – Diagram of how to add a block to a blockchain.
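The block-addition mechanism in Figure 2 can be illustrated with a deliberately simplified proof-of-work sketch – toy difficulty, no network and no transaction format, just the hash-linking idea:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash of a block's content."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash, data, difficulty=3):
    """Search for a nonce such that the block's hash starts with `difficulty` zeros."""
    block = {"prev": prev_hash, "data": data, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# chain two blocks together: each block commits to the hash of the previous one
genesis = mine_block("0" * 64, "genesis")
second = mine_block(block_hash(genesis), "shipment 42: manufacturer -> wholesaler")
```

Because every block stores the hash of its predecessor, altering an old block changes its hash and visibly breaks the chain for every node; raising `difficulty` is what makes the nonce search (and thus a 51% attack) computationally expensive.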

Conceived in the aftermath of the 2008 financial crisis, this technology has a strong political connotation: Bitcoin's philosophy, for example, is to allow individuals to free themselves from banking and political control systems. Thus, the original blockchains, such as Bitcoin, are said to be “open”: anyone can read and write the chain's registers. Over time, and for the greater convenience of private companies, semi-closed blockchains (everyone can read but only a centralizing organization can write) and closed blockchains (reading and writing are reserved for a centralizing organization) have been developed. These new forms of blockchain move considerably away from the original philosophy, and one can legitimately question their relevance: they keep some of the blockchain's disadvantages in terms of difficulty of use while retaining the problems associated with a centralized database – a single entity can voluntarily decide to corrupt it, or can be hacked.

This closed configuration often allows for greater scalability but raises a question that is as much technological as it is philosophical: is a blockchain, when fully centralized, still a blockchain?

Prospects for the use of technology in the fight against counterfeit drugs

At a time when trust is more than ever a central issue for the pharmaceutical industry, which sees its legitimacy and honesty questioned relentlessly, it is logical that the players in this sector are interested in this technology of trust par excellence. Among the various use cases, which we will no doubt come back to in future articles, the fight against counterfeit drugs is one of the most promising and most important in terms of human lives potentially saved. For example, Merck recently began collaborating with Walmart, IBM, and KPMG on an FDA-led pilot project to use blockchain to allow patients to track the entire pathway of the medication they take. The concept is already being functionally tested in Hong Kong on Gardasil, using mobile applications downloaded by pharmacists and patients. The entire drug supply chain is thus built around the blockchain, making it possible to retrieve and assemble a large amount of data concerning, for example, shipping dates or storage conditions and temperatures. The aforementioned consortium is also exploring the use of Non-Fungible Tokens (NFTs): unique, non-interchangeable digital tokens. Each box of medication produced would have an associated NFT, which would follow the box through its circuit, from manufacturer to wholesaler, from wholesaler to pharmacist and from pharmacist to patient. In the future, each patient would thus receive an NFT along with the medication, certifying the inviolability of its origin; no actor in the supply chain could fraudulently insert counterfeit drugs, since they would lack the associated NFTs. This future of increased drug safety is appealing, but it will only be achievable after significant work, on the one hand to educate stakeholders and on the other to build digital interfaces accessible to all patients.
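To make the NFT idea concrete, here is a minimal, hypothetical sketch of a token ledger in which each box has one token that must be handed over at every step of the chain; a real implementation would live in a smart contract on a blockchain, not in a Python class:

```python
class TokenLedger:
    """Toy NFT-style ledger: one token per medicine box, handed over at each step."""

    def __init__(self):
        self.owner = {}    # token_id -> current holder
        self.history = {}  # token_id -> list of successive holders

    def mint(self, token_id, manufacturer):
        assert token_id not in self.owner, "token already exists"
        self.owner[token_id] = manufacturer
        self.history[token_id] = [manufacturer]

    def transfer(self, token_id, sender, receiver):
        if self.owner.get(token_id) != sender:
            raise ValueError("sender does not hold this token: box may be counterfeit")
        self.owner[token_id] = receiver
        self.history[token_id].append(receiver)

ledger = TokenLedger()
ledger.mint("BOX-0001", "manufacturer")
for sender, receiver in [("manufacturer", "wholesaler"),
                         ("wholesaler", "pharmacist"),
                         ("pharmacist", "patient")]:
    ledger.transfer("BOX-0001", sender, receiver)
```

The key property is that a transfer fails unless the sender actually holds the token, so a counterfeit box injected mid-chain has no token to hand over, and the full custody history remains auditable.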


With the emergence of e-commerce and its ever-increasing ease of access, the problem of counterfeit drugs has exploded in recent years, and the pharmaceutical ecosystem will have to mobilize and innovate in order to curb it, as well as to restore the trust it has eroded. Several fascinating initiatives using blockchain technology are currently being carried out by various stakeholders in the health sector, and we can see in these projects the outline of a potential solution to drug counterfeiting; we must, however, consider them with a critical mind. Since the crypto-currency boom of 2017, the temptation to market the buzzword “blockchain” can be strong – even, unfortunately, when the need could be perfectly met by a centralized database. Can we go so far as to think, as some specialists of this technology do, that blockchain is only viable and useful when it is used for financial transfers? The debate is open, and there is no doubt that the future will quickly bring an answer!


Dismantling the scientific publishers’ cartel to free innovation ?

” Text mining of academic papers is close to impossible right now. “

Max Häussler – Bioinformatics researcher, UCSC

Faced with the explosion in the number of published scientific articles and the exponential increase in computing capacity, the way we read the scientific literature in the future will probably have nothing to do with today's tedious, slow and repetitive reading work, and will undoubtedly rely more and more on intelligent text-mining techniques. By multiplying our analytical capacities, these techniques make it possible – and will make it even easier in the future – to unleash creativity and to bring about scientific innovation faster and cheaper. For the time being, however, this bright outlook faces a major obstacle: the scientific publishing cartel – one of the world's most lucrative industries, determined not to jeopardize its enormous profits.

Text-mining and its necessity:

Text-mining is a technology that aims to obtain key and previously unknown information very quickly from a very large quantity of text – in this case the biomedical literature. This technology is multi-disciplinary in nature, using machine learning, linguistic and statistical techniques.

The purpose of this article is not to provide a technical study of text-mining, but a full understanding of this technology's potential nevertheless requires a description of its main steps:

  • Selection and collection of the texts to be analyzed: this first step consists of using search algorithms to automatically download abstracts of interest from scientific article databases (such as PubMed, for example, which alone references 12,000,000 scientific articles). A search of the grey literature can also be conducted in order to be as exhaustive as possible.
  • Preparation of the texts to be analyzed: the objective of this step is to put the texts into a predictable, analyzable form suited to the task at hand. A whole set of techniques exists for this step, which remove the “noise” from the text and “tokenize” the words within the sentences.
  • Analysis of the data from the texts: the analysis largely depends on the preparation of the text. Different statistical and data-science techniques can be used: support vector machines, hidden Markov models or, for example, neural networks.
  • Data visualization: the issue of data visualization is probably more important than one might think. Depending on the chosen option – tables or 3D models, for example – the information and meta-information available to the model's user will be more or less relevant and explanatory.
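The preparation and analysis steps above can be sketched in a toy form – a hypothetical stopword list and mini-corpus, with simple co-occurrence counting standing in for the statistical models mentioned:

```python
import re
from collections import Counter
from itertools import combinations

STOPWORDS = {"the", "of", "in", "a", "and", "with", "is", "to"}

def tokenize(text):
    """Preparation step: lower-case, split into word tokens, drop stopword 'noise'."""
    return [t for t in re.findall(r"[a-z0-9-]+", text.lower()) if t not in STOPWORDS]

def cooccurrences(abstracts, terms):
    """Analysis step (toy): count abstracts in which two terms of interest co-occur."""
    counts = Counter()
    for text in abstracts:
        present = terms & set(tokenize(text))
        for pair in combinations(sorted(present), 2):
            counts[pair] += 1
    return counts

# hypothetical mini-corpus of downloaded abstracts
abstracts = [
    "Nucleolin is overexpressed in cancer cells.",
    "The role of nucleolin in cancer progression.",
    "Thrombin inhibitors in coagulation.",
]
pairs = cooccurrences(abstracts, {"nucleolin", "cancer", "thrombin"})
```

On a real corpus of millions of abstracts, such co-occurrence signals are what seed the protein–pathology association hypotheses mentioned below, before more sophisticated models take over.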

Text-mining has already proven its usefulness in biomedical scientific research: among other things, it has been used to discover associations between proteins and pathologies; to understand interactions between proteins or to elucidate the docking of certain drugs to their therapeutic target. However, most of the time, this technology is only implemented on the abstracts of articles, which considerably reduces its power in terms of reliability of the obtained data as well as the number of its applications.

So why not use the millions of scientific articles available online? New research hypotheses could be formulated; new therapeutic strategies could be created. This is technologically within reach, but scientific publishers seem, for the moment, to have decided otherwise. Here are some explanations.

The problems posed by scientific publishers:

When they emerged at the end of the Second World War, scientific publishers had a real utility in the diffusion of science: the various learned societies had only weak means to disseminate the work and conclusions of their members, and publication in paper journals was too expensive for most of them. Since the birth of this industry, and despite the considerable changes the Internet has brought to the transmission of scientific knowledge, its business model has not evolved. It has become anachronistic, bringing gross margins to percentages that make online advertising giants like Google or Facebook look like unprofitable businesses. Scientific publishers are indeed the only industry in the world that obtains its raw material (scientific articles) for free from its customers (scientists from all over the world), and whose transformation step (peer review) is also carried out by those same customers on a voluntary basis.


The triple-payment system set up by scientific publishers.

Scientific publishers have set up an “odd triple-payment system”, allowing private entities to capture public money intended for research and teaching. States finance the research leading to the writing of scientific articles, pay the salaries of the scientists who voluntarily perform peer review, and finally pay once again, through the subscriptions of universities and research laboratories, to access the scientific knowledge they have already financed twice! Another, parallel model has also been developing for a few years: the author-pays model, in which researchers pay publication fees to make their work more easily accessible to readers… are we heading towards a quadruple-payment system?

The deleterious consequences of the system put in place by scientific publishers are not only financial: they also impact the quality of the scientific publications produced, and therefore the validity of any artificial-intelligence models based on the data in these articles. The subscription-based business model leads publishers to favor spectacular, deeply innovative discoveries over confirmatory work, which pushes some researchers, driven by the race for the “impact factor”, to commit fraud or to publish statistically unconsolidated results very early on. This is one of the reasons for the reproducibility crisis that science is currently experiencing, and also one of the possible causes of the under-publication of negative, yet highly informative, results: one out of every two clinical trials never results in a publication.

Finally, and this is the point that interests us most in this article, scientific publishers are an obstacle to the development of text-mining on the huge databases of articles they possess, which ultimately has a colossal impact on our knowledge and understanding of the world, as well as on the development of new drugs. It is currently extremely difficult to perform text-mining on complete scientific articles at scale because the publishers do not allow it, even when you have a subscription and are legally entitled to read the articles. Several countries have legislated so that research teams implementing text-mining are no longer required to seek permission from scientific publishers. In response to these legal developments, scientific publishers, taking advantage of their oligopolistic situation, have set up completely artificial technological barriers: for example, it has become impossible to download articles rapidly and automatically, the maximum rate generally imposed being one article every 5 seconds – at which pace it would take about 5 years to download all the articles related to biomedical research. The point of this system, for scientific publishers, is to be able to hold to ransom – the term is strong, but it is the right one – the big pharmaceutical companies that wish to have these artificial technical barriers removed for their research projects.

The current system of scientific publications, as we have shown, benefits only a few companies at the expense of many actors – researchers from all over the world, and even more when they work from disadvantaged countries, governments and taxpayers, health industries and finally, at the end of the chain, patients who do not benefit from the full potential of biomedical research. Under these conditions, many alternatives to this model are emerging, some of which are largely made possible by technology.

Towards the disruption of scientific publishing?

” You only really destroy what you replace. “

Napoléon III – 1848

Doesn’t every innovation initially come from a form of rebellion? This has, so far, been especially true of the various initiatives undertaken to unleash the potential of free and open science, as these actions have often taken the form of piracy operations. Between manifestos and petitions – notably the boycott called for by the mathematician Timothy Gowers on the basis of the text “The Cost of Knowledge” – the protest movements led by scientists and the open-source platforms they have created have been numerous. However, few actions have had as much impact as those of Aaron Swartz, one of the main theorists of open source and open science, who tragically committed suicide at the age of 26, one month before a trial in which he faced 35 years of imprisonment for having pirated 4.8 million scientific articles; or, of course, those of Alexandra Elbakyan, the famous founder of the Sci-Hub website, which provides free – and illegal – access to most of the scientific literature.

Aaron Swartz and Alexandra Elbakyan

More recently, proponents of the open-source movement have adapted to the rise of text-mining, notably through Carl Malamud's project, which takes advantage of a legal grey area to offer academic research teams the chance to mine the gigantic database of 73 million articles he has built. The solution is interesting but not fully mature: for legal reasons, the database is for the moment not accessible over the Internet; one must travel to India, where it is hosted, in order to use it.

These initiatives rely on more or less legal ways of capturing articles after their publication by scientific publishers. For a more sustainable alternative, the ideal would be to move up the value chain and work upstream with researchers. The advent of blockchain technology – a technology for storing and exchanging information that is decentralized, transparent and therefore highly secure, and to which future Resolving Pharma articles will return in detail – is thus, for many researchers and thinkers on the subject, a great opportunity to definitively replace scientific publishers with a fairer system that liberates scientific information.

The transformation of the system will probably be slow – the prestige that researchers accord to the names of the large journals of the oligopoly will persist over time – and perhaps it will not happen at all; but the Blockchain, if successfully implemented, has the capacity to address the issues raised earlier in this article in a number of ways:

A fairer financial distribution

As we have seen, the business model of scientific publishers is not very virtuous, to put it mildly. At the other end of the spectrum, Open Access, despite its undeniable and promising qualities, can also pose problems, being sometimes devoid of peer review. The use of a cryptocurrency dedicated to the scientific publishing world could eliminate the triple-payment system, as each actor would be paid at the fair value of their contribution. A researcher's institution would receive a certain amount of cryptocurrency when the researcher publishes, as well as when he or she peer-reviews another paper; institutions would, in turn, pay in cryptocurrency for access to publications. Beyond the financial aspects, the copyright that researchers currently waive would be automatically registered in the blockchain for each publication, so research institutions would retain the right to decide at what price the fruits of their labor are made available. A system of this kind would allow, for example, anyone wishing to use a text-mining tool to pay a certain amount of this cryptocurrency, which would go to the authors and reviewers of the articles used. Large-scale text-mining would then become a commodity.
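A minimal sketch of such a payment flow, assuming a hypothetical token and an arbitrary 70/30 split between authors' and reviewers' institutions (both pure assumptions for illustration):

```python
class PublicationLedger:
    """Toy per-access micropayment: a hypothetical token split between contributors."""
    AUTHOR_SHARE = 0.7   # arbitrary split, for illustration only
    REVIEWER_SHARE = 0.3

    def __init__(self):
        self.balances = {}
        self.papers = {}  # paper_id -> (authors, reviewers)

    def register(self, paper_id, authors, reviewers):
        self.papers[paper_id] = (authors, reviewers)

    def access(self, paper_id, price=1.0):
        """Each read (or text-mining query) pays the contributors directly."""
        authors, reviewers = self.papers[paper_id]
        for a in authors:
            self.balances[a] = self.balances.get(a, 0.0) + price * self.AUTHOR_SHARE / len(authors)
        for r in reviewers:
            self.balances[r] = self.balances.get(r, 0.0) + price * self.REVIEWER_SHARE / len(reviewers)

ledger = PublicationLedger()
ledger.register("paper-1", authors=["inst-a", "inst-b"], reviewers=["inst-c", "inst-d"])
ledger.access("paper-1")
ledger.access("paper-1")
```

In a blockchain setting this logic would be a smart contract, so the split is enforced by the protocol itself rather than by a publisher acting as intermediary.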

Tracking reader usage and defining a real « impact factor »

Currently, even if we try to count citations, the use of scientific articles is difficult to quantify, although it could be an interesting metric for the different actors of the research ecosystem. The Blockchain would make it possible to trace each transaction precisely. This tracing of readers would also bring a certain form of financial justice: one can imagine that, through a Smart Contract, a simple reading would not cost exactly the same amount of cryptocurrency as a citation of the article. It would thus become possible to quantify the real impact of a publication and to replace the “impact factor” with the real-time distribution of “reputation tokens” to scientists, which could also be designed so as not to discourage the publication of negative results (to alleviate this problem, researchers have moreover set up a platform dedicated to the publication of negative results).

With the recent development of Non-Fungible Tokens (NFT), we can even imagine tomorrow the emergence of a secondary market for scientific articles, which will be exchanged from user to user, as is already possible for other digital objects (video game elements, music tracks, etc.).

A way to limit fraud

Currently, the peer-review system, in addition to being particularly slow (on average, 12 months elapse between the submission and the publication of a scientific article, compared with two weeks on a Blockchain-based platform such as ScienceMatters), is completely opaque to the final reader of the article, who has access neither to the names of the researchers who took part in the process nor to the chronological iterations of the article. Through its unforgeable, chronological structure, the Blockchain could record these successive revisions. This topic would deserve an article of its own, but the Blockchain could also record the data and metadata that led to the article's conclusions – for example those of preclinical or clinical trials – and thus prevent fraud while increasing reproducibility.
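The idea of recording successive revisions in an unforgeable, chronological structure can be sketched as a hash-chained log. This is a simplification (a real blockchain adds consensus and distribution on top of this linking), and all field names are hypothetical:

```python
import hashlib
import json

def record_revision(log, content, author, reviewer=None):
    """Append a revision whose hash commits to the previous entry (tamper-evident log)."""
    entry = {
        "prev": log[-1]["hash"] if log else "0" * 64,
        "content": content,
        "author": author,
        "reviewer": reviewer,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify_log(log):
    """Recompute every hash; editing any earlier revision breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
record_revision(log, "v1: initial submission", "author-a")
record_revision(log, "v2: revised after review", "author-a", reviewer="reviewer-x")
```

Any reader can replay `verify_log` to confirm that no revision was silently rewritten after the fact, which is precisely the transparency the current peer-review process lacks.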

Manuel Martin, one of the co-founders of Orvium, a Blockchain-based scientific publishing platform, believes: “by establishing a decentralized and competitive marketplace, blockchain can help align the goals and incentives of researchers, funding agencies, academic institutions, publishers, companies and governments.”

The use of the potential of artificial intelligence in the exploitation of scientific articles is an opportunity to create a real collective intelligence, to make research faster and more efficient, and probably to cure many diseases around the world. The lock that remains to be broken is not technological but organizational. Eliminating scientific publishers from the equation will be a fight as bitter as it is necessary, one that should bring together researchers, governments and big pharmaceutical companies, whose interests are aligned. Even if we may be relatively pessimistic about these actors' capacity to cooperate, we cannot doubt the fantastic power of transparency of the Blockchain, which, combined with the determination of entrepreneurs like the founders of the Pluto, Scienceroot, ScienceMatters or Orvium platforms, will be a decisive tool in this fight to revolutionize access to scientific knowledge.

The words and opinions expressed in this article are those of the author. The other authors involved in Resolving Pharma are not associated with it.

To go further:
Stephen Buranyi; Is the staggeringly profitable business of scientific publishing bad for science?; The Guardian; 27/06/2017
The Cost of Knowledge
Priyanka Pulla; The plan to mine the world’s research papers; Nature; Volume 571; 18/07/2019; 316-318
Bérénice Magistretti; Disrupting the world of science publishing; TechCrunch; 27/11/2016
Daniele Fanelli; Opinion: Is science really facing a reproducibility crisis, and do we need it to?; PNAS; March 13, 2018; 115(11):2628-2631
D.A. Eisner; Reproducibility of science: Fraud, impact factors and carelessness; Journal of Molecular and Cellular Cardiology; Volume 114; January 2018; Pages 364-368
Chris Hartgerink; Elsevier stopped me doing my research; 16 November 2015
Joris van Rossum; The blockchain and its potential for science and academic publishing; Information Services & Use 38 (2018) 95-98; IOS Press
Douglas Heaven; Bitcoin for biological literature; Nature; 07/02/2019; Volume 566
Manuel Martin; Reinvent scientific publishing with blockchain technology
Sylvie Benzoni-Gavage; The Conversation; Comment les scientifiques s’organisent pour s’affranchir des aspects commerciaux des revues



Eroom’s Law, the pharmaceutical industry of tomorrow and Resolving Pharma

372: that is the number of days between the identification of the first Covid-19 case in Wuhan and the vaccination of Margaret Keenan in central England, the first person to receive a dose of an approved Covid vaccine. Never in its history has humanity found a solution to a new disease so quickly. However, this dazzling success of the pharmaceutical industry should not blind us: the development of new drugs is becoming increasingly inefficient. More than ever, initiatives that bring technology into the research and development of new drugs are essential to sustain innovation.

The pharmaceutical industry is broken. With its current means and methods of R&D, it will not be able to replicate the medical progress it has made in past decades. Each new molecule brought to the market will inevitably cost more to develop than the previous one. This is what Eroom's Law states, an empirical description of the decline in the efficiency of the pharmaceutical industry (1). For example, the R&D profitability of the world's 12 largest pharmaceutical groups reached an all-time low of 1.9% in 2018, whereas it was still 10.1% in 2010 (2).

Figure 1 – An illustration of the decreasing efficiency of the pharmaceutical industry: every 9 years, the number of drugs approved by the FDA per billion dollars spent on R&D decreases by half (1).
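The halving described in the figure can be written as a simple exponential decay. The sketch below is purely illustrative: the 9-year half-life comes from the figure, while the starting year and initial efficiency are assumed values chosen only to show the shape of the curve.

```python
# Illustrative sketch of Eroom's law as stated in Figure 1: the number of
# drugs approved by the FDA per billion dollars of R&D spending halves
# every 9 years. START_YEAR and START_EFFICIENCY are assumptions for
# illustration; only the constant half-life is taken from the figure.

HALF_LIFE_YEARS = 9
START_YEAR = 1950
START_EFFICIENCY = 30.0  # assumed: approvals per $1B of R&D at START_YEAR

def drugs_per_billion(year: int) -> float:
    """Approvals per $1B of R&D spend under a constant 9-year half-life."""
    elapsed = year - START_YEAR
    return START_EFFICIENCY * 0.5 ** (elapsed / HALF_LIFE_YEARS)

if __name__ == "__main__":
    for year in (1950, 1968, 1986, 2004):
        print(year, round(drugs_per_billion(year), 2))
```

After each 9-year period the efficiency is exactly half the previous value, which is why the decline looks linear on the logarithmic axis typically used to plot it.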

Despite the many scientific advances we have witnessed in recent years (e.g. larger chemical libraries, identification of new therapeutic targets through DNA sequencing, three-dimensional protein databases, high-throughput screening, the use of transgenic animals), and despite the fact that these allow us both to produce more drug candidates and to select them with greater accuracy, various structural problems in the pharmaceutical industry have, over the years, led to a considerable increase in the amount of money needed to bring a new molecule to the market.

A quick review of the literature helped us to identify some causes of this phenomenon:

  • The structurally incremental nature of each new product proposed by the pharmaceutical industry: to be marketed and reimbursed, each new drug must be superior, or at least non-inferior, to the reference treatment for the targeted disease.
  • The gradual tightening of regulations, which is difficult to fight and is, for patients and health systems, most likely a good thing.
  • The tendency of pharmaceutical companies to over-invest, based on past returns on investment.
  • The concentration of research projects in therapeutic areas corresponding to unmet medical needs, with higher failure rates and less well understood biological mechanisms (3).

From an economic point of view, it is conceivable that the cost of capital (the rate of return required by capital providers, given the remuneration they could obtain from an investment with the same risk profile on the market) will become higher than the expected return on R&D: mechanically, available capital will decrease and companies will cut back on their research budgets, which will weaken the position of pharmaceutical companies in the drug value chain.
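The mechanism described above can be sketched with a standard net-present-value calculation. The cash flows below are hypothetical numbers invented for illustration; the point is only that the same project is worth funding at a low discount rate but not once the cost of capital exceeds the project's expected return.

```python
# Minimal sketch of the cost-of-capital argument, with hypothetical
# numbers: a project is funded only if its net present value (NPV),
# discounted at the firm's cost of capital, is positive.

def npv(rate: float, cash_flows: list[float]) -> float:
    """NPV of cash flows, where cash_flows[t] occurs at year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical drug programme (in $M): $2B spent over four years of
# development, then ten years of $300M revenue -- an internal rate of
# return of roughly 6% under these assumed figures.
flows = [-500.0] * 4 + [300.0] * 10

print(npv(0.019, flows))  # at a 1.9% discount rate: positive, fundable
print(npv(0.10, flows))   # at a 10% cost of capital: negative, cut
```

When the required rate of return rises above the project's internal rate of return, the NPV flips sign and capital rationally flows elsewhere, which is exactly the squeeze on research budgets described above.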

Faced with this bleak outlook, the pharmaceutical industry has several possible answers: developing new models of collaboration with biopharmaceutical companies, subcontracting to specialized players, adopting risk-averse strategies, but also, and above all, developing new methods of innovation. This last point will be the focus of our attention.

Thus, Resolving Pharma will set out, first through a newsletter, to document the various technologies that can make the development of new therapeutics, and the treatment of patients, more efficient. In response to this problem, Resolving Pharma will attempt to unite diverse and complementary actors around a bold and radical approach to innovation and entrepreneurship. Topics will include artificial intelligence, blockchain, quantum computing, 3D printing and many others. Each issue, through articles and interviews, will explore the opportunities that these disruptive technologies bring, or could bring, to a particular field of therapeutic development. It will also highlight the emergence of the «PharmaTech» field: technology companies providing services to the pharmaceutical industry.

The fight against the seeming inevitability of Eroom's Law is immense and uncertain, but it is our responsibility as health professionals to wage it, for the tens of millions of patients around the world suffering from incurable diseases, for whom research and science are the only hope of a better future. Every long journey begins with a first step, and Resolving Pharma is ours. We will see where it leads us.

(1) Scannell et al.; «Diagnosing the decline in pharmaceutical R&D efficiency»; Nature Reviews Drug Discovery; Volume 11; March 2012
(2) Unlocking R&D productivity – Measuring the return from pharmaceutical innovation 2018; Deloitte Centre for Health Solutions; 2019
(3) Pammolli et al.; «The productivity crisis in pharmaceutical R&D»; Nature Reviews Drug Discovery; Volume 10
