
Introduction to DeSci

How Science of the Future is being born before our eyes

« [DeSci] transformed my research impact from a low-impact virology article every other year to saving the lives and limbs of actual human beings » Jessica Sacher, Phage Directory co-founder

In a previous article, one of the very first published on Resolving Pharma, we looked at the problems posed by the centralizing role of scientific publishers, which, in addition to raising financial and ethical issues, acts as a brake on innovation and scientific research. At the time, beyond making this observation, we proposed ways of changing this model, mainly using NFTs and the Blockchain. For several months now, thanks to the popularization of Web3 and DAOs, initiatives have been emerging from the four corners of the world in favour of a science that facilitates collective intelligence, redesigns the methods of funding and scientific publication and, ultimately, considerably shortens the path between the laboratory and patients. It is time to explore this revolution, still in its infancy, known as DeSci, for Decentralized Science.

The necessary emergence of DeSci

One story that illustrates the inefficiencies of current science is often taken as an example in the DeSci world: that of Katalin Karikó, a Hungarian biochemist whose research from the 1990s onwards on in vitro-transcribed messenger RNA would, a few decades later, form the basis of several vaccines against Covid-19. Despite the innovative nature of her work, Karikó was unable to obtain the research grants necessary to pursue her projects because of institutional politics: the University of Pennsylvania, where she was based, had chosen to prioritize research on therapeutics targeting DNA directly. This lack of resources led to a lack of publications, and Karikó was demoted in the hierarchy of her research unit. This example shows the deleterious consequences of centralized organization on funding allocation (mainly controlled by public institutions and private foundations) and on the reputation of scientists (controlled by scientific publishers).

How many researchers spend more time looking for funding than working on research topics? How many applications do they have to fill in to access funding? How many promising but too risky, or unconventional, research projects are abandoned for lack of funding? How many universities pay scientific publishers a fortune to access the scientific knowledge they themselves have helped to establish? How many results, sometimes perverted by the publication logic of scientific journals, turn out to be non-reproducible? With all the barriers to data exchange related to scientific publication, is science still the collective intelligence enterprise it should be? How many scientific advances that can be industrialized and patented will not reach the market because of the lack of solid and financed entrepreneurial structures to support them (although considerable progress has been made in recent decades to enable researchers to create their own start-ups)? 

DeSci, which we could define as a system of organizing Science that relies on Web3 technologies and tools to allow anyone to finance and take part in research and its commercialization in exchange for a return on investment or remuneration, proposes to address all the problems mentioned above.

This article will first look at the technical foundations of Decentralized Science and then explore some cases in which decentralization could improve Science efficiency.

Understanding Web3, DAOs and Decentralized Science

In the early days of the Web, there were very high barriers to entry for users wishing to post information: before blogs, forums and social networks, one had to be able to write the code for one’s website or pay someone to do it in order to share content. 

With the advent of blogs and social networks, as we mentioned, Web2 took on a different face: expression became considerably easier. On the other hand, it has been accompanied by a great deal of centralization: social networking platforms now possess the content that their users publish and exploit it commercially (mostly through advertising revenue) without paying them a cent.

Web3 is a new version of the Internet that introduces the notion of ownership thanks to the Blockchain: whereas Web2 was built on centralized infrastructures, Web3 records data exchanges on a Blockchain, where they can generate remuneration in cryptocurrencies that have financial value and, in certain cases, confer decision-making power over the platforms contributors use. Web3 is therefore a way of marking the ownership of content or easily rewarding a user's action, and it is without doubt the most creative version of the Internet to date.

Finally, we cannot talk about Web3 without talking about Decentralized Autonomous Organizations (DAOs). These organizations are described by Vitalik Buterin, the iconic co-founder of the Ethereum blockchain, as: “entities that live on the Internet and have an autonomous existence, while relying on individuals to perform the tasks it cannot do itself”. In more down-to-earth terms, they are virtual assemblies whose rules of governance are automated and transparently recorded in a blockchain, enabling their members to act collectively, without a central authority or trusted third party, and to take decisions according to rules defined and recorded in smart contracts. Their aim is to make collective decision-making and action simpler, more secure, transparent and tamper-proof. DAOs have not yet revealed their full potential, but they have already shown that they can operate as decentralized and efficient investment funds, companies or charities. In recent months, science DAOs have emerged, based on two major technological innovations.
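To make these mechanics concrete, here is a minimal sketch, in Python rather than an on-chain language, of the kind of token-weighted voting rules a DAO's smart contracts encode. The class, the quorum rule and all names are illustrative assumptions, not the code of any real DAO:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0

class MiniDAO:
    """Toy model of DAO governance rules. On a real chain these rules
    live in a smart contract and every vote is a recorded transaction."""

    def __init__(self, quorum: int):
        self.quorum = quorum              # minimum total votes for validity
        self.tokens: dict[str, int] = {}  # member -> governance tokens
        self.proposals: list[Proposal] = []

    def join(self, member: str, tokens: int) -> None:
        self.tokens[member] = self.tokens.get(member, 0) + tokens

    def propose(self, description: str) -> int:
        self.proposals.append(Proposal(description))
        return len(self.proposals) - 1    # proposal id

    def vote(self, member: str, proposal_id: int, support: bool) -> None:
        weight = self.tokens[member]      # token-weighted voting
        p = self.proposals[proposal_id]
        if support:
            p.votes_for += weight
        else:
            p.votes_against += weight

    def passed(self, proposal_id: int) -> bool:
        p = self.proposals[proposal_id]
        total = p.votes_for + p.votes_against
        return total >= self.quorum and p.votes_for > p.votes_against
```

In a real DAO, each of these calls would be a blockchain transaction, so membership, token balances and votes would be publicly auditable rather than held in a Python object.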

The technological concepts on which DeSci relies:

To understand the inner workings of DeSci and especially its immense and revolutionary potential, it is important to clarify two concepts, which are rather uncommon in the large and growing Web3 domain, but which lie at the heart of a number of DeSci projects:

  • IP-NFTs: The concept of IP-NFTs was developed by the teams of the company Molecule (one can find their interview on Resolving Pharma). It is a meeting point between IP (intellectual property) and NFTs (non-fungible tokens): it allows scientific research to be tokenized. This means that a representation of a research project is placed on the Blockchain in the form of an exchangeable NFT. A legal agreement is automatically made between the investors (buyers of the NFT) and the scientist or institution conducting the research. The owners of the NFT will be entitled to remuneration for licensing the intellectual property resulting from the research or creating a start-up from this intellectual property.

Figure 1 – Operating diagram of the IP-NFT developed by Molecule (Source: https://medium.com/molecule-blog/molecules-biopharma-ipnfts-a-technical-description-4dcfc6bf77f8)

  • Data-NFTs: Many Blockchain projects are concerned with data ownership, but one of the most successful is Ocean Protocol. A Data-NFT represents a copyright (or an exclusive licence) registered on the Blockchain and relating to a dataset. A user can thus exploit their data in several ways: by charging other users for temporary licences, by selling their datasets, or by pooling them with other datasets in a “Data Union”.

These two concepts make it possible to make intellectual property liquid, and thus to create new models of financing and collaboration. To take a simple example, a researcher can present a project and raise funds from investors even before a patent is filed. In exchange, the investors have an IP-NFT that allows them to benefit from a certain percentage of the intellectual property and revenues that will potentially be generated by the innovation. 
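As an illustration of what "liquid intellectual property" means in practice, the following Python sketch models the pro-rata revenue split that an IP-NFT agreement can encode. It is a toy model under our own assumptions, not Molecule's actual implementation, which combines a legal contract with an on-chain NFT:

```python
class FractionalIP:
    """Toy model of fractionalized IP ownership: licensing revenue is
    distributed pro rata to holders of shares in the research project.
    Names and mechanics are illustrative, not any real protocol."""

    def __init__(self, total_shares: int):
        self.total_shares = total_shares
        self.holders: dict[str, int] = {}  # investor -> shares held

    def issue(self, investor: str, shares: int) -> None:
        """Sell shares to an investor, e.g. before any patent is filed."""
        assert sum(self.holders.values()) + shares <= self.total_shares
        self.holders[investor] = self.holders.get(investor, 0) + shares

    def distribute_revenue(self, amount: float) -> dict[str, float]:
        """Split a licensing payment pro rata among current holders."""
        return {holder: amount * shares / self.total_shares
                for holder, shares in self.holders.items()}
```

For instance, a researcher could issue 100 shares, sell 60 to one investor and 40 to another, and a 1,000-unit licensing payment would then be split 600/400 automatically, which is the alignment of interests the article describes.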

Let’s now turn to some DeSci examples.

Transforming scientific reviewing

When researchers want to communicate with the scientific community, they write an article and submit it to scientific publishers. If the publishers accept the research topic, they seek out other researchers to verify the scientific validity of the article, and a process of exchange with the authors ensues: this is peer review. The researchers taking part in this process are not paid by the publishers and are mainly motivated by scientific curiosity.

This system, as it is currently organized in a centralized way, gives rise to several problems:

  • It takes a long time: in some journals, several months pass between the first submission of an article and its final publication. This avoidable delay can be very damaging to the progress of science (we will come back to this later in the article!). Moreover, given the inflation in the number of scientific articles and journals, a system based on volunteer reviewers is not equipped to last.
  • The article is subject to the biases of the editor as well as the reviewers, all in an opaque process, which makes its outcome extremely uncertain. Studies have shown that when a sample of previously published papers was resubmitted with the names and institutions of the authors changed, 89% were rejected (without the reviewers noticing that the papers had already been published).
  • The entire process is usually opaque and unavailable to the final reader of the paper.

Peer-reviewing in Decentralized Science will be entirely different. Several publications have demonstrated the possibility of using thematic scientific DAOs to make the whole process more efficient, fair and transparent. We can thus imagine that decentralization could play a role in different aspects: 

  • The choice of reviewers would no longer depend solely on the editor, but could be approved collectively.
  • Exchanges around the article could be recorded on the blockchain and thus be freely accessible.
  • Several remuneration systems, financial or not, can be imagined in order to attract quality reviewers. We can thus imagine that each reviewer could earn tokens allowing them to register in a reputation system (see below), to participate in the DAO’s decision-making process but also to participate in competitions with the aim of obtaining grants. 

Decentralized peer-reviewing systems are still in their infancy and, however promising they may be, there are still many challenges to be overcome, starting with interoperability between different DAOs.
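As a thought experiment, the tamper-evident recording of review exchanges can be sketched with a simple hash chain, the basic mechanism that makes blockchain records auditable. The reviewer-reward rule here (one reputation token per review) is purely hypothetical:

```python
import hashlib
import json

class ReviewLedger:
    """Toy append-only review log: each entry is chained to the previous
    one by hash, mimicking how on-chain records become tamper-evident.
    Reviewers earn (hypothetical) reputation tokens per recorded review."""

    def __init__(self):
        self.entries: list[dict] = []
        self.reputation: dict[str, int] = {}

    def _digest(self, body: dict) -> str:
        return hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()

    def add_review(self, reviewer: str, paper_id: str, verdict: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"reviewer": reviewer, "paper": paper_id,
                "verdict": verdict, "prev": prev_hash}
        body["hash"] = self._digest(
            {k: body[k] for k in ("reviewer", "paper", "verdict", "prev")})
        self.entries.append(body)
        self.reputation[reviewer] = self.reputation.get(reviewer, 0) + 1
        return body

    def verify(self) -> bool:
        """Any tampering with a past entry breaks the hash chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("reviewer", "paper", "verdict", "prev")}
            if e["prev"] != prev or e["hash"] != self._digest(body):
                return False
            prev = e["hash"]
        return True
```

Because every exchange is chained and publicly checkable, the opacity criticized above disappears: anyone can audit who reviewed what, and a reviewer's accumulated tokens can feed the reputation system discussed below.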

Creating a new reputation system

The main value brought by the centralized system of science is its reputation system. Why do you want to access prestigious schools and universities, and why are you sometimes prepared to go into debt over many years to do so? Having the name of a particular university on your CV will make it easier for you to access professional opportunities. In a way, companies have delegated some of their recruitment to schools and universities. Another reputation system, which we mentioned earlier in this article, is that of scientific publishers. Isn’t the quality of a researcher measured by the number of articles he or she has managed to have published in prestigious journals?

Despite their prohibitive cost (which allows scientific publishers to be among the highest gross-margin industries in the world – hard to do otherwise when you are selling something you get for free!), these systems suffer from serious flaws: does being accepted into a university and graduating accurately reflect the involvement you had during your studies and the skills you acquired through various experiences at the intersection of the academic and professional worlds? Is a scientist’s reputation proportional to his or her involvement in the ecosystem? Jorge Hirsch, the inventor of the H-index, which aims to quantify the productivity and scientific impact of a researcher according to the level of citation of his or her publications, has himself questioned the relevance of this indicator. Peer reviews, the quality of courses given, the mentoring of young researchers and the real impact of science on society are not considered by the current system.

Within the framework of DeSci, it will be possible to imagine a Blockchain-based system that traces and authenticates a researcher's actions (and not just the articles he or she publishes) in order to reward him or her with non-tradable reputation tokens. The main challenges of this reputation system will be its transversality, its interoperability and its adoption by different DAOs. We can imagine that these tokens could be used to participate in votes (in the organization of conferences, in the choice of articles, etc.) and that they would themselves be allocated through voting mechanisms (for example, students who have taken a course could decide collectively on the number of tokens to allocate to the professor).
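The idea of non-tradable reputation tokens (often called "soulbound" tokens in the Web3 world) can be sketched as follows; the action types and their weights are invented for illustration, and a real system would presumably set them by DAO vote:

```python
class ReputationRegistry:
    """Toy 'soulbound' reputation tokens: they can be minted for verified
    contributions (reviews, teaching, mentoring) but never transferred,
    so reputation cannot be bought. Action weights are hypothetical."""

    WEIGHTS = {"peer_review": 2, "teaching": 3, "mentoring": 1}

    def __init__(self):
        self.balances: dict[str, int] = {}

    def reward(self, researcher: str, action: str) -> int:
        """Mint tokens for a traced, authenticated contribution."""
        earned = self.WEIGHTS[action]
        self.balances[researcher] = self.balances.get(researcher, 0) + earned
        return earned

    def transfer(self, src: str, dst: str, amount: int) -> None:
        """Deliberately forbidden: reputation is earned, not traded."""
        raise PermissionError("reputation tokens are non-transferable")

    def voting_power(self, researcher: str) -> int:
        """Tokens double as voting weight in DAO decisions."""
        return self.balances.get(researcher, 0)
```

The key design choice is the `transfer` method that always fails: unlike a cryptocurrency, a reputation balance only reflects what its owner actually did.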

Transforming the codes of scientific publication to bring out collective intelligence

Science is a collective and international endeavour in which, currently, as a researcher, you can only communicate with other research teams around the world through:

  • Publications in which you cannot give access to all the data generated by your research and experiments (it is estimated that about 80% of the data is not published, which contributes to the crisis of scientific reproducibility)
  • Publications that other researchers cannot access without paying the scientific publishers (in the case of Open Science, it is the research team behind the publication that pays the publisher so that readers can access the article for free)
  • Publications which, because of their form and the problems linked to their access, make it very difficult to use Machine Learning algorithms which could accelerate research 
  • Finally, scientific publications which, because of the length of the editorial approval mechanisms, only reflect the state of your research with a delay of several months. Recent health crises such as COVID-19 have shown us how important it can be to have qualitative data available quickly.

The Internet has enabled a major transformation in the way we communicate. Compared to letters, which took weeks to reach their recipients in past centuries, e-mail and instant messaging allow us to communicate more often and, above all, to send shorter messages as soon as we obtain the information they contain, without necessarily aggregating it into a complex form. Only scientific communication, even though most of it now takes place via the Internet, resists this trend, to the benefit of scientific publishers and traditional forms of communication, but above all at the expense of the progress of science, and of patients in the case of biomedical research.

How, under these conditions, can we create the collective intelligence necessary for scientific progress? The company flashpub.io thinks it has the solution: micro-publications, consisting of a title designed to be easily exploited by an NLP algorithm, a single figure, a brief description and links giving access to all the protocols and data generated. 

Figure 2 – Structure of a micro-publication (Source: https://medium.com/@flashpub_io)

This idea of micro-publications, even if not directly linked to the Blockchain, will be a remarkable tool for collective intelligence, since it allows rapid and easy sharing of information, and is certainly the scientific communication modality best suited to the coming era of Decentralized Science. The objective is not to replace traditional publications but rather to imagine a new way of doing science, in which the narrative of an innovation is built collectively through successive experiments rather than after several years of work by a single research team. Contradictory voices will be expressed and a consensus will be found, not fundamentally modifying the classic model of science but making it more efficient.
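Because a micro-publication is deliberately minimal and structured, it maps naturally onto a machine-readable record that NLP pipelines can ingest. The sketch below uses our own illustrative field names, not flashpub.io's actual schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class MicroPublication:
    """Minimal, machine-readable unit of scientific communication,
    loosely following the micro-publication structure described in the
    text (title, single figure, brief description, linked protocols and
    data). Field names are our own illustrative choices."""

    title: str           # short claim, easy for an NLP algorithm to parse
    figure_url: str      # the single central figure
    description: str     # brief context, no long narrative
    protocol_urls: list  # links to the full protocols
    dataset_urls: list   # links to all generated data

    def to_json(self) -> str:
        """Serialize for indexing, sharing, or machine consumption."""
        return json.dumps(asdict(self), indent=2)
```

A structured record like this is trivially searchable and aggregatable, which is precisely what the flattened PDF format of traditional articles prevents.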

Facilitating the financing of innovation and the creation of biotechnology start-ups

Today, the financing of innovation, particularly in health, faces a double problem: 

  • From the point of view of scientists and entrepreneurs: despite the development of numerous funding ecosystems, non-dilutive grants and the maturation of venture capital funds, the issue of fundraising remains essential and problematic for most projects. Many projects do not survive the so-called “Valley of Death”, the period before the start of clinical studies, during which raising funds is particularly complicated. 
  • On the investor side: It is particularly difficult for an individual to participate in the financing of research and biotech companies in a satisfactory way. 
  • It is possible to be a Business Angel and to enter early in the capital of a promising start-up: this is not accessible to everyone, as a certain amount of capital is required to enter a start-up (and even more so if one wishes to diversify one’s investments to smooth out one’s risk)
  • It is possible to invest in listed biotech companies on the stock market: the expectation of gain is then much lower, as the companies are already mature, and their results consolidated
  • It is possible to fund research through charities, but in this case, no return on investment is possible and no control over the funded projects can be exercised.
  • It is possible to invest through crowdfunding sites, but here again there are structural problems: the choice of companies is limited, and contributors are generally in the position of lenders rather than shareholders: they do not really own shares in the company and are remunerated at a predefined annual rate.

These days, one of the pharmaceutical industry’s most fashionable mantras is to put the patient at the center of its therapeutics, so shouldn’t we also, for the sake of consistency, allow patients to be at the center of the systems for financing and developing those therapeutics?

DeSci will allow everyone – patients, relatives of patients, or simply (crypto)investors wishing to have a positive impact on the world – to easily finance drug development projects via IP-NFT, Data-NFT or company tokenization systems, at any stage, from a researcher's academic work to an already established company.

This system of tokenization of assets also makes it possible to generate additional income, both for the investor and for the project seeking to be financed:

  • The “Lombard loan” mechanisms present in DeFi will also allow investors to generate other types of income on their shares in projects. DeFi has brought collateralized loans back into fashion: a borrower can deposit digital assets (cryptocurrencies, but also NFTs or tokenized real assets such as companies or real estate) in exchange for another asset, representing a fraction of the value deposited in order to protect the lender, which they can then invest according to various mechanisms specific to Decentralized Finance (which we will not develop in this article). Thus, in a classic private-equity system, the money invested in a start-up is locked until a possible exit and generates no returns other than those expected from the increase in the company's value. In the new decentralized system, part of the money invested can, in parallel, be placed in the crypto equivalent of a savings account (to simplify things, this site is not dedicated to Decentralized Finance!).
  • Furthermore, another possibility for biotech projects, whether or not they are already incorporated, to generate additional revenue is to take advantage of the liquidity of these assets (which does not exist in the traditional financing system): it is quite possible to apply a fee of a few percent to each transaction of an IP-NFT or a Data-NFT.
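The collateralized-loan arithmetic described above can be sketched in two functions; the 50% loan-to-value cap and 75% liquidation threshold are illustrative figures of our own choosing, not constants of any particular DeFi protocol:

```python
def max_borrow(collateral_value: float, ltv_ratio: float = 0.5) -> float:
    """Maximum loan available against a deposited asset under a
    loan-to-value cap (the 50% default is an illustrative figure):
    the borrower receives only a fraction of the deposited value,
    which protects the lender against price swings."""
    return collateral_value * ltv_ratio

def is_liquidatable(collateral_value: float, debt: float,
                    liquidation_threshold: float = 0.75) -> bool:
    """If the collateral's market value falls so far that the debt
    exceeds the threshold fraction of it, the position can be
    liquidated to repay the lender (75% is again illustrative)."""
    return debt > collateral_value * liquidation_threshold
```

For example, depositing a tokenized project stake valued at 10,000 would let the holder borrow up to 5,000 to reinvest elsewhere, while the stake itself keeps its upside; if the stake's value later dropped toward the debt, liquidation would protect the lender.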

We are in a world where it is sometimes easier to sell a picture of a monkey for $3 or $4 million than to raise that amount to fight a deadly disease. It’s time to understand this and pull the right levers to get the money where it is – sometimes far off the beaten track. 

Conclusion: a nascent community, a lot of work and great ambitions

Despite the high-potential initiatives presented in this article, and the growing involvement of a worldwide scientific community, DeSci is still young and faces many challenges in structuring itself. One of the main ones, apart from the aspects related to the regulatory framework, will undoubtedly be education in the broadest sense, which is not yet addressed by current projects. By using Web3 tools to reinvent the way a high-level curriculum can be built and financed (tomorrow you will be paid to take online courses – yes!), DeSci will give itself the means to integrate the most creative and entrepreneurial minds of its time, in the same way that large incubators and investment funds such as Y Combinator or Techstars have relied on education to create or accelerate the development of some of the most impressive companies of recent years. DeSci Collaborative Universities have yet to emerge, and the connection between Ed3 (education and learning in the Web3 era) and DeSci has yet to be implemented.

Figure 3 – Presentation of the embryonic DeSci ecosystem at the ETH Denver conference, February 17, 2022 (in the last 3 months, the burgeoning ecosystem has grown considerably with other projects)

Web3 and DAOs have the great advantage of allowing people to be rewarded with equity, or its equivalent, for contributing their skills or financial resources to a project at any stage of its development. Thus, in a decentralized world where skills and research materials are close at hand, and where the interests of the individuals involved in a project are better aligned, the time between the emergence of an idea and its execution is significantly shorter than in a centralized world. This model, which can reinvent not only work but also what a company is, applies to all fields but is particularly relevant where collective intelligence matters and where advanced expertise of various kinds is needed, as in scientific research.

In the same way that we can reasonably expect Bitcoin to become increasingly important in the international monetary system in the coming years and decades, we can expect DeSci, given its intrinsic characteristics and qualities, to become increasingly important in the face of what we may soon call “TradSci” (traditionally organized Science). By allowing a far better alignment of interests among its different actors, DeSci will probably constitute the most successful and viable large-scale, long-term collaborative tool of Collective Intelligence that Homo sapiens has ever had. Whether it is the fight against global warming, the conquest of space, the eradication of diseases, or the extension of human longevity, DeSci will probably be the catalyst for the next few decades of scientific innovation and, in so doing, will positively impact your life. Don’t miss the opportunity to be among the first to take part!



Credits for the illustration of the article :
  • Background: @UltraRareBio @jocelynnpearl and danielyse_, Designed by @katie_koczera
  • Editing: Resolving Pharma


To subscribe free of charge to the monthly Newsletter, click here.

Would you like to take part in the writing of Newsletter articles ? Would you like to take part in an entrepreneurial project on these topics ?

Contact us at hello@resolving-pharma.com ! Join our group LinkedIn !


Interview with Christophe Baron, Founder of Louis App: “NFTs to promote disease prevention and longevity”

Questions asked by Alexandre Demailly

We thank Mr Christophe Baron for his time and answers and wish Louis App a long life! To discover the app: https://www.louisapp.io/

1] Could you introduce your company and the project it is developing?

We started from the fact that life expectancy is decreasing and that healthy life expectancy is stagnating because of our lifestyles: sedentary living and poor diet. These habits are not easy to change, and we have tried to find a way to make users understand that their lifestyle has a direct impact on their life expectancy (see the studies cited at the bottom of the article).

2] Why do you think it is necessary to focus on prevention? Can you tell us more about the “life minutes” concept developed by your company? How was the intellectual property referring to it created?

The adage “il vaut mieux prévenir que guérir” (prevention is better than cure) is not only common sense but also science. Prevention really does help stop diseases from appearing. Sometimes people change their lifestyle, but too late: someone who has smoked 20 cigarettes a day for 25 years will take a long time to get their lungs back into good condition, and disease may still set in.

Louis App is a preventive health application that measures your progress through a counter of life minutes gained by eating better and moving more. It is unique, and we have filed a patent with the INPI (Note from Resolving Pharma: the French National Institute of Industrial Property).

Today, you can record your physical activity and your meals and measure the impact with a 20-year projection. We are in the process of setting up AI to take a photo of the meal and then the application will recognize the content and the food family. In the same way, in a second version, we are going to offer automatic recovery of physical activity from the smartphone and other devices to reduce data entry to a minimum.

3] How does your app differ from a project like StepN, which encourages users to engage in physical activity but focuses on the financial aspects (including the need for an initial investment)?

StepN is a great app aimed at a cryptocurrency-savvy audience, and it requires the purchase of a pair of trainers (Note from Resolving Pharma: indeed, the purchase of an NFT representing a trainer is required to use the app).

In contrast, the Louis app is aimed at a lay audience, in which users will see their progress through levels but also through money: Health and Earn. Furthermore, we take diet and smoking into account, which is not the case with StepN.

4] What is the purpose of the NFTs distributed to the users of the application?

This is still being finalized, but it is likely that users will be rewarded with discounts from health, sport and nutrition partners. We are in contact with dieticians who could coach our users with a discounted price. In addition, other highly motivating and understandable rewards are also planned.

5] You will also integrate a token into your project. What will it be used for? What mechanisms will govern its economy?

NFTs and tokens will be linked, and at this stage we prefer not to say too much. Again, this terminology will not be used, and we will speak a language that everyone can understand.

6] What is the infrastructure blockchain used by your project and what were the arguments that motivated your choice?

The technical choices have not yet been made; we are studying different solutions, including Polygon (Note from Resolving Pharma: Polygon is a second-layer solution of the Ethereum Blockchain, which allows transactions to take place on a network with lower fees and higher speed than the Ethereum Blockchain while maintaining interoperability with it).

7] Do you plan to carry out an ICO/STO/IDO or to use the cryptocurrency universe to find funding (I’m thinking of the grant programmes set up by certain blockchains)?

Yes, an ICO is planned: it will allow us to reward our users and to set up a DAO in order to vote for projects related to R&D (against diabetes, for example). It will also allow us to recruit in order to improve the application and launch new modules. We have just launched a very easy-to-use diabetes prevention module and plan to launch modules for heart disease prevention, cancer prevention, treatment compliance, and more. Our road map is very well structured. We know that at least 40% of cancers are caused by our lifestyles and could therefore be avoided!

Very soon we will launch an Ulule campaign to communicate. Please follow us and participate: there will be a very nice surprise in return.

8] Currently, only 8% of French people hold cryptocurrencies. What do you plan to do to demystify this world for the patients who will use your application?

We will not talk about crypto in the application but about progress linked to prevention through life minutes gained, milestones reached and monetary rewards.

9] What are the pillars on which your company’s business model is based?

The app is free today, but we will move to a freemium model with monthly or annual subscriptions.

10] Projects combining Blockchain and Health are still relatively rare in France. How are Web3 themes perceived in the French health start-up ecosystem?

Some people talk about medical training in the metaverse, health coaching, and so on. I think it is still too early to have a clear vision.

11] What advice would you give to a young entrepreneur wishing to launch a project at the crossroads of Web3 and health?

3 very simple and essential things:

  • Set up a scientific council
  • Don’t hesitate to get coaching
  • Read, be constantly on the lookout




Artificial intelligence against bacterial infections: the case of bacteriophages

« If we fail to act, we are looking at an almost unthinkable scenario where antibiotics no longer work and we are cast back into the dark ages of medicine » – David Cameron, former UK Prime Minister

Hundreds of millions of lives are at stake. The WHO has made antibiotic resistance its number one global priority, estimating that it could lead to more than 100 million deaths per year by 2050, and that it already causes around 700,000 deaths per year, including 33,000 in Europe. Among the various therapeutic strategies that can be implemented is the use of bacteriophages, an old and neglected alternative approach that Artificial Intelligence could bring back. Explanations below.

Strategies that can be put in place to fight antibiotic resistance

The first pillar of the fight against antibiotic resistance is the indispensable set of public health actions and recommendations aimed at reducing the overall use of antibiotics. For example:

  • The continuation of communication campaigns aimed at combating the excessive prescription and consumption of antibiotics (a famous slogan in France: “Antibiotics are not automatic”).
  • Improving sanitary conditions to reduce the transmission of infections and therefore the need for antibiotics. This measure concerns many developing countries, whose inadequate drinking water supply causes, among other things, many cases of childhood diarrhea.
  • Reducing the use of antibiotics in animal husbandry, by banning the addition of certain antibiotics to the feed of food-producing animals.
  • Reducing environmental pollution with antibiotic molecules, particularly by establishing more stringent anti-pollution standards for pharmaceutical manufacturing sites.
  • The improvement and establishment of comprehensive structures for monitoring human and animal consumption of antibiotics and the emergence of multi-drug-resistant bacterial strains.
  • More frequent use of diagnostic tests, to limit the use of antibiotics and to select more precisely which molecule is needed.
  • Increased use of vaccination

The second pillar of the fight is innovative therapeutic strategies to combat multi-drug-resistant bacterial strains against which conventional antibiotics are powerless. We can mention:

  • Phage therapy: the use of bacteriophages, natural predatory viruses of bacteria. Phages can be used in therapeutic situations where they can be put directly in contact with the bacteria (infected wounds, burns, etc.), but not where they would have to be injected into the body, as they would be destroyed by the patient’s immune system.
  • The use of enzybiotics: enzymes, mainly derived from bacteriophages, such as lysins, that can be used to destroy bacteria. At the time of writing, this approach is still at an experimental stage.
  • Immunotherapy, including the use of antibodies: many anti-infective monoclonal antibodies, each specifically targeting a viral or bacterial antigen, are in development. Palivizumab, directed against the F protein of the respiratory syncytial virus, was approved by the FDA in 1998. The synergistic use of anti-infective antibodies and antibiotic molecules is also being studied.

Each of the proposed strategies – therapeutic or public health – can be implemented and their effect increased tenfold with the help of technology. One of the most original uses of Artificial Intelligence concerns the automation of the design of new bacteriophages.

Introduction to bacteriophages

Bacteriophages are capsid-bearing viruses that infect only bacteria. They are naturally distributed throughout the biosphere and their genetic material is DNA in the vast majority of cases, or RNA. Their discovery is not recent and their therapeutic use has a long history: they were used as early as the 1920s in human and veterinary medicine. Their use was gradually abandoned in Western countries, mainly because of the ease of use of antibiotics and because relatively few clinical trials were conducted on phages, their use being based essentially on empiricism. In other parts of the world, such as Russia and the former USSR, the culture of using phages in human and animal health has remained very strong: they are often available without prescription and used as a first-line treatment.

The mechanism of bacterial destruction by lytic bacteriophages

There are two main types of bacteriophages:

  • On the one hand, lytic phages, which are the only ones used in therapeutics and those we will focus on, destroy the bacteria by hijacking the bacterial machinery in order to replicate.
  • On the other hand, temperate phages, which are not used therapeutically but are useful experimentally because they add genomic elements to the bacterium, potentially allowing it to modulate its virulence. Their cycle is called lysogenic.

The diagram below shows the life cycle of a lytic phage:

This is what makes lytic phages so powerful: they are in a “host-parasite” relationship with bacteria and must infect and destroy them in order to multiply. Evolution will therefore mainly select resistant bacterial strains, as in the case of antibiotic resistance. However, unlike antibiotics, which do not evolve (or rather “evolve” slowly, at the pace of human scientific discovery), phages can also adapt in order to survive and continue to infect bacteria, in a kind of evolutionary arms race between bacteria and phages.

The possible use of Artificial Intelligence

One of the particularities of phages is that, unlike some broad-spectrum antibiotics, they are usually very specific to a bacterial strain. Thus, when one wishes to create or find appropriate phages for a patient, a complex and often long process must be followed, even though the patient’s survival usually depends on a race against time: the bacteria must be identified, which involves culturing samples from the patient, characterizing the bacterial genome and then determining which phage will be most likely to fight the infection. Until recently, this stage was an iterative process of in-vivo testing, which was very time-consuming, but as Greg Merril, CEO of the start-up Adaptive Phage Therapeutics (a company developing a phage selection algorithm based on bacterial genomes), points out: “When a patient is severely affected by an infection, every minute is important.”

Indeed, to make phage therapy applicable on a very large scale, it is necessary to determine quickly and at lower cost which phage will be the most effective. This is what the combination of two technologies already allows and will increasingly allow: high-throughput sequencing and machine learning. The latter makes it possible to process the masses of data generated by genetic sequencing (the genome of the bacteriophage or of the bacterial strain) and to detect patterns against an experimental database indicating that a phage with genome X was effective against a bacterium with genome Y. The algorithm is then able to estimate the chances of success of a whole library of phages against a given bacterium and to determine the best one without performing long iterative tests. As in every test-and-learn domain, phage selection can be automated.
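As a rough illustration of this matching step, the sketch below ranks a toy phage library against a target bacterial genome by comparing k-mer profiles with cosine similarity. It is only a minimal stand-in for the real approach: production systems are trained on experimental host-range data rather than raw sequence similarity, and all names and sequences here are hypothetical.

```python
from collections import Counter
from math import sqrt


def kmer_profile(sequence: str, k: int = 4) -> Counter:
    """Count overlapping k-mers in a DNA sequence."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse k-mer count vectors."""
    dot = sum(a[key] * b[key] for key in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def rank_phages(bacterial_genome: str, phage_library: dict) -> list:
    """Rank candidate phages by k-mer similarity to the target genome (best first)."""
    target = kmer_profile(bacterial_genome)
    scores = {name: cosine_similarity(target, kmer_profile(seq))
              for name, seq in phage_library.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Toy usage: the phage sharing k-mers with the target genome ranks first.
ranking = rank_phages("ATGCATGCATGCATGC",
                      {"phiA": "ATGCATGCATGC", "phiB": "GGGGGGGGGGGG"})
```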

In addition to determining the best host for a given bacteriophage (and vice versa), discussed above, the main use cases described for artificial intelligence in the use of phages are:

  • Classification of bacteriophages: The body in charge of classification is the International Committee on Taxonomy of Viruses (ICTV). More than 5,000 different bacteriophages have been described, and the main family is the Caudovirales. Traditional approaches to the classification of bacteriophages are based on the morphology of the virion, notably the proteins used to inject the genetic material into the target bacterium, and rely mainly on electron microscopy techniques. A growing body of scientific literature suggests that Machine Learning is a relevant alternative for a more functional classification of bacteriophages.
  • Predicting the functionality of bacteriophage proteins: Machine Learning can be useful in elucidating the precise mechanisms of PVPs (Phage Virion Proteins), involved, as mentioned above, in the injection of genetic material into the bacterium.
  • Determining the life cycle of bacteriophages: As discussed earlier in this article, there are two categories of phages, lytic and temperate. Traditionally, whether a phage belongs to one of these two families was determined by culture and in-vitro testing. The task is harder than one might think because, under certain stress conditions and in the presence of certain hosts, temperate phages are able to survive by performing lytic cycles. At present, PhageAI’s algorithms can determine a phage’s category with 99% accuracy.
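A crude way to picture the lifecycle call in that last bullet is a marker-gene heuristic: lysogeny-associated genes such as an integrase suggest a temperate phage. Tools like PhageAI actually learn the distinction from the genome sequence itself; the marker list below is an illustrative assumption, not any tool's real feature set.

```python
# Marker genes whose presence typically indicates a temperate (lysogenic) phage.
# This list is illustrative, not exhaustive.
TEMPERATE_MARKERS = {"integrase", "ci repressor", "excisionase"}


def predict_lifecycle(annotated_genes: list) -> str:
    """Naive lifecycle call: any lysogeny marker -> 'temperate', else 'lytic'."""
    genes = {g.lower() for g in annotated_genes}
    return "temperate" if genes & TEMPERATE_MARKERS else "lytic"
```

A machine-learning classifier replaces this hand-written rule with patterns learned from thousands of labelled genomes, which is what allows the reported accuracy.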

It is also possible, as illustrated in the diagram below, for rare and particularly resistant bacteria, to combine the techniques seen above with synthetic biology and bio-engineering techniques in order to rapidly create “tailor-made” phages. In this particular use case, Artificial Intelligence offers its full potential for the development of an ultra-personalised medicine.

Conclusion

Despite its usefulness, phage therapy is still complicated to implement in many Western countries. In France, it is possible within the framework of a Temporary Authorisation for Use, on the conditions that the patient’s life is at risk or their functional prognosis is threatened, that the patient is in a therapeutic impasse, and that the infection is mono-microbial. Its use must also be validated by a Temporary Specialised Scientific Committee on Phage Therapy of the ANSM, and a phagogram (an in-vitro test that studies the sensitivity of a bacterial strain to bacteriophages, in the manner of an antibiogram) must be presented before treatment is started. Faced with these multiple difficulties, many patient associations are campaigning for simplified access to phage therapy. With the help of Artificial Intelligence, more and more phage therapies can be developed, as illustrated in this article, and given the urgency and scale of the problem of antibiotic resistance, it is essential to prepare the regulatory framework within which patients will be able to access the various alternative treatments, including bacteriophages. The battle is not yet lost, and Artificial Intelligence will be a key ally.

Would you like to discuss the subject? Would you like to take part in writing articles for the newsletter? Would you like to participate in an entrepreneurial project related to PharmaTech?

Contact us at hello@resolving-pharma.com!



Categories
Entrepreneurship Interviews

Interview – Molecule, the start-up that wants to revolutionise the financing of drug development with the Blockchain

The Resolving Pharma team is pleased to inaugurate a series of interviews with start-ups creating the pharmaceutical world of tomorrow with this interview with Molecule, a young and ambitious German company that wants to change the rules of drug development by using Blockchain technology in a new way.

We would like to thank the Molecule team for this exchange and especially Heinrich Tessendorf.

Some of the terms used in this interview are technical and very specific to the field of Blockchain; to facilitate the understanding of the Molecule.to project, a glossary has been added at the end of the interview. Do not hesitate to contact us if you have any questions or wish to discuss the subject. Have a good read!

This interview was conducted by Alexandre Demailly and Quentin Vicentini.

Resolving Pharma: With Molecule, you are trying to reinvent, among other things, the financing of pharmaceutical research and development. Can you explain how your platform works?

Molecule: Our platform is a marketplace that moves early-stage IP into web3 via NFTs. This is coupled with frameworks to build biotech DAOs and communities coming together to fund research in specific therapeutic areas. These communities consist of patients, researchers, and enthusiasts.

Practically, all of this comes together when researchers upload a project on our website. From here on forward, other researchers, investors or patient communities can discover these (and other) projects and decide where to invest. Once these role players have decided where to invest, they can connect their web3 wallet (e.g. Metamask) and fund the project by purchasing it as an IP-NFT. IP rights could immediately be transferred to the purchaser and funds could be transferred to the researcher at the exact same time.

Resolving Pharma: What are your company’s goals? What is your vision?

Molecule: Our vision is simple – we see patient, researcher, and investor communities forming to fund and govern end-to-end drug development. We enable this by making IP a highly liquid, data-driven asset class.

Over the next 2+ years, our goal is that our protocol will fund as much R&D as a mid-sized pharma company. With this, we’re ambitious to double our team, launch Molecule V2 and the Molecule DAO, see the first asset out licensed to Pharma, and realize the first patient-led use cases just to name a few.

Our hope is that decentralised biotech will do for access to therapies and medicine what FinTech and Decentralised finance did for how we manage and get access to financial services.

Resolving Pharma: How are submitted projects selected and evaluated? 

Molecule: Projects can be submitted on Molecule’s Discover App or VitaDAO’s Project Submission Form.

On Molecule’s Discover App, any researcher can upload their project and investors can discover them. Currently we have over 300 projects listed on this platform. We, as Molecule, don’t evaluate these projects – it’s up to investors to decide what projects they want to invest in. 

On VitaDAO’s Project Submission Form you can submit your longevity-focused project, but the concept is different in that there, you apply for funding for your project from VitaDAO. We do due diligence in ways similar to how the biopharmaceutical industry currently operates. Namely, they evaluate assets and research as a business opportunity where they’ll take into account market size, competition, team, etc. However, VitaDAO wants to pursue more high-risk and earlier stage projects than those in which traditional funding mechanisms show interest. Also, they want to focus on projects that promote longevity/healthspan/lifespan per se. This is notable because aging isn’t recognized as a disease by government agencies such as the FDA. Therefore, its market can’t be estimated traditionally. They accept this risk and have strategies including pursuing clinical trials in countries with favorable legal framework and/or countries willing to work together to design clinical trials with biomarkers that are relevant to longevity/healthspan/lifespan per se.

Projects submitted for funding through VitaDAO are evaluated by VitaDAO’s scientific evaluation board. They’ll then come up with a suggestion for or against funding. Evaluation is independent of the final decision for funding. If a project qualifies for funding it moves over to an on-chain funding proposal and VitaDAO token holders eventually vote for or against funding the project.

Resolving Pharma: How to invest in a research project using your platform?

Molecule: Currently, every investor needs to be a verified user on Molecule to invest in research projects. To enable you to invest directly in a research project, we need some information from the investor. Our platform is web3-enabled, so once investors have been whitelisted and have selected a project they would like to fund, the process is similar to purchasing an NFT on OpenSea. Practically, the steps look like this:

  1. Create an investor account on discover.molecule.to

  2. Explore research projects in your field of interest. If you want to get in touch with specific researchers that have no contact information listed, feel free to reach out to us via info@molecule.to

  3. Get whitelisted for IP-NFT sales: To participate in IP-NFT sales and make binding offers to researchers, Molecule needs to collect certain information from investors. This information will be used primarily to enable investors to sign the underlying legal agreements connected to IP-NFTs. To trigger the whitelisting process, please get in touch with info@molecule.to

  4. Bid on IP-NFTs: You are now ready to make offers for new research projects or existing IP-NFTs. We will keep you informed of new funding opportunities arising on Molecule Discovery. If you are interested in funding research projects which are not listed on Molecule yet, feel free to put the researcher in touch with the Molecule team.

  5. Transfer of funds and receiving the IP-NFT: After your bid has been accepted by a researcher, you will be asked to transfer the funds to an escrow account. As soon as the funds are received, the escrow contract will release the IP-NFT to the origin address of the funds.

  6. Manage your IP-NFT: After you have received the IP-NFT, you are able to manage it on the Molecule platform. View the IP-NFT, make selling offers, or review the underlying legal agreement and data (to be added) via the Molecule platform.
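The escrow mechanics of steps 5 and 6 can be sketched as follows. This is a plain-Python simulation of the behaviour described (funds deposited, token released to the origin address of the funds), not Molecule's actual smart-contract code; the class and field names, and the rejection of partial payments, are assumptions for illustration.

```python
class IPNFTEscrow:
    """Toy escrow: releases the IP-NFT to whichever address sends the agreed funds."""

    def __init__(self, token_id: str, seller: str, price: int):
        self.token_id = token_id
        self.seller = seller
        self.price = price
        self.owner_of = {token_id: seller}  # minimal token registry: token -> owner

    def deposit(self, sender: str, amount: int) -> bool:
        """On receiving the full price, transfer the token to the funds' origin address."""
        if amount < self.price:
            return False  # partial payments are simply rejected in this sketch
        self.owner_of[self.token_id] = sender
        return True


# Usage: a bid of 100 has been accepted; the buyer funds the escrow.
escrow = IPNFTEscrow("ipnft-1", "0xResearcher", 100)
escrow.deposit("0xBuyer", 100)  # token now belongs to 0xBuyer
```

On Ethereum the same release logic would live in a smart contract, so neither party has to trust the other or an intermediary to complete the swap.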

Resolving Pharma: How can individual investors choose between different projects?

Molecule: Individual investors will need to do their own due-diligence (DYOR) and consult a scientific advisor. Individuals will most likely choose projects that interest them personally, e.g. someone with a family member living with a certain disease. A lot of the information they require will be on the project page, but they can reach out to individual researchers through the project page on our Discovery app to ask further questions.

In the case where a DAO (e.g. VitaDAO) funds a project, the DAO has a group of subject matter experts (the scientific evaluation board) which advise the DAO on which projects to fund. The decision is then formalised by a governance proposal which is put up to a vote and the final decision is made by all token holders ($Vita in this case). Token holders then vote on these proposals through a simple yes or no vote.
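The token-weighted yes/no vote described above can be sketched as a simple tally: each ballot is weighted by the voter's token balance, and the proposal passes on a relative majority of tokens cast. This is an illustrative model, not VitaDAO's actual governance contract; the function and parameter names are assumptions.

```python
def tally_vote(ballots: dict, balances: dict) -> bool:
    """Token-weighted yes/no vote.

    ballots:  address -> True (yes) or False (no)
    balances: address -> token balance used as voting weight
    Passes when yes-weighted tokens strictly exceed no-weighted tokens.
    """
    yes = sum(balances.get(addr, 0) for addr, vote in ballots.items() if vote)
    no = sum(balances.get(addr, 0) for addr, vote in ballots.items() if not vote)
    return yes > no


# Usage: a large holder voting yes can outweigh two smaller holders voting no.
result = tally_vote({"alice": True, "bob": False, "carol": False},
                    {"alice": 600, "bob": 300, "carol": 200})
```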

Resolving Pharma: What are the advantages of decentralizing drug development?

Molecule: If IP is siloed and owned by individual companies, these companies could have a very strong bias towards only publishing positive data and this leads to information asymmetry. That’s not how science is supposed to be done. The research community could achieve desired outcomes much faster if research were done more openly and collaboratively. Learning can be done much faster and costs saved by reducing the duplication work through failed experiments. One thing which can help facilitate this is getting attention on research projects through a global public marketplace.

Resolving Pharma: How does your model differ from that offered by crowdfunding platforms?

Molecule: Molecule’s platform is different from crowdfunding, because novel approaches to democratised ownership mean stakeholders can directly co-own the therapies that affect them. Imagine a world where a new insulin treatment is collectively owned by diabetics – what would that do to access and pricing? What if patients could have a direct impact and say in the drugs developed for them? Communities help bring drugs to market through crowd intelligence and curation markets, not just funding, but co-owning.

Resolving Pharma: Can you explain the concept of IP-NFT? How is it secured from a legal point of view?

Molecule: The IP-NFT is a new NFT standard that we’ve developed. IP-NFTs represent the full legal intellectual property rights and provide data access to biopharma research. Think of the IP-NFT as a unique token on the Ethereum blockchain. This token will link to a legal agreement that the researcher will have concluded with investors. Through fractionalization, frictionless transfer, and collateralisation of IP in decentralised financial (DeFi) systems, it unlocks new value in biopharma IP. Fundamentally, IP-NFT enables funding, liquidity and valuation of the IP and research. 

From a legal perspective, the IP-NFT transacts real-world legal rights/licences of the IP. It does this by means of a legal contract and a smart contract that cross-references one another. The legal contract is an IP license with language referencing blockchain transactions, addresses, and signatures. The smart contract is an NFT with code referencing the IP licensing agreement, obfuscating certain data components and storing them on decentralised file storage networks. Combined, the legal contract and the smart contract create the IP-NFT. This gives secure access control to the IP and data to buyers and in the process speeds up due diligence and saves costs. You can learn more about the technical and legal setup of an IP-NFT in this Medium article.
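The cross-referencing of the legal contract and the smart contract can be illustrated with a hash check: the token's metadata stores a digest of the signed agreement, so anyone can verify that the on-chain token still points at the exact off-chain document. This is a minimal sketch of the general pattern, not Molecule's implementation; the `agreement_hash` metadata field is hypothetical.

```python
import hashlib


def agreement_fingerprint(legal_text: str) -> str:
    """SHA-256 digest of the signed legal agreement, to be embedded in NFT metadata."""
    return hashlib.sha256(legal_text.encode("utf-8")).hexdigest()


def verify_ip_nft(nft_metadata: dict, legal_text: str) -> bool:
    """Check that the on-chain token still references this exact off-chain agreement."""
    return nft_metadata.get("agreement_hash") == agreement_fingerprint(legal_text)


# Usage: mint-time metadata records the digest; any later edit to the text fails.
metadata = {"agreement_hash": agreement_fingerprint("IP licence v1")}
```

Because any change to the agreement text changes the digest, the link between the two contracts is tamper-evident, which is what speeds up due diligence.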

Resolving Pharma: How are decisions made regarding the management of the project’s intellectual property? What is the role of the DAO?

Molecule: VitaDAO is governed by its members. All decisions undergo a pre-defined decision-making process that is inclusive and transparent to all members. Smaller decisions are made informally on VitaDAO’s Discourse forum or Discord, but can be escalated to require an on-chain vote where anyone who owns Vita tokens can vote. Decisions that are contested, have a notable impact on VitaDAO’s stakeholders, affect processes in a fundamental way, or involve a significant use of funds, always undergo an on-chain vote and require a relative majority of token holders to agree.

Resolving Pharma: In this regard, can you introduce us to VitaDAO? How could this project extend human life expectancy?

Molecule: VitaDAO is a decentralised organisation funding longevity research and governing biotech IP and data via IP-NFTs. Think about VitaDAO as the vehicle towards the democratization of access to therapeutics in the biotech world in order to make these assets widely accessible to people across the globe. 

Considering the project’s role in extending human life expectancy, VitaDAO funds early stage research, and could, for example, turn these research projects into biotech companies. As an example, the first project that VitaDAO funded is seeking to validate longevity observations through a series of wet lab experiments and if successful, this work could potentially result in the repurposing of several FDA-approved therapeutics to extend human lifespan, at a lower cost and over faster timelines than conceivably possible with de novo drug discovery.

Resolving Pharma: If our readers want to help you and participate in your projects, what can they do?

Molecule: The best way is to join our Discord, introduce yourself and talk to us there. You can also reach out to our community manager via email at heinrich@molecule.to  

If you wish to learn more about the project, you can refer to:

  • The company’s website: https://www.molecule.to/
  • The company’s Medium blog: https://medium.com/molecule-blog
  • As well as the various talks and conferences given by Tyler and Paul, the two co-founders of Molecule: https://youtube.com/playlist?list=PLeOXpfDM0Oy7aIg7wIfFRxiTADbBbUyLC

Glossary:

  • Web3: “Web3 refers to a third generation of the Internet where online services and platforms move to a model based on blockchains and cryptocurrencies. In theory, this means that infrastructures are decentralised and anyone who has a token associated with that infrastructure has some control over it. This model of the web represents a financialised vision of the internet.”
  • NFT for Non-Fungible Token: “An NFT refers to a digital file to which a digital certificate of authenticity has been attached. More precisely, the NFT is a cryptographic token stored on a blockchain. The digital file alone is fungible, whether it is a photo, video or other, the associated NFT is non-fungible.”
  • DAOs: “A DAO (Decentralized Autonomous Organization) is an entity powered by a computer program that provides automated governance rules to a community. The DAO is a complex smart contract deployed on the Ethereum blockchain, similar to a decentralised venture capital fund. These rules are immutably and transparently written into a blockchain, a secure information storage and transmission technology that operates without a central controlling body. A DAO differs, in theory, from a traditional entity in three ways: it cannot be stopped or closed, no one or no organisation can control it (and thus manipulate its numbers) and, finally, everything is transparent and auditable, all within a supranational framework. A DAO is based on computer code: its operating rules are public and it is not based on any jurisdiction.”
  • Whitelist: “The term whitelist defines, in the context of Blockchain projects, a set of people who are assigned a maximum level of freedom or trust in a particular system.”


Categories
Clinic Exploratory research

Reshaping real-world data sharing with Blockchain-based system

“[Blockchain] is a complicated technology and one whose full potential is not necessarily understood by healthcare players. We want to demonstrate […] precisely that blockchain works when you work on the uses!” Nesrine Benyahia, Managing Director of DrData

***

Access to real-world health data is becoming an increasingly important issue for pharmaceutical companies and facilitating the acquisition of this data could make the development of new drugs faster and less costly. After explaining the practices of data acquisition in the pharmaceutical industry, and the current initiatives aiming at facilitating them, this article will then focus on the projects using the Blockchain, in the exchange, monetization and securing of these precious data.

Use of real-world data by the Pharmaceutical Industry, where do we stand?

Real-world data are commonly defined as data collected outside an experimental setting, without intervening in the usual way patients are managed, with the aim of reflecting current practice in care. These data can complement data from randomized controlled trials, which have the disadvantage of being valid only in the very limited context of a clinical trial. The use of real-world data is likely to grow for two key reasons. First, new technological tools allow us to collect them (connected medical devices, for example) while others allow us to analyze them (data science, text-mining, patient forums, exploitation of grey literature, etc.). Secondly, for a few years now, a regulatory evolution has been allowing more and more early access and clinical evidence based on small numbers of patients (especially in cancer drug trials), which tends to move the evidentiary cursor towards real-world data.

The uses of real-world data are varied and concern the development of new drugs (in particular to define new management algorithms, or to discover unmet medical needs through the analysis of databases) but also the monitoring of products already on the market: for example, monitoring of safety and use, market access with conditional financial support, or payment on performance. These data can inform the decisions of health authorities as well as the strategic decisions of pharmaceutical companies.

Current acquisition and use of real-world data: Data sources are varied, with varying degrees of maturity and availability, and varying access procedures. Some of these data come directly from care, such as data from medico-administrative databases or hospital information systems, while others are produced directly by patients, through social networks, therapy management applications and connected medical devices. The pharmaceutical industry accesses this data in various ways. Like many other countries, France is currently implementing organizational and regulatory measures to facilitate access to this real-world data and to organize its collection and use, notably with the creation of the Health Data Hub. However, to this day, in the French and European context, no platform allows patients to access all of their health data and to freely dispose of it in order to participate in a given research project.

Imagining a decentralized health data sharing system, the first steps:

As a reminder, blockchain is a cryptographic technology developed in the late 2000s that makes it possible to store, authenticate and transmit information in a decentralized (without intermediaries or trusted third parties), transparent and highly secure way. For more information about how blockchain works, please refer to our previous article on this technology: “Blockchain, Mobile Applications: Will technology solve the problem of counterfeit drugs?” As explained in that article, the young Blockchain technology has so far mainly expressed its potential in the field of cryptocurrencies, but many other applications can be imagined.

Thus, several research teams are working on how this technology could potentially address the major challenges of confidentiality, interoperability, integrity, and secure accessibility – among others – posed by the sharing of health data.

These academic research teams have envisioned blockchains that bring together different stakeholders: healthcare services, patients, and data users (who may be the patients themselves or other care organizations). These systems do not provide data to third parties (industrial companies, for example); their only objectives are to improve the quality of care and to offer patients a platform that brings together their fragmented health data. In the United States, data is siloed because of the organization of the health system; in France, although the Social Security system has a centralizing role, the “Mon Espace Santé” service, which allows patients to access all of their data and is a descendant of the Shared Medical Record, has been slow to be implemented.

These academic projects propose, on the one hand, to store medical information on a private blockchain and, on the other, to operate Smart Contracts with different uses. Smart Contracts are computerized equivalents of traditional contracts, with the difference that their execution requires neither a trusted third party nor human intervention: they execute when the conditions set out in the computer code are met. In these proposed real-world data sharing systems, they make it possible, among other things, to authenticate the identity of users and to guarantee the integrity of the data, their confidentiality, and the flexibility of their access (unauthorized persons cannot access patient data).
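The access-control role these Smart Contracts play can be sketched as a consent registry: only the patient can grant read access to their records, and every other caller is rejected. This is a plain-Python illustration of the logic, not code from any of the academic projects discussed; the class and method names are assumptions.

```python
class ConsentContract:
    """Toy consent registry: only addresses the patient has authorised may read."""

    def __init__(self, patient: str):
        self.patient = patient
        self.authorised = set()  # addresses granted read access by the patient

    def grant(self, caller: str, grantee: str) -> None:
        """Only the patient themselves may grant access to a third party."""
        if caller != self.patient:
            raise PermissionError("only the patient can grant access")
        self.authorised.add(grantee)

    def can_read(self, caller: str) -> bool:
        """The patient always has access; others only if explicitly authorised."""
        return caller == self.patient or caller in self.authorised


# Usage: the patient grants a research lab access to their records.
record = ConsentContract("patient-1")
record.grant("patient-1", "lab-X")
```

Deployed on-chain, every grant and every access check would be recorded immutably, which is what gives patients an auditable trail of who consulted their data.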

Despite their theoretical qualities, these academic projects do not let patients share their data openly with different research projects. In the last part of this article, we review two examples of start-ups seeking to address this issue using the blockchain.

Examples of two blockchain projects that allow patients to share their health data:

Embleema is a startup offering a platform where patients can upload their health data, ranging from their complete genome to medical test results and data from connected medical devices. In parallel, pharmaceutical companies state their needs, and an algorithm on the platform selects patients who may match, based on their pathology or prescribed treatments. Selected patients are then asked to sign a consent document to participate in an observational study, in exchange for which they are paid (in the USA) or may designate a patient association to receive funding (in France). The data produced by patients are stored on the centralized servers of specialized health data hosts, and only the companies that purchased them have access. In the Embleema model, the Ethereum blockchain and its smart contracts are used only to certify compliance and organize the sharing of study-related documents (collection of patient consent, etc.). One may therefore question the added value of the blockchain here. Couldn't these documents have been stored on centralized servers, with the actions triggered by smart contracts carried out from a centralized database and Embleema acting as a trusted third party? How much of the use of the term “blockchain” in this model is marketing? In any case, the Patient Truth platform developed by Embleema has the great merit of proposing a model in which patients control their health data and choose whether to get involved in a given academic or industrial research project.
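Embleema's actual selection algorithm is not public; as a purely hypothetical sketch, the matching step between a sponsor's need and patient profiles could look like this:

```python
def match_patients(patients, study):
    """Naive stand-in for the platform's (non-public) selection algorithm:
    keep patients whose pathology matches the study and who take at least
    one of the treatments of interest."""
    selected = []
    for p in patients:
        if (study["pathology"] in p["pathologies"]
                and any(t in p["treatments"] for t in study["treatments"])):
            selected.append(p["id"])
    return selected


# Illustrative profiles only; a real platform would hold far richer data.
patients = [
    {"id": "P1", "pathologies": {"type 2 diabetes"}, "treatments": {"metformin"}},
    {"id": "P2", "pathologies": {"asthma"}, "treatments": {"salbutamol"}},
]
study = {"pathology": "type 2 diabetes", "treatments": {"metformin", "insulin"}}
print(match_patients(patients, study))  # ['P1']
```

Only the matched patients would then be invited to consent to the observational study, which is the step Embleema anchors on the blockchain.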

***

The second company we will focus on is MedicalVeda, a Canadian start-up in which blockchain plays a more central role, notably through the launch of an ERC-20 token (a standard cryptocurrency on the Ethereum blockchain that can be programmed to participate in a Smart Contract). The workings of this company, which seeks to solve several problems at once, concerning both access to healthcare data by the healthcare industries and access to care on the patient side, are quite complex and conceptual; we will try to simplify them as much as possible. MedicalVeda’s value proposition is based on several products:

  • The VEDA Health Portal, a platform that centralizes patients’ health data for the benefit of caregivers and of the pharmaceutical industry research programs to which the patient chooses to grant access. As with the projects mentioned earlier in this article, the goal is to overcome data siloing. The data is secured by a private blockchain.
  • The Medical Veda Data Market Place, which aims to directly connect patients and pharmaceutical companies according to their needs. Transactions are made using the blockchain and are paid for in crypto-currencies.
  • Two other products are worth mentioning: the MVeda token, the cryptocurrency of the data marketplace, used to pay patients; and MedFi Veda, a decentralized finance system that allows American patients to borrow money to fund medical interventions by collateralizing their MVeda tokens. Collateralized lending is classic in decentralized finance, but admittedly the details of the system developed by MedicalVeda remain murky. Its objective is to allow patients to collateralize their health data in order to facilitate their access to healthcare.
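For readers unfamiliar with over-collateralized lending, the basic arithmetic is easy to sketch. The numbers below are illustrative; MedicalVeda's actual parameters are not public.

```python
def max_loan(collateral_tokens, token_price, collateral_ratio=1.5):
    """Maximum borrowable amount for an over-collateralized loan: the
    collateral must be worth at least `collateral_ratio` times the loan
    (150% is a common default in decentralized finance)."""
    return collateral_tokens * token_price / collateral_ratio


def is_liquidatable(loan, collateral_tokens, token_price, collateral_ratio=1.5):
    """The position can be liquidated once the collateral's market value
    falls below the required ratio."""
    return collateral_tokens * token_price < loan * collateral_ratio


loan = max_loan(1000, 2.0)  # 1000 tokens priced at $2, 150% ratio
print(round(loan, 2))                     # 1333.33
print(is_liquidatable(loan, 1000, 1.0))   # True: price halved, under-collateralized
```

The same mechanics apply whatever the token represents; MedicalVeda's twist is that the collateralized tokens are earned by sharing health data.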
***

In conclusion, blockchain is still a young technology. Interest from the healthcare world peaked in 2018 and has gradually dried up since, mainly because of a misunderstanding of its potential and a lack of education of healthcare professionals on the one hand, and an excess of marketing around what had become a “buzzword” on the other. The intrinsic qualities of this technology make it possible to imagine creative and ambitious models for sharing health data, which may accelerate the development of new drugs in the future. For the time being, and despite courageous and intelligent initiatives, some of which have already been commercialized, no solution is fully functional at very large scale; everything remains to be built.



To subscribe free of charge to the monthly Newsletter, click here.

Would you like to take part in writing Newsletter articles? Would you like to take part in an entrepreneurial project on these topics?

Contact us at hello@resolving-pharma.com! Join our LinkedIn group!

Categories
Clinic Exploratory research Preclinical

3D printing and artificial intelligence: the future of galenics?

“Ten years from now, no patient will take the same thing as another million people. And no doctor will prescribe the same thing to two patients.”

Fred Paretti from the 3D drug printing startup Multiply Labs.

3D printing – also known as additive manufacturing – is one of the technologies capable of transforming pharmaceutical development, and will certainly play a role in the digitalization of the drug manufacturing sector. This short article will attempt to provide an overview of how 3D printing works, its various use cases in the manufacture of personalized medicines, the current regulatory framework for this innovative technology, and the synergies that may exist with Artificial Intelligence.

3D printing, where do we stand?

The principle of 3D printing, in industrial use since the early 2000s across a large number of fields, consists of superimposing layers of material according to coordinates along three axes (in three dimensions), following a digital file. This 3D file is cut into horizontal slices and sent to the 3D printer, which prints one slice after another. The term “3D printing” covers techniques that are in fact very different from each other:

  • Fused deposition modelling (extrusion): a plastic filament is heated until it melts and deposited at points of interest, in successive layers, which bind together as the plastic solidifies on cooling. This is the most common technique in consumer printers.
  • Resin photopolymerization: a photosensitive resin is solidified layer by layer with a laser or a highly concentrated light source. This is one of the techniques allowing a very high level of detail.
  • Sintering or powder fusion: a laser agglomerates powder particles with the energy it releases. This technique is used to produce metal or ceramic objects.

In the pharmaceutical industry, 3D printing is used in several ways, the main ones being:

  • Manufacturing medical devices, using the classic techniques for printing plastic or metal compounds, or more specific techniques that give medical devices original properties, like the prostheses of the start-up Lattice Medical that allow adipose tissue to regenerate.
  • Bio-printing, which uses human cells as printing material to reconstitute organs such as skin or heart patches, as done by another French start-up, Poietis.
  • Finally, and this is the focus of this article, 3D printing also has a role to play in galenics, by making it possible to print an orally administered drug from a mixture of excipient(s) and active substance(s).

What are the uses of 3D printing of medicines? 

3D printing brings an essential feature to drug manufacturing: flexibility. This flexibility is important for:

  • Manufacturing small clinical batches: clinical phases I and II often require small batches of experimental drugs for which 3D printing is useful: it is sometimes economically risky to make large investments in drug manufacturing at this stage. Moreover, it is often necessary to modify the active ingredient content of the drugs used, and 3D printing would enable these batches to be adapted in real time. Finally, 3D printing can also be useful for offering patients placebos that are as similar as possible to their usual treatments.
  • Advancing towards personalized medicine: 3D printing of drugs allows the creation of “à la carte” drugs by mixing several active ingredients with different contents for each patient. In the case of patients whose weight and absorption capacities vary over time (children or the elderly who are malnourished, for example), 3D printing could also adapt their treatments in real time according to changes in their weight, particularly in terms of dosage and speed of dissolution.

To address these issues, most major pharmaceutical companies are increasingly interested in 3D printing of drugs. They are investing massively in this field or setting up partnerships, like Merck, which is cooperating with the company AMCM in order to set up a printing system that complies with good manufacturing practices. The implementation of this solution has the potential to disrupt the traditional manufacturing scheme, as illustrated in the diagram below.

Figure 1 – Modification of the manufacturing steps of a tablet by implementing 3D printing (Source : Merck)

Regulation

The first commercialized 3D-printed drug, whose active ingredient is levetiracetam, was approved by the FDA in 2015. The goal of using 3D printing for this drug was to obtain a more porous tablet that dissolves more easily and is better suited to patients with swallowing disorders. Despite this initial approval and market access, the regulatory environment has yet to be built: it is still necessary to assess the changes that 3D printing may impose on best practices and to determine what types of tests and controls should be implemented. Destructive quality controls, in particular, are poorly suited to the small batches produced by 3D printing. To our knowledge, there are currently no GMP-approved 3D printers for the manufacture of drugs.

Will the future of drug 3D printing involve artificial intelligence? 

A growing number of authors believe that 3D printing of drugs will only move out of the laboratory and become a mainstream industrial technology if artificial intelligence is integrated. As things stand, because of the great flexibility mentioned above, using 3D printing requires a long iterative phase: thousands of factors must be tested, concerning in particular the excipients used, but also the printer parameters and the printing technique to be selected. The choice among these factors is currently made by the galenics team according to its objectives and constraints: which combination of factors best meets a given pharmacokinetic criterion? Which minimizes production costs? Which complies with the applicable regulatory framework? Which allows rapid production? This iterative phase is extremely time-consuming and capital-intensive, which for the moment makes 3D printing of drugs incompatible with the imperatives of pharmaceutical development. Artificial intelligence seems the most direct way to overcome this challenge and to make the multidimensional choice of parameters “evidence-based” with respect to the objectives. It could also be involved in the quality control of the batches thus manufactured.
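The data-driven shortcut described here can be illustrated with a toy model. The snippet below uses a 1-nearest-neighbour classifier over entirely invented formulation and printer parameters; published work such as M3DISEEN trains far richer models on hundreds of real formulations, so this is only a sketch of the principle, not of any real pipeline.

```python
import math

# Toy "training set": (drug load %, polymer melt temp °C, nozzle temp °C)
# mapped to whether the formulation printed successfully.
# All values are illustrative, not experimental data.
training = [
    ((10, 160, 200), True),
    ((40, 160, 200), True),
    ((60, 160, 180), False),   # high load + cool nozzle: clogs
    ((20, 220, 200), False),   # nozzle below polymer melt temperature
]


def predict_printable(sample, k=1):
    """Predict printability by majority vote among the k nearest
    formulations already tried, replacing one trial-and-error iteration."""
    def dist(a, b):
        # crude normalization so no single feature dominates the distance
        return math.dist([x / 100 for x in a], [y / 100 for y in b])

    nearest = sorted(training, key=lambda t: dist(t[0], sample))[:k]
    return sum(label for _, label in nearest) > k / 2


print(predict_printable((15, 165, 205)))  # True
print(predict_printable((55, 165, 175)))  # False
```

Even this caricature shows the point: once enough past prints are recorded, each new formulation can be pre-screened in silico instead of being physically printed and tested.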

The use of Artificial Intelligence to design new drugs opens up the prospect of new technical challenges, particularly with regard to the availability of the data required for these Machine Learning models, which are often kept secret by pharmaceutical laboratories.  We can imagine that databases can be built by text-mining scientific articles and patents dealing with different galenic forms and different types of excipients and then completed experimentally, which will require a significant amount of time. In addition to these technical challenges, it will also be necessary to ask more ethical questions, particularly with regard to the disruption of responsibilities caused by the implementation of these new technologies: who would be responsible in the event of a non-compliant batch being released? The manufacturer of the 3D printer? The developer of the algorithm that designed the drug? The developer of the algorithm that validated the quality control? Or the pharmacist in charge of the laboratory?

All in all, 3D printing of medicines is a technology that is already well mastered, with a market growing by 7% a year toward a projected 440 million dollars in 2025. Its usefulness has so far been limited to certain use cases, but tomorrow, with its potential unlocked by the combination with artificial intelligence, it could enable fully automated and optimized galenic development and manufacturing of oral forms, finally adapted to the ultra-customized medicine that is coming.



To go further:

  • Moe Elbadawi, Laura E. McCoubrey, Francesca K.H. Gavins, Jun J. Ong, Alvaro Goyanes, Simon Gaisford, and Abdul W. Basit ; Disrupting 3D Printing of medicines with machine learning ; Trends in Pharmacological Sciences, September 2021, Vol 42, No.9
  • Moe Elbadawi, Brais Muñiz Castro, Francesca K H Gavins, Jun Jie Ong, Simon Gaisford, Gilberto Pérez , Abdul W Basit , Pedro Cabalar , Alvaro Goyanes ; M3DISEEN: A novel machine learning approach for predicting the 3D printability of medicines ; Int J Pharm. 2020 Nov 30;590:119837
  • Brais Muñiz Castro, Moe Elbadawi, Jun Jie Ong, Thomas Pollard, Zhe Song, Simon Gaisford, Gilberto Pérez, Abdul W Basit, Pedro Cabalar, Alvaro Goyanes ; Machine learning predicts 3D printing performance of over 900 drug delivery systems ; J Control Release. 2021 Sep 10;337:530-545. doi: 10.1016/j.jconrel.2021.07.046
  • Les médicaments imprimés en 3D sont-ils l’avenir de la médecine personnalisée ? ; 3D Natives, le média de l’impression 3D ; https://www.3dnatives.com/medicaments-imprimes-en-3d-14052020/#!
  • Les médicaments de demain seront-ils imprimés en 3D ? ; Le mag’ Lab santé Sanofi ; https://www.sanofi.fr/fr/labsante/les-medicaments-de-demain-seront-ils-imprimes-en-3D
  • Press Releases – Merck and AMCM / EOS Cooperate in 3D Printing of Tablets ; https://www.merckgroup.com/en/news/3d-printing-of-tablets-27-02-2020.html

Categories
Clinic Exploratory research Preclinical

Why are we still conducting meta-analyses by hand?

« It is necessary, while formulating the problems of which in our further advance we are to find solutions, to call into council the views of those of our predecessors who have declared an opinion on the subject, in order that we may profit by whatever is sound in their suggestions and avoid their errors. »

Aristotle, De anima, Book 1, Chapter 2

Systematic literature reviews and meta-analyses are essential tools for synthesizing existing knowledge and generating new scientific knowledge. Their use in the pharmaceutical industry is varied and will continue to diversify. However, they are particularly limited by the lack of scalability of their current methodologies, which are extremely time-consuming and prohibitively expensive. At a time when scientific articles are available in digital format and when Natural Language Processing algorithms make it possible to automate the reading of texts, should we not invent meta-analyses 2.0? Are meta-analyses boosted by artificial intelligence, faster and cheaper, allowing more data to be exploited, in a more qualitative way and for different purposes, an achievable goal in the short term or an unrealistic dream?

Meta-analysis: methods and presentation

A meta-analysis is basically a statistical analysis that combines the results of many studies. Meta-analysis, when done properly, is the gold standard for generating scientific and clinical evidence, as the aggregation of samples and information provides significant statistical power. However, the way in which the meta-analysis is carried out can profoundly affect the results obtained.
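The statistical combination at the heart of a meta-analysis can be made concrete with the classical fixed-effect (inverse-variance) model, in which each study is weighted by the inverse of its variance. The study values below are hypothetical.

```python
import math


def fixed_effect_pooled(effects, variances):
    """Inverse-variance (fixed-effect) pooling: larger, more precise
    studies get proportionally more weight, which is what gives the
    meta-analysis its statistical power."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))              # standard error of the pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)


# Three hypothetical studies: effect size (e.g. mean difference) and variance
effects = [0.30, 0.10, 0.25]
variances = [0.01, 0.04, 0.02]
pooled, ci = fixed_effect_pooled(effects, variances)
print(round(pooled, 3))  # 0.257
```

A random-effects model would add a between-study variance term to each weight; which model is appropriate depends on the homogeneity of the studies, which is exactly what the methodology below is designed to assess.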

Conducting a meta-analysis therefore follows a very precise methodology consisting of different stages:

  • Firstly, a search protocol will be established in order to determine the question to be answered by the study and the inclusion and exclusion criteria for the articles to be selected. It is also at this stage of the project that the search algorithm is determined and tested.
  • In a second step, the search is carried out using the search algorithm on article databases. The results are exported.
  • Articles are selected on the basis of titles and abstracts. The reasons for exclusion of an article are mentioned and will be recorded in the final report of the meta-analysis.
  • The validity of the selected studies is then assessed on the basis of the characteristics of the subjects, the diagnosis, and the treatment.
  • The various biases are controlled for in order to avoid selection bias, data extraction bias, conflict of interest bias and funding source bias.
  • A homogeneity test will be performed to ensure that the variable being evaluated is the same for each study. It will also be necessary to check that the data collection characteristics of the clinical studies are similar.
  • A statistical analysis as well as a sensitivity analysis are conducted.
  • Finally, the results are presented from a quantitative and/or non-quantitative perspective in a meta-analysis report or publication. The conclusions are discussed.
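As a toy illustration of the article-selection step above, here is a naive keyword-based screener. Real tools such as Abstrackr use classifiers trained on human screening decisions rather than hand-picked keywords, and every term below is hypothetical.

```python
import re

# Hypothetical vocabulary for one screening question
INCLUDE_TERMS = {"randomized", "placebo", "double-blind", "metformin"}
EXCLUDE_TERMS = {"animal", "murine", "rat"}


def screen(abstract, threshold=2):
    """Crude triage of an abstract: hard-exclude on exclusion terms,
    include on enough inclusion hits, otherwise flag for human review."""
    words = set(re.findall(r"[a-z-]+", abstract.lower()))
    if words & EXCLUDE_TERMS:
        return "exclude"
    score = len(words & INCLUDE_TERMS)
    return "include" if score >= threshold else "review"


print(screen("A randomized, double-blind trial of metformin"))        # include
print(screen("Effects of metformin on rat hepatocytes"))              # exclude
print(screen("An observational cohort in primary care"))              # review
```

The value of automation is precisely the "review" bucket: the machine handles the easy decisions at scale and reserves human time for the ambiguous abstracts.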

The systematic literature review (SLR), unlike the meta-analysis, with which it shares a certain number of methodological steps, does not have a quantitative dimension but aims solely to organize and describe a field of knowledge precisely.

The scalability problem of a powerful tool

The scalability problem is simple to put into equation and will only get worse over time: the increase in the volume of data generated by clinical trials to be processed in literature reviews is exponential while the methods used for extracting and processing these data have evolved little and remain essentially manual. The intellectual limits of humans are what they are, and humans cannot disrupt themselves.

As mentioned in the introduction, meta-analyses are costly in human time: it is estimated that a simple literature review requires a minimum of 1,000 hours of highly qualified human labor, and that 67 weeks elapse between the start of the work and its publication. Meta-analyses are thus tools with high inertia, and their timescale is ill-suited to certain uses, such as strategic decision-making, which sometimes requires data to be available quickly. By contrast, published case studies report full literature reviews completed in 2 weeks and 60 working hours using AI-based automation tools.

“Time is money,” they say. Academics have calculated that, on average, each meta-analysis costs about $141,000, and the same team determined that the 10 largest pharmaceutical companies each spend about $19 million per year on meta-analyses. While this may seem modest compared to the other costs of generating clinical evidence, it is not insignificant, and a lower cost would likely allow more meta-analyses to be conducted, in particular over pre-clinical data, potentially reducing the failure rate of clinical trials: currently, 90% of compounds entering clinical trials fail to demonstrate sufficient efficacy and safety to reach the market.

Reducing the problem of scalability in the methodology of literature reviews and meta-analyses would make it easier to work with data from pre-clinical trials. These data present a certain number of specificities that make their use in systematic literature reviews and meta-analyses more complex: the volumes of data are extremely large and evolve particularly rapidly, the designs of pre-clinical studies as well as the form of reports and articles are very variable and make the analyses and the evaluation of the quality of the studies particularly complex. However, systematic literature reviews and other meta-analyses of pre-clinical data have different uses: they can identify gaps in knowledge and guide future research, inform the choice of a study design, a model, an endpoint or the relevance or not of starting a clinical trial. Different methodologies for exploiting preclinical data have been developed by academic groups and each of them relies heavily on automation techniques involving text-mining and artificial intelligence in general.

Another recurring problem with meta-analyses is that they are conducted at a point in time and can become obsolete very quickly after publication, when new data have been published and new clinical trials completed. So much time and energy is spent, in some cases after only a few months or weeks, to present inaccurate or partially false conclusions. We can imagine that the automated performance of meta-analyses would allow their results to be updated in real time.

Finally, we can think that the automation of meta-analyses would contribute to a more uniform assessment of the quality of the clinical studies included in the analyses. Indeed, many publications show that the quality of the selected studies, as well as the biases that may affect them, are rarely evaluated and that when they are, it is done according to various scores that take few parameters into account – for example, the Jadad Score only takes into account 3 methodological characteristics – and this is quite normal: the collection of information, even when it is not numerous, requires additional data extraction and processing efforts.

Given these scalability problems, what are the existing or possible solutions?

Many tools already developed

The automation of the various stages of meta-analysis is a field of research for many academic groups, and some tools have already been developed. Without detracting from these tools, some examples of which are given below, one may ask why they are not more widely used today. Is the market not mature enough? Are the tools, very fragmented in their value propositions, unsuitable for carrying out a complete meta-analysis? Are these tools, developed by research laboratories, marketed well enough? Are their interfaces sufficiently user-friendly?

As mentioned above, most of the tools and prototypes developed focus on a specific task in the meta-analysis methodology. Examples include Abstrackr, which specialises in article screening, ExaCT, which focuses on data extraction, and RobotReviewer, which is designed to automatically assess bias in reports of randomised controlled trials.

Conclusion: improvement through automation?

Considering the burgeoning field of academic research on automated meta-analysis, as well as the various entrepreneurial initiatives in this field (notably the very young start-up Silvi.ai), we can only come to the strong conviction that meta-analysis will increasingly become a task for machines, with the role of humans limited to defining the research protocol, assisted by software helping to make the best possible choices of scope and search algorithms. Beyond the direct savings from automating meta-analyses, many indirect savings can be expected, particularly from the better decisions they will enable, such as whether or not to start a clinical trial. All in all, the automation of meta-analyses will contribute to faster and more efficient drug invention.

Resolving Pharma, whose project is to link reflection and action, will invest in the coming months in the concrete development of meta-analysis automation solutions.



To go further:
  • Marshall, I.J., Wallace, B.C. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Syst Rev 8, 163 (2019). https://doi.org/10.1186/s13643-019-1074-9
  • Clark J, Glasziou P, Del Mar C, Bannach-Brown A, Stehlik P, Scott AM. A full systematic review was completed in 2 weeks using automation tools: a case study. J Clin Epidemiol. 2020 May;121:81-90. doi: 10.1016/j.jclinepi.2020.01.008. Epub 2020 Jan 28. PMID: 32004673.
  • Beller, E., Clark, J., Tsafnat, G. et al. Making progress with the automation of systematic reviews: principles of the International Collaboration for the Automation of Systematic Reviews (ICASR). Syst Rev 7, 77 (2018). https://doi.org/10.1186/s13643-018-0740-7
  • Lise Gauthier, L’élaboration d’une méta-analyse : un processus complexe ! ; Pharmactuel, Vol.35 NO5. (2002) ; https://pharmactuel.com/index.php/pharmactuel/article/view/431
  • Nadia Soliman, Andrew S.C. Rice, Jan Vollert ; A practical guide to preclinical systematic review and meta-analysis; Pain September 2020, volume 161, Number 9, http://dx.doi.org/10.1097/j.pain.0000000000001974
  • Matthew Michelson, Katja Reuter, The significant cost of systematic reviews and meta-analyses: A call for greater involvement of machine learning to assess the promise of clinical trials, Contemporary Clinical Trials Communications, Volume 16, 2019, 100443, ISSN 2451-8654, https://doi.org/10.1016/j.conctc.2019.100443
  • Vance W. Berger, Sunny Y. Alperson, A general framework for the evaluation of clinical trial quality; Rev Recent Clin Trials. 2009 May ; 4(2): 79–88.
  • A start-up specializing in meta-analysis enhanced by Artificial Intelligence: https://www.silvi.ai/
  • And finally, the absolute bible of meta-analysis: The handbook of research synthesis and meta-analysis, Harris Cooper, Larry V. Hedges et Jefferey C. Valentine



Categories
Exploratory research

Health data: an introduction to the synthetic data revolution

Data, sometimes considered the black gold of the 21st century, are the essential fuel for artificial intelligence and are already widely used by the pharmaceutical industry. However, especially because of the particular sensitivity of health data, their use faces several limitations. Will synthetic data be one of the solutions to these problems?

What is synthetic data and why use it?

Synthetic data are data created artificially through the use of generative algorithms, rather than collected from real events. Originally developed in the 1990s to allow work on U.S. Census data, without disclosing respondents’ personal information, synthetic data have since been developed to generate high-quality, large-scale datasets.

These data are generally generated from real data (for example, patient files in the case of health data) and preserve their statistical distribution. It is thus theoretically possible to generate virtual patient cohorts that have no real identity but correspond statistically, in every respect, to real cohorts. Researchers have succeeded in synthesizing virtual patient records from publicly available demographic and epidemiological data. In this case, we speak of “fully synthetic data”, as opposed to “partially synthetic data”, which are synthetic data created to replace missing values in real data sets collected in the traditional way.
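As a minimal sketch of what "preserving the statistical distribution" means, the snippet below fits a bivariate Gaussian to a cohort (itself simulated here, with purely illustrative values) and draws virtual patients from it. Real generators typically use much richer models, such as generative neural networks, but the principle is the same.

```python
import random
import statistics

random.seed(0)

# "Real" cohort (simulated for illustration): age and systolic blood
# pressure, with SBP partly driven by age so the two are correlated.
real = []
for _ in range(2000):
    age = random.gauss(60, 10)
    real.append((age, 90 + 0.7 * age + random.gauss(0, 8)))

ages, sbps = zip(*real)
mu_a, sd_a = statistics.mean(ages), statistics.stdev(ages)
mu_s, sd_s = statistics.mean(sbps), statistics.stdev(sbps)
n = len(real)
r = sum((a - mu_a) * (s - mu_s) for a, s in real) / ((n - 1) * sd_a * sd_s)


def synth_patient():
    """Draw one virtual patient: means, spreads, and the age/SBP
    correlation of the real cohort are preserved, but no row corresponds
    to any real individual."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    age = mu_a + sd_a * z1
    sbp = mu_s + sd_s * (r * z1 + (1 - r * r) ** 0.5 * z2)  # 2x2 Cholesky factor
    return age, sbp


synthetic = [synth_patient() for _ in range(2000)]
```

Any analysis run on `synthetic` should give approximately the same means, variances, and correlation as on `real`, which is exactly the property that makes such cohorts usable as privacy-preserving stand-ins.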

***

Currently, despite various initiatives aiming to democratize their use, such as the Health Data Hub in France (to which we will return in future articles), many problems still limit the optimal, large-scale use of patient data, even as their volume keeps growing. Synthetic data are one of the possible solutions.

  • Health data privacy:

Naturally, health data are particularly sensitive in terms of confidentiality. The need to preserve patient anonymity creates a number of problems in terms of accessibility and data processing costs. Many players do not have easy access to these data, and even when they do gain access, processing them involves significant regulatory and cybersecurity costs. Access times are also often extremely long, which slows down research projects. For some databases, regulations sometimes require hiring a third-party company accredited to handle the data.

To allow their use, patient data are generally anonymized using methods such as deleting identifying variables, modifying them by adding noise, or grouping categorical variables to avoid categories containing too few individuals. However, the effectiveness of these methods has been regularly called into question by studies showing that it is generally possible to trace the identity of patients by matching (probabilistically or deterministically) with other databases. In this context, synthetic data generation can be used as a safe and easy-to-use alternative.
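The noise-addition method mentioned above can be sketched as follows. This is a simplified, differential-privacy-style illustration with hypothetical values, not a production anonymization scheme: the point is only that the noise is calibrated to how much one individual can change the released statistic.

```python
import math
import random

random.seed(42)


def laplace(scale):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))


def noisy_mean(values, epsilon, value_range):
    """Release the mean of a bounded variable with Laplace noise scaled to
    the query's sensitivity: smaller epsilon means stronger privacy and
    noisier output."""
    sensitivity = value_range / len(values)   # max influence of one record
    return sum(values) / len(values) + laplace(sensitivity / epsilon)


ages = [34, 45, 29, 61, 50, 38, 44, 57]  # illustrative ages, bounded by 0-100
print(noisy_mean(ages, epsilon=1.0, value_range=100))
```

The re-identification attacks cited above succeed partly because ad-hoc noise is often too weak or applied inconsistently; calibrated mechanisms like this one, and synthetic data generation, are attempts to make the privacy guarantee explicit.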

  • Data quality:

The technique of synthetic data generation is commonly used to fill in missing data in real data sets that are impossible or very costly to collect again. These new data are representative of the statistical distribution of variables from the real data set.

  • The volume of health data datasets is too small to be exploited by artificial intelligence:

Training Machine or Deep Learning models sometimes requires large volumes of data to obtain satisfactory predictions: it is commonly accepted that a minimum of about 10 times as many examples as degrees of freedom of the model is required. However, when Machine Learning is used in health care, the available volume of data often does not allow good results – for example with poorly documented rare pathologies, or sub-populations comprising few individuals. In such cases, the use of synthetic data is part of the data scientist's toolbox.

The use of synthetic data is an emerging field, and some experts believe it will help overcome some of the current limitations of AI. Among the various advantages synthetic data bring to AI: as much data as needed can be created quickly and inexpensively, without the hand-labelling that real data often requires, and the data can be regenerated repeatedly to make the model as efficient as possible at processing real data.
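As a toy illustration of this "create as much data as you want" property, the sketch below (our own invented example) fits a simple multivariate normal model to a small "real" sample and then draws an arbitrarily large synthetic cohort with the same statistical profile. Real generators are far more sophisticated, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small "real" cohort: two correlated variables (say, age and systolic BP) -- toy values
real = rng.multivariate_normal([60, 130], [[90, 40], [40, 120]], size=50)

# Fit a simple parametric model to the real sample...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...then draw as many synthetic "patients" as needed
synthetic = rng.multivariate_normal(mean, cov, size=5000)

print(synthetic.shape)  # (5000, 2): a hundred times larger than the real cohort
```

The synthetic cohort reproduces the means and correlations of the real sample while containing no real individual.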

The different techniques for generating synthetic data

The generation of synthetic data involves several phases:

  • The preparation of the sample data from which the synthetic data will be generated: to obtain a satisfactory result, the data must be cleaned and harmonized if they come from different sources
  • The actual generation of the synthetic data (some of these techniques are detailed below)
  • The verification and evaluation of the confidentiality offered by the synthetic data

Figure 1 – Synthetic Data Generation Schema

The methods of data generation are numerous, and the choice depends on the objective and on the type of data to be created: should data be generated from already existing data, thereby following their statistical distributions? Or should fully virtual data be generated from rules that make them realistic (like text, for example)? In the case of “data-driven” methods, which take advantage of existing data, generative Deep Learning models are used. In the case of “process-driven” methods, in which mathematical models generate data from underlying physical processes, so-called agent-based modelling is used.

Operationally, synthetic data are usually created in the Python language, well known to Data Scientists. Different Python libraries are used, such as Scikit-Learn, SymPy, Pydbgen and VirtualDataLab. A future Resolving Pharma article will follow up on this introduction by showing how to create synthetic health data with these libraries.
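As a foretaste, Scikit-Learn – one of the libraries mentioned above – can already produce a fully synthetic tabular dataset in a single call. The parameters below are purely illustrative (a thousand fictitious "patients", ten features, a rare positive class):

```python
from sklearn.datasets import make_classification

# Fully synthetic tabular dataset: 1,000 "patients", 10 features, ~10% positive class
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=4, weights=[0.9, 0.1],
                           random_state=0)
print(X.shape, y.mean())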

***
Evaluation of synthetic data

It is common to evaluate anonymized patient data according to two main criteria: the quality of the use that can be made of the data, and the quality of the anonymization achieved. It has been shown that the more the data are anonymized, the more limited their use becomes, since important but identifying features are removed, or precision is lost by grouping values into classes. A balance must be found between the two, depending on the destination of the data.

Synthetic data are evaluated according to three main criteria:

  • The fidelity of the data to the base sample
  • Fidelity of the data to the distribution of the general population
  • The level of anonymization allowed by the data

Different methods and metrics exist to evaluate these criteria.
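One simple fidelity metric, for example, compares the marginal distribution of a variable in the real and synthetic samples with a two-sample Kolmogorov–Smirnov test. The sketch below is our own illustration using SciPy, with both samples deliberately drawn from the same distribution so the test should find no difference:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
real = rng.normal(50, 10, 500)        # e.g. ages in the real cohort
synthetic = rng.normal(50, 10, 500)   # the synthetic counterpart

stat, p_value = ks_2samp(real, synthetic)
# A small statistic / large p-value means the test cannot tell the two
# distributions apart, i.e. good marginal fidelity on this variable
print(stat, p_value)
```

In practice one would repeat such checks variable by variable, and complement them with multivariate and privacy-oriented metrics.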

By ensuring that the quality of the data generated is sufficient for its intended use, evaluation is an essential and central element of the synthetic data generation process.

Which use cases for synthetic data in the pharmaceutical industry?

A few months ago, Accenture Life Sciences and Phesi, two companies providing services to pharmaceutical companies, co-authored a report urging them to integrate more techniques involving synthetic data into their activities. The use case mentioned in this report concerns synthetic control arms, which, however, generally rely on real data from different clinical trials that are statistically reworked.

Outside the pharmaceutical industry, in the world of Health, synthetic data are already used to train visual recognition models in imaging: researchers can artificially add pathologies to images of healthy patients and thus test their algorithms on their ability to detect the pathologies. Based on this use-case, it is also possible to create histological section data that could be used to train AI models in preclinical studies.

***

There is no doubt that the burgeoning synthetic data industry is well on its way to improving artificial intelligence as we currently know it, and its use in the health industry. This is particularly true when handling sensitive and difficult-to-access data. We can imagine, for example, a world where it is easier and more efficient for manufacturers to create their own synthetic data than to seek access to medical or medico-administrative databases. This technology would then be one of those that reshape the organization of innovation in the health industries, by giving real data a less central place.


To go further:

These articles should interest you


Introduction to DeSci

How Science of the Future is being born before our eyes « [DeSci] transformed my research impact from a low-impact virology article every other year to saving the lives and…

Towards virtual clinical trials?

Clinical trials are among the most critical and expensive steps in drug development. They are highly regulated by the various international health agencies, and for good reason: the molecule or…

To subscribe free of charge to the monthly Newsletter, click here.

Would you like to take part in the writing of Newsletter articles? Would you like to take part in an entrepreneurial project on these topics?

Contact us at hello@resolving-pharma.com! Join our LinkedIn group!

Categories
Entrepreneurship Generalities

Blockchain, Mobile Apps: will technology solve the problem of counterfeit drugs?

« Fighting counterfeit drugs is only the start of what blockchain could achieve through creating [pharmaceutical] ‘digital trust’.»

Andreas Schindler, Blockchain Expert

20% of the medicines circulating in the world are counterfeit; most of them do not contain the right active substance, or not in the right quantity. Representing 200 billion dollars per year, this traffic – 10 to 20 times more profitable for organized crime than heroin – causes the death of hundreds of thousands of people every year, the majority of them children whose parents think they are treating them with real medicine. To fight this scourge, laboratories and international health authorities must form a united front, in which technology could be the keystone.

***
The problem of counterfeit drugs

It is an almost invisible scourge whose contours are difficult to define – a low-key global epidemic that provokes no lockdowns or mass vaccination campaigns, but that nevertheless kills hundreds of thousands of patients every year. Counterfeit medicines, defined by the WHO as “medicines that are fraudulently manufactured, mislabeled, of poor quality, conceal the details or identity of the source, and do not meet defined standards”, generally concern serious diseases such as AIDS, tuberculosis or malaria, and lead to the death of approximately 300,000 children under the age of 5 from pneumonia and malaria alone. In fact, the general term “counterfeit drugs” covers very different products: some contain no active ingredient, some contain active ingredients different from those indicated on the label, and others contain the indicated Active Pharmaceutical Ingredient (API) in the wrong quantity. Beyond their responsibility for countless human tragedies, counterfeit medicines also store up future problems by increasing antibiotic resistance in regions of the world where health systems are already failing and will probably be unable to cope with this new challenge.

Now, from a financial perspective. Apart from public health considerations, counterfeit medicines are also an economic and political problem for countries: this 200-billion-dollar-a-year traffic feeds organized crime networks and represents a very high cost for health systems. For the pharmaceutical industry, the problems caused by this traffic are numerous: a loss of some 20% of worldwide sales revenue; a loss of confidence among patients, who most of the time do not know that the counterfeit drugs are not the originals; and, finally, considerable expenditure on fighting the counterfeits.

***
Initiatives against counterfeit drugs

Counterfeit medicines are usually distributed through highly complex networks, which makes it particularly difficult to curb their spread. In its “Guide for the development of measures to eliminate counterfeit medicines”, the WHO identifies various legal, social and political initiatives that States can put in place to limit the spread of counterfeit medicines. While these recommendations are relevant, they are particularly difficult to implement in regions of the world where countries have few resources and whose structures are plagued by endemic corruption. In this article, we will therefore focus on solutions implemented by private companies: start-ups specialized in the fight against counterfeit drugs, and large pharmaceutical companies.

One of the methods used by various start-ups – such as PharmaSecure, based in India, or Sproxil, based in Nigeria and actively collaborating with that country's government – is to take advantage of the populations' widespread access to smartphones to let them identify counterfeit drug boxes, according to the following model: drug manufacturers collaborate with these start-ups to place codes (numerical codes or QR codes) inside the box or on the packaging of the drug, under a surface that needs to be scratched or removed. Patients can download a free app and scan these codes to verify that the medication is authentic. These applications also allow patients to receive advice on their treatments. They function as a trusted third party, certifying to the patient – the final consumer of the drug – that no one has fraudulently substituted themselves for the legitimate manufacturer.
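The verification model described above can be sketched as follows. This is our own deliberately simplified illustration, not the actual implementation of PharmaSecure or Sproxil; it assumes the registry stores only hashes of the codes, so that even a database leak would not expose usable codes.

```python
import hashlib
import secrets

# Hypothetical registry kept by the manufacturer / verification start-up:
# only hashes of issued codes are stored, never the codes themselves
issued_hashes = set()

def issue_code() -> str:
    """Generate a one-time code to hide under the scratch surface."""
    code = secrets.token_hex(8)
    issued_hashes.add(hashlib.sha256(code.encode()).hexdigest())
    return code

def verify(code: str) -> bool:
    """Called when the patient's app scans the code: valid only once."""
    digest = hashlib.sha256(code.encode()).hexdigest()
    if digest in issued_hashes:
        issued_hashes.remove(digest)  # burn the code so a copy fails
        return True
    return False

code = issue_code()
print(verify(code))   # first scan: genuine box
print(verify(code))   # second scan of the same code: suspected counterfeit
```

The one-time-use property is the key design choice: a counterfeiter who photographs a used code gains nothing, since the code is burned at the first verification.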

 
Figure 1 – Model for drug authenticity verification using mobile apps

The system described above works in much the same way as serialization, whose implementation began several years ago and which is described in European Regulation 2016/161 – with the exception that the verification is performed by the patient and not by the pharmacist.

Other mobile apps, such as CheckFake and DrugSafe, are developing a different verification system, taking advantage of the smartphone’s camera to check the shape, content, and color compliance of drug packaging. Finally, another category of mobile apps implements a system that analyses the shape and the color of the drugs themselves to identify which tablets they are, and certify they are authentic.

These different solutions have a number of qualities, in particular their ease of deployment and of use by patients all over the world. On the other hand, they have the disadvantage of being locked in a speed race with counterfeiters, who are pushed to produce ever more realistic counterfeits. Moreover, these technologies can hardly be applied to other circuits: securing the entire supply chain, or tracking the circuit of drugs within hospitals. This is why many large pharmaceutical groups, such as Merck and Novartis, are betting on a different technology: the Blockchain. Explanations follow.

***
Presentation of the Blockchain technology

Blockchain is a technology conceived in 2008, on which cryptocurrencies have since been built. It is a cryptographically secured technology for storing and transmitting information without a centralized control body. Its main objective is to allow a computer protocol to be a vector of trust between different actors without an intermediary third party. The Blockchain mechanism allows the participating actors to reach a unanimous agreement on the content of the data and to prevent its subsequent falsification. The historical consensus method is so-called “proof of work”: a number of actors provide computing power to validate the arrival of new information. In the context of cryptocurrencies, these actors are called miners: very powerful, energy-hungry computing machines are all given the same complex mathematical problem to solve at the same time, and the first to succeed gets to validate the transaction and is paid for it. Each of the participants, called “nodes”, therefore holds an updated history of the ledger that is the Blockchain. The way to corrupt a proof-of-work blockchain is to gather enough computing power to carry out a so-called “51% attack”, i.e., to push the consensus towards a falsification of the chain – double spending in particular. In practice, this attack is hardly conceivable on blockchains such as Bitcoin, as the computing power required would be phenomenal (perhaps one day the quantum computer will make what we currently consider cryptography obsolete, but that is another debate…). Other validation techniques now exist, such as proof of stake or proof of storage; they were essentially designed to address the scalability and energy-sustainability issues of blockchains.

Figure 2 – Diagram of how to add a block to a blockchain.
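The proof-of-work mechanism described above can be reduced to a toy example: repeatedly hash the block data with an incrementing nonce until the hash meets a difficulty target. This sketch is purely illustrative – real blockchains use vastly higher difficulties and a full block structure – and the block content is invented:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

data = "block #1: shipment 42 left the factory"  # invented block content
nonce = mine(data)
digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
print(nonce, digest)
```

Finding the nonce is costly; checking it takes one hash. That asymmetry is what makes rewriting history (the 51% attack mentioned above) so expensive.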

Conceived in the aftermath of the 2008 financial crisis, this technology has a strong political connotation, and Bitcoin's philosophy, for example, is to allow individuals to free themselves from banking and political control systems. The original blockchains, such as Bitcoin, are thus said to be “open”: anyone can read and write the chain's registers. Over time, and for the convenience of private companies, semi-closed blockchains (everyone can read, but only a centralizing organization can write) and closed blockchains (reading and writing are reserved for a centralizing organization) have been developed. These new forms of blockchain move considerably away from the original philosophy, and one can legitimately question their relevance: they retain some of blockchain's disadvantages in terms of difficulty of use while keeping the problems of a centralized database – a single entity can voluntarily decide to corrupt it, or can fall victim to hacking.

This closed configuration often allows for greater scalability but raises a question that is as much technological as it is philosophical: is a blockchain, when fully centralized, still a blockchain?

***
Prospects for the use of technology in the fight against counterfeit drugs

At a time when trust is more than ever a central issue for the pharmaceutical industry, whose legitimacy and honesty are relentlessly questioned, it is logical that the players in this sector take an interest in this technology of trust par excellence. Among the various use cases, to which we will no doubt return in future articles, the fight against counterfeit drugs is one of the most promising and most important in terms of human lives potentially saved. For example, Merck recently began collaborating with Walmart, IBM and KPMG on an FDA-led pilot project using blockchain to allow patients to track the entire pathway of the medication they take. This concept is already being functionally tested in Hong Kong on Gardasil, using mobile applications downloaded by pharmacists and patients. The entire drug supply chain is thus built around the blockchain, making it possible to retrieve and assemble a large amount of data concerning, for example, shipping dates or storage conditions and temperatures. The aforementioned consortium is also exploring the use of Non-Fungible Tokens (NFTs): unique, non-interchangeable digital tokens. Each box of medication produced would have an associated NFT, which would follow the box through its circuit, from the manufacturer to the wholesaler, from the wholesaler to the pharmacist, and from the pharmacist to the patient. In the future, each patient would thus receive an NFT together with the medication, certifying the inviolability of its origin. None of the actors in the supply chain could fraudulently introduce counterfeit drugs, since these would not have an associated NFT. This future is probably promising for drug safety, but it will only be achievable after significant work: educating stakeholders on the one hand, and building digital interfaces accessible to all patients on the other.
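The NFT-per-box idea can be sketched, in a deliberately naive way, as a custody trail attached to a unique token. This is our own illustration of the concept, not the consortium's implementation, and the lot identifier is invented:

```python
from dataclasses import dataclass, field

@dataclass
class BoxToken:
    """One unique token per medicine box; every transfer is appended to its history."""
    token_id: str
    history: list = field(default_factory=list)

    def transfer(self, holder: str) -> None:
        self.history.append(holder)

    def current_holder(self) -> str:
        return self.history[-1]

token = BoxToken("GARDASIL-LOT-0001")  # invented identifier
for holder in ["manufacturer", "wholesaler", "pharmacist", "patient"]:
    token.transfer(holder)

print(token.current_holder())       # the patient holds the token at the end
print(" -> ".join(token.history))   # the full supply-chain trail
```

On a real blockchain, each transfer would be a signed, consensus-validated transaction rather than a simple list append – which is exactly what prevents an intermediary from inserting a box that has no token.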

***

With the emergence of e-commerce and its ever-increasing ease of access, the problem of counterfeit drugs has exploded in recent years, and the pharmaceutical ecosystem will need to mobilize and innovate in order to curb it, as well as to restore the trust that has been eroded. Several fascinating initiatives using blockchain technology are currently being carried out by various stakeholders in the health sector, and we can see in these projects the outline of a potential solution to drug counterfeiting – but we must nevertheless view them with a critical eye. The temptation to market the buzzword “blockchain”, strong since the explosion of cryptocurrencies in 2017, persists even, unfortunately, where a centralized database would perfectly meet the need. Can we go so far as to think, as some specialists in this technology do, that blockchain is only viable and useful when used for financial transfers? The debate is open, and the future will no doubt quickly bring an answer!



Categories
Interviews

Using Real World Data, an interview with Elise Bordet – RWD and Analytics Lead

Every month, Resolving Pharma interviews the stakeholders who are shaping the health and pharmaceutical industries of tomorrow. For this first interview, Elise Bordet honors us with her participation – many thanks for your time and your insights!

“Data access and analytics capabilities will become an increasingly important competitive advantage for pharmaceutical companies.”

[Resolving Pharma] To begin with, could you introduce yourself and talk about your background? Why did you choose to work at the intersection of Data and Pharma?

[Elise Bordet] I am an agronomist, I did a PhD in Immunology-Virology and I then did an MBA before joining my current company. I am passionate about very technical and cutting-edge topics, and the implementation of new research approaches. I was very impressed by a conference on Artificial Intelligence and the notion of a 4th industrial revolution, I didn’t want to miss this subject.

I was very attached to fundamental research in the public sector, but I still wanted to form my own opinion about the pharmaceutical industry, and I am not disappointed at all. I think that it is a great place to contribute to research and the common good.

I love the ever-changing topics, where everything changes on a daily basis, where you always have to challenge yourself to stay updated on the latest innovations. Pharma, Data and AI subjects are heaven for me!

Can you tell us what Real World Data is and how the pharmaceutical industry uses it?

Real World Data is defined as data that is not collected in a randomized clinical trial. Therefore, it is a huge topic. It ranges from data collected in registries to larger databases such as medico-administrative databases.

This data allows the pharmaceutical industry to create drugs that are better adapted to the reality of Health systems. It also allows the creation of new research approaches, to support “drug repurposing” approaches for example.

How do Real World Evidence-based approaches differ from traditional pharmaceutical industry approaches? What are their added values?

Actually, these approaches have existed for a long time, particularly in Pharmacovigilance (the famous Phase IV). However, the amount of data available, its quality, our calculation and analysis capacities have been turned upside down. All these changes allow us to answer new research questions. Questions that remained unanswered because we did not have the capacity to look at what was happening in reality. The second subject is the major contributions of Artificial Intelligence: scientifically, we will be able to go much further.

In your opinion, how is the pharmaceutical industry going to balance the use of Real World Evidence with more traditionally generated clinical and pre-clinical data in the future?

Real World Data will play an increasingly important role. Each type of data has its advantages and disadvantages. In fact, it is not a question of opposing one type of data to another – quite the contrary: the most interesting thing is to be able to bring all these data together and extract the most information from them.

What impact could this type of data have on the drug value chain and the partnerships that the pharmaceutical industry needs to put in place?

Data access and analysis capabilities will become an increasingly important competitive advantage for pharmaceutical companies. The Data strategy of companies is one of the essential pillars. I imagine that in the future we will look not only at the value of a company’s portfolio, but also at the value and the impact of the analytics that can be performed by the company. Data is going to play so much on the projects’ probability of success that it is difficult to imagine not taking it into account in the metrics of economic valuation.

You recently gave a presentation on digital twin technology. Can you explain what it is?

Digital twin is a very elegant concept that can be summarized as follows: with each development, we generate new data on which we can rely for the next projects. These data should allow us to model most levels of biological organization – molecular, cellular, tissue, and then the scale of organs or even organisms. This modeling will avoid replicating knowledge that has already been created and will notably allow us to accelerate pre-clinical and clinical development – and why not model the first Phase I results very precisely.

How do you see the pharmaceutical industry in 30 years’ time?

Wow! Everything is going to be different! First of all, I think that, as in all industries, technology will have enabled a profound transformation of all decision making, what we call “data-driven decision making”. Science will have made incredible progress, calculation and prediction capacities will have been multiplied, there will be new approaches in Artificial Intelligence that we do not know today. We will have made immense progress in the interoperability of the various health databases that are fragmented today. It is a good exercise to try projecting ourselves in 30 years’ time. We won’t remember how we did things before, that’s the principle of technological revolutions; we’ve already forgotten how we lived without cell phones and the Internet! We will no longer see ourselves without Data and AI at the center of our decisions and projects. From a more organizational point of view, data sharing will have facilitated public and private scientific collaborations and the implementation of projects that will accelerate research, such as the Health Data Hub in France or the European Health Data Space that will be launched by the European Union.

Do you have any advice for someone who wants to work in Data Science in the Healthcare sector?

We scientists learned through doubt and are still haunted by it. Just because you have expertise in one field (clinical trials, laboratory research, etc.) does not mean that you cannot acquire other skills in Data Science or Artificial Intelligence, for example. Versatile profiles are and will be the most sought after. So my advice is: don’t panic!

If you can, start quickly to train yourself, the Internet puts us at a click of the best courses on programming, Data Science and many other advanced subjects, take advantage of it!

Go ahead and start tomorrow!
