Shaping the Next Generation of Pharma Marketing
As we stand on the precipice of an accelerating artificial intelligence (AI) revolution in the pharma industry, the convergence of AI and practical, real-world applications has taken center stage. On June 8, approximately six months after the launch of OpenAI’s ChatGPT, five industry leaders with expertise in leveraging AI to enhance marketing strategies, customer engagement, data analysis, and commercialization of pharma products came together for a robust discussion.
The roundtable, steered by Sharlene Jenner, vice president of engagement strategy for Abelson Taylor and professor of AI and personalization for digital marketers at Southern Methodist University, dives into the implications of AI’s rapid evolution through an exploration of present-day applications and the associated ethical and regulatory challenges. The participants shed light on the immediate impact and future potential of AI, offering a fleeting glimpse of the present (things are moving very fast) and a speculative look at how these rapidly evolving technologies could reshape pharmaceutical marketing in the years to come.
Jenner: We’re six months into the latest iteration of AI evolution in pharma, and our entire world is abuzz. Let’s skip theory and immediately talk about the concrete practices all of us are implementing. How are you presently harnessing the power of AI to bolster your own marketing technology strategies?
FARUK CAPAN, CHIEF INNOVATION OFFICER, EVERSANA, AND CEO, EVERSANA INTOUCH: The excitement is real, but practicality is more important. We are in the pharmaceutical industry, which is highly regulated, and patient and HCP (healthcare provider) data is very important.
As an organization, we advise our clients about what to do and not to do. With any new technology, pharma is usually accused of being a laggard, but those of you [who have] been around the block know that isn’t true—there are very good reasons for it.
We work with an outside legal firm to make sure that everything we do for our clients is fully compliant and legally sound. We also asked our own teams—different departments, creative teams, [and] strategy—how we can use this technology. After we educated and empowered them, they came up with very concrete examples.
The simplest example, which is already on the market, involves video production. Instead of traditional video—where you write a script, get it approved, and shoot with experts, which takes a lot of time and money—we now use generative AI.
We’re able to get the script approved and can change the video because we are using “near real people” online in video production. So, we can do the videos in a couple [of] days rather than months.
JEFF HEADD, VICE PRESIDENT, COMMERCIAL DATA SCIENCE, THE JANSSEN PHARMACEUTICAL COMPANIES OF JOHNSON & JOHNSON: In terms of AI and marketing, we’ve been on a journey for some time thinking about our omnichannel efforts. In the past, we’ve used machine learning to improve how we identify [HCPs] and institutional customers and find patients along their journey who will benefit most from our products. That has allowed us to rethink our strategy of when, where, and how we deploy our field forces as well as our marketing efforts across our digital channels.
Now, when we think about generative AI, there are two areas that come to mind. We’ve built many custom tools that require a lot of software development. Generative AI coding has the potential to greatly speed up our development cycles, testing, unit testing, and so on. We’re experimenting with how we can augment our development teams to move more quickly, in an agile way, to adjust to market conditions. That’s a tremendous application of this technology.
The second area is deriving insights out of rich textual data—that can be publicly available data, medical literature, or treasure troves of market research that we all have in our data stores in pharma companies. So, it’s a mix of driving new and different insights out of what we already have. But what I love about foundation models is the ability to combine internal and external data in unique ways that we’ve not been able to do before and take advantage of the power of those large-scale models.
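To make Headd’s second point concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn for simple TF-IDF retrieval) of the general pattern: pull the most relevant passages from a mix of internal market research and public literature, then hand them to a foundation model as context. The document snippets, the question, and the retrieval approach are illustrative assumptions, not a description of Janssen’s actual pipeline, and the foundation-model call itself is left as a placeholder.

```python
# Hypothetical sketch: retrieve relevant internal + external text and package
# it as context for a foundation model (a simple retrieval-style pattern).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative stand-ins for internal market research and public literature.
documents = [
    "Internal market research: oncologists cite infusion time as a key barrier.",
    "Public literature: adherence improves when dosing schedules are simplified.",
    "Internal field notes: nurses ask for clearer injection-site guidance.",
]
question = "What barriers do oncologists report for infused therapies?"

# Rank documents by TF-IDF cosine similarity to the question.
matrix = TfidfVectorizer().fit_transform(documents + [question])
scores = cosine_similarity(matrix[len(documents)], matrix[:len(documents)]).ravel()
top_docs = [documents[i] for i in scores.argsort()[::-1][:2]]

# Assemble a grounded prompt; the actual model call is provider-specific,
# so it is left as a placeholder here.
prompt = (
    "Answer using only this context:\n"
    + "\n".join(top_docs)
    + f"\n\nQuestion: {question}"
)
print(prompt)  # pass `prompt` to whichever approved foundation model you use
```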
STEPHEN ONIKORO, SENIOR PRINCIPAL, HEAD OF STRATEGY, PHARMAFORCEIQ: When we started PharmaforceIQ, the original idea was to integrate across the marketing ecosystem in pharma—across sales platforms as well as non-personal promotional platforms. We quickly realized that you can’t orchestrate across those platforms without some sort of machine learning tools. So, we pivoted and dove deep into becoming a true, machine-learning AI company.
We needed to do two things. First, to be able to orchestrate intelligently across the ecosystem, you need to understand your customers very well. What are their preferences when it comes to HCP marketing? That requires building HCP profiles for each individual HCP in your ecosystem. We invested in more than 20 AI models to characterize these HCP preferences across many different types of features, and that got us rolling as an AI marketing company.
The second aspect came from the first—now that we know their preferences, how can we orchestrate across these ecosystems in an intelligent way? Now, we don’t need to set up rules in a system that says, “This physician needs to get this message.” Instead, we can ask AI to do that for us based on their preferences and some other business rules we apply to it.
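As a rough illustration of the orchestration Onikoro describes, the sketch below (Python) replaces hard-coded “this physician gets this message” rules with a selection step driven by model-predicted channel preferences, constrained by a couple of business rules. The channel names, scores, and rules are hypothetical, not PharmaforceIQ’s actual models.

```python
# Hypothetical orchestration sketch: pick the next channel for an HCP from
# model-predicted preference scores, filtered by simple business rules.
from dataclasses import dataclass, field

@dataclass
class HCPProfile:
    npi: str
    specialty: str
    channel_scores: dict = field(default_factory=dict)  # from upstream ML models
    emails_sent_this_month: int = 0

def next_best_channel(profile: HCPProfile, email_cap: int = 4) -> str:
    """Return the highest-scoring channel that passes the business rules."""
    ranked = sorted(profile.channel_scores, key=profile.channel_scores.get, reverse=True)
    for channel in ranked:
        # Business rule: respect a monthly email frequency cap.
        if channel == "email" and profile.emails_sent_this_month >= email_cap:
            continue
        # Business rule (illustrative): no rep visits for this specialty.
        if channel == "rep_visit" and profile.specialty == "pathology":
            continue
        return channel
    return "no_contact"

hcp = HCPProfile(
    npi="0000000000",
    specialty="oncology",
    channel_scores={"email": 0.82, "rep_visit": 0.64, "banner_ad": 0.31},
    emails_sent_this_month=4,
)
print(next_best_channel(hcp))  # "rep_visit": the email cap knocks out the top choice
```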
ANURAG BANERJEE, CEO AND FOUNDER, QUILT.AI: I always think of business in a bidirectional way. It must be accretive to the P&L—so, it has to make us money, make our clients money, save time, and/or create efficiency. In understanding consumers, patients, and HCPs, the ability of large-scale machine-learning tools to do that is brilliant. It’s very fast, and you don’t need focus groups as much—or at all, in my opinion. You don’t need surveys, or you need fewer of them. And even if you have survey data, you can analyze it very quickly. So, there is a speed to market that a pharma company can have that wasn’t true even 12 or 18 months ago.
ChatGPT has made massive strides with each model released—and the other models are great. So, we see that as huge in terms of being able to respond to consumers quickly. The other thing is content curation. Now, you can generate so many different pieces of content—and we have MLR approval to think through. But the truth is: to win on the internet with HCPs or with patients, you must have multiple, personalized pieces of content. And AI allows you to do that—not amazingly today, but [it is] much better than it [was] three months ago. We see true personalization at scale finally being possible. It’s been theoretical for a long time, and now the time is here.
Jenner: Let’s talk about intellectual property and content creation. When AI tools came into the marketplace, the content on the internet became the unwitting trainers of generative AI. At this point, there isn’t compensation for the many articles and journals they’ve consumed, or for the myriad of ways that information is being absorbed to enable these powerful AI platforms. What is going to be the cost to some organizations and content creators as AI continues to absorb our market?
CAPAN: It’s an unfortunate and important topic that will change and evolve, but we need to be very practical right now. There’s potential for big copyright issues when permission isn’t obtained before content is used to train a model. And this happens with our clients, too, as each company has its own proprietary data [and] information. If you happen to share it with the public, what happens? Obviously, we want to protect our copyrighted knowledge.
However, [this is for] any profession, any expertise area—AI is not going to replace you. People using AI will replace you if you don’t use it, and I encourage people to be careful but not to stay on the sidelines.
But if you’re a good writer (or medical writer) [or] if you have good expertise on any topic, it won’t replace you because AI still needs human feedback. We are not 100% efficient, but AI will definitely help us do our jobs better. For copyright issues, it will evolve, and regulation will be needed to make sure we do things right.
ONIKORO: From the perspective of the human creator—as well as all the content that has already been created—AI is a customer of intellectual property just like a human can be a customer of intellectual property. The question then becomes: is AI going to be a paying customer, and do you need to be paid for your content? Or is AI going to be a non-paying customer? This takes me back to the ’90s when Napster disrupted the music industry. You could download music for free online, but that was someone’s intellectual property. And now the industry has transformed into Spotify, Pandora, [and] Apple Music, where you still download and stream music—but pay for it using a method that wasn’t available 25 years ago. I see us moving slowly into that kind of format. AI will always be able to consume information that you don’t want it to consume in many instances, but how do we frame it properly so that owners of the IP get credit for it in terms of payment? Once your content is online, AI will pick it up, consume it, and get trained on it. A framework around compensating content owners is the key.
BANERJEE: I gently disagree with Faruk and Steve. The whole construct of content creators on the web—”I wrote a blog and that went into ChatGPT; so, ChatGPT should pay me”—I find that logic flawed. I was taught by somebody; I learned art from somebody else; and I’m making a living. I’m not compensating those people. Yes, I pay taxes, and I buy certain things; but it’s not a customer model as such.
As we see with larger models, over a period of time, they become extremely general. My colleagues here have probably written thousands of prompts, played with the models, and accessed them via API; but what truly works well—and probably what works in Jeff’s world—are smaller large language models (LLMs) trained on tight data, like Bloomberg’s LLM. Some of the LLMs that have been released are fascinating on tight data, so your in-house data can be managed and interrogated well.
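For readers curious what “smaller LLMs on tight data” can look like in code, here is a heavily simplified, hypothetical sketch that continues training a small causal language model on a few in-house documents using the Hugging Face transformers Trainer. The model choice, toy documents, and hyperparameters are illustrative assumptions; Banerjee’s point is about the general approach, not this specific recipe.

```python
# Hypothetical sketch: adapt a small causal LM to "tight," in-house text.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

in_house_docs = [  # stand-ins for approved, proprietary documents
    "Brand X prescribing notes: dose adjustments are required for renal impairment.",
    "Market research summary: HCPs prefer concise efficacy data up front.",
]

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-family models ship without a pad token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

dataset = Dataset.from_dict({"text": in_house_docs}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lm", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()  # the adapted checkpoint can then be interrogated against in-house questions
```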
Jenner: There’s so much legislation and regulation coming into AI. On a national and a global scale, we are starting to see some legislation take shape, especially in the European Union—which, when they introduced GDPR, was at the forefront from a privacy perspective and validated their citizens’ concerns. What are going to be the most significant repercussions or changes from a legislative and regulatory perspective for AI?
BANERJEE: Legislation, sadly, is almost always retroactive. It’s like the tobacco industry saying, “Regulate us.” That’s how I think about those of us in the AI industry saying, “Regulate us,” which is a little duplicitous. Regulation should include four key stakeholders:
- First, content creators. All of us create content in some shape or form. What are the terms and conditions that content platforms should adhere to, and what should they be permitted to provide? What can Instagram or Twitter give or not give that potentially goes into ChatGPT?
- The second stakeholder is the platforms. What are the platforms going to do with this amount of content, and what can they build or not build? What is the monetization scheme?
- The third is the buyers of the product. If I am buying an Instagram-based LLM, what will that mean for them?
- The fourth stakeholder is privacy—that means having a privacy screen around it.
My theory is that legislation is going to come in a very draconian way and target small-use cases. Being GDPR-compliant is very defined, and there are ways to manage and optimize to that. And if we follow the GDPR model, it’s an easy model to execute against. I know we don’t have it in the US; but in Europe, it’s an easy model to understand.
As an industry, we shouldn’t jump and say, “Hey, legislate us,” because that sounds unethical and inappropriate. We should remain at arm’s length. And while I’m not worried about legislation, it should occur with those four stakeholders in view.
HEADD: As Faruk highlighted earlier, it’s a highly regulated industry. So, in the context of global legislation that governs a dynamically evolving capability, a thoughtful approach is essential to ensure quality and compliance remain inherent in the system—and, therefore, in decision-making.
An example of this dynamism is in the large language model and foundation model world, where there are two camps. There are the folks who will argue that bigger is better, and bigger will win. But that’s where one could quickly run into questions such as: “What was it trained on, and are we really being compliant, fair, and ethical?” And that’s difficult to answer.
Then there’s the other side, where smaller, focused models are catching up very quickly and will probably soon match, and potentially surpass, the functionality we aim to achieve at the use-case level in our industry or any business.
There’s also a push in the open-source community to declare what a model is trained on, to get appropriate permission to use underlying data sets for clearly specified purposes, and to maintain clear traceability of model inputs and outputs.
These are important conversations about integrity, provenance, and quality, and there’s not one single answer to the questions and issues raised. I think we can continue to look forward to having a rich set of options to consider depending on what the use case is.
Jenner: What do you think are the biggest challenges as we examine ethical concerns, privacy, and data security? And as marketers, what do you think are the biggest challenges we should be aware of as the legislation starts to come into play?
ONIKORO: In pharma, we tread carefully around putting patient data and identifiable data into our models. As things get more competitive, you may want an edge in identifying the particular patient who has a rare disease, but how exactly is that done? It may take a while for regulators to catch up, but as an industry, we need to make sure we’re policing what we’re doing in terms of privacy and the use of data. As a company, we think about this because, on the back end of our platforms, we need to manage our risk and compliance regardless of what the current regulations are.
CAPAN: I feel like a dinosaur [with] 30+ years in pharma on both the client side and agency side. I’ve been through arguments about the font size and type on a website, and [I’ve] been sued by those claiming, “This is our font.” We want to protect our customers, and we don’t want any of them ever being sued.
Right now, the top two concerns are privacy and data security. We have clients [who] say, “Do not use my data in any shape or form in the ChatGPT environment,” which they have a right to say. And we do have to police ourselves. Never use ChatGPT-created content or images as the final product; only use them for ideation.
The No. 1 process governing content, customization, and personalization from a marketing standpoint is medical and regulatory review. That pipeline is hard to solve, but we are working on a project to make our regulatory approval associates more efficient by creating private references and client environments. Now, we have the right references and checks in place, and we’re making sure the copy and claims are correct.
What’s most important in a regulatory environment is protecting patient, HCP, and client data; so, I highly suggest being careful. As an innovator, you may be the one [who] makes the first mistake, and everyone else picks on it, resulting in a chilling effect. I have seen this on social media and mobile websites in the past. We must be careful, minimize mistakes, minimize risk, and then we can adopt this technology in a better, faster way.
JENNER: Wonderfully said. We’re similar at Abelson Taylor, where we look at the ways we can use generative AI, ChatGPT, and machine learning to expand capabilities and increase our efficiency from a client, HCP, and patient perspective. But we’re also focused on making sure we stay at the forefront of technology.
We always think about how we can make sure we’re bringing the right tools to our clients at the right time very quickly in order to stay ahead.
Jenner: Where will AI make a big change in pharma marketing within the next five to 10 years?
HEADD: In two ways. Applied change, transformative change, and changing things at scale have longer timelines than generally assumed. When you think about big companies adopting internet access or using cloud platforms instead of in-house servers—that took years. The hype cycle will reach a saturation point, and then we’ll get to actual tactical execution. Five years from now, on the marketing side, we should see increased speed and agility. There will be faster paths to producing new content and new campaigns that are highly relevant at the individual level and that can be turned around very quickly as market conditions change, external events occur, or exciting things happen within a company’s own pipeline.
To Stephen’s comment earlier about finding patients with rare diseases—if you look at where the industry’s headed, there’s a lot of focus on personalized medicine and on rare indications. I think that does marry well with this technology when you think of how we can find those patients in order to initially study the disease in the clinic; and once we get a treatment approved, help find other patients earlier in their disease progression so we can help them on their health journey. That’s where we may see a lot of generative methods in production systems so that we can help those patients at those times.
BANERJEE: When I was 19 years old, my grandfather died of lung cancer. I remember Googling tobacco (as he was a chain smoker) and looking at research papers online as a sophomore in university. The information access and availability to the caregiver or patient were inadequate and not easy to navigate.
Conversely, my 13-year-old daughter had her tonsils taken out last summer, and I started explaining to her what the procedure is like, what’s going to happen, and how she was going to get a little ice cream. She said, “Dad, I got this. I saw two TikTok videos. I know what’s going to happen.” Information availability to a patient is much better today.
I’d love for an environment to exist in which personalized information about my health and my conditions is available to me, and in which (as an empowered patient) I can find that information in an easy-to-access, personalized, generative AI format.
ONIKORO: From our perspective of helping pharmaceutical marketers in HCP marketing, we see AI becoming more a part of the transactional aspects of what pharmaceutical marketers do. For example, segmentation. We’re not just looking at huge segments of physicians; we’re looking at micro-segments and tailoring messages specifically to them. And that’s where modular content and truly streamlining the ML processes come into play. I look at it more from a transactional level. It might not be a big change, but it gets us there faster and in a more personalized way.
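A minimal sketch of this micro-segmentation idea, assuming a small table of per-physician engagement features and using k-means clustering from scikit-learn; the feature names, cluster count, and content mapping are illustrative, not PharmaforceIQ’s method.

```python
# Hypothetical micro-segmentation sketch: cluster physicians on engagement
# features, then attach a modular content variant to each micro-segment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows = physicians; columns = illustrative features
# (email open rate, rep-visit acceptance rate, webinar attendance count).
features = np.array([
    [0.62, 0.10, 3],
    [0.05, 0.80, 0],
    [0.58, 0.15, 4],
    [0.09, 0.75, 1],
    [0.33, 0.40, 2],
])

scaled = StandardScaler().fit_transform(features)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Map each micro-segment to a modular content variant (illustrative).
content_by_segment = {
    0: "efficacy-led email module",
    1: "rep-delivered dosing module",
    2: "webinar invitation module",
}
for physician_idx, seg in enumerate(segments):
    print(f"physician {physician_idx}: segment {seg} -> {content_by_segment[seg]}")
```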
CAPAN: This trend is much more accelerated than the internet, mobile, or social acceleration we’ve seen in the past. In five to 10 years, the way we live and work is going to be much different than it is today. I’ll give you an example. Are we still going to need websites, or will we have personalized AI, personalized education, or [personalized] medicine? Will this expertise be available to us [on an] individual level? It’s going to be more about influencing the AI models than building websites or content. There is talk of tools coming very soon that will offer a personalized AI educator, trainer, doctor, [or] legal expertise—everything will be available to us in a much different way.
And we will likely have to be more specialized [as individuals], and jobs will likely be more specialized to be able to compete with AI. It’s exciting, but it’s kind of scary when you think about the things that can really go wrong in five to 10 years. Still, I like to stay optimistic. If we put our hearts and minds in the right place, we can overcome these challenges together. It’s essential for individuals with good intentions to join this effort, as there’s a likelihood that those with less altruistic motives are already working against our progress.
JENNER: I take a slightly more science-fiction approach to it. Within five or 10 years, because technology is moving so quickly, we’re going to get to a place where wearables—like augmented reality [and] virtual reality—become more common. We’re going to see a huge uptick in wearable technology, which is going to help people understand their health in a new way. I think it will provide a new perspective for healthcare providers and people who are in the industry to be able to understand population health on a more global and robust scale. Everyone is attached to their phones. So, if there is a way to implement augmented reality or virtual reality to meet patients exactly where they are and give them information about critical components of their healthcare—to manage symptoms [and have] early detection—it would be helpful.
And I think we’re going to see some healthcare providers really embracing this technology a lot more in research areas and how HCPs can communicate with their patients in a more free and open way. We already have telemedicine. But in the future, will we start seeing things where your healthcare provider is meeting you in a world [of augmented reality] that you maybe hadn’t thought of before? Much like how gaming has merged into an all-world technology.