Session Notes: Governance by Design: Securing and Governing AI in Pharma PPM
Executive Summary
Marc Leijten presented a comprehensive framework for governing AI in pharmaceutical portfolio management, emphasizing that while AI adoption is accelerating across industries, most companies lack proper governance structures. He outlined a staged approach: first establish foundational governance (data quality, security, explainability), then progress through three value stages: productivity gains, decision optimization, and eventually business transformation.
Full Notes
Current AI Adoption Landscape
Leijten opened with sobering statistics from Deloitte research showing that while 60% of large-enterprise employees now have access to sanctioned AI tools, the business impact remains limited. Most usage (37%) focuses on basic productivity tasks like email writing and text analysis rather than transformational applications. Only 21% of companies using agentic AI have governance frameworks in place, creating significant risk exposure. In strategic portfolio management, 68% of organizations don't use AI in their SPM solutions, a figure that holds across industries including pharma, with companies remaining cautious due to regulatory requirements and the high cost of wrong decisions. The presenter emphasized this isn't a temporary trend but a permanent shift, describing AI evolution as 'climate change, not a storm.'
The Trust and Governance Challenge
A critical barrier emerged around trust and governance. While AI agents increasingly make important decisions autonomously, most organizations lack frameworks to validate those decisions. Leijten highlighted particular challenges in pharmaceutical companies, where regulatory scrutiny is intense and mistakes carry enormous costs. Only 8% of companies would refuse to trust AI-generated SPM recommendations even with human review; the majority accept them with human validation, and only a small share would trust AI output unreviewed. Key concerns include data quality ('garbage in, garbage out'), security risks from shadow AI usage, and the black-box nature of AI decision-making that auditors and financial teams cannot easily explain.
Foundation Requirements for Governed AI
Leijten presented three foundational requirements for trustworthy AI implementation. First, data quality and governance must be established through consistency checks, business rules validation, and controlled data feeds from external systems. Second, security models must propagate role-based access controls into AI systems, ensuring users cannot leverage AI to access confidential information beyond their authority. Third, explainable AI capabilities are essential, providing timestamped audit trails, transparent prompts, and step-by-step analysis of AI recommendations. The VAIA platform demonstrates these principles through controlled grounded sources (RAG), configurable prompts, and security model propagation.
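The first requirement, per-object business rules ("when A is filled then B should be filled"), can be sketched as a small validation pass. This is a minimal illustration under assumed names (`Rule`, `validate`, the trial fields), not the VAIA implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """One business rule for a given object type (project, trial, molecule...)."""
    description: str
    check: Callable[[dict], bool]  # returns True when the record is consistent

def validate(record: dict, rules: list) -> list:
    """Return the description of every rule the record violates."""
    return [r.description for r in rules if not r.check(record)]

# Dependency-style rules of the "when A is filled then B should be filled" kind:
trial_rules = [
    Rule("end_date requires start_date",
         lambda r: not r.get("end_date") or bool(r.get("start_date"))),
    Rule("phase III trials must name a molecule",
         lambda r: r.get("phase") != "III" or bool(r.get("molecule"))),
]

# This record has an end date but no start date, and no molecule for a phase III trial:
issues = validate({"phase": "III", "end_date": "2026-04-01"}, trial_rules)
print(issues)
# → ['end_date requires start_date', 'phase III trials must name a molecule']
```

The same pattern extends per object type: each type gets its own rule list, and a record only enters the AI's grounded data once its rule list comes back empty.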
Three-Stage Value Delivery Framework
The implementation approach follows three progressive stages. Stage one focuses on 'liberating the human' through productivity improvements like automated summaries and project health checks. Stage two 'optimizes the leader' by providing AI-driven analysis for better decision-making, including demand intake recommendations and portfolio analysis combining internal and external data. Stage three envisions 'transforming the business' through fully agentic AI systems that could revolutionize SPM processes, though this requires significant advances in multi-agent coordination and human-in-the-loop versus human-in-the-lead governance models.
Addressing Implementation Challenges
During Q&A, critical challenges emerged around transparency, hallucinations, and bias. Leijten acknowledged that true transparency requires clear, succinct logic flows rather than overwhelming audit trails, comparing AI complexity to Monte Carlo simulations. Hallucinations remain a persistent problem that hasn't improved with newer models, making grounded data quality crucial. Bias presents an ongoing challenge, as source data may already contain biases that AI systems can amplify. The discussion highlighted that while these are significant issues, they can be mitigated through careful governance frameworks and controlled data sources.
Action Items
- Interested attendees: register for the March 29th AI and SPM online seminar
Key Insights (16) — Marc Leijten
- AI adoption accelerating but governance lagging
- Pharma portfolio management hesitant on AI
- Three-stage AI value delivery framework
- Shadow AI creating governance gaps
- Trust barrier blocking AI value realization
- Data quality foundation critical for AI success
- Role-based security models essential
- Explainable AI non-negotiable for pharma
- Register for AI and SPM seminar
- AI as climate change, not a storm
- Enterprise AI expectations vs. reality
- Hallucination accountability challenge
- Deloitte AI research study
- VAIA AI platform architecture
- RAG and grounded source control
- Human-in-the-loop vs. human-in-the-lead
Full Transcript
Apr 22, 2026 Governance by Design: Securing and Governing AI in Pharma PPM - Transcript

00:00:00 Vera Örså: ...works. Yes, it works. Welcome everybody. Basel again; I was here last year, I saw some familiar faces also. Last year I did a presentation about the different types of AI that we see in our world, and things have evolved quite quickly. The topic I want to talk about today is probably something you're facing in your company. AI is really nice, but how can we govern it, how do we make sure of that? That's basically what I want to talk about. Now, this is a nice purple slide, this is a nice white one. It's not a mistake; I couldn't make up my mind which one to put in my disclaimer, because some of the functionality mentioned on the slides is in a future release. We release quarterly, and we project about three quarters ahead, so it won't take ages, but I put that in. Okay, the reality check: the state of AI in the market. Deloitte did a really big piece of research, about 3,600 C-level people, all having to do with AI, all kinds of industries, also in healthcare and life sciences, asking: where are we with AI? It is scaling, in the sense that now about 60% of the employees in large enterprises have access to a sanctioned AI tool, something they are allowed to

00:01:38 Vera Örså: use. Not everything you can use; it comes with security risk if you use kind of funky AI tools in your own organization, with its data. Now, for most this is giving productivity benefits. It doesn't really give them large enterprise business benefits. I don't know all the percentages by heart, so I've got to look: 37% use it for productivity purposes. It's probably what we all do. We use it for making a summary, for writing an email, analyzing text, improving text. Very nice, it helps our productivity, but it doesn't really scale business results. And there's about 30% which are using AI to improve their key processes.
An example of that: BMW has deployed AI in their production plant in Munich to detect when a certain piece of equipment will fail. They had maintenance groups and they had scheduled runs. The thing is, the equipment doesn't break on a schedule. So they had AI figure it out. Now they can predict when something fails, which saved them a lot of cost and saves them, I think, about 20% of time there. Which is a key process, right? They're doing things.

00:02:56 Vera Örså: And the last one is the companies that are using AI to transform their business. An example of that is Zipline, which you may have heard of. It's the company that uses drones, specifically in areas where it's very hard to reach hospitals and patients, to deliver blood and medicines. Now, it's not the drone that is the AI; that's the beneficiary of the AI. It's their AI-driven logistics system that does it all. None of their traditional competitors can compete with the AI that tells the drone what's the best way to get to your target the fastest. They have hospitals now paying subscription fees for having their medicines delivered by the drone exactly when they need them. They transformed the business model. There are a few more you could mention, but not that many. Not that many. This graph kind of says: okay, so what are we getting from it today, and what are we hoping for? That's an interesting one. What are we hoping for? Well, enhanced decision making and data-driven insights.

00:04:04 Vera Örså: That's a good statistic, right? 53% of people say we're getting that. That's because we start to use AI in combination with natural language processing. Instead of an executive having to click on six different dashboards and ask for four different reports, he just types natural language: show me the sales results from last month. Yes.
The other one though, increase revenue: well, 20% are doing that, and 74% are hoping that they can do it one day. That's not the best statistic. This AI thing, by the way, is a strange phenomenon. I was at a session two days ago where they had a picture. We all know OpenAI, who created that company. They lost 83 million in 2025. No, not in all of 2025; every day of 2025 they lost 83 million. And still they want to go for an IPO, and not just an IPO, because up till now Saudi Aramco was the biggest IPO ever; they collected 20 billion going public. OpenAI wants to go public; their goal is 60 billion. I don't think any of us could run a company losing 83 million a day and come with that intention.

00:05:27 Vera Örså: So it's big. It's happening all around us, and in every company you have to act now, because it's around the corner and it's going really fast. It's not a storm. It's a climate change. It's happening. What's really big at the moment is that agentic stage of AI. Most companies are looking into this. Where generative AI gives you all kinds of benefits, agentic AI is something that does something, right? It takes actions. It can be one agent. You ask an agent: hey, we got a new product, can you write an email to introduce this new product? That could be generative AI. Got a clever agent? You can say: we got a new product, can you launch the marketing campaign, can you launch the product entry? And it does everything, together with other agents, that you need to put that product in the market. It's moving fast. This is a different model, and agents are used already in particular parts of the industry.

00:06:31 Vera Örså: It shouldn't surprise you that pharmaceutical is kind of needing back there certain evidence.
I must say most of them are used in IT departments, that use them for data analysis and for getting that knowledge matching. But the yellow bar, indicating pharmaceutical and life science: you are kind of ahead of the rest, but it's in data analysis, it's not necessarily in portfolio management. Just out of curiosity, I know the question was asked before: who of you are using AI already in portfolio management? Early adopters. Next year, if I ask again, probably one more. Interesting though: these agents are making important decisions, but only 21% has a governance framework for the agents. That means they're having the agent doing things, but they don't really define how to check if it's doing the right thing. There are very ugly examples of doing this type of stuff with AI. Think about the war that's going on now. They use AI to pick certain targets, and afterwards it turned out to be wrong. There was no governance on their AI. The same is happening in companies.

00:07:57 Vera Örså: We see examples of that. So that's the state of AI: moving fast, everywhere. What's the state of SPM, strategic portfolio management? Something we're a little closer to. Well, you know, what can I say? That little graph on the left: 95% of people agree that this is really important stuff for my business, we need to make the right decisions. Currently, 62% of all large enterprises are doing it with independent planning and not a whole lot of aggregation of information, right? So they're separated. 19%, which is the top left, say: we have an automatic roll-up and we align to our business objectives. There are a few parts in between that say: yeah, we do some aggregation and we do exchange information, but they are not aligned to strategic objectives. So what you get is a team culture. Teams are deciding themselves: what are we going to do? Which is the best for maybe our business unit, or maybe even a team within the business unit, but it doesn't aggregate to the strategic objectives.
00:09:10 Vera Örså: So maybe you're wasting resources and funding on the wrong thing when the exec asks: how did you do against your strategic objectives, right? So, experienced results of not doing this well: 45% misaligned project work and strategic objectives; 44% inability to track portfolio ROI, does it get us some return; 44% impaired strategic decisions, slow or wrong. This is the result of not connecting them. And then we got the next one: what is the state of AI in SPM, using it or not? Well, 68% are not using AI within SPM solutions. But you're not the only ones who are not doing this. This is, by the way, across industries, including pharmaceutical. 60% are somewhere in planning and selection phases. 22%... And then 10% are using it. Also a bit of a weird statement, if you see how fast AI evolves and what it can do, that these people say no, no, no. Would your company trust AI-generated SPM recommendations? 8% says: no way, not even with a human review, we don't trust it.

00:10:54 Vera Örså: The biggest amount says: well, you know, if a human looks at it, we're okay. And then there's a small percentage: whatever it tells us, we believe it. All right, so why is that? Why do we see AI everywhere, but not the trust? It's a lack of trust, fear of making the wrong decisions. This applies to pharma probably more than in many other industries: regulation is strict, you don't want to make a wrong decision with regards to regulation. Second, it's very expensive to make a wrong decision. Then the lack of trust in data quality, the GIGO syndrome: garbage in is garbage out. I think that's another very big one. We hear it a lot from our customers: you provide us with AI, but is your data good enough? Security concerns, AI and the security of information. High cost of deployment and usage: AI does not exactly come for free. Even if your company has a big agreement with Google or Microsoft or OpenAI, it comes at a price.
00:12:31 Vera Örså: You probably heard the word tokens; every token you use comes at a price. And there's a reason why AI keeps on telling you: oh, I got this answer, you want me to create it in this way, or you want me to formulate it here, do you want me to create an Excel? It's all using tokens. So it's getting more and more expensive. It is hard to control, because AI is everywhere. There's a lot of shadow AI. We're using it, and not everybody knows what it's for. The second one: lack of a clear AI strategy. A lot of companies are doing pilots and things, but don't know exactly what they want out of AI. Well, you need to know a little more than typing a prompt: can you tell me a nice weekend destination? It's almost a science how to prompt it. It's a science how to control it, a science how to govern it. If you don't have people to do that, you probably don't want to do it.

00:13:33 Vera Örså: So that's the state of the situation. What do our customers, the enterprises, want? I don't want to read all of them, but what they don't want at the enterprise level is: wow, this is cool, look at the picture it created for me, really good. No, that's not what they're after. Risk reduction, investment optimization, revenue optimization, risk reduction and cost control. That's what they want. How can we help you with that? So in our solution we have something called VAIA, which is the Value Artificial Intelligence Agent. It's not exactly an agent, but the name came up when it all started and we thought this is a good one. So remember VAIA. It's a platform which can provide you with governed AI-driven value delivery. So the things that the companies are asking for, we're trying to facilitate that in our product. So it's a platform. We have a number of products running on top of it. The ones you're probably most interested in are Clarity, which is an SPM solution.
00:14:51 Vera Örså: ConnectALL is something that connects all kinds of components in your organization to this SPM solution. It obviously also interfaces with other systems in your company, right? As mentioned at the bottom, it can be others; it's just an example. And the road to that value delivery you can see in the schedule there. There's a foundation, and then we think there are three steps, and I'll talk you through these in the slides. So, the foundation: what is it that you've got to put in place to make sure that the AI used on top of it is trusted, is valuable? First thing: data quality, data governance. What's the data we're using, and how good is that data? You guys, I think, are kind of fortunate, because I think in pharma your data quality is pretty good. The problem is it may be all over the place. So how do you get it, how do you use it? The second one: security, secure AI. Who's using it, who can see what, and what do we get out of it, and what do we do with what we get out of it?

00:16:03 Vera Örså: Think about it. And the third one is something called explainable AI. AI is a black box. You put a lot of stuff in and it comes out with an output or a recommendation. But how did it get to that conclusion? The auditor will need an explanation. Your financial people will need some kind of explanation. It won't do to say: yeah, well, AI told me. We need an explanation, right? So this is the foundation of reliable AI. So how do we do it in our product suite?
First, there's consistency. You can do some built-in data quality checks, but you have many different objects in your system: projects, programs, molecules, clinical trials. All those objects come with a certain amount of information, and it needs to be consistent. So the first thing: we can build in business rules, different for every type of object, that check this information. Is it consistent? When A is filled, then B should be filled in; when this value is here, this value cannot be there. So that's a consistency check, the first one. Then we can validate data coming from external systems by using AI to do some kind of cleaning. You know, AI is very good at factual checks. So if we get data from another system, we have something in the middle which we can instruct with AI-based instructions, like: can you check all of this, and if not, we want you to do something. We can also do that with our

00:17:44 Vera Örså: internal system, which we call governance. So you've got data in our system; we push it out, we have AI doing that kind of check, then it pushes it back. So we make sure there's governance on that data. And then finally, there's full control of what we call the grounded source, also known as RAG in AI. So AI not only runs on historical data, as was mentioned before. AI doesn't have the concept of time, but it runs on historical data. It also runs on external data. If you want it to run on your data, which is very valuable because you've got your IP, your knowledge, you feed that into the AI system, but you control what you feed and who feeds it. We have very strong control on the grounded source: who can feed what to our AI to come to a conclusion. So these are the things we do there. When you think about security in an SPM solution, I think in many, but in ours for sure, it's a role-based system. So depending on the role you have in the company, you can execute certain functionality and you have access to a certain piece of your information.
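The "who can feed what" control over the grounded source described above can be sketched as a gate in front of the RAG store: only registered feeders may add documents, and every document records its origin. All names here (`GroundedStore`, `feed`, `retrieve`) are illustrative assumptions, not the product's API, and the keyword match stands in for real vector retrieval:

```python
class GroundedStore:
    """A RAG document store that only accepts content from approved feeders."""

    def __init__(self, allowed_feeders):
        self.allowed_feeders = set(allowed_feeders)
        self.documents = []

    def feed(self, feeder, doc_type, text):
        # Governance gate: reject anyone not on the approved-feeder list.
        if feeder not in self.allowed_feeders:
            raise PermissionError(f"{feeder} may not feed the grounded source")
        self.documents.append({"feeder": feeder, "type": doc_type, "text": text})

    def retrieve(self, query):
        # Trivial keyword match standing in for vector similarity search.
        return [d for d in self.documents if query.lower() in d["text"].lower()]

store = GroundedStore(allowed_feeders={"pmo_admin"})
store.feed("pmo_admin", "project_charter", "Molecule X Phase II charter")
print(store.retrieve("molecule x"))  # the charter, with its feeder recorded
```

Because every stored document carries its feeder and type, an audit can later answer "who fed what to the AI" for any recommendation grounded on this store.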
00:19:05 Vera Örså: Normally this is not propagated into AI. In our system, we propagate it. So if your role is working in a certain department and I'm a project manager, then I do project management functions and I can only see the data of my department. If you use AI in the system, that's the access you get. So it will keep you within your authority. You can't use AI to be clever and get some more proprietary, confidential information. We propagate the security model onto that grounded source. And there is something that controls the source, like: okay, what is the type of data we can feed, and what's the size of the data feed? It's a set system. It's a little bit overlapping with the data quality. We support proprietary LLMs, and we support all the three big, or four, public ones: Gemini, OpenAI, uh, Copilot. You can run those fine, you can talk to them, you can work with them. If you have a proprietary one which is based on one of those, it may have a different name in your company, it might be named Cop-something. If they use the out-of-the-box APIs for those LLMs,

00:20:23 Vera Örså: we can communicate just as well with those, but in a secure way. So we're not learning from your model. We're only enabling the SPM solution to talk to your information, which is your LLM; we don't learn from it. That's always the fear, right? If you feed data into a system that uses AI, it can use it, it can learn from it, which you don't want with your IP, your property. So we don't learn from it. We support them, but we don't learn from them. And we can also determine the use of AI within the system. Can you use a certain AI agent to analyze a full portfolio? Can you maybe use AI to only act on a certain project in that portfolio? Yeah, we can do that. Or do we only allow this AI for a certain attribute, a description field in this particular project? So we can very specifically define AI in the system; that's secure AI.
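Propagating the role-based security model into AI, as described above, amounts to filtering the grounded documents by the caller's authority before anything reaches the LLM. A minimal sketch, assuming a hypothetical department-scoped model (the real system's roles and entitlements are richer than this):

```python
# Each grounded document carries the department that owns it.
DOCS = [
    {"dept": "oncology", "text": "Oncology portfolio risk summary"},
    {"dept": "cardio",   "text": "Cardio trial budget overrun"},
]

def retrieve_for(user_dept, query):
    """Return only matches from documents the user's role is entitled to see.

    The entitlement filter runs BEFORE retrieval, so out-of-scope data can
    never appear in the AI's context, no matter how the prompt is phrased.
    """
    visible = [d for d in DOCS if d["dept"] == user_dept]
    return [d["text"] for d in visible if query.lower() in d["text"].lower()]

# A project manager scoped to oncology cannot surface cardio data via AI:
print(retrieve_for("oncology", "budget"))  # → []
print(retrieve_for("cardio", "budget"))    # → ['Cardio trial budget overrun']
```

The design point is where the filter sits: applying entitlements before retrieval (rather than asking the model to withhold information) means a cleverly worded prompt cannot leak confidential data, which is the "keep you within your authority" guarantee from the talk.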
When you talk about explainable AI: we have a lot of generative stuff in the system, right?

00:21:37 Vera Örså: So on a certain field you can open the AI box and say: create some description. For all the generative AI we define what the context is, which is really nice for productivity and efficiency. So it's in there. But besides having them out of the box, where you click it and it does something, we give you access to the prompt we use. So we show you exactly what our instruction prompt to your system is to get that information into our application, so you know what it does. And we allow you to change it, if you want it to be slightly different in your organization. We allow you to configure follow-up prompts. Oh, give me this: analyze this road map for me. All right, it does that. The next prompt can be: can you compare this to the competition's road map? I mean, it won't be perfect, but think about what follow-up prompts can do. We have a timestamped audit trail of everything that changes in the system. Important for the regulator, important for AI, because AI has no concept of time.

00:22:49 Vera Örså: It cannot think in time. But if you give it data with a timestamp, it suddenly becomes very clever in analyzing what happened in my development cycle, at what moment, and what changed by which person. Those are things we have in the system. It gives you the steps it took to come to a conclusion, to come to a recommendation. It's all part of explainable AI, which is very important for getting that trust from people. So these were the three basic steps, right? Data quality and governance, secure AI, explainable AI. Now, how's that all going to help you to be successful with AI in your organization? It's three steps. First step: liberate the human. This is the productivity and efficiency step. Am I running late? Almost. So how do we do it? These pop-up boxes you see are examples of the agent output.
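The timestamped audit trail described above can be sketched as a log where every prompt and intermediate step is recorded with an actor and a UTC timestamp, so an auditor can replay how a recommendation was produced. Field names here (`actor`, `step`, `detail`) are illustrative assumptions, not the product's schema:

```python
import datetime

class AuditTrail:
    """Timestamped record of every prompt and step behind an AI recommendation."""

    def __init__(self):
        self.entries = []

    def log(self, actor, step, detail):
        self.entries.append({
            # ISO-8601 UTC timestamp: gives the AI (and the auditor) a time axis.
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "actor": actor,
            "step": step,
            "detail": detail,
        })

trail = AuditTrail()
trail.log("user:pm01", "prompt", "Analyze the health of project P-42")
trail.log("agent", "retrieval", "3 grounded documents matched")
trail.log("agent", "recommendation", "Flag schedule risk: milestone slip")

# Replay the chain of steps that led to the recommendation:
for e in trail.entries:
    print(e["ts"], e["actor"], e["step"])
```

This also illustrates the point about AI and time: the model has no native notion of "when", but once each change carries a timestamp, a question like "what changed by which person at what moment" becomes a simple filter over the trail.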
It can summarize against a lot of criteria, analyze everything on my road map.

00:24:40 Vera Örså: Is it good, bad? Does it align to our strategy? Yes, no? I think so. The other one there is a project health check. Can you check the health of this project? What's the cost, what's the risk, what's the planning, how are we doing? Make it easy for the human to work; get them more productive and more efficient. Second one: optimize the leader. Now that people can work faster and more efficiently, and the data is correct, what do the leaders need to make decisions? So we have the agent running additional analysis, like a demand intake recommendation: this one you should do, because it's very well aligned to your strategy, it has a very good ROI, and the competition is not doing it yet, right? The AI can use internal data and external data. So, recommendations like that, which are enabling better decisions. And then this one is step three, which is transform the business. Now, this is a bit of future thinking.

00:25:40 Vera Örså: This is very much thinking about transforming the business, as we discussed in the beginning, you know ... [transcript truncated]