Session Notes: Optimize Your Scenarios with Project Data Simulation
Executive Summary
PlanisWare presented a comprehensive demonstration of AI-powered portfolio optimization, emphasizing that successful AI implementation requires solid data foundations first. The session showcased advanced project management capabilities through their Oscar AI bot, while highlighting that 67% of companies don't trust their data—the primary reason AI initiatives fail. Live demonstrations included natural language project creation, intelligent duration adjustments, and portfolio-level optimization features.
Full Notes
The Data Quality Foundation Challenge
Leaud Le Bacq opened with a stark reality check about portfolio optimization approaches. When asked to optimize scenarios, most professionals either resort to Excel spreadsheets with copy-paste workflows that become incomprehensible the next day, or turn to LLM systems, spending 20 minutes on prompts without trusting the results. This highlighted a fundamental problem: according to survey results cited in the session, 67% of companies don't fully trust their data, and this lack of trust is a primary reason AI initiatives fail today. Le Bacq emphasized that before implementing AI, organizations need three foundational pillars: clear organizational responsibility (who owns what), defined processes (what to do, when, and how), and technical capabilities that ensure data quality control. PlanisWare positions itself as the foundation for this data quality framework, serving as a single source of truth for R&D project management.
Tool Consolidation and Industry Impact
The session highlighted impressive consolidation results from PlanisWare implementations. BMS built a planning center of excellence that achieved a 98% increase in data accuracy, while TEDA replaced 17 different tools with PlanisWare's single platform. These examples demonstrate the significant productivity gains possible when organizations move from scattered tool ecosystems to unified platforms. PlanisWare serves 90% of the top 20 biopharma companies and has 30 years of industry experience; Sanofi, its first customer in 1996, still uses the platform today with over 10,000 users. This foundation of data quality and tool consolidation creates the necessary basis for effective AI implementation.
AI-Powered Project Management Demonstration
Yann Zabbal demonstrated PlanisWare's Oscar AI bot through live project management scenarios. The system enables natural language project creation, where users can describe project requirements and automatically generate structured projects with appropriate templates. The AI incorporates semantic search capabilities for auto-tagging and similar object identification, helping project managers find comparable historical projects to inform current planning. A particularly powerful feature showed AI-assisted duration adjustment, where the system analyzes historical project data to suggest optimal timelines. The demonstration included RAG (Retrieval Augmented Generation) implementation that indexes all project documentation, enabling intelligent search across health authority documents and project files. Users can query in any language, and the system provides summaries and filtered document previews relevant to their current project context.
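The notes describe the RAG pattern only at a high level. As a minimal sketch of the idea, the toy example below indexes a few documents as bag-of-words vectors and retrieves the closest matches for a query; in a production system, the retrieved passages would then be handed to an LLM as grounding context, and an embedding model would replace the word counts. All filenames and document contents here are invented for illustration.

```python
import math
from collections import Counter

# Illustrative corpus standing in for indexed project documentation.
DOCS = {
    "enrollment-sop.txt": "standard operating procedure for patient enrollment in phase two trials",
    "ha-submission.txt": "health authority submission checklist and required regulatory documents",
    "obesity-protocol.txt": "clinical protocol for the obesity study including enrollment targets",
}

def vectorize(text):
    """Bag-of-words term counts (an embedding model in a real system)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Offline indexing step: vectorize every document once.
INDEX = {name: vectorize(text) for name, text in DOCS.items()}

def retrieve(query, k=2):
    """Rank indexed documents against the query; the top-k passages
    become the grounding context for the generated answer."""
    q = vectorize(query)
    ranked = sorted(INDEX, key=lambda n: cosine(q, INDEX[n]), reverse=True)
    return ranked[:k]

print(retrieve("what are the enrollment requirements for phase two"))
```

Because ranking works on vector similarity rather than exact keywords, the same mechanism extends to multilingual queries once a multilingual embedding model is used, which matches the demo's any-language behavior.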
Advanced Portfolio Analytics and Optimization
At the portfolio level, the system provides sophisticated scenario modeling capabilities. The AI can analyze dashboards, provide strategic context around objectives and key results, and generate automated chart views from database queries. A standout feature is particle swarm optimization for resource allocation, where users can set constraints (such as protecting critical projects) and let the AI search for optimal resource distributions. Zabbal noted that 50% of his consulting time now focuses on AI feature deployment, including advanced use cases like lessons-learned indexation. The system can analyze historical lessons learned across projects, suggest relevant risks for new activities, and automatically propose countermeasures based on successful mitigation strategies from similar past projects.
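The session did not show how the optimizer is implemented; as a rough illustration of the technique named in the demo, here is a minimal particle swarm optimizer for a toy resource-allocation problem with one protected project. All project names, values, the utility function, and the constraint-handling scheme are invented assumptions, not PlanisWare's actual model.

```python
import random

PROJECTS = ["alpha", "beta", "gamma", "delta"]
VALUE = [5.0, 3.0, 4.0, 2.0]   # strategic value per unit of resource
CRITICAL = {0}                  # "alpha" is a protected critical project
MIN_CRITICAL = 0.3              # protected projects keep >= 30% of the budget
BUDGET = 1.0

def clamp(alloc):
    """Project a candidate onto the feasible set: positive shares,
    protected minimum for critical projects, total equal to the budget."""
    alloc = [max(1e-9, a) for a in alloc]
    for i in CRITICAL:
        alloc[i] = min(max(alloc[i], MIN_CRITICAL), 0.9 * BUDGET)
    fixed = sum(alloc[i] for i in CRITICAL)
    free = [i for i in range(len(alloc)) if i not in CRITICAL]
    rest = sum(alloc[i] for i in free)
    scale = (BUDGET - fixed) / rest
    return [alloc[i] if i in CRITICAL else alloc[i] * scale
            for i in range(len(alloc))]

def score(alloc):
    # Diminishing returns: square-root utility per project.
    return sum(v * a ** 0.5 for v, a in zip(VALUE, alloc))

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=42):
    rng = random.Random(seed)
    dim = len(PROJECTS)
    pos = [clamp([rng.random() for _ in range(dim)]) for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best = [p[:] for p in pos]            # per-particle best position
    gbest = max(best, key=score)[:]       # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (best[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i] = clamp(pos[i])
            if score(pos[i]) > score(best[i]):
                best[i] = pos[i][:]
                if score(best[i]) > score(gbest):
                    gbest = best[i][:]
    return gbest

allocation = pso()
print(dict(zip(PROJECTS, (round(a, 2) for a in allocation))))
```

The key point the demo made is visible in `clamp`: constraints like "protect critical projects" are enforced on every candidate, so the swarm only explores feasible allocations.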
Integration Flexibility and Implementation Realities
When questioned about customization and integration capabilities, Zabbal emphasized that 90% of implementations require tool interfaces, reflecting the reality that organizations operate in complex software ecosystems. PlanisWare provides significant flexibility for portfolio analytics and project structure customization, though they recommend sandbox/scenario modes for portfolio-level changes to prevent unintended impacts on live projects. The challenge isn't technical capability but governance—determining data ownership and master systems when multiple tools interact. The session concluded with discussion of interdependency analysis at portfolio level, which the system supports through comprehensive project linkage tracking and resource allocation visibility, though governance questions around multi-level planning ownership remain key considerations for implementation success.
Action Items
- Leaud Le Bacq and Yann Zabbal: available at the booth for detailed technical discussions and specific use-case exploration
Key Insights (13)
- AI optimization requires solid data foundation
- PlanisWare demonstrates comprehensive AI portfolio features (Leaud Le Bacq)
- Integration complexity remains key challenge (Leaud Le Bacq)
- Data quality as AI prerequisite (Leaud Le Bacq)
- PlanisWare tool consolidation impact (Leaud Le Bacq)
- AI-assisted project management capabilities (Leaud Le Bacq)
- Advanced predictive analytics deployment (Leaud Le Bacq)
- Available for detailed discussions (Leaud Le Bacq)
- Excel versus LLM dilemma (Leaud Le Bacq)
- AI deployment reality (Leaud Le Bacq)
- PlanisWare three-pillar data quality framework (Yann Zabbal)
- RAG implementation for documentation search (Leaud Le Bacq)
- Particle swarm optimization for portfolio scenarios (Leaud Le Bacq)
Full Transcript
[0:00] **Participant 2**: I think we have a full room here, in a good sense, of course. Last time we talked about risk management and about AI. This time we thought we would cover some use cases on how to optimize your portfolio scenarios based on project data simulation, so that is what we will try to demonstrate today: concrete use cases. [0:38] **Participant 2**: While preparing this presentation, I thought: if someone came to me and said, "Hey, can you optimize your portfolio scenario based on your project data simulation?", what would I do? I think there are two types of reactions. The first is to open an Excel spreadsheet, widescreen, and copy-paste data into it, do calculations, and so on: spend quite a bit of time, copy-paste the results, and then present them. [1:14] **Participant 2**: And if you are like me, you did not take the time to put in any titles, which means that the next day you do not even know what you did the day before. So, not optimal. The second reaction would be to open your favorite LLM system and say: OK, can you help me do an optimization? You start with one prompt, then a second, a third, a fourth, a fifth. You spend maybe 20 minutes and ask yourself: can I trust this data? Probably not. [1:50] **Participant 2**: So you have just lost a bit of time. Basically, that is the starting point of our presentation.
[2:00] **Participant 2**: So before talking about AI, I just wanted to talk about data quality. We will see if the screens come back; I am somewhat short of material without them, but when they return you will see that I wanted to share a few survey results saying that 67 percent of companies do not fully trust their data, which is a huge amount. So how can you take good decisions if you cannot trust [2:49] **Participant 2**: the data? And secondly, one of the main reasons AI initiatives are failing today is, again, lack of trust in the data. Everyone is talking about AI, and it is great, but we need to take a step back. I think we will have an interesting session about this tomorrow. But to be very quick: data quality is really key. I am not teaching you anything new here today, but it is really key. [3:30] **Participant 2**: So, talking about data quality, we can summarize it in three main pillars. The first is organization: who is responsible for what? The second pillar is processes: what do you need to do, when, and how? And the third is probably the tool, the system, the technical capabilities that help you control the data and make sure data quality is optimal. So of course you see me coming: [4:28] **Participant 2**: this is where I bring in PlanisWare, because PlanisWare can really help in giving you the basis for managing your data quality.
[4:40] **Participant 2**: When we implement PlanisWare in new companies, the objective, the target, is really to have one single source of truth for the whole company for managing R&D projects. And this is the promise we make; so those were the three pillars, I did not lie to you. Basically, when you implement PlanisWare, we are replacing tools in the company, because the landscape is mostly scattered, and then the tool serves as a basis [5:25] for enhancing the processes. It is really used, together with our best practices, as a basis to harmonize processes. It is a long journey, I am not teaching you anything new, but it really gives you a step into streamlining all these processes. [5:47] I just wanted to share a few experiences from customers. BMS built a planning center of excellence on the basis of PlanisWare, and they report that these efforts led to a data accuracy increase of 98%, meaning that with PlanisWare and the capabilities inside it, they were able to support their processes. Another one I find interesting: TEDA, which I know very well, replaced 17 tools with PlanisWare. [6:30] You can imagine, 17 tools, and maybe it is the same for you: a lot of different tools, replaced with one single platform. That is really the key with such a tool, being able to have everybody working in the same place, and it makes a huge difference. Of course, I guess you know PlanisWare; if you do not, it is a project portfolio management tool, the leader on the market. We work with 90% of the top 20 biopharma companies. [7:06] So the biggest ones, and of course we also work with a lot of smaller ones.
But we have 30 years of experience in the industry. I think our first customer was Sanofi, back in 1996, and they are still using PlanisWare today, with more than 10,000 users, for R&D projects of course. So yes, it has a lot of capabilities, which you will see in a minute, but we also have best practices to really help you streamline your processes and go further. [7:41] **Participant 2**: So once you have a solid basis with a tool like PlanisWare, you can go up to the next level, put AI on top, and save time and gain productivity. That basis is the most solid one you will have. So that was my quick introduction. Now I guess everybody wants to see the tool and the features, so I will hand over for that. [8:14] **Participant 1**: Thank you for presenting without the slides. Before we start the demonstration, out of curiosity (I like to do this sometimes, just to see): [8:26] **Participant 1**: maybe you can raise your hand. How many of you are using AI, either personally or at work, at what I would call a junior level, meaning, for instance, as an assistant for emails, or launching something to record all your meetings so you do not have to take notes? How many of you? Yes. How many would be medium users: using it, for instance, on documentation, to extract something, to go a little deeper with the AI [9:08] and really converse with it? And who would qualify themselves as an advanced AI user? That could of course be coding, but not only coding: for instance, using copilots or sharing AI agents with your teams. OK. It is interesting to watch this evolve over the years, because more and more people are using it anyway.
[9:43] **Participant 1**: So we have only about 10 minutes, fewer if we take questions, which makes for a really short demo; of course it is not possible to show all the AI use cases we have. I will just do a quick walkthrough of different items. We have the booth, so if you have more questions, we are of course here both days; if there is anything you want to share with us, come and see us. We split this quick demo into two topics. [10:14] I will start with the project view, like an AI [10:18] assistant for, let's imagine, a junior project manager. So I will open my PlanisWare tool. Here we are in PlanisWare, in the project module, for those who are familiar with it: you have the list of all your projects with some information, and things are flexible in the system, you can filter. On the right-hand side (we do not see it because of my toolbar) you have our bot, the Oscar bot, that we have in PlanisWare. So, for [11:06] **Participant 3**: instance, what I can do is say: I want to create this object, clinical, for obesity. [11:30] **Participant 1**: It works like an LLM chat. What is happening behind the scenes? We are connecting to an LLM, a large language model, and we are pushing it information on what we are looking at. It is just a prompt, as we said, but here it has given me some more detailed information. I can ask for a template, but maybe I want to add some more information before I create my project. You can see, for instance, what [11:59] **Participant 3**: (dictating into the microphone, partially inaudible) JP1, sustaining, very class... [12:18] **Participant 1**: The system will now store my information, and it understands that I have a rationale, the clinical PDPT market consideration form.
From there: in my system I have already preset a lot of templates, because we have a lot of projects in our company, so either we have some similar projects we are looking into, or we just want predefined templates for the end users. So I can ask: [12:47] **Participant 3**: what kind of templates can I use? [12:59] **Participant 1**: And from there it lists exactly all my templates, so I can pick one of them. Let's say the accelerated one. Again, it is an LLM, so you do not need to be accurate; if you mistype, it should still work. It tells me that is a great choice, we will accelerate. Basically, even if you make a lot of typos, as you can see, it still works. "Would you like to proceed with creating the project?" Yes. [13:26] And now the system automatically creates the project with the given template and the given information, rationale and so on. As you see at the top here, my project has been created, highlighted in blue, with a logo to mark that AI created it. Pay attention when you create something with AI, of course; we do a lot of implementations with AI, and you need a human in the loop. So for now it is just in draft status. [13:51] It is just a simulation, a draft, and someone needs to take the decision to approve this project. On top of that (maybe not the best example, but I am currently working with a customer on it, I just built it, so it will come quite soon) we have what we call auto-tagging features, which help automatically preset information. It is not the best example, but if you click here, the system will auto-tag using semantic search: what is the best tag? [14:24] Well, here it is easy: I typed obesity, so it tags obesity. But the semantic search works, and we are using it a lot with some customers.
On top of that, now that I am happy I created a project, let's go inside an older project. I can select one of the projects. Here, as you can see, I have some past activities and past data, so this is a project in the execution phase. I am entering phase two, close to, let's say, the enrollment period, for instance. [14:58] So what I can do, because as I said I am a bit of a junior, is search in all the documents I have in my system. It could be health authority documents, it could also be project documentation. I can type my question, as we said with an LLM, in any language. I don't know, does anyone here speak Swedish? I am not sure how to say this function in Swedish. [15:38] But anyway, the system should answer in any language. It goes through your documentation: this is RAG, retrieval-augmented generation. We are pushing out and indexing all the documentation you have in your current system. It could also include documentation that lives outside the PlanisWare system; we just need a tunnel to map it. And basically, it pre-filters which documents are inside PlanisWare. You can have a preview, and you could also set up a preview for external documentation. [16:09] At the top, you have a summary. It could be a summary of a process, or a summary of steps, like how to create a response or how to get something done; it really depends. And here it pre-filters
[16:22] **Participant 1**: down to the documentation related to my project. Now, what I would like to do (OK, the minutes are a bit short anyway) concerns the enrollment period. Once again, I am a project manager, and the question is: my project was created two years ago, and maybe we have since improved the way we run this specific activity, like the enrollment period, on more recent projects. Maybe we have more accurate information in the database inside the system. [16:59] We have a tool called Search for Similar Objects. How does it work? You map different fields, and the system will automatically suggest fields to map. This does not go through an external LLM; we can do it inside PlanisWare itself, and we can also enrich it with semantic search. There are two things to consider in what we do. First, we look in the database for the closest objects based on the database structure, which is close to the reality. [17:27] And on top of that, we look at text fields, notepads, descriptions and so on, using our own LLM vectorization to find the closest objects. OK, it is a little less precise, so it is not a pure semantic search, but we still get quite good results, combined with the data model structure, on what the closest objects are in your whole database. What is the purpose of that? [17:51] For instance, if I take a project, quite quickly I can see that, OK, this one here is a good match. Of course I could click and have a direct link, but it is not that difficult: you just search for your project, you have it here, you open it, and you look at the project information if you want to see what happened specifically on that project. In addition to that, we have this similarity analysis.
[18:23] **Participant 1**: We also have different information you can find at the bottom, such as risks, issues and so on. Then the system suggests some analyses set up in the system; you can use any measure you want. Here we have the mapping, depending on your similarity score, and the measure you want to compare against; I took duration in my example. So here I have what I, as a user, found to be a good match, and (maybe it is not so visible) the bad match in red. [18:56] And here it is not the closest one; the idea is to keep it simple for the users. I know the average is not ideal, but at least you have the information on the average, because the information is in the database. With our customers we reinforce this with predictive analytics, but that sometimes takes longer to set up, and I will not have time to show it. But anyway, let's move on. What we can do is go back to our friend Oscar. [19:27] **Participant 1**: OK, and now I am getting back to my bot, Oscar. [19:31] So what I [19:32] **Participant 3**: would like to see is: OK, what is this project's status? So I am asking: what is the [19:45] **Participant 1**: current project status? It is the same principle: I choose what I want to push out. I could quickly review it with you, but I am pushing out all the project information I selected, and since I have sent it to the LLM, I can ask anything I want. I can ask, as the project manager, about the status, or about a specific activity; I am really just chatting. From there, what I can do (sorry, sometimes I do not catch the replies), OK, so: what kind of project is it? From there, what we defined is to create a version. [20:36] So now we ask the system. Here I deliberately retrieved only a little information, so it is not showing a lot, but I can ask for a lot. This is really useful.
[20:49] **Participant 1**: But from there you can ask it to [20:51] **Participant 3**: create a version. So now [21:05] **Participant 1**: the system is automatically creating a version and making a copy. My version is created. I can ask it to display this version. I am doing this step by step just to show you; in a real case you would do it all at once, and we could of course set something up to do it all at once. OK, and I don't know if you can see it here in my Gantt. Actually, I can close that. In my Gantt, here I have my activities, and underneath, here is my version. [21:37] So what I want to do now, with my version, is make an AI version, an AI graph, just to see how it looks in the Gantt: what the AI will suggest compared to what I have in my project. I can set up rules, such as: by default, do not touch what is in the past and what has already been accomplished, because there is no need to. From there, [22:01] I say to my friend Oscar: adjust the durations. And the system will automatically adjust the durations. Behind the scenes (I don't know if you saw it) it pushed out all the data, and here I have my AI durations. This is dummy data, so here it is longer than it should be, but the principle is exactly that: you compare, and at least the portfolio manager can see what the system's average is suggesting. We are currently using it with customers. [22:39] I did the quick demo; we have a longer one on this topic, where we push all the main metrics and can go through every activity to decide which one to update, which figure to take, and which math and algorithm to use. I think I am already running out of time, actually, so it is OK to move on. We have one more that I won't show now.
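The duration-adjustment step described above, deriving a suggested figure from comparable historical activities, can be sketched roughly as below. The project names, similarity scores, and durations are invented, and the similarity-weighted average is an assumed aggregation scheme for illustration, not PlanisWare's actual algorithm.

```python
# Hypothetical history: durations (in weeks) of the enrollment activity on
# past projects, each with a score from the similar-object search.
HISTORY = [
    {"project": "NX-12", "similarity": 0.91, "enrollment_weeks": 38},
    {"project": "NX-07", "similarity": 0.84, "enrollment_weeks": 45},
    {"project": "QL-03", "similarity": 0.35, "enrollment_weeks": 61},
]

def suggest_duration(history, min_similarity=0.5):
    """Propose an adjusted duration as the similarity-weighted average
    over sufficiently similar past projects, ignoring poor matches."""
    peers = [h for h in history if h["similarity"] >= min_similarity]
    total = sum(h["similarity"] for h in peers)
    return sum(h["similarity"] * h["enrollment_weeks"] for h in peers) / total

# Only the two close peers contribute; QL-03 falls below the cutoff.
print(round(suggest_duration(HISTORY), 1))
```

As in the demo, the suggestion is a comparison baseline: the planner sees the system's figure next to the current plan and decides per activity whether to adopt it.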
We also have one on data quality analytics. [23:15] **Participant 1**: Just to show you briefly: in PlanisWare we have what we call DQA, data quality analytics, and out of the box we deliver the DCMA best practices on scheduling in general. We are also able to quite easily create our own set of rules. [23:39] The idea is that, thanks to this, we can also ask the Oscar bot: what is the quality of my planning? Then, from the defined set of rules, it will go through them, highlight and summarize exactly where you have issues. And then you can ask it: how can I counteract that? If you want a demo, I can show it at our booth; here I am just skimming through everything. I am speeding up, sorry about that. [24:12] I think my agenda is too long for the demo. Back to portfolio level. So on portfolio as well: at the higher level, above the project focus, we can go to portfolio views. It can be exactly the same kind of principle. Now I am logging in on another screen. On this other screen, at the bottom, you have projects, you have different views, charts. We are also currently working on AI to gener ... [transcript truncated]
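The rule-based schedule checks described in the data quality analytics part of the demo can be sketched as follows. The rule names, activity fields, dates, and thresholds are all invented for illustration; PlanisWare's delivered rule set is not public in this document.

```python
from datetime import date

# Toy schedule data standing in for activities in a project plan.
ACTIVITIES = [
    {"name": "Protocol design", "start": date(2025, 1, 6), "end": date(2025, 2, 28), "owner": "A. Chen"},
    {"name": "Enrollment",      "start": date(2025, 3, 3), "end": date(2025, 3, 1),  "owner": None},
    {"name": "Analysis",        "start": date(2025, 9, 1), "end": date(2026, 9, 1),  "owner": "B. Roy"},
]

# Each rule is a label plus a predicate that flags a problem activity.
RULES = [
    ("end date precedes start date", lambda a: a["end"] < a["start"]),
    ("missing owner",                lambda a: a["owner"] is None),
    ("duration exceeds 52 weeks",    lambda a: (a["end"] - a["start"]).days > 364),
]

def audit(activities):
    """Run every rule against every activity and collect findings that
    an assistant could then summarize for the planner."""
    findings = []
    for act in activities:
        for label, check in RULES:
            if check(act):
                findings.append(f'{act["name"]}: {label}')
    return findings

for finding in audit(ACTIVITIES):
    print(finding)
```

Keeping rules as data (label plus predicate) mirrors the demo's point that customers can extend the delivered rule set with their own checks without changing the audit machinery.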