Please find below a transcription of the audio portion of John Owen’s showcase session, Making Light Work of Schedule Risk Analysis, being provided by MPUG for the convenience of our members. You may wish to use this transcript for the purposes of self-paced learning, searching for specific information, and/or performing a quick review of webinar content. There may be exclusions, such as those steps included in product demonstrations. You may watch the live recording of this webinar at your convenience.

Kyle: Hello, everyone, and welcome to today’s MPUG vendor showcase session, Making Light Work of Schedule Risk Analysis, presented by Barbecana. My name is Kyle, and I’ll be the moderator today. And before we begin, I’d just like to remind everyone that this is a vendor showcase session, which is a bit different from the typical MPUG training webinar. The presenter will discuss third-party Microsoft Project add-ons, or services, in a no-pressure, stress-free environment for you. These items are not out-of-the-box functionality, and do typically require a separate product trial, or purchase, to use.

Kyle: Today’s session is eligible for one PMI PDU in the technical category, and the MPUG activity code for claiming the session with PMI is on the screen now. And like all MPUG webinars, a recording of this session will be posted to MPUG.com, shortly after the live presentation ends, and all MPUG members can watch these recordings at any time, and still be eligible to earn the PDU credit.

Kyle: All the sessions you watch on demand can be submitted to your webinar history, and the live sessions you attend are automatically submitted. Within your history, you can print or download your training transcript and certificates of completion, including one for today. You can access that by logging on to MPUG.com, clicking the My Account button, and then clicking on the Webinar Reports link.

Kyle: If you have any questions during today’s presentation, please send those over at any time using the chat question box on the Go To Webinar control panel. We do plan to answer those for you at the end of the session today. All right, we’ll go ahead and get started. We’re very happy to welcome back John Owen today.

Kyle: John joined Barbecana as CLO in 2014, and assumed the role of President and CEO in 2019. He has extensive experience in both project management and software development. Previous positions have included VP of development for Welcom, senior director of product management at Deltek, and manager for computerized project management at Worley Engineering. So with that said, I’d like to welcome you back, John, and hand it over to you to get started with today’s presentation.

John Owen: Thank you very much, Kyle. And good morning. Good morning, good afternoon, everybody.

Kyle: That looks great.

John Owen: Cool. All right. So, thank you. I’m going to talk about schedule risk analysis. And the important point I’m trying to get across is that it doesn’t have to be onerous, and that the benefits easily outweigh the small incremental work that you have to do to be able to perform a schedule risk analysis.

John Owen: The objectives today are, as I’ve just described, to show it can be easily achieved. I’m going to demonstrate how you can use schedule risk analysis to validate, or test, a risk mitigation strategy. So where you’ve correctly identified a potential issue in your risk register, I’ll show how you can actually build that information into your project model, and see that the risk mitigation plan you have gives you a satisfactory result.

John Owen: And then specifically, I’m going to have a quick look at how schedule risk analysis can be applied to Agile projects. There’s a lot of hearsay that you can’t use schedule risk analysis with Agile, and I’ll discuss why that hearsay exists, describe the counter arguments, and show how it can indeed be used.

John Owen: So what problem are we trying to solve? Well, basically, everybody knows that projects are sometimes unfortunately delivered late, or don’t include the originally planned scope. It happens all too often. And a lot of times it’s outside of our control. The project scope was changed by the owner or the management, or there were unfortunate estimates of the time and resources that would be required to complete particular steps in the schedule. Or the execution, unfortunately, is poor when the work is actually performed. Or it could be some completely external influence, like an act of God.

John Owen: However, what I’m going to briefly show you now is that oftentimes, it’s really got nothing to do with poor estimates or execution. It’s simply that the forecast produced by scheduling tools that use the critical path method, including Microsoft Project, Oracle Primavera, Deltek Open Plan, [inaudible 00:04:54] Powerproject, they all use fundamentally the same critical path method algorithm. And it has a sort of fundamental flaw that I’m going to show to you.

John Owen: First, let’s talk about the estimates that we’re using in our schedule. I would encourage you to use estimates that give you a 50% chance of completing the work in the time you are estimating. A lot of people are horrified by that, they say, “Surely I should have an estimate that gives me 100% chance of completing that work.” But the problem with that is that you’re effectively padding the estimate. You’re building a contingency for uncertainty into the task duration. And at the end of the day, that is going to give you an unrealistically pessimistic forecast, because every task will have that uncertainty built into it, and that will push out the forecast completion date. And if you’re working on a proposal, that basically means that you’re unlikely to win the work, because you’re forecasting a date further into the future than perhaps your competitors are.

John Owen: So what we’re looking at is an estimate that gives you a 50/50 chance of delivering each of the components in the time that you estimate. And the sort of extrapolation of that is that you have a 50% chance of delivering the project on time. But unfortunately, that’s not true.

John Owen: I’ve got two simple examples here. The top example is two tasks in series, I’m going to assume everything is subject to uncertainty. And that’s a reasonable assumption. Even if you’ve done something before, it may take a slightly different amount of time to complete in the future.

John Owen: In my case, I’m going to assume that uncertainty is symmetrical: things are just as likely to finish a little early as a little late. I’m saying they’re just as likely to finish up to a day early as up to a day late. So let’s execute this project. Task A unfortunately happens to take six days. Well, we may get lucky and Task B only takes four days, or, as project managers, we’ll have observed the delay and put all of our effort into ensuring Task B finishes in four days. So the project in that simple scenario still gets to finish on day 10.

John Owen: But, if you look at the second example below the yellow line, the tasks are in parallel. And if Task A again unfortunately slips to six days, then it has pushed out that successor, be it a milestone or another task, to begin at the end of day six. And there’s nothing you can do with Task B except perhaps make matters worse, if it happened to take seven days to do Task B.

John Owen: And if you tabulate that, as I have on the right hand side, in this simple example, where you have two tasks with the same duration and symmetrical uncertainty, each will finish on or before its estimated duration 50% of the time. Tabulating the combinations, we actually only have a 25% chance that the successor task will begin on time, according to the CPM schedule. This effect is called merge bias, and we regard it as one of the most important reasons that projects get delivered late. It’s got nothing to do with poor estimates, poor execution, and so on. It’s simply the fact that they didn’t take into account that everything is subject to uncertainty. And that cumulatively builds up through merge bias, wherever successors have more than one predecessor.
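That merge bias arithmetic is easy to check with a quick simulation. This sketch is not from the webinar; it just models two parallel five-day tasks with symmetric plus-or-minus one day uncertainty and counts how often their common successor can start on time:

```python
import random

random.seed(1)

TRIALS = 100_000
on_time = 0
for _ in range(TRIALS):
    # Each parallel task is estimated at 5 days with symmetric
    # +/- 1 day uncertainty, so each one individually has a 50%
    # chance of finishing on or before its estimate.
    a = 5 + random.uniform(-1, 1)
    b = 5 + random.uniform(-1, 1)
    # The successor can only start once BOTH predecessors finish.
    if max(a, b) <= 5:
        on_time += 1

print(f"successor starts on time in {on_time / TRIALS:.0%} of runs")
```

Despite each task individually being 50/50, the merge cuts the successor’s on-time chance to roughly 25%, and every additional independent predecessor halves it again.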

John Owen: So, how do we quantify? How are we going to capture the effect of this merge bias? And unfortunately, there is no arithmetic way of doing so. Well, that’s not entirely true. With a couple of tasks, maybe two, three tasks, you can do it arithmetically. But, it becomes a Herculean task.

John Owen: A much better approach is to use a simulation. And what we’re going to do is to simulate the execution of the project thousands of times, and see how the effect of uncertainty cumulatively pushes out that merge bias, through tasks that have more than one predecessor, as well as take into account other risks, and uncertainty that we apply on top of that.

John Owen: The key thing here is that it doesn’t have to be a lot of extra work. What we are going to be doing is capturing three point estimates. So rather than just capture a single point estimate, say a task is expected to take 10 days, we are going to give the software some additional information. We’re going to say it could take between nine and 11 days, and it’s most likely to complete in the 10 days that we originally estimated.
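As a rough illustration of what sampling from a three-point estimate looks like (Full Monte’s internals aren’t shown here; this just uses Python’s standard library triangular sampler with the nine, ten, and eleven day figures above):

```python
import random

random.seed(2)

# Hypothetical three-point estimate: originally 10 days, now bracketed
# as optimistic 9, most likely 10, pessimistic 11.
samples = [random.triangular(9, 11, 10) for _ in range(100_000)]

mean = sum(samples) / len(samples)
print(f"mean sampled duration: {mean:.2f} days")   # close to the 10-day estimate
print(f"range: {min(samples):.2f} to {max(samples):.2f} days")
```

Every simulated execution of the project draws one such duration per task, so the extra information costs nothing at run time.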

John Owen: So, what we’re going to do is capture this information. Now, again, I would stress, we don’t have to do it for every single task in the project; we’re going to focus on known high risk items. Things that we haven’t done before, where we’re not quite sure how long they’re going to take, so it’s worth putting some extra estimating effort into capturing that range of durations.

John Owen: I would also suggest that we also need to look at everything that’s on the critical path in Microsoft Project, and everything that is near the critical path, so it has relatively low quantities of slack, or float. So it may affect the outcome once we start to take into account uncertainty.

John Owen: There’s different ways of quantifying this uncertainty. You can use discrete durations, or you can use percentages of the original duration. The other thing I like to do is to encourage people to think in terms of their confidence in the estimates that have been provided. If you know somebody has historically underestimated, then perhaps you have a lower confidence in that particular estimate. You could also describe it as a higher risk associated with that estimate. And that’s all information that we can easily and quickly put into the simulations that we’re going to do.

John Owen: One of the best ways of doing it is mining historical information. Our tool has a report called the History Report, and this enables you to do an analysis on work that’s already been completed, be it a completely separate project, or the work to date in your current project. Effectively, it analyzes the ratio of actual to estimated durations, so we can see how we have performed in the past.

John Owen: Obviously, it’s important that you look at similar types of work. It wouldn’t do any good to apply uncertainty for a high tech project to a simple engineering, or a construction project. But as long as you have previous work experience of a similar type of work, then you can use that to give you a defensible forecast for the future.

John Owen: In this particular case, on the screen, you’ll see we’re looking at a history report, and the good news is we can see a significant number of tasks, 33% in actual fact, completed on their estimated duration. So that’s the highest histogram bar there. We can actually see that quite a significant number of tasks finished in as little as 50% of the estimated duration, so that was some good news. Some tasks were completed sooner than expected.

John Owen: But on the opposite side of the most likely value there, we can see that some tasks took significantly longer than estimated to complete. In actual fact, the worst case on this particular chart, and this is a real project, a task took over six times longer than it was estimated. You may say, “Well perhaps it was only a one day task, and the fact that it took six days doesn’t really matter.”

John Owen: In actual fact, this was a three week task, and it took six times longer than estimated, so it had a considerable impact on the schedule. And basically, in this particular case, it wasn’t a data entry error, or anything like that. It was simply a task had a risk occur that had not been identified and mitigated, and it pushed out the project.

John Owen: But ignoring the outliers, we can see I could use this information to justify, for future work, saying my best case duration might be 50% of the estimated duration. My most likely, I’m going to put at the estimated duration: if the guide tells me 10 days, I’m going to say it’s most likely to finish in 10 days. And then for the worst case scenario, I might choose somewhere around 200%. A lot of people are quite horrified to hear that we’re actually going to put into the plan that we expect some tasks to take twice as long as we estimate. But you have the justification here, in the historical data from a similar type of work, to defend that choice for estimating into the future.
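To make the idea concrete, here is a toy version of that history mining. The (estimated, actual) duration pairs are invented, not taken from the project on screen; the point is just that the observed ratios of actual to estimated duration can bound a defensible three-point range:

```python
# Hypothetical history: (estimated, actual) durations in days for
# completed tasks of a similar type; the numbers are invented.
history = [(10, 10), (5, 4), (20, 10), (15, 15), (8, 16), (12, 13),
           (10, 9), (6, 6), (9, 18), (14, 12), (7, 7), (10, 21)]

ratios = sorted(actual / estimate for estimate, actual in history)
best, median, worst = ratios[0], ratios[len(ratios) // 2], ratios[-1]
print(f"best observed:  finished in {best:.0%} of the estimate")
print(f"median:         {median:.0%}")
print(f"worst observed: {worst:.0%}")
# With history like this, optimistic = 50%, most likely = 100%, and
# pessimistic around 200% of the estimate is defensible for future work.
```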

John Owen: So, we’re going to run something called a Monte Carlo simulation. Effectively, we simulate the execution of the project thousands of times. And I stress the thousands. The more the better. As with any statistical process, the more information you have, the more reliable the results will be. Full Monte is fortunately very fast, so you can run thousands of simulations.

John Owen: Just yesterday, I was running a simulation on a project that had 25,000 tasks, and I was able to run 5,000 simulations in about eight minutes. In reality, I would encourage you to run even more. Go to lunch, and run 10,000 simulations. And obviously, if your project is smaller, if you only have 800, or 1,000 tasks, you can run 100,000 simulations in a relatively short amount of time.

John Owen: And then for each of those individual simulations, we will sample a duration for each task from within the range that has been specified for them, so that best case, most likely, worst case, or, we call it optimistic, most likely, pessimistic in our tool. We can also specify a probability distribution. These curves, the triangular, the normal distribution, the beta distribution, and so on. And effectively, they allow us to give more information to the tool about how likely we think it is that the actual duration is going to be closer to the most likely, or perhaps closer to the most optimistic, or closer to the worst case, and so on. So, we’ve got four data points that we’re giving the software for each task.

John Owen: In reality, don’t worry too much about the probability distribution. A lot of organizations mandate the use of triangular. In reality, a beta distribution is probably closer to what will actually happen. But the advantage of the triangular distribution is that it increases the probability of samples being taken closer to the extreme best case and worst case, which gives you a slightly more pessimistic forecast. And if you think about it, that is building in more contingency for risk, just by using that distribution rather than the beta. So it’s very defensible to use the triangular, even if in reality, as we saw on that previous history report, the data didn’t look very triangular; it was actually closer to a beta distribution.
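The triangular-versus-beta point can be illustrated numerically. This sketch compares the upper tail of a triangular distribution against a beta fitted to the same three points using the common PERT convention; that parameterization is an assumption on my part, as the webinar doesn’t specify which beta shape Full Monte uses:

```python
import random

random.seed(3)

N = 200_000
O, M, P = 9, 10, 11  # optimistic, most likely, pessimistic (days)

# Triangular samples straight from the three points.
tri = sorted(random.triangular(O, P, M) for _ in range(N))

# Beta fitted to the same points via the PERT convention
# (alpha/beta below are that convention, not necessarily Full Monte's).
alpha = 1 + 4 * (M - O) / (P - O)
beta = 1 + 4 * (P - M) / (P - O)
pert = sorted(O + (P - O) * random.betavariate(alpha, beta) for _ in range(N))

p95 = lambda xs: xs[int(0.95 * len(xs))]
print(f"triangular 95th percentile: {p95(tri):.2f} days")
print(f"beta-PERT  95th percentile: {p95(pert):.2f} days")
```

The triangular’s fatter tail nudges the high-confidence dates slightly later, which is exactly the extra contingency for risk described above.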

John Owen: And again, don’t feel you have to do it for every single task in your project. Focus on those critical, near critical, and known high risk tasks when capturing those three point estimates. I do encourage you to put some uncertainty onto every single task in the project, though. I call this background uncertainty. So, just in case the critical path changes significantly, you’ll still have some uncertainty on other things that may jump onto the critical path.

John Owen: One technique I see people try to use, and it’s not a good technique, is to effectively assume that all critical tasks are high risk. Well, that’s not the case. You may have tasks that, yes, they’re on the critical path, but they may not be high risk. You may be very confident that they will be delivered in the estimated time. So don’t make the mistake of assuming that everything on the critical path is high risk. It is actually worth giving some thought to it.

John Owen: So, the next thing I want to talk about is probability of success. We mentioned we had a 50/50 chance of delivering in some of those earlier examples I gave you, but in reality, a 50/50 chance is not acceptable to most organizations. You want a better than 50% chance of delivering on time. So most commercial organizations shoot for somewhere between 80% and 90% confidence in the dates that they’re forecasting for the future.

John Owen: It is fairly uncommon to shoot for 100% confidence, though obviously we can do it, and I’ll show you it on a histogram. But the problem is, if you’re going to commit with 100% confidence to a particular delivery date, then you have to hold back your resources to be able to continue work through to that 100% confidence date, which means you can’t take on additional work, or plan to take on additional work, as that project nears completion, because you have to keep the resources back just in case the project does slip to that worst case scenario. And commercially, that can be cost prohibitive. Basically, it’s an opportunity cost.

John Owen: Now, some projects, for instance, launching a probe going to the outer planets, there may be a very well defined launch window of five days that you have to launch the probe in. And obviously, you would be forecasting at 100% confidence, to ensure that you achieve that, because the cost of missing that launch window is prohibitive. You may have to then store the probe for an additional two years, until everything’s in the right place to have another go.

John Owen: Briefly, schedule quality is extremely important. You must have the entire scope of the project, it must be logically sequenced, and basically what that means is, the tasks have to be logically linked as predecessors and successors, and must not be artificially constrained. So finish on, start on, finish no later than: they’re all artificial constraints that should be avoided. It’s best practice to avoid those anyway. But for the purposes of schedule risk analysis, they absolutely have to be avoided, because once we start to apply uncertainty to the model, things will move, and unfortunately they do move into the future. And so, you don’t want them artificially constrained by a finish no later than that prevents the logic moving into the future.

John Owen: One thing to watch out for, especially with Microsoft Project is, if you’re modeling level of effort, your support work, your management, your IT support as additional tasks in the schedule, make sure that they will not end up driving the critical path if everything else happens to finish early in a particular simulation. So, level of effort tasks need to be linked to alternate ends, typically, when you’re contemplating running a schedule risk analysis.

John Owen: And then, put in milestones to identify important deliverables along the way. It may be stage payments, completions of various components and so on. Because by having milestones, we can focus the software to look at those milestones, and tell us the things that are important to achieve for each individual milestone. So they’re additional information points that we can give the software.

John Owen: So, the process is: we add the uncertainty, then we run the simulation and review the output. And almost certainly, the probability of success will not be what you want it to be, and what your management has requested it to be. At that point, you basically have to go back in and modify the schedule to finish sooner. And it’s an unfortunate fact of life that if you’ve committed to deliver on September the 30th, 2020, and the client has asked you to forecast at 80% confidence, then the finish shown by your schedule will be sooner. It may be in August. So the finish date in the schedule may be in August, but you are committing at 80% confidence to the end of September to the customer.

John Owen: And so, you will probably need to adjust the schedule to finish sooner. Use better resources to reduce durations of tasks, change the logic, so you’re doing more things in parallel. There’s various things that you can do. But it is an unfortunate fact that the schedule will have to almost certainly show an earlier finish when you’re forecasting at higher levels of confidence.

John Owen: So, once we’ve run the schedule risk analysis, this is the chart that everybody is expecting to see. It’s called a probability distribution histogram, or just the probability histogram for short. The funny thing with this chart is that the histogram bars in themselves are not very useful, not very meaningful. They’re simply showing us the number of times that the simulation finished, or this particular milestone finished, or was completed on a particular date.

John Owen: So, I can see that the date that the project during the simulations finished on most frequently was around 9/12. But that’s not very useful information. It’s the S curve overlaid over the histogram which is the important takeaway from this chart. And the S curve is scaled on the right hand Y axis. And what we’re showing there is the cumulative probability of finishing by a particular date.

John Owen: So, based on the simulations that we just ran, we have an 80% chance of being able to deliver this milestone by 9/17. By September the 17th. The underlying schedule was showing that we were going to deliver on 9/4. I’m reading that from the title bar at the top, where it says Deterministic Value. So, that’s the result calculated by Microsoft Project. And our software has calculated that we only have a 2% chance of actually delivering by 9/4.

John Owen: That 2% chance of hitting 9/4 is not particularly important information, unless of course you’ve already committed 9/4 to the customer, in which case you do have a problem. If you haven’t committed, you’re in the lucky situation of being able to say, “Mr. Customer, we’re going to deliver by 9/17 at 80% confidence.” And again, that’s a commercial decision, as to the level of confidence at which you want to forecast. Obviously, there’s a 20% chance that the delivery will slip past 9/17, but that’s a question of weighing the penalty clauses against the opportunity cost to your organization. So, the right hand Y axis is the important data from these probability distribution histograms.
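Mechanically, reading an 80% confidence date off the S curve is just taking the 80th percentile of the simulated finishes. Here is a sketch with an invented finish distribution, expressed as day offsets from the 9/4 deterministic date; the real offsets would come from the Monte Carlo run:

```python
import random
from datetime import date, timedelta

random.seed(0)

# Invented finish distribution: day offsets from the 9/4 deterministic
# finish. In practice these come from the simulation, not a formula.
start = date(2020, 9, 4)
finishes = sorted(random.triangular(-1, 20, 8) for _ in range(10_000))

# The S curve value at a date is the fraction of simulations finishing
# on or before it; the 80% confidence date is the 80th percentile.
p80_offset = finishes[int(0.80 * len(finishes))]
p80_date = start + timedelta(days=round(p80_offset))
print("80% confidence finish:", p80_date)
```

With these made-up numbers the answer happens to land in mid-September, well after the deterministic date, which is the typical shape of the result.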

John Owen: The other very useful component, and this is one of the reasons that having milestones to represent the completion of important components in your project is important, is the sensitivity tornado chart. And this effectively allows us to see the things creating the most uncertainty in whatever we selected as the important thing we’re looking at. In this case, project delivery.

John Owen: So, I’m seeing here the eight tasks that were creating the most uncertainty in that particular outcome, the project delivery. I can see that despite the hardware tasks being in the critical path in Microsoft Project, after we take into account uncertainty, the software tasks are actually creating the biggest range of uncertainty.

John Owen: The good news is the green means that on average, when for example, software task one finished, the top row there, when it finished closest to its most optimistic, or best case duration, then the project on average tended to finish earlier. The split between the green and the red is the overall average finish for the project. So when those top four tasks finished early, the project tended to finish early.

John Owen: So those are opportunities for schedule compression. If that 80% data at 9/17 is not acceptable, then these are the opportunities for schedule compression. Can I reduce the duration of these tasks by using more, or better resources? Can I change the logic so they’re not on the critical path? Can I reduce the uncertainty if I go back and revisit the estimates, and change them from a high risk scenario, to a low risk scenario? So, several things that you can do, but these are the things to focus on.

John Owen: You’ll notice that the hardware tasks there only have red. And that was because after taking into account uncertainty, the only time they actually affected the outcome was when everything else happily finished early. So, all of the software tasks finished closer to their most optimistic duration, at which point the critical path changed to flow through the hardware tasks, and on balance, that tended to mean that they only impact things when they’re pushing the dates out into the future. So, very little opportunity for schedule compression on the hardware tasks. We should be focusing on the software tasks.

John Owen: Another useful output is what we call risk path analysis, which basically groups the tasks into logical chains, based on their probability of being the driving path to your selected milestone, in this case, again, the project delivery. So I can see that the primary, most likely critical path goes through software tasks one to four, software complete, and so on.

John Owen: If software task two happens to finish early, then it is possible that the next most likely scenario is that software task three will jump onto the critical path, and push out software task four. So I can see the probabilities of affecting the outcome banded together. It’s useful again, but just for checking that everything that you expect to see is driving the delivery, and nothing that you don’t want to see is driving delivery. So it’s a useful validation of what’s going on in the schedule.

John Owen: Briefly talk about schedule margin. Basically, what we’re trying to do is to protect that deliverable, so we’re forecasting at say 80% confidence for our future dates. The difference between the date being forecast by Microsoft Project, which as I indicated earlier, unfortunately will be earlier, and that 80% confidence date at 9/17, the difference between the two is effectively a buffer for risk.

John Owen: You can call it different things: risk margin, risk contingency, or buffer. The official term from the PMI is schedule margin. So they use that term to define this buffer, which represents the gap between the finish being shown by Microsoft Project, or the critical path method model, and the date that you’re committing to the customer.

John Owen: And I would encourage you to put that actually as a task into your schedule, to act as a placeholder, to show, “This is the amount of margin we have.” Be it two weeks, 10 weeks. This is the margin. This is the contingency for risk. And the biggest reason for putting it in there is to prevent somebody saying, “Oh, look at that. Microsoft Project says that they’re going to finish in the middle of August. Let’s tell the customer.” No. The customer should still be expecting you to deliver on 9/17, and that schedule margin task will highlight why there is a difference.

John Owen: The only important thing to do is to zero that out before you rerun a schedule risk analysis, because obviously, you don’t want that delaying the results being shown by the schedule risk analysis. And one of the simplest ways to do that is just to mark them inactive in Microsoft Project before you run the schedule risk analysis.

John Owen: Briefly want to talk about risk mitigation. And so, we have our risk register, we have identified potential threats to our project, and we have put in place responses to those, be they avoidance, acceptance, mitigation, and so on. If it’s mitigation, we can actually build that into our project, and we can use a thing called conditional branching. So this is where a task has multiple successors, we can choose which of those successors is considered active, based on some condition of the predecessor, which in our case is going to be the date that the predecessor is planned to finish.

John Owen: So, at the top here, I’ve got my basic schedule. I’ve got two assemblies, Assembly A and B. They come together into integration testing, that runs in the system test, and then we have the launch. We need to plan at 100% confidence, because if we miss this launch window, we’re in trouble. So in this particular case, we have identified that Assembly A is high risk. We have low confidence that it will be delivered on time. Assembly B, we’re more happy about. We’ve considered it low risk. You may have a little bit of uncertainty, but nothing drastic.

John Owen: We put that information into a risk assessment, and then we get the bottom chart here, where we can see that the purple and green bars represent the likely dates that we expect this work to occur, based on the uncertainty that we’ve put into the model.

John Owen: And overall, the simulation is suggesting we have just over a 9% chance of achieving that launch on July the ninth. So, we have a problem, and we have identified that in our risk register. We said there is a low probability of Assembly A being delivered on time. So, we planned a risk response. Well, what can we do with that problem? And the solution we’ve come up with is that okay, we will accept that Assembly A may get delivered late. There’s nothing we can do about that. So how are we going to handle it?

John Owen: What we’ll do is we won’t immediately merge it into integration testing; instead, we’ll do some additional unit testing, and then merge it in later, at the final system test. So, we’re going to have two options after Assembly A is delivered. If it’s delivered on time, or within a particular timeframe, we will include it in integration testing. But, if it gets delivered late, we will take the alternate path, do the additional unit testing, and then merge it in at the end.

John Owen: And when we rerun the simulations, taking this additional conditional information into account, we can see that the final launch task, we now have at 100% confidence of that occurring on time. So basically, when the simulations were running, if the assembly was actually delivered after 6/15 on a particular iteration of the simulation, we took the alternate logic, we did the additional unit test, and you can see the yellow bar for the unit test is pushed significantly out, compared to where it was in Microsoft Project. But it’s not pushed far enough out to significantly delay the final system test, and we have a project that works.
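A stripped-down version of that conditional branch can be sketched as follows. All the durations, the day-26 cutoff, and the day-58 launch window are invented for illustration; the structure, taking the unit-test path whenever Assembly A lands after the cutoff, is the point:

```python
import random

random.seed(7)

TRIALS = 20_000
WINDOW = 58   # launch must happen by working day 58 (invented)
CUTOFF = 26   # Assembly A joins integration only if done by day 26

def on_time_fraction(mitigate):
    hits = 0
    for _ in range(TRIALS):
        a = random.triangular(20, 44, 30)       # high-risk Assembly A
        b = random.triangular(20, 24, 22)       # low-risk Assembly B
        integ = random.triangular(18, 22, 20)   # integration testing
        sys_test = random.triangular(6, 10, 8)  # final system test
        if mitigate and a > CUTOFF:
            # Alternate branch: A skips integration, gets extra unit
            # testing, and merges at the final system test instead.
            unit = random.triangular(2, 4, 3)
            ready = max(b + integ, a + unit)
        else:
            ready = max(a, b) + integ
        hits += (ready + sys_test) <= WINDOW
    return hits / TRIALS

print(f"no branching:   {on_time_fraction(False):.1%} on time")
print(f"with branching: {on_time_fraction(True):.1%} on time")
```

With these numbers the unmitigated plan makes the window well under half the time, while the conditional branch always does, mirroring the jump in launch confidence described above.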

John Owen: Briefly, I want to talk about Agile methodologies, and then I’m going to give you a quick demonstration of the real product. So, Agile methodologies are an excellent way of improving the chance of delivering projects on time. They’re often applied to software projects, but they can in reality be applied to many different types of project. And the basic concept of an Agile methodology is you define components of work, and you assign them to what are called sprints, and sprints are time boxed. That might be two weeks, or four weeks, for a sprint, and that sprint will not move. So, you know when that sprint will occur, and the work that you’re planning to do in that sprint.

John Owen: And that’s one of the reasons that… I guess, two of the reasons, that people mistakenly assume you can’t use schedule risk analysis: because the sprint is time boxed, effectively, there’s no uncertainty with the sprint. Unfortunately, that doesn’t mean there’s no uncertainty with the work in the sprint, and indeed, the methodologies account for that by having the concept of a backlog list. So if work isn’t completed in its allocated sprint, it moves to the backlog, and will have to be done in a future sprint.

John Owen: So, the fact that the sprint itself is time boxed doesn’t mean that the work will necessarily be completed within the time allocated. So, there is, again, uncertainty associated with the work. And the other concern is that within a sprint, oftentimes work isn’t performed in a preordained order. So you may have five components of work that need to be performed, and the sprint manager can choose who is doing which work at any particular moment in time.

John Owen: However, even if that is true, there is probably some logic, because, for example, you only have limited resources. Otherwise, you could do all the tasks simultaneously. So, even if they’re not logically ordered, and it is not a prerequisite that Task A is done before Task B, it may be that you can’t do them both at the same time because of resource constraints. So again, there is something pushing work out into the future. Those two factors are simple arguments, but they show you can still apply schedule risk analysis to a sprint methodology project.

John Owen: One of the biggest issues that we see is people unfortunately organize the project schedule around the sprints. So, they’ve got these time boxed sprints, and then they build the work in the schedule into the various sprints. Unfortunately, this is a poor technique, because effectively it makes it difficult to manage the project if work does indeed slip from one sprint, become part of a backlog, and move to a future sprint. You’ve got to rejig the project, and so on. So this is a poor technique, you shouldn’t do it.

John Owen: The project should be structured around the work, and the deliverables, as with any other project. So, here’s a better example of the same work needing to be done, but it’s been organized into sprints using a code. So, we actually have the project work as an entity. Sure, I’ve created some tasks to represent when the various sprints are occurring, but they aren’t driving the logic. And I’ve assigned each of the individual tasks to a particular sprint using a code.

John Owen: So, if one slips into a future sprint, it’s very easy just to change the code, and it will appear in the backlog list for that particular assigned sprint. So, this is a better way of structuring an Agile project. We are focusing on the work, and not the fact that it is being driven by sprints.

John Owen: And then I can, still, as normal, apply uncertainty. Obviously, the sprint tasks are just placeholders to represent when sprints are occurring. They have no uncertainty, they’re just reporting information. But the work itself, in this particular case, I’ve given each of the work tasks, plus or minus 25% uncertainty. So, their best case duration is 75% of the estimated duration, then most likely is 100% of the estimated duration, and the pessimistic, or worst case duration is 125%. So, what I’m saying here, is each task individually has a 50% chance of being completed in the estimated duration. It might finish a little early, it might finish a little late.

John Owen: If I look at the probability distribution histogram for the delivery, I actually see that we have a 50% chance of completing the project on time. And if we go back to the project logic, we can see it’s a very simple project. No task has more than one predecessor. So, merge bias is not a factor in this particular schedule. So, because each task has a 50% chance of completing on time, ultimately the project has a 50% chance of completing on time.
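That 50/50 result is easy to reproduce with a toy simulation. This is a generic sketch, not the product’s code: every task in a purely serial chain gets symmetric plus or minus 25% triangular uncertainty, and by symmetry the chain as a whole lands at roughly a 50% chance of meeting the deterministic finish date.

```python
import random

def serial_chain_confidence(estimates, n=20_000, seed=42):
    """Give every task symmetric +/-25% triangular uncertainty and count
    how often a purely serial chain finishes by the deterministic date."""
    random.seed(seed)
    deadline = sum(estimates)                     # Microsoft Project's finish
    hits = 0
    for _ in range(n):
        total = sum(random.triangular(0.75 * d, 1.25 * d, d) for d in estimates)
        if total <= deadline:
            hits += 1
    return hits / n
```

Because each symmetric task is equally likely to run early or late, the sum stays centered on the deterministic total; this only holds when there are no merge points, as John notes.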

John Owen: But we already discussed that having a 50/50 chance of delivering on time isn’t really the best business practice. We should be looking for a higher level of confidence. And if we look on the S curve, and the right hand Y axis, we can see 6/29 is a more realistic forecast date, or maybe even 6/30, if we want to be more confident of being able to complete the work.

John Owen: So, if things go badly, we are looking towards the end of June as the finish date. So, what does that actually mean for the underlying schedule? Here, I’ve created a risk adjusted Gantt again. So, we’ve got the original tasks from Microsoft Project, the bright red critical tasks. And then underneath each of those, we have a purple bar, representing when we expect the work to be performed at an 80% level of confidence.

John Owen: So, we’re taking the information from the simulations, and we’re looking at the 80th percentile on those S curves, and using those to generate the start and finish dates. And the takeaway from this simple example is that the work, at 80% confidence, is likely to blow past the end of the second sprint. So we are going to need, before we even start, to budget for having a third sprint, to complete the backlog items that have slipped from prior work. So that’s a very quick example of why, and how, you can still apply schedule risk analysis, even in an Agile methodology environment.

John Owen: So, quick summary, it helps produce more achievable plans. It is true that some projects are delivered on time without schedule risk analysis. But they either involved herculean efforts on the part of the project team to achieve that, or perhaps all the estimates had been padded, and you just got lucky that people were willing to accept a longer project timeframe than you could potentially deliver in. So, it does help you produce those more achievable plans. The unfortunate truth is, if you’re forecasting at 80%, or 90%, or 100% confidence, then that date will be after the finish date being shown by Microsoft Project. So you will probably have to work to move that date earlier, if you’ve already committed to the original date forecast by Microsoft Project.

John Owen: It’s always true that you execute against Microsoft Project. So, we’re not saying change the way that you execute the project. What we’re saying is, we’re going to produce more realistic forecasts that you commit to, but you still execute against the underlying plan. So, you’re not changing your execution, the way you execute your business at all. It is a good practice, a best practice, to put in schedule margin to represent that contingency, that buffer between what you’re trying to do, and what you’ve committed to do.

John Owen: It can be applied to Agile projects, and the other thing that I would suggest is it’s well worth continuing to run the risk analysis throughout the lifecycle of the project. So, after the project started, our software will automatically prorate down the remaining uncertainty, so you don’t need to actually update anything. All you have to do is enter your status information, and then press the Risk Analysis button, and confirm that at the required level of confidence, you are still on target to deliver on time.

John Owen: Before I flip into a very quick demonstration, that’s who I am, John Owen. There’s my email address, jowen@barbecana.com. I would be delighted if you want to send me any questions, and have a discussion about any of this presentation, or risk analysis in general. And you can also get a free trial of our software, for either Microsoft Project, or Primavera P6, from our website, at www.barbecana.com.

John Owen: So let’s have a quick look at the product itself. Full Monte works with Project Standard, Project Professional, anything from 2010 through 2019 is supported, both 32 and 64 bit. We also support things like Project Server, Project Online, and so on. So, you can use it in most environments, including standalone [inaudible 00:44:21]. Here I have a very simple demonstration project. This is actually the demo project that’s included with the software. So, if you install it, this is the project that you will see.

John Owen: Very simple project, it’s basically got two main components, the hardware and the software component. The hardware tasks, according to Microsoft Project, are on the critical path, and they are resulting in a final delivery on September the fourth. We have also done a risk assessment, so we’ve basically gone back to the team, and said, “Okay, you estimated 20 days to do this Hardware Task One component. How confident are you in that estimate?” And if they say they’re very confident, then we’re going to say that’s low risk. Or we could call it high confidence. The terminology is down to you.

John Owen: And likewise, we went back to the software people and said, “How confident are you that you can deliver your software to us in 18 days?” And they said, “Not quite so confident.” So, we’re going to consider that high risk, or put in a lower confidence that it will be achieved in 18 days. So, all we need to do is go to the Add-Ins menu of Microsoft Project, and just click on Full Monte. That will load up the project inside our tool.

John Owen: By default, here, it’s applied in my case, plus or minus 25% uncertainty to all the tasks. So we’re saying they’re most likely to complete in their estimated duration. They may be up to 75% as the best case of that estimated duration, in the worst case 125% of their estimated duration. So, it’s symmetrical uncertainty, just as likely to finish up to 25% early, as 25% late.

John Owen: I have used the triangular distribution, just because most organizations that we deal with are required to use a triangular distribution, or prefer to use a triangular distribution. But you can change that for other distribution types if your organization requires it. I’m just going to run a quick risk analysis. I’m going to run 10,000 simulations on this particular project, and then we can go and look at that probability distribution histogram for the final delivery.

John Owen: Because we actually basically gave every single individual task a 50/50 chance of completing on time, again, people sometimes expect that the project should have a 50% chance of completing on time. But the software is actually saying here it’s only 36%, and that is because we have several merge points. We’ve got these Hardware Tasks Two and Three merging into Hardware Task Four. The same with the software tasks, and then those two merge into integration, and so on.

John Owen: So merge bias is pushing out the finish date, and actually it’s saying at a 80% confidence. Basically, this means 80% of the simulations finished on, or before 9/11. So, we might, if we were forecasting at 80%, or requested to forecast at 80%, say, “Yes, we think we can deliver this project by 9/11.”

John Owen: If somebody says, “Well, actually, I want 100% confidence. I don’t want any uncertainty here.” Then you’re looking at 9/23. And this is why most commercial organizations don’t forecast at 100%, because this curve gets very flat. If you look at the incremental values here, we’ve got 9/7, 9/8, 9/9, 9/10, 9/11, 9/14. So it’s starting to take bigger jumps there, and then all of a sudden there’s 9/23. And if we were forecasting at 100% confidence, we would have to reserve our resources. We wouldn’t be able to commit to doing anything else before 9/23. So that’s why most commercial companies forecast somewhere between 80% and 90%. Because, effectively, after 9/14, we’re saying, “Okay, we should be done by then. We’re not guaranteeing it, but we should be. So let’s commit to doing some other work.” We could bid for other contracts that will start in that period.
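Reading a date at a given confidence is just a percentile of the simulated finishes. A minimal sketch, not the product’s algorithm: sort the finish values and index in by the requested fraction. The flat tail John mentions is exactly why the 100% figure jumps so far: the very last sorted value can sit well beyond the 80th or 90th percentile.

```python
def forecast_at(finishes, confidence):
    """Empirical percentile off the S curve: the value by which at least
    the requested fraction of simulated finishes has occurred."""
    ordered = sorted(finishes)
    index = min(len(ordered) - 1, int(confidence * len(ordered)))
    return ordered[index]
```

For example, with 100 simulated finish days, an 80% forecast picks the value that 81 of the 100 iterations met, while a 100% forecast picks the single worst iteration.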

John Owen: You saw that in the underlying Microsoft Project schedule here, I put our assessments of our confidence in the various task duration estimates, and I can take that into account in Full Monte. So I’m just going to say, “Apply from my assessment.” I have multiple assessments in here, and it’s actually going to bring that information in. And what that has done, in practice, is modify the uncertainty that was being applied to the hardware and software tasks, because those were the ones that I filled in, in Microsoft Project.

John Owen: So now, these high risk tasks, I wasn’t very confident in the estimate durations for the software tasks. They’re now saying they might finish in as little as 95% of the 18 days that was estimated. We actually think it’s more realistic that they’ll take 110% of 18 days, so we’re thinking they’ll take another day anyway. And the worst case is half as much again, so 150% of the 18 days. And the uncertainty for the hardware tasks is a much narrower band, because we have a higher confidence in that particular estimate. So I can rerun the simulations, and again, we’ll go and look at the probability distribution histogram for the final delivery.

John Owen: Unfortunately, our chance of achieving that 9/4 has dropped from 36%, to just 2%, taking into account the improved information that we’ve given the model, and the 80% is 9/17. The 100% has pushed out into the beginning of October. So, the good news here is there’s relatively little chance of it pushing further out, but the chance of achieving the date in Microsoft Project has significantly dropped.

John Owen: So then we can go and look at the tornado chart, to find out why. So, the tornado chart here is showing the things that are creating uncertainty in the outcome. So, I’ve got my software tasks, initiate, integration [inaudible 00:50:35], and so on. The good news is, I do have some green. So, when these tasks finished early, the project, on average, finished earlier. So, these are the opportunities where I can go back into the underlying schedule. So, if that 9/17 at 80% confidence is not acceptable, perhaps because we unfortunately already committed to the customer that we would deliver on 9/4, based on the original finish from Microsoft Project, then we are actually going to have to go back into the underlying schedule, and start to make some modifications.

John Owen: I told Full Monte to save the results, so it’s updated the chart here, to show what’s called a risk adjusted Gantt. So, I’ve got the original plan, and then what we are committing to do, in this case at an 80% level of confidence. And that’s done simply using bar styles, and so on, and various custom fields that have been updated by Full Monte.

John Owen: So, knowing that the software tasks are the ones creating the most angst in our outcome, I can start to go back to the team and say, “Can we reduce this? Can we use better resources? Can we reduce the uncertainty by re-estimating, using a better estimator? What can we do to reduce the uncertainty here?” So I’m going to say this is now medium risk, because we re-estimated, and so on.

John Owen: So, various things that we can do. I’m going to say initiate, we’re going to bring that down to four days. Integration, we’re going to use, again, more and better people, bring that down to three days. Go back into Full Monte, tell it to reanalyze based on the updated information, and then I can go and look at my histogram, and see whether we have managed to improve the situation.

John Owen: And we have. It’s not a dramatic improvement. We’ve got an 80% confidence now of finishing on September the eighth. So, recall the date we committed to was September the fourth, based on the original forecast by Microsoft Project. We do actually have a 60% chance of delivering by September the fourth, based on the simulations, but we have more work to do to get to the 80 or 90% confidence in 9/4 that our management, or the client, has requested.

John Owen: Again, it’s worth going, and looking at that sensitivity chart, to see what can we do to improve the situation. Now, the interesting thing here is, the hardware tasks are now higher up, and they are showing that when they finish closer to their most optimistic duration, or best case duration, that it tended to improve the outcome for the project.

John Owen: So, at that point, I’m going to go back into Microsoft Project, and I’m going to mess with those durations, because I know it will ultimately affect the outcome. I’m going to take that down a day as well. Rerun the analysis, and look at the delivery histogram. And here we have 80% at 9/4. So we have achieved our objective of an 80% level of confidence in being able to deliver by 9/4.

John Owen: The underlying Microsoft Project schedule is now showing we will finish on 8/28, and that is what we will strive to do. We will execute against the underlying schedule in Microsoft Project, exactly as normal, and we will try and deliver by 8/28. But, we have that contingency, that schedule margin between 8/28 and 9/4, that will protect us, protect that delivery of 9/4, should, unfortunately, uncertainty affect the delivery of individual tasks, and they get delayed.

John Owen: So, that basically in a nutshell, is using Monte Carlo Simulation to improve your chance of achieving a required deliverable date to your customers. So, what I would like to do now, is to throw this back to Kyle, and see if there have been any questions from the audience.

Kyle: Thanks, John. Just a reminder, anyone that has questions, you can chat those over, and we’ll answer those now for you. One question here, if you could repost your contact info, and what’s the best approach, if one of the attendees here is interested in learning more about the tool?

John Owen: Okay, so the first instance is to go and take a look at our website, www.barbecana.com. We have several whitepapers describing the process. When you download the software, there are some documents, including a getting started guide that walks you through the process for the first time. And then, as always, you can contact me at jowen@barbecana.com, or you can call us. The number’s on our website, but just for your information, it’s 281-971-9825, and support is option two, and we would be delighted to talk to you. New customers who buy the software actually get an hour’s online one-on-one free training, to help them get started, and resolve any issues. And we also have How To Documents, if you go to our website.

John Owen: So, under Full Monte here, you can request the free trial download, which gets you the software. It will work for 14 days, fully functional. After 14 days, it changes to what we call Academic Mode, where it will only work on projects of less than 100 tasks. So, that’s good for universities and training, and so on, or experimentation. But after 14 days, if you need to process 10,000 tasks, then you would need to buy a license. And then under How To Documents, we’ve got some further, sort of more friendly documents talking about particular aspects of schedule risk analysis, and using our software, along with some demonstration projects that you can download.

Kyle: Great, that’s perfect, John. We did have a question that came in from Andrew. He was just curious, just for clarification, if this is a standalone software, or if it’s web based?

John Owen: It’s integrated into Microsoft Project. So it is an add on. Technically, it’s a Visual Studio Tools for Office add in for Microsoft Project. So you do need to have either Microsoft Project Professional, or Standard, running on your desktop, to be able to use our software. If you’re using Project Online, you will need to have Project Professional to be able to connect to the projects in Project Online, and then you can use Full Monte to run the analysis against those projects.

Kyle: All right, great. I think that does it for questions that have come in, and we’ve got a couple minutes left. Anything else before we close out today, John?

John Owen: I can’t think of anything that we haven’t already briefly mentioned. I just want to thank everybody for taking the time out of their day. I hope it has been useful, and I would love to hear from you, if you wish to discuss, or have questions. So thank you very much.

Kyle: Excellent. Thanks so much for the session today, John. We really appreciate it. That was a great overview of the Full Monte tool. And those of you claiming the PDU credit for today’s session, I’ll get that info back on the screen for you. All right, today’s session is eligible for one Technical PDU.

Kyle: And if you missed any of today’s session, or would like to go back and review anything that John shared with us, or possibly share with your fellow colleagues, the recording will be posted to MPUG.com a bit later today, and you’ll receive an email in just a couple hours that will link to that. It will also include a link to the Barbecana website, as well. MPUG members have full access to our PDU eligible library of on demand recordings on MPUG.com.

Kyle: And we do have one more session in the vendor showcase series, that’s from Triskell, and they’ll be demoing their software for enterprise governance. That’ll be next Wednesday at noon, same place, same time, and we look forward to seeing you there. I just chatted a link out to the schedule of events, where you can register for that session, as well.

Kyle: All right, and that does it for today’s session. So, once again, thank you, John, for your time today, and for demoing the Full Monte tool, and thank you to everyone that joined us live, or is watching this on demand. We hope you have a great rest of your day, and we’ll see you back next week for the final vendor showcase session.

John Owen: Thank you, Kyle. Take care.

Kyle: Thanks, John. Bye-bye.

 
