Keeping It Simple PMO Strategy
In the last year we started a journey to revamp our existing PMO. The goal was to adjust for lessons learned, match leadership changes, and align with what the company needed. We had a high-performing team, but there is always something to tweak in the never-ending pursuit of perfection. The effort was built around two key Epics: (1) using more Agile methodologies for delivery and (2) improving our ability to proactively report, track, and resolve project issues.
Becoming More Agile
Growing our adoption of Agile while building bench strength was a key goal. The larger organization (our customers) was transitioning to DevOps and using more Agile methodologies to deliver, just like the rest of the industry. At the same time, the waterfall-driven hardware projects that had overrun us in the past were declining. Requests for Scrum Masters and PMs with Agile backgrounds were increasing, but not everyone had enough experience to immediately jump in and be successful. These were all excellent PMs who had consistently delivered under pressure over the years, so it was evident that we needed to add Scrum and Kanban as available tools for these veteran resources. The problem we were attempting to solve was clear, but the path was not as immediately obvious.
Initial Iteration
The initial iteration grew out of a small team meeting where the focus was the need to become more Agile. One of the PMs spoke up, saying that Agile background or knowledge wasn't the issue. The issue was that our chosen method of running stand-ups put the PMs in uncomfortable positions. As a largely distributed team, we most often used JIRA to manage Sprints and Kanban boards. Her experience in Agile roles had been in the sticky-notes era, and running a stand-up through JIRA wasn't in her wheelhouse today. Thinking through the root cause identified two problems to solve that were really more about human behavior than tooling knowledge.
Very few professionals like to appear unprepared. If you're a junior Scrum Master starting to work with a new team, learning new technical jargon, trying to perfect the various ceremonies, and fumbling around with the tool, you're most likely going to push back for fear of failing publicly. For me personally, nothing sticks until you do it for real. As someone who learns by doing, I always like to have a real problem to solve rather than a fictitious exercise, and that shaped my direction.
My strategy for making people comfortable started with finding safe ways to learn privately and reducing the number of variables that could go wrong when doing it live. This approach would provide value to our team while letting people learn in a real-world situation. The initial plan was to use JIRA to run a basic Kanban board for managing our organization's transformation. In practice this meant a Scrum Master rotation, with initiatives like "Learning Paths", "Data-Driven Culture", and "Agile Standards" represented as Epics. Each Epic was assigned to a lead who would break it down into stories, recruit a virtual team of volunteers from the organization to help, and then report status back to the team. We quickly adopted Scrum as our chosen methodology, and the underlying teams were encouraged to operate in a similar manner. This gave us a next-level sandbox for the scrum-of-scrums concept, with weekly demos based on the underlying sprints of each Epic. It worked very, very well.
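The Epic-to-stories breakdown and status reporting described above can be sketched as a minimal data model. This is only an illustration of the structure we ran on the board; the names, point values, and status format are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    points: int
    done: bool = False

@dataclass
class Epic:
    name: str
    lead: str
    stories: list = field(default_factory=list)

    def status(self) -> str:
        """Percent of story points completed, reported back to the team."""
        total = sum(s.points for s in self.stories)
        done = sum(s.points for s in self.stories if s.done)
        pct = 100 * done // total if total else 0
        return f"{self.name} ({self.lead}): {pct}% complete"

# Example: one of the transformation Epics from the rotation
learning = Epic("Learning Paths", "lead-pm")
learning.stories += [Story("Draft curriculum", 5, done=True),
                     Story("Record JIRA walkthrough", 3)]
```

Each lead in the rotation effectively maintained one of these Epics in JIRA and demoed its status weekly.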
We had a ton of lessons learned, but what happened was a natural progression of the organization's use and understanding of Agile. You could visibly see and measure:
- How much more comfortable people were with the ceremonies and tooling of the everyday Scrum Master.
- Our ability to develop uniform standards across the 40 PMs, since the virtual team stand-ups became a live training ground for learning and process changes.
- The velocity and scale value of Agile. We cleared an immense backlog quickly by allowing the teams to work autonomously and providing quick feedback.
Velocity Data from our Process Improvements Scrum Team
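Velocity numbers like those charted above boil down to completed story points per sprint. A minimal sketch of the calculation, using made-up sprint data rather than our actual figures:

```python
def velocity(sprints):
    """Return completed points per sprint and the average across sprints."""
    completed = [sum(points for points, done in stories if done)
                 for stories in sprints]
    avg = sum(completed) / len(completed) if completed else 0
    return completed, avg

# Each sprint is a list of (story_points, completed?) tuples
sprints = [
    [(3, True), (5, True), (8, False)],   # Sprint 1: 8 points completed
    [(5, True), (8, True), (2, True)],    # Sprint 2: 15 points completed
]
per_sprint, average = velocity(sprints)
```

Tracking this number sprint over sprint is what let us show the backlog burn-down concretely instead of anecdotally.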
Second Iteration
As we said, people want to be comfortable. To further enable this, we looked across the organization and identified 2-3 key resources with considerable Agile experience. While experience with Agile was key, the ability to mentor and pass along that knowledge weighed even more heavily in our evaluation of each person. Following identification, we set up a small Agile coaching and implementation practice by earmarking 30% of the coaches' time to train people who were less experienced with Agile, document training and standards, and implement Scrum for engineering and dev teams wanting to adopt the process.
The process was simple to build out: we would identify, or be approached by, potential customers. Once this occurred we would assign a lead and get 1-2 junior Scrum Masters assigned to shadow the more senior resource. Once the team started to stabilize and understood the process, the Scrum Master role would be transitioned to the more junior person. The coach would gradually step back but stay involved to ensure whatever support was needed remained available. This helped us develop a bench of PMs with Scrum Master experience.
The Agile coaching team also spent a considerable amount of time documenting the process, recording training sessions, and building a process to get dev teams onboarded faster. Some of the more successful actions were (1) building a training deck used to educate and onboard new teams to the process and (2) recording live sessions. These could be a retrospective, sprint planning, or story pointing done live with a team, so people could see not just a lab simulation but the real questions that real teams ask.
Through the initial six months we onboarded at least 15 new Scrum Masters with real experience and at least 6-7 new teams from varying backgrounds. Neither of these two iterations was revolutionary; they were just basic solutions to a large problem.
Driving a Data-Driven Culture
When I took over the Cloud Delivery PMO, it came to my attention that some leaders had the impression we weren’t delivering their projects on time. But when I dug into the data, it turned out we were delivering on time in the great majority of cases – so why didn’t these leaders know that? Did we just need better portfolio level reporting, better change management, or was there another problem?
We started to investigate improvements by first looking at root causes. Through this analysis, the key problems were available headcount, tooling, and data quality. We ran a portfolio in excess of 600 projects with around 45 PMs, but headcount wasn't the first place to start investigating - we analyzed the projects themselves. Key issues identified included:
- Many of the projects were standard, assembly-line hardware builds that were repeatable but cumbersome to track.
- The existing PPM tool, while enterprise-quality, was heavy to maintain from a data-entry standpoint for the smaller, assembly-line projects.
- Reporting from the tool was complicated and sometimes not possible.
We also knew that to get quality portfolio reporting and remove the impression that everything was always late, we needed highly accurate data. With the current PPM tooling, that would have meant the PMs spending an enormous amount of time on data entry, taking time away from managing the actual projects. In my experience, that would also have driven a very unhappy and under-performing team, since good PMs want to solve problems, not enter data.
The strategy we implemented was simple. We replaced our internal PPM tool with a simple in-house tool purpose-built to track the minimum required data and to integrate reporting with other tools such as JIRA. This lowered the time spent on data entry and upkeep by 70%, and report creation went from hours to seconds in most cases. By integrating with the procurement system and JIRA, we automated schedule and data population so we could go to a "PM-light" model on 25% of the portfolio, allowing us to focus on other, less repeatable initiatives. This solved our resourcing issue and greatly improved our tooling situation.
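The heart of that automation was mapping data the teams already kept in JIRA onto the in-house tool's minimal schedule record instead of asking PMs to re-enter it. A sketch of that mapping, where the internal field names are hypothetical and a real integration would pull the issue JSON from JIRA's REST API:

```python
def to_schedule_record(jira_issue: dict) -> dict:
    """Map a JIRA issue export onto the internal tool's minimal record.

    Internal field names here are illustrative, not the real schema.
    """
    fields = jira_issue["fields"]
    return {
        "project_id": jira_issue["key"],
        "summary": fields.get("summary", ""),
        "status": fields.get("status", {}).get("name", "Unknown"),
        # Populated automatically from JIRA, not typed in by the PM
        "due_date": fields.get("duedate"),
    }

issue = {"key": "CLD-101",
         "fields": {"summary": "Rack build - site A",
                    "status": {"name": "In Progress"},
                    "duedate": "2020-06-30"}}
record = to_schedule_record(issue)
```

The "PM-light" model falls out naturally: for repeatable builds, most of the record stays current without anyone touching it.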
The next step was data. One of the new tool's features was better exports, which allowed for easier quality checks. The first quality check showed data quality of 9%! The checks covered basics like having a baselined set of milestones, a good executive summary, an accurate status color, etc. That first check gave us nowhere to go but up, and the goal I set was 95% quality data, or better, within four months. We used an Agile approach to see quick improvements, focusing on one set of fields in the first Sprint and adding the next set in the second, and we reached a 98% steady state on data accuracy well before the end of the quarter. This required consistent follow-up, reporting, and education. If everyone was failing a quality check, we held a quick training session or reviewed whether that check was valid. Adding the quality OKRs into the PMs' individual performance goals also helped us improve more quickly - they fully owned their results.
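A quality check of this kind is just a set of predicates run over the exported project records, with the score being the percentage of projects that pass every check. A minimal sketch, where the record field names and thresholds are illustrative rather than our actual schema:

```python
# Each check is a predicate over one project record (illustrative fields)
CHECKS = {
    "baselined_milestones": lambda p: bool(p.get("milestones_baselined")),
    "executive_summary":    lambda p: len(p.get("exec_summary", "")) >= 20,
    "valid_status_color":   lambda p: p.get("status_color") in {"green", "yellow", "red"},
}

def quality_score(projects):
    """Percent of projects passing all checks, rounded to one decimal."""
    if not projects:
        return 0.0
    passing = sum(all(check(p) for check in CHECKS.values()) for p in projects)
    return round(100 * passing / len(projects), 1)

projects = [
    {"milestones_baselined": True,
     "exec_summary": "On track; hardware delivery confirmed for June.",
     "status_color": "green"},
    {"milestones_baselined": False, "exec_summary": "", "status_color": "blue"},
]
```

Keeping the checks as a named table also makes the Sprint-by-Sprint rollout simple: the first Sprint ships one set of entries, the next Sprint adds more.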
As data quality improved, we were able to start building automated status reports and real-time dashboards, and began to experiment with calculations such as average baseline deviation by sponsor and phase. This gave us a bulletproof set of real-time dashboards that provided visibility into the root causes of issues outside our group that needed to be solved.
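The baseline-deviation metric mentioned above can be computed by bucketing projects by (sponsor, phase) and averaging the gap in days between the baselined and actual finish dates. A sketch with hypothetical record fields and data:

```python
from collections import defaultdict
from datetime import date

def avg_baseline_deviation(projects):
    """Average (actual - baseline) finish, in days, per (sponsor, phase)."""
    buckets = defaultdict(list)
    for p in projects:
        deviation_days = (p["actual_finish"] - p["baseline_finish"]).days
        buckets[(p["sponsor"], p["phase"])].append(deviation_days)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

projects = [
    {"sponsor": "Networking", "phase": "Build",
     "baseline_finish": date(2020, 3, 1), "actual_finish": date(2020, 3, 8)},
    {"sponsor": "Networking", "phase": "Build",
     "baseline_finish": date(2020, 4, 1), "actual_finish": date(2020, 4, 4)},
]
deviations = avg_baseline_deviation(projects)
```

Sliced this way, a consistently positive bucket points at a specific sponsor or phase as the bottleneck rather than at the PMO as a whole, which is exactly the root-cause visibility the dashboards provided.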
Conclusion
At the end of the day, none of this could be considered a revolutionary strategy destined to be tagged #disrupt or #nextgeneration on Twitter. The lesson I learned was that simple solutions fix complex problems. Clearly identifying the root cause and applying simple fixes is revolutionary in today's complex world, because we so often forget the basic concept of KISS (Keep It Simple, Stupid).
Very special thanks to OneTeam and OneTeam Leadership for helping lead these initiatives; many, if not all, were handled completely or initiated by them.
Special thanks to Craig Cowden & Dee Ann Gordon for helping review the article!