A quick peek into PwC Ventures' Development Methodology
This post is a follow-on from my previous post, which discusses how our team runs as fast as a startup within the walls of a large corporate.
A lot of people have been asking me to dig deeper into the methodologies we employ in PwC Ventures from a developer's perspective. This article is an excerpt from my writings within our internal development handbook.
· · ·
In PwC Ventures we keep project management to a minimum, using as few tools, systems and processes as we can get away with. This is how we keep our team lean and nimble.
Tools
GitHub is our base of operations, into which we've integrated a handful of apps to help us run lean whilst delivering high-quality code. Here are our integrations:
- Circle CI for continuous integration and deployment
- Heroku & AWS for hosting (automatic deployments from master to staging; a sketch of this config follows the list)
- Code Climate to spot security issues and track code quality & test coverage
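To make the "merge to master, auto-deploy to staging" flow concrete, here is a minimal sketch of the kind of circle.yml that wires it together. The app name and Ruby version are illustrative rather than our actual config:

```yaml
# circle.yml - an illustrative sketch, not our exact file
machine:
  ruby:
    version: 2.3.1
test:
  override:
    - bundle exec rubocop   # fail the build on style/quality violations
    - bundle exec rspec     # run the full test suite
deployment:
  staging:
    branch: master          # every merge to master auto-deploys to staging
    heroku:
      appname: example-app-staging   # hypothetical app name
```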
We also use a number of other tools, such as RuboCop, to ensure code quality is maintained at a high level before it's pushed to GitHub.
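As an illustration, a trimmed-down .rubocop.yml along these lines keeps the style rules in the repo itself; the specific cops and limits below are examples rather than our exact rule set:

```yaml
# .rubocop.yml - an illustrative subset, not our full rule set
AllCops:
  TargetRubyVersion: 2.3
  Exclude:
    - "db/schema.rb"
    - "vendor/**/*"

Metrics/MethodLength:
  Max: 15            # keep methods short and focused

Metrics/LineLength:
  Max: 100

Style/StringLiterals:
  EnforcedStyle: double_quotes
```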
Discussions
Discussions happen in two places: Issues and Pull Requests. Issues are for discussing how a feature should work before a branch is created. Pull requests are for discussing how the feature is implemented once code has been committed.
Methodology
The product manager is responsible for keeping the issue backlog and sprint milestones in sync with the product roadmap. We manage our weekly sprints (Tuesday to Tuesday) using GitHub Issues, and typically overload each sprint by ~50%.
We always make quality the highest priority. Features have to solve their related problem completely in a considered manner, with high quality code and comprehensive testing. We don't rush deployments and don't set time deadlines.
Labels & Assignment
Everything should have someone assigned (unless the issue isn't in the current sprint, or the pull request is awaiting review).
Everything should also have labels attached. Labels are explained below:
- Size labels should be assigned by the person who picks up the issue. All PRs and Issues should have one size label.
- Type labels indicate what the issue relates to. All PRs and Issues should have at least one type label.
- Action labels indicate that someone has to do something; where appropriate, that person should be assigned to the PR.
- Emergency labels mean the issue must be dealt with immediately: drop what you're doing and close it out.
- Blocked labels mean someone or something is blocking the issue from being resolved. Anything with this label cannot be merged into master.
- The In progress label indicates the issue/PR is currently being worked on and should always have someone assigned.
- The QA needed label indicates that if the feature is not done correctly, the firm's brand reputation could be at risk. Anything with this label needs approval from the product manager or product owner before it's merged into master, and again before it is deployed.
- Bug labels indicate that something isn't behaving as expected. These are often coupled with Emergency labels.
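Because every repository needs the same taxonomy, the label setup is worth scripting rather than clicking through by hand. A hypothetical one-off script using the octokit gem might look like this; the names, colours and repository below are illustrative, not our exact set:

```ruby
# A hypothetical bootstrap script for the label taxonomy (illustrative values).
require "octokit"

client = Octokit::Client.new(access_token: ENV.fetch("GITHUB_TOKEN"))
repo   = "pwc-ventures/example-repo" # hypothetical repository

LABELS = {
  "size: S"        => "c2e0c6",
  "size: M"        => "fef2c0",
  "size: L"        => "f9d0c4",
  "type: feature"  => "1d76db",
  "action: review" => "fbca04",
  "emergency"      => "b60205",
  "blocked"        => "000000",
  "in progress"    => "0e8a16",
  "QA needed"      => "5319e7",
  "bug"            => "d93f0b"
}.freeze

# Create each label in the repository with its colour.
LABELS.each do |name, colour|
  client.add_label(repo, name, colour)
end
```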
Milestones
There are two milestones: "Current sprint dd/mm" and "Current release". The product manager manages the issues within the current sprint, with one exception: anyone on the team who spots a bug or hotfix can add it straight to the sprint. All other issues (which anyone can create) sit in the backlog without a milestone.
The Current sprint dd/mm milestone includes all issues that have been earmarked for completion within the current sprint. Just before someone merges a PR, they assign it to the Current release milestone. If there are steps that need to happen as part of the deploy for a milestone, note them in the milestone's description and replicate them in Slipway's deployment manifest.
Deployments
Senior developers are responsible for deploying code to production. Unless the "QA needed" label is applied, the developer who wrote the feature should QA it on staging and then let the senior developer working on that product know it's ready for deployment. We aim to deploy after each feature is merged. Post-deployment we closely monitor New Relic and Bugsnag.
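Those error reports are most useful when they carry the environment they came from. A sketch of a Bugsnag initialiser in that spirit, assuming a Rails app and the bugsnag-ruby gem (the stage names are illustrative):

```ruby
# config/initializers/bugsnag.rb - a sketch, not our production config
Bugsnag.configure do |config|
  config.api_key = ENV["BUGSNAG_API_KEY"]
  # Tag every report with the environment it came from, so the same error
  # can be traced from development through staging to production.
  config.release_stage = ENV.fetch("RACK_ENV", "development")
  # Only notify Bugsnag from these stages; development errors stay local.
  config.notify_release_stages = %w(staging production)
end
```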
Comments

Nick Whiteside (Founder & CEO at Checked): Great methodology Ben. We use a very similar approach/stack within the development team at EvolutionLive (with the exception of Heroku of course, being an Angular/Laravel stack ;). Did you also weigh up Sentry.io for post-deployment monitoring before choosing Bugsnag? Would be good to know your thoughts on that one.

Reply (Aerospace Engineer | Full-stack Founder | GAICD): Hey Nick Whiteside - I haven't heard of Sentry before. Looks like a good application. When we were initially looking at tools it came down to Rollbar and Bugsnag. I really like Bugsnag because, instead of installing different environment variables for each environment, you can pass through different environment names and track the same errors all the way up from dev, providing an extra source of truth to easily identify the release that caused them, or whether a similar error ever appeared in a past release. We also use New Relic to look for major changes in load time, both frontend and backend, which helps identify PRs that may contain code that a particular device/browser combo doesn't like.