Hey all! My team at work is struggling with the growing pains of getting into a formalized review process, so I was wondering if any of you have things you live or die by in your code reviews. How much of it is manual, and how much is just static code analysis + style guide stuff, etc.?
I’m a senior at a large tech company - I push all the teams I work with to automate the review process as much as possible. At minimum, teams must have a CI hook on their pull request process that runs a remote dryrun build of the changed packages. The dryrun makes sure the packages compile, pass unit tests and meet linter rules. A failed build blocks the pull request from being merged.
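To make that concrete, here’s roughly what such a dryrun gate can look like as a script the CI hook invokes on each PR - the file name, tool choices (compileall, pytest, ruff) and the src layout are my assumptions, not any specific team’s setup; the only contract is that a non-zero exit blocks the merge:

```python
# ci_dryrun.py - hypothetical gate script invoked by the PR pipeline.
# Any failing step exits non-zero, which is what blocks the merge.
import subprocess
import sys

CHECKS = [
    ["python", "-m", "compileall", "-q", "src"],  # packages at least byte-compile
    ["python", "-m", "pytest", "-q"],             # unit tests pass
    ["ruff", "check", "src"],                     # linter rules pass
]

def main() -> int:
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print("dryrun failed on:", " ".join(cmd), file=sys.stderr)
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(main())
```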
I try to encourage developers to turn the outcome of every code style discussion into a lint rule that fails the dryrun build when it’s violated. It saves time by automating something devs were doing manually anyway (reading every line of code looking for their code style pet peeves), and it also makes the dialogue healthier - devs can talk about the team standards and whether the code meets them, instead of making subjective comments about one another’s code style.
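As an example of what I mean: once the team agrees on something like “no print() calls in library code”, that decision can live as a small check that fails the dryrun, instead of being a recurring review comment. A minimal sketch using a plain AST walk (your actual lint framework will have its own plugin API - this is just to illustrate the idea):

```python
# no_print.py - hypothetical team rule turned into a check: no print() in library code.
import ast
import pathlib
import sys

def violations(path: pathlib.Path):
    tree = ast.parse(path.read_text(), filename=str(path))
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            yield f"{path}:{node.lineno}: print() call in library code"

def main() -> int:
    problems = [msg for f in pathlib.Path("src").rglob("*.py") for msg in violations(f)]
    for msg in problems:
        print(msg, file=sys.stderr)
    return 1 if problems else 0  # non-zero fails the dryrun build

if __name__ == "__main__":
    sys.exit(main())
```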
Current place:
- Work is done on a feature branch on a personal fork of the repo
- Codebase requires 100% functional coverage, and you’re responsible for writing the tests for your code and including them in the same PR (see the sketch after this list)
- Pre-commit hooks run style auto-formatters before you can commit and push your code to your origin fork
- Ideally run local tests as well
- Create a PR to pull feature branch into the upstream repo’s main branch, which triggers the CI pipeline for style and tests
- At least 1 other person must review the code before a PR can be approved and merged into upstream main
- There’s a separate CI pipeline that tests the publishing process by uploading the packages to TestPyPI
- Releases to PyPI are currently done manually
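For the coverage gate mentioned above, the CI test stage can be as small as a pytest-cov invocation with a hard floor. A sketch, assuming pytest-cov and a src layout (the script name is made up):

```python
# run_tests.py - hypothetical CI test stage enforcing the 100% coverage rule.
# Needs pytest and pytest-cov; exits non-zero (failing the pipeline) if any
# test fails or coverage drops below 100%.
import sys
import pytest

if __name__ == "__main__":
    sys.exit(pytest.main(["--cov=src", "--cov-fail-under=100", "-q"]))
```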
Small startup - here is our process:
- Create your branch
- Implement feature
- Test independently of other components (with unit tests or otherwise)
- Test directly with other components
- Work with other devs to ensure stability on dev branch, make any small bug fixes directly in dev branch
- Push to prod
At my work we use trunk based development, with mandatory review before merging (enforced by tooling). Part of the review is ensuring proper test coverage and it’s enhanced by static code analysis.
We use a version of Git Flow for branching (since everyone is talking about branching strategies here). But technically, you asked specifically about the code review process. Every ticket is its own branch against the development branch, and when complete it is merged by PR into the development branch. We’re a small team, so our current process is:
- Merges to the development branch require one approval
- Merges to the main branch for a release require two approvals
- If the changes are only code, any developer can review and approve
- If there are “significant” SQL changes, a DBA approval is required.
- “significant” means a new entity in the DB, or…
- an inline/Dapper query with a join
As we grow we’ll probably have to silo more and require specific people’s approval for specific areas.
A lot of what we do is “cultural”, like encouraging readability, avoiding hard-coded values, and fixing issues near the altered code even when not related to the original ticket. The key is to be constructive. The goal is better code, not competition. So far we have the right people for that to work.
I work on an old monolithic system (roughly 20 years old). It’s a case management system. We’ve been through many iterations of our workflow over the years I’ve worked there. Our current workflow is:
- Create a branch from main
- Create a PR against main when ready to merge
- Every night a new build of the program is automatically built and pushed to a testing environment
- Every Wednesday night we deploy a new version. We deploy the version that has been fully tested. So for instance, if we built v9.101 on Tuesday night, but that build still has untested features, we deploy v9.100, which was built Monday night, etc.
- If there is a major bug in production, we
- temporarily deactivate merging.
- fix the issue in main.
- build a new version, push it out to a test environment
- our product owners test the fix along with other new features that have found their way into the build.
- Once tested OK, we decide if the fix is important enough to take the system down during working hours, or if it can wait for a night-time deploy.
- We open up merging again.
The obvious downside to this way of working is that the product owners might have quite a few features to test if there is a hotfix and other features have been merged since the deploy. This has almost never been an issue, though. We almost always deploy the very latest version on Wednesdays, and if there is a major issue, it’s usually discovered and reported before 11 am on Thursday. So the number of other features that have found their way into the code is usually 1 or 2 at most. Each feature is usually quite small in scope, so it’s actually worked quite well for us.
How many developers do you have that you have to disable merging? Or is it more a safety-net?
Purely a safety net to avoid mistakes.
Merging straight from feature -> master is gutsy
Well, I told a slight lie. We go against develop. However, builds are created from develop. So we don’t actually use main anymore. We used to, but our workflow changed over time until we got to the point where we didn’t use main anymore. I’m not even sure what we would use it for if we tried.