Code review is too late for most automated analysis (at the level of "parameter isn't validated", as seen in the screenshot). It should ideally be done as a compiler/lint check in the IDE, and at worst as a git pre-commit hook.
In most cases it's not worth sending out a code review while there's automated feedback that can, and should, be addressed before a human sees it. Handling it first streamlines reviews for reviewers, and gives new contributors a much better experience: they have less feedback to address and more confidence that their code is correct.
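For the pre-commit option, here's a minimal sketch, assuming a Node/TypeScript project that already uses ESLint (the file name and extension filter are made up; the actual `.git/hooks/pre-commit` file would just be a one-liner that runs this script):

```ts
// pre-commit.ts (hypothetical name) — lint only the staged files
import { execSync } from "node:child_process";
import { ESLint } from "eslint";

async function main(): Promise<void> {
  // Files staged for this commit (Added/Copied/Modified)
  const staged = execSync("git diff --cached --name-only --diff-filter=ACM", {
    encoding: "utf8",
  })
    .split("\n")
    .filter((file) => /\.(ts|tsx|js)$/.test(file));

  if (staged.length === 0) return;

  const eslint = new ESLint(); // picks up the project's existing config
  const results = await eslint.lintFiles(staged);
  const formatter = await eslint.loadFormatter("stylish");
  console.log(await formatter.format(results));

  // A non-zero exit code is what makes git abort the commit
  const errors = results.reduce((sum, r) => sum + r.errorCount, 0);
  if (errors > 0) process.exit(1);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

You'd still want the same check in CI, since hooks are easily skipped with `git commit --no-verify`.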
One automated check I've really been wanting to add to our code review process is copied-code detection: flagging when a piece of code is very similar to code somewhere else in the product (though likely not part of the current review).
Obviously there are cases where this is a code smell, but plenty of times it isn't, so it's not something you'd want to block shipping on. But if you do want to ship code like this, the reviewer(s) should see exactly where it came from, so everyone can agree it's the best way.
No one has implemented a bot that would add such a comment for us, but I do think it would be a huge win.
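The core of such a bot doesn't have to be fancy. A naive sketch (all names here are made up): fingerprint overlapping windows of normalized lines across the repo, and have the bot comment whenever a window in the diff collides with one elsewhere.

```ts
import { createHash } from "node:crypto";

const WINDOW = 6; // consecutive lines per fingerprint; tune for sensitivity

// Strip whitespace differences so trivial reformatting doesn't hide a copy
function normalize(line: string): string {
  return line.trim().replace(/\s+/g, " ");
}

// Record every windowed fingerprint of a file into a shared index
function fingerprint(
  path: string,
  source: string,
  index: Map<string, string[]>,
): void {
  const lines = source
    .split("\n")
    .map((text, i) => ({ text: normalize(text), line: i + 1 }))
    .filter((l) => l.text.length > 0);

  for (let i = 0; i + WINDOW <= lines.length; i++) {
    const hash = createHash("sha1")
      .update(lines.slice(i, i + WINDOW).map((l) => l.text).join("\n"))
      .digest("hex");
    const hits = index.get(hash) ?? [];
    hits.push(`${path}:${lines[i].line}`);
    index.set(hash, hits);
  }
}

// Any fingerprint seen in more than one place is a candidate for a bot comment
function duplicates(index: Map<string, string[]>): string[][] {
  return [...index.values()].filter((locations) => locations.length > 1);
}
```

Real tools (SonarQube's duplication detection, PMD's CPD) work on tokens rather than lines, so they also catch copies where identifiers were renamed; a line-level version like this only catches near-verbatim copies.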
SonarQube has, among other things, code duplication detection, which finds blocks of code that are very similar but slightly different.
It's possible to wire things up so that it runs when you open a merge request and emails you the results.
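The exact wiring depends on your CI, but on GitLab it's roughly a job like this (the server URL is a placeholder, SONAR_TOKEN is assumed to be set as a masked CI/CD variable, and the email itself comes from SonarQube's own notification settings rather than from CI):

```yaml
# .gitlab-ci.yml (sketch) — analyze merge requests with SonarQube
sonarqube-check:
  image: sonarsource/sonar-scanner-cli:latest
  variables:
    SONAR_HOST_URL: "https://sonarqube.example.com"  # placeholder
  script:
    - sonar-scanner -Dsonar.qualitygate.wait=true
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```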
It's not a silver bullet though.
It also has a linter plugin for VS Code (and possibly other editors) that did NOT work well, at least the last time I checked, which was before February this year. Just thought you should know, in case you decide to just use the linter.
There's nothing more frustrating than a stupid machine telling me it knows better when it doesn't. I used to work on a team where the build defaulted to fail if something wasn't used, and debugging was a fucking nightmare: the moment you comment out a block of code, there's a never-ending cascade of warnings turned into errors (it was TypeScript, so it leaked all the way back to the module definition). I had to piss away some time writing a script to turn a bunch of the rules off, and then remember to re-enable them later or I'd break the build by breaking someone's OCD.
Well, don't turn warnings into errors, and doubly so in your development environment. That's not a good reason to skip running a linter while developing.
Though of course, the decision may have been out of your hands. The people who turn warnings into errors are usually the same ones who enable every linter rule; neither makes for good development practice.
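Concretely, most linters make this a one-word decision per rule. A sketch with ESLint's flat config: keep the unused-code rules at "warn" severity locally so they show up without failing anything, and let CI escalate with `--max-warnings 0` if you want them blocking there. (As far as I know, tsc's own `noUnusedLocals` can only hard-error, which is exactly the cascade described above, so the lint-level rule is what gives you a warning severity.)

```ts
// eslint.config.js (flat config) — sketch
export default [
  {
    rules: {
      // Report unused code without failing the build while developing;
      // run `eslint --max-warnings 0` in CI to make the same rules blocking
      "no-unused-vars": "warn",
      "no-unreachable": "warn",
    },
  },
];
```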
Your comment is more true today than it was back then (not that I disagree with you posting it). Back then, linters were a twinkle in the hopeful eye of software devs. Now they're battle-hardened.
It seems like autofixing linters got popular about 10 years ago? That was when they hit my radar, at least, though before that I wasn't really a software dev. Whenever it happened, it was a quantum jump in usability.
It's a 2018 paper, but definitely: it feels like something that's come to the forefront in the past 6-7 years, and I don't think people are leveraging it enough.
It's so nice to be able to write a compiler/lint rule with a quick fix, then see it catch my own mistakes and potentially remove a concern/checkbox from the code review template.
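For instance, a toy sketch of what that looks like with ESLint's rule API (the rule itself is deliberately trivial, and `var` → `let` isn't always scope-safe, so treat it as a sketch, not a real rule):

```ts
import type { Rule } from "eslint";

// Toy rule: flag `var` declarations and offer a quick fix to `let`
const rule: Rule.RuleModule = {
  meta: {
    type: "suggestion",
    fixable: "code", // this is what enables --fix and the IDE quick fix
    messages: { useLet: "Use `let` or `const` instead of `var`." },
  },
  create(context) {
    return {
      VariableDeclaration(node) {
        if (node.kind !== "var") return;
        const varToken = context.getSourceCode().getFirstToken(node);
        if (!varToken) return;
        context.report({
          node,
          messageId: "useLet",
          // A real rule would first verify the rewrite preserves scoping
          fix: (fixer) => fixer.replaceText(varToken, "let"),
        });
      },
    };
  },
};

export default rule;
```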