BeauRX Posted April 20, 2021 If I understand the Divide blend mode correctly, the attached file looks incorrect for two identical layers: with the exception of black (0) areas, every pixel should be x/x = 1, i.e. white. Both the Difference and Subtract blend modes produce the expected black (0) results. ColorPattern.afphoto
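For reference, here is a minimal sketch of the Divide math being described (my own illustration in NumPy, not Affinity's implementation; the handling of division by zero, clamping 0/x with x = 0 to white unless the base is also 0, is an assumption common to several editors):

```python
import numpy as np

def divide_blend(base, blend):
    """Divide blend: result = base / blend, clamped to [0, 1].

    Where the blend pixel is 0, clamp to white (1.0), except that
    0/0 is treated as black (0.0), matching the behavior the post
    describes for identical layers.
    """
    base = np.asarray(base, dtype=np.float64)
    blend = np.asarray(blend, dtype=np.float64)
    safe = np.where(blend > 0, blend, 1.0)          # avoid divide-by-zero
    out = np.where(blend > 0, np.minimum(base / safe, 1.0),
                   np.where(base > 0, 1.0, 0.0))    # x/0 -> white, 0/0 -> black
    return out

# Two identical layers: every non-zero pixel divides to exactly 1.0 (white);
# zero pixels stay 0 (black), as the post expects.
layer = np.array([0.0, 0.25, 0.5, 1.0])
print(divide_blend(layer, layer))  # -> [0. 1. 1. 1.]
```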
Staff Gabe Posted April 20, 2021 Hi @BeauRX. Thanks for spotting this. The issue has been logged. This only seems to be a Metal issue.
BeauRX Posted April 20, 2021 Author @Gabe I realize the team is small and Affinity is racing a half dozen competitors, but that seems like an even greater reason to employ automated nightly regression testing, to at least verify that the fundamentals are working. Imagine automatically doing hundreds of hours of testing every night. It would also let you focus on the more interesting/complex issues. It was a huge improvement in my life and software when we employed it. Hope this helps.
fde101 Posted May 22, 2021 On 4/20/2021 at 10:56 AM, BeauRX said: @Gabe I realize the team is small and Affinity is racing a half dozen competitors, but that seems like an even greater reason to employ automated nightly regression testing, to at least verify that the fundamentals are working. Imagine automatically doing hundreds of hours of testing every night. It would also let you focus on the more interesting/complex issues. It was a huge improvement in my life and software when we employed it. Hope this helps. Automated testing can be difficult, if not nearly impossible, to implement accurately for things like on-screen rendering, because of the way the test suite would need to hook into the code. Testing user interaction can also be difficult to implement well. Not that this is a bad idea (it certainly is a good idea in general), but you may not realize how much effort it would take to do this in a meaningful way with software of this nature if it wasn't engineered from the beginning to support doing so.
BeauRX Posted May 25, 2021 Author Actually, I do realize: my team did it on a 3 MLOC code base. It's not a panacea, and screen comparisons are not the best approach, but one can automate data processing on predefined tests (not random images) and, as you say, hook into the code to test the data output rather than the screen. The nightly testing was of huge value, especially as the code base expands and the permutations go through the roof.
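A sketch of the kind of data-level regression test being described (all names hypothetical; it exercises a blend routine on a predefined image and asserts on the output data, not a screenshot, using the Divide identity from earlier in the thread as the property under test):

```python
import numpy as np

def divide_blend(base, blend):
    # Stand-in for the render step under test: Divide blend, clamped to [0, 1].
    safe = np.where(blend > 0, blend, 1.0)
    out = np.where(blend > 0, base / safe,
                   np.where(base > 0, 1.0, 0.0))
    return np.clip(out, 0.0, 1.0)

def test_divide_identity():
    # Predefined, seeded test image (not a random one), so runs are reproducible.
    rng = np.random.default_rng(seed=42)
    layer = rng.uniform(0.01, 1.0, size=(8, 8))  # strictly non-zero pixels
    layer[0, 0] = 0.0                            # plus one black pixel
    expected = np.ones_like(layer)               # x/x = 1 everywhere...
    expected[0, 0] = 0.0                         # ...except 0/0, which stays black
    result = divide_blend(layer, layer)
    assert np.allclose(result, expected), "Divide blend regression!"

test_divide_identity()
print("ok")
```

A nightly job would run a battery of such checks (one per blend mode and per predefined input) and flag any drift from the stored expected output, which would have caught the Metal-only Divide bug without any screen capture.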