@nonfedimemes a simpler solution would be for devs' main systems to be low end
if it can run on their system, it'll run everywhere; but if it can't, they just can't develop it, and they need to optimise it anyway
Developers getting the fastest and biggest hardware money can buy is definitely a problem.
How about a chaos monkey approach, but for dev computers? A service that locks up 90% of RAM and all but one core at random times?
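For the record, a minimal sketch of what such a service could do on Linux (everything here is my invention — `maybe_degrade`, the probability, and the limits are made up for illustration):

```python
# hypothetical "chaos monkey" for dev machines (Linux-only sketch)
import os
import random
import resource

def maybe_degrade(prob=0.05):
    """With probability `prob`, pin the current process to a single CPU
    and cap its address space, roughly simulating a low-end machine."""
    if random.random() >= prob:
        return False
    os.sched_setaffinity(0, {0})            # restrict to one core
    one_gib = 1 << 30
    resource.setrlimit(resource.RLIMIT_AS, (one_gib, one_gib))  # ~1 GiB
    return True
```

Note these limits only affect the current process and its children; a real system-wide service would need cgroups or systemd resource controls. This just shows the idea.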
@wakame @SRAZKVT @nonfedimemes how about a separate job in the CI pipeline that runs the e2e tests on a really slow machine.
Run that baby nightly and reward the team that brings its average runtime per test on that machine down the most.
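A sketch of what that nightly job's measuring step could look like (`run_suite` is a stand-in for your real e2e entry point; all names here are invented):

```python
# sketch: time the e2e suite on the slow runner and append the average
# runtime per test to a log, so the trend stays visible over time
import time

def run_suite():
    # stand-in for the real e2e suite: pretend to run three tests
    for _ in range(3):
        time.sleep(0.01)
    return 3  # number of tests executed

def record_nightly_average(logfile="slow_runner_avg.log"):
    start = time.monotonic()
    n_tests = run_suite()
    avg = (time.monotonic() - start) / n_tests
    with open(logfile, "a") as f:
        f.write(f"{time.strftime('%Y-%m-%d')} {avg:.4f}\n")
    return avg
```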
@Bfritz0815 @wakame @nonfedimemes that's a ci job, you can just remove it
if devs only have low end hardware available, they have no other choice.
also, ci with rewards will push them toward premature optimisation, shitting up the code, instead of focusing on making it work and then optimising the parts that actually need it
@SRAZKVT @wakame @nonfedimemes as one of those "thems" being discussed here:
My development environment alone needs more resources than a low end machine even has.
Adding to that, as a fullstack dev I need additional resources to run the server locally.
And not everyone is "on a distributed thingy anyway": some of us would like to debug our code.
I'd like to add that a good employer provides their workers with tools that are good to use, not as punishment.
Next issue is: development is always about tradeoffs, and in a capitalist society (as ours is) the pressure is usually toward creating new functionality at the highest possible throughput.
If you want to improve any quality-related aspect, you need to make a convincing argument to management for why making one feature faster for 10% of users is worth delaying its delivery (and that of everything else in your queue) by x amount of time for all users.
This is where the law of diminishing returns comes in.
@Bfritz0815 @SRAZKVT @nonfedimemes
It definitely makes sense to have a fast dev machine. And a good internet connection (as developer).
But this experience is far off from what the majority of users will experience.
Therefore, having a few old laptops lying around with outdated drivers, WXGA resolution, display scaling at 127.3%, HDDs and three antivirus programs performing a slapstick scene in the background definitely makes sense.
@wakame @Bfritz0815 @nonfedimemes i'm sorry but i cannot see a good reason to have high end hardware as a dev, unless you're talking about tooling, at which point if your tooling can't run on low end hardware, it's probably just shit
@SRAZKVT @wakame @nonfedimemes you're entitled to your own opinion.
Just as i am entitled to my opinion that over-generalizations and a punishment-driven attitude do not lead to improvement but only resentment.
I strongly prefer to take a position that attempts to understand all involved sides and to figure out how to improve mutual awareness in a way that is both actionable and respectful.
That approach creates an environment where actionable decisions are made and constantly re-evaluated to improve them.
@Bfritz0815 @SRAZKVT @nonfedimemes
To be fair:
Most tooling really is shit, to use the colorful expression above.
We develop software with tools that would offend any "normal" user.
The typical compiler is still a monolith, almost every build process requires manual tuning, and syntax highlighting plus looking up library functions are the pinnacle of IDE evolution.
I fear that a similar thing to mass production has been happening for a while in software development:
A process of de-skilling that tries to tie developers to a large infrastructure, but ultimately makes them experts with a set of tools under someone else's control.
(This post is partly sponsored by some musings about 'agile' I had this morning.)
@wakame @SRAZKVT @nonfedimemes your suggestion does make sense.
My only concern is that if no one ever uses those laptops, we're back to square one.
That's what my idea with the special run of the e2e tests tries to address:
those can simply be configured to run as often as you like, so we always have up-to-date intel on how performance develops over time.
If something makes a noticeable change, that can trigger a maintainer or SRE to look into the latest changes to figure out where the change came from and to determine what can/should be done.
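To make "noticeable change" concrete, here's a tiny sketch of the trigger logic (the threshold and all names are my own invention, not an actual tool):

```python
# sketch: flag the newest nightly average when it deviates too far
# from the median of recent runs
from statistics import median

def needs_investigation(history, latest, threshold=0.20):
    """history: recent per-test averages (seconds); latest: tonight's value.
    Returns True when latest deviates from the recent median by more than
    `threshold`, i.e. when someone should look at the latest changes."""
    if not history:
        return False
    baseline = median(history)
    return abs(latest - baseline) / baseline > threshold
```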
@Bfritz0815 @SRAZKVT @nonfedimemes
That's where my (half-serious) suggestion of the chaos monkey comes in:
Setting your computer to a weird configuration 5% of the time.
Might actually be a mix of the "chaos monkey" and the "dog fooding" approach:
Force the developer to use their own application (and yes, if you are currently debugging an issue, then introducing random constraints is definitely not a good idea).
@SRAZKVT @wakame @nonfedimemes btw: in our company, someone only removes quality assurance CI jobs if they wish to receive a stern look from me.
Changing CI jobs is locked down to me and one other colleague, and we suffer no fools when it comes to QA.