Sure, but I don’t think Waterfall is going to save you from the soul-crushingness in such an environment.
To Technology@lemmy.world, "public services of an entire german state switches from Microsoft to open source (Libreoffice, Linux, Nextcloud, Thunderbird)":

Or way worse, what you said but senior techs.
Microsoft has been at this long enough that there is an army of old guys whose only - but extremely specialized - skillset is navigating arcane GUIs for group policies and AD administration. But drop them in a bash terminal and they’re like a fish dropped on a tennis court.
How much of it is due to Agile (which is a very broad concept, even though some people mistakenly equate it with Scrum), and how much is due to corporate pressures and inadequate processes, though?
I find Agile conceptually meshes a lot better with “standard” product and solutions development thanks to the tighter feedback loops and increased reliance on local expertise over centralized planning. This only gets truer as project complexity grows.
However some companies try to make Agile work with top-down decision making and/or hard deadlines, which are deadly antipatterns. As for lack of time/resources and/or timesheet micro-management, this isn’t a problem unique to Agile nor something that waterfall is exempt from.
Good agile teams are mostly independent and can define their own testing/release cycle as required for a given project. Of course, when that happens there are at least a couple layers of management who feel a burning itch to stuff their dirty noses where they don't belong, because if the team succeeds despite their lack of direct involvement then everyone might realize the emperor has no pants.
That may be true in some truly well-organized (usually "legacy big corpo") companies.
Where I’ve worked it’s more like:
- Requirements only cover user-facing features, if that. (Not so) senior engineers are left to bridge the gap between UI mockups and literally everything else.
- Implementation issue is accidentally introduced
- Priority on the bug is lower than on new features, so no one can justify working on it
- One day a dev might be personally annoyed enough by the issue that they fix it as part of some tangentially related work. Otherwise it stays like that forever.
That is a basic side-effect of Agile development. If you have implementation details figured out to such an extent before writing the code, you are not doing agile, you are doing waterfall. Which has a time and a place, but that time and place is typically banking or medical or wherever you’re okay with spending several times the time and money to get maximum reliability (which is a different metric than quality!).
I bet NVIDIA has driver crashes to figure out, and I know which of those issues I'd want them to focus on first if I used their Windows driver.
To News@lemmy.world, "Trans man uses women's restroom to follow the law. Police detained him for it anyway.":

If god didn't want my buttcheeks poopy, then why did he make them hairy?
Downmixing is a pretty straightforward affair. You have 6 channels and need to go to 2, so each output channel is just a weighted sum of the relevant input channels (front, center, surround, and optionally the LFE).
Good media players (Kodi) allow you to change those weights, especially for the center channel, and to reduce dynamic range (with a compressor). Problem solved, the movie will be understandable even on shitty built-in TV speakers if you want to do that for some insane reason.
The problem is that there are “default” weights for 2.0 downmixing that were made in the 90s for professional audio monitoring headphones, and these are the weights used by shitty software from shitty movie distributors or TV sets that don’t care to find out why default downmixing is done the way it is. Netflix could detect that you’re using shitty speakers and automatically reduce dynamic range and boost dialogue for you, they just DGAF. But none of that is the movie’s problem.
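For the curious, here's roughly what that downmix looks like. This is a minimal numpy sketch, not any particular player's implementation; the ~0.707 gains are the classic -3 dB defaults, and the function and parameter names are made up for illustration:

```python
import numpy as np

def downmix_5_1_to_stereo(fl, fr, c, lfe, sl, sr,
                          center_gain=0.707, surround_gain=0.707, lfe_gain=0.0):
    """Weighted 5.1 -> 2.0 downmix; each argument is an array of float samples.

    Bumping center_gain up (e.g. to 1.2) is the "boost dialogue" knob,
    since the center channel is where the dialogue lives.
    """
    left = fl + center_gain * c + surround_gain * sl + lfe_gain * lfe
    right = fr + center_gain * c + surround_gain * sr + lfe_gain * lfe
    # Scale both channels together so nothing clips past full scale.
    peak = max(np.max(np.abs(left)), np.max(np.abs(right)), 1.0)
    return left / peak, right / peak
```

Good players expose those gains as settings; bad ones hard-code the 90s defaults and call it a day.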
I had a 5.0 setup before I even bought my first TV. I was just using my PC monitor until then.
It’s counter-intuitive but decent sound comes first. I’d much rather watch Interstellar in 360p with 5.1 audio than in 4K OLED HDR with built-in speakers.
But when you say that, people get mad, because they spent a grand on a TV that sounds like shit and feel they have to defend their choices.
> people watch stuff on TV and cannot hear any dialogue
did you read anything I said or do you just want to complain?
> have a doctorate on audio / put in thousands of dollars into a hobby
Good news then, a more-than-decent 5.1 setup can be had for ~500 €. A decent soundbar for a few hundred.
> and let people like you mess around with the settings for your home cinema
I can’t if the audio source is fucked up because directors have been forced by studios to release with low dynamic range.
My whole point is that your audio goes Master -> 5.1 channels -> downmixer -> your shitty 2.0 channels speakers and my audio goes Master -> 5.1 channels -> receiver -> my 5.1 setup.
You’re asking the master to change to fit your needs. I’m asking the media players to fix their fucking downmixers because that’s where the problem lies. Leave the studio mastering alone god damn it.
Where do you draw the line? If you use a soundbar, someone else is complaining because they use their built-in speakers. But if you optimize for that, someone else is using their laptop speaker on the train.
What really pisses me off with this “argument” is that the audio information is all right there, which you would know if you bothered to read the second half of my comment before getting all pissy.
5.1 audio (and the standards that superseded it in cinemas) has multiple audio channels, one of which is dedicated to dialogue. If you have a shit sound system, the sound system should be downmixing in a way that preserves dialogue better. Again, the information is all right there: there is no stereo track in most movies, your player is building it on-the-fly from the 5.1 track. It's not the director's fault that Netflix or Hulu is doing an awful job of accounting for the fact that most of their users are listening on a sound setup that can barely reproduce intelligible speech.
Nah, I have a good sound setup and I don't want to be watching movies with less dynamic range because some people are using their shrill built-in TV speakers with their children screaming in the background, or $5 earbuds.
If you don’t want to have a proper 5.1 audio setup, it’s not the director’s problem, it’s the media player. Audio compression, center channel boosting, and subtitling are things that media centers have been able to do for decades (e.g. Kodi), it’s just that streaming platforms and TVs don’t always support it because they DGAF. Do look for a “night mode” in your TV settings though, that’s an audio compressor and I have one on my receiver. If you are using headphones, use a media player like Kodi that allows you to boost the center channel (which is dedicated to dialogue).
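If you're wondering what that "night mode" compressor actually does, here's a toy sketch of the idea, assuming samples normalized to -1..1. It's a static compressor with no attack/release smoothing, so not how a real receiver implements it, but the effect on dynamic range is the same:

```python
import numpy as np

def night_mode(samples, threshold=0.25, ratio=4.0):
    """Squash everything louder than `threshold` by `ratio`.

    Quiet dialogue passes through untouched; loud effects get pulled
    down toward it, which is why the movie stays intelligible at low volume.
    """
    out = samples.copy()
    amplitude = np.abs(samples)
    loud = amplitude > threshold
    # Above the threshold, only 1/ratio of the excess level survives.
    out[loud] *= (threshold + (amplitude[loud] - threshold) / ratio) / amplitude[loud]
    return out
```

Real compressors work on a sliding loudness envelope rather than per-sample peaks, but that's the knob your "night mode" setting is turning.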
You know, maybe my grandparents had it right.
It is weird that computers give so little sensory feedback for what they're doing. Flashlights go click. Cassette decks go clack-vrrrr. Whiteboards go squeak-squeak. Screen sharing goes… nothing, just a small mostly white rectangle on top of my much bigger rectangle until a disembodied, 4 kHz-wide simulacrum of someone's voice from halfway around the world says "yeah we see your screen". Unnatural is what it is.
Plenty of cars flash their brake lights when ABS(/ESP?) engages, which is reasonable and should be a legal requirement IMO.
There’s lots of room to give additional info in between that and “brake light is on because the driver doesn’t understand that they can do mild adjustments by letting off the gas / stupid bitch-ass VW PHEV computer thinks using cruise control downhill with electric regen requires the motherfucking brake lights”. It’s like no-one realizes or cares that brake lights lose all purpose if they’re on when the car isn’t meaningfully decelerating. ARGH.