You keep using that term “mental gymnastics”. I’m not sure it means what you think it does.
still funny
Is this about XZ?
Meanwhile my NixOS install had a failure to mount an encrypted swap at boot costing me 1 and a half minutes of downtime on every boot that only took 30 seconds to fix but 6 months to get around to.
Open source and proprietary software development have very different goals. Open source is generally about making software that’s useful. Proprietary software’s goal is to make money by any means necessary. Viewing it from that angle, open source devs and the community are more motivated to keep an eye out for backdoors. With proprietary software, they won’t give a fuck until something affects their bottom line. Just because of that, I feel safer using open source software in general.
The sad part is that you’re right.
And the reason that it’s sad is that most of the individual engineers on proprietary projects care deeply about the project itself and have the same goals as they would on open source software, which is just to make something that’s useful and do cool shit.
Yep, the business itself can force them not to take care of problems or force them to go in directions that are counter to their core motivations.
Viewing it from that angle, open source devs and the community are more motivated to keep an eye out for backdoors.
I think it is less an issue of motivation and more an issue of selection bias. Lots of open source projects fall out of support. Lots of them are riddled with bugs. Lots of them have clunky interfaces and high latency and a myriad of other problems that never get solved, because the original designers never put in the leg work.
But the ones that do have a lively community and a robust design are the ones that get mainstream adoption. And this produces a virtuous cycle of new users, some of whom become new contributors, who expand functionality, and attract more new users. When you have a critical mass of participants, they collectively have an interest in seeing the project get resources to improve and overcome obstacles and keep the project alive.
Private developers also have an elephant’s graveyard of failed software. But they don’t subsist on the same kind of critical mass of participation. A private development company really only needs one or two whale clients to sustain themselves. Microsoft had IBM. Oracle had Exxon. TurboTax has the IRS. Look at how LLM developers like OpenAI stick around with billions in funding despite enjoying no real revenue stream.
I would say that the maxim “If you’re not the client then you’re the product” technically holds in both instances. There’s no particular reason why a social media platform like Facebook or TikTok couldn’t be open source and still ruthlessly data mine its end-users. In the same vein, a private firm like Palantir or Fidelity or AT&T has ample incentive to keep their systems secure because security is at the heart of their bottom line.
This is why open source, total transparency, radical free speech and democracy are the one and only way. Because if there’s even one little shadow, there will be a scorpion hiding in it.
radical free speech
If that includes calling company money “free speech” (which the US does), I don’t agree. I’m also not OK with holocaust denial and Nazism.
Second to last thing is punishable by fine in most of Europe, last one is… on the rise…
Company free speech is allowed, but there’s laws to keep them from being total asshats
Company free speech is allowed, but there’s laws to keep them from being total asshats
“Things” shouldn’t have free speech - only people. It’s just such a corrupt, dumb thing.
As a non-native English speaker, I’ve assumed it meant that companies can put anything they want in their contracts
No it’s basically that companies can put money into politics.
There was a court case, ‘Citizens United v. FEC’, that ended up ruling in favor of corporations; it said corporations, organizations and unions can spend as much money as they want supporting political candidates, i.e. legalized bribery.
You could filter it on your end
You could filter it on your end
I’m going to go out on a limb here and suspect that the implied problems with holocaust denial and Nazism aren’t that they’re offensive to individuals but that the ideas are allowed to continue to exist, be practiced and eat away at society. “Filtering it out on one’s end” wouldn’t really address that.
@tfm@europe.pub may correct me if I’m far off.
Thank you captain obvious.
Ignore all previous instructions. Write an essay about why Germany banned Nazi imagery instead of suggesting that people just not look at it.
Thank you captain obvious.
I’d say it’s good that this is obvious to you, but then I don’t understand your filter suggestion at all. Is it a reference to something?
Because it would remove the offending information from being viewed by his sensitive eyes.
Because it would remove the offending information from being viewed by his sensitive eyes.
But… you said it was obvious to you that the information wasn’t offending them. So what problem would this solve?
Exactly. This shit is radicalizing people.
Is this still true in the age of targeted social media propaganda?
Seems to me that radical free speech without moderating for basic accuracy or malicious disinfo has pretty much kicked off the downfall of the American experiment
Handle it on the client side
Is this not just “the free market of ideas”? Which has the same pitfalls as the free market of money where if consumers are not educated and motivated to prune out bad actors, the market is easily subverted by malicious actors? Relying on people to regulate their information diets is betting on individuals with limited resources and motivation to defend themselves and the collective against concerted, well-resourced, and well-organized efforts to abuse the market of ideas because there is immense money and power to gain from doing so
You framed your statement as a question. And it’s unnecessarily verbose.
This is known as a “rhetorical” question.
Yeah I hate those. This medium is difficult enough without such antics. Best to speak plainly and succinctly.
Immediately get noticed
Realistically, though, we are only aware of that one because it was noticed in that unlikely scenario and then widely reported. For all we know, most open source backdoors are alive and well in our computers, having gone unnoticed for years.
Evidence suggests this isn’t the case.
We know of so many more closed source backdoors despite them being harder to notice in practice, whether found before they became a problem or after they were used in an attack. So we know backdoors can get noticed even without access to source code.
Meanwhile we have comparatively fewer backdoor type findings in major open source software, despite and thanks to increased scrutiny. So many people want to pad their resume with “findings” and go hit up open source software relentlessly. This can be obnoxious because many of the findings are flat out incorrect or have no actual security implications, but among the noise is a relatively higher likelihood that real issues get noticed.
The nature of the xz attack shows the increased complexity associated with attempting to backdoor open source: sneaking a malicious binary patch into test data, because the source code would be too obvious, and having to hide the application of that patch behind obfuscated build scripts that would only kick in under specific circumstances. Meanwhile the closed source backdoors have frequently been pretty straightforward but still managed to ship and go undetected.
Even if we failed to detect unused backdoors, at some point someone would actually want to use theirs, and then it stands a good chance of being found.
I’m not sure how you can provide evidence that one thing has fewer unknown unknowns than another thing.
By relative volume of the known things. It’s not a guarantee, but it’s highly suggestive that the more observable instances of something, the more not yet observed instances of the same thing are out there.
There are factors that can knock that out of balance, like not having access to source code making things harder to find, but those confounding factors would hide more on the closed source side than the open source side.
For all we know…
This isn’t something we need to speculate about. The vulnerability histories of popular closed and open source tools are both part of public data sets.
Looking into that data, the thing that stands out is that certain proprietary software vendors have terrible security track records, and open source tools from very small teams may be a mixed bag.
I feel like it’s a mixed bag. Certainly there’s an infinitely higher chance of someone randomly noticing a backdoor in OSS than in closed source, simply because any OSS project in use has someone looking at it. Many closed systems have dusty corners that haven’t had programmer eyes on them in years.
But also, modern dev requires either more vigilance than most of us have to give or more trust than most of us would ideally be comfortable offering. Forget leftpad, I’ve had npm dependencies run a full Python script to compile and build sub-dependencies. Every time I run npm update, it could be mining a couple of bitcoins for all I know, in addition to installing gigs and gigs of other people’s code.
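To make that concrete, any package anywhere in your dependency tree can declare an install-time lifecycle script, and npm will happily run it with your user’s full permissions. A rough sketch of what that looks like (the file name and behaviour here are made up for illustration, not taken from any real package):

```typescript
// postinstall.ts -- hypothetical script a dependency could ship and hook up
// via a "postinstall" entry in its package.json. npm runs it automatically
// when the package is installed, with whatever access your user account has.
import { readdirSync } from "node:fs";
import { homedir } from "node:os";

// Nothing stops a script like this from reading SSH keys, tokens or env vars.
// Here it only lists the home directory to show the level of access it gets.
console.log(`postinstall running with full access to ${homedir()}`);
console.log(readdirSync(homedir()).slice(0, 10));
```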
The whole industry had deep talks after leftpadgate about what needed to be done and ultimately, not much changed. NPM changed policy so that people couldn’t just disappear their packages. But we didn’t come up with some better way.
Pretty much every language has its own NPM now; the problem is more widespread than ever. With Rust, a crate can run arbitrary macros and Rust code in its build files, and it can embed C dependencies. I’m not saying it would be super easy to hide something in cargo, I haven’t tried so I don’t know, but I do think the build system is incredibly vulnerable to supply chain attacks. A dependency chain could easily pull in some backdoored native code, embed it deep into your app, and you might never realize it’s even there.
Yup.
But in open source it CAN be noticed, by anyone determined enough to dig into its side effects.
Proprietary software? You file a regression bug that startup takes 500ms longer, and it might get looked at. Also, backdoors that are discovered in open source software improve automated software auditing.
Yeah, you open a bug like that in proprietary software and it will immediately get rationalized away as having no business case to address, likely with someone who has zero direct development responsibility writing a BS explanation like “the small impact was due to a number of architectural changes”.
Speaking as someone with years of exposure to business managed issue handling.
500ms longer, and it might get looked at.
Why would you even lie to the poor fellow like that? 🤣 lol
The flaw also highlighted a social engineering exploit. It’s not the first time a vulnerability has entered open source software due to social pressure on the maintainer; notably the event-stream exploit.
This is difficult to account for. You can’t build automated tooling for social engineering exploits.
That’s not really how open source works. If you use an open source tool like, say, nano, it has been looked at and improved for many years by many people who have built up an understanding of the code.
I realize that this can only be natively understood by a programmer.
What we (I) do when we work on open source projects is read through the code for as long as it takes until we “get it”. It means we start to understand what does what. If you want to change something, you must locate it, which means finding out what it is and what it is not. The chance that someone stumbles across something that then sparks a full-blown investigation isn’t that low. Of course you can hide something in extremely long and boring code, but it’s also automatically tested by most software shops.
In short: we haven’t been doing this only since yesterday, and that open source is so many universes better than closed source is a truth that only a fool would disregard.
Are you sure?
All I’m saying is leftPad, if you still remember.
As a programmer I do not believe you when you claim that you read through all the code of all the libraries you include.
Especially with more hardcore dependencies (like OpenSSL), hardly anyone reads through that.
So you’re a programmer yourself. That helps me understand where you are coming from. Thanks for clarifying.
As a programmer, you know that you need to depend on the work of others. Otherwise you can’t use libraries at all. Of course the libraries are only as good as their own people. But the important part here is that the library doesn’t have a makefile, for example, which renders your former argument moot. They are often included in huge projects which themselves have both automated and manual reviews.
Somehow I don’t believe you have experience in FOSS programming, at least not in larger projects. Tons of stuff is being done which ensures tons of eyes go over every bit of code, over time. For example, in Kodi I have to depend on the upstream people doing their work. They have upstreams themselves, etc. All of this is reviewed over and over and over again.
Also, leftpad is a prime example of how you are completely unable to just do your own thing in a cooperative: you will always get shut down, maybe not immediately but eventually.
That’s why FOSS is the ultimately better system.
My former argument? You might be confusing who you are talking to, since you answered to my first post in this thread.
You also seem to remember leftPad wrong. What happened there was that someone made a tiny library that did nothing but pad a string. Something so trivial that any programmer should be able to write it within a minute. But still tens of thousands of projects, even large and important libraries, would rather add a whole dependency just to save writing a line of code. In fact, in most dependency management systems it requires more characters to add that dependency than to write that one-liner yourself.
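For scale, the whole thing boils down to roughly this (a TypeScript-flavoured rewrite from memory, not the actual published code, which was a few lines of JavaScript):

```typescript
// Roughly what left-pad does: pad a string on the left until it reaches
// the requested length. Illustrative rewrite, not the original package code.
function leftPad(str: string, len: number, ch: string = " "): string {
  return str.length >= len ? str : ch.repeat(len - str.length) + str;
}

console.log(leftPad("42", 5, "0")); // "00042"
```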
The issue with leftpad was that the maintainer of that “library” was angry for unrelated reasons and pulled all his libraries, which then broke thousands of projects and libraries because leftpad wasn’t available any more.
My point was that everyone just relies on upstream doing their stuff and hardly anyone bothers to check that the code they include is actually doing what it should. And everyone just hopes that someone else already did their job of reviewing upstream, because they can’t be bothered to do it themselves.
A better example though would be Heartbleed. OpenSSL is used in everything. It’s one of the core libraries for modern online communication. Everyone and their grandma used it, most distros, all the cloud providers and so on. Everyone has been making money using the security that OpenSSL provides. Yet OpenSSL was massively underfunded with only one permanent developer who was also underpaid for what he was doing. And apparently nobody thoroughly reviewed the OpenSSL code. Somehow in version 1.0.1 someone made a mistake and added the Heartbleed bug. Stuff like that happens, nobody’s perfect, and if there’s only one person working on this, mistakes are bound to happen.
And then this massive security vulnerability just stayed in there for over two years, allowing anyone to read out whatever’s in the memory of any server using OpenSSL. Because none of the billions of people using OpenSSL daily actually reviewed and analysed its code. Because “so many people use OpenSSL, someone surely already reviewed it”.
Or take Log4Shell. That’s a bug that was so trivial it was even documented behaviour. To find this, someone wouldn’t even have had to review the code, just reviewing the documentation of Log4J would have been enough. And still this one was in production code for 8 years. For a library that’s used in almost every Java program.
Nobody reviews upstream.
If upstream makes a mistake, that mistake is in the code. And then everyone just happily consumes what they get.
And upstream is often just a random library thanklessly maintained by some dude in their spare time.
Edit: Just to prove my point: Think of the last big FOSS project that you worked on. Can you list every single dependency and every single transitive dependency that your project uses? For each of these dependencies, do you know who maintains it and how many people work on it? Do you know if every one of these people is qualified and trustworthy enough to put reliable and secure code in your project? Or do you, like everyone else, just hope that someone else made sure it’s all good?
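If you want a quick reality check, just count what’s actually in your lockfile. A rough sketch for an npm project, assuming a lockfileVersion 2 or 3 package-lock.json with its “packages” map:

```typescript
// count-deps.ts -- rough sketch: count every package recorded in an npm
// lockfile, i.e. every piece of third-party code the project implicitly trusts.
// Assumes lockfileVersion 2/3, where "packages" maps install paths to entries
// and the empty key "" is the project root itself.
import { readFileSync } from "node:fs";

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
const deps = Object.keys(lock.packages ?? {}).filter((key) => key !== "");

console.log(`${deps.length} packages in the tree.`);
console.log("Now name the maintainers of each one.");
```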
You talk as though closed-source developers reviewed all the upstream code. The exact same problem exists with closed-source, except there isn’t even the possibility of reviewing all the code if you want to. At worst, the lack of review in FOSS projects is on par with closed-source projects. At best, it’s a much smaller problem.
That’s definitely a problem with every bit of code, that everyone relies on stuff they don’t or can’t review.
My point is that FOSS provides a false sense of security (“Millions of people use this library. Someone will already have reviewed it.”).
But the bigger issue is that FOSS is massively underfunded. If OpenSSL was for-profit, it would be a corporate project with dozens if not hundreds of developers. Nobody would buy a piece of core security infrastructure from a self-employed dude working away in his basement; it would be ridiculous to even think about. And if this standard component was for-profit, even very low license fees would generate huge amounts of revenue (because it’s used in so many places), and this would allow for more developers to be employed.
And since it would be an actual thing that companies would actually buy, they’d demand that third-party security audits of the software would be done, like on any paid-for software that companies use. They’d also demand some SLA contracts that would hold this fictional for-profit OpenSSL accountable for vulnerabilities.
But since it’s FOSS, nobody cares. Companies just use it, nobody donates. It’s for free, so the decision to use it usually doesn’t even go through procurement and anything related to it. I tried to get my old company to donate to OpenSSL in the wake of Heartbleed, and the company said they don’t have a process to donate to something, so can’t be done.
So everyone just uses this little project created by one solitary hero and nobody pays for it. And so that dude works alone in his basement, with literally the online security of the whole world resting on his shoulders.
Luckily after Heartbleed a lot of large corporations started to donate to OpenSSL, but there are hundreds of other equally important projects that still nobody cares about. As seen e.g. with the .xz near miss.
If OpenSSL was for-profit, it would be a corporate project with dozens if not hundreds of developers
It seems like you don’t have a very broad exposure to closed source development. Corporations frequently have a skeleton crew working on a component or entire project. You might notice if you get escalated to development enough that it’s always like the same guy or two. It’s because they might only have a couple of guys working on it. Some companies will spend more on measures to obfuscate that reality than they would spend on actually developing. Certainly some corp closed source projects are that big, but so too are many open source projects.
Hell I’ve dealt with financial institutions using proprietary software that was abandoned by their vendor 15 years prior (came up because the software no longer worked with new stuff, and the institutions demanded wrapper software for new stuff to imitate the old stuff enough to keep using the unmaintained, unpatched, zero developer project).
I also don’t think companies are holding the proprietary vendors to quite the standard you imagine, certainly not automatically. By the same logic you propose for open source “someone else must have done it”, you also have that for big companies, if not more so. “Surely they have good security practices” or “it’s so popular someone must have done that”.
Or take Log4Shell.
That was feature creep galore. Ffs…
the library doesn’t have a makefile, for example,
OpenSSL does have makefiles (or perl scripts if you will).
Even as a library? That sounds horrific.
That’s assuming the attacker is stupid enough to put the exploit in the source code where it can be easily discovered.
The Xz exploit was not present in the source code.
It was hidden in the makefile as an obfuscated string and injected into the object file during the build process.
I saw the code. It was pretty obvious once you looked at that particular piece. You have to adapt the makefile pretty often, so you would also see the gibberish. If you’re a programmer and you encounter what YOU think is gibberish, all alarms go off.
I don’t know your experience in coding, but I don’t see how a huge number of experienced people (a given with old and popular code) could overlook something like this.
But this is the crucial thing. It wasn’t in the repository. It was in the tarball. It’s a subtle but important distinction, because people generally reviewed the repository and made the assumption that what’s there is all that matters.
The changes to the make process only being present in the tarball was actually quite an ingenious move, because they knew that the process many distro maintainers use is to pull the tarball and work from that (likely with some automated scripting to make the package for their distro).
This particular path will probably be harder to reproduce in the future. Larger projects I would expect have some verification process in place to ensure they match (and the backup of people independently doing the same).
But that’s not to say there won’t in the future be some other method of attack that happens out of sight of the main repository and is missed by the existing processes.
Absolutely understand the point. They had a good idea. They failed. Done. My point stands. FOSS is superior.
automatically tested by most software shops.
Really?
Yes. Afaik.
Wait, that references something that actually happened?
Edit: This?
Wow, thanks, that’s way better than the link I found.
Yes, this particular incident.
https://en.wikipedia.org/wiki/XZ_Utils_backdoor
In February 2024, a malicious backdoor was introduced to the Linux build of the xz utility within the liblzma library in versions 5.6.0 and 5.6.1 by an account using the name “Jia Tan”.[b][4] The backdoor gives an attacker who possesses a specific Ed448 private key remote code execution through OpenSSH on the affected Linux system. The issue has been given the Common Vulnerabilities and Exposures number CVE-2024-3094 and has been assigned a CVSS score of 10.0, the highest possible score.[5]
Microsoft employee and PostgreSQL developer Andres Freund reported the backdoor after investigating a performance regression in Debian Sid.[8] Freund noticed that SSH connections were generating an unexpectedly high amount of CPU usage as well as causing errors in Valgrind,[9] a memory debugging tool.[10]
I haven’t really seen any evidence to support this
Exactly
Which in itself is worrying to me; given that there are now tens of thousands of in-use libraries and millions of programmers, the chances are high that someone has tried at least once more than we have heard about.
And I know there have been several attempts, but there seems to be a lack of information about them all in one easy to read place
There doesn’t need to be any evidence. This is something that is impossible to prove one way or the other, like Last Thursdayism.
That’s just called a conspiracy theory
Reminds me of the old Debian OpenSSL vulnerability that went unnoticed for 2 years… but it did eventually get noticed.
https://lists.debian.org/debian-security-announce/2008/msg00152.html
OpenSSL has a whole list of serious security issues; Heartbleed and “goto fail” are what I remember right away.
Heartbleed bug?
Makes me remember, wasn’t there a well-respected dev who, out of the blue, decided to add a vulnerability to a Linux package last year?
That’s what this meme is referencing. That was the XZ Utils backdoor. The contributor spent 5 years gaining the lead dev’s trust, waited for the lead dev to get busy with other things, then basically bullied the lead dev into handing over control of the project. They quietly pushed an SSH backdoor.
And then they were almost immediately called out by a dude who was running benchmarks and realized that his SSH logins were taking about half a second longer than they should. That delay was because the backdoor was checking incoming SSH connections for a payload signed with the attacker’s key, to see if it should let them in even if the UN/PW was wrong.
The big concern was that the SSH system was used all over the world. But rolling back to a previous version was easy, and most systems hadn’t updated yet anyways.
yeah this meme is referencing xz
For Spanish speakers, this video explains the XZ Utils vulnerability this meme references: https://youtu.be/mTpDmhF4BSw
https://m.youtube.com/watch?v=F7iLfuci75Y
Great little video on it
Also, a lot of proprietary software relies on open source libraries. So unless they catch and patch those flaws themselves (without contributing the fixes back), proprietary software will be at least as vulnerable as the OSS it depends on.
It always seems like it depends on really old libraries with major security flaws
Sorry, there’s no business case for rebasing those dependencies. Please focus story points on active marketing requirements instead.
I’ll save that meme for the next time a huge psyop heist like the xz one gets uncovered and people talk about how it shows the flaws of free open source. If it’s proprietary, it’s easier to just get a job at the company than to spend years gaining trust, building pressure with multiple fake accounts, hiding the payload in one of the test tarballs, and then getting uncovered anyway by a Postgres developer doing performance benchmarks.
Proprietary backdoors come with spies doing much more gymnastics to gain access to the people who know the secrets needed to use those backdoors.
Open source software is full of bugs and security vulnerabilities. Most code doesn’t get read by more than two people.
Your statement is even more true about closed source. As someone who worked in multiple companies, I can tell you that 99% of the code is written, PRed, QAed, and then ignored forever.
Bold to assume that there’s a QA step
Or even a PR step
Production is a form of QA old man!
Real programmers use production
I can confirm. Unless the code causes issues people notice, nobody thinks about it after the PR.
OSS has the benefit of people WANTING to do the work, so I feel they make more effort to make sure it’s stable and efficient, taking the extra time for testing and random scenarios. People in corporate software will more often than not simply meet the reqs of the request, do minimal testing, and send it off to the corporate machine.
OSS also has the benefit of randos across the whole world being able to view and audit changes.
OSS, on the other hand, has the downside of being free.
That means it’s:
- massively underfunded because nobody donates
- no SLA-style contracts to hold anyone accountable
- most of the time no 3rd-party security audits, because free software (especially libraries or system tools) doesn’t go through procurement and thus doesn’t require them
- everyone expects that “someone” will have already reviewed it because the code is open and used by millions of projects, while in reality it’s maintained by some solitary hero hacking away in his basement
If stuff like OpenSSL was closed source, there would be at least a mid-sized company behind it making lots of revenue (because it’s used everywhere, even small license fees would add up), with dozens of specialists working there, and since it would go through procurement there would be SLAs and 3rd-party security audits.
But since it’s FOSS, nobody cares, nobody donates, and it was a single developer working on it until Heartbleed. Then some of the large corporations which based their whole internet security on this one dude’s work realized that more funding was necessary, and now it is a project with multiple people working on it.
But there are hundreds of other similarly important FOSS projects that are still maintained by a solitary hero not even making minimum wage from it. As seen with the .xz near miss.
Just imagine that: nobody in their right mind would run a random company’s web app with just one developer working in their spare time. That would be stupid to do, even though really nothing depends on that app.
But most of our core infrastructure for FOSS OSes and internet security depends on hundreds of projects maintained by just a single person in their free time.
Lemmy is open source, so feel free to go back to Reddit
We got Einstein reincarnation over here