this post was submitted on 14 Aug 2025
159 points (93.4% liked)

Ask Lemmy


I support free and open source software (FOSS) like VLC, qBittorrent, LibreOffice, GIMP...

But why do people say that it's as secure or more secure than closed source software?

From what I understand, closed source software doesn't disclose its code.

If you want to see the source code of Photoshop, you actually need to work for Adobe. Otherwise, you need to be some kind of freaking reverse-engineering expert.

But open source projects have their code available to the entire world on websites like GitHub or GitLab.

Isn't that actually also helping hackers?

top 50 comments
[–] Capricorn_Geriatric@lemmy.world 8 points 2 hours ago

It's not "assumed" to be secure.

It's out there and visible for all to see. Hopefully, someone knowledgeable has taken it upon themselves to take a look at the software and assess its security.

The largest projects, like the ones you named, are popular enough that there's no shortage of people taking a peek.

Of course, that doesn't mean actual security audits are uncalled for. They're necessary. And they're being done. And with the code out there, any credible auditor will audit all the code, since it's available.

Compare that to closed-source.

With closed-source, the code isn't out there. Anyone can poke around, sure, but that's like poking a black box with a stick. You can infer some things, and there are some source code leaks, but it isn't all visible. This is also much less efficient and requires much more work for a fraction of the results.

The same goes for actual audits. Usually not all source code is handed over to the auditors, so some vulnerabilities remain uninspected and dormant.

Sure, not having the code out there is "security". If someone doesn't see the code, it's much harder to find the weakness. Harder, but not impossible.

There's a lot of open-source software. There's also a lot of closed-source software, much more than the open-source kind, in fact.

What open-sourcing does is increase the number of eyes looking at the code. And each of those eyes could find a weakness. It might be a bad actor, but it's most likely a good one.

With open source, any changes are publicly visible, and any attempt to sneak a backdoor in has a much higher chance of being seen, again due to the large number of eyes which can see it.

Closed-source code also gives lazy programmers an easy excuse for not fixing, or even introducing, vulnerabilities: "no one will know". With open source, again, there are a lot of eyes on the code, not just the one programmer team making it and another auditing it, as is often the case.

That's why open source software is safer in general. Precisely because it's available, attacking it might seem easier. But for every bad actor looking at the code, there are at least ten people looking who aren't. And if they spot a vulnerability, they'll report it.

Security with open source is almost always proactive, while with closed source it's hit-or-miss. Many vulnerabilities have to cause an issue before being fixed.

[–] ArchmageAzor@lemmy.world 4 points 3 hours ago* (last edited 2 hours ago)

Open source has more eyes looking over the code, more chances to catch some would-be loophole or exploit. Closed source stuff may have a team of qualified engineers, but there are only so many people on that team, and anyone can get tunnel vision.

[–] JackbyDev@programming.dev 8 points 4 hours ago

A better question may be, why do you assume closed source software is secure? If nobody can see the code, how can we verify it is safe? Don't they have to be some sort of reverse engineering expert to prove it's safe?

[–] MTK@lemmy.world 6 points 5 hours ago* (last edited 5 hours ago)

What is more secure, a secret knock or an actual lock?

The lock is something that everyone can look up, research, and learn about. Sure, that means people can learn to lockpick, but a well designed lock can stump even the best lockpickers.

A secret knock is not secure at all. It sounds secure, but in reality it is just obscure, and if anyone learns it, or it's simple enough to guess, it becomes meaningless. Even a bad lock, on the other hand, will show signs that it was picked.

So that's an analogy, here is the actual explanation:

Let's assume we have a closed source product named C and an open source product named O, and that the security and quality of the code are the same. Both products are compiled and have been in active development for years. Both have a total of two different people going over each code change of each new version: one person writes it, another reviews the code and approves it. After years of development you probably have about 10 people in total who have actually seen the code. Anything they missed will go unnoticed, any corners they decided to cut will be approved, any bad decisions they made will not be criticized.

Here is where C and O differ: C will stay in this situation forever, only rarely getting feedback from researchers who found vulnerabilities and decided to report them. O will get small parts of it reviewed by hundreds of developers, and maybe even fully reviewed by a few people. Any corners that O cuts will be criticized; any backdoor that O tries to implement will be clear to see.

C has one small advantage: bad actors will have a harder time finding vulnerabilities in it, because it is compiled and they would have to reverse engineer it, while O is clear for the bad actors to read. But bad actors are a very small minority. Any vulnerability in O is far more likely to be caught by good actors, while C is very unlikely to be reversed by any good actors at all, so if it has any vulnerabilities, they are far more likely to be found by bad actors first.
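To put rough numbers on that reasoning, here is a back-of-envelope sketch. The reviewer counts and the per-reviewer detection probability are invented for illustration, not figures from the comment; the only point is how fast "at least one good actor spots the bug" grows with more eyes:

```python
# Toy model: each reviewer independently spots a given vulnerability
# with probability p, so n reviewers find it with probability 1-(1-p)^n.
p = 0.05  # assumed per-reviewer detection probability (made up)

for label, n in [("closed source C, ~10 insiders", 10),
                 ("open source O, ~300 outside reviewers", 300)]:
    found = 1 - (1 - p) ** n
    print(f"{label}: P(spotted) ~ {found:.0%}")
```

With these made-up numbers, C lands around 40% while O is effectively at 100%, which is the asymmetry described above.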

And it is important to note the conflict of interest that often exists in closed source software. A company that sells a product for profit and believes that its code is hidden has very little interest in security, and almost no interest in end-user security. But if the code is not hidden, the company has an interest in producing reasonably secure code to maintain its reputation.

So almost always, open source leads to safer code for all parties involved.

[–] Nibodhika@lemmy.world 8 points 7 hours ago (1 children)

It's simple really: you have two people selling you a padlock. One runs a challenge where anyone who can break it earns bragging rights; the other's padlock comes sealed in a black cardboard box that you can't remove. Would you lock your stuff with something that tells people "I'm secure, prove me wrong", or with something that could be anything from a padlock that will close and never let you open it again to an empty cardboard box that anyone can break with their hands?

It's the same thing with software. You need to realize that for every black hat (what people refer to as hackers) out there, there are dozens of white hats (security experts who earn their living by finding and reporting security flaws in software). So for open source software, the chance of a security issue being found by a white hat is much higher, and if it's found by a black hat, you have millions of people trying to figure out how he did it, where the vulnerability is, and how to fix it. Whereas with closed software you never know if it has been breached, and white hats can't investigate and find a solution, so you depend on the security team from the company (which is most likely a small team of maybe 5 people if we're being generous) to figure it out and make a fix.

[–] fatalicus@lemmy.world 5 points 6 hours ago (1 children)

Then you have the padlock makers that say "Our lock is secure, prove us wrong", then sue you when you do.

[–] BudgetBandit@sh.itjust.works 1 points 3 hours ago* (last edited 3 hours ago)

Lawful Good: "Hello! I’m the lockpicking lawyer and today we have ……"

Chaotic Good: https://m.youtube.com/shorts/1HS-duJa8DU

[–] descartador@lemmy.eco.br 1 points 6 hours ago

The obscurity method really isn't all that good.

[–] lucullus@discuss.tchncs.de 14 points 23 hours ago

Otherwise, you need to be some kind of freaking reverse-engineering expert.

Nah, often software is stupidly easy to breach. Often it's an openly accessible database (like recently with the Tea app), or you can pull other users' data from the web app just by incrementing or decrementing the ID in your web request (that commonly happened with quite a number of digital contact tracing platforms used during Covid).
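That ID-incrementing pattern is usually called an insecure direct object reference (IDOR). A minimal sketch of the probe, with a hypothetical endpoint (example.test and /api/records/ are placeholders, not a real service):

```python
# If records use sequential IDs and the server never checks who is
# asking, simply counting up or down through IDs dumps other people's data.
import urllib.request

BASE = "https://example.test/api/records/"  # hypothetical endpoint

for record_id in range(100, 105):  # increment/decrement the ID in the request
    try:
        with urllib.request.urlopen(BASE + str(record_id)) as resp:
            # HTTP 200 on a record you don't own means the app is leaking
            print(record_id, resp.status, resp.read(80))
    except Exception as exc:
        print(record_id, "rejected:", exc)
```

No reverse engineering required: this bug is visible from the outside, closed source or not.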

Very often the closed source just obscures the screaming security issues.

And yeah, there are not enough people to thoroughly audit all the open source code. But there are more people doing that than you think. Another thing to keep in mind is that reporting a security problem with a piece of software or a service can get you in serious legal trouble depending on your jurisdiction, justified or not. Corporations won't hesitate to SLAPP-suit you out of existence if they can hide the problems that way. With open source software you typically don't have problems like this, since collaboration and transparency are baked into it.

[–] fmstrat@lemmy.nowsci.com 20 points 1 day ago (1 children)

Others have mentioned this, but to make sure all context is clear:

  • FOSS software is not inherently more secure.
  • New FOSS software is probably as secure as any closed source software, because it likely doesn't have many eyes on it and hasn't been audited.
  • Mature FOSS software will likely have more CVEs reported against it than a closed source alternative, because there are more eyes on it (see the sketch after this list).
  • Because of bullet 3, mature FOSS software is typically more secure than closed source, as security holes are found and patched publicly.
  • This does not mean a particular closed source tool is insecure, it means the community can't prove it is secure.
  • I like proof, so I choose FOSS.
  • Most people agree, which is why most major server software is FOSS (or source-available).
  • However, that's also because of the permissive licensing.
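Bullet 3 is easy to see in public vulnerability databases. Here's a hedged sketch that counts published advisories for an open source package via the OSV API (https://osv.dev); the package choice is arbitrary:

```python
# Query OSV for all published advisories affecting a given package.
import json
import urllib.request

query = {"package": {"name": "pillow", "ecosystem": "PyPI"}}
req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    vulns = json.load(resp).get("vulns", [])

# A long list here is evidence of scrutiny, not of unusual sloppiness.
print(f"{len(vulns)} published advisories for pillow (PyPI)")
```

A closed source product with zero published CVEs may simply have had zero outside auditors.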
[–] liquefy4931@lemmy.world 3 points 8 hours ago

Also keep in mind that employees of companies that release closed source software are obligated to keep secret any gaping security vulnerabilities. This obligation usually comes with heavy legal ramifications that could be considered "life ruining" for many of us. e.g. Loss of your job plus a lawsuit.

Often, none of the contributors to open source software are associated with each other and therefore have no obligation to keep discovered vulnerabilities a secret. In fact, I would assume that many contributors also actively use the software and have a personal interest in getting security vulnerabilities fixed.

[–] captain_aggravated@sh.itjust.works 57 points 1 day ago (1 children)

You live in some Detroit-like hellscape where everyone everywhere 24/7 wants to kill and eat you and your family. You go shopping for a deadbolt for your front door, and encounter two locksmiths:

Locksmith #1 says "I have invented my own kind of lock. I haven't told anyone how it works, the lock picking community doesn't know shit about this lock. It is a carefully guarded secret, only I am allowed to know the secret recipe of how this lock works."

Locksmith #2 says "Okay, so the best lock we've got was designed in the 1980s. The design is well known, the blueprints are publicly available, the locksport and various bad-guy communities have had these locks for decades, and the few attacks that they made work were fixed by the manufacturer so they don't work anymore. Nobody has demonstrated a successful attack on the current revision of this lock in the last 16 years."

Which lock are you going to buy?

[–] Sir_Premiumhengst@lemmy.world 8 points 1 day ago (2 children)

Or just, you know, move out of Detroit... ¯\_(ツ)_/¯

[–] KingOfTheCouch@lemmy.ca 3 points 6 hours ago (1 children)

To keep that metaphor going, if you are online, you are in Detroit.

[–] stringere@sh.itjust.works 3 points 18 hours ago (1 children)

I hear the real estate in Flint is affordable.

[–] nickwitha_k@lemmy.sdf.org 1 points 3 hours ago

Really? I hear it's a steel.

[–] Lemvi@lemmy.sdf.org 177 points 1 day ago (4 children)

The code being public helps with spotting issues or backdoors.

In practice, "security by obscurity" doesn't really work. A system's security should hinge on the quality of the code itself, not on the number of people who know it.

[–] pupbiru@aussie.zone 1 points 2 hours ago

security by obscurity doesn't work on its own, but it can be a single pillar in a multi-faceted security strategy. in the case of FOSS vs closed source, the downsides (not having eyes on it, etc) outweigh the upsides… but writing off security by obscurity (layered with other security) in all cases is the wrong approach to take

[–] WhatAmLemmy@lemmy.world 84 points 1 day ago

It also provides some assurance that the service/project/company is doing what they say they are, instead of "trust us".

Meta has deployed code so criminal that everyone who knew about it should be serving hard jail time (if we didn't live in corporate dictatorships). If their code were public they couldn't pull shit like this anywhere near as easily.

Yuup. "Security by obscurity" relies on the attacker not understanding how the software works. Problem is, hackers usually know how software works, so that barrier is almost nonexistent.

[–] bamboo@lemmy.blahaj.zone 16 points 1 day ago

The code being public helps with spotting issues or backdoors.

A recent example of this is the lengths the TALOS group had to go to to reverse engineer Dell ControlVault, which affects hundreds of models of Dell laptops. Their blog post goes through all of the steps they had to take, and they note that, fortunately, there was some Linux support with publicly available shared objects containing debug symbols, which helped them reverse the ecosystem. Dell has all this source code and could have identified these issues much more easily themselves, but didn't, and shipped an insecure product leaving customers vulnerable.
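To make the "shared objects with debug symbols" point concrete, here's a sketch of the very first step of that kind of work: dumping the exported symbols of a shared library. It assumes GNU binutils' `nm` is installed, and the library path is a stand-in, not the actual Dell ControlVault binary:

```python
# List the dynamic, defined symbols of a shared object. Descriptive
# symbol names are a map of the library's internals -- exactly the kind
# of head start the TALOS researchers describe.
import subprocess

LIB = "/usr/lib/x86_64-linux-gnu/libssl.so.3"  # placeholder .so path

result = subprocess.run(
    ["nm", "--dynamic", "--defined-only", LIB],
    capture_output=True, text=True,
)
for line in result.stdout.splitlines()[:10]:
    print(line)
```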

[–] steeznson@lemmy.world 3 points 23 hours ago

There isn't a clear divide between open source software and proprietary software anymore, due to how complex modern applications are. Proprietary software is typically built on top of open source libraries: Python's Django web framework, OpenSSL, xz-utils, etc. Basically, nothing is completely safe, and even if you wrote everything yourself, you could still introduce bugs or pick up supply-chain attacks through dependencies.

[–] Kolanaki@pawb.social 28 points 1 day ago (2 children)

If I can see the code, I can see if said code is doing something fucky. If I can't see the code, I have to just have faith that it's not doing something fucky.

[–] CrazyLikeGollum@lemmy.world 14 points 1 day ago

It's not "assumed to be secure." The source code being publicly available means you (or anyone else) can audit that code for vulnerabilities. The publicly available issue tracking and change tracking means you can look through bug reports and see if anyone else has found vulnerabilities and you can, through the change history and the bug report history, see how the devs responded to issues in the past, how they fixed it, and whether or not they take security seriously.

Open source software is not assumed to be more secure, but its security (or lack thereof) is much easier to verify. You don't have to take the dev's word as to whether or not it is secure, and (especially for the more popular projects like the ones you listed) you have thousands of people, with different backgrounds and varying specialties within programming, and with no affiliation with or reason to blindly trust the project, doing independent audits of the code.
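The "look through the change history" step is itself easy to automate. A small sketch, assuming you have a local clone of the project (the repo path is a placeholder):

```python
# Grep a repository's commit log for security-related fixes to get a
# feel for how (and how fast) the maintainers handle vulnerabilities.
import subprocess

REPO = "/path/to/some-foss-project"  # placeholder local clone

log = subprocess.run(
    ["git", "-C", REPO, "log", "--oneline",
     "--regexp-ignore-case", "--grep", "security\\|CVE-"],
    capture_output=True, text=True,
)
print(log.stdout or log.stderr)
```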

[–] TabbsTheBat@pawb.social 45 points 1 day ago

It's because anyone can find and report vulnerabilities, while a closed source vendor could have an issue behind closed doors and never mention that data is at risk, even if they knew.

[–] DeathByBigSad@sh.itjust.works 15 points 1 day ago

Because "some nerd out there probably would have found any exploits for the X years its been released" is the general assumption about open source software.

[–] assembly@lemmy.world 31 points 1 day ago

One thing to keep in mind is that NO CODE is believed to be secure… regardless of open source or closed source. The difference is that a lot of folks can audit open source, whereas we all have to take the word of private companies that are constantly reducing headcount and replacing devs with AI when it comes to closed source.

With open source code you get more eyes on it. Issues get fixed quicker.

With closed source, such as Photoshop, only Adobe can see the code. Maybe there are issues there that could be fixed. Most large companies have a financial interest in having "good enough" security.

[–] philpo@feddit.org 20 points 1 day ago (1 children)

One thing people tend to overlook is: Development costs money. Fixing bugs and exploits costs money.

In a closed source application, no one will see that your software still relies on arcane concepts that weren't even state-of-the-art when written 25 years ago. The bug that could easily be used as an exploit? Sure, the developer responsible for it informed his manager around 50 times that he needs time and someone from the database team to fix it. And he got turned down 50 times, because it costs time and "we have to keep deadlines! And no one has noticed this bug so far, so why would anyone notice now?"

[–] bestboyfriendintheworld@sh.itjust.works 4 points 1 day ago (1 children)

Lots of open source software uses arcane concepts because lots of it is old. See Xorg as a prime example. That was outdated 20 years ago already.

Closed source software gets exploited and hacked all the time. They take security seriously as well.

Look at OpenSSL with Heartbleed, and similar high-profile security failures, to see how even using high-profile open source software is not automatically more secure.

[–] philpo@feddit.org 5 points 1 day ago

You didn't get my point: with open source, people know. People know that Xorg is using arcane concepts, and as a client you can pay someone to go through the code. Or a governmental institution can. (And yes, mine does, with public reports.)

This is not the case with closed source. You will only know when someone has exploited it. And while closed source applications like Windows, Office, etc. carry enough public weight that a lot of people with good intentions see them as a "challenge" and test for exploits, this is already not the case for smaller, but often critical, applications. And no, most commercial closed source applications don't give a fuck about security, even in critical infrastructure. I worked as a PM for these applications in the past, and my company now consults for critical infrastructure. The state of security in niche applications is abhorrent. The longest-running major exploit I stumbled upon was 22 years old, and it left around 65% of all water treatment plants of a smaller nation at risk. (It's fixed now. Not because they wanted to, but because someone forced them to.)

[–] Ephera@lemmy.ml 20 points 1 day ago

Somewhat of a different take from what I've seen from the other comments. In my opinion, the main reason is this:
XKCD comic showing other engineers proud of the reliability of their products, and then software engineers freaking out about the concept of computerized voting, because they absolutely do not trust their entire field.

Companies have basically two reasons to do safety/security: Brand image and legal regulations.
And they have a reason to not do safety/security: Cost pressure.

Now imagine a field where there's hardly any regulations and you don't really stand out when you do security badly. Then the cost pressure means you just won't do much security.

That's the software engineering field.

Now compare that to open-source. I'd argue a solid chunk of its good reputation is from hobby projects, where people have no cost pressure and can therefore take all the time to do security justice.
In particular, you need to remember that most security vulnerabilities are just regular bugs that happen to be exploitable. I have significantly fewer bugs in my hobby projects than in the commercial projects I work on, because there's no pressure to meet deadlines.

And frankly, the brand image applies even to open-source. I will write shitty code, if you pay me to. But if my name is published along with it, you need to pay me significantly more. So, even if it is a commercial project that happens to be published under an open-source license, I will not accept as many compromises to meet deadlines.

[–] Canconda@lemmy.ca 28 points 1 day ago* (last edited 1 day ago) (2 children)

Zero day exploits, aka vulnerabilities that aren't publicly known, offer hackers the ability to essentially rob people blind.

Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities. So while it's not inherently more secure, it is in practice.

Exploiting four zero-day flaws in the systems,[8] Stuxnet functions by targeting machines using the Microsoft Windows operating system and networks, then seeking out Siemens Step7 software. Stuxnet reportedly compromised Iranian PLCs, collecting information on industrial systems and causing the fast-spinning centrifuges to tear themselves apart.[3] Stuxnet's design and architecture are not domain-specific and it could be tailored as a platform for attacking modern SCADA and PLC systems (e.g., in factory assembly lines or power plants), most of which are in Europe, Japan and the United States.[9] Stuxnet reportedly destroyed almost one-fifth of Iran's nuclear centrifuges.[10] Targeting industrial control systems, the worm infected over 200,000 computers and caused 1,000 machines to physically degrade.

Stuxnet has three modules: a worm that executes all routines related to the main payload of the attack, a link file that automatically executes the propagated copies of the worm and a rootkit component responsible for hiding all malicious files and processes to prevent detection of Stuxnet.

Wikipedia - Stuxnet Worm

“Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities.”

Heartbleed has entered the chat

[–] emb@lemmy.world 21 points 1 day ago* (last edited 1 day ago) (1 children)

The idea you're getting at is 'security by obscurity', which in general is not well regarded. Having secret code does not imply you have secure code.

But I think you're right on a broader level, that people get too comfortable assuming that something is open source, therefore it's safe.

In theory you can go look at the code for the FOSS you use. In practice, most of us assume someone has, and we just click download or tell the package manager to install. The old adage is "with enough eyes, all bugs are shallow", and I think that probably holds, but the problem is that many of the eyes aren't looking at anything. Having the right to view the source code doesn't mean enough people are, or even meaningfully can. (And I'm as guilty of being lax and incapable as anyone; not looking down my nose here.)

In practice, when security flaws are found in oss, word travels pretty fast. But I'm sure more are out there than we realize.

[–] towerful@programming.dev 9 points 1 day ago* (last edited 1 day ago)

It's also easier to share vulnerability fixes between different projects.

"Y" was using a similar memory management as "T", T was hacked due to whatever, people that use Y and T report to Y that a similar vulnerability might be exploitable

Edit:
In closed source, this might happen if both projects are under the same company.
But users will never have the ability to tell Y that T was hacked in a way that might affect Y

[–] dreadbeef@lemmy.dbzer0.com 12 points 1 day ago* (last edited 1 day ago)

It's not more secure or less secure, but it is easier to trust

[–] BartyDeCanter@lemmy.sdf.org 15 points 1 day ago

Otherwise, you need to be some kind of freaking reverse-engineering expert.

And as it turns out, there is a ton of financial motivation for less than ethical people to develop those skills and use them to hack proprietary software. And there is some, but less, financial motivation for ethical people to do the same.

[–] Luffy879@lemmy.ml 6 points 1 day ago

By your logic no one could break locks, because they can't see inside them. There are going to be people trying to break into everything, even though they don't have the source code.

9/10 people looking into your code are the ones using it for themselves, so fixing a bug for everyone is beneficial to them too.

Also, there are entire companies working on and sponsoring these projects and paying people to find bugs, because if someone finds out that curl has a problem, they are gonna have that problem too. So the only difference between something like VLC and Adobe is that you don't have to suck their dick, really.

There's also curl and others offering bug bounties, since they're way more cost-efficient than paying someone full time.

[–] chocrates@piefed.world 10 points 1 day ago

Per Eric S. Raymond: "given enough eyeballs, all bugs are shallow".

Basically it's not inherently more secure, but often it's assumed that enough smart people have looked at it.

But yes, all software is going to have vulnerabilities.

[–] TeamAssimilation@infosec.pub 8 points 1 day ago* (last edited 1 day ago) (1 children)

It doesn't literally mean that everyone who uses OSS will inspect the source code for vulnerabilities; most don't even have the skill to do so.

It's more secure because access to the source makes both exploiting it and patching it faster, and because nerds who do have the skills and find something unusual will delve into the code to debug it. The XZ Utils backdoor was found by one such nerd doing beta testing; it never even got distributed to general users.

It’s a telling sign that malicious actors nowadays are surreptitiously trying to compromise OSS through supply chain attacks instead of directly finding zero days. For example: StarDict sends X11 clipboard to remote servers
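For flavor: the XZ Utils backdoor was reportedly noticed in part because patched sshd logins got measurably slower during routine benchmarking. Here's a toy harness in that spirit; the command, baseline, and threshold are placeholders, and it assumes an ssh client and a reachable host:

```python
# Time a login-and-exit against a recorded baseline and flag regressions.
# An unexplained slowdown in an auth path is worth a close look.
import subprocess
import time

BASELINE_S = 0.10  # assumed historical runtime for this command
CMD = ["ssh", "-o", "BatchMode=yes", "localhost", "true"]  # placeholder

start = time.perf_counter()
subprocess.run(CMD, capture_output=True)
elapsed = time.perf_counter() - start

if elapsed > 3 * BASELINE_S:
    print(f"regression: {elapsed:.3f}s vs ~{BASELINE_S:.2f}s baseline")
else:
    print(f"ok: {elapsed:.3f}s")
```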

[–] Max_P@lemmy.max-p.me 4 points 1 day ago

It helps hackers, sure, but it also helps the community vet the overall quality of the software and tell others not to use it if it's bad. When it's closed source, you have no choice but to trust the company behind it.

There are several FOSS apps I've encountered where I looked at the code and passed, because it's horrible. Someone will inevitably write a blog post about how bad the code is, warning people not to use the project.

That said, the code being public for everyone to see also inherently puts a bit of pressure to write good code because the community will roast you if it's bad. And FOSS projects are usually either backed by a company or individuals with a passion: the former there's the incentive of having a good image because no company wants to expose themselves cutting corners publicly, and the passion project is well, passion driven so usually also written reasonably well too.

But the key point really is, as a user you have the option to look at it and make your own judgement, and take measures to protect yourself if you must run it.

Most closed source projects are vulnerable because of pressure to deliver fast, and nobody will know until it gets exploited. This leads to really bad code that piles up over time. Try to sneak some bullshit into the Linux kernel and there will be dozens of news articles and YouTube videos about Linus' latest rant at the guilty party. That doesn't happen in private projects; you get an "LGTM" because the sprint is ending and sales already sold the feature to a customer for next week.

[–] Zwuzelmaus@feddit.org 6 points 1 day ago (1 children)

Isn't that actually also helping hackers?

Evil hackers don't need help and don't want help.

On the other hand, there have been cases where evil programmers snuck malicious code into open source software, and it was found out because that code is public, then repaired and reported publicly.

Shame on these hackers.

[–] omzwo@lemmy.world 6 points 1 day ago

Exactly. Open source means that, by design, more people are able to look at the code, and therefore there's more incentive for those interested in the code to make sure it works securely. You can try to be exploitative and keep your hack secret, but there's always a chance that someone else will see the same thing you saw and patch the code with a PR. Granted, it depends on how much the original developer cares about the code whether they accept or write a patch for the vulnerability someone else brings up, but the example software you listed are larger projects where lots of people have a vested interest in them working securely. For smaller projects or very niche software with fewer eyes and less interest, open source might not be the most secure.

On the closed source side, the people who are interested in looking for hacks are the ones who are much more motivated to actually exploit vulnerabilities for personal gain. White hat hackers, on the other hand, are fewer for closed source software, because not having the code openly available means they need more motivation (i.e. the company offering bounties/incentives because it cares about security) to work out how the closed source software works.

It's relatively easy. First of all, if someone were to implement a backdoor, it would be much easier to find out, since you can look at the code directly. Second, a lot of people actually do this: they look at the code of projects and search for security holes in it.

So even if it isn't that much more secure than closed source, it's much easier to trust, simply because people can search for vulnerabilities much more easily.

One great example of how open source makes it easier to catch backdoors is the xz security breach.

[–] mintiefresh@piefed.ca 5 points 1 day ago (3 children)

Ape alone... weak. Apes together... strong.

[–] mvirts@lemmy.world 5 points 1 day ago

Because more eyes spot more bugs, supposedly. I believe it; running closed source software is truly insane.

[–] theunknownmuncher@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

If you want to see the source code of Photoshop, you actually need to work for Adobe. Otherwise, you need to be some kind of freaking reverse-engineering expert.

What you're describing is known as "security through obscurity": the practice of attempting to increase the security of a system by hiding the way the system works. This practice is highly discouraged, as it is known to not actually be effective at increasing the security of a system.

Security by obscurity alone is discouraged and not recommended by standards bodies. The National Institute of Standards and Technology (NIST) in the United States recommends against this practice: "System security should not depend on the secrecy of the implementation or its components."

https://en.wikipedia.org/wiki/Security_through_obscurity#Criticism
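The flip side of that NIST line is Kerckhoffs's principle: a system should stay secure even when everything about it except the key is public. A minimal sketch using the third-party `cryptography` package (`pip install cryptography`):

```python
# Fernet's algorithm and source are fully public; only the key is secret.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # the only secret in the system
token = Fernet(key).encrypt(b"attack at dawn")

# Knowing the full spec doesn't help an attacker holding the wrong key.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except Exception:
    print("wrong key: ciphertext stays opaque despite a public design")

print(Fernet(key).decrypt(token))           # b'attack at dawn'
```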

Isn't that actually also helping hackers?

No. Sharing the implementation details of the system helps those trying to keep it secure, by allowing anyone to inspect the code, discover security flaws, and contribute fixes.

Open-source software is not perfect and is susceptible to security flaws and vulnerabilities, but it is better and more secure than closed-source software in every way. Every risk that applies to open-source software also applies to closed-source software, but worse.
