1. 35
    1. 13

      The caching advantage of HTTP mentioned there isn’t entirely theoretical, fwiw; people do use squid-deb-proxy and similar setups in the real world. At least as of a few years ago, some universities were transparently proxying Ubuntu packages that way. You could instead set up a proper university mirror and try to get everyone to point at it instead of at upstream, but that’s usually futile unless IT controls the machines’ configuration.
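
      For concreteness, pointing client machines at a shared cache like that is a one-line apt setting; the proxy host and port below are placeholders for whatever the local cache actually listens on:

      ```
      # /etc/apt/apt.conf.d/01proxy (hypothetical local caching proxy)
      Acquire::http::Proxy "http://apt-cache.example.edu:8000/";
      ```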

    2. 12

      Downloading and installing (even signed) packages over unencrypted channels also lets an attacker who can inspect traffic take an inventory of the software installed on the system. The attacker then knows exactly which software is installed and which vulnerabilities come with it; they have the exact binaries, can replicate the entire system, and can tailor exploits to the inventory on the target.
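
      As a rough sketch of how little effort that takes (assuming the attacker has already captured the plaintext HTTP request lines; the file names follow Debian’s standard pool layout):

      ```python
      import re

      # Hypothetical request lines captured from an unencrypted apt run.
      captured = [
          "GET /debian/pool/main/o/openssl/openssl_1.1.0f-3_amd64.deb HTTP/1.1",
          "GET /debian/pool/main/s/sudo/sudo_1.8.19p1-2_amd64.deb HTTP/1.1",
      ]

      # Package name and exact version fall straight out of the URL.
      inventory = [re.search(r"/([^/_]+)_([^_]+)_", line).groups()
                   for line in captured if "/pool/" in line]
      print(inventory)  # [('openssl', '1.1.0f-3'), ('sudo', '1.8.19p1-2')]
      ```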

      1. 21

        They cover this in the linked page; they claim there’s such a small number of packages that merely knowing the length of the ciphertext (which, of course, HTTPS can’t hide) is enough to reliably determine which package is being transmitted.

        Perhaps doing it over HTTP/2, so you get both encryption and multiplexed downloads, would give you sufficient obfuscation, but HTTPS alone doesn’t.
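
        For illustration, the size-matching attack the page describes is roughly this (the package sizes below are invented, and the tolerance is just a guess at TLS framing and header overhead):

        ```python
        # Invented sizes standing in for a mirror's Packages index (name -> bytes).
        package_sizes = {
            "vim": 1_152_388,
            "nginx-common": 112_664,
            "openssh-server": 358_344,
        }

        def guess_package(observed_bytes, overhead=4096):
            """Which packages match the observed encrypted transfer size?"""
            return [name for name, size in package_sizes.items()
                    if abs(observed_bytes - size) <= overhead]

        print(guess_package(1_155_000))  # ['vim'] with these made-up numbers
        ```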

        1. 2

          I’m not sure how HTTP/2 helps. You can still generally look at traffic bursts and get an idea of how much was transferred. You’d have to generate blinding packets to hide the actual amount of traffic being transferred, effectively padding out downloads to the size of the largest packages.

          1. 2

            But figuring out which packages were downloaded would require solving the knapsack problem, right? Instead of getting N requests of length k_i, you get one request of length \sum k_i. Although, now that I think about it, the number of packages you download at once is probably small enough for it to be tractable for a motivated attacker.

            Padding is an interesting possibility but I think some of the texlive downloads are >1GB; that’s a pretty rough price to pay to download vi or whatever.
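
            A sketch of why this is tractable when only a few packages go over one connection: brute force over candidate sizes is cheap (the sizes and the observed total below are invented):

            ```python
            from itertools import combinations

            # Invented (package, size-in-bytes) pairs standing in for a mirror's index.
            index = [("curl", 263_000), ("git", 4_610_000), ("htop", 89_000),
                     ("tmux", 244_000), ("vim", 1_152_000)]

            def candidate_carts(total, tolerance=8192, max_items=4):
                """Brute-force subset sum: which package combinations could add up
                to the observed encrypted transfer size?"""
                hits = []
                for r in range(1, max_items + 1):
                    for combo in combinations(index, r):
                        if abs(sum(size for _, size in combo) - total) <= tolerance:
                            hits.append([name for name, _ in combo])
                return hits

            print(candidate_carts(1_415_000))  # [['curl', 'vim']] with these numbers
            ```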

        2. 1

          True. Given that each package download is its own connection, it wouldn’t be too difficult for an attacker to deduce which package is being downloaded from the size of the transmitted encrypted data. The attacker would need to keep a full mirror of the package repo (disk space is cheap, so plausible). I wonder if the same would apply to package repos served over Tor Onion Services.

    3. 24

      “There are a lot of CAs and therefore there is no security in the TLS CA model” is such a worn-out trope.

      The Mozilla and Google CA teams work tirelessly to improve standards for CAs and expand technical enforcement. We remove CAs determined to be negligent and raise the bar for the rest. There seems to be an underlying implication that there are trusted CAs who will happily issue you a google.com certificate: NO. Any CA discovered to be doing something like this gets removed with incredible haste.

      If they’re really concerned about the CA ecosystem, requiring Signed Certificate Timestamps (part of the Certificate Transparency ecosystem) for TLS connections provides evidence that the certificate is publicly auditable, making it possible to detect attacks.
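
      CT logs are already queryable by anyone; as a rough illustration, this queries crt.sh (a third-party CT log search service) for everything ever logged for a placeholder domain:

      ```python
      import requests

      # Placeholder domain whose issuance history we want to audit.
      domain = "mirror.example.org"

      resp = requests.get("https://crt.sh/",
                          params={"q": domain, "output": "json"}, timeout=30)
      for entry in resp.json():
          # An unexpected issuer here is exactly the kind of mis-issuance
          # that Certificate Transparency is meant to surface.
          print(entry["issuer_name"], entry["not_before"])
      ```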

      Finally, TLS provides good defense in depth against things like CVE-2016-1252 (a bug in apt’s own signature handling that a man-in-the-middle could exploit).

      1. 13

        Any CA discovered to be doing something like this gets removed with incredible haste.

        WoSign got dropped by Mozilla and Google last year after it came to light that they were issuing fraudulent certificates, but afaict there was a gap of unknown duration between when they started allowing fraudulent certs to be issued and when it was discovered. And it still took over six months before trust in their certificates was phased out; I wouldn’t call that “incredible haste”.

        1. 2

          I’m not sure how far along the process is, but if Certificate Transparency becomes more standard, I think that would help with this problem.

      2. 5

        TLS provides good defense in depth against things like CVE-2016-1252.

        Defense in depth can do more harm than good if it blurs where the actual security boundaries are. It might be better to distribute packages in a way that makes it very clear they’re untrusted than to additionally verify the packages, if that additional verification doesn’t actually form a hard security boundary. (E.g. rsync mirrors also exist and while rsync hosts might use some kind of certification, it’s unlikely to follow the same standards as HTTPS. So a developer who assumed that packages fed into apt had already been validated by the TLS CA ecosystem would be dangerously misled.)

        1. 5

          This is partly why browsers are trying to move from labeling https “secure” to labeling http “insecure”, with no specific indicator for https at all.

        2. 1

          e.g. rsync mirrors also exist and while rsync hosts might use some kind of certification, it’s unlikely to follow the same standards as HTTPS

          If you have this additional complexity in the supply chain then you are going to need additional measures. At the same time, does this functionality provide enough value to the whole ecosystem to exist by default?

          1. 5

            If you have this additional complexity in the supply chain then you are going to need additional measures.

            Only if you need the measures at all. Does GPG signing provide an adequate guarantee of package integrity on its own? IMO it does, and our efforts would be better spent on improving the existing security boundary (e.g. by auditing all the apt code that runs before signature verification) than on trying to introduce “defence in depth”.

            At the same time, does this functionality provide enough value to the whole ecosystem to exist by default?

            Some kind of alternative to HTTPS for obtaining packages is vital, given how easy it is to break your TLS libraries on a linux system through relatively minor sysadmin mistakes.

    4. 9

      I have a few problems with this. The short summary of these claims is “APT checks signatures, therefore downloads for APT don’t need to be HTTPS”.

      The whole argument relies on the idea that APT is the only client that will ever download content from these hosts. This, however, is not true. Packages can be manually downloaded from packages.debian.org, and they reference the same insecure mirrors. At the very least, Debian should make sure that there are a few HTTPS mirrors to use for the direct download links.

      Furthermore, Debian also provides ISO downloads over the same HTTP mirrors, which are also not automatically checked. While they can theoretically be checked with PGP signatures, it is wishful thinking to assume everyone will do that.

      Finally, the section about CAs and TLS is - sorry - baseless fearmongering. Yeah, there are problems with CAs, but concluding from that that “HTTPS provides little-to-no protection against a targeted attack on your distribution’s mirror network” is, to put it mildly, nonsense. Compromising a CA is not trivial, and thanks to CT it’s almost certain that such an attempt would be uncovered later. The CA ecosystem has improved a lot in recent years; please update your views accordingly.

      1. 5

        Furthermore, Debian also provides ISO downloads over the same HTTP mirrors, which are also not automatically checked. While they can theoretically be checked with PGP signatures, it is wishful thinking to assume everyone will do that.

        Not really a full solution, but they do at least make the most prominent links for manually downloading an ISO point to an https site specifically for serving ISOs.

    5. 4

      even over an encrypted connection it is not difficult to figure out which files you are downloading based on the size of the transfer

      A bit more difficult if you’re downloading a bunch of different packages in one connection?

      Also just like, having to infer this indirectly just feels better than shouting package names in the clear…

    6. 2

      Any security-minded people have thoughts on this?

      1. 13

        Debian’s security record regarding CAs is atrocious. By this I mean the default configuration and things like the ca-certificates package.

        Debian used to include non-standard junk CAs like CACert and also refused to consider CA removal a security update, so it’s hugely hypocritical of this page to complain that many of the 400+ CAs out there are insecure.

        Signing packages is a good idea, as the signature is bound to the data and not to the transport like HTTPS, so in principle I agree that using HTTPS for Debian repositories doesn’t gain much in terms of extra security. However, these days the baseline expectation should be that everything defaults to HTTPS, as in no more unauthenticated HTTP traffic on port 80.

        Yes, moving over to HTTPS for Debian repositories breaks local caching like apt-cacher (degrades it to a TCP proxy) and requires some engineering work to figure out how to structure a global mirror network, but this will have to be done sooner or later. I would also not neglect the privacy implications: with HTTPS, people deploying passive network snooping have to apply heuristics and put in more effort than simply monitoring HTTP.

        Consider the case where someone sitting passively on a network monitors package downloads for ones that contain a fix for a remotely exploitable vulnerability. That passive attacker can then try to race the host and exploit the vulnerability before the update is installed.

        Package signing in Debian suffers from problems at the underlying GPG level; GPG is so 90s in that it’s really hard to use sustainably long-term: key rotation and key strength are problem areas.

        1. 4

          Package signing in Debian suffers from problems at the underlying GPG level; GPG is so 90s in that it’s really hard to use sustainably long-term: key rotation and key strength are problem areas.

          What do you consider a better alternative to gpg?

          1. 10

            signify is a pretty amazing solution here - @tedu wrote it, along with a paper detailing how OpenBSD has implemented it.
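
            It isn’t signify itself, but the Ed25519 primitive signify wraps is small enough to sketch with PyNaCl, which gives a feel for how little machinery is involved compared to gpg:

            ```python
            from nacl.signing import SigningKey

            # Ed25519 keypair: the signing key stays on the release machine,
            # the verify key ships to users (like a signify .pub file).
            signing_key = SigningKey.generate()
            verify_key = signing_key.verify_key

            # Sign a checksum list covering the actual release artifacts.
            message = b"SHA256 (base.tgz) = <checksum list contents>"
            signed = signing_key.sign(message)

            # Clients verify before trusting anything listed in it;
            # raises nacl.exceptions.BadSignatureError on tampering.
            verify_key.verify(signed)
            ```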

        2. 4

          non-standard junk CAs like CACert

          imho CACert feels more trustworthy than 90% of the commercial cas. i really would like to see cacert paired with the level of automation of letsencrypt. edit: and being included in ca packages.

          1. 2

            With the dawn of Let’s Encrypt, is there still really a use case for CACert?

            1. 4

              i think alternatives are always good. the only thing where they really differ is that letsencrypt certificates are cross-signed by a ca already included in browsers, and that letsencrypt has automation tooling. the level of verification is about the same. i’d go as far as to say that cacert is more secure because of the web of trust, but that may be just subjective.

    7. 1

      I run my own deb/rpm repository and I have it under SSL via LetsEncrypt. I also sign all my packages and publish the key at the root:

      https://repo.bigsense.io/

      I mean… you can do both. And LetsEncrypt makes it easy. I’m sure with something hit as hard as the Debian repos there would be a performance hit, though. But if you run your own repo, you might as well. If you’re interested in running your own, I have some ansible scripts and Jenkins configs that make it easy (the readme is kinda out of date):

      https://github.com/bigsense/vSense

    8. 1

      just came to my mind: in case there’s a security vulnerability in package parsing in apt (or one of the libraries it uses), an attacker could craft a package exploiting this vulnerability and inject it into an unencrypted http connection. this would likely go undetected if the exploit is sophisticated enough (which it will be, i guess).

    9. 1

      Having a transparent http proxy as an option to speed up the installation of multiple machines is a valid use case, and can work wonders if you only have a slow internet connection.

    10. 1

      In 2018 there is no excuse not to use HTTPS. Period.

      1. 2

        How about installing the list of certificate authorities to trust, and ntpd so you can get the right time (both are needed for https)?
