Getting an exception to the rate limit

TL;DR: Players just need a browser on their phone. No app. Separately, someone needs to install a game on their PC/Mac/Linux/Android and run it.

Users (mom, grandma, sister, dad, little brother) install a game on their PC/Mac/Linux/Android. That game runs a webserver (using the happyfuntimes library).

Players then connect to the game with their smartphone. First they need to be on the same LAN. Then they open their browser and connect to the game.

To make that easy there’s a public server at happyfuntimes.net. When the game starts it tells happyfuntimes.net “I’m running a game. My local IP address is 192.168.1.47”. happyfuntimes.net records that local IP address along with the public IP address the request came from.

Players then also, on the phone’s browser, go to happyfuntimes.net. Since they are on the same LAN, happyfuntimes.net sees them coming from the same public IP as the game, so it redirects the browser to the game’s local IP (in this example http://192.168.1.47:18679).
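The matching logic above can be sketched in a few lines. This is a hypothetical toy, not the actual happyfuntimes.net implementation; the function names and the in-memory dict are illustrative only:

```python
# Minimal sketch of the public-IP rendezvous described above.
# (Hypothetical names; the real service is a proper web server, not a dict.)

games = {}  # public IP the request came from -> (local IP, port) of the game

def register_game(public_ip, local_ip, port=18679):
    """Called when a game starts up and phones home with its LAN address."""
    games[public_ip] = (local_ip, port)

def redirect_for(public_ip):
    """Called when a phone visits the rendezvous server. If a game registered
    from behind the same public IP (i.e. the same NAT), send the phone to the
    game's LAN address."""
    if public_ip in games:
        local_ip, port = games[public_ip]
        return "http://%s:%d" % (local_ip, port)
    return None  # no game registered from this network

# The game registers from behind some NAT...
register_game("203.0.113.7", "192.168.1.47")
# ...and a phone behind the same NAT gets sent to the game's local address.
print(redirect_for("203.0.113.7"))  # → http://192.168.1.47:18679
```

The point is that no configuration happens on the phone at all; the public IP seen by the server is the only key.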

Boom! They’re in the game. I’ve had 89 people playing a bomberman clone, for example.

Now the problem is that browsers are restricting the features the phone’s browser needs for these games to pages served over HTTPS.

So, I need certs. Certs are assigned to domains, so I need domains. Domains are easy: I can run a dynamic DNS server. Certs, though, are either $$$ apiece or free from Let’s Encrypt, but the limits on Let’s Encrypt make it not a solution. If a game becomes indie popular and 1000 people install it, that’s 1000 certs needed.
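As a back-of-the-envelope check on why the limits bite, assume Let’s Encrypt’s limit of roughly 20 certificates per registered domain per week (the exact number is their policy; check their documentation for current values):

```python
# Rough math for the rate-limit problem above, assuming a limit of
# ~20 certificates per registered domain per week.

CERTS_PER_WEEK = 20   # assumed per-registered-domain issuance limit
installs = 1000       # a modestly indie-popular game

weeks = installs / CERTS_PER_WEEK
print(weeks)  # 50.0 -> nearly a year just for the initial issuance
```

And that ignores renewals, which recur for every install forever.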

I get that. I'm just trying to think of workarounds until the LetsEncrypt team can address this for you.

I get that... but you have a Free Open Source Library that requires an upstream service to act as a coordinator.

You can opensource the upstream bit to let people run it themselves, and offer a tiered plan that uses your coordination system so you can fund development. It will still be free and open source, but it can become self-sustaining through plans and tips. It's a really cool project.


Thanks for your suggestion

The entire system is already open source, but the target audience is not at a level to set up their own servers, nor are they at a level to afford their own servers. Products like Unity3D have really lowered the bar for what it takes to make games, with non-programmers making and shipping games all the time. As one example, the UCLA Game Lab is part of the art department, not the CS/engineering department like at most schools. I was scolded by one of the profs when happyfuntimes used to require 5 steps in a command line / terminal. He said “these kids have never used a command line in their life. You need to simplify this process”, so I did. There are now no command-line steps for the Unity plugin.

That same audience is unwilling to pay. Starving artists and students. In this age of complaining that a smartphone game costs 99¢, even a nominal fee is a barrier to trying something. On top of that, paying customers expect support, and for people at this level of experience, support often equals hours of hand-holding. In other words, I’d have to charge way too much money to make it worth my time, and that in turn would make no one willing to pay.

You’d think Let’s Encrypt would be sympathetic to this. They’re seemingly aligned with open source, Mozilla, and other similar initiatives, but this HTTPS-only gating of features means open source projects like mine now need $$$$$$ to do something that 6 months ago was effectively free.


One thought that pops into my mind: if you are giving away certificates and providing dynamic DNS, that domain should probably be on the Public Suffix List. Being on the list will also remove the rate-limit problem (as each “subdomain” is now a different “domain”), though you will have to deal with the downsides that come with this approach. Luckily, it’s probably best to deal with these now, before an attacker tries to abuse your giving away of subdomains.

TL;DR:

  • If you are giving away subdomains you should be on the Public Suffix List
  • You won’t hit the rate limit problems while on this list.
  • There are security issues associated with giving away subdomains. You will need to deal with these.

Thanks. Yeah, I’m curious what kind of abuse. Since I’m only giving away subdomains, and since they are relatively deep subdomains like *.users.happyfuntimes.net, is there much value in abusing them? It’s not like you can steal cookies or anything else related to other subdomains. The fact that it’s a deeper subdomain seems to make it not all that desirable. There are a few DNS companies that offer free domains, like Freenom. Also, my DNS server will only point the domains to local IP addresses (like 192.168.0.12), which won’t really get you anywhere, right? So is there really any abuse issue?

Speaking of which, somewhere in this thread I suggested making a separate service with DNS and certs. The simplest idea is to just use <ipaddress>.freecerteddomains.org. The only issue there is you’d be handing each person with the same IP address the same private cert, which is supposedly against the rules. But if that were allowed, it would also solve my issue, as I could just use that. Could also make it <ipaddress>.<randomid>.freecerteddomains.org, which is pretty much the same as using Freenom + Let’s Encrypt, except:

  1. you wouldn’t have to sign up
  2. you wouldn’t have to pre-register an id (as in, choose a subdomain)
  3. your machine wouldn’t need to be publicly on the net to get the cert

Actually, I take that back. The second method doesn’t work because I need to point to internal IPs. Although I guess I could provide 2 subdomains: external.<ipaddress>.<randomid>.freecerteddomains.org and internal.<ipaddress>.<randomid>.freecerteddomains.org.
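The IP-in-the-hostname idea above can be sketched concretely. The domain and helper names here are hypothetical; real services that do this (nip.io, sslip.io) encode the IP with dashes so it fits in a single DNS label:

```python
# Sketch of the <ipaddress>.<randomid>.freecerteddomains.org idea.
# (Hypothetical zone and function names, for illustration only.)

def ip_to_label(ip):
    """192.168.0.12 -> 192-168-0-12 (dots aren't allowed within one label)."""
    return ip.replace(".", "-")

def label_to_ip(label):
    """What the DNS server would do: decode the label back into the answer."""
    return label.replace("-", ".")

def make_hostname(internal_ip, random_id, zone="freecerteddomains.org"):
    """Build the internal hostname the DNS server would resolve."""
    return "internal.%s.%s.%s" % (ip_to_label(internal_ip), random_id, zone)

print(make_hostname("192.168.0.12", "a1b2c3"))
# internal.192-168-0-12.a1b2c3.freecerteddomains.org
```

A DNS server for the zone never needs a database: the answer is recoverable from the name itself.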

I don't think you can steal cookies, but there are other cookie issues that can arise. In general, to be safe, it is best to give out subdomains of an otherwise unused domain and add it to the Public Suffix List. I don't claim to know all of the details, but, for example, GitHub used to run into a number of issues when they allowed pages at *.github.com, leading them to switch to only allowing pages at *.github.io.

Your attack surface is probably smaller because you are restricting the DNS to local IPs, but I imagine that with a bunch of people on the same LAN there is still a chance for abuse. It's better to be safe than sorry.

I suspect that a fairly easy system for you to implement would be to grab happyfuntimesusers.net, provide subdomains on that, and add it to the Public Suffix List.

So I asked the PSL maintainer if giving away subdomains meant I should be on the PSL.

Answer: no

Interesting, I thought that was the whole point. But I guess I'm misunderstanding it.

Okay so I hope we can discuss this here some more.

I’m trying to think what other things would be affected by these limits. Not the specific limits of LE, but the limits created by the fact that certs are now required and certs are not unlimited.

Plex brings up a good example. Let’s say you want to make an open source version of Plex (Kodi, for example). You want users to be able to stream video to their browsers and go fullscreen. Fullscreen permission requires HTTPS. HTTPS requires certs. Certs require domains. What used to be easy (just run the software) is now hard: apply for a domain (register with a provider), set up some kind of daemon or configure your router to keep it up to date (assuming dynamic DNS), make sure the machine running Kodi has a hole punched to the internet, run LE software to get a cert, and repeat every 2.9 months.
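That “repeat every 2.9 months” step can be made concrete: Let’s Encrypt certificates are valid for 90 days, and the usual advice is to renew well before expiry. The 30-day margin below is an assumption for illustration, not LE policy:

```python
# When does the "repeat every 2.9 months" step fire?
# Certs last 90 days; renew with a safety margin before expiry.

from datetime import date, timedelta

CERT_LIFETIME = timedelta(days=90)
RENEW_MARGIN = timedelta(days=30)  # assumed: renew this long before expiry

def needs_renewal(issued_on, today):
    """True once we're within RENEW_MARGIN of the cert expiring."""
    return today >= issued_on + CERT_LIFETIME - RENEW_MARGIN

issued = date(2016, 1, 1)
print(needs_renewal(issued, date(2016, 2, 15)))  # False: only 45 days in
print(needs_renewal(issued, date(2016, 3, 5)))   # True: past the 60-day mark
```

And each self-hosted device (the Kodi box, the NAS, the picture frame) has to run this loop independently.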

Or what about a NAS version of the same? You want to run an open source NAS. The NAS has stream-to-browser options. You want to allow users to go fullscreen. All the same issues apply.

How about IoT stuff? Lots of IoT devices have a built-in web server. They all need certs if they want to access any of those features (fullscreen). How about an IoT picture frame? You connect to the frame via browser, take a picture, and upload it. Except without a cert, webpages aren’t allowed access to the camera.

Worse, what if you want to run several of these things? Kodi + NAS + IoT frame, each of which has a built-in webserver, each of which needs a cert. Or they need to share certs (which is supposed to be bad). They’d also somehow need to magically share domains, which you actually might not want.

This seems like a bigger issue to me than just my project.

I’m not suggesting LE needs to solve this but some of the people on this forum seem like they might have some ideas.

AFAICT Plex’s solution would work (unlimited wildcard certs). It might require more funding for more infrastructure. Barring that, though, what would have to change? Not restricting the features to HTTPS only? Somehow changing the CA system to separate security from identity? Not even sure that’s possible.

I guess I want to be able to go to the standards bodies and point out that the combination of gatekeepers (CAs) and required HTTPS = lots of problems for various projects, especially if those projects are open source. Commercial projects can get $$$$$$ to pay a CA. Open source projects, much less likely.

This seems like a bad precedent for the internet. Only those with power and money get to participate.


I agree with you that there are a lot of problems facing network-local devices that want to use HTTPS. The Web PKI, and the browser security model in general, is very poorly suited to devices that are accessible only on some networks. For instance, if I grant permission for 192.168.1.1 on my home network to use my microphone, does that mean I’ve granted that same permission for 192.168.1.1 on my work network? The same problem applies if you use local domain names like gw.local. The alternative is to use domain names rooted in the global DNS, which is what you are doing with happyfuntimes. Of course, without HTTPS, the same problem applies: if I grant a permission for http://game1.happyfuntimes.net at a museum, then any network I subsequently connect to can claim to be game1.happyfuntimes.net and get the same permission. It seems like most people are winding up with the same solution that you’ve landed on: generate subdomains rooted in a single public DNS name, and do the necessary network dance to get a certificate stating that. Not ideal, as you said.

There’s the additional problem of the PKI adding a layer of gatekeepers to the Web. I agree this is a problem too. There should be as few people as possible who have the right to take down your web site. Unfortunately, it’s a property of the Web PKI as it stands today. We’ve explored some alternate models that are more censorship-resistant, like Convergence, TACK, and Sovereign Keys. Unfortunately none of those have gotten significant traction so far. Let’s Encrypt is an effort to solve the problems of today by scaling up the technology of today, and that puts us in the unfortunate position of being a gatekeeper too. We try to mitigate that by having pro-speech policies, but it’s an imperfect solution.

The other big problem with the Web PKI today is its centralization: in order to vouch for your subdomains, Let’s Encrypt has to scale proportionally to the number of subdomains you have rather than doing a single delegation (as DNS does) and letting you provide the appropriate resources to support all subdomains. The Web PKI does support this type of delegation through name constrained intermediates, which could be a long-term solution. Unfortunately, name constraints are not yet recognized by Apple devices. Additionally, name constrained intermediates are currently subject to the same strict audit requirements as CA certificates with the power to issue for any name.


Maybe they didn't understand what you're doing, because that's exactly what many entries on the current list do (looking at facebook, google, firebaseapp, github, fastly.net, etc.).


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.