SDK responding to localhost on SSL

Hi,
Yeah, I know, this issue has been discussed many times, but the browser environment changes. Let's get started.

I'm developing a business SDK which responds to API requests on https://127.0.0.1:5001. This SDK is installed on the customer's computer in order to access some business apps on local servers (with authentication).

So far, I've followed the recommendation from https://letsencrypt.org/docs/certificates-for-localhost/ : my setup installs my own self-signed root certificate (public key only) in the customer's Windows certificate store, and the full SSL certificate issued from this root is embedded in my app. It worked fine until recently.

Recently, Firefox changed the way security.enterprise_roots.enabled works, so Firefox no longer recognizes the certificate from the Windows store. The user has to change this setting manually, which is not ideal. Alternatively, the user has to open a webpage and grant an exception for this certificate, which is also not ideal as my users are not power users.

Chrome announced some weeks ago that they will follow Firefox's approach and implement their own certificate store, so we will soon have the same issue in that browser, and maybe in Edge and Opera too...

So I'm looking for alternatives, and I'm back to the old "local.mydomain.com" pointing to 127.0.0.1 / ::1 with an SSL or Let's Encrypt certificate, and the private key disclosure issue.

I'm asking whether it would be acceptable to renew the certificate every single day from a central server, with each certificate valid for only 3 days.
Each computer instance would download new settings and the private key at every startup from the central server, so a private key leak would not be a big issue. The private key would only be loaded in memory and would not be persisted on the customer's computer.

What do you think?

Hi @Echtelion

please read some basics:

Let's Encrypt certificates are valid for 90 days.

There is a rate limit, so you can't create a certificate with the same set of domain names every day. That would be a waste of resources.

Create one certificate, then use it for 60 - 85 days, then create the next.

You can use DNS validation (if mydomain.com is a publicly visible domain you own), so you don't need a working HTTP connection / public IP address.

Hi Juergen,

I'm aware of all of this, but it doesn't fix the issue of private key distribution, which is the reason why it's currently advised to use a self-signed certificate, and not Let's Encrypt, for my use case.

I'm looking for ways to comply with the CA rules about these private keys for this use case.

Regards,

You can use a completely different setup if you have your own domain. Like myfritz.net with FritzBoxes (popular in Germany).

Per user, create a random string and a subdomain randomstring.yourdomain.com.

Create a webservice, so your client is able to create or initiate the required TXT entry.

Then you can start a new Let's Encrypt order from the local machine of your clients.

The private key is only on that local machine.
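
A rough sketch of that webservice call from the client side (the URL and payload are only examples, not a real API). The ACME order itself runs on the local machine with an ACME client library, so the private key is created and stays there:

```csharp
// Rough sketch - endpoint name and payload are examples.
// The central service writes the DNS-01 value to
// _acme-challenge.randomstring.yourdomain.com; the local ACME client
// then completes the order, so key and certificate stay on the client.
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public static class DnsTxtHelper
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task PublishChallengeAsync(string randomString, string txtValue)
    {
        var response = await Http.PostAsJsonAsync(
            "https://api.yourdomain.com/dns-txt",                  // example URL
            new { subdomain = randomString, value = txtValue });
        response.EnsureSuccessStatusCode();
    }
}
```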


That's an interesting way to do it, but I have another concern about that:
how is the website supposed to know the endpoint for the current machine?

One user -> one registered account with such a random string.


That's not something I can do.
This is an SDK, available to other companies. I'm not aware of their user accounts, and they will not be able to know which instance matches the local user.
Additionally, the local web server wouldn't know which certificate to load without an understanding of the remote user login... which requires a listening socket... and a certificate... it's endless.

I think you are combining and compounding the problem.
If you break it down into smaller steps, you might find ways to complete those smaller steps.
And then once you have all the small steps beaten, you will be able to do the whole thing.

So, if you think only of the "how do I get a cert" step, @JuergenAuer's suggestion is spot on.
If you handle all the cert requests via DNS through one single domain, you can control the entire process.
You simply need to develop a client that can:

  • auto-register (create a new "random string" account, when one doesn't already exist)
    This interacts directly with your base system (known/fixed FQDN:port)
  • auto-renew (generate a new cert, when one doesn't exist - or has expired)
    This interacts directly with your base system via API to validate its cert need.
    The actual LE validation process can be done at the client or at the base domain.
    [Both via DNS TXT record]
    If done at the base domain system, you then need to pass that cert & key securely to the requesting client.

Once you have a cert system working, the rest seems much simpler.
All you then have to do is use the cert you already have.
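
The "needs a cert" check on the client can stay very small, something like this sketch (file path and threshold are just examples):

```csharp
// Rough sketch: decide whether the locally stored certificate must be (re)issued.
using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;

public static class CertCheck
{
    public static bool NeedsRenewal(string pfxPath, string pfxPassword)
    {
        if (!File.Exists(pfxPath))
            return true;                                    // no cert yet -> register / order one

        using var cert = new X509Certificate2(pfxPath, pfxPassword);
        return cert.NotAfter < DateTime.Now.AddDays(30);    // renew ~30 days before expiry
    }
}
```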

You have to create a client that runs locally.

This client calls your online service to get a random string -> that's your subdomain.

The client must save that string locally.
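
Something like this sketch (URL and file name are only examples):

```csharp
// Rough sketch: fetch the per-installation random string once, then reuse it.
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

public static class InstanceId
{
    public static async Task<string> GetSubdomainAsync()
    {
        var idFile = Path.Combine(AppContext.BaseDirectory, "instance-id.txt");

        string randomString;
        if (File.Exists(idFile))
        {
            randomString = File.ReadAllText(idFile).Trim();    // already registered
        }
        else
        {
            using var http = new HttpClient();
            randomString = await http.GetStringAsync("https://api.yourdomain.com/register"); // example URL
            File.WriteAllText(idFile, randomString);           // save it locally
        }

        return $"{randomString}.yourdomain.com";
    }
}
```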

Maybe a registration isn't required. But there are some additional benefits, so things may be easier if you have an online registration.

Hi @rg305, @JuergenAuer,
Sorry, my English is quite bad; I probably didn't explain myself correctly.

I have no technical issue with how the local app will request a new certificate. I see how the workflow will work. In the end, I will have a running app listening on localhost and accessible at "local12345.mydomain.com" with a valid HTTPS certificate for THIS subdomain. I actually think this is brilliant (though a little heavy for the CA, one certificate per computer, anyway).

This SDK will interact with native features (smartcards for example) and local servers, but the apps which make the actual calls to this SDK are web pages running JavaScript in the user's local browser, and are created by other companies.

Currently, they make direct AJAX calls to https://localhost:5001/interop/connect
Tomorrow, they will have to make the call to https://local12345.mydomain.com:5002 in order to have a successful SSL connection to localhost.

What I don't see is how the JavaScript web page will be able to deduce which subdomain to query.

  • As JS scripts, the web pages can't access computer metadata that would help them find the correct subdomain for the current computer.
  • As the final customers' computers are all running behind a common proxy, I can't have an online index based on IP address (which would not solve multiple users in the same connection pool anyway).
  • As a C# developer, I could make a call to a static subdomain pointing to localhost that fails, yet still gives me access to the public certificate, from which I can extract the correct subdomain and make another call to that subdomain... but this is not possible in JavaScript (as far as I know).

I also can't have unencrypted preflight requests, which would be considered "mixed active content" and then blocked by the browser.

I reviewed how other apps doing the same job, like the Intel driver assistant, work... but they are making non-SSL calls to localhost, and this is accepted by the browser because the CORS allow-origin in the preflight is fixed to the Intel website.
As an open SDK, my CORS settings are wide open... I may end up with a solution like this anyway, with a way to update this allow-origin collection remotely, and totally drop the HTTPS approach... which is sad.

Hi @Echtelion,
There is no need to apologize for bad English.
I can't say that I have an answer for the programmatic problem you present.
But maybe there is a simple more basic solution.
Like:

  • can you store information in a locally, or globally, accessible file/database?
    [each device should have a unique MAC address, you could use that to identify the device and the cert associated with it - even use that for the cert FQDN = (MAC+.your.domain)]
  • can you store information in local system variables?

Hi @rg305,

The web page will be unable to access its MAC address to deduce the subdomain. I could call a webservice to find it, but I don't see this MAC address available remotely in my ASP.NET router (I see the remote IP and port, which are probably wrong due to the customer proxy, but I don't see the MAC address).

As a Windows service, I can write a lot of things on the system, but the main issue is still "how can the web page read it".
I'm not a fan of changing things in other systems programmatically, like the browser settings.
I don't want to be considered a virus, like some antivirus software that hooked itself into the browser... It also requires implementing the change in every browser, and that changes at every update...
A lot of work for a bad approach, in my mind.

I think I have a solution without HTTPS, by changing my CORS settings from "allow-origin=*" to "allow-origin={HttpContext.Request.Headers["Origin"].ToString()}" on the OPTIONS preflight response.
The browser subsequently allows the local HTTP query to be made even from an HTTPS website, so it ignores the "mixed active content" policy in this particular configuration.
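
In ASP.NET Core, if I'm not wrong, it looks roughly like this (when you use an origin predicate instead of AllowAnyOrigin, the CORS middleware echoes the caller's Origin back instead of "*"):

```csharp
// Rough sketch (ASP.NET Core minimal API): reflect the request Origin in the
// preflight response instead of answering "Access-Control-Allow-Origin: *".
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
    options.AddDefaultPolicy(policy =>
        policy.SetIsOriginAllowed(_ => true)   // accept any origin, but echo it back explicitly
              .AllowAnyHeader()
              .AllowAnyMethod()));

var app = builder.Build();
app.UseCors();
app.MapGet("/interop/connect", () => Results.Ok());   // placeholder endpoint
app.Run();
```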

Of course, the new "HTTPS only" option of some browsers blocks this behavior, but for now it's not on by default... This fix will hold until the browsers change their policy, once again...

I'm still searching for a way to have an HTTPS endpoint on localhost that respects the CA rules, because I'm afraid the browsers will, one day or another, block this workaround.

The installation process itself can retrieve whatever system information is needed to complete the install.
Or access a remote web site for "registration" and obtain a unique "identification string" during the install.
It can then tattoo that information onto itself and use it indefinitely going forward.

The problem is not my app identifying itself and retrieving a certificate.

It's how the other services, which are JavaScript web pages with the browser's limitations, can identify which hostname to use in order to access localhost with the matching certificate.

Then your app can run a web service to provide that localized information.
As it is the only one that knows the required information, it should be the one to serve it.
http://127.0.0.1:9999/
returns only:
"gbd73gb8eb34"
^ the unique "identification string"
Or the FQDN:
"gbd73gb8eb34.your.domain"
OR a redirection to the resource being requested:
return 301 https://gbd73gb8eb34.your.domain:5001/
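
A minimal sketch of such a service (names and port are just examples):

```csharp
// Rough sketch: tiny local HTTP service whose only job is to tell callers
// where the real HTTPS endpoint lives (string, FQDN, or redirect - pick one).
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/", () => "gbd73gb8eb34");                     // the unique identification string
app.MapGet("/fqdn", () => "gbd73gb8eb34.your.domain");     // or the full host name
app.MapGet("/go", () =>
    Results.Redirect("https://gbd73gb8eb34.your.domain:5001/", permanent: true)); // or a 301

app.Run("http://127.0.0.1:9999");
```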

And we are back to the issue mentioned before, about performing an HTTP AJAX query from an HTTPS website, which is blocked by default as "mixed active content" by the browser.

As stated before, this can actually work with fine-tuning of the CORS preflight response, and with browsers allowing this particular use case when hitting "localhost" only, which has a specific exemption in browsers (see for example https://bugzilla.mozilla.org/show_bug.cgi?id=1631384).

But if this works, why even bother with HTTPS... doing everything over HTTP is what I'm implementing right now as a workaround.

In the future, I assume that browsers will no longer allow HTTP connections to localhost, so the whole SDK will be unavailable over HTTP, and your idea of this local pre-request will also not be possible.

They are two separate connections (one is HTTP, then the following is HTTPS).
What am I missing?

Let's take a simple use case:

  • The customer loads a partner web page, for example "https://signmydocument.com", which is HTTPS
  • This page shows a document for the user to digitally sign. The user accepts.
  • The webpage "uploads" the document to the local SDK, thus hitting http://localhost:5000/tools/sign, i.e. making an HTTP AJAX query from an HTTPS webpage.
  • The SDK gets the document, signs it with the user's smartcard, and returns the signed file in the response body back to the webpage.
  • The webpage then uploads this signed document to its backend website.

Now I want the local SDK to respond over HTTPS, because I assume HTTP will soon be blocked by browsers.
If HTTP is blocked, your idea of a first HTTP query to get the HTTPS domain matching this computer will also be blocked.
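
For reference, today's localhost endpoint is roughly like this (the smartcard part is only a placeholder here):

```csharp
// Rough sketch of the local signing endpoint; the smartcard call is a placeholder.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapPost("/tools/sign", async (HttpRequest request) =>
{
    using var buffer = new MemoryStream();
    await request.Body.CopyToAsync(buffer);                   // document uploaded by the partner page
    var signed = SignWithSmartcard(buffer.ToArray());         // placeholder for the real smartcard signing
    return Results.File(signed, "application/octet-stream");  // signed file goes back in the response body
});

app.Run("http://localhost:5000");                             // this is what should become HTTPS

static byte[] SignWithSmartcard(byte[] document) => document; // placeholder implementation
```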

That's a typical setup problem.

If no cookie (or other stored value) is found: the application has to ask - a simple input box.

If the value works, it's saved (cookie or one of the other browser-based storage options).

I don't see a real problem.

Or create and use an online service (registering with a mail address + password; mail is enough to get such a random value).

I find this process too complicated for the end users (most of them are secretaries and lawyers, with near-zero IT knowledge), and it requires additional steps for the companies which will use this SDK.

But that opens up new ideas for me to simplify this process.
A webpage would not be able to make an HTTP AJAX call to another domain, but for now it can still open an HTTP popup to another domain, so it could fire a popup to http://localhost:5000/help/config for example.
The popup could display the correct subdomain to the user, so the user can copy/paste it into the partner webpage.
The popup could also use a web message to notify the calling window with this same data in order to have an automated process (a rough sketch is at the end of this post).

I'll take that idea.

It won't work in the case of a total ban of HTTP, but it gives me another workaround if the previously mentioned exception stops working.
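
For example, the /help/config popup page could be served by the SDK like this (rough sketch; in real code the target origin of the web message should be restricted):

```csharp
// Rough sketch: the SDK serves a tiny page that shows the subdomain and also
// posts it back to the window that opened the popup (window.opener).
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/help/config", () =>
{
    const string subdomain = "local12345.mydomain.com";   // placeholder for the real per-machine value
    var html = $@"<!doctype html>
<html><body>
  <p>Local SDK endpoint: <b>{subdomain}</b></p>
  <script>
    // notify the partner page that opened this popup
    if (window.opener) window.opener.postMessage('{subdomain}', '*');
  </script>
</body></html>";
    return Results.Content(html, "text/html");
});

app.Run("http://localhost:5000");
```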