I appreciate the suggestion, but in my case it may not work, for a few reasons (although I may be mistaken; feel free to correct me later):
I've tried this kind of proxying before, but because my application uses MVC logic, it simply doesn't work, and I can't explain why (I'm practically an Nginx newbie).
Doesn't that approach handle cert generation via the .well-known folder (i.e. via ACME)? I don't use ACME, or any similar method... I mean... I do use ACME, I know, because you pointed that out to me... Lol. What I mean is that I use the traditional way of doing it: 'certbot --nginx -d domain.com --no-redirect && systemctl restart nginx'. Then I just use a cron job to periodically renew all certificates at once. I think I've explained that before.
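For reference, the cron-based renewal being described typically looks something like the fragment below. This is only a sketch: the schedule and the decision to reload (rather than restart) Nginx are assumptions on my part, not something from the thread.

```shell
# Hypothetical root crontab entry (edit with `crontab -e` as root).
# `certbot renew` only renews certificates that are close to expiry,
# so it is safe to run it frequently; the reload makes Nginx pick up
# any newly issued files.
17 3,15 * * * certbot renew --quiet && systemctl reload nginx
```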
I didn't quite understand that part about proxying to 8080. It seems like a workaround to me.
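For anyone following along: "proxying to 8080" presumably refers to running the application on a local port and having Nginx forward traffic to it, along these lines (the server name and port are assumptions, not taken from the thread):

```nginx
server {
    listen 80;
    server_name domain.com;

    location / {
        # Forward all requests to the app listening on localhost:8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```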
And none of you has answered my question yet: which option would be "less bad" in that hypothetical case?
Regarding permissions: the 'most correct' way to run any service or script is under its own dedicated user account, so that you can grant it just the permissions it needs to do its work. Most often, though, people don't do that because it's extra work to configure.
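On a systemd distro, one concrete way to do this is to create a system account (e.g. `useradd --system --shell /usr/sbin/nologin myappsvc`) and pin the service to it in its unit file. The sketch below uses hypothetical service, account, and binary names:

```ini
# /etc/systemd/system/myapp.service (all names here are illustrative)
[Unit]
Description=Example app running under its own dedicated account

[Service]
# Run as an unprivileged account instead of root, so the process
# only ever has the permissions explicitly granted to that user.
User=myappsvc
Group=myappsvc
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target
```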
Where you keep the certificate files doesn't matter as long as your webserver process has permission to read them. It's not necessarily dangerous for a webserver to be able to read a file that was written by root; it's dangerous for a webserver to be able to write files as root, because someone will always eventually come along and try to break your webserver enough to write the files they want.
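To make that split concrete, here is a minimal sketch of the read-but-not-write arrangement. The paths are illustrative (a temp directory stands in for /etc/letsencrypt so it's safe to run anywhere), and the filename is just certbot's usual naming:

```shell
# Certs owned by the writing user, readable but not writable by the
# webserver's group (mode 640: owner rw, group r, others nothing).
certdir=$(mktemp -d)
echo "dummy certificate" > "$certdir/fullchain.pem"
chmod 640 "$certdir/fullchain.pem"

# A group member (the webserver) can read the file...
cat "$certdir/fullchain.pem"

# ...but the mode bits grant the group no write access:
stat -c '%a' "$certdir/fullchain.pem"
```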
Regarding scale, it's a great problem to have, because it implies you have many customers. Some people design for scale (e.g. using Kubernetes) and never see a single customer, so I'd say keep in mind that scale may be a concern in the future, but concentrate on the minimum product that someone could use. I've built many (many) systems over the last three decades and several of those were multi-tenant, yet only a handful have needed significant scale. A single webserver and database instance can generally handle hundreds of tenants, depending on how efficient the application design is (or how much work the server application has to do), but you will quickly need redundancy and quick recovery, particularly for databases.
Note that Google Cloud etc. offer free trials with hundreds of dollars of credit, so they're great for experimenting (e.g. load balancing, container groups, different kinds of databases); in fact, there's too much choice.
The following two excerpts from @webprofusion's reply are probably the best overall technology advice I have seen on this forum, and I can't LIKE them enough: