We've been using cert-manager with a Let's Encrypt ClusterIssuer for years with no problems.
Recently one of our dev branches started hitting this: every time we deploy a new version, a new certificate is issued, and since it's an actively used environment it runs into Let's Encrypt's limit of 5 duplicate certificates for the same domain per week. It's the only env/namespace where this happens; in every other one, if the certificate is still valid, it isn't regenerated on new deployments, only when it expires.
Any idea why this is happening or what I can do to fix it?
I'm not familiar with the specific systems, but the core idea is that the certificates need to live in persistent storage, not as part of the ephemeral container environment. It sounds like you might already understand that, though. Since some of your environments work that way and others don't, can you compare how those environments are configured differently? Can you post some of the configuration details here so that hopefully someone familiar with configuring that ACME client can help?
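One concrete way to do that comparison, assuming cert-manager's CRDs are installed and using placeholder namespace names `dev-ok` and `dev-broken`:

```shell
# Dump the Certificate and Ingress specs from a working and a broken
# namespace, then diff them. Namespace names are placeholders.
kubectl get certificate,ingress -n dev-ok -o yaml > ok.yaml
kubectl get certificate,ingress -n dev-broken -o yaml > broken.yaml
diff ok.yaml broken.yaml

# Recent issuance activity often shows why a cert was re-requested:
kubectl get certificaterequests -n dev-broken
kubectl describe certificate -n dev-broken
```

Things worth looking for in the diff: whether the broken namespace's Ingress carries cert-manager annotations the others don't, whether the TLS `secretName` changes between deployments, and whether the Secret the Certificate writes to survives a redeploy.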
The strategy I implemented at my previous employer was to issue a certificate for the necessary domain names via a centralized, dummy Ingress definition whose backing service doesn't matter. The real Ingress definitions for your actual environments can then have their defaults configured to use the centralized certificate located in cert-manager's namespace. This prevents repeat issuance and stops the Ingress manifests from "fighting over" ownership and maintenance of the certificate, and consequently reporting incorrect namespace and port info to kubectl. Though it's probably obvious, this strategy completely avoids issuing a new certificate when you update any SDLC environment, including production, unless you decide to uninstall cert-manager.
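A sketch of what that can look like. All names here are placeholders (domain `dev.example.com`, Secret `shared-tls`, ClusterIssuer `letsencrypt-prod`), and the second fragment assumes the ingress-nginx controller, which the original poster may or may not be using:

```yaml
# A centralized Certificate managed directly by cert-manager, decoupled
# from any environment's Ingress manifests:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: shared-cert
  namespace: cert-manager
spec:
  secretName: shared-tls          # Secret that will hold the issued cert/key
  dnsNames:
    - dev.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
```

```yaml
# ingress-nginx Helm values: point the controller's default certificate
# at the shared Secret. Ingresses that omit spec.tls.secretName (and
# carry no cert-manager annotations) then fall back to this cert, so
# redeploying an environment never triggers a new issuance.
controller:
  extraArgs:
    default-ssl-certificate: cert-manager/shared-tls
```

With this in place, the per-environment Ingress manifests need no cert-manager annotations and no `tls.secretName`, so nothing in a deployment can ask for a fresh certificate.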