Private Inference Endpoints
How to create private inference endpoints
By default, endpoints created for both LLM Serving and General Inference deployments are publicly accessible. To restrict access, make the endpoint private by selecting the "Make it a private endpoint?" option.
This setting generates a TLS certificate at deployment time, which you can download. Requests to the endpoint will then only succeed when the HTTP client uses this certificate. Here are a few examples:
Using the curl command
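A minimal sketch, assuming the downloaded file acts as the CA certificate that verifies the endpoint's TLS connection; the endpoint URL, certificate path, and request body below are placeholders to adapt to your deployment:

```bash
# Pass the downloaded certificate so curl can verify the endpoint's TLS.
# URL, path, and payload are placeholders.
curl --cacert ./endpoint-cert.pem \
     -H "Content-Type: application/json" \
     -d '{"prompt": "Hello, world", "max_tokens": 32}' \
     https://my-endpoint.example.com/v1/completions
```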
Using the httpx library
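The same idea in Python: httpx accepts a path to a CA bundle through its `verify` argument, so pointing it at the downloaded certificate lets it trust the private endpoint. The URL, certificate path, and payload are again placeholders:

```python
import httpx

# Use the downloaded certificate as the CA bundle so httpx can verify
# the endpoint's TLS certificate (URL and path are placeholders).
client = httpx.Client(verify="./endpoint-cert.pem")

response = client.post(
    "https://my-endpoint.example.com/v1/completions",
    json={"prompt": "Hello, world", "max_tokens": 32},
)
print(response.json())
```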
Using the OpenAI client library
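A sketch for LLM Serving deployments, assuming the endpoint exposes an OpenAI-compatible API: the OpenAI Python client accepts a custom httpx client, which is how the certificate is carried. The base URL, API key, model name, and certificate path are placeholders:

```python
import httpx
from openai import OpenAI

# Route the OpenAI client's requests through an httpx client that trusts
# the downloaded certificate. All values below are placeholders.
client = OpenAI(
    base_url="https://my-endpoint.example.com/v1",
    api_key="unused",  # set this according to your deployment's auth
    http_client=httpx.Client(verify="./endpoint-cert.pem"),
)

completion = client.completions.create(
    model="my-model",
    prompt="Hello, world",
    max_tokens=32,
)
print(completion.choices[0].text)
```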