nginx SSL Management in Detail
Published: 2019-05-10


3) Managing SSL Traffic

This section describes how to configure an HTTPS server on nginx.

3-1) SSL Termination - Delivering web content over HTTPS

3-1-1) Setting up an HTTPS Server

To set up an HTTPS server, in your nginx.conf file specify the ssl parameter on the listen directive in the server block, then set the locations of the server certificate and private key files:

server {
    listen              443 ssl;
    server_name         www.example.com;
    ssl_certificate     www.example.com.crt;
    ssl_certificate_key www.example.com.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ...
}

The server certificate is a public entity. It is sent to every client that connects to the server. The private key is a secure entity and should be stored in a file with restricted access. However, nginx's master process must be able to read this file. The private key may alternatively be stored in the same file as the certificate:

ssl_certificate     www.example.com.cert;
ssl_certificate_key www.example.com.cert;

In this case the file access rights should also be restricted. Though the certificate and the key are stored in one file in this case, only the certificate is sent to a client.
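As an illustration of restricting those access rights, the sketch below creates a stand-in key file and sets owner-only permissions (the path is hypothetical; in practice you would apply the same mode to the real key or combined certificate/key file):

```python
import os
import stat
import tempfile

# Stand-in for a real private key file such as /etc/nginx/ssl/www.example.com.key.
key_path = os.path.join(tempfile.mkdtemp(), "www.example.com.key")
with open(key_path, "w") as f:
    f.write("-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n")

# Owner read/write only (0600): the nginx master process, typically started
# as root, can read the key, but other users cannot.
os.chmod(key_path, 0o600)

mode = stat.S_IMODE(os.stat(key_path).st_mode)
print(oct(mode))  # 0o600
```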

The ssl_protocols and ssl_ciphers directives can be used to limit connections to include only the strong versions and ciphers of SSL/TLS.

Since version 1.0.5, nginx has used ssl_protocols SSLv3 TLSv1 and ssl_ciphers HIGH:!aNULL:!MD5 by default; since versions 1.1.13 and 1.0.12, the default protocols were updated to ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2.

Vulnerabilities are sometimes found in the design of older ciphers, and you would be wise to disable these in a modern nginx configuration (unfortunately, the default configuration cannot easily be changed because of concerns about backward compatibility for existing nginx deployments). Note that CBC-mode ciphers may be vulnerable to a number of attacks, the BEAST attack in particular, and that SSLv3 is best avoided because of the POODLE attack, unless you need to support legacy clients.

3-1-2) HTTPS Server Optimization

SSL operations consume extra CPU resources. The most CPU-intensive operation is the SSL handshake. There are two ways to minimize the number of these operations per client:

  • Enabling keepalive connections to send several requests via one connection
  • Reusing SSL session parameters to avoid SSL handshake for parallel and subsequent connections

Sessions are stored in the SSL session cache, which is shared between worker processes and configured by the ssl_session_cache directive. One megabyte of cache contains about 4000 sessions. The default cache timeout is 5 minutes. This timeout can be increased with the ssl_session_timeout directive. Below is a sample configuration optimized for a multi-core system with a 10-megabyte shared session cache:

worker_processes auto;

http {
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen              443 ssl;
        server_name         www.example.com;
        keepalive_timeout   70;
        ssl_certificate     www.example.com.crt;
        ssl_certificate_key www.example.com.key;
        ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers         HIGH:!aNULL:!MD5;
        ...
    }
}
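The sizing rule quoted above (one megabyte of cache holds roughly 4000 sessions) lends itself to a quick back-of-the-envelope calculation. This is only an illustration of the arithmetic, not anything nginx provides:

```python
# Rule of thumb from the text: 1 MB of shared session cache ~ 4000 sessions.
SESSIONS_PER_MB = 4000

def cache_size_mb(expected_sessions: int) -> int:
    """Smallest whole-megabyte cache that fits the expected session count."""
    return -(-expected_sessions // SESSIONS_PER_MB)  # ceiling division

# ~40,000 concurrent sessions fit in the 10 MB cache configured above.
print(cache_size_mb(40_000))  # 10
```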

3-1-3) SSL Certificate Chains

Some browsers may complain about a certificate signed by a well-known certificate authority, while other browsers accept the same certificate without issue. This occurs because the issuing authority has signed the server certificate using an intermediate certificate that is not present in the base of well-known trusted certificate authorities distributed with a particular browser. In this case the authority provides a bundle of chained certificates that should be concatenated to the signed server certificate. The server certificate must appear before the chained certificates in the combined file:

cat www.example.com.crt bundle.crt > www.example.com.chained.crt

The resulting file should be used in the ssl_certificate directive:

server {
    listen              443 ssl;
    server_name         www.example.com;
    ssl_certificate     www.example.com.chained.crt;
    ssl_certificate_key www.example.com.key;
    ...
}

If the server certificate and the bundle have been concatenated in the wrong order, nginx will fail to start and will display the following error message:

SSL_CTX_use_PrivateKey_file(" ... /www.example.com.key") failed
   (SSL: error:0B080074:x509 certificate routines:
    X509_check_private_key:key values mismatch)

The error happens because nginx has tried to use the private key with the bundle’s first certificate instead of the server certificate.
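A quick way to catch the wrong-order mistake before reloading nginx is to split the combined file into its PEM blocks and confirm that the leaf certificate comes first. This minimal sketch uses stand-in PEM contents; inspecting real subjects would require an X.509 parser such as the cryptography package:

```python
def pem_blocks(text: str) -> list:
    """Split a concatenated PEM file into individual certificate blocks."""
    blocks, current, inside = [], [], False
    for line in text.splitlines():
        if line == "-----BEGIN CERTIFICATE-----":
            inside, current = True, [line]
        elif line == "-----END CERTIFICATE-----":
            current.append(line)
            blocks.append("\n".join(current))
            inside = False
        elif inside:
            current.append(line)
    return blocks

# Stand-in contents; a real chained file holds base64-encoded DER.
combined = """-----BEGIN CERTIFICATE-----
leaf
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediate
-----END CERTIFICATE-----"""

blocks = pem_blocks(combined)
print(len(blocks))          # 2
print("leaf" in blocks[0])  # True: the server certificate comes first
```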

Browsers usually store the intermediate certificates they receive that are signed by trusted authorities. So actively used browsers may already have the required intermediate certificates and may not complain about a certificate sent without the chained bundle. To ensure that the server sends the complete certificate chain, the openssl command-line utility may be used:

$ openssl s_client -connect www.godaddy.com:443
...
Certificate chain
 0 s:/C=US/ST=Arizona/L=Scottsdale/1.3.6.1.4.1.311.60.2.1.3=US
     /1.3.6.1.4.1.311.60.2.1.2=AZ/O=GoDaddy.com, Inc
     /OU=MIS Department/CN=www.GoDaddy.com
     /serialNumber=0796928-7/2.5.4.15=V1.0, Clause 5.(b)
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc.
     /OU=http://certificates.godaddy.com/repository
     /CN=Go Daddy Secure Certification Authority
     /serialNumber=07969287
 1 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc.
     /OU=http://certificates.godaddy.com/repository
     /CN=Go Daddy Secure Certification Authority
     /serialNumber=07969287
   i:/C=US/O=The Go Daddy Group, Inc.
     /OU=Go Daddy Class 2 Certification Authority
 2 s:/C=US/O=The Go Daddy Group, Inc.
     /OU=Go Daddy Class 2 Certification Authority
   i:/L=ValiCert Validation Network/O=ValiCert, Inc.
     /OU=ValiCert Class 2 Policy Validation Authority
     /CN=http://www.valicert.com//emailAddress=info@valicert.com
...

In this example the subject (“s”) of the www.GoDaddy.com server certificate #0 is signed by an issuer (“i”) which itself is the subject of the certificate #1. This certificate #1 is signed by an issuer which itself is the subject of the certificate #2. This certificate, however, is signed by the well-known issuer ValiCert, Inc. whose certificate is stored in the browsers themselves.
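The subject/issuer relationship described above amounts to a simple invariant: each certificate's issuer must be the subject of the next certificate in the chain. A toy model of that check, using abbreviated names rather than the literal distinguished names from the output:

```python
def chain_is_linked(chain: list) -> bool:
    """True if each certificate's issuer is the subject of the next one."""
    return all(chain[k]["issuer"] == chain[k + 1]["subject"]
               for k in range(len(chain) - 1))

# Abbreviated stand-ins for certificates #0, #1, and #2 from the output above.
chain = [
    {"subject": "www.GoDaddy.com",     "issuer": "Go Daddy Secure CA"},
    {"subject": "Go Daddy Secure CA",  "issuer": "Go Daddy Class 2 CA"},
    {"subject": "Go Daddy Class 2 CA", "issuer": "ValiCert, Inc."},
]

print(chain_is_linked(chain))  # True
```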

If a certificate bundle has not been added, only the server certificate #0 will be shown.

3-1-4) A single HTTP/HTTPS Server

It is possible to configure a single server that handles both HTTP and HTTPS requests by placing one listen directive with the ssl parameter and one without in the same virtual server:

server {
    listen              80;
    listen              443 ssl;
    server_name         www.example.com;
    ssl_certificate     www.example.com.crt;
    ssl_certificate_key www.example.com.key;
    ...
}

In nginx version 0.7.13 and earlier, SSL cannot be enabled selectively for individual listening sockets, as shown above. SSL can only be enabled for the entire server using the ssl directive, making it impossible to set up a single HTTP/HTTPS server. The ssl parameter of the listen directive was added to solve this issue. The ssl directive is therefore deprecated in version 0.7.14 and later.

3-1-5) Name-Based HTTPS Servers

A common issue arises when two or more HTTPS servers are configured to listen on a single IP address:

server {
    listen          443 ssl;
    server_name     www.example.com;
    ssl_certificate www.example.com.crt;
    ...
}

server {
    listen          443 ssl;
    server_name     www.example.org;
    ssl_certificate www.example.org.crt;
    ...
}

With this configuration, a browser receives the default server's certificate, in this case the one for www.example.com, regardless of the requested server name. This is caused by the behavior of the SSL protocol itself: the SSL connection is established before the browser sends an HTTP request, so nginx does not know the name of the requested server and can only offer the default server's certificate.

The best way to solve this issue is to assign a separate IP address to every HTTPS server:

server {
    listen          192.168.1.1:443 ssl;
    server_name     www.example.com;
    ssl_certificate www.example.com.crt;
    ...
}

server {
    listen          192.168.1.2:443 ssl;
    server_name     www.example.org;
    ssl_certificate www.example.org.crt;
    ...
}

Note that there are also some specific proxy settings for HTTPS upstreams (proxy_ssl_ciphers, proxy_ssl_protocols, and proxy_ssl_session_reuse) which can be used for fine-tuning SSL between nginx and upstream servers.

3-1-5-1) An SSL Certificate With Several Names

There are other ways to share a single IP address between several HTTPS servers. However, all of them have drawbacks. One way is to use a certificate with several names in the SubjectAltName certificate field, for example, www.example.com and www.example.org. However, the length of the SubjectAltName field is limited.

Another way is to use a certificate with a wildcard name, for example, *.example.org. A wildcard certificate secures all subdomains of the specified domain, but only on one level. This certificate matches www.example.org, but does not match example.org or www.sub.example.org.
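The one-level matching rule for wildcard names can be made concrete with a small sketch. This mirrors the rule described above for illustration; it is not nginx's or OpenSSL's actual matching code:

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Match a certificate name against a hostname, one label per wildcard."""
    if not pattern.startswith("*."):
        return pattern == hostname
    suffix = pattern[1:]               # ".example.org"
    if not hostname.endswith(suffix):
        return False
    prefix = hostname[: -len(suffix)]  # the part covered by "*"
    # Exactly one non-empty label: "www" matches, "www.sub" does not.
    return bool(prefix) and "." not in prefix

print(wildcard_matches("*.example.org", "www.example.org"))      # True
print(wildcard_matches("*.example.org", "example.org"))          # False
print(wildcard_matches("*.example.org", "www.sub.example.org"))  # False
```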

It is better to place the certificate file with several names and its private key file at the http level of your configuration, so that all servers inherit a single memory copy of them:

ssl_certificate     common.crt;
ssl_certificate_key common.key;

server {
    listen          443 ssl;
    server_name     www.example.com;
    ...
}

server {
    listen          443 ssl;
    server_name     www.example.org;
    ...
}

3-1-5-2) Server Name Indication

A more generic solution for running several HTTPS servers on a single IP address is the TLS Server Name Indication extension (SNI, RFC 6066), which allows a browser to pass the requested server name during the SSL handshake. With this solution, the server knows which certificate to use for the connection. However, SNI has limited browser support. Currently it is supported starting with the following browser versions:

  • Opera 8.0;
  • MSIE 7.0 (but only on Windows Vista or higher);
  • Firefox 2.0 and other browsers using Mozilla Platform rv:1.8.1;
  • Safari 3.2.1 (the Windows version supports SNI on Vista or higher);
  • Chrome (the Windows version supports SNI on Vista or higher, too).

Only domain names can be passed in SNI. However, some browsers will pass the IP address of the server as its name if a request includes a literal IP address. It is best not to rely on this.
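As an aside, Python's ssl module exposes the same OpenSSL capability that nginx reports, which makes it easy to check whether the local OpenSSL build supports SNI; the server_hostname argument is what places the requested name in the TLS ClientHello:

```python
import ssl

# Whether the underlying OpenSSL library supports SNI
# (analogous to nginx's "TLS SNI support enabled" banner).
print(ssl.HAS_SNI)          # True on any modern OpenSSL build
print(ssl.OPENSSL_VERSION)  # e.g. the linked OpenSSL version string

# When connecting, server_hostname puts the name into the SNI extension:
ctx = ssl.create_default_context()
# sock = ctx.wrap_socket(raw_socket, server_hostname="www.example.org")
```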

In order to use SNI in nginx, it must be supported both in the OpenSSL library with which the nginx binary was built and in the library it is dynamically linked with at run time. OpenSSL has supported SNI since version 0.9.8f, if it was built with the configuration option --enable-tlsext. Since OpenSSL version 0.9.8j, this option is enabled by default. If nginx was built with SNI support, it shows the following when run with the -V switch:

$ nginx -V
...
TLS SNI support enabled
...

However, if an SNI-enabled nginx is linked dynamically to an OpenSSL library without SNI support, nginx displays the warning:

NGINX was built with SNI support, however, now it is linked
dynamically to an OpenSSL library which has no tlsext support,
therefore SNI is not available

3-2) SSL Termination for TCP upstream - Delivering TCP traffic over HTTPS

This article explains how to set up SSL termination for NGINX Plus and a load-balanced group of servers that accept TCP connections.

3-2-1) What is SSL Termination

SSL termination means that NGINX Plus acts as the server-side SSL endpoint for connections with clients: it performs the decryption of requests and the encryption of responses that backend servers would otherwise have to do. The operation is called termination because NGINX Plus closes the client connection and forwards the client data over a newly created, unencrypted connection to the servers in an upstream group. In Release R6 and later, NGINX Plus performs SSL termination for TCP connections as well as HTTP connections.

3-2-2) Prerequisites

  • NGINX Plus R6 or later
  • A load-balanced upstream group with several TCP servers
  • SSL certificates and a private key (obtained or self-generated)

3-2-3) Obtaining SSL Certificates

First, you will need to obtain a server certificate and a private key and put them on the server. A certificate can be obtained from a trusted certificate authority (CA) or generated using an SSL library such as OpenSSL.

3-2-4) Configuring Nginx Plus

To configure SSL termination, add the following directives to the NGINX Plus configuration:

Enabling SSL

To enable SSL, specify the ssl parameter of the listen directive for the TCP server that passes connections to an upstream server group:

stream {
    server {
        listen     12345 ssl;
        proxy_pass backend;
        ...
    }
}

Adding SSL Certificates

To add SSL certificates, specify the path to the certificates (which must be in the PEM format) with the ssl_certificate directive, and specify the path to the private key in the ssl_certificate_key directive:

server {
    ...
    ssl_certificate     /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;
}

Additionally, the ssl_protocols and ssl_ciphers directives can be used to limit connections to only the strong versions and ciphers of SSL/TLS:

server {
    ...
    ssl_protocols  SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers    HIGH:!aNULL:!MD5;
}

The ssl_ciphers directive tells nginx to inform the SSL library which ciphers it prefers.

3-2-5) Speeding up Secure TCP Connections

Implementing SSL/TLS can significantly impact server performance, because the SSL handshake operation is quite CPU-intensive. The default timeout for the SSL handshake is 60 seconds; it can be redefined with the ssl_handshake_timeout directive. We do not recommend setting this value too low or too high, as that may result either in handshake failure or a long wait for the handshake to complete:

server {
    ...
    ssl_handshake_timeout 10s;
}

Optimizing the SSL Session Cache

Creating a cache of the session parameters that apply to each SSL/TLS connection reduces the number of handshakes and thus can significantly improve performance. Caching is set with the ssl_session_cache directive.

ssl_session_cache;

By default, NGINX Plus uses the built-in type of session cache, meaning the cache built into your SSL library. This is not optimal, because such a cache can be used by only one worker process and can cause memory fragmentation. Set the ssl_session_cache directive to shared to share the cache among all worker processes, which speeds up later connections because the connection setup information is already known:

ssl_session_cache shared:SSL:1m;

As a reference, a 1-MB shared cache can hold approximately 4,000 sessions.

By default, NGINX Plus retains cached session parameters for five minutes. Increasing the value of the ssl_session_timeout directive to several hours can improve performance, because reusing cached session parameters reduces the number of time-consuming handshakes. When you increase the timeout, the cache needs to be bigger to accommodate the larger number of cached parameters that results. For the 4-hour timeout in the following example, a 20-MB cache is appropriate:

ssl_session_timeout 4h;

If the timeout length is increased, you need a larger cache to store sessions, for example, 20 MB:

server {
    ...
    ssl_session_cache   shared:SSL:20m;
    ssl_session_timeout 4h;
}

These lines create an in-memory cache of 20 MB to store session information, and instruct NGINX Plus to reuse session parameters from the cache for 4 hours after the moment they were added.

Session Tickets

Session tickets are an alternative to the session cache. Session information is stored on the client side, eliminating the need for a server-side cache to store session information. When a client resumes interaction with the backend server, it presents the session ticket and re-negotiation is not necessary. Set the ssl_session_tickets directive to on:

server {
    ...
    ssl_session_tickets on;
}

When using session tickets for an upstream group, each upstream server must be initialized with the same session key. Because it is a best practice to change session keys frequently, we recommend that you implement a mechanism to rotate the shared key across all upstream servers:

server {
    ...
    ssl_session_tickets on;
    ssl_session_ticket_key /etc/ssl/session_ticket_keys/current.key;
    ssl_session_ticket_key /etc/ssl/session_ticket_keys/previous.key;
}
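One possible rotation scheme for the two-key layout above can be sketched as follows. It assumes nginx's requirement that a ticket key file contain exactly 48 or 80 bytes of random data; the directory is a stand-in for /etc/ssl/session_ticket_keys, and distributing the files to all upstream servers is out of scope:

```python
import os
import tempfile

# Stand-in for /etc/ssl/session_ticket_keys on a real server.
keydir = tempfile.mkdtemp()
current = os.path.join(keydir, "current.key")
previous = os.path.join(keydir, "previous.key")

def rotate() -> None:
    """Demote the current key to fallback and generate a fresh one."""
    if os.path.exists(current):
        os.replace(current, previous)   # old key still decrypts old tickets
    with open(current, "wb") as f:
        f.write(os.urandom(80))         # 80 bytes of random data, per nginx
    os.chmod(current, 0o600)            # keys must stay secret

rotate()  # first run: only current.key exists
rotate()  # second run: current.key is fresh, previous.key is the old one
print(os.path.getsize(current), os.path.getsize(previous))  # 80 80
```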

3-2-6) Complete Example

stream {
    upstream stream_backend {
        server backend1.example.com:12345;
        server backend2.example.com:12345;
        server backend3.example.com:12345;
    }

    server {
        listen                12345 ssl;
        proxy_pass            stream_backend;
        ssl_certificate       /etc/ssl/certs/server.crt;
        ssl_certificate_key   /etc/ssl/certs/server.key;
        ssl_protocols         SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers           HIGH:!aNULL:!MD5;
        ssl_session_cache     shared:SSL:20m;
        ssl_session_timeout   4h;
        ssl_handshake_timeout 30s;
        ...
    }
}

In this example, the directives in the server block instruct NGINX Plus to terminate and decrypt secured TCP traffic from clients and pass it unencrypted to the upstream group stream_backend, which consists of three servers.

The ssl parameter of the listen directive instructs NGINX Plus to accept SSL connections. When a client requests a secure TCP connection, NGINX Plus starts the handshake process, which uses the PEM-format certificate specified by the ssl_certificate directive, the certificate's private key specified by the ssl_certificate_key directive, and the protocols and ciphers listed by the ssl_protocols and ssl_ciphers directives.

As soon as the secure TCP connection is established, NGINX Plus caches the session parameters according to the ssl_session_cache directive. In the example, the session cache is shared between all worker processes (the shared parameter), is 20 MB in size (the 20m parameter), and retains each SSL session for reuse for 4 hours (the ssl_session_timeout directive).

3-3) SSL between Nginx and HTTP upstream - Securing HTTP traffic between nginx and upstream servers

This article explains how to encrypt HTTP traffic between nginx and an upstream group or a proxied server.

3-3-1) Prerequisites

  • NGINX Open Source or NGINX Plus
  • A proxied server or an upstream group of servers
  • SSL certificates and a private key

3-3-2) Obtain SSL Server Certificates

You can purchase a server certificate from a trusted certificate authority (CA), or you can create your own internal CA with the OpenSSL library and generate your own certificate. The server certificate, together with a private key, should be placed on each upstream server.

3-3-3) Obtain an SSL Client Certificate

Nginx will identify itself to the upstream servers by using an SSL client certificate. This client certificate must be signed by a trusted CA and is configured on nginx together with the corresponding private key.

You will also need to configure the upstream servers to require a client certificate for all incoming SSL connections, and to trust the CA that issued nginx's client certificate. Then, when nginx connects to the upstream, it will provide its client certificate and the upstream server will accept it.

3-3-4) Configuring Nginx

First, change the URL to the upstream group to support SSL connections. In the nginx configuration file, specify the "https" protocol for the proxied server or upstream group in the proxy_pass directive:

location /upstream {
    proxy_pass https://backend.example.com;
}

Add the client certificate and the key that will be used to authenticate nginx on each upstream server, with the proxy_ssl_certificate and proxy_ssl_certificate_key directives:

location /upstream {
    proxy_pass                https://backend.example.com;
    proxy_ssl_certificate     /etc/nginx/client.pem;
    proxy_ssl_certificate_key /etc/nginx/client.key;
}

If you use a self-signed certificate for an upstream or your own CA, also include the proxy_ssl_trusted_certificate directive. The file must be in the PEM format. Optionally, include the proxy_ssl_verify and proxy_ssl_verify_depth directives to have nginx check the validity of the security certificates:

location /upstream {
    ...
    proxy_ssl_trusted_certificate /etc/nginx/trusted_ca_cert.crt;
    proxy_ssl_verify       on;
    proxy_ssl_verify_depth 2;
    ...
}
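For intuition about what these directives configure, here is the equivalent client-side setup expressed with Python's ssl module. The load_* calls are commented out because the certificate paths are hypothetical; this is an illustration, not nginx's implementation:

```python
import ssl

# A TLS client context playing the role of nginx's upstream-facing side.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# proxy_ssl_trusted_certificate + proxy_ssl_verify on:
# trust anchors used to verify the upstream server's certificate.
# ctx.load_verify_locations("/etc/nginx/trusted_ca_cert.crt")

# proxy_ssl_certificate / proxy_ssl_certificate_key:
# the client certificate presented to the upstream server.
# ctx.load_cert_chain("/etc/nginx/client.pem", "/etc/nginx/client.key")

# PROTOCOL_TLS_CLIENT verifies the peer and checks hostnames by default,
# much like enabling proxy_ssl_verify.
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)  # True True
```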

Each new SSL connection requires a full SSL handshake between the client and server, which is quite CPU-intensive. To have NGINX proxy previously negotiated connection parameters and use a so-called abbreviated handshake, include the proxy_ssl_session_reuse directive:

location /upstream {
    ...
    proxy_ssl_session_reuse on;
    ...
}

Optionally, you can specify which SSL protocols and ciphers are used:

location /upstream {
    ...
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers   HIGH:!aNULL:!MD5;
}

3-3-5) Configuring Upstream Servers

Each upstream server should be configured to accept HTTPS connections. For each upstream server, specify a path to the server certificate and the private key with the ssl_certificate and ssl_certificate_key directives:

server {
    listen              443 ssl;
    server_name         backend1.example.com;
    ssl_certificate     /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;
    ...
    location /yourapp {
        proxy_pass http://url_to_app.com;
        ...
    }
}

Specify the path to a client certificate with the ssl_client_certificate directive:

server {
    ...
    ssl_client_certificate /etc/ssl/certs/ca.crt;
    ssl_verify_client      off;
    ...
}

3-3-6) Complete Example

http {
    ...
    upstream backend.example.com {
        server backend1.example.com:443;
        server backend2.example.com:443;
    }

    server {
        listen      80;
        server_name www.example.com;
        ...
        location /upstream {
            proxy_pass                    https://backend.example.com;
            proxy_ssl_certificate         /etc/nginx/client.pem;
            proxy_ssl_certificate_key     /etc/nginx/client.key;
            proxy_ssl_protocols           TLSv1 TLSv1.1 TLSv1.2;
            proxy_ssl_ciphers             HIGH:!aNULL:!MD5;
            proxy_ssl_trusted_certificate /etc/nginx/trusted_ca_cert.crt;
            proxy_ssl_verify        on;
            proxy_ssl_verify_depth  2;
            proxy_ssl_session_reuse on;
        }
    }

    server {
        listen      443 ssl;
        server_name backend1.example.com;
        ssl_certificate        /etc/ssl/certs/server.crt;
        ssl_certificate_key    /etc/ssl/certs/server.key;
        ssl_client_certificate /etc/ssl/certs/ca.crt;
        ssl_verify_client      off;
        location /yourapp {
            proxy_pass http://url_to_app.com;
            ...
        }
    }

    server {
        listen      443 ssl;
        server_name backend2.example.com;
        ssl_certificate        /etc/ssl/certs/server.crt;
        ssl_certificate_key    /etc/ssl/certs/server.key;
        ssl_client_certificate /etc/ssl/certs/ca.crt;
        ssl_verify_client      off;
        location /yourapp {
            proxy_pass http://url_to_app.com;
            ...
        }
    }
}

In this example, the “https” protocol in the proxy_pass directive specifies that the traffic forwarded by NGINX to upstream servers be secured.

When a secure connection is passed from NGINX to the upstream server for the first time, the full handshake process is performed. The proxy_ssl_certificate directive defines the location of the PEM-format certificate required by the upstream server, the proxy_ssl_certificate_key directive defines the location of the certificate’s private key, and the proxy_ssl_protocols and proxy_ssl_ciphers directives control which protocols and ciphers are used.

The next time NGINX passes a connection to the upstream server, session parameters will be reused because of the proxy_ssl_session_reuse directive, and the secured connection is established faster.

The trusted CA certificates in the file named by the proxy_ssl_trusted_certificate directive are used to verify the certificate on the upstream. The proxy_ssl_verify_depth directive specifies that two certificates in the certificates chain are checked, and the proxy_ssl_verify directive verifies the validity of certificates.

3-4) SSL between Nginx and TCP upstream - Securing TCP traffic between Nginx and upstream servers

This article explains how to secure TCP traffic between NGINX and a TCP upstream server or an upstream group of TCP servers.

3-4-1) Prerequisites

  • NGINX Plus R6 or later, or the latest NGINX Open Source compiled with the --with-stream and --with-stream_ssl_module configuration parameters
  • A proxied TCP server or an upstream group of TCP servers
  • SSL certificates and a private key

3-4-2) Configuring Nginx

In the NGINX configuration file, include the proxy_ssl directive in a server block at the stream level:

stream {
    server {
        ...
        proxy_pass backend;
        proxy_ssl  on;
    }
}

Then specify the path to the SSL client certificate required by the upstream server and the certificate’s private key:

server {
    ...
    proxy_ssl_certificate     /etc/ssl/certs/backend.crt;
    proxy_ssl_certificate_key /etc/ssl/certs/backend.key;
}

Optionally, you can specify which SSL protocols and ciphers are used:

server {
    ...
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers   HIGH:!aNULL:!MD5;
}

If you use certificates issued by a CA, also include the proxy_ssl_trusted_certificate directive to name the file containing the trusted CA certificates used to verify the upstream's security certificates. The file must be in the PEM format. Optionally, include the proxy_ssl_verify and proxy_ssl_verify_depth directives to have NGINX check the validity of the security certificates:

server {
    ...
    proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;
    proxy_ssl_verify       on;
    proxy_ssl_verify_depth 2;
}

Each new SSL connection requires a full SSL handshake between the client and server, which is quite CPU-intensive. To have NGINX proxy previously negotiated connection parameters and use a so-called abbreviated handshake, include the proxy_ssl_session_reuse directive:

proxy_ssl_session_reuse on;

3-4-3) Complete Example

stream {
    upstream backend {
        server backend1.example.com:12345;
        server backend2.example.com:12345;
        server backend3.example.com:12345;
    }

    server {
        listen     12345;
        proxy_pass backend;
        proxy_ssl  on;

        proxy_ssl_certificate         /etc/ssl/certs/backend.crt;
        proxy_ssl_certificate_key     /etc/ssl/certs/backend.key;
        proxy_ssl_protocols           TLSv1 TLSv1.1 TLSv1.2;
        proxy_ssl_ciphers             HIGH:!aNULL:!MD5;
        proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;
        proxy_ssl_verify        on;
        proxy_ssl_verify_depth  2;
        proxy_ssl_session_reuse on;
    }
}

In this example, the proxy_ssl directive specifies that TCP traffic forwarded by NGINX to upstream servers be secured.

When a secure TCP connection is passed from NGINX to the upstream server for the first time, the full handshake process is performed. The upstream server asks NGINX to present a security certificate specified in the proxy_ssl_certificate directive. The proxy_ssl_protocols and proxy_ssl_ciphers directives control which protocols and ciphers are used.

The next time NGINX passes a connection to the upstream, session parameters will be reused because of the proxy_ssl_session_reuse directive, and the secured TCP connection is established faster.

The trusted CA certificates in the file named by the proxy_ssl_trusted_certificate directive are used to verify the certificate on the upstream server. The proxy_ssl_verify_depth directive specifies that two certificates in the certificates chain are checked, and the proxy_ssl_verify directive verifies the validity of certificates.
