PHP-FPM (FastCGI Process Manager) mode is very simple, and developers never have to worry about memory leaks.
It works well enough behind nginx, but php-fpm itself is a synchronous blocking process model: all resources are released when a request ends, including every object created during framework initialization. The PHP process idles through a create -> destroy -> create cycle that consumes a lot of CPU and limits single-machine throughput. And while a request blocks on IO, the worker cannot release its resources, so they sit idle and are wasted.
The php-fpm process model is also very simple: it is the pre-forked child process model, which early Apache used as well. A process per connection carries a lot of overhead, which greatly limits throughput; capacity is determined by the number of processes.
Pre-forked child process mode
After the program starts, N child processes are created. Each child process enters
accept and waits for a new connection. When a client connects to the server, one of the child processes wakes up, handles that client's request, and accepts no further
TCP connections in the meantime. When the connection closes, the child process is freed, re-enters
accept, and takes part in handling new connections.
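The loop above can be sketched in a few lines of Python (an illustrative analogy, not php-fpm's actual source). The parent creates one listening socket and forks N children; every child blocks in accept(), and whichever one the kernel wakes handles that single connection. For the demo to terminate, each child here serves one connection and exits instead of looping back into accept():

```python
# Minimal pre-forked accept model, sketched in Python (assumed structure,
# not php-fpm's real code). Requires a Unix-like OS for os.fork().
import os
import socket

NUM_WORKERS = 2  # "N processes are created after the program starts"

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))          # ephemeral port for the demo
listener.listen(16)
port = listener.getsockname()[1]

pids = []
for _ in range(NUM_WORKERS):
    pid = os.fork()
    if pid == 0:                         # child: block in accept()
        conn, _addr = listener.accept()  # wakes when a client connects
        with conn:
            conn.sendall(b"hello from worker %d" % os.getpid())
        os._exit(0)                      # a real worker loops back to accept()
    pids.append(pid)

# The parent plays the client role so the example is self-contained.
replies = []
for _ in range(NUM_WORKERS):
    c = socket.create_connection(("127.0.0.1", port))
    replies.append(c.recv(1024))
    c.close()
for _ in pids:
    os.wait()
print(len(replies))
```

Note that the children share the listening socket inherited through fork(), so the kernel distributes incoming connections among whichever children are blocked in accept().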
The advantage of this model is that processes are reused completely, without excessive context switching.
This model relies entirely on the number of processes to handle concurrency: each client connection needs one process, so the number of worker processes equals the concurrent processing capacity, and the number of processes an operating system can create is limited.
PHP framework initialization consumes significant computing resources, and every request has to repeat it.
Starting a large number of processes adds process-scheduling overhead. With a few hundred processes, context-switch overhead is under 1% of CPU and can be ignored; with thousands or tens of thousands, scheduling overhead climbs to tens of percent of CPU, or can even consume it entirely.
If a call to a third-party service is very slow, the process and its resources stay occupied for the whole request, wasting expensive hardware.
For example, a live chat service may have to maintain hundreds of thousands of connections on one server; it would then need hundreds of thousands of processes to hold them, which is obviously impossible.
Is there a technology that lets a single process handle all of this concurrent IO? Yes: IO multiplexing.
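The idea can be sketched with Python's selectors module (which picks epoll on Linux): one process registers every socket with the kernel and is told which ones are ready, so a single worker juggles many connections. This mirrors the event model in spirit only; names like `pump` are invented for the demo.

```python
# IO multiplexing in one process, via selectors (epoll on Linux).
import selectors
import socket

sel = selectors.DefaultSelector()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))
listener.listen(16)
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

def pump(until_served: int) -> None:
    """Run the event loop until `until_served` requests have been echoed."""
    served = 0
    while served < until_served:
        for key, _mask in sel.select():
            sock = key.fileobj
            if sock is listener:            # readable listener = new connection
                conn, _addr = listener.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:                           # readable client = request (or EOF)
                data = sock.recv(1024)
                if data:
                    sock.sendall(b"echo:" + data)
                    served += 1
                else:
                    sel.unregister(sock)
                    sock.close()

# Three clients, all served concurrently by the one process above.
port = listener.getsockname()[1]
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(3)]
for i, c in enumerate(clients):
    c.sendall(b"req%d" % i)
pump(3)
replies = [c.recv(1024) for c in clients]
print(replies)
```

Unlike the pre-forked model, no process is pinned to a connection: the kernel reports readiness, and the loop touches a socket only when there is work to do.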
Problems with the php-fpm working mode
- nginx is based on the epoll event model, so one worker can handle many requests at the same time.
- An fpm-worker can handle only one request at a time.
- An fpm-worker has to reinitialize the mvc framework before handling each request, then release those resources afterwards.
- Under high concurrency, once all workers are busy, nginx responds directly with 502.
- Switching between fpm-worker processes is expensive.
So what solution do we have?
Analyzing our business, we find that more than 90% of it is IO-intensive. We only need to improve IO multiplexing to raise single-machine throughput, and to replace php-fpm's synchronous blocking mode with an asynchronous non-blocking mode. Pure asynchronous (callback-style) code is more complex and harder to maintain, and of course we are not tied to php-fpm; the point is to solve our core problem: performance.
IO-intensive business means frequent context switches, and multithreaded development is too complicated. The number of threads a process can open is also limited, and too many threads drive up CPU load and memory usage. Threads follow a preemptive scheduling model: a thread blocked on IO cannot actively hand the CPU to another task, which makes threads a poor fit for PHP development.
Swoole 4.x enables full coroutine mode, which lets synchronous-looking code execute asynchronously. For details, see "Why you should use Swoole".
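The same idea can be sketched with Python's asyncio (an analogy only, not Swoole's API): the code reads as sequential, but every await yields the single thread to other coroutines while the IO is pending, so "blocking" calls overlap inside one process.

```python
# Coroutine-style concurrency, sketched with asyncio (not Swoole itself).
import asyncio
import time

async def call_service(name: str, delay: float) -> str:
    await asyncio.sleep(delay)   # stands in for a slow third-party request
    return name

async def main() -> list:
    start = time.monotonic()
    # Three "blocking" calls of 0.1s each finish in roughly 0.1s total,
    # because they overlap in one process and one thread.
    results = await asyncio.gather(
        call_service("a", 0.1),
        call_service("b", 0.1),
        call_service("c", 0.1),
    )
    elapsed = time.monotonic() - start
    assert elapsed < 0.25        # far less than the 0.3s a sequential run costs
    return results

print(asyncio.run(main()))
```

This is what "synchronous code, asynchronous execution" means: the scheduler, not the programmer, interleaves the IO waits.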