
I'm trying to host my Laravel application on GCP Cloud Run, and everything works just fine, but for some reason, whenever I run a POST request with lots of data (100+ rows of data, about 64 MB) that saves to the database, it always throws an error. I'm using nginx with Docker, by the way. Please see the details below.

ERROR

Cloud Run Logs

The request has been terminated because it has reached the maximum request timeout.

nginx.conf

worker_processes  1;

events {
    worker_connections  1024;
}
http {
    include       mime.types;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen LISTEN_PORT default_server;
        server_name _;
        root /app/public;
        index index.php;
        charset utf-8;
        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        location = /favicon.ico { access_log off; log_not_found off; }
        location = /robots.txt  { access_log off; log_not_found off; }
        access_log /dev/stdout;
        error_log /dev/stderr;
        sendfile off;
        client_max_body_size 100m;

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors off;
            fastcgi_buffer_size 32k;
            fastcgi_buffers 8 32k;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }
    #include /etc/nginx/sites-enabled/*;
}

daemon off;

Dockerfile

FROM php:8.0-fpm-alpine

RUN apk add --no-cache nginx wget

RUN docker-php-ext-install mysqli pdo pdo_mysql

RUN mkdir -p /run/nginx

COPY docker/nginx.conf /etc/nginx/nginx.conf

RUN mkdir -p /app
COPY . /app

RUN sh -c "wget http://getcomposer.org/composer.phar && chmod a+x composer.phar && mv composer.phar /usr/local/bin/composer"
RUN cd /app && \
    /usr/local/bin/composer install --no-dev

RUN chown -R www-data: /app

CMD sh /app/docker/startup.sh

Laravel version:

v9


Please let me know if you need any details that are not yet included in my post.

  • What is "lots of data"? Specify an actual value instead of a description. What is the error that your app is reporting? Check the Cloud Run logs and post that detail as well. Commented Jul 5, 2022 at 8:06
  • @JohnHanley It says The request has been terminated because it has reached the maximum request timeout. But I set it to the max, which is 3600 (equivalent to 1 hour). Commented Jul 5, 2022 at 8:28
  • Use a queue for potentially long running processes. Commented Jul 5, 2022 at 8:47
  • @Peppermintology Good suggestion, but I tried my application on a VM and it works pretty well. In addition, 100 rows is a normal amount and should execute quickly. It's just weird that on Cloud Run it throws the error. Commented Jul 5, 2022 at 9:14
  • There will be factors to consider when trying this in a production vs development environment, for example differences in network latency and stability. Commented Jul 5, 2022 at 9:25

3 Answers


Increase max_execution_time in the PHP configuration. By default it is only 30 seconds. Make it 30 minutes, for example:

max_execution_time = 1800
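
Since the question's image is the official php:8.0-fpm-alpine, one way to apply this at build time (a sketch, assuming the image's default conf.d scan directory and a hypothetical file name) is an extra line in the Dockerfile:

RUN echo "max_execution_time = 1800" > /usr/local/etc/php/conf.d/timeouts.ini

With PHP-FPM, also check that request_terminate_timeout in the pool configuration is not set lower than this value, since it can terminate long requests as well.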

Increase timeouts of nginx:

http{
   ...
   proxy_read_timeout 1800;
   proxy_connect_timeout 1800;
   proxy_send_timeout 1800;
   send_timeout 1800;
   keepalive_timeout 1800;
   ...
}
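
Note that the nginx.conf in the question passes PHP requests to PHP-FPM with fastcgi_pass rather than proxy_pass, so for that location block the FastCGI timeouts are the ones that apply. A sketch of the equivalent settings (the existing directives are elided with ...):

location ~ \.php$ {
    ...
    fastcgi_connect_timeout 1800;
    fastcgi_send_timeout 1800;
    fastcgi_read_timeout 1800;
}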

Another idea worth investigating is to give more resources to your Cloud Run instance (more CPUs, more RAM) so that the request is processed faster and the timeout is avoided. Eventually, though, the timeouts above may still need to be increased.
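
If you try that route, the gcloud CLI accepts flags for those resources when updating the service, along these lines (a sketch; SERVICE_NAME is a placeholder):

gcloud run services update SERVICE_NAME --cpu=2 --memory=1Gi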


I think the issue has nothing to do with PHP, Laravel, or nginx, but with Cloud Run itself.

As you can see in the Google Cloud documentation when they describe HTTP 504: Gateway timeout errors:

HTTP 504
The request has been terminated because it has reached the maximum request timeout.

If your service is processing long requests, you can increase the request timeout. If your service doesn't return a response within the time specified, the request ends and the service returns an HTTP 504 error, as documented in the container runtime contract.

As suggested in the docs, please, try increasing the request timeout until your application can process the huge POST data you mentioned: it is set by default to 5 minutes, but can be extended up to 60 minutes.

As described in the docs, you can set it through the Google Cloud console or the gcloud CLI, either directly or by modifying the service YAML configuration.
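
For example, with the gcloud CLI, something along these lines updates the timeout (a sketch; SERVICE_NAME and REGION are placeholders for your own service and region):

gcloud run services update SERVICE_NAME --timeout=3600 --region=REGION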

Comments

Sorry @Jie, I just noticed the comment in which you replied to John Hanley. Have you already set the Cloud Run service timeout to one hour, then? Could you please verify it? The error is very clear and is typically related to Cloud Run. Perhaps some configuration is cached for some reason - it shouldn't be, by the way, but just in case, did you try recreating the service from scratch with the suggested timeout increase?
In addition, there is a hard limit of 32 MB on the maximum HTTP/1 request size, but it is probably unrelated to your issue.
The request timeout is already set to 3600, which is 60 minutes, but I still get the same result.
Thank you very much for the feedback @Jie. I see. The strange thing is that, as you can see in the documentation, this is a typical error reported by Cloud Run, so it makes perfect sense. As suggested in the comment, did you try creating the service from scratch in order to rule out any problem with the current one? Did you also verify that the timeout was set to 60 minutes as suggested? Please also consider reviewing the timeout-related changes suggested in the other answers, although I think that if the same container runs successfully in a VM, PHP and nginx should already be configured correctly.
One thing that is certain, on the other hand, is that the error page being displayed is the one from your nginx server, so perhaps the error that appears in the logs is masking another one. In addition to configuring send_timeout, please try tweaking the FastCGI-specific timeouts, especially fastcgi_send_timeout and fastcgi_read_timeout; it may be helpful as well. Please consider reading this.

The default nginx timeout is 60s. Since you mentioned the data is 64 MB, your backend may not be able to process that request and send back a response within 60s.

So you could either try to increase the nginx timeouts by adding the block below to your nginx.conf file:

http{
   ...
   proxy_read_timeout 300;
   proxy_connect_timeout 300;
   proxy_send_timeout 300;
   keepalive_timeout 3000;
   ...
}

Or, a better way would be to not process the data immediately: push the data to a message queue, send the response instantly, and let background workers handle the data. I don't know much about Laravel; in Django we can use RabbitMQ with Celery or Pika.

To get the result of the request with huge data, you can poll the server at a regular interval or set up a WebSocket connection.
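
In Laravel terms, a minimal sketch of that pattern could look like the job below (ImportRows and the rows table are hypothetical names; it assumes a queue connection such as database or Redis is configured and a queue worker is running somewhere):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\DB;

// Hypothetical job that persists the posted rows in the background.
class ImportRows implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public array $rows)
    {
    }

    public function handle(): void
    {
        // Insert in chunks so one huge statement does not hit memory or packet limits.
        foreach (array_chunk($this->rows, 500) as $chunk) {
            DB::table('rows')->insert($chunk); // 'rows' is a placeholder table name
        }
    }
}

The controller would then enqueue the job and respond immediately, e.g. ImportRows::dispatch($request->input('rows')); followed by a 202 response.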

Comments

Great, but I cannot use a queue in Cloud Run since I cannot install Supervisor to run my workers.
Okay, but I could see you can set up a RabbitMQ queue in GCP. I'm doing my deployment using container orchestration. For the same use case, what I did was run the workers without any supervisor, since the workers do just one predefined task, and use Prometheus to monitor them. I have set up 40 workers and it works just fine. In AWS you could use Lambda; I'm unaware of the equivalent in GCP.
