
Okay, here are the details.

I have an Ubuntu VM with nginx as the web server. Here are all the apps:

  1. backend: Laravel
  2. frontend: Laravel
  3. websocket: Socket.IO on Express.js (runs on port 8015, reverse proxied to domain-ws.com; a minimal sketch is below)
  4. some hardware (Raspberry Pis) that is always connected to the websocket (domain-ws.com)
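For context, the websocket service is roughly shaped like this (a simplified sketch, not my actual code; the structure is an assumption):

    // server.ts -- simplified sketch of the websocket service (assumed shape, not my real code)
    import express from "express";
    import { createServer } from "http";
    import { Server } from "socket.io";

    const app = express();
    const httpServer = createServer(app);
    const io = new Server(httpServer);

    io.on("connection", (socket) => {
        console.log("client connected:", socket.id);
    });

    // nginx reverse-proxies https://domain-ws.com to this port
    httpServer.listen(8015);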

And I have a feature on the frontend that does a handshake to check service status. Here's the flow (a rough sketch of the websocket relay follows the list):

  1. frontend: sends an AJAX request to an FE route
  2. FE controller: sends an HTTP request to a BE API endpoint
  3. BE API: uses the ElephantIO client to contact the websocket (domain-ws.com)
  4. WS: receives the data and broadcasts it to the Raspberry Pi
  5. Raspberry Pi: sends feedback to the WS
  6. WS: sends the feedback back to the BE API (from point 3)
  7. BE API: returns true (true means the hardware is online)
  8. FE controller: returns the BE API output
  9. frontend: receives the output from the AJAX call
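
On the websocket side, the relay in steps 4–6 conceptually looks something like this (a minimal sketch under my own assumptions; the event names are placeholders, not the real ones):

    // Relay sketch inside the same io instance shown earlier.
    // "check-hardware", "ping-check" and "hardware-feedback" are placeholder event names.
    const pending = new Map<string, string>(); // requestId -> socket id of the requester (BE)

    io.on("connection", (socket) => {
        // Each Raspberry Pi joins its own room when it connects
        socket.on("register-device", (deviceId: string) => {
            socket.join(`raspberry-${deviceId}`);
        });

        // Step 4: the BE (via ElephantIO) asks about one device, WS forwards the check
        socket.on("check-hardware", ({ requestId, deviceId }) => {
            pending.set(requestId, socket.id);
            io.to(`raspberry-${deviceId}`).emit("ping-check", { requestId });
        });

        // Steps 5-6: the Pi answers, WS relays the feedback back to the requester
        socket.on("hardware-feedback", ({ requestId, online }) => {
            const requester = pending.get(requestId);
            if (requester) {
                io.to(requester).emit("hardware-status", { requestId, online });
                pending.delete(requestId);
            }
        });
    });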

Yeah, that's a lot of steps. But here's the problem: there are 3 Raspberry Pis with different functionality, and the frontend has a dashboard that asynchronously checks the status of all of them (sketched below). It works fine when only 1 user does it, but if 2 browser tabs refresh the dashboard page at the same time, the server (I don't know whether it's the BE, FE, or WS) goes down for a second and the dashboard page then shows a 504 Gateway Timeout error.
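
Each dashboard load basically fires one of these handshakes per device in parallel, roughly like this (simplified; the route name and device ids are placeholders):

    // Dashboard sketch: each tab fires one status check per device at once,
    // so 2 open tabs means 6 concurrent handshakes going FE -> BE -> WS -> Pi and back.
    const deviceIds = ["raspberry-1", "raspberry-2", "raspberry-3"]; // placeholders

    async function checkAllDevices(): Promise<void> {
        const results = await Promise.all(
            deviceIds.map((id) =>
                fetch(`/dashboard/hardware-status/${id}`) // FE route name is a placeholder
                    .then((res) => res.json())
                    .catch(() => ({ online: false }))
            )
        );
        console.log(results); // e.g. [{ online: true }, { online: false }, ...]
    }

    checkAllDevices();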

My first thought was that it came from the nginx reverse-proxy websocket configuration. Here's the conf:

    location / {
        proxy_pass http://127.0.0.1:8015/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Buffer size adjustments
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

Note: the rest of the configuration is the usual stuff: SSL certificates, the domain/server_name, and the redirect from HTTP to HTTPS, roughly like the sketch below.
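
So the whole server block for the websocket domain looks roughly like this (a simplified sketch; the certificate paths are illustrative, not my real ones):

    server {
        listen 80;
        server_name domain-ws.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name domain-ws.com;

        ssl_certificate     /etc/ssl/certs/domainws.crt;     # illustrative path
        ssl_certificate_key /etc/ssl/certs/domainws.key;     # illustrative path

        location / {
            # ... the reverse-proxy block shown above ...
        }
    }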

And here's the nginx conf for the Laravel (frontend) project:

    server {
        listen 443 ssl;
        server_name domain-fe.com;

        ssl_certificate /etc/ssl/certs/domainfe.crt;
        ssl_certificate_key /etc/ssl/certs/domainfe.key;

        # Set maximum upload size to 100MB
        client_max_body_size 100M;

        root /home/domainfe/public;
        index index.php index.html index.htm;

        location / {
            try_files $uri $uri/ /index.php?$query_string;

            # Buffer size adjustments
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
        }

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~ /\.ht {
            deny all;
        }
    }

At some point I wanted to increase the timeouts, but I'm afraid a bigger timeout just means the page loads take longer, so I haven't done that. What I had in mind is something like the directives below.
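
If I did it, it would be roughly this (just the directives I would touch; the values are arbitrary and untested):

    # in the websocket reverse-proxy location (domain-ws.com)
    proxy_read_timeout 120s;
    proxy_send_timeout 120s;

    # in the Laravel PHP location (domain-fe.com)
    fastcgi_read_timeout 120s;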

Thanks

  • The reason I'm using the websocket for the hardware online status is flexibility: I don't need to know the hardware's IP address. I just make sure the hardware is connected to the same websocket and, boom, they're connected.
    – Tuhan Kamu, commented Jul 3 at 4:48
