
I want to stream frames from a webcam in React to Flask. Inside Flask, each frame should be sent to DeepStream, which applies (at least) a grey filter to every frame (after that I will add a detector and tracker). The processed frame should then be sent back to React.

I tried to implement this with a WebSocket; currently I am using a regular API.

I still can't find how to keep DeepStream running all the time, send each frame to it, and receive the result.

Flask server backend:

from flask import Flask, request, send_file
from flask_cors import CORS
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib
import tempfile
import os

app = Flask(__name__)
CORS(app)

Gst.init(None)

def process_image_with_gstreamer(input_path, output_path):
    pipeline_str = f"filesrc location={input_path} ! jpegparse ! jpegdec ! videoconvert ! video/x-raw,format=GRAY8 ! jpegenc ! filesink location={output_path}"
    pipeline = Gst.parse_launch(pipeline_str)

    pipeline.set_state(Gst.State.PLAYING)
    bus = pipeline.get_bus()
    msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)

    if msg:
        if msg.type == Gst.MessageType.ERROR:
            err, debug_info = msg.parse_error()
            print(f"Error received from element {msg.src.get_name()}: {err.message}")
            print(f"Debugging information: {debug_info if debug_info else 'none'}")

    pipeline.set_state(Gst.State.NULL)

@app.route('/process_image', methods=['POST'])
def process_image():
    if 'image' not in request.files:
        return 'No image file in request', 400

    image_file = request.files['image']

    temp_input = tempfile.NamedTemporaryFile(delete=False, suffix='.jpg')
    temp_output = tempfile.NamedTemporaryFile(delete=False, suffix='.jpg')
    temp_input.close()
    temp_output.close()
    try:
        image_file.save(temp_input.name)
        process_image_with_gstreamer(temp_input.name, temp_output.name)

        # Read the result into memory so both temp files can be deleted
        # before the response goes out (unlinking an open file fails on Windows).
        with open(temp_output.name, 'rb') as f:
            processed = f.read()
        return app.response_class(processed, mimetype='image/jpeg')
    finally:
        os.unlink(temp_input.name)
        os.unlink(temp_output.name)

if __name__ == '__main__':
    app.run(debug=True)
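The pipeline above is created and destroyed on every request. A possible restructuring, untested and with assumed DeepStream element names (nvstreammux, nvinfer, nvvideoconvert) and a placeholder config path, is to build one persistent pipeline at startup and exchange frames through appsrc/appsink:

```python
# Sketch: one persistent DeepStream pipeline instead of one per request.
# Assumptions: DeepStream installed with its standard plugins;
# "config_infer.txt" is a placeholder for a real nvinfer config file.

def build_persistent_pipeline_desc(width=640, height=480,
                                   infer_config="config_infer.txt"):
    """Return a gst-launch style description: appsrc feeds raw RGB frames
    in, nvinfer runs the detector, appsink hands processed frames back."""
    return (
        f"appsrc name=src is-live=true format=time "
        f"caps=video/x-raw,format=RGB,width={width},height={height},framerate=30/1 "
        f"! nvvideoconvert ! m.sink_0 "
        f"nvstreammux name=m batch-size=1 width={width} height={height} "
        f"! nvinfer config-file-path={infer_config} "
        f"! nvvideoconvert ! video/x-raw,format=RGB "
        f"! appsink name=sink emit-signals=true max-buffers=1 drop=true"
    )

# At startup (once, not per request):
#   Gst.init(None)
#   pipeline = Gst.parse_launch(build_persistent_pipeline_desc())
#   appsrc  = pipeline.get_by_name("src")
#   appsink = pipeline.get_by_name("sink")
#   pipeline.set_state(Gst.State.PLAYING)
#
# Per request:
#   appsrc.emit("push-buffer", Gst.Buffer.new_wrapped(rgb_bytes))
#   sample = appsink.emit("pull-sample")   # processed frame comes back
```

The Flask handler would then only convert the uploaded JPEG to raw RGB, push it, and pull the result, with no pipeline setup or teardown per frame.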

React front-end code:

import React, { useRef, useEffect, useState } from 'react';

const WebcamStream = () => {
  const videoRef = useRef(null);
  const canvasRef = useRef(null);
  const [processedImage, setProcessedImage] = useState(null);

  useEffect(() => {
    const startWebcam = async () => {
      try {
        const stream = await navigator.mediaDevices.getUserMedia({ video: true });
        if (videoRef.current) {
          videoRef.current.srcObject = stream;
        }
      } catch (err) {
        console.error("Error accessing webcam:", err);
      }
    };

    startWebcam();

    return () => {
      if (videoRef.current && videoRef.current.srcObject) {
        videoRef.current.srcObject.getTracks().forEach(track => track.stop());
      }
    };
  }, []);

  const captureAndSendFrame = () => {
    if (videoRef.current && canvasRef.current) {
      const context = canvasRef.current.getContext('2d');
      context.drawImage(videoRef.current, 0, 0, canvasRef.current.width, canvasRef.current.height);
      
      canvasRef.current.toBlob(blob => {
        const formData = new FormData();
        formData.append('image', blob, 'webcam.jpg');

        fetch('http://localhost:5000/process_image', {
          method: 'POST',
          body: formData,
        })
        .then(response => {
          if (!response.ok) throw new Error(`Server returned ${response.status}`);
          return response.blob();
        })
        .then(imageBlob => {
          const imageUrl = URL.createObjectURL(imageBlob);
          // Revoke the previous object URL so blob memory is not leaked
          setProcessedImage(prev => {
            if (prev) URL.revokeObjectURL(prev);
            return imageUrl;
          });
        })
        .catch(error => console.error('Error:', error));
      }, 'image/jpeg');
    }
  };

  useEffect(() => {
    const intervalId = setInterval(captureAndSendFrame, 1000); // Adjust interval as needed
    return () => clearInterval(intervalId);
  }, []);

  return (
    <div>
      <video ref={videoRef} autoPlay style={{ display: 'none' }} />
      <canvas ref={canvasRef} width={640} height={480} style={{ display: 'none' }} />
      {processedImage && <img src={processedImage} alt="Processed frame" />}
    </div>
  );
};

export default WebcamStream;

1 Answer


I suggest you create a DeepStream pipeline in a separate process, like a daemon. You can send the image from your app to the pipeline using an IPC mechanism such as shmsink/shmsrc, RTSP, etc.

Then, you can use gst-nvmsgbroker to publish messages (e.g. detection and tracking metadata) from the pipeline and subscribe to the topic in your app.
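The metadata branch could look something like the following sketch. The broker adapter library, connection string, topic, and nvmsgconv config file are all placeholders for whatever broker you deploy:

```python
# Sketch: publishing detection metadata with nvmsgconv + nvmsgbroker.
# proto-lib, conn-str, topic, and msgconv_config.txt are placeholders
# (e.g. for the Kafka adapter shipped with DeepStream).

def build_msgbroker_branch(proto_lib="libnvds_kafka_proto.so",
                           conn_str="localhost;9092",
                           topic="ds-detections"):
    """Branch to hang off the pipeline after nvinfer (via a tee):
    converts NvDsMeta to a JSON payload, then publishes it."""
    return (
        f"nvmsgconv config=msgconv_config.txt payload-type=0 "
        f"! nvmsgbroker proto-lib={proto_lib} "
        f"conn-str={conn_str} topic={topic}"
    )

# The web app (or React, via the server) then subscribes to the same
# topic with any Kafka/MQTT client to receive per-frame detections.
```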


2 Comments

Do you have some samples for this? Also, how can I add some plugins to DeepStream? Thanks, Arthur
Sure, this sample from NVIDIA can help. When you install DeepStream, a lot of plugins are installed with it, so you just need to launch a pipeline and all the different plugins will be available. This wiki provides some basic pipelines.
