About this codelab
1. Objectives
Overview
This codelab focuses on creating an end-to-end Vertex AI Vision application that demonstrates sending videos with the motion filtering feature. In this tutorial, we will go through the different parameters in the motion filter configuration:
- Motion detection sensitivity
- Minimum event length
- Look-back window
- Cool-down time
- Motion detection zone
What you'll learn
- How to ingest videos for streaming
- Different features available in Motion Filter and how to use them
- Where to check the stats of the Motion Filter
- How to adjust the settings based on your video
2. Before You Begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector
- Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
- Enable the Compute Engine and Vision AI APIs. Enable the APIs
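If you prefer the command line, you can enable the same APIs with gcloud (a sketch, assuming the gcloud CLI is installed and authenticated; the service names used are compute.googleapis.com and visionai.googleapis.com):
gcloud services enable compute.googleapis.com visionai.googleapis.com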
Create a service account:
- In the Google Cloud console, go to the Create service account page. Go to Create service account
- Select your project.
- In the Service account name field, enter a name. The Google Cloud console fills in the Service account ID field based on this name. In the Service account description field, enter a description. For example, Service account for quickstart.
- Click Create and continue.
- To provide access to your project, grant the following role(s) to your service account: Vision AI > Vision AI Editor, Compute Engine > Compute Instance Admin (beta), Storage > Storage Object Viewer†. In the Select a role list, select a role. For additional roles, click Add another role and add each additional role. Note: The Role field affects which resources your service account can access in your project. You can revoke these roles or grant additional roles later. In production environments, do not grant the Owner, Editor, or Viewer roles. Instead, grant a predefined role or custom role that meets your needs.
- Click Continue.
- Click Done to finish creating the service account. Do not close your browser window. You will use it in the next step.
Create a service account key:
- In the Google Cloud console, click the email address for the service account that you created.
- Click Keys.
- Click Add key, and then click Create new key.
- Click Create. A JSON key file is downloaded to your computer.
- Click Close.
- Install and initialize the Google Cloud CLI (see the sketch after this list).
† Role only needed if you copy a sample video file from a Cloud Storage bucket.
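A minimal sketch of the CLI setup, where KEY_FILE.json is a placeholder for the JSON key you downloaded above:
# Initialize the gcloud CLI and authenticate with the service account key:
gcloud init
gcloud auth activate-service-account --key-file=KEY_FILE.json
# Many client tools also read the key from this environment variable:
export GOOGLE_APPLICATION_CREDENTIALS=KEY_FILE.json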
3. Motion Filter
The motion filter captures motion and produces video segments that contain motion events. By adjusting the motion sensitivity, minimum event length, lookback window, cool down period, and motion detection zone, users can configure the filter based on their own needs.
Motion Filter Configuration
There are five configurations available in the motion filter for customization.
- Motion sensitivity: how easily motion detection is triggered.
- Minimum event length: the minimum length of a captured motion event.
- Lookback window: how long before a detected motion event the recording starts.
- Cool down period: after a motion event has ended, a cool down of the specified duration takes place. During the cool down period, motion events will not be triggered.
- Motion detection zone: a user-configured zone specifying where motion detection should run. (Elaborated in a later section.)
Motion sensitivity
Use the motion_detection_sensitivity flag in the vaictl command.
String. Default: medium. Choose from low, medium, or high.
The higher the motion detection sensitivity, the more sensitive it is to noise and small movements. High sensitivity is recommended for scenes with smaller moving objects (such as people at a distance) and stable lighting.
On the other hand, low sensitivity is less affected by lighting interference. It is well suited to scenes with more lighting interference, such as outdoor environments, and to lower-quality video where there may be more noise. Since this setting filters the most aggressively, it could ignore movements from small objects.
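As an illustrative rule of thumb based on the descriptions above, pick one of the following (example choices, not fixed recommendations):
# Indoor scene, stable lighting, small or distant subjects:
export MOTION_SENSITIVITY=high
# Outdoor scene, lighting interference, or noisier low-quality video:
export MOTION_SENSITIVITY=low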
Minimum event length
Use the min_event_length_in_seconds flag in the vaictl command.
Integer. Default: 10 seconds. Range: 0 to 3600 seconds.
The minimum duration of the motion event video that is produced once a motion event segment is detected in the frame.
Lookback window
Use the look_back_window_in_seconds flag in the vaictl command.
Integer. Default: 3 seconds. Range: 0 to 3600 seconds.
The lookback window is the duration cached before a motion event is detected. It is useful when we are interested in what happens in the frame a few seconds before a motion event is detected.
Cool down period
Use the cool_down_period_in_seconds flag in the vaictl command.
Integer. Default: 300 seconds. Range: 0 to 3600 seconds.
The cool down period is how long motion detection pauses after a motion event has been captured. During the cool down period, no computation runs to detect motion.
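To see how the three durations interact, consider an illustrative timeline using the default values: with a 3-second lookback window, a 10-second minimum event length, and a 300-second cool down period, motion detected at t=100s yields a clip that starts at t=97s and runs to at least t=110s, and no new motion event can start until 300 seconds after this event ends.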
4. Basic motion filter example
Vaictl SDK Manual
To check the vaictl manual for input streams with the motion filter, use the command below.
vaictl send video-file applying motion-filter -h
Prepare a sample video
- You can copy a sample video with the following gsutil cp command. Replace the following variable:
- SOURCE: The location of the video file to use. You can use your own video file (for example, gs://BUCKET_NAME/FILENAME.mp4) or the sample video gs://cloud-samples-data/vertex-ai-vision/street_vehicles_people.mp4 (a video with people and vehicles; source).
export SOURCE=gs://cloud-samples-data/vertex-ai-vision/street_vehicles_people.mp4
gsutil cp $SOURCE .
Prepare environment variables
Set the environment variables below to use the command templates provided.
vaictl variables
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION_ID: Your location ID. For example, us-central1. For more information, see Cloud locations.
- LOCAL_FILE: The filename of a local video file. For example, street_vehicles_people.mp4.
- --loop flag: Optional. Loops the file data to simulate streaming.
export PROJECT_ID=<Your Google Cloud project ID>
export LOCATION_ID=us-central1
Motion filter variables
- MOTION_SENSITIVITY: How sensitive the motion detection will be.
- MIN_EVENT_LENGTH: Minimum length of the motion events.
- LOOK_BACK_WINDOW: The duration to capture before the first motion in a motion event.
- COOL_DOWN_PERIOD: The period where motion detection will pause after a motion event has been captured.
export MOTION_SENSITIVITY=<low or medium or high>
export MIN_EVENT_LENGTH=<0-3600>
export LOOK_BACK_WINDOW=<0-3600>
export COOL_DOWN_PERIOD=<0-3600>
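For reference, a filled-in example using the default filter values (illustrative; tune them to your own video):
export MOTION_SENSITIVITY=medium
export MIN_EVENT_LENGTH=10
export LOOK_BACK_WINDOW=3
export COOL_DOWN_PERIOD=300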
Prepare motion filter command
There are two options to use the motion filter with input stream. The first option is to send the motion events to a stream in the cloud console. The second option is to send the motion events to local storage.
Sending results to the cloud console
You can use vaictl to stream the output video data to the cloud console. Begin by activating the Vision AI API in the Cloud Console.
Register a new stream
- Click the Streams tab on the left panel of Vertex AI Vision.
- Click Register.
- For Stream name, enter motion-detection-stream.
- For region, enter us-central1.
- Click Register.
Sending results to stream
This command streams a video file to a stream. If you use the --loop flag, the video is looped into the stream until you stop the command. We will run this command as a background job so that it keeps streaming.
Add nohup at the beginning and & at the end to make it a background job.
INPUT_VIDEO=street_vehicles_people.mp4
vaictl -p $PROJECT_ID \
-l $LOCATION_ID \
-c application-cluster-0 \
--service-endpoint visionai.googleapis.com \
send video-file --file-path $INPUT_VIDEO \
applying motion-filter --motion-sensitivity=$MOTION_SENSITIVITY \
--min-event-length=$MIN_EVENT_LENGTH \
--lookback-length=$LOOK_BACK_WINDOW \
--cooldown-length=$COOL_DOWN_PERIOD \
to streams motion-detection-stream --loop
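If you run the command under nohup as suggested, its output goes to nohup.out by default, so you can confirm the background job is still streaming (a quick check, run from the same shell):
jobs # list background jobs in this shell
tail -f nohup.out # follow the ingest output written by nohup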
It might take ~100 seconds between starting the vaictl ingest operation and the video appearing in the dashboard.
After the stream ingestion is available, you can see the video feed in the Streams tab of the Vertex AI Vision dashboard by selecting the motion-detection-stream stream.
Sending results to local storage
This command streams a video file and writes the motion events to local storage.
Add nohup at the beginning and & at the end to make it a background job.
INPUT_VIDEO=street_vehicles_people.mp4
OUTPUT_PATH=<path_to_store_motion_events_on_local_disk>
nohup vaictl -p $PROJECT_ID \
-l $LOCATION_ID \
-c application-cluster-0 \
--service-endpoint visionai.googleapis.com \
send video-file --file-path $INPUT_VIDEO \
applying motion-filter --motion-sensitivity=$MOTION_SENSITIVITY \
--min-event-length=$MIN_EVENT_LENGTH \
--lookback-length=$LOOK_BACK_WINDOW \
--cooldown-length=$COOL_DOWN_PERIOD \
to mp4file --mp4-file-path=$OUTPUT_PATH --loop &
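Once the job is running, you can check that motion-event clips are being written (a simple sanity check; adjust the pattern if vaictl writes multiple files under the path you chose):
ls -lh $OUTPUT_PATH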
5. Motion detection zone
In this section we will dive into the usage of the motion detection zone and how to configure it. The zone is intended to improve motion detection by masking out motion coming from areas you are not interested in.
There are two types of motion detection zones: (1) positive zones, where motion detection runs only in the annotated area; and (2) negative zones, where motion detection ignores any movement in the annotated area.
Zone annotation
Use the zone_annotation flag in the vaictl command to input coordinates for zone polygons.
String. Default: empty.
The zone annotation is a string input from the user, denoting the zones in the frame that the user would like to hide or focus on. To annotate a zone, specify the x and y image coordinates of each node in the zone. A zone needs three or more nodes to form a polygon. There can be multiple zones in a frame. If zones overlap, their combined area is treated as annotated.
The zone annotation has a specific input syntax to follow.
- To denote a single node, use : to connect the x and y coordinates of an image coordinate. For example, a node at (0,0) in the upper left corner is denoted as 0:0.
- To denote all the nodes in a single zone, use ; to connect the nodes. For example, a zone with nodes (0,0), (100,0), (100,100), and (0,100) is denoted as 0:0;100:0;100:100;0:100. Always input the nodes in the order in which they connect to each other; the order can be either clockwise or counter-clockwise.
*A square zone with four nodes.
*A triangle zone with three nodes.
- To denote multiple zones in a single frame, use - to connect different zones. For example, to input both (0,0), (100,0), (100,100), (0,100) and (120,120), (110,150), (200,160), the zone annotation is 0:0;100:0;100:100;0:100-120:120;110:150;200:160.
*Two zones within a frame.
To get coordinates from an image, there are tools available online. For example, see Wolfram - Get Coordinates from Image.
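If you compose annotations by hand, a small helper can reduce typos. Below is a minimal bash sketch; the zone function is our own illustration, not part of vaictl:
# Build one zone string from x,y pairs by joining nodes with ';'
zone() { local out=""; for pt in "$@"; do out+="${pt/,/:};"; done; printf '%s\n' "${out%;}"; }
# Join multiple zones with '-'
ZONE_ANNOTATION="$(zone 0,0 100,0 100,100 0,100)-$(zone 120,120 110,150 200,160)"
echo $ZONE_ANNOTATION # 0:0;100:0;100:100;0:100-120:120;110:150;200:160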
Exclude annotated zone
Use the exclude_annotated_zone flag in the vaictl command to configure whether motion detection runs inside or outside the annotated zone.
Boolean. Default: false.
Exclude annotated zone is a boolean input from the user, denoting whether to exclude the annotated zone from motion detection.
- If set to true, the annotated zone acts as a negative zone: motion in the annotated zones will not be detected.
*Only run motion detection outside of the input zones.
- If set to false, the zone acts as a positive zone that motion detection focuses on.
*Only run motion detection in the input zones.
6. Motion filter with motion detection zone example
In this example, we will use a video that has a tree constantly moving in the foreground. With regular motion filter settings, the video produces only one motion event with the duration of the original video, because the motion filter registers the moving tree as constant motion throughout the whole video. However, with the help of the motion detection zone, we can mask out the motion from the tree and focus on the motion from cars and pedestrians.
Video preparation
The sample video (gs://cloud-samples-data/vertex-ai-vision/dynamic-background-fall.mp4) contains a tree, cars, and pedestrians, from www.changedetection.net.
Video credit: N. Goyette, P.-M. Jodoin, F. Porikli, J. Konrad, and P. Ishwar, changedetection.net: A new change detection benchmark dataset, in Proc. IEEE Workshop on Change Detection (CDW-2012) at CVPR-2012, Providence, RI, 16-21 Jun., 2012
Environment variable preparation
Google Cloud project variables.
export PROJECT_ID=<Your Google Cloud project ID>
export LOCATION_ID=us-central1
export LOCAL_FILE=dynamic-background-fall.mp4
Basic motion filter configuration.
export MOTION_SENSITIVITY=<low or medium or high>
export MIN_EVENT_LENGTH=<0-3600>
export LOOK_BACK_WINDOW=<0-3600>
export COOL_DOWN_PERIOD=<0-3600>
Motion detection zone configuration.
Pick one from below to see different types of usage for the motion detection zone.
Exclude the tree for the motion detection.
export ZONE_ANNOTATION="0:0;680:0;660:70;380:320;100:150"
export EXCLUDE_ANNOTATED_ZONE=true
*Only run motion detection outside of the input zones.
Focus motion detection on the street.
export ZONE_ANNOTATION="0:300;780:300;780:480;0:480"
export EXCLUDE_ANNOTATED_ZONE=false
*Only run motion detection in the input zones.
Send video stream with motion filter
Send the motion events to the cloud console
You can use vaictl to stream the output video data to the cloud console. Begin by activating the Vision AI API in the Cloud Console.
Register a new stream
- Click the Streams tab on the left panel of Vertex AI Vision.
- Click Register.
- For Stream name, enter motion-detection-stream.
- For region, enter us-central1.
- Click Register.
Sending results to stream
This command streams a video file to a stream. If you use the --loop flag, the video is looped into the stream until you stop the command. We will run this command as a background job so that it keeps streaming.
Add nohup at the beginning and & at the end to make it a background job.
INPUT_VIDEO=dynamic-background-fall.mp4
vaictl -p $PROJECT_ID \
-l $LOCATION_ID \
-c application-cluster-0 \
--service-endpoint visionai.googleapis.com \
send video-file --file-path $INPUT_VIDEO \
applying motion-filter --motion-sensitivity=$MOTION_SENSITIVITY \
--min-event-length=$MIN_EVENT_LENGTH \
--lookback-length=$LOOK_BACK_WINDOW \
--cooldown-length=$COOL_DOWN_PERIOD \
--zone_annotation=$ZONE_ANNOTATION \
--exclude_annotated_zone=$EXCLUDE_ANNOTATED_ZONE \
to streams motion-detection-stream --loop
It might take ~100 seconds between starting the vaictl ingest operation and the video appearing in the dashboard.
After the stream ingestion is available, you can see the video feed in the Streams tab of the Vertex AI Vision dashboard by selecting the motion-detection-stream stream.
Sending results to local storage
This command streams a video file and writes the motion events to local storage. If you use the --loop flag, the video is looped until you stop the command. We will run this command as a background job so that it keeps streaming.
Add nohup at the beginning and & at the end to make it a background job.
INPUT_VIDEO=dynamic-background-fall.mp4
OUTPUT_PATH=<path_to_store_motion_events>
vaictl -p $PROJECT_ID \
-l $LOCATION_ID \
-c application-cluster-0 \
--service-endpoint visionai.googleapis.com \
send video-file --file-path $INPUT_VIDEO \
applying motion-filter --motion-sensitivity=$MOTION_SENSITIVITY \
--min-event-length=$MIN_EVENT_LENGTH \
--lookback-length=$LOOK_BACK_WINDOW \
--cooldown-length=$COOL_DOWN_PERIOD \
--zone_annotation=$ZONE_ANNOTATION \
--exclude_annotated_zone=$EXCLUDE_ANNOTATED_ZONE \
to mp4file --mp4-file-path=$OUTPUT_PATH --loop
7. Congratulations
Congratulations, you finished the lab!
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, end the vaictl SDK operation on the command line with Ctrl+Z.
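If you started the ingestion as a background job with nohup and &, you can also stop it from the shell (a sketch):
jobs # find the job number in the shell that launched it
kill %1 # stop it by job number, or:
pkill -f vaictl # stop any running vaictl process from any shell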
Resources
https://cloud.google.com/vision-ai/docs/overview
https://cloud.google.com/vision-ai/docs/motion-filtering-model
https://cloud.google.com/vision-ai/docs/create-manage-streams
Feedback
Click here to Provide Feedback