Class PredictPoses
Represents an operator that performs markerless multi-pose estimation for each image in the sequence using a SLEAP model.
PredictPoses implements the top-down model network. The typical input to this operator is a sequence of full video frames in which multiple instances are expected to be found. The operator outputs a PoseCollection containing the N instances detected in each image. Indexing the PoseCollection returns a Pose, which exposes the Centroid of the detected instance along with information on all trained body parts.
To access the data of a specific body part, use the GetBodyPart operator and set its Name property to match the part name defined in the training_config.json file. From that point on, the operator will always emit the selected BodyPart object and its inferred position (BodyPart.Position).
```csharp
public class PredictPoses : Transform<IplImage, PoseCollection>
```
- Inheritance: Transform<IplImage, PoseCollection> → PredictPoses
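The sketch below shows one way the flow described above could be driven directly from C# and Rx, outside the Bonsai visual editor. It is a minimal sketch rather than a confirmed usage pattern: the frame source is an empty placeholder, the model and configuration paths and the "head" part name are hypothetical, and it assumes that PoseCollection is an indexable collection of Pose values and that GetBodyPart exposes a Process overload over a sequence of Pose values.

```csharp
using System;
using System.Reactive.Linq;
using Bonsai.Sleap;
using OpenCV.Net;

class PredictPosesSketch
{
    static void Main()
    {
        // Placeholder frame source; in practice this would be a camera or video file sequence.
        var frames = Observable.Empty<IplImage>();

        var predict = new PredictPoses
        {
            ModelFileName = "frozen_graph.pb",        // hypothetical exported model path
            TrainingConfig = "training_config.json"   // hypothetical training metadata path
        };

        // Take the first detected instance from each PoseCollection
        // (assumes the collection exposes Count and an integer indexer).
        var firstPose = predict.Process(frames)
            .Where(poses => poses.Count > 0)
            .Select(poses => poses[0]);

        // Select a specific trained body part by name, as described above
        // (assumes GetBodyPart processes a sequence of Pose values).
        var getBodyPart = new GetBodyPart { Name = "head" };  // hypothetical part name
        getBodyPart.Process(firstPose)
            .Subscribe(part => Console.WriteLine(part.Position));
    }
}
```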
Properties
- CentroidMinConfidence
Gets or sets a value specifying the confidence threshold used to discard centroid predictions. If no value is specified, all estimated centroid positions are returned.
- ColorConversion
Gets or sets a value specifying the optional color conversion used to prepare RGB video frames for inference. If no value is specified, no color conversion is performed.
- ModelFileName
Gets or sets a value specifying the path to the exported Protocol Buffer file containing the pretrained SLEAP model.
- PartMinConfidence
Gets or sets a value specifying the confidence threshold used to discard predicted body part positions. If no value is specified, all estimated positions are returned.
- ScaleFactor
Gets or sets a value specifying the scale factor used to resize video frames for inference. If no value is specified, no resizing is performed.
- TrainingConfig
Gets or sets a value specifying the path to the configuration JSON file containing training metadata.
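As a rough configuration sketch, the optional properties above might be set as follows. The numeric values are purely illustrative, and the nullable property types and the exact ColorConversion member name are assumptions rather than facts stated on this page.

```csharp
var predict = new PredictPoses
{
    ModelFileName = "frozen_graph.pb",          // hypothetical exported model path
    TrainingConfig = "training_config.json",    // hypothetical training metadata path
    CentroidMinConfidence = 0.5f,               // discard centroid predictions below 0.5 confidence
    PartMinConfidence = 0.5f,                   // discard body part predictions below 0.5 confidence
    ScaleFactor = 0.5f,                         // resize frames to half size before inference
    ColorConversion = ColorConversion.Bgr2Rgb   // assumed OpenCV.Net conversion for BGR camera frames
};
```

Leaving any of the optional properties unset keeps the default behavior described above: no confidence filtering, no resizing, and no color conversion.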
Methods
- Process(IObservable<IplImage>)
Performs markerless multi-pose estimation for each image in an observable sequence using a SLEAP model.
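A minimal sketch of calling Process directly on an image sequence is shown below; the empty observable stands in for a real frame source, the file paths are hypothetical, and the Count property on the emitted PoseCollection is an assumption.

```csharp
using System;
using System.Reactive.Linq;
using Bonsai.Sleap;
using OpenCV.Net;

class ProcessSketch
{
    static void Main()
    {
        var frames = Observable.Empty<IplImage>();  // placeholder for a real IplImage sequence
        var predict = new PredictPoses
        {
            ModelFileName = "frozen_graph.pb",       // hypothetical exported model path
            TrainingConfig = "training_config.json"  // hypothetical training metadata path
        };

        // Each emitted PoseCollection holds the instances detected in the corresponding frame.
        predict.Process(frames)
            .Subscribe(poses => Console.WriteLine($"Detected {poses.Count} instance(s)"));
    }
}
```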