ml5.js Interview Questions
method : description

* .addData() : adds data to the neuralNetworkData.data.raw array.
* .normalizeData() : normalizes the data stored in neuralNetworkData.data.raw and stores the normalized values in the neuralNetwork.data.training array.
* .train() : uses the data in the neuralNetwork.data.training array to train your model.
* .predict() : for regression tasks, allows you to make a prediction based on an input array or JSON object.
* .predictMultiple() : for regression tasks, allows you to make predictions based on an input array of arrays or array of JSON objects.
* .classify() : for classification tasks, allows you to make a classification based on an input array or JSON object.
* .classifyMultiple() : for classification tasks, allows you to make classifications based on an input array of arrays or array of JSON objects.
* .saveData() : allows you to save your data out from the neuralNetworkData.data.raw array.
* .loadData() : allows you to load data previously saved from the .saveData() function.
* .save() : allows you to save the trained model.
* .load() : allows you to load a trained model.
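To make the .normalizeData() step concrete, here is a minimal pure-JavaScript sketch of min-max normalization, the same idea ml5 applies to each input and output column before training. minMaxNormalize is a hypothetical helper for illustration, not part of the ml5 API.

```javascript
// Scale a column of numbers into the [0, 1] range (min-max normalization).
function minMaxNormalize(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  // Guard against a zero range (all values identical).
  if (max === min) return values.map(() => 0);
  return values.map((v) => (v - min) / (max - min));
}

// e.g. minMaxNormalize([10, 20, 30]) scales the column to [0, 0.5, 1]
```

Normalizing keeps every feature on a comparable scale, which helps the network converge during .train().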
You can use neural networks to recognize the content of images. ml5.imageClassifier() is a method to create an object that classifies an image using a pre-trained model.
 
It should be noted that the pre-trained model used in the example below was trained on a database of approximately 15 million images (ImageNet). The ml5 library accesses this model from the cloud. What the algorithm labels an image depends entirely on that training data: what is included, what is excluded, and how those images are labeled (or mislabeled).
// Initialize the Image Classifier method with MobileNet
const classifier = ml5.imageClassifier('MobileNet', modelLoaded);

// When the model is loaded
function modelLoaded() {
  console.log('Model Loaded!');
}

// Make a prediction with a selected image
classifier.classify(document.getElementById('image'), (err, results) => {
  console.log(results);
});
Initialize :
const classifier = ml5.imageClassifier(model, ?video, ?options, ?callback);
 
Parameters : 

model : REQUIRED. A String value of a valid model OR a url to a model.json that contains a pre-trained model. Case insensitive. Models available are: 'MobileNet', 'Darknet', 'Darknet-tiny', 'DoodleNet', or any image classification model trained with Teachable Machine. Below are some examples of creating a new image classifier:

* mobilenet :
const classifier = ml5.imageClassifier('MobileNet', modelReady);
 
* Darknet :
const classifier = ml5.imageClassifier('Darknet', modelReady);
 
* DoodleNet :
const classifier = ml5.imageClassifier('DoodleNet', modelReady);
 
* Custom Model from Teachable Machine :
const classifier = ml5.imageClassifier('path/to/custom/model.json', modelReady);
 
video : OPTIONAL. An HTMLVideoElement

callback : OPTIONAL. A function to run once the model has been loaded. If no callback is provided, it will return a promise that will be resolved once the model has loaded.

options : OPTIONAL. An object to change the defaults (shown below). The available options are :
{
  version: 1,
  alpha: 1.0,
  topk: 3,
};
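To illustrate what the topk option controls, here is a small pure-JavaScript sketch that keeps only the k highest-confidence results from a list shaped like the classifier's output. topK is a hypothetical helper, not an ml5 function.

```javascript
// Return the k results with the highest confidence, best first.
function topK(results, k) {
  return [...results] // copy so the caller's array is not reordered
    .sort((a, b) => b.confidence - a.confidence)
    .slice(0, k);
}

const results = [
  { label: "dog", confidence: 0.12 },
  { label: "cat", confidence: 0.71 },
  { label: "fox", confidence: 0.17 },
];
// topK(results, 2) keeps "cat" then "fox"
```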
.video
* Object. HTMLVideoElement if given in the constructor. Otherwise it is null.
 
.model
* Object. The image classifier model specified in the constructor.
 
.modelName
* String. The name of the image classifier model specified in the constructor
 
.modelUrl
* String. The absolute or relative URL path to the input model.
.classify()

* Given an image or video, returns an array of objects containing class names and probabilities.
 
If you DID NOT specify an image or video in the constructor...
classifier.classify(input, ?numberOfClasses, ?callback);
 
If you DID specify an image or video in the constructor...
classifier.classify(?numberOfClasses, ?callback);
 
Inputs :
 
* input : HTMLImageElement | ImageData | HTMLCanvasElement | HTMLVideoElement. NOTE: Videos can also be added in the constructor and then do not need to be specified again as an input.

* numberOfClasses : Number. The number of classes you want to return.

* callback : Function. A function to handle the results of .classify(). Likely a function to do something with the classification results.

Outputs :
 
* Array : Returns an array of objects. Each object contains {label, confidence}.
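A common next step is turning one of those {label, confidence} objects into display text. This formatResult helper is hypothetical, shown only to illustrate consuming the output shape.

```javascript
// Format a single classification result, e.g. for an on-screen caption.
function formatResult({ label, confidence }) {
  // confidence is a probability in [0, 1]; show it as a percentage
  return `${label} (${(confidence * 100).toFixed(1)}%)`;
}

// e.g. formatResult({ label: "cat", confidence: 0.9321 }) → "cat (93.2%)"
```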
PoseNet is a machine learning model that allows for Real-time Human Pose Estimation.
 
PoseNet can be used to estimate either a single pose or multiple poses, meaning there is a version of the algorithm that can detect only one person in an image/video and one version that can detect multiple persons in an image/video.
 
The original PoseNet model was ported to TensorFlow.js by Dan Oved. 

Quickstart
const video = document.getElementById('video');

// Create a new poseNet method
const poseNet = ml5.poseNet(video, modelLoaded);

// When the model is loaded
function modelLoaded() {
  console.log('Model Loaded!');
}
// Listen to new 'pose' events
poseNet.on('pose', (results) => {
  poses = results;
});
 
Initialize : There are a couple ways to initialize ml5.poseNet.
// Initialize with video, type and callback
const poseNet = ml5.poseNet(?video, ?type, ?callback);
// OR Initialize with video, options and callback
const poseNet = ml5.poseNet(?video, ?options, ?callback);
// OR Initialize WITHOUT video. Just options and callback here
const poseNet = ml5.poseNet(?callback, ?options);
 
Parameters : 
video : OPTIONAL. Optional HTMLVideoElement input to run poses on.
 
type : OPTIONAL. A String value, either 'single' or 'multiple', to run single- or multiple-pose estimation. Changes the detectionType property of the options. Default is 'multiple'.
 
callback : OPTIONAL. A function that is called when the model is loaded.
 
options : OPTIONAL. An object that contains properties that affect the PoseNet model's accuracy, results, etc.
{
  architecture: 'MobileNetV1',
  imageScaleFactor: 0.3,
  outputStride: 16,
  flipHorizontal: false,
  minConfidence: 0.5,
  maxPoseDetections: 5,
  scoreThreshold: 0.5,
  nmsRadius: 20,
  detectionType: 'multiple',
  inputResolution: 513,
  multiplier: 0.75,
  quantBytes: 2,
};
As written by the developers of BodyPix :
 
"Bodypix is an open-source machine learning model which allows for person and body-part segmentation in the browser with TensorFlow.js. In computer vision, image segmentation refers to the technique of grouping pixels in an image into semantic areas typically to locate objects and boundaries. The BodyPix model is trained to do this for a person and twenty-four body parts (parts such as the left hand, front right lower leg, or back torso). In other words, BodyPix can classify the pixels of an image into two categories: 1) pixels that represent a person and 2) pixels that represent background. It can further classify pixels representing a person into any one of twenty-four body parts."

Quickstart
const bodypix = ml5.bodyPix(modelReady);

function modelReady() {
  // segment the image given
  bodypix.segment(img, gotResults);
}

function gotResults(error, result) {
  if (error) {
    console.log(error);
    return;
  }
  // log the result
  console.log(result.backgroundMask);
}
 
Usage
Initialize : 
const bodyPix = ml5.bodyPix(?video, ?options, ?callback);
Parameters :

video : OPTIONAL. An HTMLVideoElement

callback : OPTIONAL. A function to run once the model has been loaded. If no callback is provided, it will return a promise that will be resolved once the model has loaded.

options : OPTIONAL. An object to change the defaults (shown below). The available options are:
{
  multiplier: 0.75, // 1.0, 0.75, or 0.50, 0.25
  outputStride: 16, // 8, 16, or 32, default is 16
  segmentationThreshold: 0.5, // 0 - 1, defaults to 0.5
  palette: {
    leftFace: {
      id: 0,
      color: [110, 64, 170],
    },
    rightFace: {
      id: 1,
      color: [106, 72, 183],
    },
    rightUpperLegFront: {
      id: 2,
      color: [100, 81, 196],
    },
    rightLowerLegBack: {
      id: 3,
      color: [92, 91, 206],
    },
    rightUpperLegBack: {
      id: 4,
      color: [84, 101, 214],
    },
    leftLowerLegFront: {
      id: 5,
      color: [75, 113, 221],
    },
    leftUpperLegFront: {
      id: 6,
      color: [66, 125, 224],
    },
    leftUpperLegBack: {
      id: 7,
      color: [56, 138, 226],
    },
    leftLowerLegBack: {
      id: 8,
      color: [48, 150, 224],
    },
    rightFeet: {
      id: 9,
      color: [40, 163, 220],
    },
    rightLowerLegFront: {
      id: 10,
      color: [33, 176, 214],
    },
    leftFeet: {
      id: 11,
      color: [29, 188, 205],
    },
    torsoFront: {
      id: 12,
      color: [26, 199, 194],
    },
    torsoBack: {
      id: 13,
      color: [26, 210, 182],
    },
    rightUpperArmFront: {
      id: 14,
      color: [28, 219, 169],
    },
    rightUpperArmBack: {
      id: 15,
      color: [33, 227, 155],
    },
    rightLowerArmBack: {
      id: 16,
      color: [41, 234, 141],
    },
    leftLowerArmFront: {
      id: 17,
      color: [51, 240, 128],
    },
    leftUpperArmFront: {
      id: 18,
      color: [64, 243, 116],
    },
    leftUpperArmBack: {
      id: 19,
      color: [79, 246, 105],
    },
    leftLowerArmBack: {
      id: 20,
      color: [96, 247, 97],
    },
    rightHand: {
      id: 21,
      color: [115, 246, 91],
    },
    rightLowerArmFront: {
      id: 22,
      color: [134, 245, 88],
    },
    leftHand: {
      id: 23,
      color: [155, 243, 88],
    },
  },
};
The U-Net is a convolutional neural network that was developed for biomedical image segmentation at the Computer Science Department of the University of Freiburg, Germany.[1] The network is based on the fully convolutional network [2] and its architecture was modified and extended to work with fewer training images and to yield more precise segmentations.
 
U-Net allows you to segment an image.
 
The ml5 uNet 'face' model allows you to remove, for example, the background from a video of the upper body of a person.
 
Quickstart
// load your model...
const uNet = ml5.uNet('face');

// assuming you have an HTMLVideo feed...
uNet.segment(video, gotResult);

function gotResult(error, result) {
  // if there's an error return it
  if (error) {
    console.error(error);
    return;
  }
  // log your result
  console.log(result);
}
 
Usage
Initialize : 
const unet = ml5.uNet(model, ?callback);
Parameters : 

* model : REQUIRED. A String path to the JSON model file.

* callback : OPTIONAL. A callback function that is called once the model has loaded. If no callback is provided, it will return a promise that will be resolved once the model has loaded.
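Several ml5 constructors follow this same callback-or-promise convention. Here is a pure-JavaScript sketch of the pattern itself (loadModel is a hypothetical stand-in, not ml5 internals): if a callback is passed it is invoked when loading finishes, otherwise a promise is returned.

```javascript
// Illustrate the callback-or-promise dual API convention.
function loadModel(callback) {
  // Stand-in for asynchronous model loading.
  const promise = Promise.resolve({ ready: true });
  if (typeof callback === "function") {
    promise.then((model) => callback(model));
    return; // callback style: nothing to return
  }
  return promise; // promise style: caller can await it
}

// Callback style:
loadModel((model) => console.log("loaded:", model.ready));
// Promise style:
loadModel().then((model) => console.log("loaded:", model.ready));
```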
Handpose

Handpose is a machine-learning model that allows for palm detection and hand-skeleton finger tracking in the browser. It can detect a maximum of one hand at a time and provides 21 3D hand keypoints that describe important locations on the palm and fingers.

Quickstart :
let predictions = [];
const video = document.getElementById('video');

// Create a new handpose method
const handpose = ml5.handpose(video, modelLoaded);

// When the model is loaded
function modelLoaded() {
  console.log('Model Loaded!');
}

// Listen to new 'hand' events
handpose.on('hand', results => {
  predictions = results;
});
 
Usage :
Initialize
You can initialize ml5.handpose with an optional video, configuration options object, or a callback function.
const handpose = ml5.handpose(?video, ?options, ?callback);
Parameters :
* video : OPTIONAL. Optional HTMLVideoElement input to run predictions on.
 
* options : OPTIONAL. An object that contains properties that affect the Handpose model's accuracy, results, etc. See TensorFlow's Handpose documentation for details on the available options.
const options = {
  flipHorizontal: false, // boolean value for whether the video should be flipped, defaults to false
  maxContinuousChecks: Infinity, // how many frames to go without running the bounding box detector; defaults to Infinity, but try a lower value if the detector is consistently producing bad predictions
  detectionConfidence: 0.8, // threshold for discarding a prediction, defaults to 0.8
  scoreThreshold: 0.75, // threshold for removing multiple (likely duplicate) detections based on a "non-maximum suppression" algorithm, defaults to 0.75
  iouThreshold: 0.3, // a float representing the threshold for deciding whether boxes overlap too much in non-maximum suppression; must be between [0, 1], defaults to 0.3
};
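The iouThreshold option relies on intersection over union (IoU), the standard overlap measure used by non-maximum suppression. Here is a pure-JavaScript sketch of IoU for axis-aligned boxes given as [x, y, width, height]; the iou helper is an illustration, not part of the ml5 or TensorFlow API.

```javascript
// Intersection over union of two axis-aligned boxes [x, y, width, height].
function iou(a, b) {
  // Corners of the intersection rectangle.
  const x1 = Math.max(a[0], b[0]);
  const y1 = Math.max(a[1], b[1]);
  const x2 = Math.min(a[0] + a[2], b[0] + b[2]);
  const y2 = Math.min(a[1] + a[3], b[1] + b[3]);
  // Clamp to 0 so disjoint boxes get zero intersection.
  const inter = Math.max(0, x2 - x1) * Math.max(0, y2 - y1);
  const union = a[2] * a[3] + b[2] * b[3] - inter;
  return union === 0 ? 0 : inter / union;
}

// Two 2x2 boxes overlapping in a 1x1 square: IoU = 1 / (4 + 4 - 1) = 1/7
const overlap = iou([0, 0, 2, 2], [1, 1, 2, 2]);
```

During non-maximum suppression, a detection whose IoU with a higher-scoring detection exceeds iouThreshold is discarded as a likely duplicate.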
* callback : OPTIONAL. A function that is called once the model has loaded.

Sources : ml5.js documentation, W3C, and others.