A tf.LayersModel is a directed, acyclic graph of tf.Layers plus methods for training, evaluation, prediction and saving.

tf.LayersModel is the basic unit of training, inference and evaluation in TensorFlow.js. To create a tf.LayersModel, use tf.model().

See also: tf.Sequential, tf.loadLayersModel.
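
For example, a minimal tf.LayersModel can be assembled from symbolic tensors with tf.input(), tf.layers, and tf.model() (an illustrative sketch; the layer sizes are arbitrary):

// Define a small two-layer graph and wrap it in a LayersModel.
const input = tf.input({shape: [4]});
const hidden = tf.layers.dense({units: 8, activation: 'relu'}).apply(input);
const output = tf.layers.dense({units: 1}).apply(hidden);
const model = tf.model({inputs: input, outputs: output});
model.summary();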

Properties

checkNumSamples: any

Get number of samples provided for training, evaluation or prediction.

Param: ins

Input tf.Tensor.

Param: batchSize

Integer batch size, optional.

Param: steps

Total number of steps (batches of samples) before declaring loop finished. Optional.

Param: stepsName

The public API's parameter name for steps.

Returns

Number of samples provided.

inputSpec: InputSpec[]

List of InputSpec class instances.

Each entry describes one required input:

  • ndim
  • dtype

A layer with n input tensors must have an inputSpec of length n.

makeTestFunction: any

Create a function which, when invoked with an array of tf.Tensors as a batch of inputs, returns the prespecified loss and metrics of the model under the batch of input data.

name: string

Name for this layer. Must be unique within a model.

predictLoop: any

Helper method to loop over some data in batches.

Porting Note: Not using the functional approach in the Python equivalent due to the imperative backend.

Porting Note: Does not support step mode currently.

Param: ins

Input data.

Param: batchSize

Integer batch size.

Param: verbose

Verbosity mode.

Returns

Predictions as tf.Tensor (if a single output) or an Array of tf.Tensor (if multiple outputs).

retrieveSymbolicTensors: any

Retrieve the model's internal symbolic tensors from symbolic-tensor names.

testLoop: any

Loop over some test data in batches.

Param: f

A Function returning a list of tensors.

Param: ins

Array of tensors to be fed to f.

Param: batchSize

Integer batch size or null / undefined.

Param: verbose

verbosity mode.

Param: steps

Total number of steps (batches of samples) before declaring test finished. Ignored with the default value of null / undefined.

Returns

Array of Scalars.

trainable_: boolean

Whether the layer weights will be updated during training.

className: string

Nocollapse

Accessors

  • get input(): SymbolicTensor | SymbolicTensor[]
  • Retrieves the input tensor(s) of a layer.

    Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

    Returns SymbolicTensor | SymbolicTensor[]

    Input tensor or list of input tensors.

    Exception

    AttributeError if the layer is connected to more than one incoming layer.

  • get output(): SymbolicTensor | SymbolicTensor[]
  • Retrieves the output tensor(s) of a layer.

    Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

    Returns SymbolicTensor | SymbolicTensor[]

    Output tensor or list of output tensors.

    Exception

    AttributeError if the layer is connected to more than one incoming layer.

  • get outputShape(): Shape | Shape[]
  • Retrieves the output shape(s) of a layer.

    Only applicable if the layer has only one inbound node, or if all inbound nodes have the same output shape.

    Returns Shape | Shape[]

    Output shape or shapes.

    Throws

    AttributeError: if the layer is connected to more than one incoming node.

    Doc

  • get stateful(): boolean
  • Determine whether the container is stateful.

    Porting Note: this is the equivalent of the stateful property in Python Keras.

    Returns boolean

  • set stopTraining(stop): void
  • Setter used for force stopping of LayersModel.fit() (i.e., training).

    Example:

    const input = tf.input({shape: [10]});
    const output = tf.layers.dense({units: 1}).apply(input);
    const model = tf.model({inputs: [input], outputs: [output]});
    model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
    const xs = tf.ones([8, 10]);
    const ys = tf.zeros([8, 1]);

    const history = await model.fit(xs, ys, {
      epochs: 10,
      callbacks: {
        onEpochEnd: async (epoch, logs) => {
          if (epoch === 2) {
            model.stopTraining = true;
          }
        }
      }
    });

    // There should be only 3 values in the loss array, instead of 10 values,
    // due to the stopping after 3 epochs.
    console.log(history.history.loss);

    Parameters

    • stop: boolean

    Returns void

Methods

  • Add losses to the layer.

    The loss may potentially be conditional on some input tensors, for instance activity losses are conditional on the layer's inputs.

    Parameters

    • losses: RegularizerFn | RegularizerFn[]

    Returns void

    Doc

  • Adds a weight variable to the layer.

    Parameters

    • name: string

      Name of the new weight variable.

    • shape: Shape

      The shape of the weight.

    • Optional dtype: keyof DataTypeMap

      The dtype of the weight.

    • Optional initializer: Initializer

      An initializer instance.

    • Optional regularizer: Regularizer

      A regularizer instance.

    • Optional trainable: boolean

      Whether the weight should be trained via backprop or not (assuming that the layer itself is also trainable).

    • Optional constraint: Constraint

      An optional constraint instance.

    • Optional getInitializerFunc: Function

    Returns LayerVariable

    The created weight variable.

    Doc

  • Builds or executes a Layer's logic.

    When called with tf.Tensor(s), execute the Layer's computation and return Tensor(s). For example:

    const denseLayer = tf.layers.dense({
      units: 1,
      kernelInitializer: 'zeros',
      useBias: false
    });

    // Invoke the layer's apply() method with a `tf.Tensor` (with concrete
    // numeric values).
    const input = tf.ones([2, 2]);
    const output = denseLayer.apply(input);

    // The output's value is expected to be [[0], [0]], due to the fact that
    // the dense layer has a kernel initialized to all-zeros and does not have
    // a bias.
    output.print();

    When called with tf.SymbolicTensor(s), this will prepare the layer for future execution. This entails internal book-keeping on shapes of expected Tensors, wiring layers together, and initializing weights.

    Calling apply with tf.SymbolicTensors is typically done during the building of non-tf.Sequential models. For example:

    const flattenLayer = tf.layers.flatten();
    const denseLayer = tf.layers.dense({units: 1});

    // Use tf.layers.input() to obtain a SymbolicTensor as input to apply().
    const input = tf.input({shape: [2, 2]});
    const output1 = flattenLayer.apply(input);

    // output1.shape is [null, 4]. The first dimension is the undetermined
    // batch size. The second dimension comes from flattening the [2, 2]
    // shape.
    console.log(JSON.stringify(output1.shape));

    // The output SymbolicTensor of the flatten layer can be used to call
    // the apply() of the dense layer:
    const output2 = denseLayer.apply(output1);

    // output2.shape is [null, 1]. The first dimension is the undetermined
    // batch size. The second dimension matches the number of units of the
    // dense layer.
    console.log(JSON.stringify(output2.shape));

    // The input and output can be used to construct a model that consists
    // of the flatten and dense layers.
    const model = tf.model({inputs: input, outputs: output2});

    Parameters

    • inputs: SymbolicTensor | SymbolicTensor[] | Tensor<Rank> | Tensor<Rank>[]

      a tf.Tensor or tf.SymbolicTensor or an Array of them.

    • Optional kwargs: Kwargs

      Additional keyword arguments to be passed to call().

    Returns SymbolicTensor | SymbolicTensor[] | Tensor<Rank> | Tensor<Rank>[]

    Output of the layer's call method.

    Exception

    ValueError in case the layer is missing shape information for its build call.

    Doc

  • Checks compatibility between the layer and provided inputs.

    This checks that the input tensor(s) verify the input assumptions of the layer (if any). If not, exceptions are raised.

    Parameters

    Returns void

    Exception

    ValueError in case of mismatch between the provided inputs and the expectations of the layer.

  • Creates the layer weights.

    Must be implemented on all layers that have weights.

    Called when apply() is called to construct the weights.

    Parameters

    • inputShape: Shape | Shape[]

      A Shape or array of Shape (unused).

    Returns void

    Doc

  • Retrieves the Container's current loss values.

    Used for regularizers during training.

    Returns Scalar[]

  • Call the model on new inputs.

    In this case call just reapplies all ops in the graph to the new inputs (i.e., builds a new computational graph from the provided inputs).

    Parameters

    • inputs: Tensor<Rank> | Tensor<Rank>[]

      A tensor or list of tensors.

    • kwargs: Kwargs

    Returns Tensor<Rank> | Tensor<Rank>[]

    A tensor if there is a single output, or a list of tensors if there is more than one output.

  • Check trainable weights count consistency.

    This will raise a warning if this.trainableWeights and this.collectedTrainableWeights are inconsistent (i.e., have different numbers of parameters). Inconsistency will typically arise when one modifies model.trainable without calling model.compile() again.

    Returns void

  • Clear call hook. This is currently used for testing only.

    Returns void

  • Configures and prepares the model for training and evaluation. Compiling outfits the model with an optimizer, loss, and/or metrics. Calling fit or evaluate on an un-compiled model will throw an error.
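
    For example (a minimal sketch; the optimizer and loss shown are illustrative):

    const model = tf.sequential({
      layers: [tf.layers.dense({units: 1, inputShape: [10]})]
    });
    // Attach an optimizer and a loss; metrics could also be listed via the
    // `metrics` field of ModelCompileArgs.
    model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});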

    Parameters

    • args: ModelCompileArgs

      a ModelCompileArgs specifying the loss, optimizer, and metrics to be used for fitting and evaluating this model.

    Returns void

    Doc

  • Computes an output mask tensor.

    Parameters

    • inputs: Tensor<Rank> | Tensor<Rank>[]

      Tensor or list of tensors.

    • Optional mask: Tensor<Rank> | Tensor<Rank>[]

      Tensor or list of tensors.

    Returns Tensor<Rank> | Tensor<Rank>[]

    null or a tensor (or list of tensors, one per output tensor of the layer).

  • Computes the output shape of the layer.

    Assumes that the layer will be built to match the input shape provided.

    Parameters

    • inputShape: Shape | Shape[]

      A shape (tuple of integers) or a list of shape tuples (one per output tensor of the layer). Shape tuples can include null for free dimensions, instead of an integer.

    Returns Shape | Shape[]

  • Counts the total number of numbers (e.g., float32, int32) in the weights.
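
    For example (a minimal sketch):

    const model = tf.sequential({
      layers: [tf.layers.dense({units: 1, inputShape: [10]})]
    });
    // 10 kernel weights + 1 bias = 11 parameters.
    console.log(model.countParams());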

    Returns number

    An integer count.

    Throws

    RuntimeError: If the layer is not built yet (in which case its weights are not defined yet.)

    Doc

  • Dispose the weight variables that this Layer instance holds.

    Returns number

    Number of disposed variables.

  • Returns the loss value & metrics values for the model in test mode.

    Loss and metrics are specified during compile(), which needs to happen before calls to evaluate().

    Computation is done in batches.

    const model = tf.sequential({
      layers: [tf.layers.dense({units: 1, inputShape: [10]})]
    });
    model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
    const result = model.evaluate(
        tf.ones([8, 10]), tf.ones([8, 1]), {batchSize: 4});
    result.print();

    Parameters

    • x: Tensor<Rank> | Tensor<Rank>[]

      tf.Tensor of test data, or an Array of tf.Tensors if the model has multiple inputs.

    • y: Tensor<Rank> | Tensor<Rank>[]

      tf.Tensor of target data, or an Array of tf.Tensors if the model has multiple outputs.

    • Optional args: ModelEvaluateArgs

      A ModelEvaluateArgs, containing optional fields.

    Returns Scalar | Scalar[]

    Scalar test loss (if the model has a single output and no metrics) or Array of Scalars (if the model has multiple outputs and/or metrics). The attribute model.metricsNames will give you the display labels for the scalar outputs.

    Doc

  • Evaluate model using a dataset object.

    Note: Unlike evaluate(), this method is asynchronous (async).

    Parameters

    • dataset: Dataset<{}>

      A dataset object. Its iterator() method is expected to generate a dataset iterator object, the next() method of which is expected to produce data batches for evaluation. The return value of the next() call ought to contain a boolean done field and a value field. The value field is expected to be an array of two tf.Tensors or an array of two nested tf.Tensor structures. The former case is for models with exactly one input and one output (e.g. a sequential model). The latter case is for models with multiple inputs and/or multiple outputs. Of the two items in the array, the first is the input feature(s) and the second is the output target(s).

    • Optional args: ModelEvaluateDatasetArgs

      A configuration object for the dataset-based evaluation.

    Returns Promise<Scalar | Scalar[]>

    Loss and metric values as an Array of Scalar objects.

    Doc

  • Execute internal tensors of the model with input data feed.

    Parameters

    • inputs: Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap

      Input data feed. Must match the inputs of the model.

    • outputs: string | string[]

      Names of the output tensors to be fetched. Must match names of the SymbolicTensors that belong to the graph.

    Returns Tensor<Rank> | Tensor<Rank>[]

    Fetched values for outputs.

  • Trains the model for a fixed number of epochs (iterations on a dataset).

    const model = tf.sequential({
      layers: [tf.layers.dense({units: 1, inputShape: [10]})]
    });
    model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
    for (let i = 1; i < 5; ++i) {
      const h = await model.fit(tf.ones([8, 10]), tf.ones([8, 1]), {
        batchSize: 4,
        epochs: 3
      });
      console.log("Loss after Epoch " + i + " : " + h.history.loss[0]);
    }

    Parameters

    • x: Tensor<Rank> | Tensor<Rank>[] | {
          [inputName: string]: Tensor;
      }

      tf.Tensor of training data, or an array of tf.Tensors if the model has multiple inputs. If all inputs in the model are named, you can also pass a dictionary mapping input names to tf.Tensors.

    • y: Tensor<Rank> | Tensor<Rank>[] | {
          [inputName: string]: Tensor;
      }

      tf.Tensor of target (label) data, or an array of tf.Tensors if the model has multiple outputs. If all outputs in the model are named, you can also pass a dictionary mapping output names to tf.Tensors.

    • Optional args: ModelFitArgs

      A ModelFitArgs, containing optional fields.

    Returns Promise<History>

    A History instance. Its history attribute contains all information collected during training.

    Exception

    ValueError In case of mismatch between the provided input data and what the model expects.

    Doc

  • Trains the model using a dataset object.
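
    A minimal sketch, assuming the tf.data helpers (tf.data.array, tf.data.zip) are available and packing each dataset element as an {xs, ys} object:

    const model = tf.sequential({
      layers: [tf.layers.dense({units: 1, inputShape: [1]})]
    });
    model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});

    // Build a four-example dataset and batch it.
    const xDataset = tf.data.array([[0], [1], [2], [3]]);
    const yDataset = tf.data.array([[1], [3], [5], [7]]);
    const xyDataset = tf.data.zip({xs: xDataset, ys: yDataset}).batch(4);

    const history = await model.fitDataset(xyDataset, {epochs: 4});
    console.log(history.history.loss);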

    Type Parameters

    • T

    Parameters

    • dataset: Dataset<T>

      A dataset object. Its iterator() method is expected to generate a dataset iterator object, the next() method of which is expected to produce data batches for training. The return value of the next() call ought to contain a boolean done field and a value field. The value field is expected to be an array of two tf.Tensors or an array of two nested tf.Tensor structures. The former case is for models with exactly one input and one output (e.g. a sequential model). The latter case is for models with multiple inputs and/or multiple outputs. Of the two items in the array, the first is the input feature(s) and the second is the output target(s).

    • args: ModelFitDatasetArgs<T>

      A ModelFitDatasetArgs, containing optional fields.

    Returns Promise<History>

    A History instance. Its history attribute contains all information collected during training.

    Doc

  • Abstract fit function for f(ins).

    Parameters

    • f: ((data) => Scalar[])

      A Function returning a list of tensors. For training, this function is expected to perform the updates to the variables.

    • ins: Tensor<Rank>[]

      List of tensors to be fed to f.

    • Optional outLabels: string[]

      List of strings, display names of the outputs of f.

    • Optional batchSize: number

      Integer batch size, or null if unknown. Default: 32.

    • Optional epochs: number

      Number of times to iterate over the data. Default : 1.

    • Optional verbose: number

      Verbosity mode: 0, 1, or 2. Default: 1.

    • Optional callbacks: BaseCallback[]

      List of callbacks to be called during training.

    • Optional valF: ((data) => Scalar[])

      Function to call for validation.

    • Optional valIns: Tensor<Rank>[]

      List of tensors to be fed to valF.

    • Optional shuffle: string | boolean

      Whether to shuffle the data at the beginning of every epoch. Default : true.

    • Optional callbackMetrics: string[]

      List of strings, the display names of the metrics passed to the callbacks. They should be the concatenation of the display names of the outputs of f and the list of display names of the outputs of valF.

    • Optional initialEpoch: number

      Epoch at which to start training (useful for resuming a previous training run). Default : 0.

    • Optional stepsPerEpoch: number

      Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. Ignored with the default value of undefined or null.

    • Optional validationSteps: number

      Number of steps to run validation for (only if doing validation from data tensors). Not applicable for tfjs-layers.

    Returns Promise<History>

    A History object.

  • Return the class name for this class to use in serialization contexts.

    Generally speaking this will be the same thing that constructor.name would have returned. However, the class name needs to be robust against minification for serialization/deserialization to work properly.

    There are also places, such as initializers.VarianceScaling, where implementation details between different languages have led to different class hierarchies, and a non-leaf node is used for serialization purposes.

    Returns string

  • Retrieves the input tensor(s) of a layer at a given node.

    Parameters

    • nodeIndex: number

      Integer, index of the node from which to retrieve the attribute. E.g. nodeIndex=0 will correspond to the first time the layer was called.

    Returns SymbolicTensor | SymbolicTensor[]

    A tensor (or list of tensors if the layer has multiple inputs).

  • Retrieves a layer based on either its name (unique) or index.

    Indices are based on order of horizontal graph traversal (bottom-up).

    If both name and index are specified, index takes precedence.
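
    For example (a minimal sketch using an explicitly named layer):

    const input = tf.input({shape: [4]});
    const output = tf.layers.dense({units: 2, name: 'myDense'}).apply(input);
    const model = tf.model({inputs: input, outputs: output});
    // Look the layer up by its (unique) name.
    console.log(model.getLayer('myDense').name);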

    Parameters

    • name: string

      Name of layer.

    Returns Layer

    A Layer instance.

    Throws

    ValueError: In case of invalid layer name or index.

    Doc

  • Extract weight values of the model.

    Parameters

    • Optional config: SaveConfig

    Returns NamedTensor[]

    An Array of NamedTensors, pairing original weight names (i.e., non-uniqueified weight names) with their values.

  • Retrieves the output tensor(s) of a layer at a given node.

    Parameters

    • nodeIndex: number

      Integer, index of the node from which to retrieve the attribute. E.g. nodeIndex=0 will correspond to the first time the layer was called.

    Returns SymbolicTensor | SymbolicTensor[]

    A tensor (or list of tensors if the layer has multiple outputs).

  • Get user-defined metadata.

    The metadata is supplied via one of the two routes:

    1. By calling setUserDefinedMetadata().
    2. Loaded during model loading (if the model is constructed via tf.loadLayersModel().)

    If no user-defined metadata is available from either of the two routes, this function will return undefined.
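
    For example (a minimal sketch; the metadata keys are arbitrary):

    const model = tf.sequential({
      layers: [tf.layers.dense({units: 1, inputShape: [10]})]
    });
    // Attach metadata via setUserDefinedMetadata(), then read it back.
    model.setUserDefinedMetadata({language: 'en', threshold: 0.5});
    console.log(model.getUserDefinedMetadata());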

    Returns {}

    • Returns the current values of the weights of the layer.

      Parameters

      • Optional trainableOnly: boolean

        Whether to get the values of only trainable weights.

      Returns Tensor<Rank>[]

      Weight values as an Array of tf.Tensors.

      Doc

    • Loads all layer weights from a JSON object.

      Porting Note: HDF5 weight files cannot be directly loaded in JavaScript / TypeScript. The utility script at scripts/pykeras.py offers means to convert them into JSON strings compatible with this method.

      Porting Note: TensorFlow.js Layers supports only loading by name currently.

      Parameters

      • weights: NamedTensorMap

        A JSON mapping weight names to weight values as nested arrays of numbers, or a NamedTensorMap, i.e., a JSON mapping weight names to tf.Tensor objects.

      • Optional strict: boolean

        Require that the provided weights exactly match those required by the container. Default: true. Passing false means that extra weights and missing weights will be silently ignored.

      Returns void

    • Creates a function that performs the following actions:

      1. computes the losses
      2. sums them to get the total loss
      3. calls the optimizer to compute the gradients of the LayersModel's trainable weights w.r.t. the total loss and to update the variables
      4. calculates the metrics
      5. returns the values of the losses and metrics.

      Returns ((data) => Scalar[])

        • (data): Scalar[]

          Parameters

          Returns Scalar[]

    • Generates output predictions for the input samples.

      Computation is done in batches.

      Note: the "step" mode of predict() is currently not supported. This is because the TensorFlow.js core backend is imperative only.

      const model = tf.sequential({
        layers: [tf.layers.dense({units: 1, inputShape: [10]})]
      });
      model.predict(tf.ones([8, 10]), {batchSize: 4}).print();

      Parameters

      • x: Tensor<Rank> | Tensor<Rank>[]

        The input data, as a Tensor, or an Array of tf.Tensors if the model has multiple inputs.

      • Optional args: ModelPredictConfig

        A ModelPredictArgs object containing optional fields.

      Returns Tensor<Rank> | Tensor<Rank>[]

      Prediction results as tf.Tensor(s).

      Exception

      ValueError In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.

      Doc

    • Returns predictions for a single batch of samples.

      const model = tf.sequential({
        layers: [tf.layers.dense({units: 1, inputShape: [10]})]
      });
      model.predictOnBatch(tf.ones([8, 10])).print();

      Parameters

      Returns Tensor<Rank> | Tensor<Rank>[]

      Tensor(s) of predictions

      Doc

    • Reset the state of all stateful constituent layers (if any).

      Examples of stateful layers include RNN layers whose stateful property is set as true.

      Returns void

    • Computes output tensors for new inputs.

      Note:

      • Expects inputs to be a list (potentially with 1 element).

      Parameters

      • inputs: Tensor<Rank>[]

        List of tensors

      • Optional masks: Tensor<Rank>[]

        List of masks (tensors or null).

      Returns [Tensor<Rank>[], Tensor<Rank>[], Shape[]]

      Three lists: outputTensors, outputMasks, outputShapes

    • Save the configuration and/or weights of the LayersModel.

      An IOHandler is an object that has a save method of the proper signature defined. The save method manages the storing or transmission of serialized data ("artifacts") that represent the model's topology and weights onto or via a specific medium, such as file downloads, local storage, IndexedDB in the web browser and HTTP requests to a server. TensorFlow.js provides IOHandler implementations for a number of frequently used saving mediums, such as tf.io.browserDownloads and tf.io.browserLocalStorage. See tf.io for more details.

      This method also allows you to refer to certain types of IOHandlers as URL-like string shortcuts, such as 'localstorage://' and 'indexeddb://'.

      Example 1: Save model's topology and weights to browser local storage; then load it back.

      const model = tf.sequential(
          {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
      console.log('Prediction from original model:');
      model.predict(tf.ones([1, 3])).print();

      const saveResults = await model.save('localstorage://my-model-1');

      const loadedModel = await tf.loadLayersModel('localstorage://my-model-1');
      console.log('Prediction from loaded model:');
      loadedModel.predict(tf.ones([1, 3])).print();

      Example 2: Save model's topology and weights to browser IndexedDB; then load it back.

      const model = tf.sequential(
          {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
      console.log('Prediction from original model:');
      model.predict(tf.ones([1, 3])).print();

      const saveResults = await model.save('indexeddb://my-model-1');

      const loadedModel = await tf.loadLayersModel('indexeddb://my-model-1');
      console.log('Prediction from loaded model:');
      loadedModel.predict(tf.ones([1, 3])).print();

      Example 3: Save model's topology and weights as two files (my-model-1.json and my-model-1.weights.bin) to be downloaded from the browser.

      const model = tf.sequential(
          {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
      const saveResults = await model.save('downloads://my-model-1');

      Example 4: Send model's topology and weights to an HTTP server. See the documentation of tf.io.http for more details, including specifying request parameters and implementation of the server.

      const model = tf.sequential(
          {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
      const saveResults = await model.save('http://my-server/model/upload');

      Parameters

      • handlerOrURL: string | IOHandler

        An instance of IOHandler or a URL-like, scheme-based string shortcut for IOHandler.

      • Optional config: SaveConfig

        Options for saving the model.

      Returns Promise<SaveResult>

      A Promise of SaveResult, which summarizes the result of the saving, such as byte sizes of the saved artifacts for the model's topology and weight values.

      Doc

    • Set call hook. This is currently used for testing only.

      Parameters

      • callHook: CallHook

      Returns void

    • Set the fast-weight-initialization flag.

      In cases where the initialized weight values will be immediately overwritten by loaded weight values during model loading, setting the flag to true saves unnecessary calls to potentially expensive initializers and speeds up the loading process.

      Parameters

      • value: boolean

        Target value of the flag.

      Returns void

    • Set user-defined metadata.

      The set metadata will be serialized together with the topology and weights of the model during save() calls.

      Parameters

      • userDefinedMetadata: {}

      Returns void

      • Sets the weights of the layer, from Tensors.
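
        For example (a minimal sketch that copies weights between two models with identical architectures):

        const makeModel = () => tf.sequential({
          layers: [tf.layers.dense({units: 1, inputShape: [10]})]
        });
        const source = makeModel();
        const target = makeModel();
        // getWeights() returns the tensors in the order expected by setWeights().
        target.setWeights(source.getWeights());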

        Parameters

        • weights: Tensor<Rank>[]

          A list of Tensors. The number of arrays and their shapes must match the number and shapes of the layer's weights (i.e., it should match the output of getWeights).

        Returns void

        Exception

        ValueError If the provided weights list does not match the layer's specifications.

        Doc

      • Print a text summary of the model's layers.

        The summary includes

        • Name and type of all layers that comprise the model.
        • Output shape(s) of the layers.
        • Number of weight parameters of each layer.
        • If the model has non-sequential-like topology, the inputs each layer receives.
        • The total number of trainable and non-trainable parameters of the model.

        const input1 = tf.input({shape: [10]});
        const input2 = tf.input({shape: [20]});
        const dense1 = tf.layers.dense({units: 4}).apply(input1);
        const dense2 = tf.layers.dense({units: 8}).apply(input2);
        const concat = tf.layers.concatenate().apply([dense1, dense2]);
        const output =
            tf.layers.dense({units: 3, activation: 'softmax'}).apply(concat);

        const model = tf.model({inputs: [input1, input2], outputs: output});
        model.summary();

        Parameters

        • Optional lineLength: number

          Custom line length, in number of characters.

        • Optional positions: number[]

          Custom widths of each of the columns, as either fractions of lineLength (e.g., [0.5, 0.75, 1]) or absolute number of characters (e.g., [30, 50, 65]). Each number corresponds to right-most (i.e., ending) position of a column.

        • Optional printFn: ((message?, ...optionalParams) => void)

          Custom print function. Can be used to replace the default console.log. For example, you can use x => {} to mute the printed messages in the console.

            • (message?, ...optionalParams): void
            • Parameters

              • Optional message: any
              • Rest ...optionalParams: any[]

              Returns void

        Returns void

        Doc

      • Returns a JSON string containing the network configuration.

        To load a network from a JSON save file, use models.modelFromJSON(jsonString);
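
        For example (a minimal sketch of both return forms):

        const model = tf.sequential({
          layers: [tf.layers.dense({units: 1, inputShape: [10]})]
        });
        const jsonString = model.toJSON();             // JSON string (default)
        const jsonObject = model.toJSON(null, false);  // JSON object instead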

        Parameters

        • Optional unused: any
        • Optional returnString: boolean

          Whether the return value should be stringified (default: true).

        Returns string | PyJsonDict

        a JSON string if returnString (default), or a JSON object if !returnString.

      • Runs a single gradient update on a single batch of data.

        This method differs from fit() and fitDataset() in the following regards:

        • It operates on exactly one batch of data.
        • It returns only the loss and metric values, instead of returning the batch-by-batch loss and metric values.
        • It doesn't support fine-grained options such as verbosity and callbacks.
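
        For example (a minimal sketch; the data values are illustrative):

        const model = tf.sequential({
          layers: [tf.layers.dense({units: 1, inputShape: [10]})]
        });
        model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
        const xs = tf.ones([8, 10]);
        const ys = tf.zeros([8, 1]);
        // A single gradient update on this one batch; resolves to the loss value.
        const loss = await model.trainOnBatch(xs, ys);
        console.log(loss);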

        Parameters

        • x: Tensor<Rank> | Tensor<Rank>[] | {
              [inputName: string]: Tensor;
          }

          Input data. It could be one of the following:

          • A tf.Tensor, or an Array of tf.Tensors (in case the model has multiple inputs).
          • An Object mapping input names to corresponding tf.Tensor (if the model has named inputs).
        • y: Tensor<Rank> | Tensor<Rank>[] | {
              [inputName: string]: Tensor;
          }

          Target data. It could be either a tf.Tensor or multiple tf.Tensors. It should be consistent with x.

        Returns Promise<number | number[]>

        Training loss or losses (in case the model has multiple outputs), along with metrics (if any), as numbers.

        Doc

      • Util shared between different serialization methods.

        Returns ConfigDict

        LayersModel config with Keras version information added.

      • Check compatibility between input shape and this layer's batchInputShape.

        Print warning if any incompatibility is found.

        Parameters

        • inputShape: Shape

          Input shape to be checked.

        Returns void

      • Type Parameters

        Parameters

        • cls: SerializableConstructor<T>
        • config: ConfigDict
        • Optional customObjects: ConfigDict
        • Optional fastWeightInit: boolean

        Returns T

        Nocollapse

      • Converts a layer and its index to a unique (immutable type) name. This function is used internally with this.containerNodes.

        Parameters

        • layer: Layer

          The layer.

        • nodeIndex: number

          The layer's position (e.g. via enumerate) in a list of nodes.

        Returns string

        The unique name.
