


# conv2d_2 (Conv2D)              (None, 4, 4, 64)          36928

Above, you can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width and height dimensions tend to shrink as you go deeper in the network. The number of output channels for each Conv2D layer is controlled by the first argument (e.g., 32 or 64). Typically, as the width and height shrink, you can afford (computationally) to add more output channels in each Conv2D layer.

As you can see, our (4, 4, 64) outputs were flattened into vectors of shape (1024) before going through two Dense layers.
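For concreteness, here is a minimal sketch of a network consistent with the shapes discussed above, assuming 32 x 32 x 3 CIFAR-10 inputs, the default "valid" padding, and the keras R interface. The 3 x 3 kernels, the 32/64/64 filter counts, and the 64-unit hidden Dense layer are illustrative assumptions rather than a transcript of the model defined earlier in this tutorial.

library(keras)

# Illustrative model: each layer_conv_2d() takes the number of output
# channels as its first argument; the spatial dimensions shrink with depth.
sketch_model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(32, 32, 3)) %>%                               # (30, 30, 32)
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%                               # (15, 15, 32)
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>% # (13, 13, 64)
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%                               # (6, 6, 64)
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>% # (4, 4, 64)
  layer_flatten() %>%                                  # 4 * 4 * 64 = 1024 values
  layer_dense(units = 64, activation = "relu") %>%     # hidden size is an assumption
  layer_dense(units = 10, activation = "softmax")      # one probability per CIFAR-10 class

summary(sketch_model)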


Next, compile and train the model:

model %>% compile(
  optimizer = "adam",
  loss = "sparse_categorical_crossentropy",
  metrics = "accuracy"
)

history <- model %>% fit(
  x = cifar$train$x,
  y = cifar$train$y,
  epochs = 10,
  validation_data = unname(cifar$test),
  verbose = 2
)

# Train on 50000 samples, validate on 10000 samples
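fit() returns the per-epoch training history, which can be inspected afterwards. A brief sketch using plot() and evaluate() from the keras R package; note that the exact metric names (e.g., val_accuracy) depend on the Keras/TensorFlow version in use.

# Loss and accuracy curves for the training and validation data
plot(history)

# Per-epoch validation accuracy recorded during fit()
history$metrics$val_accuracy

# Loss and accuracy on the held-out test set
model %>% evaluate(cifar$test$x, cifar$test$y, verbose = 0)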
