| Layer          | Filters / Units | Activation                                    |
|----------------|-----------------|-----------------------------------------------|
| Convolutional  | 32              | Rectified Linear Unit (ReLU) (Agarap, 2019)   |
| Convolutional  | 64              | ReLU                                          |
| Max Pooling 2D | N/A             | N/A                                           |
| Dropout        | N/A             | N/A                                           |
| Flatten        | N/A             | N/A                                           |
| Dense          | 128             | ReLU                                          |
| Dense          | 1               | Sigmoid                                       |
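For concreteness, the stack above can be expressed as a Keras `Sequential` model. The sketch below assumes 3x3 convolution kernels, a 2x2 pooling window, a dropout rate of 0.5, and 28x28 single-channel inputs; none of these hyperparameters are given in the table, so they are placeholders only.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sketch of the tabulated architecture. Input shape, kernel sizes,
# pool size, and dropout rate are assumptions, not values from the table.
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),                 # assumed input shape
    layers.Conv2D(32, (3, 3), activation="relu"),   # 32 filters, ReLU
    layers.Conv2D(64, (3, 3), activation="relu"),   # 64 filters, ReLU
    layers.MaxPooling2D(pool_size=(2, 2)),          # assumed 2x2 window
    layers.Dropout(0.5),                            # assumed rate
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),          # single-unit output
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The single sigmoid unit in the final layer implies a binary classification target, which is why the sketch pairs it with a binary cross-entropy loss.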