JetBrains has released an alpha version of KotlinDL, a deep learning framework for Kotlin. KotlinDL offers a simple API for defining and training neural networks. With a high-level API and sensible default values for many parameters, the developers aim to lower the entry barrier to deep learning on the Java Virtual Machine.
What Constitutes the API?
The early version of KotlinDL includes everything needed to define multilayer perceptrons and convolutional networks. Most settings come with reasonable default values, yet users can still choose from a wide range of optimizers, initializers, activation functions, and other options. A model built during training can be saved and used in applications written in Kotlin or Java.
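To illustrate the shape of the API, here is a minimal sketch of defining and training a small multilayer perceptron on MNIST-style 28x28 images. The exact package paths, the `mnist()` dataset helper, and the parameter names are assumptions based on early KotlinDL builds and may differ between alpha releases:

```kotlin
// Sketch only: package names and helpers reflect alpha builds of KotlinDL
// and are assumptions, not a definitive reference.
import org.jetbrains.kotlinx.dl.api.core.Sequential
import org.jetbrains.kotlinx.dl.api.core.activation.Activations
import org.jetbrains.kotlinx.dl.api.core.layer.Dense
import org.jetbrains.kotlinx.dl.api.core.layer.Flatten
import org.jetbrains.kotlinx.dl.api.core.layer.Input
import org.jetbrains.kotlinx.dl.api.core.loss.Losses
import org.jetbrains.kotlinx.dl.api.core.metric.Metrics
import org.jetbrains.kotlinx.dl.api.core.optimizer.Adam
import org.jetbrains.kotlinx.dl.dataset.mnist

fun main() {
    // 28x28 grayscale input; hidden layers rely on default
    // initializers and activations where none are specified.
    val model = Sequential.of(
        Input(28, 28, 1),
        Flatten(),
        Dense(300),
        Dense(100),
        Dense(10, activation = Activations.Linear) // logits for 10 classes
    )

    val (train, test) = mnist() // assumed built-in dataset helper

    model.use {
        it.compile(
            optimizer = Adam(),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.ACCURACY
        )
        it.fit(dataset = train, epochs = 5, batchSize = 100)
        val accuracy = it.evaluate(dataset = test, batchSize = 100)
            .metrics[Metrics.ACCURACY]
        println("Test accuracy: $accuracy")
    }
}
```

Note how little configuration is required: only the layer sizes and the final activation are spelled out, while initializers and hidden-layer activations fall back to defaults.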
Using Keras Models
Because KotlinDL's API closely mirrors Keras, you can load and use models trained with Keras in Python. When loading a model, you can apply transfer learning, which lets you skip training a neural network from scratch and instead take a ready-made model and adapt it to your task.
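A hedged sketch of the loading flow, assuming a Keras export of the topology as JSON and the weights as HDF5; the function names (`loadModelConfiguration`, `loadWeights`) and the `HdfFile` reader follow early KotlinDL builds and are assumptions here:

```kotlin
// Sketch only: API names are assumptions based on alpha builds.
import java.io.File
import io.jhdf.HdfFile
import org.jetbrains.kotlinx.dl.api.core.Sequential
import org.jetbrains.kotlinx.dl.api.core.loss.Losses
import org.jetbrains.kotlinx.dl.api.core.metric.Metrics
import org.jetbrains.kotlinx.dl.api.core.optimizer.Adam

fun main() {
    // Keras side: model.to_json() -> model.json, model.save_weights() -> weights.h5
    val model = Sequential.loadModelConfiguration(File("model.json"))

    model.use {
        it.compile(
            optimizer = Adam(),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.ACCURACY
        )
        // Load the weights trained in Python instead of initializing from scratch.
        it.loadWeights(HdfFile(File("weights.h5")))
        // From here the model can be fine-tuned on new data or used for prediction.
    }
}
```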
KotlinDL uses the TensorFlow Java API as its engine. All computations are performed by the TensorFlow machine learning library using native memory, and during training the data stays in native format.
The alpha version of KotlinDL ships with a limited set of layers: Input(), Flatten(), Dense(), Dropout(), Conv2D(), MaxPool2D(), and AvgPool2D(). This restriction also determines which Keras models can be loaded into the framework: the VGG-16 and VGG-19 architectures are already supported, but ResNet50 is not yet. A minor update planned for the coming months is expected to expand the number of supported architectures. The second major limitation is the lack of support for Android devices.
Training models on the CPU can be slow, so the usual practice is to run computations on the GPU. This requires NVIDIA's CUDA to be installed. Once it is, starting to train a model on the GPU takes only a single additional dependency.
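Since the alpha runs on the TensorFlow Java API, GPU support comes from swapping in TensorFlow's GPU-enabled native binaries. A sketch of the Gradle setup, where the artifact coordinates and version numbers are assumptions and should be checked against the KotlinDL documentation:

```kotlin
// build.gradle.kts (sketch; coordinates and versions are assumptions)
dependencies {
    implementation("org.jetbrains.kotlin-deeplearning:api:0.1.1")
    // GPU-enabled TensorFlow JNI binaries; requires NVIDIA CUDA installed.
    implementation("org.tensorflow:libtensorflow_jni_gpu:1.15.0")
}
```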