News from the RAVEN Coin Development team on July 31, 2021
Dear Raven Community,
This quarter we’ve focused heavily on the development and future growth of Raven Protocol. As our sharp-eyed GitHub subscribers saw, we restructured the framework to allow contributors to easily implement different algorithms.
The Raven Distribution Framework (RDF) is our suite of libraries for training machine learning/deep learning models in a decentralized and distributed manner. It can also be used to perform statistical operations. Most importantly, it makes training ML/DL models on browser nodes faster and cheaper. With RDF, we aim to build an ecosystem and accelerate the development of the fundamentals.
Let’s explore the different libraries/repositories as our codebase grows:
RavCom: RavCom is a shared library containing methods to interact with databases such as MySQL, Redis, and PostgreSQL. It is used by most of our other libraries.
RavOp: An op is the fundamental unit of RDF, and RavOp is our library for working with ops: you can create ops, interact with them, and create scalars and tensors. RavOp is a crucial building block of the framework and can be used to express various algorithms, formulas, and mathematical calculations.
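To illustrate the idea of an op as a composable unit of computation, here is a minimal sketch in plain Python. All names are hypothetical and illustrative only; this is not RavOp's actual API.

```python
# Hypothetical sketch of an "op" abstraction: a deferred computation node
# whose inputs are either raw values or other ops. Illustrative only.
class Op:
    def __init__(self, operator, *inputs):
        self.operator = operator   # e.g. "add", "mul"
        self.inputs = inputs       # child ops or plain numbers
        self.result = None         # filled in once computed

    def compute(self):
        # Resolve child ops first, then apply this op's operator.
        values = [i.compute() if isinstance(i, Op) else i for i in self.inputs]
        if self.operator == "add":
            self.result = sum(values)
        elif self.operator == "mul":
            acc = 1
            for v in values:
                acc *= v
            self.result = acc
        else:
            raise ValueError(f"unknown operator: {self.operator}")
        return self.result

# Ops compose into larger expressions; leaves here are plain scalars.
expr = Op("add", Op("mul", 2, 3), 4)   # 2*3 + 4
print(expr.compute())  # 10
```

In the real framework, such nodes would be distributed to browser nodes for evaluation rather than computed locally.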
RavSock (Socket Server): RavSock is the second most crucial building block of the framework. It sits between the developers (who create ops and write algorithms) and the contributors who provide idle computing power. It facilitates the efficient distribution of ops and the efficient merging of results.
RavML: RavML is the machine learning library built on RavOp. It contains implementations of various machine learning algorithms, including ordinary least squares, linear regression, logistic regression, KNN, K-means, mini-batch K-means, a decision tree classifier, and a naive Bayes classifier. These algorithms can be used out of the box. We are constantly working on new algorithms and looking for enthusiasts to contribute.
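As one example of the algorithms listed, a minimal KNN classifier can be sketched in plain Python with NumPy. This illustrates the algorithm itself, not RavML's API, and the data is a toy placeholder.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify point x by majority vote among its k nearest training points."""
    # Euclidean distance from x to every training sample
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest samples
    nearest = np.argsort(dists)[:k]
    # Majority vote over their labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy dataset: two clusters with labels 0 and 1
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.2, 0.1])))  # 0
print(knn_predict(X, y, np.array([5.1, 5.0])))  # 1
```

In RDF, the distance computations for each query would be candidates for distribution across contributor nodes.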
RavViz: RavViz is the visualization library. While training models, it is crucial to understand how your model is doing. In RavViz, you can see the progress of models, ops and their values, and graphs and their ops.
RavJS: RavJS is the JavaScript library that calculates the various ops on the browser node. We currently support 100+ ops and are constantly adding new ones. For now, we use TensorFlow.js to calculate ops, and we will soon begin work on our own library. Most of the ops are based on TensorFlow.js because of its support for thousands of operations.
Our heads-down building was rewarded with an Ocean Protocol partnership!
In Ocean, a Compute-to-Data infrastructure is set up as a Kubernetes (K8s) cluster, e.g. on AWS or Azure, in the background. This cluster is responsible for running the actual compute jobs, out of sight of marketplace clients and end users. While this is an incredible feat in itself, users and Data Providers may want more options among the Compute Providers they choose to approve. The spirit of decentralization may be a philosophical choice for some, but it is a strict requirement for others. Raven Protocol provides the decentralized option when choosing a Compute Provider.
On top of that, Raven provides an additional layer of privacy for Ocean Compute-to-Data. We mentioned that we will be publishing Federated Learning algorithms. A neural network is randomly initialized. Weight updates are computed next to the data itself in a data silo and then sent to the neural network. This is repeated in data silo #1, data silo #2, data silo #3, and so on. A neural network gets trained across many data silos without data leaving the premises of each respective silo. The Raven Distribution Framework enables this in Compute-to-Data.
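The federated loop described above can be sketched as a minimal federated-averaging round in NumPy. The model, silo data, and learning rate here are toy placeholders, not RDF code; only the pattern (updates computed next to the data, then averaged centrally) mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# A model is randomly initialized (here: weights of one linear layer).
weights = rng.normal(size=3)

# Three data silos; in a real deployment, the raw (X, y) never leaves each silo.
silos = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

def local_update(w, X, y, lr=0.01):
    """Compute a weight update next to the data: one gradient step on MSE."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

for _round in range(5):
    # Each silo computes an update locally and sends back only new weights.
    local_weights = [local_update(weights, X, y) for X, y in silos]
    # The coordinator averages the silo weights (federated averaging).
    weights = np.mean(local_weights, axis=0)

print(weights.shape)  # (3,)
```

The key property, as in the text, is that only weight updates cross silo boundaries; the training data stays on-premises.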
Towards Growth
We know that at the heart of protocol growth is the community. We need the best AI researchers, the best ML engineers, and the brightest minds to support this piece of decentralized infrastructure. This enables us to research, develop, and publish more AI/ML algorithms. This pushes us closer and closer to our goal of being the decentralized option for AI/ML training.
Q2 2021: Heads Down Building was originally published in RavenProtocol on Medium.