or $2,500/year
We built it to scratch our own itch: over the last three years of deploying multiple ML models in production, our team struggled to find a simple yet production-ready solution for large-scale ML pipelines. So we built one, and we are proud to share it with all of you!
If you're okay with a managed solution, you can start using the Core Engine right away, and the processing will run on our servers (hosted on Google Cloud Platform). If not, please contact us about deploying the Core Engine on your premises.
There is a lot of value in separating the configuration of your pipelines from their actual implementation. Read our blog post for a detailed case for YAML configurations in ML.
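To make the separation concrete, here is a minimal sketch of the idea. The config keys (`split_ratio`, `epochs`, `learning_rate`) and the tiny flat-mapping parser are illustrative assumptions, not the Core Engine's actual schema; a real pipeline would load the file with a YAML library such as PyYAML's `yaml.safe_load`.

```python
# Illustrative sketch: the pipeline *configuration* lives in YAML,
# while the *implementation* only reads what the config declares.
# Keys below are hypothetical, not the Core Engine's real schema.

CONFIG_YAML = """\
split_ratio: 0.8
epochs: 10
learning_rate: 0.001
"""

def load_flat_yaml(text):
    """Minimal parser for a flat 'key: value' mapping.

    Stands in for a real YAML library (e.g. yaml.safe_load) so the
    sketch has no external dependency.
    """
    config = {}
    for line in text.splitlines():
        key, _, raw = line.partition(":")
        raw = raw.strip()
        try:
            value = int(raw)
        except ValueError:
            try:
                value = float(raw)
            except ValueError:
                value = raw
        config[key.strip()] = value
    return config

def run_pipeline(config):
    # The implementation hard-codes nothing: switching experiments
    # means editing the YAML, not the Python.
    return (f"training for {config['epochs']} epochs "
            f"at lr={config['learning_rate']}, "
            f"split={config['split_ratio']}")

config = load_flat_yaml(CONFIG_YAML)
print(run_pipeline(config))
```

Because the code only consumes whatever the config declares, the same pipeline definition can be re-run under many configurations, and each run's YAML doubles as a record of the experiment.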
Yes. We have defined interfaces where you can plug in your custom preprocessing/model code. Check out our docs for more information.