Our pricing plans

Explorer (most popular)

Designed for researchers, individuals and startups.

Price per user:

$250/month

or $2,500/year


Included:

  • Fully managed orchestration and scaling
  • Built-in caching for intermediate pipeline steps
  • Unlimited workspaces, datasources and pipelines
  • Automated evaluation for each pipeline run (Tensorboard + TFMA)
  • Built-in AND custom data sources
  • Built-in splitting mechanism
  • Built-in AND custom preprocessing functions and models
  • Experiment tracking
  • Training on K80 GPUs
  • Full access to trained model artifacts (TFServing)

Unlimited

Ideal for teams and companies of all sizes, made for collaboration.

Price per user:

 Coming soon

Contact us for early access


Everything in the Explorer plan, plus:

  • Built-in AND custom splitting mechanisms
  • Full access to trained model artifacts AND managed serving of models
  • Training on up to 8 NVIDIA GPUs (K80, P4, P100, T4, V100)
  • Trigger-based retraining (Time, Data, Webhooks)
  • Continuous training and evaluation
  • Tailor-made support plans and SLAs

Enterprise

Run natively on your own resources, in your cloud or datacenter.

Pricing:

 Contact us

Included:

  • Implementation support and training
  • Full caching support
  • Fully distributed with Beam-compatible infrastructure
  • Unlimited workspaces, datasources and pipelines
  • Unlimited organisations
  • Built-in AND custom data sources
  • Built-in AND custom splitting mechanisms
  • Built-in AND custom preprocessing functions and models
  • Full GPU/CUDA support
  • Full access and control over resources and environments

Pricing plans compared

Key features:

  • Orchestrated scaling
  • Caching of all pipelines
  • Unlimited Users
  • Unlimited Workspaces
  • Unlimited Datasources
  • Built-in automated Evaluation
  • Built-in AND custom data sources
  • Built-in AND custom preprocessing functions and models
  • Experiment tracking
  • Access via CLI
  • Access via API
  • Continuous training
  • Built-in AND custom triggers
  • 99.9% SLA *
  • On-Premise

                        Explorer   Unlimited   Enterprise
  Teams                 1          1           Unlimited
  Access to artifacts   Models     Models      Models and intermediate pipeline artifacts

Frequently asked questions:

Q: Why did you build the Core Engine?

We built it to scratch our own itch: over the last three years of deploying multiple ML models in production, our team struggled to find a simple yet production-ready solution for developing large-scale ML pipelines, so we built one, and we are now proud to share it with all of you!

Q: Where is the actual processing done?

If you're okay with a managed solution, you can start using the Core Engine right away and the processing will run on our servers (hosted on Google Cloud Platform). If not, please contact us to deploy the Core Engine on your own premises.

Q: The YAML configuration file is too complicated!

Actually, there is a lot of value in separating the configuration of your pipelines from their actual implementation. Our blog post makes a detailed case for YAML configurations in ML.
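
As a rough illustration, a pipeline configuration might look like the sketch below (the keys are illustrative placeholders, not the Core Engine's actual schema): the YAML declares what the pipeline does, while the referenced preprocessing and model code lives in your own repository.

    # Hypothetical pipeline config; field names are placeholders,
    # not the Core Engine's actual schema.
    datasource:
      name: customer_churn      # a registered data source
      type: bigquery

    split:                      # built-in splitting mechanism
      train: 0.7
      eval: 0.3

    preprocessing:
      module: my_project.transforms   # custom code, plugged in by reference
      function: preprocess

    training:
      module: my_project.models
      function: build_model
      resources:
        gpu: nvidia-k80         # GPU types as listed in the plans above

    evaluation:
      metrics: [accuracy, auc]  # surfaced via Tensorboard / TFMA

Swapping the datasource or the GPU type is then a one-line config change, with no edits to the training code itself.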

Q: Can I run custom code on Core Engine?

Yes. We have defined interfaces where you can plug in your custom preprocessing/model code. Check out our docs for more information.
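
For example, custom preprocessing and model code might plug in roughly like this (a minimal sketch; the function names and signatures are placeholders, the real interface is specified in our docs):

    import tensorflow as tf

    def preprocess(features):
        # Custom preprocessing step, referenced from the pipeline config.
        # Placeholder transform: log-scale a skewed numeric feature.
        features['amount'] = tf.math.log1p(features['amount'])
        return features

    def build_model(input_dim):
        # Custom model definition, referenced from the pipeline config.
        return tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation='relu', input_shape=(input_dim,)),
            tf.keras.layers.Dense(1, activation='sigmoid'),
        ])

The pipeline calls your functions at the preprocessing and training steps, so the surrounding orchestration, caching and evaluation stay fully managed.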