Our pricing plans


Explorer

Designed for researchers, individuals and early-stage startups

Price:

Free

Includes 100 million processed datapoints per month


Included:

  • 1 workspace
  • 5 datasources
  • Unlimited data commits
  • Unlimited users
  • Fully managed orchestration and scaling
  • Built-in caching for intermediate pipeline steps
  • Automated evaluation for each pipeline run (TensorBoard + TFMA)
  • Built-in AND custom data sources
  • Built-in AND custom splitting mechanisms
  • Built-in AND custom preprocessing functions and models
  • Experiment tracking
  • Training on CPUs & GPUs
  • Full access to trained model artifacts

Most popular

Unlimited

Ideal for teams and companies of all sizes, made for collaboration.

Price:

100 €/month

+ 0.25 € per million processed datapoints


Everything in the Explorer plan, plus:

  • Unlimited workspaces
  • Unlimited datasources
  • Unlimited data commits
  • Unlimited users
  • Run all workloads in YOUR cloud environment
  • Full access to trained model artifacts AND managed serving of models
  • Trigger-based retraining (Time, Data, Webhooks)
  • Continuous training and evaluation
  • Tailor-made support plans and SLAs

Enterprise

Run natively on your own resources, in your cloud or datacenter.

Price:

Contact us

Included:

  • Implementation support and training
  • Full caching support
  • Fully distributed with Beam-compatible infrastructure
  • Unlimited workspaces, datasources and pipelines
  • Unlimited organisations
  • Built-in AND custom data sources
  • Built-in AND custom splitting mechanisms
  • Built-in AND custom preprocessing functions and models
  • Full GPU/CUDA support
  • Full access and control over resources and environments

Pricing plans compared

Key features (compared across the Explorer, Unlimited and Enterprise plans):

  • Orchestrated scaling
  • Caching of all pipelines
  • Unlimited Users
  • Unlimited Workspaces
  • Unlimited Datasources
  • Built-in automated Evaluation
  • Built-in AND custom data sources
  • Built-in AND custom preprocessing functions and models
  • Experiment tracking
  • Access via CLI
  • Access via API
  • Continuous training
  • Built-in AND custom triggers
  • 99.9% SLA *
  • Custom Cloud backends
  • On-Premise
  • Teams: 1 on Explorer; Unlimited on the Unlimited and Enterprise plans
  • Access to artifacts: Models on Explorer; Models and intermediate pipeline artifacts on the Unlimited and Enterprise plans

Frequently asked questions:

Q: Why did you build the Core Engine?

We built it to scratch our own itch. Over the last three years of deploying multiple ML models in production, our team struggled to find a simple yet production-ready way to develop large-scale ML pipelines, so we built one ourselves, and we are now proud to share it with all of you!

Q: Where is the actual processing done?

If you're okay with a managed solution, you can start using the Core Engine right away and the processing will run on our servers (hosted on Google Cloud Platform). If not, please contact us to deploy the Core Engine on your premises.

Q: The YAML configuration file is too complicated!

Actually, there is a lot of value in separating the configuration of your pipelines from their implementation. Read our blog post for a detailed case for YAML configurations in ML.
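
To make that separation concrete, here is a minimal sketch; the YAML keys and values below are invented for illustration and are not the Core Engine's actual configuration schema:

import yaml  # PyYAML

# A hypothetical pipeline configuration. The keys are illustrative only and
# do not reflect the Core Engine's real YAML schema.
CONFIG = """
pipeline:
  name: customer-churn
  datasource: churn_table_v3
  split:
    method: random
    ratios: {train: 0.7, eval: 0.3}
  preprocessing:
    - standard_scale: [age, balance]
    - one_hot: [country]
  trainer:
    model: feedforward
    epochs: 20
"""

config = yaml.safe_load(CONFIG)

# The pipeline implementation only reads the parsed dictionary, so swapping
# the datasource, split ratios or model is a config change, not a code change.
print(config["pipeline"]["trainer"]["model"])  # -> feedforward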

Q: Can I run custom code on the Core Engine?

Yes. We have defined interfaces where you can plug in your custom preprocessing/model code. Check out our docs for more information.
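
As a rough sketch of the plug-in idea (the class and method names here are hypothetical and not the Core Engine's documented interfaces; see the docs for those), custom preprocessing code typically implements a small contract that the pipeline then calls on every record:

from abc import ABC, abstractmethod
from typing import Any, Dict


class CustomPreprocessor(ABC):
    # Hypothetical extension point, shown only to illustrate the plug-in idea;
    # the real interfaces are described in the Core Engine docs.
    @abstractmethod
    def transform(self, row: Dict[str, Any]) -> Dict[str, Any]:
        """Map one raw record to model-ready features."""


class ClampAndScaleAge(CustomPreprocessor):
    # Example user code: clamp an "age" field to [18, 99], then scale to [0, 1].
    def transform(self, row: Dict[str, Any]) -> Dict[str, Any]:
        row["age"] = min(max(row["age"], 18), 99) / 99.0
        return row


# The pipeline would invoke the registered class on each record it processes.
print(ClampAndScaleAge().transform({"age": 120}))  # -> {'age': 1.0}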