Our pricing plans

Available as BETA

Solo

Designed for researchers, individuals and small startups

Price per user: Free

Resource pricing:

  • Free credits: 150€
  • CPUs: 0.08 €/h
  • Training: 1.30 €/h
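As a back-of-the-envelope sketch of how far the free credits stretch (assuming flat hourly billing, which the plan does not spell out), the rates above translate into compute hours like this:

```python
# Rough budget estimate for the Solo plan's free credits.
# Rates taken from the plan above; hourly billing granularity is an assumption.
FREE_CREDITS_EUR = 150.00
CPU_RATE_EUR_PER_H = 0.08
TRAINING_RATE_EUR_PER_H = 1.30

def hours_covered(credits_eur: float, rate_eur_per_h: float) -> float:
    """How many hours of usage the given credits buy at a flat hourly rate."""
    return credits_eur / rate_eur_per_h

print(f"CPU-only: {hours_covered(FREE_CREDITS_EUR, CPU_RATE_EUR_PER_H):.0f} h")   # CPU-only: 1875 h
print(f"Training: {hours_covered(FREE_CREDITS_EUR, TRAINING_RATE_EUR_PER_H):.0f} h")  # Training: 115 h
```

In practice a pipeline mixes CPU and training time, so the real number lands somewhere between these two bounds.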


Included:

  • Fully managed
  • 5 Workspaces
  • 5 Datasources
  • Built-in caching for similar pipelines
  • Unlimited pipeline runs
  • GPUs: a single K80
 

Teams

Ideal for teams and companies of all sizes, made for collaboration.

Price per user: Contact us

Resource pricing:

  • CPUs: 0.08 €/h
  • Training: 1.30 €/h


All of Solo plan, plus:

  • Fully managed
  • Multiple organisations and users
  • Unlimited workspaces
  • Tailor-made support plans and SLAs
  • Custom integrations
  • GPUs: up to 8 NVIDIA K80, P4, P100, T4, V100
 

Cloud-Native

Run natively on your own resources, in your cloud or datacenter.

Pricing:

 Contact us


Included:

  • Runs on any Kubernetes cluster
  • Full GPU/CUDA support
  • Self-distributing with Beam-compatible infrastructure
  • Unlimited organisations and users
  • Unlimited workspaces and datasources
  • Full control over resources and environments
  • Full caching support
  • Implementation support and training available

Pricing plans compared

Key features                         Solo       Teams       Cloud-Native
Managed / Self-hosted                Managed    Managed     Self-hosted
Caching of all pipelines
Automated scaling & distribution
Built-in automated Evaluation
Custom Models
99.9% SLA *
Access via CLI
Access via API
Access via UI                        WIP        WIP         WIP
Users                                1          Unlimited   Unlimited
Workspaces                           5          Unlimited   Unlimited
Datasources                          5          Unlimited   Unlimited

Join our beta.

We are launching the Core Engine with a public beta.
Sign up and receive 150€ for 30 days!
Free credits: 150€
Available features: Solo plan
Duration: 30 days
No credit card or payments required!
No hidden catches, just the best ML / DL tool you've ever used.

Frequently asked questions:

Q: Why did you build the Core Engine?

We built it to scratch our own itch: it solves the problems we kept running into while developing deep learning systems for production.

Q: When is the Core Engine going to launch?

We plan to launch the CLI in April 2020. The Web UI will be ready soon after that.

Q: Where is the actual processing done?

Currently, we run the workloads on the Google Cloud Platform. We are working on integrating it with our users' own cloud platforms.

Q: What happens to my data?

We keep it safe and isolated. The storage and workloads run only in datacenters compliant with ISO 27001, ISO 27017, ISO 27018, and the GDPR.

Q: What is the difference between the Asset Optimization Platform and the Core Engine?

Both are developed by the same folks. The Asset Optimization Platform (AOP) is a product aimed at enabling industry use-cases like predictive maintenance and damage detection. The Core Engine used to be the internal platform that powered the AOP, until we decided to release it as its own product.

Q: What is the pricing?

Stay tuned: we will announce our full pricing plans as soon as we launch!