
Yatai



Why use Yatai on Plural?

Plural helps you deploy and manage the lifecycle of open-source applications on Kubernetes. Our platform combines the scalability and observability benefits of managed SaaS with the data security, governance, and compliance benefits of self-hosting Yatai.

If you need more than just Yatai, look for other cloud-native and open-source tools in our marketplace of curated applications to leapfrog complex deployments and get started quickly.

Yatai’s website · GitHub · License · Installing Yatai docs
Deploying Yatai is a matter of executing these 3 commands:
plural bundle install yatai yatai-aws
plural build
plural deploy --commit "deploying yatai"
Read the install documentation

🦄️ Yatai: Model Deployment at Scale on Kubernetes


Yatai (屋台, food cart) lets you deploy, operate and scale Machine Learning services on Kubernetes.

It supports deploying any ML model via BentoML, the unified model serving framework.

[Screenshot: Yatai overview page]

👉 Join our Slack community today!

✨ Looking for the fastest way to give Yatai a try? Check out BentoML Cloud to get started today.


Why Yatai?

🍱 Made for BentoML, deploy at scale

  • Scale BentoML to its full potential on a distributed system, optimized for cost saving and performance.
  • Manage deployment lifecycle to deploy, update, or rollback via API or Web UI.
  • Centralized registry providing the foundation for CI/CD via artifact management APIs, labeling, and WebHooks for custom integration.

🚅 Cloud native & DevOps friendly

  • Kubernetes-native workflow via BentoDeployment CRD (Custom Resource Definition), which can easily fit into an existing GitOps workflow.
  • Native integration with Grafana stack for observability.
  • Support for traffic control with Istio.
  • Compatible with all major cloud platforms (AWS, Azure, and GCP).

Getting Started

  • 📖 Documentation - Overview of the Yatai docs and related resources
  • ⚙️ Installation - Hands-on instruction on how to install Yatai for production use
  • 👉 Join Community Slack - Get help from our community and maintainers

Quick Tour

Let's try out Yatai locally in a minikube cluster!

⚙️ Prerequisites:

  • Install latest minikube: https://minikube.sigs.k8s.io/docs/start/
  • Install latest Helm: https://helm.sh/docs/intro/install/
  • Start a minikube Kubernetes cluster: minikube start --cpus 4 --memory 4096. On macOS, use the hyperkit driver to avoid Docker Desktop's network limitation (see the consolidated sketch after this list)
  • Check that minikube cluster status is "running": minikube status
  • Make sure your kubectl is configured with minikube context: kubectl config current-context
  • Enable ingress controller: minikube addons enable ingress
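
For convenience, here is the prerequisite setup as one shell sequence. This is a minimal sketch: the hyperkit driver line applies to macOS only, and the resource sizes mirror the values above.

# Start a local Kubernetes cluster with enough resources for Yatai
minikube start --cpus 4 --memory 4096
# On macOS only: minikube start --cpus 4 --memory 4096 --driver hyperkit

# Confirm the cluster is running and kubectl targets it
minikube status
kubectl config current-context   # should print "minikube"

# Enable the ingress controller addon
minikube addons enable ingress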

🚧 Install Yatai

Install Yatai with the following script:

bash <(curl -s "https://raw.githubusercontent.com/bentoml/yatai/main/scripts/quick-install-yatai.sh")

This script will install Yatai along with its dependencies (PostgreSQL and MinIO) on your minikube cluster.

Note that this installation script is made for development and testing use only. For production deployment, check out the Installation Guide.
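
To confirm the installation completed, you can check that the pods in the yatai-system namespace are ready (a minimal sketch; the namespace matches the port-forward command below):

kubectl -n yatai-system get pods
kubectl -n yatai-system wait --for=condition=ready pod --all --timeout=300s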

To access Yatai web UI, run the following command and keep the terminal open:

kubectl --namespace yatai-system port-forward svc/yatai 8080:80

In a separate terminal, run:

YATAI_INITIALIZATION_TOKEN=$(kubectl get secret yatai-env --namespace yatai-system -o jsonpath="{.data.YATAI_INITIALIZATION_TOKEN}" | base64 --decode)
echo "Open in browser: http://127.0.0.1:8080/setup?token=$YATAI_INITIALIZATION_TOKEN"

Open the URL printed above from your browser to finish admin account setup.

🍱 Push Bento to Yatai

First, get an API token and log in to the BentoML CLI:

  • Keep the kubectl port-forward command in the step above running

  • Go to Yatai's API tokens page: http://127.0.0.1:8080/api_tokens

  • Create a new API token from the UI, making sure to assign "API" access under "Scopes"

  • Copy the login command upon token creation and run it as a shell command, e.g.:

    bentoml yatai login --api-token {YOUR_TOKEN} --endpoint http://127.0.0.1:8080

If you don't already have a Bento built, run the following commands from the BentoML Quickstart Project to build a sample Bento:

git clone https://github.com/bentoml/bentoml.git && cd ./bentoml/examples/quickstart
pip install -r ./requirements.txt
python train.py
bentoml build
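
You can verify the build succeeded by listing your local Bentos:

bentoml list   # should show an iris_classifier entry with a newly generated tag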

Push your newly built Bento to Yatai:

bentoml push iris_classifier:latest

Now you can view and manage models and bentos from the web UI:

[Screenshots: Yatai bento repositories and model detail pages]

🔧 Install yatai-image-builder component

Yatai's image builder feature comes as a separate component; you can install it with the following script:

bash <(curl -s "https://raw.githubusercontent.com/bentoml/yatai-image-builder/main/scripts/quick-install-yatai-image-builder.sh")

This will install the BentoRequest CRD (Custom Resource Definition) and the Bento CRD in your cluster. As with the Yatai install script above, this script is intended for development and testing purposes only.
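
As a quick sanity check, you can list the newly registered CRDs (a sketch; the grep pattern relies on the resources.yatai.ai API group used in the manifest later in this guide):

kubectl get crds | grep yatai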

🔧 Install yatai-deployment component

Yatai's deployment feature comes as a separate component; you can install it with the following script:

bash <(curl -s "https://raw.githubusercontent.com/bentoml/yatai-deployment/main/scripts/quick-install-yatai-deployment.sh")

This will install the BentoDeployment CRD (Custom Resource Definition) in your cluster and enable the deployment UI in Yatai. Again, this script is intended for development and testing purposes only.
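
Similarly, you can confirm the BentoDeployment CRD is registered (the plural resource name below is an assumption derived from the BentoDeployment kind):

kubectl get crds | grep bentodeployments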

🚢 Deploy Bento!

Once the yatai-deployment component is installed, Bentos pushed to Yatai can be deployed to your Kubernetes cluster and exposed via a Service endpoint.

A Bento Deployment can be created either via the web UI or via a Kubernetes CRD config:

Option 1. Simple Deployment via Web UI

  • Go to the deployments page: http://127.0.0.1:8080/deployments
  • Click the Create button and follow the instructions in the UI
[Screenshot: Yatai deployment creation form]

Option 2. Deploy with kubectl & CRD

Define your Bento deployment in a my_deployment.yaml file:

apiVersion: resources.yatai.ai/v1alpha1
kind: BentoRequest
metadata:
    name: iris-classifier
    namespace: yatai
spec:
    bentoTag: iris_classifier:3oevmqfvnkvwvuqj
---
apiVersion: serving.yatai.ai/v2alpha1
kind: BentoDeployment
metadata:
    name: my-bento-deployment
    namespace: yatai
spec:
    bento: iris-classifier
    ingress:
        enabled: true
    resources:
        limits:
            cpu: "500m"
            memory: "512m"
        requests:
            cpu: "250m"
            memory: "128m"
    autoscaling:
        maxReplicas: 10
        minReplicas: 2
    runners:
        - name: iris_clf
          resources:
              limits:
                  cpu: "1000m"
                  memory: "1Gi"
              requests:
                  cpu: "500m"
                  memory: "512m"
          autoscaling:
              maxReplicas: 4
              minReplicas: 1

Apply the deployment to your minikube cluster:

kubectl apply -f my_deployment.yaml

Now you can see the deployment process from the Yatai Web UI and find the endpoint URL for accessing the deployed Bento.
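
You can also follow the rollout from the command line (a sketch; the bentodeployment resource name assumes the CRD installed earlier):

kubectl -n yatai get bentodeployment my-bento-deployment
kubectl -n yatai get pods -w   # watch the API server and runner pods start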

[Screenshot: Yatai deployment details page]

Community

Contributing

There are many ways to contribute to the project:

  • If you have any feedback on the project, share it with the community in GitHub Discussions under the BentoML repo.
  • Report issues you're facing and "Thumbs up" on issues and feature requests that are relevant to you.
  • Investigate bugs and review other developers' pull requests.
  • Contribute code or documentation to the project by submitting a GitHub pull request. See the development guide.

Usage Reporting

Yatai collects usage data that helps our team improve the product. Only Yatai's internal API calls are reported. We strip out as much potentially sensitive information as possible, and we never collect user code, model data, model names, or stack traces. Here's the code for usage tracking. You can opt out of usage tracking by setting the Helm chart option doNotTrack to true:

doNotTrack: true
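
For example, the option can be set when upgrading the Helm release (the release and chart names here are assumptions; substitute the ones used during installation):

helm upgrade yatai yatai/yatai -n yatai-system --set doNotTrack=true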

Alternatively, set the YATAI_DONOT_TRACK environment variable in the Yatai deployment spec:

spec:
  template:
    spec:
      containers:
        - env:
            - name: YATAI_DONOT_TRACK
              value: "true"

License

Elastic License 2.0 (ELv2)

How Plural works

We make it easy to securely deploy and manage open-source applications in your cloud.

Select from 90+ open-source applications

Get any stack you want running in minutes, and never think about upgrades again.

Securely deployed on your cloud with your git

You control everything. No need to share your cloud account, keys, or data.

Designed to be fully customizable

Built on Kubernetes and using standard infrastructure as code with Terraform and Helm.

Maintain & Scale with Plural Console

Interactive runbooks, dashboards, and Kubernetes API visualizers provide an easy-to-use toolset for managing application operations.

Learn more
[Screenshot: app installation in the Plural app]

Build your custom stack with Plural

Build your custom stack with 90+ apps from the Plural Marketplace.

Explore the Marketplace

Used by fast-moving teams at

  • CoachHub
  • Digitas
  • Fnatic
  • FSN Capital
  • Justos
  • Mott Mac

What companies are saying about us

We no longer needed a dedicated DevOps team; instead, we actively participated in the industrialization and deployment of our applications through Plural. Additionally, it allowed us to quickly gain proficiency in Terraform and Helm.

Walid El Bouchikhi
Data Engineer at Beamy

I have neither the patience nor the talent for DevOps/SysAdmin work, and yet I've deployed four enterprise-caliber open-source apps on Kubernetes... since 9am today. Bonkers.

Sawyer Waugh
Head of Engineering at Justifi

This is awesome. You saved me hours of further DevOps work for our v1 release. Just to say, I really love Plural.

Ismael Goulani
CTO & Data Engineer at Modeo

Wow! First of all I want to say thank you for creating Plural! It solves a lot of problems coming from a non-DevOps background. You guys are amazing!

Joey Taleño
Head of Data at Poplar Homes

We have been using Plural for complex Kubernetes deployments of Kubeflow and are excited with the possibilities it provides in making our workflows simpler and more efficient.

Jürgen Stary
Engineering Manager @ Alexander Thamm

Plural has been awesome, it’s super fast and intuitive to get going and there is zero-to-no overhead of the app management.

Richard Freling
CTO and Co-Founder at Commandbar

Case Study: How Fnatic Deploys Their Data Stack with Plural

Fnatic is a leading global esports performance brand headquartered in London, focused on leveling up gamers. At the core of Fnatic’s success is its best-in-class data team. The Fnatic data team relies on third-party applications to serve different business functions, with every member of the organization using data daily. While having access to an abundance of data is great, it adds complexity when it comes to answering critical business questions and delivering in-game analytics for gaming members.

To answer these questions, the data team began constructing a data stack. Since the team at Fnatic are big fans of open source, they elected to build their stack with popular open-source technologies.

FAQ

How is Plural managed, and who controls the deployments?

Plural is open-source and self-hosted. You retain full control over your deployments in your cloud. We perform automated testing and upgrades and provide out-of-the-box Day 2 operational workflows. Monitor, manage, and scale your configuration with ease to meet the changing demands of your business. Read more.

Which cloud providers and Kubernetes distributions are supported?

We support deploying on all major cloud providers, including AWS, Azure, and GCP. We also support all on-prem Kubernetes clusters, including OpenShift, Tanzu, Rancher, and others.

Does Plural have access to my cloud environment?

No, Plural does not have access to any cloud environments when deployed through the CLI. We generate deployment manifests in the Plural Git repository and then use your configured cloud provider's CLI on your behalf. We cannot perform anything outside of deploying and managing the manifests that are created in your Plural Git repository. However, Plural does have access to your cloud credentials when deployed through the Cloud Shell. Read more.