The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for KubeCon + CloudNativeCon North America 2024 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.
Please note: This schedule is automatically displayed in Mountain Standard Time (UTC-7). The schedule is subject to change, and session seating is available on a first-come, first-served basis.
A standout feature of Kubernetes is its mechanism for pulling container images from registries, matching containers to the appropriate pods, and placing pods on nodes that satisfy their resource requirements, such as CPU, GPU, RAM, network, and storage, while honoring the affinity and anti-affinity rules defined between pods and nodes. Despite these capabilities, optimally arranging many workloads, each comprising several pods, within a cluster remains an open problem. In our research, we show that a set of YAML files describing a workload deployment request can be systematically transformed into a Binary Integer Linear Programming (BILP) model, whose objective function can be tailored to the chosen optimization goal. Under broad conditions, an optimal solution can be derived in polynomial time.
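To make the idea concrete, here is a hypothetical toy sketch of pod-to-node placement as a BILP: binary variables `x[p][n]` indicate that pod `p` lands on node `n`, with constraints for one node per pod, per-node CPU/RAM capacity, and a simple anti-affinity pair. All pod names, capacities, and the enumeration-based "solver" are illustrative assumptions, not the authors' actual formulation.

```python
# Toy BILP-style pod placement (illustrative only; a real system would use an
# ILP solver rather than exhaustive enumeration).
from itertools import product

pods  = {"web": (2, 4), "db": (4, 8), "cache": (1, 2)}   # (cpu, ram) requests
nodes = {"n1": (4, 8), "n2": (6, 16)}                    # (cpu, ram) capacity
anti_affinity = [("web", "db")]                          # must not share a node

def feasible(assign):
    """assign maps pod -> node; check node capacity and anti-affinity rules."""
    for n, (cap_cpu, cap_ram) in nodes.items():
        used = [pods[p] for p, m in assign.items() if m == n]
        if sum(c for c, _ in used) > cap_cpu or sum(r for _, r in used) > cap_ram:
            return False
    return all(assign[a] != assign[b] for a, b in anti_affinity)

def solve():
    """Objective: minimise the number of distinct nodes used."""
    best = None
    for choice in product(nodes, repeat=len(pods)):
        assign = dict(zip(pods, choice))
        if feasible(assign) and (
            best is None or len(set(assign.values())) < len(set(best.values()))
        ):
            best = assign
    return best
```

With the anti-affinity rule forcing `web` and `db` apart, any optimal placement here uses both nodes; the YAML-to-BILP transformation in the talk generalizes this pattern to real workload specs.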
At Intuit, control plane components such as IstioD serve hundreds of applications per cluster. IstioD is responsible for configuring the data plane as well as injecting the istio-proxy container. As application traffic increases, the number of application pods grows, which requires the control plane to scale up. For critical control planes such as IstioD, it is wise to scale proactively rather than as a reaction to increased load. Traditional approaches to scaling in advance, such as tuning HPA thresholds, can pre-scale even when not required due to outliers, which is wasteful. At Intuit, we employed a novel deep learning forecasting model, N-HiTS, to solve this problem. This session will discuss and demo how we train N-HiTS, our most important model features, and how we deploy our service on a per-cluster basis to provide contextualized predictions for cost-effective and on-time autoscaling.
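A minimal sketch of the surrounding control loop might look like the following, where a naive linear-trend projection stands in for the trained N-HiTS model; the helper names, the pods-per-replica ratio, and the headroom factor are all illustrative assumptions, not Intuit's actual configuration.

```python
# Hedged sketch of forecast-driven (proactive) scaling, as opposed to reactive
# HPA-style scaling. The naive trend forecast below is a stand-in for the
# N-HiTS model described in the session; all constants are assumptions.

PODS_PER_REPLICA = 500   # assumed data-plane pods one control-plane replica serves
HEADROOM = 1.2           # provision 20% above the forecast peak

def forecast_pod_count(history, horizon):
    """Project the recent linear trend `horizon` steps ahead.
    (A real system would call the trained N-HiTS model here.)"""
    if len(history) < 2:
        return [float(history[-1])] * horizon
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return [max(0.0, history[-1] + slope * (i + 1)) for i in range(horizon)]

def target_replicas(history, horizon=6, min_replicas=2):
    """Size replicas from the *predicted* peak so capacity is ready in advance."""
    peak = max(forecast_pod_count(history, horizon))
    needed = -(-int(peak * HEADROOM) // PODS_PER_REPLICA)  # ceiling division
    return max(min_replicas, needed)
```

The point of the design is that replicas are derived from the predicted peak over the horizon, so the control plane is sized before load arrives rather than after a threshold is breached.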
Anubhav is a seasoned software engineer in the field of Cloud Native technologies and has been working with Kubernetes and service mesh since 2016. He developed Redis Cluster as a Service and a templating engine while working at Yahoo! He is the lead maintainer of Admiral, which is an...
As a software engineer on the Service Mesh team at Intuit, Ryan works to support Intuit's extensive Istio deployment through contributions to projects like Admiral. He has previously worked to reduce costs of cloud development environments for the Intuit API Gateway team. His main...