The summary of "20: Distributed Job Scheduler | Systems Design Interview Questions With Ex-Google SWE"

This summary of the video was created by an AI. It might contain some inaccuracies.

00:00:00 – 00:30:29

The video discusses the design and implementation of a distributed job scheduler, focusing on scheduling tasks efficiently with strong consistency in a distributed system. Key points include using Cron jobs and directed acyclic graphs (DAGs), managing dependencies, ensuring atomicity when updating multiple rows, and optimizing task scheduling performance. The speaker emphasizes preventing job duplication, efficient job retry strategies, using a document-based database for flexibility, and load balancing across task executors. Strategies for fast reads, managing locks, and ensuring only one instance of a job runs at a time are also highlighted. The goal is to maintain data integrity and optimize the job scheduling process.

00:00:00

In this part of the video, the speaker discusses building a distributed job scheduler. They explain the concept of a directed acyclic graph (DAG) and the requirement to run user-uploaded binaries on request. Key problem requirements include running jobs only once, allowing users to monitor job status, and handling both individual scheduled tasks (Cron jobs) and DAGs. The high-level overview includes a recurring task scheduler and a scheduler table that holds the tasks to be executed. The speaker delves into task scheduling processes, such as Cron task scheduling, which involves managing Cron job settings and scheduled tasks in separate tables. Change data capture is hinted at as a more efficient way to keep those tables in sync than a traditional two-phase commit.
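To make the two-table split concrete, here is a minimal sketch using SQLite; the table names, columns, and the simplified interval-in-seconds (instead of a full cron expression) are my own assumptions, not the video's exact schema. The cron settings table holds the recurring definition, and the scheduled-task table holds the next concrete run.

```python
# Hypothetical sketch: a cron-settings table plus a scheduled-task table.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE cron_jobs (
    job_id TEXT PRIMARY KEY,
    s3_url TEXT,            -- location of the uploaded binary
    interval_sec INTEGER    -- simplified stand-in for a full cron expression
)""")
db.execute("""CREATE TABLE scheduled_tasks (
    job_id TEXT,
    run_at INTEGER,         -- epoch seconds when this run should happen
    PRIMARY KEY (job_id, run_at)
)""")

def register_cron(job_id: str, s3_url: str, interval_sec: int) -> None:
    """Insert the recurring setting and schedule the first concrete run."""
    now = int(time.time())
    with db:  # single-connection transaction, standing in for the real setup
        db.execute("INSERT INTO cron_jobs VALUES (?, ?, ?)",
                   (job_id, s3_url, interval_sec))
        db.execute("INSERT INTO scheduled_tasks VALUES (?, ?)",
                   (job_id, now + interval_sec))

register_cron("daily-report", "s3://bucket/report-binary", 86400)
print(db.execute("SELECT * FROM scheduled_tasks").fetchall())
```

In the video's framing the two tables would not sit behind one local transaction; that is where the change-data-capture hint comes in, as a lighter-weight alternative to a two-phase commit for propagating the new scheduled-task row.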

00:05:00

In this segment of the video, the speaker discusses the process of scheduling tasks using Cron jobs and DAG jobs. For Cron jobs, the executor pulls from the scheduled task table, runs the job, and schedules the next iteration. For DAG jobs, a dedicated database table is proposed to manage dependencies. The speaker explains a way of structuring the database table for DAG jobs, including handling dependencies and scheduling based on epochs. The approach involves marking job completions, ensuring dependencies are met before rescheduling, and using topological sorting. The emphasis is on efficiently managing dependencies and scheduling tasks in a structured manner.
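As a hypothetical illustration of the dependency-and-epoch bookkeeping (the DagNode structure and its field names are mine, not the video's table layout), a node becomes runnable only once every parent has reported completion for the current epoch:

```python
# Minimal sketch of per-node dependency tracking keyed by epoch.
from dataclasses import dataclass, field

@dataclass
class DagNode:
    job_id: str
    parents: list                                            # job IDs this node depends on
    parent_done_epoch: dict = field(default_factory=dict)    # parent -> last completed epoch

def mark_parent_complete(node: DagNode, parent_id: str, epoch: int) -> bool:
    """Record a parent completion; return True if the node is now ready to run."""
    node.parent_done_epoch[parent_id] = epoch
    return all(node.parent_done_epoch.get(p) == epoch for p in node.parents)

# Example: C depends on A and B; it becomes runnable only after both finish epoch 7.
c = DagNode("C", parents=["A", "B"])
print(mark_parent_complete(c, "A", epoch=7))  # False, B has not finished epoch 7 yet
print(mark_parent_complete(c, "B", epoch=7))  # True, schedule C for epoch 7
```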

00:10:00

In this segment, the speaker discusses the process of scheduling tasks in a job scheduler system. They highlight the need for atomicity when updating multiple rows, a preference for single-node transactions over distributed transactions, and the importance of sharding data so that rows can be updated efficiently. The speaker also mentions using a document-based database for flexibility in representing dependencies. Additionally, they delve into the structure of the scheduler table, emphasizing a primary key on job ID, an S3 URL pointing to the job binary, and a run timestamp governing when the job executes. The concept of updating the run timestamp to prevent unnecessary job retries is explained as a way to optimize job scheduling performance. Finally, the speaker touches on strategies for ensuring fast reads from the scheduler table, such as keeping the data in memory and indexing it efficiently.
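A hedged sketch of the run-timestamp idea described above, again with assumed names and SQLite standing in for the real scheduler table: an executor claims a due row by pushing its run timestamp into the future, so other pollers skip it unless the executor dies and the timestamp comes due again.

```python
# Hypothetical claim-by-bumping-the-run-timestamp sketch.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE scheduler (
    job_id TEXT PRIMARY KEY,
    s3_url TEXT,
    run_at INTEGER      -- next time this row should be picked up
)""")
db.execute("INSERT INTO scheduler VALUES ('job-1', 's3://bucket/job-1', 0)")

RETRY_DELAY_SEC = 600  # assumed value; far enough ahead to cover a normal run

def claim_due_task():
    now = int(time.time())
    row = db.execute(
        "SELECT job_id, s3_url FROM scheduler WHERE run_at <= ? LIMIT 1", (now,)
    ).fetchone()
    if row is None:
        return None
    job_id, s3_url = row
    # Bumping run_at means a crashed executor's task becomes visible again later
    # instead of being retried immediately by every other poller. A real database
    # would make this select-and-bump atomic (e.g. a conditional UPDATE).
    db.execute("UPDATE scheduler SET run_at = ? WHERE job_id = ?",
               (now + RETRY_DELAY_SEC, job_id))
    db.commit()
    return job_id, s3_url

print(claim_due_task())  # ('job-1', 's3://bucket/job-1')
print(claim_due_task())  # None until the bumped timestamp comes due again
```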

00:15:00

In this segment of the video, the speaker discusses the challenges of task scheduling in a distributed system. They mention the need to grab locks when reading from and writing to the scheduler table, which can lead to contention. Two potential solutions are suggested: partitioning the scheduler table, or reading from a read-only replica to avoid grabbing locks. They also discuss load balancing for executors and propose using a message broker like Kafka to route messages to the appropriate executors. The concept of using multiple levels of queues to schedule tasks by priority is also introduced, similar to how operating systems schedule processes.
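Here is a small in-memory sketch of the multi-level queue idea; the level timeouts and the demote-on-failure policy are assumptions (the video discusses per-level timeouts and retry levels in the next segment), and this stands in for, rather than implements, the Kafka-based routing.

```python
# Hypothetical multi-level queues: higher levels drain first, failures demote.
from collections import deque

LEVEL_TIMEOUTS_SEC = [30, 120, 600]           # assumed per-level timeouts
queues = [deque() for _ in LEVEL_TIMEOUTS_SEC]

def enqueue(job_id: str, level: int = 0) -> None:
    queues[level].append(job_id)

def next_job():
    """Pop from the highest-priority non-empty level: (job_id, timeout, level)."""
    for level, q in enumerate(queues):
        if q:
            return q.popleft(), LEVEL_TIMEOUTS_SEC[level], level
    return None

def on_failure(job_id: str, level: int) -> None:
    """Demote a failed job to the next level so it retries with a longer timeout."""
    enqueue(job_id, min(level + 1, len(queues) - 1))

enqueue("job-42")
job, timeout, level = next_job()
on_failure(job, level)   # job-42 now waits in level 1 with a 120s timeout
```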

00:20:00

In this segment of the video, the speaker talks about job retry strategies and job completion handling. They discuss retrying jobs at different levels with varying timeouts, prioritizing jobs from higher levels, and tracking how many times a job has been retried to determine its level. The importance of marking jobs as completed or failed to avoid infinite retries is emphasized, along with adding a status column to the scheduling table. Indexing on both the status and the run timestamp of the job is suggested to manage job retries efficiently. The speaker also mentions the need for users to query job status and recommends using read replicas to reduce load on the main scheduling table. The discussion touches on partitioning strategies for distributing load across scheduling tables and on distributed locks, such as those provided by ZooKeeper, to prevent multiple instances of the same job from running simultaneously and corrupting state. The goal is to ensure jobs run only once to maintain data integrity.
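To illustrate the status column and the composite index on status and run timestamp, here is a minimal sketch; the column names, the attempts cap, and the SQLite backing are assumptions. The retry poller scans only rows that are not yet completed and whose run timestamp has passed, which the index keeps cheap.

```python
# Hypothetical retry polling over a (status, run_at) index.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE scheduler (
    job_id TEXT PRIMARY KEY,
    status TEXT,        -- 'pending', 'running', 'completed', or 'failed'
    run_at INTEGER,
    attempts INTEGER    -- used to decide when to give up and mark 'failed'
)""")
db.execute("CREATE INDEX idx_status_run_at ON scheduler (status, run_at)")

MAX_ATTEMPTS = 3  # assumed cap to avoid infinite retries

def poll_retryable(now: int):
    """Return jobs that are still unfinished, due, and under the retry cap."""
    return db.execute(
        """SELECT job_id FROM scheduler
           WHERE status IN ('pending', 'running') AND run_at <= ? AND attempts < ?""",
        (now, MAX_ATTEMPTS),
    ).fetchall()

db.execute("INSERT INTO scheduler VALUES ('job-7', 'running', 0, 1)")
print(poll_retryable(int(time.time())))   # [('job-7',)] -> eligible for retry
```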

00:25:00

In this segment of the video, the speaker discusses the importance of strong consistency in ensuring that only one instance of a job runs at a time, using locks in distributed coordination systems like ZooKeeper. They mention the need for a time-to-live (TTL) on locks to prevent issues with long-running tasks and potential failures. Additionally, the video covers the challenge of preventing jobs from running more than once and suggests options like running jobs on the same node, checking the scheduling table for job completion, and making jobs idempotent. The segment also covers the speaker's diagram outlining the job scheduling flow using tables, two-phase commits, change data capture, and efficient handling of job statuses.
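As a toy, single-process stand-in for a lock with a TTL (the class and its API are purely illustrative and are not ZooKeeper's interface), a holder that never releases, for example because it crashed mid-job, loses the lock once the TTL expires:

```python
# Hypothetical TTL lock: expiry lets the system recover from a dead lock holder.
import time

class TTLLock:
    def __init__(self, ttl_sec: float):
        self.ttl_sec = ttl_sec
        self.owner = None
        self.expires_at = 0.0

    def acquire(self, owner: str) -> bool:
        """Grant the lock if it is free or its previous holder's TTL has lapsed."""
        now = time.monotonic()
        if self.owner is None or now >= self.expires_at:
            self.owner, self.expires_at = owner, now + self.ttl_sec
            return True
        return False

    def release(self, owner: str) -> None:
        if self.owner == owner:
            self.owner = None

lock = TTLLock(ttl_sec=30.0)
print(lock.acquire("executor-A"))  # True: A holds the lock for up to 30s
print(lock.acquire("executor-B"))  # False: B must wait for release or expiry
```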

00:30:00

In this segment of the video, the speaker discusses updating epoch numbers in the DAG table. The updates then flow through change data capture into Kafka, on to the DAG task queuing node, and back to the scheduling node. The speaker notes that this flow was explained in the earlier slides and signs off, wishing viewers a good week and hinting at more content in the next video.
