Summary of ‘System Design: Online Auction & Bidding Service (with FAANG Senior Engineer)’

This summary of the video was created by an AI. It might contain some inaccuracies.

00:00:00 – 00:28:40

The video discusses the design of an online auction system resembling eBay, covering bid handling, data storage, and user interactions. Key points include using websockets, Redis Pub/Sub, event sourcing, and concrete data schemas. Considerations include managing auctions with fixed deadlines, bid validation, and user payments. The architecture incorporates PostgreSQL, a Redis cache, and potential alternatives such as Cassandra and DynamoDB. Different strategies are proposed for auctions with fixed versus dynamic deadlines, emphasizing efficient handling of bid updates and persistent connections. The speaker highlights the importance of scalability, data consistency, and effective communication channels, aiming to optimize user experience and system reliability.

00:00:00

In this segment of the video, the presenter introduces the online auction problem, similar to eBay. Two variations are mentioned: one with a fixed auction deadline and another where the auction deadline moves dynamically. The numbers used for the problem are 10 million items auctioned daily and 1 billion bids placed daily, which works out to roughly 100 items added per second and about 10,000 bids per second. The diagram includes users submitting bids, competing users viewing those bids, and a backend service that receives bids.
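A rough sketch of that back-of-envelope math (the daily figures come from the video; the rounding is illustrative):

```python
# Back-of-envelope throughput estimate from the stated daily volumes.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

items_per_day = 10_000_000      # 10 million items auctioned daily
bids_per_day = 1_000_000_000    # 1 billion bids placed daily

items_per_second = items_per_day / SECONDS_PER_DAY  # ~116, rounded to ~100 in the video
bids_per_second = bids_per_day / SECONDS_PER_DAY    # ~11,574, rounded to ~10,000

print(f"items/s ≈ {items_per_second:.0f}, bids/s ≈ {bids_per_second:.0f}")
```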

00:03:00

In this segment of the video, the speaker sets up a bid receiving service using websockets. They mention using Redis Pub/Sub as a broker to publish incoming bids to the other bidders. The bids are also recorded in a database using an event sourcing approach. The speaker weighs two potential approaches for handling this work (writing to the data store directly versus handing it off to a task runner) and notes the advantage of storing data directly in the data store: it avoids relying on task runners during outages.
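A minimal sketch of what that bid receiving path could look like, assuming the redis-py client and a hypothetical append_to_event_store helper (the names and the amount field are illustrative, not from the video):

```python
import json
import time
import uuid

import redis  # redis-py client, assumed available

r = redis.Redis(host="localhost", port=6379)

def receive_bid(auction_id: str, user_id: str, amount: int) -> dict:
    """Record a bid as an immutable event and broadcast it to other bidders."""
    event = {
        "bid_id": str(uuid.uuid4()),   # UUID kept for uniqueness, as discussed later
        "auction_id": auction_id,
        "user_id": user_id,
        "amount": amount,              # assumed field; the summary does not list it explicitly
        "timestamp": time.time(),
    }

    # Event sourcing: append the bid event to durable storage first,
    # so the record does not depend on a task runner being healthy.
    append_to_event_store(event)       # hypothetical helper writing to the bid data store

    # Then publish on a per-auction channel so other connected bidders see it.
    r.publish(f"auction:{auction_id}:bids", json.dumps(event))
    return event

def append_to_event_store(event: dict) -> None:
    # Placeholder: in the video this is the bid data store (e.g. PostgreSQL).
    pass
```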

00:06:00

In this segment of the video, the speaker discusses whether to decouple the data store from the broker, ultimately deciding to stick with broadcasting to other users plus event sourcing. They sketch a bid data store schema with fields such as auction ID, user ID, bid ID, and timestamp. The speaker considers a UUID for bid IDs and keeps it for uniqueness. They also note the need for a separate data store to track the winning bid on an auction and to close the auction out.
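Expressed as a small sketch, the bid record might look like the following (field names beyond those mentioned in the video, such as the amount, are assumptions):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from uuid import UUID, uuid4

@dataclass(frozen=True)
class BidEvent:
    """One immutable record in the bid data store, per the fields mentioned in the video."""
    bid_id: UUID          # UUID kept for uniqueness
    auction_id: int
    user_id: int
    created_at: datetime  # the timestamp field
    amount_cents: int     # assumed field: a bid needs an amount, though the summary does not list it

# Example event
bid = BidEvent(bid_id=uuid4(), auction_id=42, user_id=7,
               created_at=datetime.now(timezone.utc), amount_cents=12_50)
```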

00:09:00

In this segment of the video, the speaker discusses the fixed-deadline variation of the auction data store schema. The schema includes fields such as deadline, status (open or closed), winning user ID, top bid, and sold price, and covers cases where users fail to pay. A cron job running every 15 minutes is suggested to close auctions whose deadlines have passed. The importance of validating that bids arrive before the deadline of an open auction is highlighted. Users can browse auctions and list items in this proposed system.
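A sketch of that 15-minute closing job, assuming a hypothetical auctions table and query helpers (none of these names come from the video):

```python
from datetime import datetime, timezone

def close_expired_auctions(db) -> None:
    """Intended to run from a cron job every 15 minutes."""
    now = datetime.now(timezone.utc)
    # Find open auctions whose deadline has passed (hypothetical query helper).
    for auction in db.find_auctions(status="open", deadline_before=now):
        top_bid = db.find_top_bid(auction.id)  # highest valid bid, if any
        if top_bid is None:
            db.update_auction(auction.id, status="closed", winning_user_id=None)
        else:
            db.update_auction(
                auction.id,
                status="closed",
                winning_user_id=top_bid.user_id,
                sold_price=top_bid.amount,
            )
        # Payment follow-up (and redoing the auction if the winner fails to pay)
        # is handled separately, as discussed later in the video.
```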

00:12:00

In this part of the video, the key pieces discussed include a browse flow for users to view auctions, an auction listing service, and an auction posting service that writes new auctions to the data store with an open status, the given deadline, and a null winning user ID. It is suggested to store the sold price as a string value. Scaling PostgreSQL with a leader-follower pattern is mentioned: read replicas handle reads, a single leader database handles writes, and Pub/Sub is used for interacting with the data.
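A minimal sketch of routing reads to replicas and writes to the leader, assuming psycopg2 and illustrative DSNs and table names (all of which are mine, not the video's):

```python
import random

import psycopg2  # Postgres driver, assumed available

LEADER_DSN = "postgresql://app@pg-leader:5432/auctions"
REPLICA_DSNS = [
    "postgresql://app@pg-replica-1:5432/auctions",
    "postgresql://app@pg-replica-2:5432/auctions",
]

def write_conn():
    # All writes (new auctions, bid events, status changes) go to the leader.
    return psycopg2.connect(LEADER_DSN)

def read_conn():
    # Reads (browsing, listing auctions) are spread across the read replicas.
    return psycopg2.connect(random.choice(REPLICA_DSNS))

def post_auction(deadline):
    # Hypothetical auctions table matching the fields described above.
    with write_conn() as conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO auctions (status, deadline, winning_user_id) "
            "VALUES ('open', %s, NULL) RETURNING id",
            (deadline,),
        )
        return cur.fetchone()[0]
```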

00:15:00

In this part of the video, the speaker discusses using a Redis cache for people joining an auction, suggesting that the cache hold the current winning bid for each auction. They also talk about using a task runner to manage the Redis cache updates and consider Cassandra or DynamoDB as fallback options. Ultimately, they lean towards the Redis cache as the best approach for this part of the data store.
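A sketch of such a winning-bid cache, assuming redis-py and a hypothetical db.find_top_bid helper (key names, TTL, and the amount field are assumptions):

```python
import json

import redis  # redis-py, assumed available

r = redis.Redis()

CACHE_TTL_SECONDS = 60  # assumption: short TTL so stale entries age out

def cache_key(auction_id: str) -> str:
    return f"auction:{auction_id}:winning_bid"

def get_winning_bid(auction_id: str, db) -> dict | None:
    """Read-through cache of the current winning bid for an auction."""
    cached = r.get(cache_key(auction_id))
    if cached is not None:
        return json.loads(cached)
    # Cache miss: fall back to the bid data store (hypothetical helper).
    winning = db.find_top_bid(auction_id)
    if winning is not None:
        r.set(cache_key(auction_id), json.dumps(winning), ex=CACHE_TTL_SECONDS)
    return winning

def on_new_bid(event: dict) -> None:
    """Called by the task runner when a new bid event arrives; refresh the cache.
    Note: this check-then-set is not atomic; a Lua script or WATCH/MULTI would be
    needed for strict correctness under concurrent bids."""
    current = r.get(cache_key(event["auction_id"]))
    if current is None or event["amount"] > json.loads(current)["amount"]:
        r.set(cache_key(event["auction_id"]), json.dumps(event), ex=CACHE_TTL_SECONDS)
```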

00:18:00

In this segment of the video, a discussion is held about using websockets for a dynamic deadline approach in an auction setting. The speaker explains the reasoning behind using stateful websockets due to the need for persistent connections, especially during the final minutes of an auction when bidding is intense. The idea is to have users actively watching the auction site, with bids coming in rapidly towards the end. The decision is made to switch to a polling or full page refresh approach for a longer, more static viewing period to avoid unnecessary websocket usage.
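For those final high-traffic minutes, a minimal sketch of a stateful, per-auction websocket fan-out, assuming the third-party websockets package (the endpoint, message shapes, and helper names are illustrative):

```python
import asyncio
import json
from collections import defaultdict

import websockets  # third-party asyncio websocket library, assumed available

# One set of live connections per auction; this is the "stateful" part.
watchers: dict[str, set] = defaultdict(set)

async def handler(ws):
    # Assumption: the first message tells us which auction the user is watching.
    auction_id = json.loads(await ws.recv())["auction_id"]
    watchers[auction_id].add(ws)
    try:
        async for raw in ws:
            bid = json.loads(raw)
            # Validate/persist the bid elsewhere, then fan it out to everyone watching.
            websockets.broadcast(watchers[auction_id], json.dumps(bid))
    finally:
        watchers[auction_id].discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```

Outside that intense final window, the same information would be served by polling or full page reloads, as the speaker decides above.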

00:21:00

In this part of the video, the speaker discusses the dynamic-deadline bidding process: there is no fixed deadline; instead the auction opens with a 24-hour window, and bids arriving in the last hour push the closing time out in 15-minute increments. The system calculates the next closing time from incoming bids, validates bids, and integrates business logic for bid acceptance. The video then moves to implementing the bid receiving and viewing services without websockets, combining the two for efficiency. Redis is mentioned as part of the bid receiving service, with a focus on evaluating whether caching is needed in this dynamic process.
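A sketch of the closing-time calculation, assuming the rule is "the close stays at least 15 minutes after the latest bid" (the exact extension rule is my reading of the summary, not spelled out in the video):

```python
from datetime import datetime, timedelta

INITIAL_WINDOW = timedelta(hours=24)  # auction opens with a 24-hour window
EXTENSION = timedelta(minutes=15)     # assumed: close stays at least 15 minutes after the latest bid

def opening_close_time(opened_at: datetime) -> datetime:
    return opened_at + INITIAL_WINDOW

def next_closing_time(current_close: datetime, bid_time: datetime) -> datetime:
    """Recompute the closing time when a validated bid arrives."""
    if bid_time >= current_close:
        raise ValueError("auction already closed; reject the bid")
    # A bid near the deadline pushes the close out so competitors can respond;
    # earlier bids leave the deadline untouched.
    return max(current_close, bid_time + EXTENSION)
```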

00:24:00

In this part of the video, the speaker discusses broader considerations for the auction system architecture. They consider leaning more on DynamoDB instead of Cassandra, suggest writing the winning price directly to the data store, and mention the need to integrate with a payment service such as PayPal. The speaker also touches on the case where the winner fails to pay, suggesting the auction could be rerun if needed. Lastly, they mention using DynamoDB for strongly consistent reads and handling partitioning in services such as bid viewing.
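If DynamoDB were used, a strongly consistent read of an auction's current state might look like this (boto3; the table name and key are assumptions):

```python
import boto3  # AWS SDK for Python, assumed available and configured

dynamodb = boto3.resource("dynamodb")
auctions = dynamodb.Table("auctions")  # hypothetical table name

def get_auction_state(auction_id: str) -> dict | None:
    """Read the auction item with strong consistency so a just-written winning
    price or closing time is not missed by a stale replica read."""
    response = auctions.get_item(
        Key={"auction_id": auction_id},  # assumed partition key
        ConsistentRead=True,             # strongly consistent read
    )
    return response.get("Item")
```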

00:27:00

In this part of the video, the speaker discusses updating the closing time when placing a new bid and mentions handling HTTP with full page reloads instead of using websockets. The speaker feels confident with this approach and suggests wrapping up the session, noting they will start a discussion thread in Discord for any further questions.
