Interview Transcript
Doctor Squab: Yeah. Okay, I usually start with introductions, and please feel free to share as much or as little as you would like. I left Google recently. I was a staff engineer there. I worked on a couple of teams at Google, most recently in Cloud, but before that I was in Ads and Research. My core expertise is in machine learning backends, but I have worked on a few other things as well. Usually I ask you to introduce yourself, and then maybe, if you want, share what kind of companies you are targeting and where you are in the process.
The Legendary Waffle: Sure. So I'm a software engineer with about three years of experience. I've been working mostly in the fintech space in New York City. I work mostly in Python and Django right now. But I'm mostly targeting Google at the moment. I have my on-site coming up on Monday, actually, Monday morning. So I'm just trying to get as much practice in before as I can. And yeah, I'm targeting the L3 level. And yeah, I'm really excited to go through the process for the first time.
Doctor Squab: Very nice, very nice. Cool. Okay. Since you're targeting Google, it would be nice to very quickly go over the Google rubric and how coding interviews work and what they are looking for. I will quickly paste it and then we will go on our way. These are the things a Google interviewer is looking for; I don't know if somebody has shared these with you before or not, so let me share them. Google judges you on these six dimensions. Code comprehension is about how you explain your code: can you reason about what type of code is better than another, explain it, go into details. Programming is about, whatever language you chose, let's say Python, how good are you at Python? Are you using the language syntax well? Are you using the data structure implementations well? Can you differentiate between the nuances in Python data structures? Algorithms: can you state the correct time complexity and space complexity, can you come up with the correct data structure, can you reason about various algorithms, pick the best, and discuss trade-offs among them? Debugging and resolution is about how you debug your code once you've implemented it: are you somebody who just puts in a bunch of printf statements, or somebody who debugs things more systematically? Test engineering is whether you write test cases, and if asked what kind of test cases you would write, can you come up with all the scenarios you would cover? Code health, the last one, is usually asked of senior candidates; it goes into whether you are creating tech debt and whether you can recognize tech debt in your own code. So I would say the last one is not that important, but you should pay attention to at least the other five.
The feedback is split into four categories. So those are the things they are looking for, and the way it's captured is that there's a form where they ask: how did this candidate do on algorithms and data structures? How did they do on coding? How did they do on values? And how did they do on communication?
The Legendary Waffle: Okay.
Doctor Squab: So, yeah, and you are given a 1 to 4 rating, where 1 is poor and 4 is outstanding.
The Legendary Waffle: All right, sounds good. Thank you.
Doctor Squab: Yeah. So, yeah, this is the general structure, and I will try to give you feedback in the same way as well: algorithms, coding, values, communication. Right. Okay, good. Now let me pick a question for you. One sec. I wanted to pick something which I heard Google asked somebody else recently, so I was going to ask something else, but then.
The Legendary Waffle: Okay, here we go.
Doctor Squab: And then if you have seen this question, by the way, please feel free to Yeah.
The Legendary Waffle: Okay.
Doctor Squab: Please feel free to let me know.
The Legendary Waffle: Sounds good.
Doctor Squab: Okay. One last thing: these interviews are 45 minutes at Google, so you will be spending around 40 minutes on coding. You have a clock on your bottom right. We are starting now, so we'll just add 40 minutes to that.
The Legendary Waffle: All right.
Doctor Squab: Cool.
The Legendary Waffle: Here we go. Okay. Let me read over it one second. Imagine you're part of a team at Google working on optimizing delivery routes for new drone delivery service. They're capable of navigating through the city represented as a grid. Each cell in the grid contains an obstacle or is free for navigation, and it can move up, down, left, right, but can't pass obstacles. Given a grid representing the city and the starting position, determine the number of unique paths the drone can take to deliver a package to its destination. The grid is guaranteed to have exactly one starting position and one destination. OK, and we need to write a function that takes in the two-dimensional grid and returns a number of unique paths from the starting position to the destination. OK, I've seen a similar question, but not one where the start and destination positions are different. I've seen one where we start in the top left and go to the bottom right, But it's not like this problem where we start in any position and go to any other position. Does that sound right?
Doctor Squab: Yeah, correct.
The Legendary Waffle: Okay. Yeah, so what I'm thinking immediately in terms of how to explore this is: since we want the number of unique paths, we're going to need to traverse the entire grid. So immediately I'm thinking we could do a DFS here starting from the start position, branching out, maybe with memoization, basically a top-down approach where we start from the start position, and when we reach the end position, we increment the number of unique paths, and we build it up programmatically that way. I'm also thinking maybe something dynamic-programming related could work, but I'm a bit more comfortable with the DFS approach, at least initially. So does that sound like an okay place for us to start here?
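For reference, a minimal sketch of the DFS the candidate describes might look like the following. The cell markers 'S', 'D', '#' and the function name are illustrative assumptions, not from the problem statement; and since a path's legality depends on which cells it has already visited, plain memoization is subtle here, so this sketch counts paths by backtracking with a visited set.

```python
def count_unique_paths(grid):
    """Count simple paths from 'S' to 'D' moving up/down/left/right,
    never crossing '#' obstacles or revisiting a cell."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == 'S')

    def dfs(r, c, visited):
        if grid[r][c] == 'D':          # reached the destination: one more path
            return 1
        visited.add((r, c))
        total = 0
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in visited):
                total += dfs(nr, nc, visited)
        visited.remove((r, c))         # backtrack so other branches may use this cell
        return total

    return dfs(*start, set())
```

On a 2x2 grid with no obstacles there are two paths (right-then-down and down-then-right), which is a quick sanity check for the backtracking.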
Doctor Squab: Yeah, yeah, that sounds perfect. If you think that you are reasonably confident in implementing this, we can swap the question right now.
The Legendary Waffle: Yeah, I feel pretty good about it.
Doctor Squab: Okay. It's just that I want to, you know, not waste your time. If you know that you can do it, then there's no point. Okay, let's swap the question. I feel like you are very, very confident about this one. Okay, here's an alternative question.
The Legendary Waffle: All right. When you expose a web service API endpoint, you need to implement a rate limiter to prevent abuse of the service, implement a rate limiter class, with an isAllowed method. Every request comes with a unique ID, and we need to deny a request if the client has made more than n successful requests in the past t milliseconds. Okay. And in terms of t, can we assume that the input is in milliseconds? We don't need to do any conversion or anything like that?
Doctor Squab: No, so yeah, look at the example. For example, if n equals 2, you can make two requests in t = 100 milliseconds. There is a pattern of requests from line 25 to line 30, and if you want, you can write the answers next to it, whether you would allow each one or not, just so that we know we are on the same page.
The Legendary Waffle: Sure. Let's go through this initial example just to make sure we're on the same page. This is the first time we're seeing client one and the first timestamp is 1, so we would allow this one. This is the second time we're seeing them and the time is less than 100 from the start, so we would allow that. Here, the next allowed time after 1, I'm assuming, would be 101 based on the logic. So in that case, since we've already seen two messages at this point, this one would be false. And then here, since the initial time plus 100 is 101, once we reach 150, we would allow this one. And same thing here: since 100 after 50 is 151, we would allow this. And I believe we would also.
Doctor Squab: Allow.
The Legendary Waffle: This one, because 151 would take us to where there's two open slots, I think. So I think we would allow this one.
Doctor Squab: No, we won't allow the last one, because we will be looking at the window from 100 to 200, that range, not 150 to 200. So we look from the current time at whether we have allowed anything in the last 100 milliseconds.
The Legendary Waffle: Got it. Okay.
Doctor Squab: Yeah? I think this is the correct answer.
The Legendary Waffle: Sorry, Robert. Okay. Just to clarify, is the input going to be increasing? Can we assume it's going to be in order in terms of timestamp?
Doctor Squab: Correct. It's time, so it increases, never goes back. Some candidates like implementing this using the system clock, but if you want to take the timestamp as an input to your method, that's also perfectly fine with me. So either you can take it as an input or you can just use the system clock.
The Legendary Waffle: All right, I'll take it as an input. What I'm imagining is we can maintain like a hash map of the clients and their two most recent times. And then once we see a new time, we can basically see if the new time invalidates any of the recent times. And if it does, then we can return true and update the two most recent times. Does that seem like an approach we can start with?
Doctor Squab: Yeah. Yep, yep, yep, yep.
The Legendary Waffle: All right.
Doctor Squab: Before we get into coding, I just wanted to understand: what's the time and space complexity you are thinking?
The Legendary Waffle: Sure. So because we're going to need to store a hashmap, that will be O(n) space complexity, because we'll need one entry in the hashmap for each client. But then for each client, there are only two max timestamps that we're keeping track of. So that's constant there.
Doctor Squab: Just to be sure: let's say that the maximum number of clients is C, and the maximum number of requests allowed is N. Okay, max clients and max requests; maybe it helps to answer this way. And it's not going to be two all the time, right? N is a variable.
The Legendary Waffle: I see what you're saying, thank you for clarifying. So in that case, we're going to have C clients, and for each of those, we're going to have N requests that we can store in the dictionary.
Doctor Squab: Yeah.
The Legendary Waffle: And then in terms of the time complexity, because we don't need to do any searching, we can just do a direct lookup for the client. I believe this should be O of one time for each of these operations we're going to have.
Doctor Squab: Yep. Perfect. Yeah, sounds good.
The Legendary Waffle: All right, so let's get started here. Let's say we take the C and N here. Well, C is not passed in directly, right? C is more just the upper bound.
Doctor Squab: Correct, correct. Yeah, C is not passed in, correct. But client ID is passed in, but that is part of the request.
The Legendary Waffle: Yeah, sure, sure. Okay, so max_requests = N, and we can store t, the delay between requests, as self.delay. And then we can define the isAllowed method, which will take, let's see, the client ID and the timestamp. Does that sound right?
Doctor Squab: Mm-hmm.
The Legendary Waffle: Okay, so the most basic thing we can do here is check if the client ID is not present in the dictionary, which I forgot to add. So we'll also need this dictionary, which we can create with collections.defaultdict, if that library is okay with you, and here we can use a list, at least for now. So we can say: if the client ID is not in self.map, then we know that this is going to be allowed, I suppose so long as n is greater than zero. Is that something we can assume here as well?
Doctor Squab: Yeah. One more time?
The Legendary Waffle: Do we need to handle bad input variables? Like if the max request is zero?
Doctor Squab: Yeah.
The Legendary Waffle: Okay. Then we can append the current timestamp to the client ID list and return true early because we know that no matter what, we're always going to want to include this so long as max request is at least one or greater. Then otherwise, what we can do is we can say that the numRequests.
Doctor Squab: Is.
The Legendary Waffle: Equal to the length of self.map[client_id]. And before this, what we want to do is clean up any timestamps that are out of range. What I'm thinking is, if we're going to be popping from the left side, which is what I'm imagining, because if we have timestamps 1, 2, 3, then we want to pop this way, maybe we can use a deque here, a queue. I think you can pass that in like that. So then basically, while self.map[client_id][0] is less than timestamp minus self.delay, we do self.map[client_id].popleft(). That should clear out any out-of-range timestamps before we start doing this evaluation logic. Then we get the number of requests. If the number of requests is less than self.max_requests, then we can do basically the same thing as we did up here, where we take the client ID, append the current timestamp, and return true; otherwise we return false. So essentially, I suppose our initial time complexity is almost correct, but with the addition of this cleanup, in the worst case this could have us iterating through the entire list or queue. So in that case, the worst case would be O(N), where N is the size of this list for the client ID. But I believe the space complexity should be the same in terms of the max clients and the max requests. So, to walk through the initial example again, I'll just go through this really quick. If we initialize the rate limiter and we call isAllowed with client 1, timestamp 1, then we'll see that it's not in the map, so we'll append it. Then we go with 50. We see that it is in the map. We go to this while loop, but the while loop gets skipped, because self.map[client_id][0] is going to be 1, but the current timestamp 50 minus 100 is going to be less than that.
So we will skip those. Then we'll see that numRequests is going to be 1, which is less than 2, the max requests in this example, so we'll append. Then we keep going. We get to 100 for the same ID. We evaluate this again, the while loop gets skipped, we see that numRequests is at 2, so we skip the append and return false, because 100 is not allowed at this point. Then at 150 we go through here again. This time we finally hit the while loop, so we pop: if the timestamps are 1 and 50, then at this point we will pop 1 from the left side. So then we'll see again we have 1 where we allow 2, so we'll append this one, which is 150. Then we keep going: at 199, basically the same thing, pop 50, and now we have 150 and 199. Then at 200, we see that with 200 minus 100, neither of these gets popped, so we return false. Does that sound okay to you?
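Putting the pieces of the walkthrough together, the implementation being described could look roughly like this. The class and method names are paraphrased from the discussion, and the strict less-than in the cleanup matches the "less than timestamp minus the delay" condition above:

```python
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, n, t):
        self.max_requests = n            # allowed requests per rolling window
        self.delay = t                   # window length in milliseconds
        self.map = defaultdict(deque)    # client_id -> timestamps of allowed requests

    def is_allowed(self, client_id, timestamp):
        q = self.map[client_id]
        # prune timestamps that have fallen out of the rolling window
        while q and q[0] < timestamp - self.delay:
            q.popleft()
        if len(q) < self.max_requests:   # also denies everything when n == 0
            q.append(timestamp)
            return True
        return False
```

With n = 2 and t = 100, the example sequence for one client (timestamps 1, 50, 100, 150, 199, 200) yields allow, allow, deny, allow, allow, deny, matching the walkthrough; the n = 0 edge case the candidate raised falls out of the length check for free.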
Doctor Squab: Yeah, yeah, it sounds good. Yep. Any improvement you would like to make to this? To your implementation?
The Legendary Waffle: I'd say at the moment, I would be satisfied with this. I'd say that we have to store each client and we have to store at least the timestamp that's in range. So I think this is good for now.
Doctor Squab: Yeah. I see. Okay. So initially our time complexity was O(1) and space was O(C * N), right? And now, as you said, the time complexity is O(N) and space is O(C * N). What you were thinking and what you implemented kind of differ; is it possible to fix that?
The Legendary Waffle: Sure, I guess one way to fix that is if we don't need to clean up the timestamps at all; the consequence of that would be bigger storage. But essentially, we could binary search for the index of the timestamp at the current time minus 100, then count the length from there to the end of the queue, and go from there. That would bring us down to O(log N). Does that sound doable?
Doctor Squab: Yeah, that's an improvement. So then the time complexity becomes O(log N), and the space, C times N, still remains the same, right? Yes.
The Legendary Waffle: Right.
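The binary-search idea could be sketched with the standard `bisect` module: never delete anything, keep every timestamp, and count how many fall inside the window. This is an assumption about how the candidate would realize it, and the storage now grows with every request ever seen, which is the trade-off just mentioned:

```python
import bisect

class RateLimiterBisect:
    def __init__(self, n, t):
        self.max_requests = n
        self.delay = t
        self.history = {}   # client_id -> sorted list of every allowed timestamp

    def is_allowed(self, client_id, timestamp):
        ts = self.history.setdefault(client_id, [])
        # index of the first timestamp still inside the rolling window
        start = bisect.bisect_left(ts, timestamp - self.delay)
        if len(ts) - start < self.max_requests:
            ts.append(timestamp)  # timestamps arrive in order, so the list stays sorted
            return True
        return False
```

Because timestamps only increase, a plain append keeps the list sorted, so no tree structure is actually needed for the search itself.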
Doctor Squab: Can we do better?
The Legendary Waffle: Could we do better? Yes. I guess another improvement maybe we could make, because the inputs are sorted, is a binary search tree, just to optimize this validation. I'm not super familiar with it; I know there's the sortedcontainers library in Python, but if I were to use that, I'd have to look up the implementation. But I believe that would allow some sort of improvement if we're expected to do this lookup very often.
Doctor Squab: Okay, yeah. So we have discussed O(N) and O(log N) now. What's next after this?
The Legendary Waffle: Yeah, if we wanted to get to constant time, that's where we started.
Doctor Squab: Yeah, exactly. So why is your solution different from what you were thinking before you implemented it? Or do you think that you just came up with the wrong complexity in the beginning?
The Legendary Waffle: Well, so to answer the first question, it's different because initially I hadn't taken into account finding which timestamps we want to consider in the current range.
Doctor Squab: Correct.
The Legendary Waffle: And so once we started implementation, I realized that we needed to do that. And the way I came up with was to prune the old timestamps that are outside of the current range, and that's what changed it to O(N). So yeah, I would say the initial time complexity was wrong based on how I was thinking about the problem. I don't know if constant time is possible; it seems like there might be something we could do, but that wasn't the initial approach I was thinking of. So I would say my initial estimate was off.
Doctor Squab: Got it.
The Legendary Waffle: Got it.
Doctor Squab: Okay. Okay. Maybe if you want to spend one more minute on thinking about it before we move on, do you think that there is a possibility of improving?
The Legendary Waffle: You know, I do think we could maybe do something where we know that we have to hold up to N max requests. So what we could do is say the potentially oldest request would be self.map[client_id] at the index of the length of.
Doctor Squab: This.
The Legendary Waffle: Minus... this is wrong. There we go: minus max_requests. There would have to be some validation, obviously, because this could go out of bounds the way it's written right now. But assuming that validation is there, we could check that oldest and see if it is in the range we want to consider or not. If it is, then we know that everything from there to where we are now is full and we can't move forward; and if it is not, then we could move forward. What do you think about that?
Doctor Squab: Yeah, I mean, first thing, the oldest request is the leftmost request. If you think about it, what you are computing is the most recent expired request, the one you are talking about. But strictly speaking, the oldest request is just the one you are taking here on line 55, self.map[client_id] at the left; that's the one which is the oldest, right?
The Legendary Waffle: Sure. I guess in my mind, I was thinking more like oldest in range.
Doctor Squab: Correct, exactly. No, I understand what you're trying to do. But then you also have to remove as many requests as have expired, and how many do you think have expired?
The Legendary Waffle: At this point, we could get that by saying basically whatever this index computes to, everything from zero up to that would be expired.
Doctor Squab: Yeah, but in practice, how many will be that?
The Legendary Waffle: I suppose that could be up to O of n. Is that kind of what you're getting at?
Doctor Squab: How is that possible? I mean, sure, in a case where a bunch of requests came and then no requests came, you can end up with O(N). But what does the more average case look like, if the traffic is steady?
The Legendary Waffle: Good question. I guess if the traffic is steady, maybe N over two? I'm not 100% sure here, to be honest.
Doctor Squab: Yeah, I mean, if traffic is steady, then you will be removing only one request at a time, right? So in the average case you will be at O(1). Does that make sense?
The Legendary Waffle: With this logic you're saying?
Doctor Squab: Yes, so this logic makes it feel like it's O(N), but in a steady state it's O(1). Or, to be precise, if you change the while to an if here, the code doesn't really change.
The Legendary Waffle: I see. Yeah, that's a good point. I didn't think about that.
Doctor Squab: Yeah, so it basically still does the same thing, and your code just works fine. But now the complexity becomes more clear: with the while it was a little bit confusing, but with the if it is very clear that it's O(1).
The Legendary Waffle: Okay. Yeah, I see what you're saying. That makes sense, right?
Doctor Squab: Yeah. I mean, and to take it to the next level: you don't have to delete all of them from the left, right? Why clean up everything? Let them sit there and just clean up one at a time.
The Legendary Waffle: And that's kind of what we ended up with here, right?
Doctor Squab: Yes, but you would have still cleaned up all of them. The problem was that you came up with a range and you would have cleaned up everything before it, so the operation to delete all of those would have still cost you. You would have still put a while after this, right? Once you found the oldest in range, you would have put a while there to delete all of them.
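The interviewer's refinement, evicting at most one expired entry per call, is a one-word change from `while` to `if`. Because the deque never holds more than n entries and they are sorted, a single eviction per call keeps the check exact: if the head is still unexpired after it, nothing else can be expired either. A sketch of the variant (names are assumptions carried over from the interview code):

```python
from collections import defaultdict, deque

class RateLimiterLazy:
    """Sliding-window limiter that evicts at most one expired timestamp
    per call, making each call O(1) instead of worst-case O(n)."""
    def __init__(self, n, t):
        self.max_requests = n
        self.delay = t
        self.map = defaultdict(deque)

    def is_allowed(self, client_id, timestamp):
        q = self.map[client_id]
        if q and q[0] < timestamp - self.delay:
            # the deque never exceeds n entries, so one eviction per call
            # is enough: if q[0] survives this check, nothing is expired
            q.popleft()
        if len(q) < self.max_requests:
            q.append(timestamp)
            return True
        return False
```

A denial only happens when the deque is full after the eviction attempt, and a full deque whose head is unexpired contains only unexpired entries, so the answers match the eager-cleanup version exactly.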
The Legendary Waffle: So, okay.
Doctor Squab: No worries. So now, how would you test this code against real traffic? Let's say you want to run this code in production and you want to make sure that it works; how would you ensure that it works for multiple clients?
The Legendary Waffle: So in terms of like a unit test or more of like in a production environment?
Doctor Squab: Let's talk about unit tests. You don't have to write it, but I want to discuss how you would come up with unit tests for multiple clients, by the way, not for a single client; a single client I understand.
The Legendary Waffle: Sure. Immediately what I'm thinking for a unit test is the simplest case: we could have two clients with overlapping times, such that if it were just one client, we would expect a false. If we had client one with the same cases as before, like 1, 2, 3, we would expect the third to return false. But if we had client two with 3, 4, then we would expect both of those to return true, whereas for a single client they would have returned false. So that would be an initial step towards validating that multiple clients work.
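The multi-client test just described might be sketched like this, bundling a minimal limiter matching the interview code so the test is self-contained (class and method names are assumptions):

```python
import unittest
from collections import defaultdict, deque

class RateLimiter:
    """Minimal sliding-window limiter matching the interview sketch."""
    def __init__(self, n, t):
        self.max_requests, self.delay = n, t
        self.map = defaultdict(deque)

    def is_allowed(self, client_id, timestamp):
        q = self.map[client_id]
        while q and q[0] < timestamp - self.delay:
            q.popleft()
        if len(q) < self.max_requests:
            q.append(timestamp)
            return True
        return False

class MultiClientTest(unittest.TestCase):
    def test_limits_are_per_client(self):
        rl = RateLimiter(2, 100)
        # client 1 exhausts its quota: the third request inside the window is denied
        self.assertTrue(rl.is_allowed(1, 1))
        self.assertTrue(rl.is_allowed(1, 2))
        self.assertFalse(rl.is_allowed(1, 3))
        # client 2's overlapping requests are judged independently
        self.assertTrue(rl.is_allowed(2, 3))
        self.assertTrue(rl.is_allowed(2, 4))
```

Run with `python -m unittest`; the point is that client 2's request at timestamp 3 passes even though client 1's request at the same timestamp was denied.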
Doctor Squab: Yeah, understood. But what I want to see is how can I create a scenario as if requests are really coming in parallel. Just to give you context, that I'm going to take this code and I'm going to put it in like API gateway or somewhere where I'm going to protect my service using this. My service can connect to multiple clients at a time. You can think of multiple phones connecting to it or something like that. I want to enforce limits for each client. and I have no control over those clients, so they can send requests as they want. How would I test that scenario to make sure that my code can survive in a real production environment? So what kind of test cases you will write to ensure that?
The Legendary Waffle: Okay, it sounds like we're talking a little bit about concurrency and maintaining the map while many requests are coming in simultaneously.
Doctor Squab: So.
The Legendary Waffle: For that, I don't have a ton of experience with the concurrency libraries in Python, but I'm assuming you could use them to write some sort of unit test for the asynchronous case, where you have a large stream coming in from client one and, at the same timestamps, a large stream coming in from client two, and you could assert that they're being handled properly. Maybe that would mimic more of that environment.
Doctor Squab: Yes, yes, correct. You can either do multiprocessing or you can do multithreading to do that. And it's okay, I understand if you don't have experience with it. Okay, now let me tell you. One sec. Here it is. So now let's talk about this case.
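A concurrency-oriented test along the lines the interviewer confirms could use a thread pool. Note the sketch first adds a lock to the limiter: the version written in the interview is not thread-safe, and without the lock concurrent calls could interleave reads and writes on a client's deque. All names here are assumptions:

```python
import threading
from collections import defaultdict, deque
from concurrent.futures import ThreadPoolExecutor

class ThreadSafeRateLimiter:
    """The interview sketch plus a single lock guarding the shared map."""
    def __init__(self, n, t):
        self.max_requests, self.delay = n, t
        self.map = defaultdict(deque)
        self.lock = threading.Lock()

    def is_allowed(self, client_id, timestamp):
        with self.lock:
            q = self.map[client_id]
            while q and q[0] < timestamp - self.delay:
                q.popleft()
            if len(q) < self.max_requests:
                q.append(timestamp)
                return True
            return False

def hammer(limiter, client_id, calls, timestamp=1):
    """Fire `calls` concurrent requests for one client at the same
    timestamp and return how many were admitted."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(limiter.is_allowed, client_id, timestamp)
                   for _ in range(calls)]
        return sum(f.result() for f in futures)
```

With the lock in place, hammering one client with 50 simultaneous requests at the same timestamp should admit exactly n of them, which is a deterministic assertion even under scheduling races.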
The Legendary Waffle: Okay. So if the number of concurrent clients is too many, then there still may be a denial of service attack. Change the code to enforce a maximum QPS queries per second the service can handle at any given time. Deny requests and return resource exhausted if more requests arrive. Okay, so queries per second and right now the timestamps are in milliseconds. Is that right?
Doctor Squab: Correct.
The Legendary Waffle: So that'd be 1000 per second, I think. Does that translate?
Doctor Squab: Yeah, so think about it as a new input called M, where M is much bigger than N. N was the number of requests a client can make in a given time window. Now let's introduce another variable called M, which you can introduce in the constructor. This M is the global limit on the system: in a rolling window of t milliseconds, we cannot have more than M connections open, or M requests being served. As you understand, if I'm deploying a service, I'm going to deploy it on a fixed number of resources. The problem right now with the code is that if there are too many clients, then my N is not good enough to protect me, because C is variable and C can be arbitrarily large. And I cannot say, hey, I don't want more customers in my service. So if I know I have deployed my service on a fixed number of resources that can handle M requests at a time, I'm just going to enforce that. That's where this M is coming from. So let's put that in the constructor next to the N and see whether we can deny a request if, globally across all clients, we exceed it.
The Legendary Waffle: Okay. So in that case, yeah, let's start by putting that in the constructor: max queries per second, let's say that's M. The simplest thing I can think of initially is we could maintain a count. So we could say current queries per second equals zero initially. Then in isAllowed we'd want to increment that at the beginning no matter what, because even if we were to return false, that's still a query. So we'd say current queries per second plus equals one, and then we could immediately check: if queries per second is greater than max queries per second, we just return false. And maybe we could log something to the system saying resource exhausted, or we could raise an exception, depending on how we want to handle that, to let the devs know what's going on. Then what we would need to handle is resetting current queries per second when we get to a new second, I suppose. So maybe we could keep a current second... I'm not 100% sure there. And then we could say, like, we can.
Doctor Squab: Yeah, let me step back a little bit.
The Legendary Waffle: Right.
Doctor Squab: It's not really a QPS that you're trying to capture, I see that. What I'm trying to hint at is: think of M as a constraint on all the active requests across all the clients, or sorry, all the non-expired requests across all the clients. If you go into the queues of every client, from client 0 to C, and add up how many haven't expired, that number should not exceed this M. That's the constraint I'm going for.
The Legendary Waffle: Okay. And so in that case, are you saying that we would increment that here? Like when we update this value, we would increment the queries per second in terms of this is a query that's actually being made in the database or something?
Doctor Squab: I mean, that's more of an implementation detail, how you want to handle that, right? But what I'm going for is, as I said, here's a way to think about it: let's say we did not have this per-client quota for each client and we had a global queue. In that global queue, we will never have more than M requests active.
The Legendary Waffle: Okay. So I guess what we could do is maybe imagine a function and say that if we could have a function that validates is the queue full?
Doctor Squab: Yeah.
The Legendary Waffle: And if it is, Then what we want to do is that.
Doctor Squab: Yes, something like that. Yeah.
The Legendary Waffle: Okay. So is that more along the lines of what you were thinking?
Doctor Squab: Yeah, I mean, you got it right, but that queue, this queue, now tracks every single client, right? So yeah, something like that.
The Legendary Waffle: The question becomes how do we adapt the hash map to count that? Is that what you're saying?
Doctor Squab: Perfect. Exactly. How do you change your code to now enforce this limit?
The Legendary Waffle: Okay, and the queue will include each client or each client's request cumulatively.
Doctor Squab: Yeah.
The Legendary Waffle: Okay. So in that case, we could have a counter, initialized to zero, which is just the number of items in the queue right now. And for each request, I guess we would increment it and count that as a request. Is that kind of what you're saying, or do you think we should actually have a literal queue that we push to and then process that way?
Doctor Squab: I mean, if you let's say that if you keep a counter, right, how are you going to increment and decrement it?
The Legendary Waffle: I suppose we would increment it whenever we append to the data structure and decrement it whenever we pop from the data structure. That would be one way of doing it, but that wouldn't take into account denied requests.
Doctor Squab: It won't take care of the case where, at the moment you are evaluating this, there might be expired requests sitting there for some other client. You are basing your decision on whether this client's quota is full or not, but what if there was room in another client's quota?
The Legendary Waffle: So I see.
Doctor Squab: Or the other way around. Maybe in this case it will not be a problem, but the other way around: let's say that you might allow it, but another client's request might have expired by then.
The Legendary Waffle: Okay, so in that case we could maybe have a global cleanup. So we would extend this, maybe back to the initial version where it's a while loop, but we could say, for client in self.map (ID is a bad name there), we could do cleanup_expired, which would be something like this but a little bit modified. And then what we could do is maybe increment on each request at the beginning and call this at the beginning as well. That way we would have a more up-to-date count before we evaluate the request.
Doctor Squab: Okay, so in terms of data structures, what are you maintaining now for this global state?
The Legendary Waffle: What are we maintaining?
Doctor Squab: What would the cleanup API do inside it? What will be inside this function?
The Legendary Waffle: So in this function, what I'm imagining here is if we have.
Doctor Squab: The current.
The Legendary Waffle: ...timestamp we're evaluating, we do, for client in map, this while loop. So while map[client][0] is less than timestamp minus the delay, we pop left.
Doctor Squab: Yeah.
The Legendary Waffle: And then what we would do is we could say we could decrement the queue.
Doctor Squab: So.
The Legendary Waffle: Decrement the count of the queue. Is that kind of what you were wondering?
Doctor Squab: No, take a step back a little bit and think about it. What I'm trying to say is that you have requests coming from multiple clients, right? And they will be coming at different intervals. Now, at any given time, I'm asking whether this request should be allowed, and you need to check two conditions. Number one, is this client's quota exhausted? Say the answer is no, the client quota is available. Number two, is the global quota exhausted? Now, the global quota is tricky to calculate, because if we are at capacity right now, you have to look at everybody else and see how many of their requests have expired, and from there, see whether any of their requests can be deleted. If they can be, then maybe we can allow this request to go ahead.
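The brute-force version of the two-condition check being described here, per-client sliding-window queues plus a global cap, with a full sweep to evict expired requests before deciding, might be sketched like this. All names (`RateLimiter`, `cleanup_expired`, `allow`, the limits) are hypothetical, not the candidate's actual code:

```python
from collections import defaultdict, deque

class RateLimiter:
    """Hypothetical sketch: per-client sliding window plus a global cap."""

    def __init__(self, delay, per_client_limit, global_limit):
        self.delay = delay                       # window length in seconds
        self.per_client_limit = per_client_limit
        self.global_limit = global_limit
        self.map = defaultdict(deque)            # client id -> request timestamps

    def cleanup_expired(self, timestamp):
        # Brute force: sweep every client's queue and evict anything
        # older than the window, so the global count is up to date.
        for q in self.map.values():
            while q and q[0] <= timestamp - self.delay:
                q.popleft()

    def allow(self, client, timestamp):
        self.cleanup_expired(timestamp)
        # Condition 1: is there room in this client's quota?
        if len(self.map[client]) >= self.per_client_limit:
            return False
        # Condition 2: is there room in the global quota?
        if sum(len(q) for q in self.map.values()) >= self.global_limit:
            return False
        self.map[client].append(timestamp)
        return True
```

The sweep in `cleanup_expired` is what makes this brute force: every decision touches every client's queue, which is exactly the cost the follow-up is probing for.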
The Legendary Waffle: Okay, so it sounds like, the way you phrased it, there are two checks: is this client's request allowed, versus is the request allowed globally. So we could define a separate function that checks the global state, kind of like every client's queue, and does that pruning, and then we check that before we check whether the current request is allowed.
Doctor Squab: Correct, correct, correct. Okay, but we are out of time. But did you get the question at least? I hope so; I was just increasing the complexity of the question. Before we discuss feedback or anything, I just wanted to check that you understood the question. I don't know why it took so long; that's why I wrote it down in line 83. But do you understand the question, or is it still not very clear?
The Legendary Waffle: I think I understand the question.
Doctor Squab: Yeah. Okay. I mean, I thought it was very clear; this has happened before, too, which is why I wrote it down and thought I'd just paste it as a follow-up, but anyway, it still came out muddled. This is my question, so maybe I'm still massaging it. Yeah, but this increases the complexity a lot. I think at the end you understood what I was trying to say: you have to maintain a global state of the system. So one way is to actually go through all the queues, figure out from there whether you can clean anything up, add up how many are still active, and then see whether you can allow the request. That's one way, but it's more of a brute force, right? There is something you can improve on: you can come up with something slightly better with a priority queue, where you can delete all the requests which are expired, if you maintain a priority queue with a pointer back to the original queue. Where the priority queue helps you is that all the expired requests will be at the top, so you can delete them very fast.
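The priority-queue idea being hinted at here might be sketched like this. One concrete way to realize the "pointer back to the original queue" is to store the client id in each heap entry; expired requests are the globally oldest, so they surface at the top of a min-heap. All names here are hypothetical:

```python
import heapq
from collections import defaultdict, deque

class HeapRateLimiter:
    """Hypothetical sketch of the min-heap variant: expired requests
    sit at the top of the heap, so global cleanup is cheap."""

    def __init__(self, delay, per_client_limit, global_limit):
        self.delay = delay
        self.per_client_limit = per_client_limit
        self.global_limit = global_limit
        self.map = defaultdict(deque)   # client id -> timestamps
        self.heap = []                  # (timestamp, client id) for all live requests

    def allow(self, client, timestamp):
        # Pop expired requests off the heap. Each popped entry is its
        # owner's oldest request, so popleft on that owner's queue keeps
        # the two structures in sync.
        while self.heap and self.heap[0][0] <= timestamp - self.delay:
            _, owner = heapq.heappop(self.heap)
            self.map[owner].popleft()
        if len(self.map[client]) >= self.per_client_limit:
            return False
        if len(self.heap) >= self.global_limit:   # heap size == total live requests
            return False
        self.map[client].append(timestamp)
        heapq.heappush(self.heap, (timestamp, client))
        return True
```

Compared to the brute-force sweep, each expired request is now popped in O(log n) once, instead of every client's queue being scanned on every call, and the global count is just the heap's size.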
The Legendary Waffle: Okay.
Doctor Squab: Like a heap. And then at any time you can check the size of the priority queue, which will tell you whether you can allow it or not. So that's the kind of solution I was hinting at. Anyway, it didn't go there somehow; I need to clean the question up a little bit more. But yeah, I was trying to see if somebody could come up with the priority queue. I was thinking of the priority queue as the more optimal answer, but I would be happy if somebody just says they will go over every single queue, delete the expired requests, calculate how many are still active, and then enforce the limit. Okay. So here's my feedback, what I wrote down so far; we can go over this. I have not given the scores yet, and I might change things a little, since these are the notes I was taking as we went. Here I will give you outstanding, here I will give you solid, and we can go over the details. Here also I will give you outstanding, and communication also outstanding. So yeah, overall I will give you a strong hire.
The Legendary Waffle: Yeah. Amazing.
Doctor Squab: Okay. So this is how it typically looks; actual feedback looks very similar to this. People might add bullet points and clean it up a little for the hiring committee, but this is very standard feedback. So if you want, we can go over it. Definitely in algorithms, you came up with the optimal solution within minutes, which takes a lot of people time. Sometimes they come up with O(n), then O(log n), and then improve to O(1). You came up with O(1), but then kind of regressed on it. At first I thought, oh, you completely killed it, this is too fast. Another nuance I liked a lot was that you started with a list, and I began writing in the feedback, "came up with a suboptimal data structure," but then you realized, no, I will switch it to a queue, and you used that, which is nice. You did check edge cases, or at least you asked about them, which is good practice; clean code, no comments there. The one thing I would say, and it's where I didn't give you outstanding on implementation, was realizing that if-versus-while thing which we discussed. In a real interview you will not get that: the interviewer will not tell you that you can replace that while with an if; they will just write it in the feedback and leave it at that. I would also just leave it like that, but I thought I would tell you right away. The O(1) argument kind of makes it clear whether it should be an if or a while, but even if the while stays, in practice it doesn't change that much. So it's always a nice idea to discuss average-case or steady-state complexity, because that's what matters more in industry than the extreme worst case at some point.
Especially when we are talking about something like a rate limiter, where traffic is going to keep coming; what are the chances that there is no traffic coming in at all? So in practice, it's mostly O(1). Right?
The Legendary Waffle: Yeah.
Doctor Squab: I did test you a little bit; I wanted to see whether you had just practiced a lot of these LeetCode-type questions, and I hope this wasn't like LeetCode. That's why I asked you about concurrency and exception handling and things like that, which are more relevant to the job, and why I was pushing you on that unit test for multiple clients and on error handling, like resource exhaustion. Those give a little signal about whether you can think of that. And then the follow-up, I knew it was complicated. You are applying for L3, so I don't think you had to do that question, but at L4 or L5, I usually expect candidates to have more solid thoughts on it. Some people do come up with something; I think only one candidate came up with the priority queue thing, but some do mention that they will just go through all the queues and enforce the limit. I won't deduct marks for that, but if you had answered it slightly better, I would have pushed and made a case like, hey, maybe we should consider L4 for you. As an interviewer you can give that kind of feedback as well: maybe think about leveling up the candidate. But I think at L3, it's smooth sailing. I have pretty much no reservations in giving you a strong hire.
The Legendary Waffle: Yeah.
Doctor Squab: Just keep doing what you're doing. Yeah, thank you. And be prepared for more uncomfortable questions. If you've practiced a lot of LeetCode, you are already doing that, but Google tends to ask questions which are not very LeetCode-like, something more practical like this. So be prepared to discuss more nuances like multi-threading, multi-processing, locking and unlocking. Maybe you should brush up on all that a little; instead of saying "I have never worked on it," it's not that hard to quickly brush up. Look into how locking is implemented, how multi-threading and multi-processing work, thread pools, stuff like that. They would expect you to know a little bit about it because it is used so much. So definitely take a look at thread pools, process pools, locking, unlocking, stuff like that. Yeah.
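On the locking point: a minimal, hypothetical sketch of making a sliding-window limiter thread-safe with `threading.Lock` (generic Python, not anything from the interview itself) looks like this. The key idea is that the check and the append must happen atomically, or two threads can both pass the check and overshoot the limit:

```python
import threading
from collections import deque

class ThreadSafeWindow:
    """Hypothetical sketch: a sliding-window limiter guarded by a lock,
    so concurrent callers can't race between the check and the append."""

    def __init__(self, delay, limit):
        self.delay = delay
        self.limit = limit
        self.requests = deque()        # timestamps of live requests
        self.lock = threading.Lock()

    def allow(self, timestamp):
        with self.lock:  # cleanup, check, and append are one atomic step
            while self.requests and self.requests[0] <= timestamp - self.delay:
                self.requests.popleft()
            if len(self.requests) >= self.limit:
                return False
            self.requests.append(timestamp)
            return True
```

The same pattern extends to calling `allow` from a `concurrent.futures.ThreadPoolExecutor`: the lock, not the deque, is what makes the limiter correct under concurrency.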
The Legendary Waffle: All right.
Doctor Squab: Yeah.
The Legendary Waffle: Thank you, I really appreciate the feedback; I think that's really helpful. One question I just had: related to the behavioral interview, do you have any tips for how to stand out there that may be different from how to stand out in the technical rounds?
Doctor Squab: To be honest, what you did is exactly what they are looking for; you are already standing out, so expecting more would be asking a lot. You had fantastic time management, and I would definitely make sure you keep it up like this. One thing I can tell you is that Google almost always asks a follow-up. Unless the question is very hard, say they start with a LeetCode hard, they tend to ask follow-ups most of the time, multiple follow-ups at times, right? So I would definitely keep up the pace the way you were keeping it. Don't think, oh, let's get comfortable and take more time discussing this and that which is not very important. If you know the answer, just nail it, move on, and let them ask the second question. Don't waste time, because you never know how many questions are in their arsenal, and it is hard to know. Other than that, as I said, keep valuing feedback; that's a rubric item a lot of people overlook. You're already solid in algorithms and programming, so I don't have many comments there, but skills like valuing feedback and communication, keep those up. You kept it very easy, you made sure you had good humor, so keep that up. They are hiring a potential teammate, and they do look for whether somebody is easy to work with, and you showed that. So keep it casual, keep interacting with the interviewer the way you did. And if they give you a hint, do take it seriously. Whether you use it or not is not that important, but don't ignore it, because people take that as a negative: "I gave them a hint and they just ignored me," and they might write it up as a negative point.
So, yeah, you can always say, "I understood it, but this is what I'm thinking," or something like, "okay, I got it, I heard you, but I'm still not going with it because of this." At least you acknowledged it, which is good enough to avoid a negative mark. And usually, if they are hinting, it will be a meaningful hint; it will not be some random thing.
The Legendary Waffle: All right.
Doctor Squab: Okay. Yeah. Well, yeah, good luck with your interview.
The Legendary Waffle: Thank you so much. Have a good rest of your weekend.
Doctor Squab: Yep, you too. Take care.
The Legendary Waffle: Bye. Bye.