Interviews are largely a dumb game these days, so if you want to game them back you can go with a statistical approach:
* Practice questions by company on LeetCode: sort by frequency over the last 6 months and work down the list, doing maybe 75-100. The list updates once a week.
* Search for the company on the LeetCode forums and sort by most recent. If a question is not on LC yet, it will likely get posted there, so you can get really fresh intel.
I've read about people doing this and getting 6/6 questions they'd already seen.
Almost all advice online about interviewing is written from the point of view of the candidate. Sometimes this is good advice, but sometimes it devolves into a kind of astrology, where candidates are just guessing how things work.
I've done 300+ interviews at FAANG, so I can share a bit of advice from the interviewer's side. The caveat is that this is based on how I conduct interviews, so YMMV with other people.
* Always ask clarifying questions. Almost all problems have some level of ambiguity that you need to clarify. This is intended. The more senior you are, the more important this becomes.
* Explain your approach before you begin, and even while you code. This gives the interviewer a chance to help you if you are down the wrong path, or even just understand what you are up to.
* If you get stuck, ask for help! Think of this as a pair programming session rather than an interview. I much prefer a candidate who gets stuck, asks for hints, gets unblocked, and writes good code over one who doesn't ask for help and writes not-so-good code.
* Caveat: there is a tension with asking questions. There is such a thing as too much help, e.g. needing me to explicitly tell you how to solve the problem. Ask judiciously.
* Take the interviewer's hints. The interviewer has probably asked this question dozens of times and knows it inside out. If they give you a hint, 99% of the time they are on to something.
* Personally I don't like "hard LC" problems. Instead I prefer medium difficulty problems, and spend extra time probing the candidate's coding skills. This includes:
1. Write tests.
2. Handle corner cases/incorrect inputs.
3. Discuss how to scale the code.
4. Discuss how to refactor the code.
* If nothing else, write the simplest brute-force solution you can think of. As an interviewer I need a coding sample to evaluate the candidate. Trying and failing to implement an optimal solution is worse than implementing a correct brute-force one.
> and even while you code
As an interviewer I explicitly ask candidates NOT to do this (or to only do it if they want). For some reason there is this expectation in coding interviews. I challenge anyone who pushes their interviewees to do this to sit down during one of their own coding sessions and vocalize what they are coding while doing it. In my opinion it is stupid and unnecessary.
When my interviewees start solving the coding tests and begin "thinking out loud" I tell them: "you don't need to do this here, focus on your code, and once you are done I'll ask you to explain some parts to me". I invariably hear a huge sigh of relief from them haha.
Luckily my problem-solving thought process is already conversational, so saying out loud what I'm thinking is just vocalizing what's already going through my head.
The more I read HN comments the more I realize that not everyone reasons through problems "verbally" (even if it's silent or internal) and uses some other thought process. To me, coding has always been like speaking a very specific and pedantic language.
I would find it interesting if other people were able to share their own mental models for coding, if that's even possible.
You’re rare as far as expectations go in the Bay Area. Most interviewers want explanations the whole way. Basically don’t stop talking ever.
I appreciate the caveat. Caveats are almost never given when people give advice and it is exactly the thing that people get stuck on when they take the advice to heart.
Have you ever passed someone who wrote brute force algorithms?
Of course. Contrary to popular belief you don't need to ace all interviews to get an offer.
It's fairly common to make a hire decision with one person not inclined.
So your assessment was "inclined no hire" when the person gave a brute-force solution, am I wrong?
Most often, but not always.
To understand why, here is another bit of my philosophy: the aim is to find a good match between candidate and company. The interview is a (very flawed) proxy for this, so don't overindex on it.
For instance, I was recently inclined for a candidate even though they didn't make it past brute force. The reason is that they did very well in the behavioural round and wrote a very good brute-force solution. Also, during coding, they explained in detail the internals of some data structures (e.g. hash maps), which shows they know their stuff. Lastly, they had an intuition about how to approach the problem optimally, even if they didn't manage to write the code.
> After being given the algorithm problem, ask for the specific runtime your solution will need to have. Almost certainly, the interviewer will tell you.
In my experience, interviewers will rarely tell you the runtime of the optimal solution.
Regardless, very interesting blog post.
> In my experience, interviewers will rarely tell you the runtime of the optimal solution.
I agree. As an interviewer, I genuinely want each candidate to do well, and I think that, on balance, setting a specific complexity goal is likely to do more harm than good.
For a strong candidate, it could limit their opportunity to shine by narrowing down the solution space and discouraging them from exploring tradeoffs (e.g. runtime vs memory).
For a weaker candidate, it could mean no solution at all instead of a suboptimal solution. I generally choose my problems such that they admit multiple solutions of varying degrees of efficiency/sophistication. It is generally not a hard requirement that a candidate find the optimal solution in order for me to score the interview favourably.
In my experience, "no solution at all" is already the default. When people choke on questions it is because they get stuck trying to guess at what you want, and panic thinking it must be some clever trick they don't recognize. Because we've all seen a million tricks and forgotten most of them.
If you're happy with just getting it done quick and dirty, at least as a start, then telling them so is obviously better than not doing so. Beyond that, anything more about what you want will help trigger their thinking in terms of the right approach. Unless of course you really are looking for some clever trick (and I have certainly seen those annoying questions).
What you do on leetcode is different from what you'd do in an actual interview though. On leetcode I don't even bother with the trivial solution, as it would never pass the test cases. In an interview, however, I'd always start with the trivial brute-force solution, at least to get out a base solution to evaluate, describe, and then iterate on.
I don't ask questions that can only be solved with some clever trick. In my experience, I get the most signal by posing a question that has many solutions of varying degrees of efficiency.
This way anyone except the least qualified candidates can make some meaningful progress. I then gently guide the candidate to help them produce the best solution they're capable of in the allotted time.
I then score the interview on the basis of where they have landed, the process by which they got there and how much help they needed along the way (and of course the role/level they're interviewing for).
I’d turn that question back around at the candidate, unless it were for a junior candidate.
I’ll give hints if the candidate is struggling, but I won’t just come out and tell them something like this. If they pushed me hard enough at the start, I would tell them and then fail them on the algorithms/reasoning component of the interview.
Yeah - why not just tell them?
You know, like IRL. What this is supposedly about.
Again, the question referred to the target performance, not the optimal performance.
When I implement something IRL (unless I'm implementing something from a paper or an algorithm that's well known), I typically can't just ask someone what the algorithmic complexity is. I have to determine if for myself.
Ed: to clarify a bit, in some cases the "non-optimal" solution is going to be the best one. And that's even before you start worrying about things like time/memory tradeoffs. When a candidate asks me questions along the lines of "which of the 2 input lists is bigger, what order of magnitude is the length, does XYZ fit in memory, etc", IMO it's a useful signal that shows they're aware of these tradeoffs.
The original question referred to "target runtime", which I take to mean "basic expected performance characteristics." E.g. "Nothing crazy -- should run on a million integers or less, in half a second or less, while requiring not much more than the array size (or a small multiple) in extra memory. And worst case should be not too far from average case."
From there, I can start to think about the runtime complexity (or whether it even matters, at the scale given). But if an interviewer won't even tell me that, I'd assume they're just making candidates dance for the sake of making them dance, while they sit back and stare at their phone and occasionally interrupt with "hints".
Identifying a reasonable target runtime can be part of the problem. The longer they’ve been writing software, the more ambiguity I expect them to handle.
If the candidate wants to discuss performance, I’m obviously interested, but if they want me to serve a hint to them on a silver platter I’m just confused.
> If they pushed me hard enough at the start, I would tell them and then fail them on the algorithms/reasoning component of the interview.
I agreed with you up until this point, which seems unnecessarily harsh to me.
There’s more to an interview than algorithms (e.g. coding, communication), so it’s not a death sentence.
In the hundred or so interviews I’ve given, I’ve never had a candidate persist in forcing questions along these lines from the very start, even after I have turned the question back around at them. If you force me to give you hints out of the gate, something very weird is going on.
Sorry but this sounds like "I want to see something, you don't know what it is, I won't tell you, and if I don't see it it's game over bye!"
Why??
Because in the real world you don't know the runtime of the optimal solution, except for certain very well studied and simple problems. At best you might know the runtime of the best published solution. But in many cases your problem isn't something that anyone has studied in that kind of detail and it's going to be on you to determine whether your solution is satisfactory or whether you should go back and improve.
Being told what to do is for juniors, anyone at intermediate or above should be able to recognize most compute waste and be able to fix it without being told what they are supposed to do or how fast the end result is supposed to run.
+1, got me thinking about this one Elon quote that's roughly about how we're trained in school to solve the problem given to us, even if it's not a valuable problem to solve. You can save a lot more time and complexity by realizing the 'problem' isn't really a problem than with any solution.
The thing that people like Elon sometimes forget is that most companies have far less room for first-principles multidisciplinary problem solving than for the people who then implement the plan that comes out of said problem solving. It can still be helpful, but it comes with an added cost: every time you ask someone to do a task you have to explain the entire decision tree that led to the task, and even if they do come up with a valid improvement, you probably have to throw out a bunch of other decisions that other people are already working on implementing.
Agreed.
What I'd actually ask as a candidate is what to optimize for. Are we looking at an N of 100 and a simple solution will do, or are we looking at an N of a million? Maybe memory is the constraint.
At least show the candidate that you're thinking pragmatically. The goal in the real world isn't to write the "fastest" algorithm, it's to write the most appropriate one.
Exactly. The only scenario in which I actually reveal the optimal complexity is when they got the best solution and are still struggling to see if there's a better one. It's better if they understand that it is optimal but I don't let people waste more than a minute or two on this. If their solution is not optimal I just say something like "Is this optimal?" or "Is there a better way?", and take it from there as a dialog.
Agree. When I'm interviewing I'm trying to evaluate if the candidate will be able to do a real job. In the real world you're coming up with answers to a novel problem. There's no way to know what the complexity of the optimal solution is (unless you can prove it, which doesn't happen outside of a classroom).
If you throw out a time complexity most interviewers will tell you if they expect something better. For big companies, it's always a crap shoot. For smaller companies, if the person isn't willing to have a conversation with you it's a sign they would be difficult to work with.
Not just throwing out a time complexity to see what sticks, but if you sketch out an approach and give the complexity then I agree, they will probably either tell you “implement that approach” or ask “can we do better?”.
There's a bit of a "draw the rest of the owl" thing going on here. If you have the experience to confidently eliminate the likely approaches that don't work, then you must know quite a bit about those approaches and their applications.
And in particular, I'm not sure I can agree on eliminating recursion in the anagrams problem. He just says
> Recursion – No way to apply recursion to the problem.
but...there is? One plausible approach: I sort each of the strings and put them in a trie. A fancy-sounding thing, but really just a data structure which allows me to rephrase "someString in myTrie" as the recursive "someString is empty || (head(someString) in myTrie.keys && rest(someString) in myTrie[head(someString)])". I'm not arguing that this is better than a hash table, though I think it probably is on some workloads - just that it's not possible to rule it out and solve the problem by pure process of elimination.
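The recursion above can be sketched concretely. This is a minimal illustration, not a claim about the best representation: a nested-dict trie, with all names and the sample words chosen for demonstration.

```python
# Trie sketch for the anagram idea above: sort each string's characters,
# insert the result, and test membership recursively.
END = object()  # sentinel marking the end of a stored word

def insert(trie, s):
    """Insert the characters of s into a nested-dict trie."""
    node = trie
    for ch in s:
        node = node.setdefault(ch, {})
    node[END] = True

def contains(trie, s):
    """Recursive membership: empty string, or head in keys and rest in subtrie."""
    if not s:
        return END in trie
    head, rest = s[0], s[1:]
    return head in trie and contains(trie[head], rest)

trie = {}
for word in ["listen", "silent", "enlist"]:
    insert(trie, "".join(sorted(word)))  # all three sort to "eilnst"

print(contains(trie, "".join(sorted("tinsel"))))  # anagram of "listen" -> True
print(contains(trie, "".join(sorted("hello"))))   # -> False
```

Whether this beats a hash table depends on the workload (shared prefixes help the trie), which is exactly the point: it can't be ruled out a priori.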
That's the nature of thought frameworks. You need to fill in the gaps yourself. It's still useful to have an organized approach. In my experience, people can know how well each technique would be suited to the task at hand if asked. But, they rarely consciously go through each technique they know and consider if it's the right approach. Most people would be well served to do this more often.
Btw, this is a big failing in interviewers who help. Asking the right questions (what data structure could you use here?) is incredibly helpful. So helpful that you're not testing them on a slow moving, relevant skill (structured thinking) but instead a fast moving, less relevant skill (what data structures do you know?). Bad move.
I kind of agree, in that a good candidate should probably already be thinking "what data structure should I use here" no matter what the problem is. Just always think about what data structure to use. Every problem requires one.
More in the vein of what you're getting at though, I think helping a nervous candidate by asking obvious questions like this can be helpful: "What data structure would you recommend using for this problem?", or "What programming paradigm do you think would fit this problem best?". It's a bit of an ice breaker, and can lead to more interesting conversation than the candidate just sitting there, quietly thinking (admittedly, possibly not a good sign for a candidate if they do this, but interviews should be built around making it a good experience for the candidate as well).
I have completely the opposite opinion of your second paragraph. I’m fine with candidates quietly thinking in an interview. It’s a big part of the job. I’m not optimizing for interesting, but for signal. And dropping the signal of “can get themselves going with structured thinking” is not worth it to make it a better experience for the candidate in that moment. I’m okay with a candidate feeling uncomfortable while they’re failing an interview. If anything, I think it’s better than the standard which is to give hints, act like everything is great and then thumbs down them later. Makes people so confused. There’s also a whole thing about who gets hints fastest, confirming biases. But that’s another giant thread.
This is a good post for beginners. Fortunately or unfortunately coding interviews have become much tougher. This resource [1] has helped some. [1]: https://hackernoon.com/14-patterns-to-ace-any-coding-intervi...
This type of top down learning doesn't work for interviews.
Just like how you can't learn math by reading high level summaries written by others.
You have to grind through it and see the connections yourself.
The author's lack of depth shows when he says that no graph means you can rule out DFS.
I also disagree on asking interviewer for the expected runtime. You're needlessly adding another constraint on yourself by doing that. You should ask for the input size and then figure out the best runtime that you can reach. A working suboptimal solution is more likely to get you to the next round than an incomplete attempt at an optimal solution.
Am I crazy or is the first answer terrible? I sat down and wrote out an answer for the problem, and I initialized one integer and one timestamp. It should be O(1) time and O(1) memory easily, right? I'm seeing comments saying they'd use an array or a hash table -- why are you using any data structure? You don't need to remember how many times it was called 61 seconds ago; just keep a counter and the last time you reset it. If you need it to be clock-minutes instead of rolling (it doesn't need to be; whose clock, anyway?), then just round the timestamp. The obvious follow-up question is "ok, what if it needs to be limited to 10,000 calls per minute?" You're going to keep 10,000+ elements in some collection for no reason? I wouldn't immediately reject a candidate for coming up with this answer but I would seriously question their thought process on this one.
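A minimal sketch of the counter-plus-timestamp idea, using fixed clock windows (class and parameter names are illustrative, and the replies below debate whether this counts as a true sliding window):

```python
import time

class FixedWindowLimiter:
    """Allow at most n calls per fixed window using O(1) state:
    one counter and one window-start timestamp."""
    def __init__(self, n, window=60):
        self.n = n
        self.window = window
        self.count = 0
        self.window_start = 0

    def allow(self, now=None):
        now = time.time() if now is None else now
        bucket = now - (now % self.window)  # round down to the window boundary
        if bucket != self.window_start:
            self.window_start = bucket      # new window: reset the counter
            self.count = 0
        if self.count < self.n:
            self.count += 1
            return True
        return False

limiter = FixedWindowLimiter(n=2)
print([limiter.allow(t) for t in (0, 1, 2, 61)])  # [True, True, False, True]
```

Note this bounds calls per clock window, not per arbitrary 60-second span; a burst at the end of one window plus a burst at the start of the next can briefly exceed n.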
In fact it's kind of ironic that his first example exposes how bad this advice is: by trying to meta-game, starting from a few example data structures, he shows he completely misses the solution that is both simpler and much more performant.
I don't think you can have a "sliding window" of time just based on that.
The next time you call the function, you need to count how many of the old calls are still "unexpired". This number (potentially) gets lower with each passing quantum of time.
How can you do that without holding a timestamp for each call? Please clarify if I misunderstood you.
I think you can have a more performant algorithm if you soften the constraint a little. (see my previous comment https://news.ycombinator.com/item?id=29776678 )
TLDR: You have a maximum of N credits; as time passes you earn credits at a rate of N per window_size, but if the time since the previous request is less than window_size/N you lose 1 credit.
I don't think you can have more than 2*N requests in any sliding window without tripping the filter, and you can't sustain more than an average of N requests per window_size without tripping it either.
I think it's a better solution than the question asked by the author, because when you are rate limiting, you and the client may not have exactly the same time, and you might have edge cases like the client batching 60 requests in a few ms every minute. If there is some time-jitter in the requests you may have 120 requests in 59.9 seconds. (Bonus question: what time-stamping of the requests should be used?)
Whereas my solution is more forgiving: it allows the client to use all its rate credit without risking tripping the filter, as long as it respects the rate intent.
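The credit scheme described above is essentially a token bucket. A sketch under those assumptions (names, the tiny 2-credit capacity, and the explicit `now` parameter are all for demonstration):

```python
class TokenBucket:
    """Credit-based limiter: refill at n/window credits per second,
    cap at n, spend one credit per request."""
    def __init__(self, n, window=60.0):
        self.capacity = n
        self.rate = n / window     # credits earned per second
        self.credits = float(n)    # start with a full allowance
        self.last = 0.0

    def allow(self, now):
        # Earn credits for elapsed time, capped at capacity.
        elapsed = now - self.last
        self.credits = min(self.capacity, self.credits + elapsed * self.rate)
        self.last = now
        if self.credits >= 1.0:
            self.credits -= 1.0
            return True
        return False

bucket = TokenBucket(n=2, window=60.0)
print([bucket.allow(t) for t in (0.0, 1.0, 2.0, 40.0)])  # [True, True, False, True]
```

This is forgiving in exactly the sense above: a client that sticks to the average rate never trips the filter, while sustained bursting drains the credits.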
That's a better solution in general IMO, but the author's approach can guarantee that you'll never go over N calls in a sliding window rather than fixed windows. I don't believe that's possible with the timestamp + count solution. Gotta bring up both solutions and ask the interviewer what they want :)
What's the problem with a sliding window and the timestamp + count solution? lastTime is the timestamp of the last call. newTime is the timestamp of the incoming call. If newTime - lastTime > 60 seconds then you're good to proceed, set count to 1 change lastTime to newTime and go on. Otherwise, check whether count is less than n and proceed accordingly (incrementing count if so). This accomplishes the sliding window and rounding down to the last whole minute handles the fixed window - right?
There’s also two additional programming techniques you should be aware of:
* Dynamic Programming
</snip>
How often do folks here use dynamic programming techniques in their professional lives? My own niche is systems programming. Dynamic programming is an important technique, sure, but given how rarely I see it used in practice compared to, say, statistical estimation, it feels very overrepresented in interviews. But maybe that's my own niche being unusual.
An interesting point. I think most of today's systems tend to be "stateless" and store state in a database, which would make DP pretty cumbersome if not impossible to manage.
The only places where this type of processing is required involve specific kinds of computation, and for most of those scenarios you'd be better off using an algorithm already implemented in an existing library.
Most things in interviews are wildly overrepresented compared to their usage though. I haven’t done any shortest path, binary search, topological sort, or many other things in most of my time in industry. I would have a hard time remembering even one instance for many algorithms.
What niche do you work in, if you don't mind my asking?
I don’t have a niche. I’m full stack. I do everything related to web apps and been at six companies doing that all in different fields. (Seed startups to large public companies)
Maybe if I was in computer vision then I’d use some of these algos more often. But those usually require a special masters or PhD to get an interview anyway. So, not really applicable to the overall industry tbh.
It's hella common for SV though. The overwhelming majority of jobs here and startups are based around it. So, unless you're doing something close to hardware - you're likely working on a web app.
This doesn't mean you're just making pretty widgets for a browser.
More than my coworkers, not often enough, and not very often.
Many people think DP and caches are synonymous, unfortunately.
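The distinction is real: memoization is a cache bolted onto a recursion, while textbook DP fills in subproblems bottom-up in an explicit order. A minimal illustration with Fibonacci (chosen purely for shape, not as a realistic workload):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down: recursion plus a cache (what people often call 'DP')."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_dp(n):
    """Bottom-up DP: iterate subproblems in order, O(1) extra space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_memo(30), fib_dp(30))  # 832040 832040
```

Same answers, but the bottom-up version needs no cache, no recursion depth, and makes the evaluation order (and the space you can discard) explicit.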
Yeah, it's not commonly necessary for real world engineering, but it's certainly good to know, at least as a mental exercise. There's a nice free algorithms textbook used at UC Berkeley that covers the concept pretty well: http://algorithmics.lsi.upc.edu/docs/Dasgupta-Papadimitriou-...
I usually reach for caching first because it’s more intuitive (for me and my reviewers) and because we often have well-known size bounds and relaxed enough performance requirements that make an “analytical” solution unnecessary.
Caching creates a mature product, no matter what your product roadmap says. If a rewrite is the tool of last resort, caching is the second to last. Like pruning shears: once you use them, your world of options collapses considerably. All of those opportunities are gone and nature won't let you put them back.
Critically, caches are global shared state. That's why cache invalidation is the hardest thing. Global state also means people stop trying to have a data flow/architecture. None of these call trees need to advertise that they use a particular value because I can just grab it from a global if I need it. You don't know where things are actually used anymore because they all come from cache lookups, and because the lookup is so cheap (under typical system load, but catastrophic under high load), people don't even try to hold onto previously acquired data. They just fetch it again. Which is another boundary condition for cache invalidation (what if half a transaction has the old value and half the new value?)
They make flame graphs essentially useless. Sometimes immediately, or once the above starts happening. You quickly have no clear idea what the cost of anything is, because you never pay it, or if you try to pay it you end up amplifying the cost until there is no signal. Once people are promiscuously fetching from the cache, they don't even attempt to avoid duplicate lookups, so activities will end up fetching the same data 6 times. If you turn off the cache entirely for perf analysis then the cost skyrockets and the flame chart is still wrong.
You are back to dead reckoning and manually adding telemetry to potential hotspots to sort it out. This is very slow, and can be demoralizing. Quickly this becomes the fastest our app will be for a very long time.
All great points! For the type of work I'm thinking of (mostly offline data processing) I use caches in a much more limited way than you suggest. A short-lived local LRU cache on an expensive pure function over immutable data can significantly reduce our resource costs without adding significant complexity to the code. In some cases, DP would be more efficient but would require a mode of thinking and abstraction that's very different from most of our code.
What niche do you work in?
Glad to hear that the software engineer selection method is determined by this very rigorous process of character assessment. No wonder why big corporations produce such amazing teams.
I am not gonna die on a hill defending these types of interviews but they are infinitely better than the alternatives. Alternatives are often subjective criteria like "culture fit" which can lead to nepotism/other types of biases or hiring being done by sorting resumes based on the prestige level of the college of the applicant and just running down the list from the top (which happens in other professions like law).
Of course the interview process is (or at least is supposed to be) multifaceted, and passing these problems is just one facet of it. A great candidate can bomb one of these questions, but I wouldn't necessarily reject them just because of that (although that happens in practice in the industry), and I wouldn't necessarily okay someone just because they aced one of these questions.
I'd take complaining about these type of problems any day over complaining about only being able to get a job in certain companies by having connections or having gone to an Ivy League school.
Alternatives are often subjective criteria like "culture fit"
There's a vast territory of approaches that are neither leetcode hazing nor culture-fit tests. And which, while nuanced, are basically objective.
Work sample evaluations do quite well, for example. As do in-depth technical discussions about ... just about any subject the candidate claims to know about.
Neither of which have anything to do with "culture fit".
>As do in-depth technical discussions about ... just about any subject the candidate claims to know about.
Having attempted to do this after reading all the HN screeds, I found that this failed horribly. You'll find a large number of people who seemingly know the theory, but can't execute worth a damn. They can happily talk on and on about normalization, regularization, imbalanced datasets and so on. Then they fail fizzbuzz.
I then put the programming screen first.
You can't communicate experience like you can communicate theory; if you could, it would be called theory instead of practice. Theory is great since it is something we can communicate, but whatever instincts you developed working on a big codebase can't be communicated. Some people think you can (there are lots of talks about these things), but in practice you can't teach practice; you need to experience it yourself. And if you distil practice into droplets of wisdom pure enough that others can recognize them as the truth without being in your in-group, guess what: that is what we call theory!
> Work sample evaluations do quite well, for example. As do in depth technical discussions about ... just about any subject the candidate claims to know about.
How do you compare candidate A, with preferred subject 1, to candidate B, with subject 2? And how do you get the candidate to talk sincerely and not bullshit around the topic, especially when testing ML - more or less a modern alchemy with no definitive answers, just a bag of tricks?
I have had problems controlling the waste of time and directing the talk towards useful topics because the candidate wouldn't stop the bullshit talk. Maybe other fields leave less room for bullshit maneuvering.
I think it's better to go with a list of basic questions that stays the same between candidates. Only if they ace the basics do I test for depth. Depth can be tricky to evaluate, especially for people who are not very bad - maybe that's just in ML.
In small companies or startups I have come to believe the quality and objectivity of the interview is a fair reflection of the maturity of the thought leaders at the company. I then use that assessment as a primary indicator to drop out.
As someone who has current stable employment, I have the freedom and flexibility to fail at numerous interviews with only the minor frustration of time wasted. If I were out of a job and a bit more desperate, I would just lie through my teeth on all elements of a job interview except education and employment history, like a super brown-nose narcissist. Since most programming interviews are maximally subjective, intentionally injected bias goes both ways and is easily applied.
Unfortunately subjective criteria are not alternatives to objective criteria but are generally applied in conjunction with them. From my experience, in the past one year and a half I've been interviewing at several companies, and I've done over 20 different code challenges. Even though I passed 90% of those code challenges, only 1 company gave me an offer and that's the company I'm working for right now. Other companies just rejected my application because I wasn't a "cultural fit" or something.
I would also add heaps/priority queues to this list. They don't come up as often as HashTables/LinkedLists but come up often enough.
If you wanna be thorough (esp. if you are applying at companies known for harder interviews) I would add practicing backtracking problems, where you do a full exhaustive search of the problem space (often O(k^n) or O(n!) complexity). Yes, these are mostly just DFS + recursion, but they have a subtlety to them that requires practice for most people. They tend to be tricky to get right in an interview setting without practice: due to the really bad time complexity, problems of this type almost never come up in production, so they're not even on the radar of most engineers.
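Backtracking problems share a choose/explore/un-choose skeleton, and that un-choose step is the subtle part people fumble. A sketch using permutations as the illustrative example (O(n!) overall):

```python
def permutations(items):
    """Classic backtracking: choose, recurse, un-choose."""
    result, path = [], []
    used = [False] * len(items)

    def backtrack():
        if len(path) == len(items):
            result.append(path[:])   # record a complete permutation (copy!)
            return
        for i, item in enumerate(items):
            if used[i]:
                continue
            used[i] = True           # choose
            path.append(item)
            backtrack()              # explore
            path.pop()               # un-choose: restore state before next branch
            used[i] = False

    backtrack()
    return result

print(len(permutations([1, 2, 3])))  # 6 = 3!
```

Subsets, N-queens, and word-search problems all follow the same skeleton; only the choice set and the pruning condition change.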
There's a specific reason I didn't mention priority queues in the post. In most cases, anything you can do with a heap you can do with a balanced binary tree instead! A binary tree has O(log n) insertion and deletion, which is the same as a traditional heap. The only advantage a traditional heap has is that you can construct a heap in O(n) time, whereas building a binary tree takes O(n log n) time.
Of course there are even more niche data structures, like a Fibonacci heap, which has O(1) insertion, but you would have to get extremely unlucky to be asked about a Fibonacci heap in an interview.
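In Python, for instance, the standard-library heap shows the tradeoff mentioned above: construction is O(n), while push and pop are O(log n). A quick sketch with illustrative data:

```python
import heapq

data = [9, 4, 7, 1, 8, 2]
heapq.heapify(data)             # builds a min-heap in place, O(n)
heapq.heappush(data, 0)         # O(log n) insert
smallest = heapq.heappop(data)  # removes the min, O(log n)
print(smallest, data[0])        # 0 1  (the root is always the current minimum)
```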
One more thing: a heap can be easily packed contiguously in memory, while the tree nodes are typically dynamically allocated.
This helps with allocation pressure, which is often the "hidden" cost people don't pay enough attention to, especially in garbage-collected languages where the allocation might appear to be almost "free" (often a simple bump allocator), but the deallocation price is paid at later indeterminate time (when the GC runs).
But a heap gives you O(1) max/min and a binary tree gives you O(log n) max/min. Am I wrong?
In C++ (for example) getting the minimum and maximum is an O(1) operation [1][2]. I guess this is probably enabled through some extra bookkeeping and many other languages may do the same.
Wouldn't that mean you have to implement the binary tree as part of your solution?
Seems way easier/time efficient to just use the built-in heap/priority queue of the language standard lib
C++ has built-in binary trees with fast iteration. The C++ binary tree is my favourite standard-library data structure of any language I've used. It isn't the fastest out there, but there is so much you can do with it: you can use it as a heap, use it as a dictionary, use it to sort things, and you can insert items while iterating through it (if they land after your current position, you will encounter them in the iteration), etc. It does so many things right that others don't. I'm sure other data structures are slightly faster, but the C++ binary tree is so powerful and still fast enough for basically every problem.
Oh interesting, I'll have to try that, didn't know there was a built-in one
A priority queue is the easy answer to virtually any “top k” problem so is still worth mentioning.
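As a quick illustration of the "top k" point, Python's `heapq` solves it directly by maintaining a bounded heap while scanning (a sketch; variable names are mine):

```python
import heapq

nums = [5, 1, 9, 3, 7, 6, 2]
k = 3

# nlargest maintains a min-heap of size k while scanning the input,
# giving O(n log k) overall instead of O(n log n) for a full sort.
top_k = heapq.nlargest(k, nums)
print(top_k)  # → [9, 7, 6]
```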
That's reasonable, makes a lot of sense.
It is amazing how far you can go with:
1. Look it up in the hash table,
2. Look it up in the literature
Before people start complaining about leetcode and how it doesn't exemplify skills: it's a proxy for a combination of intelligence and how hard you are willing to study. The computer science knowledge shown is just a bonus.
EDIT:
One last thing to throw in: it's pretty clear that there's a correlation between the top software companies and how hard their leetcode interviews are. You can claim all you want that it doesn't work, but Facebook and Google have very hard leetcode interviews and are known for the best software.
They are known for the highest-paying salaries and for providing stability relative to a high-paying start-up. They aren't known for the "best" software (how do you even quantify that?), and Google specifically has a reputation for not maintaining software and letting products die. Generally not a hallmark of good software.
Also, the computer science asked about in these questions is a very small subset of computer science. I've never been asked an image-processing question, a non-trivial concurrency/parallelism question, or a numerical-optimization question (all things I've actually used in my job; I've never had to do strange linked-list manipulations, unfortunately). Those are all CS or CS-adjacent but never get asked, in my experience. I've also never been asked low-level networking questions.
Instead it's just tricky graph questions and list/tree manipulation questions (which aren't that hard, but are incredibly boring). CS is such a huge field; it's truly baffling that the technical interview questions at Google and Facebook are so myopic.
You have to be a genius to correctly solve a "hard" unseen leetcode problem in 15 minutes. Facebook is notorious for expecting candidates to regurgitate solutions to problems in 15 minutes. Intelligence plays a lesser role than exhaustive, painful practice, which involves solving the same problems multiple times. It's a full-fledged examination that expects you to excel while being constantly watched and judged.
>It's a full-fledged examination that expects you to excel while being constantly watched and judged.
and interrupted, frequently, because that's also perfectly normal.
It's comical how this industry now thinks these arcane and often quite difficult DS&A interview processes are reasonable, and how normalized it has become for people to study for weeks before applying to a new position, sacrificing evenings and weekends just for job mobility. These processes are not even proxies for intelligence anymore, except maybe for the very few people so wired that they could miraculously jump through a handful of these with optimized solutions in 15 minutes without ever having seen the problem before. I've worked with very intelligent people who qualify as geniuses, and they couldn't solve these problems under these conditions, especially not multiple of them without having at least seen the problem before.
Made an edit to my post:
One last thing to throw in: it's pretty clear that there's a correlation between the top software companies and how hard their leetcode interviews are. You can claim all you want that it doesn't work, but Facebook and Google have very hard leetcode interviews and are known for the best software.
How do you know that Facebook and Google have best software because of the way they do their interviews?
At their size (number of employees, number of people who want to work there), they could probably pick software engineers at random (or by any other selection method) and still get some of the best in the market, who would create amazing software. I'm not saying this is what's happening; I'm just providing an alternative explanation to underline that the conclusion "hard interviews imply best software" needs more evidence to be true.
I think the way interviews are done is a function of:
- company culture (first and foremost)
- size (how big the company is and how many people they are hiring) and churn
- their beliefs about building software (some believe math is required, some think engineering is required, some believe no prerequisite is required)
- employer attraction: how much/how many people want to work there
- availability/support of employed engineers to be part of interview process
- how it started (big companies usually inherit their conception of interviews from the original founders, as they were the ones hiring the Cs)
- the country culture where the C level and top management is located
The problem is that soon enough, all these mediocre and wannabe companies started copying the FAANG process, thinking: "If we interview like FAANG, we must be like FAANG. Or at least candidates will think we're like FAANG." And: "If our process rejects 99 percent of all applications, then that means we're hiring the top 1 percent."
Which is basically how the LC craze got started.
Google and Facebook are known for the best software? They are known for "some" exceptional software; but most of Google stack is crap. Facebook we can hardly know but they rely heavily on buying startups.
exactly - the proof of the pudding is in the eating.
Yeah, but it's a "how much do you want this job?" test. Which is not necessarily crazy: if you want to retain people in the firm, it makes some sense to have them feel they worked hard for it, plus that they might not get through the eye of the needle next time.
Or even a "how hard do you throw yourself into overcoming hurdles" test, which would be more useful. Because the attraction of a job inversely correlates somewhat with your value to the company, since weaker applicants would be outperforming their ability in terms of pay.
> but Facebook and Google have very hard leetcode interviews and are known for the best software
...are they? Whenever I go on Facebook, I always have a slow, unresponsive, buggy experience just using the website, and that's on Firefox with 32 GB of RAM on a 10th-gen Core i7.
The former only in case of the latter.
I.e., if someone isn't willing to grind leetcode, their intelligence is irrelevant.
If someone is willing to grind leetcode, then yes, their intelligence is a secondary variable.
Regardless, such styled interviews are intended to minimize the number of false positives (since the companies have so many applicants it's safer to err on the side of "no"; there are more in the pipeline), which means they will also generate a tremendously high number of false negatives.
I have also worked with companies that are pretty huge, cash positive, exploding with growth, and with very complex software as their product. They made a rule of not hiring using DS&A after accidentally coming across a tier-3 college student with no DS&A knowledge who still did pretty well (and he didn't jump ship either), and so changed their strategy to: only test candidates for what you need from them. I will give it to Google and FB that their technical scale is much, much larger than that company's, so they might demand engineers who can understand it.
> Facebook and Google have very hard leetcode interviews and are known for the best software
This has got to be some kind of joke. More like the worst software. Sure, Google is super rich; they have unlimited engineers who produce a lot of code. But almost all the Google products I've used were crap: buggy bloatware.
My Google Voice app lags for seconds... _seconds_... while typing out a text. It's insane.
It's a great way for second-raters to grind their stats instead of actually building something.
>second-raters to grind their stats
so sayeth the people on the outside of companies building the most complex software in the world.
Most of the engineers at those companies (and really all companies) don't work on those projects. Some people work on truly complex projects, but those people are a tiny fraction of the entire workforce. It wouldn't even make sense for a company to allocate people that way.
Also, I'd contest the statement that Google or Facebook works on the most complex software. They don't work in fintech, medical, hard real time that I know of (waymo does, but they've been spun out), and many many more fields of SW, HW and CS.
They work on hard stuff, but don't discount the complexity that other companies deal with.
i mean do you speak from experience or is this just more conjecture? what people on the outside fail to realize is that while individual projects might not seem complex (maybe frontend engineering on FB is less complex than pytorch) it's the scale of the systems you have to orchestrate that is incredibly complex.
That wasn't really a dig, but I should have said jr. devs instead of second-raters.
couch it however you want but you're deluded if you think your projects stack up against even a BS grad at FAANG if you're discounting their abilities just because they got in through grinding LC.
>You can claim all you want that it doesn't work, but Facebook and Google have very hard leetcode interviews and are known for the best software
Really? Google APIs I've used have been 70% crud, 30% great. Facebook has been consistently good, eg PyTorch and React.
But really, mostly just of willingness to study.
And grind, grind, grind, grind, grind.
This might get you OK marks on the "problem solving" portion of a coding interview. But a programming interview will also look for signals of other skills -- did you communicate your approach? Really good candidates will verbally communicate their solution before writing code. The best often write out how a particular data structure changes with some iteration.
If you're a senior engineer, a coding interview should include some demonstration of correctness. Maybe you write one primitive test harness for a positive test case and add comments or empty methods enumerating a few other test cases.
Interviewing has become a theater of the absurd.
You get examined on things that won't be useful in 99% of your work.
People put in hours to prepare and game it.
Overall countless hours go to waste to just play the role.
Most of the time it doesn't even show whether someone can think (what you might see as thinking might be a well-rehearsed "act dumbfounded for a while").
The first example test given (rate limiter) gives a bad answer. I wouldn't use this method.
You think "I'd use a well tested 3rd party library for this problem" would eliminate you from the running on an interview with this question?
This, and many other of these algorithm questions, are asking people to come up with solutions to problems solved umpteen times with well designed and tested libraries available for all of them.
A sign of competence as a software engineer would be to use them.
It's also trivially solvable in amortized O(1) time per call using a double-ended queue, but that would deviate from the "algorithm" proposed by the article.
Instead of a linked list I would have:
* allocated an array of size 10
* an int i that increments on every call, mod 10
Each successful call puts the current timestamp in array[i] and increments i. On each attempted call, you check whether array[i] is more than 1 minute old. This way you only ever check one array element, instead of potentially deleting multiple list items and counting.
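That fixed-array idea might look like this in Python (a minimal sketch; class and parameter names are mine, not from the article):

```python
import time

class RingBufferRateLimiter:
    """Allow at most `limit` calls in any rolling `window`-second span."""

    def __init__(self, limit=10, window=60.0):
        self.window = window
        # Timestamps of the last `limit` successful calls; -inf means "never".
        self.slots = [float("-inf")] * limit
        self.i = 0  # index of the oldest recorded timestamp

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # slots[i] holds the timestamp from `limit` calls ago; if it has
        # aged out of the window, this call is within the limit.
        if now - self.slots[self.i] >= self.window:
            self.slots[self.i] = now
            self.i = (self.i + 1) % len(self.slots)
            return True
        return False
```

Only one array element is ever inspected per call, and no memory is allocated after construction.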
Elaborate?
The solution described in the article is likely to be extremely wasteful in both time and memory, by allocating a queue entry for each call, and then O(n) scanning and dropping stale entries on each successive call.
Tabulating call count by division(s) of time would be less obviously problematic.
> The solution described in the article is likely to be extremely wasteful in both time and memory, by allocating a queue entry for each call, and then O(n) scanning and dropping stale entries on each successive call.
Even if n elements are scanned in the worst case, the amortized time per scan is O(1). This is because we perform a scan on each insertion, and a scan of length n deletes O(n) elements. That means the total number of elements scanned is proportional to the number of elements inserted.
> Tabulating call count by division(s) of time would be less obviously problematic.
Tabulating call count by division(s) of time doesn't quite work. If 9 calls are made in the last second of one division of time and 9 more calls are made in the first second of the next division, you hit 18 calls in a two-second interval, which is over the limit of 10 calls for any one-minute interval.
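The queue-of-timestamps approach with the amortized O(1) stale-drop scan can be sketched in Python with a deque (class and parameter names are mine, not from the article):

```python
import time
from collections import deque

class SlidingWindowRateLimiter:
    """Allow at most `limit` calls in any rolling `window`-second span."""

    def __init__(self, limit=10, window=60.0):
        self.limit = limit
        self.window = window
        self.calls = deque()  # timestamps inside the window, oldest first

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that fell out of the window. Each timestamp is
        # appended once and popped once, so this scan is amortized O(1).
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False
```

Unlike fixed bins, this enforces the limit over every rolling window, avoiding the boundary problem described above.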
> hence the proposed solution has a worst case complexity of O(N)?
Worst case yes, but the question is concerned with the average case.
How bad is the code I want to write, even though I know it is wrong:
initialize : double credit = N
At every request :
penalty = (elapsedTime / window_size) - (1/N)
gain = N* (elapsedTime / window_size)
credit = min( credit + gain - (penalty<0) , N)
if( credit < 0 ) throw exception
I was trying to express the sliding window constraint as a violation of the pigeon-hole principle : to violate the constraint you have to have at least N+1 intervals that are shorter than window_size/N over the window_size.
But I'm not really sure of the value it adds over simply taking a cost of 1, as you suggest. When the times between requests are spread more than window_size/N apart, they don't cost credit. This means you have a smaller number of "hot" clients (clients who are not at full credit) to monitor.
The second idea that often occurs in problems with sliding windows is the telescoping sum which is hiding implicitly in the sum of elapsed Times.
Probably safe to say you aren't having to rate limit over a thousand endpoints from a single spot, so the linear scan sounds like a non-issue. That said, I'd hesitate to get too fancy in client rate limiting. Sounds like an area that will hide bugs that are stupidly hard to deal with.
Depends on the requirements. If it's enough that the client doesn't make >N requests within the same "calendar second", then counting in bins is enough, no need to store timestamps. But then you could have a client send N requests in the second half of second k and N requests in the first half of second k+1, so you'd have a 1-second period (spanning two "calendar seconds") with 2N requests allowed. Since such rate limits are often not extremely exact, just an order of magnitude, this quantization/rounding effect may be negligible and the speed may be worth it. But maybe not always. The original problem statement did not say that it's fine. It's probably a good sign if an interviewee asks whether this would be okay too.
I'd assume a linked list was used to be able to drop a lot in a single shot. In particular, it seems safe to assume the queue is sorted by time, such that as soon as you see a time that is outside the window, you can drop all of the rest in one shot.
The page wasn't loading for me, so I can't see the proposal. Just don't let a linear scan scare you, for small n.
One thing that I have noticed that FAANG doesn’t do well is: help people with sleep issues.
It’s not their job [0], but I do think there are some talented insomniacs that are great employees (given that they can mostly work on their own schedule, except for meetings). I remember that I interviewed for Facebook and I was so sleep deprived, I could barely write a for-loop. Ironically, I passed that interview but I couldn’t pass others that required some more creative thinking or more memorization.
[0] While it is not their job, FAANG companies do mention that they are accommodating candidates with mental health issues, and I’d argue that insomnia (diagnosed or undiagnosed) should be one of them.
The anagram solution leaves the actual hard part unfinished^. Also, to solve in O(n) you need a bit array (or a database like Postgres that can do an XAND on a string).
> Sort the characters of the words alphabetically. Since anagrams are all made up of the same letters. This will give us the same string for any pair of words that are anagrams of each.
Correct.
> Produce a dictionary of the number of times each letter occurs in each word.
Well, yeah...algorithms aren't the hard part for this kind of question. "Produce a dictionary of the number of times each letter occurs in each word." is heading the wrong direction. You want an encoding that does not care about the number of times each letter occurs in each word.
Take a string of zeroes, which is (maxWordlength*26) long. Each segment of 26 represents a compositional letter from the word. So 100000 00000 00000 00000 00000 is 'a'
Now you can create your dictionary from the sorted word indexes. pin becomes inp becomes 000000 00100 00000 00000 00000 000000 00000 00100 00000 00000 000000 00000 00001 00000 00000 same as nip (becomes inp, etc)
You can add 26 zero sets, to pad out to the length of longest word, if words are variable length.
^We're going to assume normalized words, which are all lower case and no punctuation, comprised of English letters from the 26 character alphabet. Getting the set of all of the possible words of any given length, is also quite an exercise.
> "Produce a dictionary of the number of times each letter occurs in each word." is heading the wrong direction. You want an encoding that does not care about the number of times each letter occurs in each word.
How so? "aab" is an anagram of "baa", but they are not anagrams of "ab".
100000 00000 00000 00000 00000 100000 00000 00000 00000 00000 010000 00000 00000 00000 00000
aab and baa
100000 00000 00000 00000 00000 010000 00000 00000 00000 00000 000000 00000 00000 00000 00000
ab
You aren't "counting" the number of characters. You're making an encoding that includes that information, without the aggregation. The aggregation is irrelevant as is the order of duplicate characters.
>which is (maxWordlength*26) long. Each segment of 26 represents a
>^We're going to assume normalized words, which are all lower case and no punctuation, comprised of English letters from the 26 character alphabet. Getting the set of all of the possible words of any given length, is also quite an exercise.
I hate solutions like this because they'd never be even close to viable in the real world.
You could use a lookup table to turn the nth letter of the alphabet into the nth prime number, then multiply those primes together. This "hash" would be unique up to reordering of the primes/letters.
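A sketch of that idea in Python, where arbitrary-precision integers sidestep the overflow concern raised elsewhere in the thread (the prime table and function name are mine):

```python
# Map the nth letter to the nth prime; the product is invariant under
# reordering, so two words share a key iff they are anagrams.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
          43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]

def anagram_key(word):
    product = 1
    for ch in word.lower():
        product *= PRIMES[ord(ch) - ord("a")]
    return product

assert anagram_key("listen") == anagram_key("silent")
assert anagram_key("aab") != anagram_key("ab")  # multiplicity matters
```

Uniqueness follows from unique prime factorization; in a fixed-width integer language you would need to worry about overflow for long words.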
Sorry, I don't understand your comment. Why would the proposed "naive" solution of generating a dictionary not work in O(n) time? What is gained by this bit encoding?
And as far as I know, XAND does not exist?
What dictionary? How do you generate it (handwaving notwithstanding, as mentioned)? It all comes back down to encoding, rather than aggregation of letter count.^
> XAND does not exist?
XAND being a convenient way to mean binary AND. I thought it was easy to infer. Taking the binary encodings, you can do a binary AND to determine if a set is an anagram subset (or full anagram) with more flexibility.
^Create a hashmap (any searchable K/V will do, redis, postgres, etc). For each word, check to see if encoding exists in set. If not, add key of encoding and value of word. If it exists, you have a match on the search word and the found key's value. Aggregate on a hashmap there for your final output, to avoid adding the same words multiple times.
> What dictionary? How do you generate it (handwaving notwithstanding, as mentioned)? It all comes back down to encoding, rather than aggregation of letter count.^
OK, first, I'm not sure your original post presents the right reading of the blog post. Sorting and using a dictionary are given in the blog post as two separate solutions, not two components of one solution. (Sorry if I'm mistaken – but it seems to read this way to me?)
Regarding your concerns about dictionary generation: I might be missing something, but this doesn't seem so hard? The first thought that comes to mind is to associate to each word a 26-tuple of integers that counts the number of appearances of each letter of the alphabet. This can be done in O(N) time, since it's an O(1) operation for each word (length of words is assumed to be bounded above by some constant). Then
1) Create a (multi)-dictionary with these tuples as keys. Insert is O(1) in the average case, you're doing it N times, so this step is O(N).
2) Go through the dictionary by key and delete all singletons. Again O(N).
3) Print the dictionary by key to get the groups of anagrams.
This is O(N) on average. It's not immediately clear to me whether one can do O(N) in the worst case – are you claiming that your way does that?
[Of course, this is not so far off from what you're doing. But it illustrates the claim that bit arrays are not necessary.]
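The count-tuple dictionary described above might be sketched like this (function name is mine; assumes normalized lowercase a-z words, as elsewhere in the thread):

```python
from collections import defaultdict

def group_anagrams(words):
    """Group words that are anagrams of each other; O(N) on average
    when word length is bounded by a constant."""
    groups = defaultdict(list)
    for word in words:
        counts = [0] * 26
        for ch in word:
            counts[ord(ch) - ord("a")] += 1
        groups[tuple(counts)].append(word)  # tuples are hashable keys
    # drop singletons, keeping only genuine anagram groups
    return [g for g in groups.values() if len(g) > 1]

print(group_anagrams(["pin", "nip", "cat", "act", "dog"]))
# → [['pin', 'nip'], ['cat', 'act']]
```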
What is XAND?
exclusive and? Just guessing, I use some of these chips in my synthesizer
After practicing a bunch of Leetcode questions (around 300), watching videos from hiring managers who claimed to work at FAANG, and even FAANG's own example interviews, I am actually more confused about whether Leetcode is the thing they are looking for or not. Here is what I observed:
- Some Leetcode questions require tricks or complex algorithms that you absolutely won't find within 45 minutes. If you do, either you are an ultra genius able to find and modify a variation of Kadane's algorithm in 45 minutes (even though it is a simple one), or you have seen the question before and somewhat memorized it. Obviously the answer is the latter. In that case, what are FAANG companies really looking for? Proof that a candidate has finished hundreds of questions?
- I've seen example interviews on YouTube, and they all present easy-to-medium Leetcode questions. The onsite interviews at FAANG are mostly hard questions, so there is a disconnect between how FAANG actually interviews and how these sample interviews are conducted. And again, optimally solving a hard question you've never seen before requires way more than 45 minutes, unless you've seen the question before, which goes back to point 1.
- Hiring managers and recruiters give advice like "talk a lot, talk to the interviewer, an interview is a two-way street" and other feel-good self-help stuff that has nothing to do with the actual interview and doesn't help at all. They claim that what companies are looking for is your thought process. Nope, lies. What companies are looking for is whether you can solve the problem optimally in 45 minutes. If you have a great thought process and talk throughout the interview but can't even finish the question with brute force, you fail. If you finish the question with brute force but not optimally, you also fail.
- Experienced senior engineers who are definitely rusty on Leetcode, did only a basic DS&A brush-up, and definitely couldn't pass a Leetcode hard got offers with TC way higher than these fresh grads.
So I'm quite confused about what these companies are really looking for. I think I've concluded that FAANG companies are indeed practicing what they say: eliminate false positives and accept a really high rate of false negatives. The whole Leetcode interview fiasco is therefore just a numbers game.
A few things:
- If you are a senior engineer with a track record at other companies, then you don't need Leetcode to prove your worth; statistically, you are already worthy and can perform on the job.
- If you are not a senior engineer from another FAANG company, then you just have to play the numbers game, i.e. keep interviewing (and Leetcoding) until the combination of your studies, the mood of the interviewer, and the questions that come up that day somehow rolls in your favor.
For the second problem, I had an idea for a different hash function. Basically map each letter to a prime number (e.g. c = 5, d = 7) and then multiply the values in a word together. I guess depending on the length of the words it might overflow. But I thought it was interesting.
I gave this very solution in Google interview many years ago. It does assume that the input characters are of a limited set, e.g. English alphabet. The interviewer said they hadn't heard that answer before. It wasn't what they were expecting and took them a while to be satisfied it would work. I was offered the job but didn't end up taking it.
Good stuff. A couple of comments to the post:
Rate limiter: I would probably use a hash table of this structure -- called[yyyy-mm-dd][hh-mm] and then increment the hashtable for that minute, for example, called[2022-01-02][22-01]++ and drop any entries for the last day at the end of the day.
Post doesn't mention this, but one other common pattern I've seen requires a tree data structure, and those interview questions can be solved with recursion. Basically once you have a tree data structure, you write a f(node) that will simply call f(left) & f(right) recursively -- or for-each(child in children) { f(child) } for non-binary trees.
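That recursive pattern, sketched for a non-binary tree (class and function names are mine):

```python
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def tree_sum(node):
    """The f(node) pattern: handle this node, then recurse on each child."""
    if node is None:
        return 0
    return node.value + sum(tree_sum(child) for child in node.children)

tree = Node(1, [Node(2), Node(3, [Node(4)])])
print(tree_sum(tree))  # → 10
```

The same skeleton works for depth, search, counting, etc.; only the per-node work and the way child results are combined change.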
>Rate limiter: I would probably use a hash table of this structure -- called[yyyy-mm-dd][hh-mm] and then increment the hashtable for that minute, for example, called[2022-01-02][22-01]++ and drop any entries for the last day at the end of the day.
Why on Earth? Given that the timestamp cannot randomly jump backwards, you can achieve the same result by just storing the timestamp of the first call after a reset and the number of calls sharing the same date/hour/minute, resetting the count to 0 when the minute advances.
That said, it won't be entirely what was asked, since it won't have the rolling behavior.
> Rate limiter: I would probably use a hash table of this structure -- called[yyyy-mm-dd][hh-mm] and then increment the hashtable for that minute, for example, called[2022-01-02][22-01]++ and drop any entries for the last day at the end of the day.
As Bradley said, this implementation can allow more than 10 calls within a one-minute window. If it's called 9 times in the last second of one minute and 9 more times at the start of the next, that's 18 calls within a two-second interval.
I agree with your answer on the rate limiter, but I think it depends on whether the spec is for rolling or fixed one-minute windows. The question is ambiguous, but the sample answer implies that we want to track rolling windows.
If that's the case, an improvement on the answer is to use a ring buffer instead of a linked list, which allocates a fixed set of memory up front (slower to initialize) but then is allocation-free on each invocation.
Here's a similar resource for anyone interested in this topic: https://transitivebullsh.it/gaming-cs-interviews
Maybe the title should be "An Algorithm for Passing Algorithm Programming Interviews". This wouldn't help you pass our Programming Interview in the slightest.
Or don't agree to interviews that include algo challenges, leetcode/hackerrank nonsense, array shuffling shenanigans. Choose companies that put thought into their evaluations.
Just my $0.02: I'd much rather spend a few weeks refreshing basic data structures and algorithms than doing the "non-algo" challenges I've gotten because they feel way less objective than getting questions where there's a correct answer.
Most of the time the alternative to non-algorithm questions is some form of take home assignment that's usually multiple hours long and that you get no feedback on or chance to correct.
For example, I interviewed with a company where they asked me to design a program to parse some CSV data and I was told in the next round I would pair with an engineer to go over an expansion of the code. The parsing was trivial in Python and I submitted my solution with full unit tests and what seemed sensible to me, but it was as useless of an exercise as asking me algorithm questions because on a job I'm almost always going to adjust my coding style and paradigm to the context and needs of the company.
I commented my code as such to explain that if there were different constraints on the data size or API I would make different design choices than the one I submitted, but it didn't matter - I submitted and received a rejection less than 24 hours later and never got to the next round where I would actually walk through the code with an engineer. It was a complete waste of multiple hours of my time trying to produce thoughtfully designed code - I'd much rather bang out a breadth first search in 45 minutes.
On one hand I like take-home exercises because they are closer to the real-world job and I can guarantee the quality of my deliveries. Up to now I can say I've passed every single take-home exercise that was thrown at me. On the other hand, the time I spent on those is appalling. If I sum it up, I must have spent over a month working on take-home exercises. And even though I passed all of them, in the end no company gave me an offer, because I wasn't a cultural fit or something. One company even ghosted me after having me do a 1-week take-home exercise where I had to build an entire app, front end and back end, that consumed an open-data API with 1 million rows.
I still prefer take-home exercises because they are more realistic and make more sense than timed algorithmic exercises. But I wish companies would do their cultural-fit screening _before_ having candidates spend their time on code challenges.
The interview is about interpersonal skills as well, even in the case of "technical" tests. I've given specially crafted answers based on what I could tell about the interviewer and what he wanted to hear. Your solution might have looked simplistic/smug and that's why they rejected you.
Was that a good reverse filter for you? For me it wouldn't have been, and I'd have given them the sensible solution in the scope of the interview.
At least you got a rejection! I’ve done take home assignments that resulted in nothing but :crickets: in response.
I had the same thoughts, but changed them over time. If not leetcode, it quickly comes to very specific questions about the interviewer's favourite thing and its quirks. A complete hit or miss.
Backup link https://archive.is/fg8hI
Hash Tables
Linked Lists
Binary Search Trees
I would add tries to this list - I was asked trie-based questions at 4 of the companies I interviewed with over the last couple of years, including a FAANG.
> You probably won’t be expected to implement a binary search or sorting algorithm, but you should be aware that they exist.
Lol. Pixar's coding interviews are well known for doing exactly that: having you implement a binary search tree from scratch.
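For reference, the kind of from-scratch BST such an interview might ask for can be sketched in a few lines. This is a minimal, hedged example (Python, unbalanced, integer keys assumed), not anyone's actual interview answer:

```python
# Minimal unbalanced BST: insert and lookup only.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

class BST:
    def __init__(self):
        self.root = None

    def insert(self, key):
        if self.root is None:
            self.root = Node(key)
            return
        node = self.root
        while True:
            if key < node.key:
                if node.left is None:
                    node.left = Node(key)
                    return
                node = node.left
            elif key > node.key:
                if node.right is None:
                    node.right = Node(key)
                    return
                node = node.right
            else:
                return  # duplicate key, ignore

    def contains(self, key):
        node = self.root
        while node is not None:
            if key == node.key:
                return True
            node = node.left if key < node.key else node.right
        return False
```

A real interview would likely follow up with deletion or balancing, which is where it stops being a five-minute exercise.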
Many interviewers ask candidates to explain their thinking. I don't think this dubious process of elimination would pass that test.
The "algorithm" I wrote in this post is a formalized version of some problem solving techniques I picked up from the classic book "How to Solve It"[0]. In particular "working backwards". You start from the goal you want and see what you know that is applicable to getting you to goal.
Right? When I’m interviewing I care most about the thought process and the ability to communicate in depth about something technical, the actual solution isn’t really that important at all.
This is the thought process. I use something very similar and have done well in interviews.
I found I did better in interviews when I showed my thinking less, or adapted it to the interviewer. Stating the things I'm not doing is useful, but too risky. Just hearing the words "linked list" in a problem that can't be solved with linked lists (even in the sentence "linked list doesn't seem to be right") startles some interviewers. A high enough percentage I don't want to roll that die six times during a round.
The silliest example is I like to work on problems by writing code. Yes, even if I don't understand the problem perfectly. Seeing some code helps clarify my thinking and I throw most of it away.
Freaks. People. Out.
Writing down some wrong line of code may as well be blasphemy. Strangely, not the case with comments. So my technique in interviews was to write comments that were as equivalent as possible to code and then convert it to regular code when I felt things were close enough. The biggest challenge in Python is not making it so similar that it becomes obvious what you're doing.
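To illustrate the technique with a made-up example (not the parent's actual code): first lay down the solution as comments, then fill in real code under each one.

```python
# Step 1 is just the comments; step 2 fills in code beneath each one.
def first_duplicate(nums):
    # keep a set of values seen so far
    seen = set()
    # walk the list; the first value already in the set is the answer
    for x in nums:
        if x in seen:
            return x
        seen.add(x)
    # no duplicate at all
    return None
```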
lol yeah having been through these loops and training to do these loops it really seems like most people say they want to see “thought process”, but what they’re really looking for are the usual cues you see in all the prep info online.
E.g. ask some clarifying questions before coding, write or suggest a brute force, and then write an optimized algo.
Without clarifying questions, often you assume something about the problem that's not actually the case. And a brute force solution is often very useful to learn more about the problem domain. Personally I really like doing a bad solution before a better one. Especially in the setting where it's not completely whiteboard, so I have the ability to run and test my code, then having a brute force solution around to debug the problem is useful.
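As a concrete (hypothetical) example of that brute-force-then-optimize flow, take the classic "two sum" problem:

```python
# Brute force first: check every pair. O(n^2), but trivially correct,
# and handy to keep around for differential testing of the fast version.
def two_sum_brute(nums, target):
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

# Then the optimized version: one pass with a hash map of seen values. O(n).
def two_sum(nums, target):
    seen = {}  # value -> index
    for j, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], j)
        seen[x] = j
    return None
```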
I find writing and even sometimes describing the brute force to be hazardous. Seems better to wave in its general direction.
Narrowing the solution space makes sense. "I don't see how we can apply a DFS to this problem, so let me see if I can use a hash".
But the post does go through the thinking behind the solutions; why would this method, since it does lead to correct answers, disqualify someone?
It’s almost as if you’re learning what is expected of a suitable candidate.
Just be likeable
That website's choice of font colour makes for frustratingly low contrast and therefore is a recipe for eye strain.
I didn't waste time reading it.
What kind of hash table is O(1)? Or did I forget everything from university already?
You cannot have a general O(1) hash
Well, maybe in the best case/average case, when you design the hash space big enough. Ugh, I really did forget everything.
Not sure I understand your concern. Are you asking about worst case? All hash tables, I believe, are O(n) worst case because of the possibility of collisions (or the need to resize in the case of an insert). Most people talk about amortized performance, though (i.e. when you are successful at reducing the likelihood of the worst case to a rare event). Amortized time is obviously dangerous if you're in a real-time system (e.g. games), where it'll manifest as stuttering or latency spikes, or potentially even worse if you have to respond to hardware events in a timely manner for proper operation.
It's possible to gradually resize a hash table to prevent spikes.
For collisions, if you use a good hash then you can adjust your load factor and chaining strategy to make sure it's never a problem. Where 'never' is 'less likely than the computer spontaneously self-destructing'.
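One way gradual resizing is commonly described (Redis's dicts do something along these lines; this sketch and all its names are invented for illustration, not any particular implementation): keep the old table alive and migrate a bounded number of buckets per operation, so no single insert ever pays the full O(n) resize cost.

```python
class IncrementalDict:
    MIGRATE_PER_OP = 2  # buckets migrated per operation (tunable assumption)

    def __init__(self):
        self.old = None                    # table being drained, or None
        self.new = [[] for _ in range(8)]  # separate chaining
        self.cursor = 0                    # next old bucket to migrate
        self.count = 0

    def _step(self):
        # Do a small, bounded amount of migration work.
        moved = 0
        while self.old is not None and moved < self.MIGRATE_PER_OP:
            if self.cursor == len(self.old):
                self.old = None            # fully drained
                break
            for k, v in self.old[self.cursor]:
                self.new[hash(k) % len(self.new)].append((k, v))
            self.old[self.cursor] = []
            self.cursor += 1
            moved += 1

    def put(self, key, value):
        self._step()
        if self.old is None and self.count >= 3 * len(self.new):
            # Too full: start a new migration instead of rehashing all at once.
            self.old, self.new = self.new, [[] for _ in range(4 * len(self.new))]
            self.cursor = 0
        for table in (self.old, self.new):
            if table is None:
                continue
            bucket = table[hash(key) % len(table)]
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)  # update in place
                    return
        self.new[hash(key) % len(self.new)].append((key, value))
        self.count += 1

    def get(self, key):
        self._step()
        for table in (self.old, self.new):
            if table is None:
                continue
            for k, v in table[hash(key) % len(table)]:
                if k == key:
                    return v
        return None
```

The worst case per operation is now bounded by the migration budget rather than the table size, which is exactly the property real-time code wants.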
> It's possible to gradually resize a hash table to prevent spikes.
Can you provide details? The only trick I know of is the same one as with arrays, where you overallocate. That still doesn't get rid of spikes, and the C++ STL and Wikipedia both list the worst case time as O(size of hash table). It's also not enough to just have capacity - often hash tables will abort probing for a slot after going far enough and resize & rehash the table instead.
> For collisions, if you use a good hash then you can adjust your load factor and chaining strategy to make sure it's never a problem. Where 'never' is 'less likely than the computer spontaneously self-destructing'.
Yes, but the vast majority of implementations don't do a good job here. Howard's "Types Don't Know #"[1] paper is extremely well thought out and I have yet to see this adopted (except for Rust - Rust has a good track record of adopting good ideas like this and having the benefit of coming late to the game). I think it's safe to say that most hash tables don't have good worst case times (which when combined with resizing behavior of insertions, means you have super-linear worst case times on insertion).
[1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n398...
That's a really neat idea. I don't think I've seen any containers that implement this idea. Wonder why.
The trick with O and hash tables is to remember that they're amortized O(1). Imagine a bag with the cost of resizing sitting inside it. Then, spread the cost across all inserts. There are always more inserts than the cost of resizing, so you end up w/ amortized O(1).
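The bag-of-coins argument is easy to sanity-check numerically: with capacity doubling, the total number of element copies over n inserts stays below 2n, which is the amortized O(1) claim in numbers.

```python
# Simulate appends into a capacity-doubling table and count element copies.
def total_copy_cost(n):
    cap, size, copies = 1, 0, 0
    for _ in range(n):
        if size == cap:        # full: double capacity, copy everything over
            copies += size
            cap *= 2
        size += 1
    return copies
```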
No, if you really want to game it you sign up for membership on Chinese forums where people post the questions word for word minutes after completing the interview. That or work exclusively with private recruiters that tell you the questions verbatim because they have a vested interest in you passing.
Interview questions don't rotate that frequently, especially for smaller companies or more specialized roles, and a $60 membership for a month will buy you internal referrals and potentially land you hundreds of thousands of dollars of value in a new position.
As an interviewer, this is so incredibly frustrating - we don't change our questions often and because of that the questions and answers are all over these forums. With that said, it is incredibly easy to spot someone cheating - they often write the most perfect optimal solution from start to finish, helper functions first, often with the same function names as the forums themselves. The trick I've learned is to ask "why" - "why did you decide to use a linked list for this problem?" - the real cheaters often freeze up and can't give an explanation at all. It's rare, but still too common.
If you've seen the question/answer before just say so! I will totally appreciate the honesty and it goes a long way.
> If you've seen the question/answer before just say so! I will totally appreciate the honesty and it goes a long way.
Why? You're testing their ability to produce the right answer to a given problem - not their problem solving ability. To that end it shouldn't matter if they've seen the problem or not.
I always find it hilarious when recruiters say that "getting the optimal solution isn't everything." I've failed numerous interview rounds where due to time constraints or implementation details I'm not able to completely code the optimal solution, but I am able to talk/walk through each step of it in pseudocode with the interviewer. By your own criteria, being able to clearly explain the solution and demonstrate an understanding of the different tradeoffs should count for much more than just being able to copy/paste the solution from memory, but I've never advanced in any round that finished without a "working" piece of code.
Honestly, the one thing I appreciated about FB/Meta's recruiters is that they were always honest about the process and what was expected - 2-3 Leetcode mediums/hards in 45 minutes and they only care about optimal solutions. I much prefer that to disingenuous sentiments of "getting the right answer is important, but we also want to see your thought process and how you might work with another engineer on the team."
> "getting the right answer is important, but we also want to see your thought process and how you might work with another engineer on the team."
That's how it works in all the companies I've hired in.
It doesn't matter if you don't get to the end of the problem, I just need to see that you can think and that you know how to code. Do you think your daily job will require more of you than that?
And believe me, this is enough to filter out plenty of bad apples.
Poor performance in my experience was never about not being able to solve a technical problem, it was always personal issues / not having motivation / hating the environment.
> the one thing I appreciated about FB/Meta's recruiters is that they were always honest about the process and what was expected - 2-3 Leetcode mediums/hards in 45 minutes and they only care about optimal solutions
A counter data point: I recently passed Google's interview a few months ago. In one round, I was asked to solve a certain kind of bipartite graph matching problem; not being a fresh new grad with ACM medals, I obviously couldn't solve it in polynomial time and just implemented a brute-force solution in the end. In another round, I came up with a solution the interviewer said they've never seen before, although it could be slightly slower than the "optimal" solution he had in mind depending on the input size.
As an interviewer in my last company, I always made sure the solutions were well motivated, and have rejected candidates who could write down the perfect solution but couldn't explain why. If I were to be asked by the candidate for the specific runtime I was looking for, I would probably just reply with "don't worry about it, just try your best" or "let's focus on correctness first and worry about efficiency and other trade-offs later".
Testing for problem solving ability is hard, but that's still one of the key signals we wish to extract whenever possible.
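As an aside on the bipartite matching story above: a common way to write maximum matching without any special theory is backtracking over assignments, and done carefully that turns into Kuhn's augmenting-path algorithm, which is actually polynomial. A hedged sketch, with illustrative names (adj[u] lists the right vertices adjacent to left vertex u):

```python
# Try to place each left vertex, recursively evicting and re-placing
# earlier matches when needed; each success augments the matching by one.
def max_matching(adj, n_right):
    match_r = [-1] * n_right  # right vertex -> matched left vertex, or -1

    def try_assign(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere.
            if match_r[v] == -1 or try_assign(match_r[v], seen):
                match_r[v] = u
                return True
        return False

    return sum(try_assign(u, set()) for u in range(len(adj)))
```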
Really? In my position as a senior member on my team, one of the biggest mentoring costs is just teaching junior members how to search and find answers for themselves.
Yes, day to day I'm not copy and pasting huge blocks of code from Stack Overflow, but when I need an answer and I don't immediately know it my first move is always to search internally or externally for others who may have already shared it.
Why is being able to effectively search for answers not considered good problem solving?
Why not just throw out the questions? They seem useless, easy to cheat, and as commenters are suggesting there's a "pay to win" component. What are you really getting from these questions? Why is software engineering so different from other industries when it comes to interviews? Is there really a positive to this type of interviewing? Why not do what everyone else does: look at previous work (of which software has a lot more, considering GitHub), ask some questions relevant to the job, and see if the person is a personality match. Most jobs don't need the best of the best.
I'm not convinced the coding interviews have improved upon the standard interview style in any meaningful way.
> If you've seen the question/answer before just say so!
I did that once at FAANG interview, instead of honesty credits I felt like the interviewer just got annoyed by having to come up with another question.
Unfortunately the goal of interviews these days is to find out what you don't know, instead of what you know.
> If you've seen the question/answer before just say so!
In my experience failing to answer the alternative question you give me has (on average) a much more negative impact than pretending I don’t know your question (especially when I can explain it).
> As an interviewer though, there were a few instances where I just told people "let's do this problem anyway, don't worry" and the candidates didn't always do a good job.
This is what I've done as an interviewer. But then, my questions actually lend themselves to that. Something that the sample LeetCode questions just don't.
In my experience (as interviewer) LeetCode kind of questions are "gotcha" kind of tests: Either you know "the trick" or you don't, but there's no real constructive value.
In contrast, I prefer tests like: let's write a Tic-Tac-Toe, Snake, Twitter clone (no DB, in memory), etc. in 30/60 minutes together on your computer, with your IDE, your software, Google and language of choice. I am able to do a quick coding session with the person, see their weaknesses and strengths, and even if they have done it before, looking at their real-project coding style is super useful.
> there were a few instances where I just told people "let's do this problem anyway, don't worry" and the candidates didn't always do a good job
Which should be a clear indicator that interviews aren't only judging problem solving skills, but also the candidate's ability to withstand the pressure of being watched and judged while they solve complicated problems.
For the interviewer, it's just another day and sure, they "want the candidate to succeed" and all that, but for the candidate, their future and livelihood are on the line and that's tough for some of us to just ignore while we focus on the not-easy school quiz problem of merging overlapping intervals or whatever.
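For anyone who hasn't seen it, the merging-overlapping-intervals quiz problem mentioned above has a short standard solution (sort by start, then sweep), shown here as a sketch:

```python
# Merge overlapping intervals: sort by start, then sweep, extending the
# last merged interval whenever the next one starts inside it.
def merge_intervals(intervals):
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # overlap: extend
        else:
            merged.append([start, end])
    return merged
```

Easy on paper, not so easy with your livelihood on the line and someone watching you type.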
An advantage of having standard interview questions. Plus the questions we ask lend themselves to extensions (also predefined), so you can still see how candidates reason about and solve those. And given that our interviewers are familiar with them, we can calibrate across many candidates. We also have a pool of people who maintain the standard interview questions.
>If you've seen the question/answer before just say so!
In reality no one will do this though. There is way to much of an incentive on the candidate side to lie and work through as if they haven't seen it before.
> someone cheating
It's not "cheating".
> If you've seen the question/answer before just say so!
Haha good one.
Why are they "cheaters"? If candidates are supposed to study LC to pass a stupid interview, why get all pissy when they do exactly that?
>I will totally appreciate the honesty and it goes a long way.
Not far enough to get a job in most cases, at which point, after the rejection, you and I are likely done and won't talk again. On the remote chance we cross paths again, the honesty might be worth another interview (if you're still in that position), but not much more. Person-to-person, I know the honesty is appreciated, but here's why candidates are not likely to be honest and simultaneously have no moral/ethical failing during interviews:
You're an agent of the company with a power dynamic over candidates when interviewing. You can't be honest with us via feedback, even if you wanted to, because there's potential legal problems for the company if you are. Many companies don't give feedback as policy for these reasons. So, knowing that I know that you can't be completely honest with me, I only hurt myself by being honest by telling you I've seen the problem before.
If I already knew the answer but can't answer 'why?', then that's on me - I likely don't fully understand the solution in that case. Reject those people, sure. But they have no obligation to be honest with you. Similarly, if I already knew the answer and disclosed that fact, I might get a tougher question. So there is a real risk that I hurt my chances by being honest.
The risk of getting a tougher question is important. For it to be really fair, you'd have to grade these questions and determine if the next question is of similar difficulty. But your interview and problem grading process (if you have one) is not likely to be disclosed to me. The honesty would be appreciated, but we both know that interviewers aren't going to share those details chiefly because it defeats the purpose of the test, but also it'd end up on a forum if you're a well-known company leading to more cheating.
If you were giving a well-known test developed by professionals (GRE, GMAT, LSAT, SAT), there are significant resources to show me that those are fair tests developed by academics and other professionals. I would trust those testing admins to substitute questions of a similar difficulty. Were that the situation with your interview, the risk of getting a tougher question by being honest is significantly diminished because I know you have a bank of questions with accurate difficulties attached.
---
I'm not arguing that you should change your process. A candidate unable to answer 'why?' is a perfectly good reason to say the candidate failed the question and maybe the interview. I'm arguing that this really shouldn't be considered cheating or a moral/ethical failure.
If you had honest experience with the question at hand, it seems a little weird to go into an interview and basically say "I already know I can answer that one. Give me a new one."
can you give me a link to one of these forums?
1point3acres is the most prolific, but there are many others.
Google translate is sufficient. They’ll do it pretty carefully, complete with “pretend you get stuck at this specific point and if you get asked why to use a hashmap, act baffled for a moment, and then say X”
Interesting. I had never expected 1P3A mentioned on HN. It's mainly a forum for Chinese overseas to discuss their life and so on. Surrounding this topic they have also developed side features like COVID-19 statistics and some system for school applicants.
Yes. And American immigrant communities in other countries. The size and formality will generally correlate with the size of the immigrant community. From real-world social networks to Facebook groups to dedicated sites and forums.
Just another kind of networking.
Thx
Please suggest some Chinese forums: I’m not sure how to google for them…
@o10449366 what is the $60 website you mentioned?
Many people say this. But the reality is that solving 100 LC questions and actually understanding the solutions well enough to solve a variation of the problem is a lot of work. Especially if you are working full-time. I wouldn't call that "gaming it", just the usual study and hard work.
If "gaming it" just means passing by studying - a repeatable process that anyone (with a CS degree) can follow - that suggests the interview process is quite well designed.
The interview process is a test of endurance, not intelligence. And it should be exactly that, since software engineering is mostly an exercise of endurance and focus.
Every time a friend of mine QQs about failing a FANG interview, I give them the study prescription that worked for me. The reaction is always the same: they don't do it. If you can't get past these interviews, you probably won't make it past a year in these high performance companies anyways. Because you actually have to exhibit the same work ethic that interview studying requires.
I felt the same when I failed a leetcode interview. The reason behind my annoyance was that I had spent a considerable amount of time over the past few years improving my skills in the areas I thought mattered (practicing TDD, design patterns, learning FP, etc.). I felt disappointed that none of these mattered compared to an artificial set of challenges set up in an unreal environment.
I can't say for sure that such challenges (array manipulation, backtracking, etc.) are not typical for a software job, but I know that they aren't relevant to my line of work. Even if they are, the challenge of solving problems in a live coding session is not the same for everyone. Different people need different settings to perform or excel.
But now I am committing myself to leetcode and getting better at it, though it has become one among a hundred other things I want to get good at. At some point, I will feel the burnout. But I sincerely hope the acquired skills are useful in my professional setting.
Skills such as design patterns etc do matter. There are separate rounds for testing just that, think Design rounds (LLD or HLD) but you usually have to pass the Algorithm rounds before to get to that stage.
I don’t know if it’s as useful a signal as you imply. For me, it’s easy to be motivated studying leetcode: small, self-contained puzzles with just the right amount of challenge and immediate gratification. Actually doing a FAANG job can be a slog where it takes months to see results from your work.
I can get hired as a software engineer wherever, but I’m only mediocre at doing the job. I’m not the only person I know like this.
I don't want to underestimate the skill of being good at LC. In the past couple of months, I have solved 200+ LC problems and I have been interviewing vigorously too. It's a tough skill for sure, but beyond a certain point you start seeing patterns you have applied before, and once you are in the flow it becomes slightly easier.
I only wish the real Software Engineering job was as simple as being good at LC, because it clearly isn't.
Sure. But I’m (by my own estimation) a mediocre software engineer who is very above-average at solving fun coding puzzles, and therefore at interviewing. Usually these threads are full of people complaining that they are good engineers who aren’t good at interviews; I’m suggesting that the opposite isn’t uncommon, even if it’s more rarely admitted.
Haven’t really figured it out yet. Working on something I care about seems to help, although it’s not always easy to find. Having a strong, trusting relationship with my coworkers also helps. Easy to get detached otherwise.
Having other sources of meaning in life keeps me going during periods where my career isn’t going as well, gives me perspective and keeps me from getting depressed (I’m prone to it).
Working at top companies has helped me meet a lot of amazing people, including many of my closest friends, so I’m grateful for that at least.
Also, this gets thrown around a lot on HN, but if you’re brilliant at hard programming puzzles but not good at engineering jobs you might have ADHD. I do. Medication and/or ADHD-targeted treatments and accommodations could help. They’ve been modestly helpful for me.
you're not mediocre at the job. Trust me, you haven't seen mediocre
Software engineers who QQ about how unfair algorithm interviews are are clearly out of touch with how difficult and truly unfair interviews are in other high-paying industries like law or medicine (in the US), where getting interviews is based on pedigree and one or two rejections can permanently shut you out of top firms/positions.
There are always things that can be improved about interview processes, but many software engineers display great immaturity when it comes to trading a few weeks of time studying content that should already be partially familiar to them from school for six figure salaries/raises.
While it's true that asking an experienced surgeon basics of anatomy is worthless, please take into account that all surgeons at least operate on humans. If you take 10 random developers, chances are most of them worked on completely different levels and domains throughout their careers (e.g. embedded dev, backend, frontend, devops will have vastly different skills).
> but I’ll take it over how finance jobs are where it’s all about connections and where you interned when you were 19
Is this different than Big Tech really? If you do not go to the right schools, internships, whatever it can take years to have the right employers in your CV to be called for an interview.
(Not to mention that that internship might be gate-kept for reasons outside of expertise, to boot)
This comment is incredibly out of touch with the world of big law. Not only do associates get cut, but many people stall out and can't make partner.
(also the whole concept of "Up or out" comes from Big Law/Consulting... https://en.wikipedia.org/wiki/Up_or_out)
You are just as "set for life" in big tech as big law. In fact, if you're looking at the last decade, big tech won hard considering tech stocks and how big law froze (and even slightly cut) salaries during the great recession.
> The average employment length at a faang is 2 years or less
Because if you're hiring 40% of your current headcount a year and some people leave that gets you to very short average tenure very quickly.
What is the relevance of this whataboutism? Some other industries have bad or worse interview practices as well. That doesn't make this process good. What is the point you're trying to make?
It's immature to think we can do better because others have it worse? Rubbish.
That's pretty presumptuous. Do you truly believe that working at a FAANG company requires the same amount of 'endurance and focus' as working at a non-FAANG company _and_ spending several hours every week practicing coding skills?
Seriously. Many people coast while collecting $. Acting like work at a FAANG is some ultra-endurance test is a joke.
The major frustration is that it's endurance at a mostly irrelevant task. If the problems were more relevant, active software engineers wouldn't need to study outside their day-to-day duties, no?
Can you share your prescription
Whats your technique?
How did you practice system design? Did you work from problem sets or just study principles?
resources for sys design?
If somebody has been doing something for 20 years and an assessment test can yield dramatically different results with 20 minutes of preparation, then the assessment test is total bullshit. It's going to yield a false characterization and is fundamentally garbage input
I know that's pretty strongly stated, and it matches the strength of my convictions here. I've tried many methods. Asking someone to figure out that something is a sliding window problem is a garbage test.
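For context, the sliding-window trick in question: instead of recomputing each window from scratch, update it incrementally as it slides. A toy instance, sketched here as an example (max sum over windows of length k; assumes 1 <= k <= len(nums)):

```python
# Naive: re-sum each window, O(n*k). Sliding: add the entering element
# and drop the leaving one, O(n) total.
def max_window_sum(nums, k):
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]
        best = max(best, window)
    return best
```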
Problem is, there is not a lot of incentive for people looking for a job to use LC and similar to understand algorithms and data structures, and with good reason;
Many interviewers do not ask the questions to check for thought process or problem solving ability; they treat it like some TV quiz: ask question, get answer, compare answer to the note in hand, applaud if it's the same answer, next question. Why? Because it's a lot easier to sit there watching the candidate squirm at the whiteboard, while thinking about what's for lunch, than engaging the candidate and *gasp* talking to them.
This creates incentive for people taking these BS interviews to learn-for-the-test: Get a list of the current top50 questions asked regularly at interviews (there are resources for that) and memorize them.
Why? Two reasons:
1. It is a lot easier than understanding the concepts and purpose of different algos and data structures.
2. Trying to solve them by applying actual understanding runs the risk of getting stuck on an unfamiliar problem, or producing a slightly sub-par solution instead of the "correct" answer, and getting booted out despite demonstrating the exact "problem solving ability" the interviewers allegedly look for.
And, unsurprisingly, because there is money involved, an industry has sprung up around this: Pay-For-Tech-Interview-Training is a thing, including regular updates on popular questions.
The result of course: Companies running the risk of hiring people who are great at answering LC questions but fail when they actually have to solve a problem where they cannot copypaste the solution from SO.
I’ve gotten interview questions I’d recently solved and my problem was it was too easy. I had trouble acting like it was the right amount of struggle. Is there a trick for that?
I've interviewed folks for FAANG roles. If you know how to solve the problem already, just tell the interviewer up front. Either they have another question or they will go deeper into a discussion about why and how you solved it the way you did, testing it, other approaches and why they are or are not good tradeoffs, etc.
It's pretty obvious to interviewers if you've solved a problem before, and we appreciate the honesty. Interviews are not adversarial; they're to see if a candidate is a good fit for the role and dishonesty is never a good fit.
This is how dumb these interviews are.
"We expect you to study and be prepared for algorithmic questions, BUT NOT TOO PREPARED! Only just enough. We will give you a random question of our choosing, but if you already studied this, then you will be deducted points, unless you tell us, so that we can ask you a question you've never studied before."
A true interview would give the candidate their choice of question with the expectation that they know how to solve the question. It makes it a lot fairer for everyone involved.
I was always kind of confused about how advice for interviewers always comes along with warnings of the type "Some people talk a good game about programming, but they can't actually do it. You have to watch out for these people. Never hire one. This is the biggest trap you can stumble into as an interviewer."
After getting some interview coaching, I think I understand how this complaint arose. The whole problem is an artifact of the interview format, in which the design intent is for the candidate to be unable to solve the problem on their own. So you get scored based on how well you can charm the answer out of your interviewer while making them feel like everything they said was your idea. Instead of a test of how well you can program or solve math problems, it's a test of how good you are at cold reading. ( https://en.wikipedia.org/wiki/Cold_reading )
And unsurprisingly, a test of cold reading will end up delivering people who are good at cold reading without reference to whether they're good at other things. If you want to avoid this problem, just start giving assessments that don't involve interaction with the interviewer.
A good way to avoid this conundrum is to ask questions that are worth solving in real life.
If the candidate breezes through the discussion because they've actually had to solve the problem before, then their victory is well earned.
If on the other hand its an academic question in the same vein as the data structures or algorithms puzzles you find on $interviewprepforum, then the fact that they've solved it before tells you very little.
I don't think there's much difference either way. In both cases, a candidate gained a large advantage in a way that tells you very little about their ability.
I think this is why contrived questions gained popularity in the first place - they eliminated noise due to candidates randomly having solved similar problems before (that and "real life" problems usually can't be explained and solved in 1 hour).
I think this is on the right track. The best interviews I've been a part of involve the interviewer asking a question they don't fully know the answer to. Then the interview turns into a conversation where both parties are trying to work together to formulate an answer. The candidate should be scored based on how constructive that conversation is.
Be honest: a candidate that works through a seemingly novel problem correctly is getting a much better review than another candidate that admitted they've done the question before and breezes through it. And this is assuming your interview round doesn't get thrown out entirely to begin with.
> Interviews are not adversarial
I'm struggling to imagine a definition of "adversarial" that would make this true. You have two parties with conflicting goals.
The interviewer's goal is to evaluate the interviewee accurately.
The interviewee's goal is to be evaluated inaccurately.
If you really need an example, then look at it this way:
1. The interviewee's goal is to be hired.
2. Assuming there is no conflict of goals, then the interviewer's goal is to hire the interviewee.
3. This immediately implies that the interview is a pure waste of time. You can just make the hire without having the interview.
If you don't believe that interviews are -- in every case -- nothing but a waste of time, then you must reject one of the two premises. You either believe that (1) The interviewee does not wish to be hired, or (2) there is a conflict between the interviewee's goals and the interviewer's goals.
We can make a similar observation purely by knowing that interviewers sometimes reject interviewees.
> It's pretty obvious to interviewers if you've solved a problem before, and we appreciate the honesty.
> ...
> dishonesty is never a good fit.
If you rejected every such “dishonest candidate”, the paid services offering candidates practice questions would have gone out of business long ago.
Given the popularity of such services, have you considered the possibility that you overestimate your ability to spot candidates who have solved the problem previously?
I understand the sentiment about honesty and finding a legitimate good fit. Neither interviewers nor candidates are perfect. But there's something off about this, isn't there? If you ask a candidate a question, and they answer it well, they've done what is asked of them. Why should they put themselves in a position of vulnerability just because they're well prepared?
To some extent it paradoxically sounds like you guys want applicants to not have studied basic algorithm problems, since the problems you find if you try to study are the same problems you guys ask in interviews.
Yes, you need to pull a George Costanza and look/sound really annoyed while you code the solution
Empty your bowel into your underwear. It’s all a show anyway, at least make it a good one.
Just tell them it’s familiar. Depends on the company, but they can often tell. If you blast through a tough question and then struggle on easier ones, it’ll show up in hiring committee (or equivalent).
It could easily cost you the offer if they think you’re being dishonest.
If you say it’s familiar and they ask you to do it anyway, just solve it very quickly and thoroughly. They’ll give you points for your performance and probably ask you another.
Just go through the steps out loud, start with a really naive solution, say what's wrong with it, go on to the next. You can usually go through at least 3 levels of that. Also spend some time verifying the specification of the problem, ask about any potential edge cases. If you really know the problem well, go into maybe some extensions of it or harder versions.
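To make that layered walkthrough concrete, here's what the "naive first, then refine" progression might look like on a generic two-sum-style question (a hypothetical example, not a problem from any particular interview): present the brute force, say out loud why it won't scale, then move to the hash-map version.

```python
# Level 1: naive O(n^2) -- check every pair.
# State the flaw aloud: quadratic time won't scale to large inputs.
def two_sum_naive(nums, target):
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]
    return None  # edge case worth clarifying: what if no pair exists?

# Level 2: O(n) -- trade O(n) extra memory for a single pass
# using a hash map of values already seen.
def two_sum(nums, target):
    seen = {}  # value -> index
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return None
```

Narrating the jump from level 1 to level 2, and flagging the no-solution edge case before the interviewer does, is exactly the kind of out-loud reasoning being described.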
Why would you want to seem like you're struggling?
The interviewers are ideally trying to get a sense of how you think through problems, not just that you can spit out an answer you know. At least that's what they say.
It's not the struggle, it's the questions you ask, and what you share about your thought process while trying to get to a solution. You might work yourself into a hole solving what you think is a simple array manipulation problem, but then realize some edge cases require you to switch to a heap/priority queue, etc.
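As a hypothetical sketch of that kind of switch: "find the k largest elements" looks like a sorting one-liner until you're told the input is a stream too large to hold in memory, at which point a bounded min-heap becomes the natural fix.

```python
import heapq

def k_largest(stream, k):
    """Return the k largest items from an iterable, keeping only a
    size-k min-heap so memory stays O(k) even for a huge stream."""
    heap = []  # min-heap holding the current top-k candidates
    for x in stream:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            # Smallest of the current top-k is evicted in one operation.
            heapq.heapreplace(heap, x)
    return sorted(heap, reverse=True)
```

Realizing mid-interview that `sorted(nums)[-k:]` breaks on streaming input, and pivoting to the heap while explaining why, is the thought process the interviewer is hoping to see.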
Not really, no.
You should still ask clarifying questions, because the specification of many problems is intentionally incomplete.
Beyond that though, you can just be honest and say you know a problem and they can either pick another or just talk about it anyway.
Practice with a friend.
I have a better one: refuse to play the game unless you're desperate to get a job no matter what.
This is a great way to start off a business relationship with dishonesty and cynicism. Enjoy your career.
Please don't cross into personal attack, no matter how wrong someone's interview preparation strategy is or you feel it is. Perhaps you don't feel you owe them better, but you owe this community better if you're participating here.
https://news.ycombinator.com/newsguidelines.html
Can I have some of that ‘work through tens or hundreds of coding problems that are of no future relevance to me’ dishonesty and cynicism?
It filters out the frauds.
Fraud on resumes is very common.
This is a great way to start off a business relationship with a company that wants to haze you to get you to prove your desire to work there.