So here's the thing... hiring at larger, growing companies often means weeding through a stack of a hundred resumes for one or two or five roles. Most of those resumes need to go away. The interview process, for competent candidates, is going to cost you at bare minimum a half hour of one person's time for initial screening. A quality interview is going to take a couple of hours each for a few important people - managers and senior/lead tech staff, for engineering jobs. Considering pg's maxim about maker schedules, the hiring process is very disruptive to the productivity of some of your most valuable tech staff.
So you want to get rid of 19 resumes out of 20, without ever interviewing them. And for that, you need heuristics. And those heuristics are, in all likelihood, going to be biased and stupid in some ways.
For example, I will automatically reject any resume that has more than two typos. I consider it evidence of carelessness - I don't care if you can't spell, but I do care if you don't bother to run it past someone else who can for editing.
The danger is that a heuristic - any heuristic - for filtering out resumes will inevitably lead to missing some candidates who would excel in the role. Oh well. I'm not going to waste time interviewing every possible candidate, just hoping to catch that magic person. It's irresponsible.
I'm a web developer who has been thinking about engineering hiring and culture a LOT over the last 6 months. I'm confused about employers' desire to cast a wide net, only to filter through applicants thoughtlessly. Instead, why don't engineering teams save time and energy by articulating their specific engineering values, and sharing that information up front? Be honest. Say divisive things. This will allow job-seekers to self-filter. Your goal is not to attract every software engineer. Your goal is to attract the people you are most likely to hire. (Career pages can be fluff, and job descriptions are dry and uninformative beyond a list of technologies.)
I've been working on Key Values (https://www.keyvalues.io) for many months now trying to do just this: surface details about actual team members, day-to-day processes, and a team's engineering culture. At least this way, we as engineers can be more informed when deciding where to apply, where we'll devote hours/energy interviewing. Of course engineers might be skeptical or have questions, but at least we have something to respond to.
Ultimately, neither job-seekers nor employers want to waste time and energy interviewing people they aren't culturally aligned w/. Maybe the lesson here is that engineers should seek out the teams that care about what they care about, whether it's mentorship, high quality code, or work/life balance. I know it's controversial (especially here in HN), but is salary really more important than doing work that is exciting/challenging/energizing/stimulating or feeling valued/appreciated/respected/passionate?
I'm confused about employers' desire to cast a wide net, only to filter through applicants thoughtlessly.
What's to be confused about?
They don't give a fuck - about you, or your time.
Why? Because they perceive (rightly or wrongly) that there's no business incentive for them to do so.
Either way... if they leave you dangling, or make you answer just a few silly questions too many, and/or sign up for (nearly) unlimited numbers of whiteboarding or hacker ranks sessions, or "take-home projects" they never (or at best only barely) look at, it's because... look, do you think these people have time to think about what they're doing, let alone how it affects the time and patience of those candidates they don't necessarily want to hire... or even, in some abstract sense, their long-term reputation as a company that makes at least some kind of effort to create a not entirely-unpleasant interview process?
Of course not. It's much easier to just throw shit at the wall -- or as it were, at the whiteboard -- and see what sticks.
Even more important than that is to get people who really are aligned with what the company actually is, instead of just the standard bullshit that everybody says.
I, too, really like the idea, but I wonder if, in practice, it wouldn't work because companies don't introspect deeply enough to understand what makes them distinct? For instance, if what's remarkable about your company is that you have a great culture where people goof off a lot and have a lot of fun, and still manage to get a passable amount of work done, would anyone dare to self-identify that way?
You're so spot on. Some companies do not introspect deeply enough, nor do they have the desire! I've met some of them and guess what - they don't end up creating a profile on Key Values. Honestly, I think it says a lot about the teams that put in the time and care to create a profile. The exercise of selecting 8 values and qualifying them signals a real desire to invest in people and culture, and to make sure values actually translate into behaviors.
Re: your example, I see it as my job to encourage them to communicate who they are loudly and proudly! There's pressure on both sides (job-seekers and employers) to check all of the boxes. It doesn't make sense to do that at all though.
I think there are a couple of confounding problems. One is the "Astrology" problem. Well-written horoscopes are written in such a way that people want to believe these things about themselves- and they are nebulous enough that they can. Who doesn't want to think of themselves as being a great friend when someone really needs you? In the case of your website, I think you should consider giving scenarios and force people to make choices between options on a Likert scale.
The other issue is that companies may truly be trying to find the most intelligent (for whatever that word means) person, as long as she or he isn't malignant - someone who will be successful in future endeavors after the one they are hiring for is finished. In that case they are just dragging a wide net and trying to get the biggest fish. This is all thrown out the window, of course, when some of their screeners have ad-hoc and very personal criteria for rejection (two typos, or not being gender-inclusive, or whatever). Then the process becomes ipso facto a cultural screening process.
Applicants do not self-filter. If you've ever read resumes for a job listing that haven't been prescreened, you'll be blown away by how many people refuse to self-filter even for basic fitness for the job. Put out a job listing asking for a college degree, and you'll get applicants who are still in school with their graduation date two years in the future looking for an internship. That's an extreme, but people will apply for jobs they are very much not fit for. Frustrated job seekers will often submit things under the assumption that "the worst they can do is tell me no" (they're not wrong). If someone is receiving unemployment benefits, they may have to apply to X number of jobs every week/month in order to qualify. The incentive of the applicant is to apply to as many jobs as possible and get one.
As an applicant, I don't self-filter from positions I think are interesting, because I've gotten several offers from companies after applying to jobs I wasn't technically qualified for.
Some companies are pretty strict about their qualifications. For some, the qualifications are more like guidelines. But if the position is interesting, I'll take that chance.
Applicants don't self-filter because they need a job. They like to eat, they like to wear clothes, they like to sleep indoors. They often have other people in their lives who also like to do those things and rely on the income of the applicant to do so. They also aren't very able to pool risk. Most households have between one and two wage earners paying towards "household" expenses. There's not a lot of redundancy there. Contrast that with the average number of employees at an employer. When you're hiring, having a position vacant for weeks or months is something you can work around much, much more easily than you can work around being unemployed for weeks or months.
That's why job descriptions should not be authored by hiring managers alone. Most of the time, it's not the employer, it's the hiring manager who has unreal expectations. This is driven in my opinion by multiple factors including but not limited to:
Unreasonable expectations created by hyperspecialization and role blending as part of corporate restructuring, or as pushed by investors/stakeholders, in the wake of economic recessions.
The expansionary period that follows provides people the opportunity to leave, creating more liquidity in the labor market.
So talent leaves for greener pastures, and now you have a hiring manager whose direct report has left them with a gaping hole, and finance dictates a 1:1 replacement for this spot.
So you type up the list of roles and responsibilities and send your requisition to HR to help find candidates. Meanwhile, you assign out duties to the remaining team members because work must get done.
The search lingers because the hiring manager is looking for the proverbial purple squirrel; meanwhile, the team seems to be handling the workload with no measurable impact on quality.
This then just repeats itself over and over, at companies large and small.
This is only one use case, there are many, but I think one that plays a big part in the problem of job search and matching talent with opportunity.
You're not wrong. Hiring has just become a numbers game and I think this is part of the problem. Recruiters blast a thousand engineers with the same dry email. Engineers apply to a hundred different companies. A lot of companies don't consider applications without a personalized note or cover letter, and for good reason. They too want a personalized message.
Let's compare it to dating. A lot of people like dating apps where you can rifle through hundreds of options within minutes. But most people would agree that a matching process that is more personal and human leads to longer lasting relationships.
Despite how many people play the numbers game, there are tons of people who don't, myself included. (And I happen to mean that for both my personal and professional relationships.)
... get sued or prosecuted by the EEOC. There are reasons things are stated in a certain way and the net has to be cast wider than necessary. First fix the landscape of U.S. labor law, and then a discussion of efficiency and values can be productive
Mmmm not quite sure what you're thinking of, but I'm talking about starting w/ baby steps.
"Do exciting work that has real impact, learn a lot, and have fun doing it."
Do you know how many companies say that on their job descriptions?! All of them.
Here are some examples of being (mildly) divisive and using strong language:
- "We consider it a disqualifier if a candidate does not value our mission to grow and sustain local food systems" (Good Eggs)
- "All of our communications are out in the open, and we have a rule that you cannot send emails to a fellow Remixer and instead must communicate over Slack where everyone can see. (Emails are only used for people externally.)" (Remix)
- "We advocate that everyone leans towards releasing and reverting. It is better to revert small changes than to spend time perfecting code. While this might rub some people the wrong way, we believe that the best way to learn is by doing." (Amplitude)
All of these things will either resonate w/ an individual and make them really excited about applying/joining, or turn them off because it doesn't quite align w/ their personal values/preferences/goals.
""We advocate that everyone leans towards releasing and reverting. It is better to revert small changes than to spend time perfecting code. While this might rub some people the wrong way, we believe that the best way to learn is by doing." - Amplitude"
I must say, this would give me pause. That kind of language tells me that they don't value testing, and likely don't value quality requirements capturing either.
Good! It sounds like you wouldn't be a good match for their engineering culture - both you and they would be unhappy. Everybody wins when they put that out there clearly.
It seems that your example will filter candidates based on how much they like the company, not how good they are. The distribution of candidates will largely be the same or even worse, because a few good candidates may be put off by such rhetoric ("Why do I have to believe in local food systems, when they want me to write an Android app?"), but mediocre candidates will apply anyway.
So you still have to go through the same number of candidates until you can find a good one.
I like your examples - they help you understand the culture of the company more, especially if there are some strict rules or preferences. I found that I ended up learning most of these things in an in-person interview. I could also pick up other aspects of the culture in person - whether they expected late hours or had better work/life balance - and that has helped me.
This is exactly what motivated me to build Key Values!! These are things people end up learning during the in-person interview (or during the first week). I don't see why we have to write cover letters, do phone screens, do take home tests, or go through coding challenges before learning this information. My goal is to reorder the steps and give job-seekers insight to a team's values before making the decision to apply/interview. If anything, it allows us to ask better questions when we first speak to or meet with interviewers.
You're basically implying that all your controversial opinions are borderline racist/sexist/ageist.
There are lots of things that would attract/repel a candidate from a company that aren't discriminatory. "At our current stage of growth, we value speed over reliability," "we measure employee productivity," "we work early/late," "people here do/don't wear suits," "we do/don't do code reviews," "we do/don't do Scrum," just to name a few.
I think they mean stuff like, "If you want the latest and greatest framework, you won't find it here. We use spaces and not tabs. Our stack is locked down."
This sounds nice, but is very far away from actual reality. Sadly, in my experience, almost 25%-30% of applicants don't even read the full job description, or they ignore key parts that are listed in bold text. Some examples based on real life:
- "You need to be based in <European country>, we don't do remote" -- applies from US/Pakistan/Russia, wants remote work.
- "Make clear what attracts you in our business/industry" -- ignores any mention of industry, business or such.
- "Don't repeat your CV in the cover letter" -- guess what? The cover letter is a repetition of his/her CV...
I wish I was lying, but this is daily routine when hiring.
The first example happens because they hope you're desperate enough.
For the second example, it's because most engineers value honesty, since a lack of complete transparency costs time/money or, worse, kills people. And frankly, most engineers like technology and couldn't care less about the company and/or the industry. And that's why they are horrible at sales. And what you're asking for is a kind of sales.
For employers, it's casting a wide net and then struggling to filter out "noise" quickly. Employers will pass on great engineers and interview not-so-great ones (see parent comment). Instead of saying, "Work for us, we're the best!" why don't engineering teams say, "We prioritize speed over quality, are great for individual contributors, and prefer asynchronous comms over lots of meetings. Work for us if this sounds like you."
For job-seekers, the problem is applying to dozens of companies that you don't even want to work for (and just don't know it yet). If I value quality over speed, prefer working in teams rather than independently, and believe in-person meetings are incredibly valuable, then I would know not to even bother applying to the company above.
A bigger problem in many cases isn't casting too wide a net in terms of corporate culture... it's big companies that need to hire quickly, using cheap mass-market recruiting companies that aren't picky about what they're sending over.
And, in the enterprise, the elaborate rules for recruiters, created by too many layers of HR (process is the scar tissue of organizations), means that small, boutique recruiters that are more careful about fit can't even get in the door.
About that heuristic, when hiring for sysadmins in the early 2000s, I found there to be an inverse relationship between résumé quality and candidate quality (with one notable exception — résumés written in LaTeX were usually of high quality and almost always indicated a quality candidate).
Originally I had a similar conception of résumés - they should be one or two pages max, should look nice, and should contain no errors. Obviously, this indicated a candidate who cared about how they represented themselves. How could a candidate with a slapdash résumé be any good?
But time and again the better candidates had the worse résumés ... résumés that were pages long, or completely unformatted, or filled with errors. Perhaps these candidates with poor résumés have simply prioritized their time on their work, not tending their résumé. Or perhaps they feel that they should be judged on the content of their résumé and not its presentation.
Regardless, I no longer put any stock in how a résumé looks, only whether it indicates an appropriate background for the job.
IMO, the root cause of this is that companies pre-filter, leaving high-quality individuals to search longer. If you look at a random sample of all people looking for a job in a given year, then, say, a long gap may be a bad sign.
However, you get a biased sample, with obvious rock stars quickly finding a job. Thus, negative signs on a resume are positively correlated with candidate quality. On the other hand, resumes that look great are likely linked to people who regularly fail interviews.
PS: This is less common with college job fairs, where you get closer to a random candidate selection.
How do you determine that? I wrote mine in LaTeX but submit a PDF. Usually that default LaTeX font is a dead giveaway, but I find it to be pretty ugly so I switched it out.
It’s part of the metadata. Look for content creator and PDF producer fields.
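A rough sketch of that check, assuming the document info dictionary is stored uncompressed in the file (many real PDFs compress it into an object stream, in which case you'd want a proper parser such as the pypdf library instead):

```python
import re

def pdf_creation_tools(pdf_bytes: bytes) -> dict:
    """Return the /Producer and /Creator strings found in raw PDF bytes.

    Naive scan of the raw bytes; works only when the info dictionary
    is not inside a compressed object stream.
    """
    fields = {}
    for key in (b"Producer", b"Creator"):
        match = re.search(rb"/" + key + rb"\s*\(([^)]*)\)", pdf_bytes)
        if match:
            fields[key.decode()] = match.group(1).decode("latin-1", "replace")
    return fields

def looks_like_latex(pdf_bytes: bytes) -> bool:
    """Heuristic: True if any creation-tool string mentions a TeX engine."""
    joined = " ".join(pdf_creation_tools(pdf_bytes).values())
    return any(tool in joined for tool in ("TeX", "LaTeX", "dvipdf", "LuaTeX"))
```

A résumé built with pdflatex typically carries something like `/Producer (pdfTeX-1.40.x)`, while word processors put their own names there.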
Or even more simply, just use your eyes. The superior line breaking algorithm is extremely obvious, with or without microtype. Then less obviously, look at ligatures, kerning etc.
The internal HR tools at some BigCo's may convert your PDF to plain text, so it's possible that a hiring manager will never even see your superior formatting. The interview tracking system at Amazon was doing this a couple of years back.
Doesn't mean much, since tooling like Sphinx will generate LaTeX as an intermediary for PDF generation.
I don't care for the rigidity of LaTeX and don't want to hand-craft my own TeX documents. I once had a resume generation system that would convert ReST to DocBook and from there use XSLT to produce plain text, docx, and PDF output via XSL-FO. Sadly the code was on a failed Seagate drive and is lost. And yes, I had a backup - also on a Seagate drive, which also failed due to a firmware bug around the same time.
Honestly, I've never understood people's problem with LaTeX, where everyone says it is "hand-crafted". Just grab a template. Maybe modify it a little and tweak it, then save that skeleton. Have one for a resume, one for research papers, one for other docs, etc., and you're golden. You have to do the same thing in Word too; the difference is that in LaTeX it looks like code. But IMO LaTeX looks nicer than Word every time. I can usually spot a paper written in LaTeX vs Word, and there definitely is a correlation with quality.
You CAN get fancy with LaTeX, but I doubt most people will be doing those things. For every day research papers or resumes, just pick up an online template and make whatever minor tweaks you want. Then fill with text.
I just use the moderncv package[1] included with most LaTeX distributions. It produces beautiful output while needing only minimal LaTeX customization from your end.
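For anyone curious what that looks like, a minimal moderncv skeleton is roughly this (the name and entry below are placeholders):

```latex
\documentclass[11pt,a4paper,sans]{moderncv}
\moderncvstyle{classic}  % other styles: casual, banking, oldstyle, fancy
\moderncvcolor{blue}
\usepackage[scale=0.8]{geometry}

\name{Ada}{Lovelace}
\email{ada@example.com}

\begin{document}
\makecvtitle

\section{Experience}
% \cventry{years}{title}{employer}{city}{grade}{description}
\cventry{2016--2020}{Systems Administrator}{Example Corp}{London}{}{Kept the servers up.}
\end{document}
```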
I remember those, I had one of those fail too. Seagate Barracuda 7200.11 c. 2008. Found all the stuff on the Internet about the bad firmware after it failed.
At the time: the default font, the fact that there were a few common LaTeX résumé templates in circulation, and the superior formatting compared to alternatives. If I wasn't sure, I'd just ask.
> But time and again the better candidates had the worse résumés
Perhaps this is selection bias?
Their resume being bad caused them to not be able to get a job at other companies, even though they are good. The only reason the resume ended up on your desk was because nobody else wanted to spend 30 minutes of their time interviewing them, therefore they were passed up.
> I consider it evidence of carelessness - I don't care if you can't spell, but I do care if you don't bother to run it past someone else who can for editing.
Getting someone to look over your stuff and looking for spelling errors sounds like a simple and easy fix doesn’t it?
But in my experience that doesn't work as well as one would think; proofreading is a lot harder than it looks. After all, it isn't enough to catch some of the errors - you need to catch them all.
As someone who has close family members with dyslexia and has struggled with spelling all my life, I have to say that seeing those challenges characterised as carelessness is infuriating. For a lot of people getting spelling right is easy, and that is great. But for some of us it is not, and while proofreading and spell checkers might help, they are not a magical fix-all. Some errors sneak through, no matter what you do.
To be clear here - being a poor speller, due to dyslexia or any other reason, is not the problem. But if you know you are a poor speller, you should have someone who is a good speller edit your resume, looking for spelling errors.
If you know a misspelled resume can cost you a job, and you don't take additional steps to make sure it's spelled correctly, that's careless.
As I said earlier, getting someone to proofread your text and catch all the problems is not as easy as you would think. Nor is getting a spell checker to catch all your errors.
I have to admit I thought I was done with getting graded on spelling in high school.
I haven't seen anything, on either side of the fence in my career, to indicate that a few spelling mistakes are enough to cost people an opportunity. But I'm sure you're right and there are companies out there that do that. I don't need to work at every company, though, only the good ones. So it hasn't been a problem for me so far.
One of my best friends has a MD and an MBA degree. He currently manages a $100M investment portfolio. We chat online almost daily.
He is, by far, the worst speller I know. Even after 20+ years of online conversation, I can barely understand what he's saying sometimes.
He also passes bits of professional writing past me for validation, usually to check his intent. His professional writing is always perfectly spelled, with impeccable grammar. He relies heavily on both software and human checks (secretaries and copywriters) to ensure the basic quality of his writing. That's one reason why a truly wretched speller makes more in bonuses alone than I make in salary.
So yes, I have a hard time forgiving a misspelled resume. If he can do it, you can do it, too.
I don't mean to challenge you, but I think your assumptions about how easy it is for others to do something you consider rudimentary are based on an undeclared set of privileges and fortune. Here are a few possible scenarios where getting help might not be as easy as asking a partner, friend, or colleague to review things:
A first-generation college grad paying their way through school by working full time in a new town.
A single parent trying to get into a field while working and caring for their family.
A non-native English speaker who perhaps understands the general structure of the English language, but not the nuances of grammar (or what is expected in a Western CV/Resume).
I hope you can see that your earnest appreciation for proper spelling is unfair to apply as such a black/white dichotomy. I kindly ask you to reconsider your approach.
The college grad is surrounded by other college grads who can review his writings.
The parent is surrounded by family who can review his writings.
The non-native English speaker with poor spelling knows firsthand that he needs his resume reviewed. And he should not expect to easily find a highly qualified job that requires writing English in an English-speaking country if he can't read and write English.
So far, your examples only manage to show that people could manage just fine, and that if they couldn't, they were not qualified for the job.
English is not a rare skill. There is no shortage of people who can write decently.
But that leaves me with a problem - I still need to turn this stack of 100 resumes into something I can be expected to work through, given the limited availability of my most expensive resource - time. I don't want to interview 100 candidates. I want to interview five. That means 19 out of 20 need to be eliminated before it even gets to the interview process.
The spelling heuristic gets rid of between 5 and 20%, right away. A lot of those resumes would also be caught by other heuristics (no ten page resumes, etc). Most of your concern cases would also fall to my other filters as well. And quite frankly, if a resume jumped out at me in a positive way, but it had three or four or five typos? I'd interview. It's not a hard and fast rule.
Back when we were college students, he wasn't rich. Yet he got through med school (med school!). He asked friends like me to check things, when they mattered.
You keep looking for excuses, not solutions. He knew, and knows, he has a problem with spelling. So he does everything he can to get the help he needs to keep his spelling from limiting his career. He did it when he was a poor college student. Now, he can get secretaries to check, because he has a successful career built around his strengths, rather than letting his limitations define him.
Isn't that rather circular? You need to ensure you don't have spelling mistakes, otherwise that indicates carelessness because it could cost you a job. Why would it cost you a non-spelling-related job? Because it indicates carelessness!
If we criticize job applicants for mistakes which are unrelated to their field, why shouldn't we do the same for employers? Rejecting applicants for spelling mistakes is not in and of itself a problem, but it indicates carelessness in the hiring process so maybe it's not a good place to work.
It's interesting to think about crafting a resume not only to get a favorable response from places you'd want to work, but to deliberately create an unfavorable response from places you wouldn't.
> but it indicates carelessness in the hiring process
Perhaps we are using the word careless differently, but I don't see how using heuristics like this implies carelessness on behalf of the employer. If anything, it seems to indicate that the employer is making a well-reasoned choice to balance economic constraints with company values.
I run a small software consulting firm and have now hired three people (not too many). I believe that attention to detail and pride in how you present yourself, either via your resume or your code quality, is essential. For this reason, I think applicants should spell-check their resume (or use Grammarly).
In fact, we even mention this in our company values page:
> We value correct grammar and a strong grasp of the English language.
> We write with a clear and professional tone in our external communication.
> We try not to send vague or confusing emails.
> We invest in improving our writing, presentation, and conversational skills.
I couldn't care less if I make a typo on HN, but an email to my professor I'll review several times without even thinking about it. I probably won't get someone else to proofread it, but that's because I trust my own competence in reviewing my own short texts and in writing English.
But my friends from India will send me emails to proofread if it's important, because, y'know, it's sensible, it's correct, and most importantly, it's polite to do so.
It seems absurd to me that the significance of typos needs to be defended. Maybe you could debate whether a resume deserves to be treated as any more important than a text message, but if we assume it's important, then why the hell would you let something as simple as typos through so easily?
I can see that it's good to avoid typos. But at the same time, why the hell are you judging the suitability of a professional in a complex technical field based on typos in something they only write once every few years?
The argument for the latter seems to be circular: we reject people on that basis because typos indicate a broader lack of care, which they indicate because we reject people on that basis.
It's mostly a question of where to spend effort. In the same way that I'm probably not going to read a paper with a shoddily written abstract unless external reasons apply (i.e. the paper was recommended to me by someone trustworthy), it doesn't make sense for me to try to parse a shoddily written resume.
It takes more effort to do so, and it's nonsensical for the person not to put effort into it, because it's only written once every few years, and because it's the first point of contact.
If you're going to operate at the level of a tenth grader, why would I assume you have anything more to offer than what a tenth grader might?
Of course the metric might fail, but hey, there's a hundred other resumes to have to go through too, and you managed to imply you're incompetent right off the bat, so I'm probably better off looking elsewhere.
There's also the aspect that I personally don't want to put up with shoddy writing, and also the fact that these are not long documents. It's like 1-3 pages of half-sentences and spacing everywhere; I'm hardly asking the world of you by requiring you to bother verifying what you write.
> But at the same time, why the hell are you judging the suitability of a professional in a complex technical field based on typos in something they only write once every few years?
Because, with the exception of some rare positions, there are a lot of people suitable for the job. There are also a lot of people who will apply without being suitable. And a lot of the people who apply at random, aka sloppily, are presumably also sloppy about the rest of the process.
So filtering on sloppiness is probably a decent utility; of course it'll remove the sloppy suitable candidates, and it won't remove the non-sloppy unsuitable candidates, but hey.
Alternatively we can do modern hr stuff like filtering resumes on key-word searches, so if you're an expert in Oracle 11d but not Oracle 11e....
Every filter is fucked somehow, but typos seem to me one of the least fucked, precisely because they're so easy to avoid. You only write the thing once every few years.
I too have close family members with dyslexia. If they asked whether they'd make good programmers, I'd honestly tell them that their disability would make it very challenging and frustrating. They could still do it, but their passion would have to be enough to drive them to work much harder than their colleagues to achieve the same results.
I'd also recommend that they go to whatever lengths necessary to eliminate typos in their resume because it's a prime opportunity to demonstrate that they can achieve exacting precision when necessary.
That being said, they are all happy in careers where character-for-character precision is not as important.
Re: typos in resumes, I'm less strict than I used to be and try to focus more on work experience or other real world metrics than using typos as a proxy for programming productivity.
So I'm really, really biased here. But as a programmer with (mild) dyslexia, I have to say I think it's one of the best professional fields for this. We have linters, syntax-highlighting editors, pretty printers, compilers, and a whole suite of tools to make sure typos never go live.
No other field has these tools, and high-paying professional work does in fact require "typo-free work" to make it anywhere, in almost any field.° Sure, in plenty of fields you can try to paper over this with process - but that just means that as a dyslexic your promotions are limited by the caliber of your secretary, PA, MA, editor, or whoever fixes your work in your field. You get a good one, you flourish; you get a bad one, your career dies.
As I've moved into management I've had to delegate every piece of writing, which is a frustrating experience, and the writing I do end up doing usually passes before multiple people's eyes before being sent - even for a simple email. This is not easier than coding - it's way harder.
° Spelling and grammar checkers help, but really are nowhere near as good as linters and all the other tooling.
Wow, I never thought about it that way. You have single-handedly changed the advice I'd give to my family on the issue. (Except for the resume advice above, which I think still holds.)
> but I do care if you don't bother to run it past someone else who can for editing.
Even Chrome has a spell checker. Not sure the distinction here between code typos and English typos is a valid one.
The OP is saying not bothering to check is the issue, not the specific errors. And this is, by definition, careless. This is a lack of caring for correctness and a lack of awareness of mistakes. It could even be considered a lack of respect, which would be a lack of professionalism, but the OP did not go there.
If you have dyslexia, then a perfect resume that includes your challenges as a dyslexic would be most impressive.
Screening is all about finding good proxies for the skills you need for the position. Typos on resumes might be a good proxy for something, but not necessarily attention to detail (since you don't know the process the resume went through to get into its error-free state). At the extremes, the proofing could have been forced on them unwillingly by an overbearing parent or partner -or- the applicant could have painstakingly run the resume through every checker out there and begged for lots of feedback from others.
It's more accurately a proxy for the applicant understanding the context of various processes and the relative importance of a task. That may or may not be a skill you need in the job.
Finally, if the resume error rate is higher or lower than what you observe in the candidate's live, observed writing/coding then you should take note assuming such errors are a proxy for what you're looking for.
Typos could be anything, period. As could anything else.
Bring a resume with typos and your rebuttal and see what happens. The trouble you go through to defend typos will be far more work than fixing your typos. It's mostly automatic.
Reasoning doesn't excuse you from making mistakes, because excuses don't make up for those mistakes. In any professional work environment, your mistakes are someone else's paid job to fix.
OP is talking about attitude and mindset. The "I can reason away typos" mindset is not appealing. The "I will do anything to not make mistakes" mindset is.
I mean, that is still what we are talking about here right? Being attractive to employers.
Not to mention, people with dyslexia perceive words differently. The exact token representation of a word is less relevant in our brains than it is for those without it. Given that in programming words are just symbols for values or conditions, we view the words of code very differently from the words that convey linguistic meaning. The structure of words in a sentence is very different from the structure of words and symbols in code. It has never presented a problem for me when coding.
You're biased, of course, but exactly in the right way in my opinion. Personally this is a beautiful example of why representation matters. People without dyslexia (myself included) simply cannot know what people with dyslexia can and cannot do. Many thanks for sharing such an important point that I hadn't thought about before.
You're right about dyslexia adding challenges to being a programmer. I've worked with a couple of developers with dyslexia, and it is clear that being a slow reader is sometimes a challenge for them. But on the other hand, both of them are respected for their skills, doing well, and seem to be happy. So it is absolutely possible to succeed as a developer even if you have dyslexia.
I also know someone who works as a high school teacher and has dyslexia. You would think that would be a horror show. But as far as I know, it is actually a success story, where it works well both for her and for the school.
We also have the Norwegian prime minister, Erna Solberg, who is dyslexic.
So I would be careful about telling people what they can and can't succeed at, even if they have dyslexia.
Honest question: Would you toy with the idea of adding a small blurb right at the start of your resume stating this?
I'm thinking a box that acknowledges up-front that you have diagnosed dyslexia, which would explain any spelling mistakes, but that you're still highly competent as an engineer.
Not sure about others, but if I were hiring, I would welcome the honesty and instantly stop caring about any errors.
Note: I'm not an employer, but I have been asked to interview candidates.
I used to work with a friend who has pretty severe dyslexia. As far as I could tell, it was never an impediment. He was phenomenally productive. Aside from the occasional misspelled variable name, you would never have known about his dyslexia.
I would never misspell on purpose. That was a "something" accident. But I do see it now, after you pointed it out.
Oh well, 20 years ago I would have been horrified being caught out posting something with spelling errors in public.
Actually, I would probably not have posted in public at all.
But at some point you just have to stop caring about trivialities like that.
This has always bothered me because I grew up understanding the correct phrase to be "the proof of the pudding is in the eating" and assumed that the shorter form arose because of people who simply didn't know what it really meant.
So I googled it, and it turns out that the short form, "the proof is in the pudding," has been around since the 1920s (https://en.wiktionary.org/wiki/the_proof_of_the_pudding_is_i...). So it's got some legs! I still don't think the short form makes any sense, but at least it's not a recent phenomenon caused by clueless hipsters :-)
Most idioms don't make sense when taken literally. There is an understood meaning beyond the literal words. Monkey business, pull the wool over your eyes, cost an arm and a leg, chip on his shoulder, break a leg, raining cats and dogs.
And some idioms might make sense, but they're used in a way which deprives them of sense.
For example: "Well, that's the exception that proves the rule!"
There are a number of ways to make that make sense, such as the use of "prove" to mean "test", as in "proving grounds", so that's the exception which tests the rule, or, alternatively, the fact it is an exception is proof the rule exists in the first place. Both sensical interpretations.
Of course, it isn't used like that. It's used more like this:
A: "Women can't program!"
B: "Wrong. Look at Grace Hopper, to begin with."
A: "Well, that's the exception that proves the rule! Women can't program."
> Getting someone to look over your stuff and looking for spelling errors sounds like a simple and easy fix doesn’t it?
There are a lot of people who don't speak English as a native language. As a non-native speaker, I learn about very subtle rules of the English language all the time, where the author often explains that even many native speakers are not aware of these rules and thus write their own language incorrectly. It has already happened multiple times that I asked, say, 5 people on the floor of the institute where I work about some subtle grammar detail in English. Everyone I asked was much more fluent in English than me (though not a native speaker), but nobody could answer my questions. I even sometimes confuse native English speakers with my questions about subtle details of English spelling, grammar, or word usage (side remark: it is my impression that native speakers of German are often much more aware of the subtleties of their native tongue than native speakers of English).
So it is really not easy (even if you have English native speakers to ask) to find someone to proofread your English texts.
Yes, I pointedly confine my filter to spelling errors, not grammar, out of respect for non-native English speakers (or even native English speakers who grew up with other dialects). English grammar can be terribly difficult, even for well educated native speakers. Spelling, on the other hand, has a consistent reference.
What is "simple" is a matter of opinion, but in this case I think your opinion is pretty incorrect. The fact that the "rules" vary with region, generation, level of formality, and so on seems to fly in the face of your claim. Not to mention, most native speakers may speak correctly but cannot articulate a correct list of rules.
What I referred to with 'rules of the language' was the national standard version of the language. Those are publicly well known and promoted by a majority body. I argue that these rules are much easier to learn than rules of dialects.
I'll argue that the Korean spoken in Seoul, South Korea is vastly different from that spoken in Pyongyang, North Korea, and arguably harder to learn, because one can trace some of the phrasing to the old way Korean was spoken, a mix of Chinese-derived words with Korean. This is due to the written language having been invented by King Sejong several hundred years ago, when most written language in Korea was Chinese. It's harder because you have to trace the origins of these words to Chinese characters, and it's not just learning one language anymore - language rules and meanings cross over regions and boundaries that exist or existed in the past.
I think you're mostly talking about orthography (writing), and not language itself. You might be saying that because of the different writing systems, the spoken language is different today, which seems plausible. In any case, no matter how "mixed" a language's history is, I bet that for its native speakers, it is acquired as one language and feels like one language. When people learn and acquire language, they don't have to trace any origins of any words.
Anyway, we migrated from arguing about the claim "The rules of language are simple" to arguing that some language rules are simpler than other, which to me makes room for the claim that language is, generally speaking, complicated and its rules are complicated.
> I think you're mostly talking about orthography (writing), and not language itself. You might be saying that because of the different writing systems, the spoken language is different today, which seems plausible. In any case, no matter how "mixed" a language's history is, I bet that for its native speakers, it is acquired as one language and feels like one language. When people learn and acquire language, they don't have to trace any origins of any words.
I don't know how native speakers of other languages think about this, but for German (my mother tongue) I don't think this is true. The reason is that in German, loanwords often keep their original spelling. So if you want to know how to pronounce a loanword, you'd better have a good intuition for which language it might come from and how it is pronounced in that language. Among some "language nerds" in Germany it is even considered a sign of education to apply the rules of the source language to form the "correct" plural instead of the "Germanized" plural when declining the word.
An example: The commonly used German word for "the courgette/zucchini" is "die Zucchini (singular)". And it is common usage to use "die Zucchinis" to form the plural.
Language nerds will disagree: in Italian, from which this word is loaned, "zucchini" is already the plural, so the "correct" German singular has to be "die Zucchino", and "die Zucchini" must only be used to refer to the plural. And then even more hardcore language nerds will come along and point out that "zucchino" is masculine in Italian, so it is wrong to make it feminine in German, and thus "der Zucchino" is the "ultimately correct" German noun for this vegetable.
I am really not kidding.
OK, for more sane examples:
You'd better know that "Giro" (as in "das Girokonto") and "das Cello" come from Italian - otherwise you will pronounce these German words wrongly. The same holds for "das Trottoir" (an old-fashioned German word for "the sidewalk"; the common German word is "der Bürgersteig") - you'd better know that this word comes from French and how French words are pronounced. I don't even want to start with English loanwords...
The beginning of your comment is still talking about the writing system. If you are a kid learning to speak (or, in the rarer case, never learn the writing system of the language), then you don't notice that the orthography is inconsistent. Furthermore, suppose you are a literate native speaker of a language whose orthography is very inconsistent to begin with, due to extremely liberal use of loanwords introduced at different times, plus pronunciation shifts (I'd say English is a good example), where technically knowing the origin of a word (and when English borrowed it) can help you match its spelling to its pronunciation. You still feel like you're speaking and writing only one language. An even stronger example is writing systems that aren't phonetic at all, where no matter what you do you can't figure out the pronunciation from the orthography alone.
Your point about language nerds making up rules about how loanwords should behave is interesting, and true, I'd say, in most modern languages whose societies contain... academics. That's why you "can't" split infinitives in English, end sentences with prepositions, or say "pendulums" and "octopuses". Unlike German, English doesn't have gendered nouns (pronouns excluded), but people will still try. See "alumnus". The argument about "Zucchini" might exist literally in English, but an analogous one for "cannoli" certainly exists.
So I'd say your statements about German apply just as equally in English, and so perhaps other people can chime in whether they actually feel like they're speaking a "mixed" language, and at what point they felt like that. In a way, inevitably, almost all modern languages are "mixed".
I think you'll find with all your examples that there is contention. Aren't there German speakers who say "Pommes Frites" the German way, and to hell with whoever says that's technically not correct? In the cases where it's not exactly contentious, there's still a funny feeling. Why do people want to say "computer mouses" so much, even though they utter it and think afterwards, "wait, computer mice??" When you take a suppletive (irregular) form and put it in another context, it's up for debate (academically, and probably inside your brain) what you do with it. Not exactly the same as what happens with loanwords, but this demonstrates that when you look at language, you'll arrive at multiple answers for how things "should" be, and which one you actually say depends on some decision - by language nerds, or by yourself, consciously or unconsciously. Language rules are anything but simple!!
> Aren't there German speakers who say "Pommes Frites" the German way and to hell with who ever says that's technically not correct?
As far as I am aware, "Pommes Frites" is pronounced in German nearly the same way as in French. So this is clearly not true.
What is true is that if you abbreviate it to "Pommes" (which is rather colloquial; at least when I hear "Bitte einmal Pommes mit Mayo", I intuitively think of uneducated, fat (because of bad eating habits) people), the word is pronounced the German way. But, as I said, this contraction with German pronunciation is rather associated with uneducated people.
thanks for the correction. I certainly intended just "Pommes". Glad you understood my meaning.
Of course language that deviates from what's spoken by the educated (and powerful) people in the country is considered uneducated. I think you'll also find that in many languages. That's the same principle behind the people who think you are stupid for not saying "der Zucchino". I won't get into the stereotypes about fat people...
First, there may not be a tech shortage if teams are “weeding through a stack of a hundred resumes, for one or two or five roles....”
But, if there is, I'm assuming that candidates are just applying randomly to any positions that are available. The 'Tinder' strategy is probably a good one if the goal is to maximize earnings: if one is a '5', it makes sense to spam all of the positions. But if one is a '10', it makes sense to avoid interviewing altogether, because there are diminishing returns: too much work for too little potential gain.
So, the other question is if the hiring process is so ineffective, why aren’t companies innovating? There must be lack of competition in the market if companies are willing to wait several months and spend thousands of dollars to fill a role while also missing out on the best candidates.
The shortage is an imaginary by-product of two actual problems. One problem is bias and the second is talent gaps.
Bias is a factor in that many interviewers have no idea what they want from a technical perspective, or simply lack confidence themselves. When in doubt hire somebody exactly like yourself. This isn't objective or a valid representation of competence, but it is common.
Talent gaps apply when there is a surplus (plethora is a better term) of new developers and senior developers are purple unicorns. It takes time and lots of practice to transform a newb into a rockstar. Throwing money at the problem isn't a magic formula for providing extra time and practice. Years of employment experience isn't an indicator of quality either, as it doesn't necessarily mean practice solving hard problems.
When you solve for bias, suddenly there are a lot more competent candidates available. And if you realize that a single senior paired with several newbs closes the talent gap more quickly, suddenly you can hire functional teams without false expectations.
I think google did a good job of this in the early days. They got the word out on the street that their interviews were notoriously difficult. That probably dissuaded a lot of the random applicants.
I also read somewhere that some big bank (Goldman) just has a very circuitous, lengthy application process and the main point of it is to weed out the spray and pray applicants.
I think that really started around 2005 when they made a concerted effort to ramp up college hiring (and by extension, started advertising among the general public).
I knew someone who worked there in 2002. (Well, having worked there now, I know lots of people who worked there in 2002, but in 2002 I knew someone who worked there.) At the time, they were a small startup with a reputation for hiring only the best, the way you might think of Medallion or D. E. Shaw now. There wasn't a perception that you could just walk in off the street and get a job like in The Internship movie.
I think there probably is a tech shortage in terms of strong candidates. I've worked with some companies where there are people that are employed as "developers", but they can't tell you the simplest of basics; I don't consider myself a developer, yet my meager knowledge and understanding far exceeds the skills of some of these people. I see more of these types than you'd think could survive for long (much of my work is outside of SV and with non-tech companies). When these types enter the job market, the local maximum they've been in for some time tells them that they're developers and they think they can apply to some of these more sophisticated gigs. But there's a clear distinction between these people that mostly copied code from SO and elsewhere in the code base and got it to work vs. those that actually devised the methods they're using... and I think there are many more at this lower end of the spectrum than there are at the higher end: just the bar is so much lower.
As for companies innovating in this regard, it's kinda funny to think about it. When it first got started, I think LinkedIn was trying to be exactly this. Rather than encouraging people to add any and every person they ever thought about (or that asked), the idea was that you should only add those people that you know and trust. In this way, you could use your LinkedIn network as a network of trust for things like hiring or looking for work. That's changed I think both by LinkedIn wanting to have a large network of data to mine and by any number of participants that will add connections since they're really after a bulk network rather than a trusted network. In both cases the business case for such a network of trust doesn't seem to work well for the company providing the service and doesn't work well for the participants... so it breaks. Naturally, that's just one approach and there could be others, but I do think there have been efforts here.
> I've worked with some companies where there are people that are employed as "developers", but they can't tell you the simplest of basics; I don't consider myself a developer, yet my meager knowledge and understanding far exceeds the skills of some of these people.
Hypothesis: these people might not have knowledge, but they're very willing to schlep, and so—with enough effort and iteration—they can take a plan and turn it into working code.
And on the other hand, some people who are very "talented" on paper are nigh-unemployable because they just can't get things done. (They can easily tell someone else how to get the job done, but they have no drive or desire to schlep for themselves.)
Sure. I would agree, though at some point this approach runs out of steam. Larger problems tend to lead to failure, and the technical debt eventually demands to be paid. A not-insignificant portion of my work comes from sorting out the messes when such failures happen; I have a solid understanding of the business domains and the application types (I basically have good product-management-type knowledge of what the applications -should- do).
The other side of this is that, at this level of developer, these are exactly the positions where offshoring makes sense. If you're going to get people whose primary talent is a willingness to schlep just adequately enough to be employable... well, I can find that kind all over the world, and can probably do better in the skills category, for much less cash. So if I'm looking for domestic (I'm in the US) technical talent, I'm usually looking for those that are worth the higher pay commanded in the US... which means I am absolutely trying to sort out the schleppers for the most part. Naturally, offshoring has its own difficulties regardless of why you're doing it; so there are times when schleppers (I like that term :-) ) are sufficient, but at some point that strategy changes (growth, etc.)
I don't even disagree with this. It's just also an admission that you don't hire the best.
We can argue about whether there are 10x developers or not, but it doesn't matter for this discussion. If you assume that a developer is going to be with you for a year or more, then it starts to be worth a week or more of your (collective) time to improve the expected productivity of the people you hire by 10%. Not 2x, not 10x, just 10%.
If you aren't spending that time, then you're tacitly either admitting that you're not good at accurately assessing talent, or that you think any warm body will do equally well at your job. Either way, I don't think you're really disagreeing with the core of Dan's post.
If you have 100 candidates for a job, then doing a half-hour phone screen for all 100 of them is 50 hours of work -- a bit more than one whole work-week.
And if you're committed to not missing a candidate who might be really good but your heuristics would lead to a false negative on, how much does a half-hour phone screen really tell you? I mean, can you really reject more than half of the candidates based on a half-hour phone screen if you're committed to a very low false-negative rate?
What are you going to do, bring 50 candidates into the office for half-day interviews? You've now brought your sunk cost of interviewing from one person-week to 6 person-weeks. And you've also actually probably spent 3 months doing this process, so you've probably also delayed actually hiring someone for at least a month. 10 person-weeks worth of productivity starts looking a lot less good compared to a 10% productivity bonus.
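To make that arithmetic concrete, here's a throwaway back-of-the-envelope sketch. All the numbers (100 candidates, half of them surviving the phone screen, a 4-hour half-day onsite) are illustrative assumptions taken from the comment above, not data:

```python
# Back-of-the-envelope cost of a low-false-negative hiring funnel.
# All numbers are illustrative assumptions, not data.

candidates = 100
screen_hours = 0.5        # half-hour phone screen each
onsite_hours = 4          # half-day onsite each
pass_screen = 50          # suppose half survive the phone screen
hours_per_week = 40

screen_cost = candidates * screen_hours    # hours spent screening
onsite_cost = pass_screen * onsite_hours   # hours spent on onsites

total_weeks = (screen_cost + onsite_cost) / hours_per_week
print(f"{screen_cost:.0f} h screening + {onsite_cost:.0f} h onsites "
      f"= {total_weeks:.1f} person-weeks")
```

And that total doesn't yet multiply the onsite hours by the number of interviewers in the room, which is how "6 person-weeks" quickly becomes 10 or more.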
Given how unreliable interviews can be (both phone and in person), you'd have to solve it with another mechanism.
At my last company, we "screened" ~200-500 candidates per job with a work sample. It was really hard to put together, but it worked amazingly well once we had it going. We did a structured interview with everyone who passed the work sample, and it never felt like a waste of time.
If you use a technique that cuts a lot of people, but cuts more bad people than good people, you'll still end up with plenty of good people. The difference in comparison isn't between the average and the good person who was cut, but between the good person who was cut and the good person who wasn't cut. What is the chance of the heuristic cutting someone 10% better than the best person not cut?
That was not my reading of it. To me, given a large enough pool (~20+), you can statistically assume you'll get a top-10% candidate with this method; at least that is what I read.
Still, this method at least has some structure to it. As opposed to the anecdotes that the rest of the thread is.
(Disclosure: I'm one of the engineers and technical interviewers at Triplebyte.)
This comment is spot on: most companies are forced to create cheap heuristics for filtering engineering applicants, and most of these quick-to-evaluate rules will have surprisingly high false positive and false negative rates. You'll miss out on great engineers you auto-reject, and spend hours interviewing people you shouldn't.
Part of the reason for unfairness is that everyone doing hiring is creating these heuristics independently.
Better filters are possible -- we've found that it's even better when you avoid resumes entirely and go background-blind! But these are no longer simple "if typos >= 2 then reject" or "if GPA > 3.x" heuristics, and are of sufficient complexity that they're beyond the scope of each individual company or hiring manager to develop independently.
If these heuristics are terrible, shouldn't these companies just throw away applications at random? At least this way you don't delude yourself in believing your heuristics are rational.
You joke, but this is very similar to the Secretary Problem's optimal solution [0] [1]:
> The optimal cutoff tends to n/e as n increases, and the best applicant is selected with probability 1/e.
So, you deliberately reject a first cohort of applicants, and then pick the first person you encounter that's better than the best one you rejected. Obviously, in the joke that's not possible as they are neglecting to interview them, so I think the sibling comments are correct about it being a skewering of the hiring process, but it's neat that there's such a close parallel.
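A quick Monte Carlo sketch of that n/e rule (my own illustration; the function names and parameters are made up, and lower rank means better candidate):

```python
import math
import random

def pick(ranks, cutoff):
    """Reject the first `cutoff` candidates outright, then hire the first
    one who beats the best of the rejected batch (lower rank = better)."""
    best_rejected = min(ranks[:cutoff])
    for r in ranks[cutoff:]:
        if r < best_rejected:
            return r
    return ranks[-1]          # nobody beat the benchmark: stuck with the last

def success_rate(n=100, trials=20000, seed=1):
    """Fraction of trials in which the n/e strategy hires the single best."""
    rng = random.Random(seed)
    cutoff = round(n / math.e)              # the n/e rule
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))              # rank 0 is the single best
        rng.shuffle(ranks)
        wins += pick(ranks, cutoff) == 0
    return wins / trials

print(success_rate())   # ≈ 0.37, i.e. roughly 1/e
```

Running this with n=100 lands near the theoretical 1/e ≈ 0.368, which is the neat part: you win with the best candidate about 37% of the time even though you threw the first 37 away unexamined.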
And recruiters and managers wonder why applicants spam hundreds of cookie cutter resumes and cover letters instead of painstakingly crafting each one. Or why they get annoyed at 'job portal' applications where we have to retype our resume into the various little text boxes the web application provides.
It's just a numbers game if you don't have a relationship with someone who can get you a new job.
> refuse to hire unlucky people
Not getting hired there seems to be the 'lucky' move. Arbitrary hiring guarantees, over time, some pretty terrible characters that a proper hiring process would filter out. If you're pissing away 50% of your potential and, say, there's a 20% chance of getting a 'good' person, now there's a 10% chance of that. Over time, a place with such hiring standards would most likely build a subpar corporate culture. The rare 'good' hires would probably leave much earlier than they need to because there are so many more 'bad' hires. Eventually the 'bad' hires rule the roost, and then people act surprised that gross incompetence, shameless dishonesty, and various forms of harassment become the norm at many companies.
I worked at a place that didn't take hiring very seriously and this is exactly what ended up happening.
Except that isn't how statistics works. If you have 100 applications, 20% of which are good, and drop 50% of them at random, you get 50 applications, but you still have the 20% good/bad application rate.
Of course, you get 50% fewer good applications in total, but ratios != totals.
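A throwaway simulation of that point (numbers chosen to match the parent's example of a 20% good rate and a 50% random cull; everything here is illustrative):

```python
import random

def good_ratio_after_cull(n=100, good_frac=0.2, keep_prob=0.5,
                          trials=10000, seed=0):
    """Average fraction of 'good' applications left after dropping
    applications uniformly at random."""
    rng = random.Random(seed)
    good_kept = total_kept = 0
    for _ in range(trials):
        pool = [rng.random() < good_frac for _ in range(n)]   # True = good
        kept = [g for g in pool if rng.random() < keep_prob]  # random cull
        good_kept += sum(kept)
        total_kept += len(kept)
    return good_kept / total_kept

print(good_ratio_after_cull())   # stays ≈ 0.20, not 0.10
```

Because the cull is independent of quality, the surviving pool's good rate stays at ~20%; only the absolute count of good applications halves.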
Sure, but usually you don't want to hire _all_ of the good applicants - the reason you get bad hires is the false positive rate, not because there aren't enough total good hires in the pile.
Perhaps; don't take it too literally. I suspect that comment was made out of desperation - a satirical remark meant to criticize their woefully inadequate hiring process.
From what I've seen in academia, this is a surprisingly adequate process. I've seen them do phone interviews, Skype interviews, divinations, psychometric testing, on-sites - and then none of their unicorns is the right colour. The committee declares a failed search and tries again the following semester.
You'd think that many, if not most, of the people in the applicant pile could do an adequate job, perhaps with some training, considering what the job market looks like in the life sciences. Instead they play precious, and then they have to make do with some last-minute lecturer, and that at times plays out really badly.
If you get a thousand resumes for a few jobs, then reading/analyzing/grading resumes is a useless indicator of talent. You have to resort to some other process.
You could probably do just as well with random selection as relying on a heuristic. Most people with relevant training and experience are capable of doing the job, most of those are capable of doing it extremely well when supported with training - probably better than "the best", because their cup is not full.
I've yet to see a resume screening process or interview process that reliably weeds out the 1-5% of bad picks. I have seen screening processes reject perfectly good candidates.
A company could probably do very well by letting go of the idea that you need to find the best candidate in a stack of 100 resumes, and instead just pick anyone with the needed qualifications and support them with training. As the article mentions, the current approach performs poorly and wastes time and money.
We can recognize that hiring is a noisy process and still seek to raise the signal:noise ratio. As long as management doesn't excuse existing heuristics as good enough because all 'heuristics are going to be biased and stupid in some ways'. Like, half of engineering is designing new & improved heuristics.
Dan's point is largely that for each new & shiny technology you add to your stack, and then add to your hiring filter, you shrink your hiring pool substantially. You can do new & shiny, or you can hire day-1-productive staff experienced in your tech stack. Many firms look at their budgets and choose the latter, but if you don't have budget to train staff, you probably don't have budget to hire experienced staff either. So while the recruiting slogan is 'we only hire the best', in practice you get 'we only hire the tiny fraction of new grads who've been researching our exact tech stack in their spare time'.
This reminds me of a story about hiring (from a book whose name I can't remember offhand). The author was hiring a team in India, and facing an onslaught of thousands of basically identical resumes. How to filter them? So he added "Ruby" as a requirement - not because it was required by the project, but because at that time, only programmers who really cared about their craft learned Ruby. This got rid of 99% of the junk resumes.
So, I think we need to discuss the legal concept of 'disparate impact.' We've decided that, as an employer of 15 or more people, it's your duty to avoid even accidental racial bias without justification.
Normally this isn't a problem, as a screening criterion like experience in language X is a bona fide job requirement, so even if such a criterion filtered out minorities disproportionately, it's not a liability. Tacking on filters like 'must know programming language not used at this company' is going to be a huge stretch, and if your African American candidate pool is filtered out more regularly, that's a lawsuit waiting to happen.
So yes, crafting heuristics is hard, but managers' paychecks are larger for a reason.
If I ever get to the point where I'm in charge of a large-scale hiring process, I absolutely want to use name-free resumes, to reduce gender and racial bias, plus other steps that are effective as I learn them.
That said, the African-American candidate pool in America is so vanishingly small that subtle tech bias at hiring time isn't the issue - the heuristics to filter them out started before they ever finished high school. (This has actually been a point of interest for me for a long time. I mentioned elsewhere a friend who has terrible spelling but a very successful career. He also happens to be black, which has been something else to overcome. We've talked for years about ways to encourage more black students to go into IT, which we both believe is a route to middle-class success with substantially less racial animosity than they'll face in other fields.)
Your approach is actually the problem. Hiring should be very, very important - maybe even the top priority. The two most important things in the company should be: hiring and sales.
In that respect, you need to invest in hiring the same way you invest in sales. For example, sales has much, much better heuristics and algorithms, while hiring is all about weird and random rules.
Hiring is the most important thing a company ever does. Bad hiring means you'll get a bad company. In the US you can partly balance this by firing diligently, but that has its own problems.
Tech staff shouldn't waste their time on hopeless cases, but assuming your applicants aren't all about the same, the return on hiring the best one could be huge.
I don't like resumes as 98% of each is probably bullshit unless the candidate is a truly amazing rockstar with 10+ years of valid experience. As a result any heuristic applied to resumes is just as worthless as most candidates.
When I interviewed in the past, I would literally stop reading once I saw contact information and save the rest for during the interview. If the contact information is not the first thing on the resume it goes in the trash.
For valid heuristics I start by telling candidates something they don't want to hear. If that means they drop out then good riddance as I have 30 other people who are more interested in the position.
Examples:
* When I did this years ago I would email JavaScript candidates to set up an interview time and I would tell them jQuery isn't available in this job. Half would immediately drop out. Good.
* For modern JavaScript candidates tell them the DOM will feature heavily in the interview and there will be no MVC framework. Half will drop out.
* A rough equivalent for Java is Spring MVC or forcing architecture considerations.
You can filter people pretty fast just by focusing on foundational simple vanilla code questions. It is astonishing how many unqualified people apply for jobs whose resumes are a complete waste of paper.
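To give a concrete sense of the level, a question like "reimplement Array.prototype.map with a plain loop" is about as deep as these need to go. A minimal sketch (the function name `myMap` is just for illustration, not any real screening question):

```javascript
// Hypothetical foundational question: reimplement Array.prototype.map
// using only a plain loop -- no libraries, no built-in map.
function myMap(arr, fn) {
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    // Pass (element, index, array), matching the built-in callback signature.
    result.push(fn(arr[i], i, arr));
  }
  return result;
}

console.log(myMap([1, 2, 3], x => x * 2)); // [ 2, 4, 6 ]
```

A candidate who can't produce something like this without a framework is exactly who this kind of filter is meant to catch.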
The current employer would only take contractors for new hires (let agencies find the people) and would give out some interview questions to the contract agencies so that candidates knew what to study for before showing up for the interview. Even still only 9 out of 73 interviewed candidates were selected (who knows how many resumes were filtered out).
You've mentioned several ways to reduce the number of candidates that you consider for each role, but I don't see how any of it leads to better quality hires.
It also seems to remove the people who may or may not actually understand the fundamentals of the language they claim to have experience in, rather only knowing how to cobble together frameworks and/or copypasta code.
Candidates that need a coding crutch are less valuable than candidates that don't. Likewise, candidates who are so petty or so completely lacking in confidence that they can't perform without their preferred toy are less valuable and less flexible than candidates who can perform in the actual technologies.
If this is a measure to eliminate candidates who are ultimately less interested in doing the work then so be it.
I fail to see what's good about your approach. I suppose if your company has a lot of pointless bullshit that doesn't apply to the actual problem, then yeah, you're selecting for people who either are willing or have no other alternative.
It was pretty clear that the primary motivation was to eliminate candidates who either lack confidence or competence, so that you can spend more quality time with those candidates who care more about the respective skills.
I don't know you, but I am guessing from your highly defensive tone that you likely fall into one of the categories I described in the previous paragraph. If this means of candidate elimination is emotionally offensive you should ask yourself why. When candidates exceed the number of openings somebody must be eliminated.
I highly disagree. And no, I don't fall into one of those categories; I'm just not willing to put up with random and arbitrary bullshit. Which is what those things are. Yes, I could operate under all those conditions. I choose not to, because those conditions suck ass.
While someone must be eliminated, you need to ask yourself why you're selecting for those who would opt to work under random and arbitrary bullshit when they don't have to.
>I would tell them jQuery isn't available in this job. Half would immediately drop out. Good.
Unless the job description made clear why it wasn't available, I would take this as a sign of a toxic workplace and avoid it. It isn't about jQuery being banned, it is about the type of workplace where jQuery is not only banned, but such a ban is mentioned early in the interview process. I guess if you want people who are okay with such a workplace, then you are using an appropriate heuristic.
> I would take this as a sign of a toxic workplace and avoid it.
You avoided explaining this. If eliminating unnecessary abstraction is offensive to you then I would be more happy to eliminate you from employment consideration at the earliest possible moment so that I don't waste your time.
Good way to filter out candidates that understand tradeoffs or are willing to question potential fallacies. Technically no abstraction is necessary. But it's often useful.
>If eliminating unnecessary abstraction is offensive to you
If the hiring individual dictates what trade offs are made so far in advance that it happens in the interview, it means they aren't just eliminating unnecessary abstractions. Premature optimization isn't generally a good thing. Premature optimization showing up in an interview as a way to get people to quit is even less likely to be a good thing.
>As a result any heuristic applied to resumes is just as worthless as most candidates.
This is the disrespect permeating the industry: a candidate is either in the rock-star 2% or worthless like the rest, and yet everybody successfully hires only rockstars. It's a mathematical paradox. Though seeing how you mention 10 years of experience as something important, I'd guess you're young, and that would explain your gimmicky and capricious approach to candidate selection and interviewing.
Have you tried recursively applying your own statement about heuristics to your own heuristics?
The only thing you are doing is declaring that your own simplified view of the world is truer than other people's simplified view of the world. At least most people are self-aware enough to know that the simplifications lose information, but that it's somewhat unavoidable given the volume.
Actually, I have. It isn't about my opinions. It is about the underlying technologies and languages more directly. It is about understanding why things work the way they do and how large applications come together and how to organize things.
Yes, but: Being hand-delivered a resume from a trusted colleague is a filtering mechanism. It won't guarantee a phone call, but it's a better recommendation than a recruiter trying to gain a commission on this week's "really awesome superstar."
If you are so flooded with resumes, you really should be more specific in your job ad. By that, I mean you should describe the position you're looking to fill in detail, including the uncomfortable parts.
People will self-select way more. As it is now, many job ads read like an attempt to sell the position - but then you are flooded with people you don't want (else you would just pick 5 at random to interview).
When I've been in this situation, I wasn't the one making the job description.
In general, I feel the hiring process focuses on the wrong things. Buzzword compliance is a major problem. Experience in problem spaces is much harder to capture in a resume than a list of technologies that can be software-grepped for the benefit of technically clueless recruiters and HR.
Similarly, I'm not a fan of most technical reviews. They tend to focus on gotcha questions and think-on-your-feet trivial programming exercises that have virtually no bearing on real life as an engineer. To be fair, they can help, but they often exclude excellent engineers who aren't as good at the on-your-feet game as they are at the big picture game.
When interviewing candidates, I tend to pick up on an interesting experience on their resume, and ask them to talk about it. What went right? What went wrong? What was your role in the solution?
Heuristic: Candidates who blame others for failures, especially when they don't praise others for success, get a big black mark. I don't usually like working with people who are looking for someone to blame, regardless of their skill.
Heuristic: When someone starts talking about some project and really geeks out about it, getting into the details of how the solution worked and why it delighted them, gets a huge plus. First, it means they were actually at the core and understood what was going on. Second, it says what they care about.
Sometimes, I'll get them talking about their best experiences as an engineer, or their worst, just to hear the geek come out. It's very telling.
And I have usually decided on a candidate in the first five to ten minutes of an interview. If it's no, it's no. If it's yes, the rest of the interview is mostly about validating my initial impression.
I think that asking overly hard technical questions is bad for the reasons you said, but if an engineer gets offended over a simple "implement linked list" or other basic exercise, something is wrong with that engineer. Either they are masking some lack of basic knowledge or they are too much of a prima donna. It is perfectly OK for an interviewer to check basic skills, since all the interviewer has to go on is what you say about yourself.
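For reference, the "implement linked list" bar being discussed is roughly this - a minimal sketch, not any particular company's actual question:

```javascript
// A minimal singly linked list -- about the level of "basic exercise"
// an interviewer might reasonably ask for.
class Node {
  constructor(value) {
    this.value = value;
    this.next = null;
  }
}

class LinkedList {
  constructor() {
    this.head = null;
  }

  // Append at the tail in O(n); keeping a tail pointer would make it O(1).
  push(value) {
    const node = new Node(value);
    if (!this.head) { this.head = node; return; }
    let cur = this.head;
    while (cur.next) cur = cur.next;
    cur.next = node;
  }

  // Walk the list and collect values, mostly useful for checking your work.
  toArray() {
    const out = [];
    for (let cur = this.head; cur; cur = cur.next) out.push(cur.value);
    return out;
  }
}

const list = new LinkedList();
[1, 2, 3].forEach(v => list.push(v));
console.log(list.toArray()); // [ 1, 2, 3 ]
```

If writing something like that feels beneath someone, that says more about them than about the interview.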
But if I had enough social skills to tone down the "geeking" within 10 minutes of an interview with a stranger, you would not hire me? That is just a side note, though. The older I am, the less I like to frame myself as a "geek"; there are some cultural aspects I have grown to dislike. A lot of "geekiness" is actually trying to be cool rather than anything else.
People learn about and optimize for those heuristics. Those resumes will on average get better and better over time until half of them are almost perfect. Those people all spent considerable time and energy on optimizing their resume. You get people who look good on paper. They will continue to excel in your company (on paper). If your job is to look good on paper. Yes, go on.
You should at least look good on paper. I'll not hire you because you have a wonderful resume, but I'll interview you. And I'll not interview you if your resume looks bad.
(truth be said, I'm not the one selecting resumes right now, and sometimes I get garbage that I wouldn't even interview)
I think it's fair to keep in mind that the developers you want don't have a lot of experience interviewing (they tend to get snatched up as soon as they're on the market, if not before), and the ones you definitely don't want might have more experience interviewing than doing their actual job.
To be perfectly honest, I can't see proofreading your resume to be that big of a burden. And if you can't bother to do that, what else can't you bother to do while on the job?
"Let me randomly filter these and hope for the best" is the essence of your suggestion for screening candidates.
It doesn't sound very good. It sounds like you should be investing more time into interviewing more people instead of throwing away folks who could excel, but fail an arbitrary heuristic.
That is to say, instead of throwing away 19 out of 20... don't throw away so many. Obviously you don't need to call in all 20 people, that would be a huge waste of time. But your approach seems too sensitive to the heuristic for the tradeoff of saving a few hours of time to find a person whom you'd hopefully employ for years. I wouldn't take that decision lightly, especially with a small company, though I suppose you can always let them go if it doesn't work out.
The thing is, it's not "random". I have a good reason for my filter - it represents attention to detail. And yes, it will exclude the occasional good candidate, but it will exclude far more bad candidates.
If I have 100 resumes for two roles, and I can, through heuristics, knock it down to five, then I'm starting with enough "good" resumes to stand at least a chance of filling the two roles. Mind you, that's still about a day's worth of work for a manager, and five interrupted afternoons for the senior/lead engineers doing the interviews.
On top of that, the good candidates are going to have other companies competing for them. If my response is "Well, we like this candidate, but let's spend two weeks interviewing another twenty people, just to be sure", my odds of missing the good candidate are pretty high.
Why is it that people don't run a spellcheck? It's rare for me to see even official internal documents without several spelling mistakes, even after they have been approved for publication. Just turning on the spelling checker in your favourite word processor, editor, or even web browser will go a long way toward catching most typos.
The grammar checker in Word is also worth a try. Did you mean "P. G.'s maxim" when you typed "pg's maximum"? :-)
I noticed that even these little HN text boxes have a grammar checker. Earlier, I carelessly wrote "the the", and it underlined the second "the" as incorrect.
And yet, the comments in response to my OP are full of easily detected errors. It's like you have to make an effort to screw it up.
In my experience, spellcheck leaves a lot to be desired. Even on a finished document that I've manually verified as being perfect in terms of spelling and grammar, there's usually still a sea of red and green underlines offering incoherent suggestions in English as dictated by Google Translate.
If the authors of the documents you mention are anything like me, we're so used to ignoring false positives that the net result now permits minor errors to slip through.
As someone who does a huge amount of writing, it's always surprising how many errors still end up slipping through even though I do at least one re-read when I write something. It's basically impossible to do a good proof-reading job on your own work, especially if it's something you've gone through and reworked multiple times like (presumably) a resume.
If you're going to filter on typos, you may as well put a hackerrank/leetcode question as the initial filter, or invent your own question. That will save you and your team more time since you will only get respondents that pass and you don't need to review every resume yourself for typos.
I don't know about the original poster, but I'll immediately spot every error in spelling, punctuation, and usage in a ten-second skim of a page. My eyes just go right to them.
And yes, I consider that it's difficult to write correct English. But if you're willing to send out an erroneous resume without getting it competently reviewed, I have to suspect the depth of your commitment to quality overall.
> And yes, I consider that it's difficult to write correct English. But if you're willing to send out an erroneous resume without getting it competently reviewed, I have to suspect the depth of your commitment to quality overall.
Is this theory or practice?
People who individually craft CVs and cover letters (I do) go through a very resource consuming process.
Having all the CVs and cover letters externally reviewed makes the process even more resource consuming.
It's crucial to differentiate spell-checking from proofreading.
The parent specified spell-checking; there is a reason to be strict on this, and it's that spell-checking a document takes less than a minute with a word processor, so one really needs to be careless to leave more than a couple of spelling mistakes.
Proofreading, on the other hand, is much more demanding (finding people, coordinating, reviewing and applying changes), and often gives feedback that may not be very useful, or that even conflicts between reviewers.
I don't think proofreading more than a handful of CVs/CLs is realistic (except for the CV-maniacs).
If someone ever ships a bug, I'd have to suspect their commitment to quality overall too, and that's way more relevant than a typo.
I wouldn't want anyone to pay for a resume review because the advice you get from that can be quackery or even conflict with other articles about resume writing (apart from 'no typos', they can pretty much agree on that).
Really, the point was if you're going to use a superficial filter like that, you may as well automate it to something like a code test so you can save yourself even something as small as 10 seconds x N resumes per year.
It is obvious that you did not scan your comment for any errors for 10 seconds before you submitted it. Try a little game and find the error you have made.
I don't want to be too critical, but you have a number of grammatical errors in your comment. I believe this shows how easy it can be to get at least 2 typographical errors in even a short piece of communication. I am sure that this comment will infringe on someone's standard.
really? you don't think it's evidence that we invest varying levels of effort into things we care about? thusly, a candidate with typographical errors is closer to internet-comment-level interest, effort-wise, in the position.
Claiming something is easy to do and then making the same errors simply diminishes the strength of the argument presented. The comment comes across as "do as I say, not as I do."
it's fun for text-based debate, but aren't you just ignoring the greater contextual situation for the sake of argument? the distance between anonymous internet comments and professional discourse is wider than what you're implying
Why should the distance between anonymous internet comments and professional discourse be wide? If your comment is about professional matters then you should have enough professionalism to make your point appropriately.
i mean, from my understanding of the english language, these kinds of things are usually contextual based on the medium, not the content of the message. i suppose everyone has their different interpretations, though
I'm focusing on spelling, not grammar (I consciously exclude grammar from my heuristic, out of respect for non-native English speakers). Are there any misspellings in the OP?
I took it as conversational and the errors are in that vein. When speaking there is a specific flow in the words used. When writing that same conversation, specific typographical conventions are used to exhibit the spoken word flow.
While I agree that it would be irresponsible to exhaustively interview, isn't a key metric for a _good_ hiring process how efficiently you can find/place candidates who _do_ excel in the role?
By that turn, shouldn't a responsible hiring manager constantly be evaluating and improving their heuristics so that their front-line filtering is less biased and stupid in the ways that materially affect the quality of hires?
I don't think the point is that we can remove heuristic filtering from the process. Heuristics are necessary in hiring but this doesn't justify being dismissive of a discussion about whether a certain class of heuristics is really serving the needs of the organization.
So optimizing for making resumes go away, and resigning yourself to shouldering the long-term burdens the article describes. That's what you're describing and that's certainly one way to do it.
Honest question - how does this relate to some of the anecdotes in the OP - where someone referred to recruiter/HR as a desirable hire by a current employee gets denied because <some clueless resume scan criteria fail>?
That's the real kicker in the whole article... when "We want this person!" doesn't work because crustimoney proseedcake.
In my experience recruitment consultants will often try to "improve" the CVs they submit, even if it's just to fit a template. I've seen many spelling mistakes sneak in here.
You may well be rejecting candidates based on their recruiter.
Now you're promoting "who you know", which is irresponsible as it can pass up many qualified candidates who don't happen to be in the same circles as you.
Does "trusted network" mean alumni clubs? Church groups? Gym friends? What would be an un-biased trusted network to filter candidates with?
The thing is you can't avoid bias. Even filtering based on resumes is a form of bias.
Networks and referral are the way many folks find jobs and they're especially important for candidates who have educational pedigrees and career trajectories that are different enough to cause HR drones to reject them in less than 2 seconds.
Now you have a defacto "If you're not in the boss's religion or congregation you can just piss right off" thing going on which is toxic for the workplace. Even if you were, say the same Christian denomination as him, are you willing to drive to the boss's wealthy suburb to go to the wealthy suburban church every Sunday solely for your career?
Nothing says an organization doesn't care about merit or diversity like hiring through church.
What about having to drive to the wealthy suburb and pay an initiation fee and an expensive monthly membership to join the gym your boss goes to? How is that any different?
In my experience, too much of that easily brings in existing biases, too much "this is just how X and I did things at our last company", and sometimes toxicity from previous companies too.
Where I work, interviewers are sent candidates without knowing who is and isn't an employee referral (except for a couple of cases where the candidate has told me or something), and employees who were recommended by another employee statistically perform much better. But we intake a lot of people who just applied too. Referrals are good, but they can't be everything.
The downside of hiring only from "trusted networks" is that you are building an echo chamber.
If you care about diversity (as most companies like TrendCo claim to), then hiring only referrals from your trusted network is likely to lead to entire teams of people that ARE JUST LIKE EACH OTHER. When I talk about diversity I refer more to diversity in viewpoints and experiences than I do to race, gender orientation, sex, etc. Whether we like it or not, our friends and networks are made up of people who think and work exactly like we do; that is just how humans are.
So hiring only from networks like that can lead to a huge disservice to the company, especially a growing startup company that usually tends to already suffer with the "diversity of thoughts" problem.
In addition, you are back to the original problem of potentially missing out on valuable candidates because you narrowed your search to only people inside your network. Who is to say that your network contains the best engineers in the world? By narrowing the search to your network, you are missing out on likely better candidates who might come from a normal resume search.
From a pure statistical perspective, let's say you get 300 resumes the first week after posting a job (that's about what I would get, depending on the job, when I was last hiring). To make your life easy you cut it off after only a week. Well, you have 300 potential candidates here that we can assume have self-qualified themselves based on your amazing job posting. So let's assume at least 80% of them are qualified for the job. That's 240 qualified candidates of which you need to choose 1. How many people are realistically in your network that are available for new jobs when you have an opening? Most of the time I was getting 2-3 at most for any given job coming in as referrals. Compare that to the 240 other candidates you had and you are MORE LIKELY, STATISTICALLY speaking, to find a better candidate out of the resume pool than the referral pool. It's a larger sample size, simple as that.
Hiring only from trusted networks is really a cop-out for the hiring manager. It makes their job easier because they can blindly narrow down the search to a handful of people without truly inspecting other talent out there. I would argue it's more irresponsible than what the original comment suggested.
Plus, in my experience I have hired some great people who came in as referrals. But if I look at my top 5 worst hires ever, all of them have been referral candidates. The truth is that most of the time the friend who referred them will prep them so they can coast through the interview - telling them not to mention certain things and to focus on the things our company is looking for in new hires. It creates an extremely biased hiring process.
Tl;dr - Don't pat yourself on the back for only hiring from your network. You are likely missing out on the best talent by doing so, and creating an echo chamber in the process. Plus you stunt the future growth of your own personal network.
Trusted network is even worse for bias; at least from my experience with shops that have tried both.
It's how you kill diversity, arrest your growth, and let fallow the connections to recruiting pipelines that might help you escape once you realize how deep you've gotten.
You also tend to reconstruct failed teams. I've worked at places where people hired all their old friends who (collectively) failed to deliver on the last big promise, or executed poorly. You end up spending your referral bonus money on rebuilding cliques and their biases toward failure.
This was the most depressing part to me: "Another person I know is in a similar situation because his group won’t talk to people who aren’t already employed."
It's a long story, but both my wife and I have most recently been at places, in a small-medium city, that have had huge turnover in the last several years due to mismanagement (her entire unit, except for her, left twice while she was working with them; we've lost about 30% of our employees in the last few years). She was pressured to resign after returning from her maternity leave because, as far as I can tell, they figured out they could cut her position and offload the work onto others while she was gone. So she's stayed home to raise our child. I haven't lost my position, but every week brings a new clusterfuck of horrors that I'm running out of ways of coping with.
The problem is, now she's wanting to return to work, and has a job offer, but it's in another state, and it's created a situation where basically I have to give up a job and probably career, or she has to run the risk of her career dying off because of lack of opportunities in the area.
So if I take time off to watch our child while my wife rebuilds her career, and maybe move to a different career myself, somehow I'm penalized? Pardon my language, but fuck that.
I'm also in my early 40s, so there's that.
I'm starting to feel like my life is ending, really, like opportunities are just vanishing left and right. It's odd because up until this point I always felt optimistic, like there was always something out there for us. My wife comes from an elite school, we both graduated with Ph.D.s from a program in the top 5 in our field (according to dubious rankings), if you're into that. I don't mean this narcissistically, but I just feel like there's this huge discrepancy between what I know my wife and I are, in terms of work ethic and competency, and what our opportunities are.
There's not a lot of empathy in these comments, so I just wanted to say that I've been there and I know it's scary. It's a shame that hard-working people with an education can't find opportunities outside of a handful of metro areas. I moved to the Bay Area for the job density, but I understand why people with families aren't eager to do the same -- I'm single and childless and most days I wonder why I keep living here.
Best of luck with everything. If you don't want to freelance and don't mind travel, DM me for an opportunity.
> "Another person I know is in a similar situation because his group won’t talk to people who aren’t already employed."
This point alarms me a bit. It doesn't even have to be a family circumstance. What about departures for your own mental health, or sabbatical?
So if I take some time off to learn a new stack (I'm leaning toward Elixir), and build a few projects with it in my free time, I'm a lesser candidate?
This being said, I do have a friend who did take a year off to pursue music. However, he's what they'd call a "rockstar" and has freedoms most of us don't.
Wait, so let me get this right:
1) Your wife gave up her career to take care of the kid
2) Now she's found a job and is asking you to do the same
3) Additionally, you consider your job a clusterfuck with incompetent management
I hope you and your wife get what you are looking for, something just as good, or something better. I don't have any advice for you beyond trying to find opportunities in your hardship, and taking good care of yourself and your family. Even though I know that the former may ring hollow, distant, and lacking in empathy, doing that (and probably more) helped me persevere onward to a new opportunity.
Is there a possibility for remote work? Even part time or lower paid would help ease the stress of the long-term outlook. And frankly, as someone who has been there and done that, being the at-home spouse can be frustrating/isolating/etc., and having something outside - some goals and duties to focus on - helps a lot. I mean, if you are frustrated by the discrepancy now, it will only get worse. So having a goal, a part-time job, or a small business (even if it earns no money) really helps a lot to deal with all those frustrations.
"So if I take time off to watch our child while my wife rebuilds her career, and maybe move to a different career myself, somehow I'm penalized? Pardon my language, but fuck that."
Agreed. Anyone who wants to implement some kind of asinine policy like that should instantly be forced to spend the next 6 months unemployed, and then try to find a new job.
I don’t think it’s a policy as much as it is common sense. Unless the OP was producing output during that time (which he doesn’t suggest he would be doing), he _will_ be less attractive to hiring managers. Why would someone want to hire somebody who is letting their skills atrophy when they can hire somebody who’s been growing their skills actively?
There are a lot of candidates out there and to ignore your competition for employment is to do yourself a great disservice.
Again, it’s not a policy (you keep using that word). Are you a hiring manager? If so, would you really hire a person who hasn’t done anything work related for over a year over someone who has?
At the end of the day employment is a mutually beneficial deal for the employer and employee. It’s not some sort of social safety net. Performance counts.
Small kids are a lot more work than many people realize. Ditto building a business. Trying to do both at once without being financially independent first is not a choice anyone should make if they can help it.
Building a business, particularly a consulting business, is pretty easy and doesn't take financial independence, especially if it's IT related. Incorporating is easy, you can do it online. Then call up ye olde consulting firms and have them find business for you until you build up a customer base that calls you directly when they need work done. Do this enough and soon you'll have so much work you'll have to farm it out to acquaintances (your first employees).
Typically if you have an organization that only talks to people with current jobs, that rule is for people sending in cold resumes. On the other hand, if they know who you are, you probably won't have the same filters applied.
Let's say you go ahead and move and take time off to watch your child. Two years down the road you want to start looking for work. What you've done with those two years is going to really matter. If you've invested time in activities like writing about your field, speaking about your field at conferences/meetups, creating an online course about your field, contributing to related open source projects, teaching a class at a university, etc. you are going to be in a very different place than someone who just checked out for 24 months.
Being currently employed sends certain signals to a prospective employer, but there are other things that can send much stronger signals. The unknown is still going to be scary and I don't know your field, but your Ph.D. can probably open a lot of doors to make yourself visible to the market.
> What you've done with those two years is going to really matter.
Caring for their child is what he'll have done with those two years. It's an exhausting full-time job. These suggestions are great for someone who's unemployed, but unrealistic for a full-time caregiver.
Well there were a lot of suggestions and there are plenty of other things one could do that I didn't suggest. Your chances of getting a job after 24 months having done absolutely nothing related to your career are much lower than if you do some things like I mentioned--even if you are only spending 10-15 hours or so a month on it.
I'm not downplaying how much work it takes to care for a child, but I don't think it is helpful to downplay the need to stay at least somewhat active in your field if you want to get back into the workforce someday.
No. But he is right that if you do some activities in the meantime, you will be in a very, very different position than if you don't. Maybe the biggest difference is the level of confidence, but it is such a huge difference that it is worth mentioning.
I'm in no way implying that taking care of your children is less important than your career. I think what he is talking about doing is great, and it is a great investment to make in his child.
Leaving the workforce to take care of family and not doing anything to keep yourself marketable is "checking out" when it comes to your career. You are going to have to be proactive in making sure that you are doing things each week that will put you in a good place to return to the workforce IF that is what you eventually want to do.
> So if I take time off to watch our child while my wife rebuilds her career, and maybe move to a different career myself, somehow I'm penalized? Pardon my language, but fuck that.
No offense but this reads as fairly entitled. Why would your value continue to rise as an employee if you take time out of your career? You will be competing against people who are bettering themselves without taking > 1 year breaks.
It’s not any employer’s responsibility to plan your life, you must do that yourself.
Your premise seems to be that people taking one-year breaks aren't doing something to sharpen the saw. I don't know about you, but I've added more to my skills during some times that I wasn't full-time employed doing study and personal projects than during some times that I was (and yes, that's a sign you should leave a position you're in, but this doesn't always happen immediately).
It's really unfortunate to see how common it is to look down on people who spend their time differently. There are some tasks I wish to undertake that likely mean I would have to leave the office job experience for about a year.
Given how this may affect my future hire-ability, I doubt I will pursue it. I can't really afford to have my resume thrown in the trash because I felt something else was worth devoting my limited time to, even if only temporarily.
Very true. He doesn’t mention anything about personal projects or self-directed study though. Only that it’s unfair that leaving his career would be unattractive to future employers.
Because they are not. This is a competition with a set of rules, like it or not. If you think those rules are silly and arbitrary, then you can start a company that does not have those rules and hire people who do not want to follow them.
Would your company succeed because it does not have such silly rules? I have no idea.
He didn’t say anything about working on his own app. He said he was going to take care of his child. I’m not saying he won’t be doing that but it wasn’t in the OP.
You've got a Ph.D., so you're probably very specialized. This definitely closes a lot of doors for you, but it's you closing the doors.
I don't envy the years when I could look through the want ads and say to myself, I can do that, I can do that, I can do that. I prefer now, even though it's a bit of a crapshoot whether someone is looking to fill a position with my skill set.
I disagree with this article, to some extent at least. Perhaps it's specific for trendy companies and startups. Most of which are just looking for the next shiny object, true.
However, when you look at companies that move the needle in different industries, companies that have repute, market share and profitability, they couldn't care less what is trending these days. They look for domain expertise and excellence.
I have friends who work at Renaissance, the hedge fund. The company couldn't care less about your grasp of the latest ML framework or Keras or whatever, as long as you know what you are doing and are exceptionally good at it.
Having worked, full time, at Microsoft, I'd say the same goes there and at Apple, Oracle, even Google for the most part. They don't care about what is trending; just prove your worth.
I think this conclusion was drawn from the companies that make the most noise but are actually not major players in industry. The same companies that are hot for a minute until they meet their eventual demise.
The most robust, relevant and profitable companies out there basically say, 'F* trends, show us you're worth your salt.'
It's the hippie companies that ruin it all yet dictate social media conversation...for the 2 minutes their company is hot, then it dies.
It’s funny you use Renaissance as an example. They won’t give me the time of day because they “don’t hire people from finance”. Which is just as much of a bullshit reason as the ones from the start of the article.
> They won’t give me the time of day because they “don’t hire people from finance”
RenTech is a weird firm. The following is a fictional account.
Out of school (I was a finance/engineering double major), I interviewed at RenTech. They told me this was my last chance, that if I worked a day on Wall Street they wouldn't want me because they didn't want that culture (I think it's more about ensuring you have no industry connections outside the firm). They're geographically isolated, encourage employees taking mortgages to buy local homes and frown on industry interactions outside the firm. If you try to leave, they will enforce their non-competes (which are legal in New York) and sue your new employer [1].
I was wary of that need for control then. Today, I think it's morally wrong. Few people can predict what will matter to them ten years down the line. If I worked at RenTech today, I'd be depressed knowing (a) my work went to enrich the likes of Robert Mercer [2], and (b) there is no exit.
Renaissance Technology has been around decades and is one of the biggest hedge funds in the world. I don't think it qualifies as an early-stage company.
Re: renaissance. Of course. I'm highlighting where the 'we don't hire finance' trope originates. Nothing to do with Renaissance.
For all of you too young to remember:
In the 'dot-com' era - many 'new companies' still hired 'top down'. They would hire an 'executive team' first, and then maybe developers.
Often, a CFO etc..
Think PetStore.com - two MBA's hiring 'others to do the work, as workers do'.
The notion of 'all hands on deck founders' etc. was still novel.
Many people still wore suits.
So - 2000-ish - those attitudes evolved - and the 'CFO' for a very early stage company became obsolete. CFO/finance types are generally not required at the most early stages of a company wherein the issues are 'money in, money out, money in the bank'.
So, the slightly aggressive hipster/startup trope of 'we don't hire finance types' I think evolved essentially out of this new understanding of how early stage companies work.
I'm not disagreeing with your point, but I think it's a bit out of context here. The parent comment was specifically talking about what they heard from Renaissance Technologies, not some hipster startup.
How does taking 'no finance' out of context and applying a much more general comment that in no way applies to the very specific original context help?
Sure they may lose some great candidates that way, but the reality is that they don't want people in finance because:
1. They have preconceived notions about how finance "should" work
2. People not from finance are driven by other things
This is not to say that everyone from finance falls under this, but it's a good way to weed them out without inviting a billion applications from people in finance.
So then by your own admission they’re not hiring the best due to this heuristic.
One thing my experience in HFT has taught me is that technical ability is only loosely correlated with profitability. So I wouldn't conflate Renaissance's success with their hiring practices.
>So then by your own admission they’re not hiring the best due to this heuristic.
Picking up on your use of the word heuristic, "hiring the best" is very much an optimization problem. Specifically, there's a signal detection problem inherent to hiring in which you want to maximize your hits (selecting the best candidates) while minimizing your false alarms (hiring someone who turns out to be a bad candidate).
There's another layer to this optimization problem, which is that you want to minimize costs, both in time and in money.
As such, there are bound to be many heuristics that are close-to-optimal. I don't know whether excluding people with a background in finance is one of them (in fact, I suspect it's not), but the use of such a heuristic is not prima facie absurd.
In fact, using (good) heuristics in hiring is a feature, not a bug.
The problem here is that the heuristic is (I think) bad.
Two things:
1. What was as important as, or more important than, technical ability for HFT programmers' success in increasing profitability for the company? That was an interesting statement: can you elaborate?
2. That fund might have these heuristics that GP talks of, and for them, the people they hire may well be the best for them. As in, motivated and attitudinally oriented toward what they want, and more able to work free from the assumptions of _knowing_ how it's done in finance. They may be wrong in your case, but that's how heuristics work. To me, that's less of a bullshit case than trendy languages etc.
Trading always has this trade off in terms of (dumb and fast) vs (smart and slow). In the past few years, I think it's been more beneficial to be fast enough, but very smart. So, in this sense, technical ability only takes you so far since "smarts" come from domain expertise or, plainly, just creativity. Team composition matters, since your technical guy needs to be complemented. The objective, of course, is making your team take your organization towards the holy grail of (smart and fast). I'm not sure if this directly answers your question, but perhaps it describes the scenario in which technical ability isn't everything.
I do not think assembling a team for HFT is easy at all, especially now. There are a lot of good reasons for being very particular and selective. I am not sure industry outsiders who are just applying understand the dynamic or, in many cases, why a seemingly great candidate is rejected.
I don’t know much about the company, but a firm with extraordinary long term returns with a penchant for secrecy, cult-like hiring practices, and all of the employees invested in their extraordinary investment vehicle sounds sketchy to me.
You have never worked with someone from Oracle. They do not try to hire remotely the best; they hire cheap: anyone who can churn out barely functional code, use their customers to functionally test it, and often take half a fiscal year to provide a bug fix for you. Oracle is now more of a technical holding company than a software company; aside from their db, they merely pick up/purchase new companies and wring them out for revenue. Anything Oracle writes themselves is usually terrible.
> You have never worked with someone from Oracle. They do not try to hire remotely the best; they hire cheap: anyone who can churn out barely functional code, use their customers to functionally test it, and often take half a fiscal year to provide a bug fix for you.
They have a huge services division and that's the same formula used by all the big players. It's a numbers game for billing and they shoot to make it work by having a few highly competent people cover up for the C- players.
> Oracle is now more of a technical holding company than a software company; aside from their db, they merely pick up/purchase new companies and wring them out for revenue. Anything Oracle writes themselves is usually terrible.
Spot on. The database is incredible, though the use cases I'd recommend it for over open source alternatives have shrunk to nearly nothing. Outside of that, the Oracle App landscape (Financials, HR, ...) is hilariously terrible, especially factoring in the prices paid for it.
This is something I've heard over the years, but I'm not sure if it still holds true. (I work with Oracle databases daily.)
Oracle DBMSes are very robust, to be sure, and you can expect them to run forever. But in my mind other databases seem to have caught up, and there really isn't a compelling reason to choose Oracle over other commercial databases anymore (also, Oracle pricing is a big deterrent).
In terms of performance, Oracle's licensing prohibits any benchmarking, but anecdotally I haven't found it to be particularly performant for most of my queries. Oracle used to be known for the innovative under-the-hood algorithmic improvements to the database, but lately I haven't seen anything too exciting, whereas SQL Server is getting better every year with new innovations (columnstore indices, in-memory features, Polybase, in-database Python/R computations, etc.).
> use cases I'd recommend it for over open source alternatives has shrunk to nearly nothing.
I agree, though I would say for heavy transactional use cases, I would still choose a commercial database over an open-source one. However, among the commercial databases out there, Oracle would be my last choice. It has too much legacy crud that has to be worked around. The only reason Oracle is still around is that (1) in the enterprise, no one gets fired for buying Oracle, and (2) fungible expertise in their services organization ensures business continuity, albeit at a lowest-common-denominator level.
Yes. SQL Server and others used to have the same terms, but they were later removed.
The original intent was to prevent non-neutral third-party benchmarking by biased agents that sought to discredit their products through contrived setups (which admittedly can be a problem for any product), but to enforce it through licensing seems a bit heavy handed. Oracle does allow benchmarks that are favorable to them (see TPC-C benchmarks).
I noticed you said "nearly nothing", what are some exceptions? I'm curious why anyone would start using Oracle now. The only reason I would see to use it is because you are reasonably dependent on it already and cannot migrate easily.
> I noticed you said "nearly nothing", what are some exceptions? I'm curious why anyone would start using Oracle now. The only reason I would see to use it is because you are reasonably dependent on it already and cannot migrate easily.
Ultra high end OLTP system leveraging Oracle RAC. It's a very specific use case that goes beyond basic replication where you require ACID compliance, HA, multi master (in this case via shared disks and distributed locks), all atop an MVCC database. The MVCC implementation of Oracle does not require a VACUUM style operation which is another plus for a 24/7 environment.
I know of a couple financial services companies that have this type of setup though I've yet to find one that (IMHO) really justifies it. In all cases they've got boatloads of money to throw at a problem and the guys in charge of making the tech decisions don't mind having one of those boats sail off to Larry Ellison.
Interestingly even in this use case I believe PostgreSQL would be my choice, although as you said it is a silly setup. I had not taken into account particularly narrow implementations that rely on specific Oracle pieces, I could see some going that way if that's the case.
My guess: government or ultra-big-corporation orders for very large sums, excessive for the required task. Part of the sum goes straight into the pockets of the people in charge of ordering it. This won't work in all countries, but in some it certainly will. Same with SAP and other overpriced solutions.
Other idea - "prestige". "We are using top end solutions, unlike the lowly startups, we are serious businessmen see."
> From what I saw Hyperion FM got relatively better after the Oracle acquisition.
In my experience the opposite seems to be true. I have heard people describe Hyperion as "pretty good" prior to the acquisition (this was a while ago, before my time). Today, if I talk to anyone about it, sentiments range from negative to very negative.
Almost every Oracle acquisition has suffered the same fate.
Counterpoint: I currently work with someone from Oracle, and they are one of the best programmers I know. "Usually terrible" doesn't mean "always terrible", which is the point of the article.
Technically the same, just with different start dates. But there's a catch: suppose your fiscal year starts Oct 1st, and you request help on August 15th. The answer is often "we're at the end of the fiscal year, but we have approval to start work on that in October".
So if it takes half a fiscal year, depending on when you first say you need it, it can be much longer.
Except all of those companies put an unreasonable amount of weight on degrees. Good luck getting Google to talk to you if you didn't graduate from a top college, let alone without a degree at all.
The people I know who never got a CS education yet ended up programming got their break interviewing at Google. Google seems comparatively more willing to hire those without degrees since there's so much confidence in the interview process.
Not true. I've got a Ph.D. in theoretical physics, am very math heavy, created a number of algos for my work over the last 25+ years (in physics sim, bioinformatics, systems management/orchestration, etc.), run sessions at an ACM conference, yadda yadda yadda.
Two google interviews, and nothing. From what I hear from other people I consider way smarter than I, they also got nothing.
Google has a much-copied process, but as the creator of one hugely valuable piece of software noted after being rejected, maybe their filter isn't quite as good as they think it is. Talking to a number of absolutely brilliant engineers who didn't get hired, it likely has nothing whatsoever to do with talent, algo knowledge, mathematics, etc. There are other factors.
Being an above 40 guy probably didn't help me, google and others seem to have lots of trouble with ageism.
Dan's article was not specifically about being the "top", rather, what does the "top" mean in context, and how do people judge. What is the opportunity cost of doing this? As he points out, as I point out, it can be very high.
The smartest programmer I met in my first decade of work, was a person who had a high school diploma. No college degree. The guy was brilliant, personable, humble. He is quite successful now, and still doesn't have degrees. Chances are, he doesn't have formal education around the math/algos, but has picked up everything he knows.
At the end of the day, hiring is something of a crap-shoot. Past performance is not a guarantee of future performance, either negative or positive. You are after passion, intelligence, fit, experience if it exists (re-inventing wheels can be time consuming/expensive if you are forced to do it, and getting a guide who has been down that path can save you making some mistakes/time/money).
I know people are telling themselves that google has a good process, but honestly, it looks like it enforces homogeneity more than it brings in needed talent. I am not sure this is a good thing. Poor replication of their processes is rampant throughout the industry. I am not convinced this leads to positive outcomes.
Just kiddin'. I think inverting the binary tree probably means mirroring it. I had an interesting Google interview as well a few years ago where I aced the automated coding test, but then the first human interviewer didn't get why I said that regular expressions run in linear time :) Our backgrounds were just very different.
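For what it's worth, "inverting" a binary tree in the interview sense usually does mean the mirror swap described above. A minimal Python sketch (the `Node` class and names here are mine, not from any actual interview):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def invert(node):
    """Mirror a binary tree in place by swapping children recursively."""
    if node is not None:
        node.left, node.right = invert(node.right), invert(node.left)
    return node

# A tiny tree:   1            1
#               / \    ->    / \
#              2   3        3   2
root = invert(Node(1, Node(2), Node(3)))
print(root.left.value, root.right.value)  # 3 2
```

The whole thing is a few lines, which is part of why the question became shorthand for how arbitrary whiteboard filters can feel.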
> I've got a Ph.D. in theoretical physics, am very math heavy, created a number of algos for my work over the last 25+ years (in physics sim, bioinformatics, systems management/orchestration, etc.), run sessions at an ACM conference, yadda yadda yadda.
> Two google interviews, and nothing. From what I hear from other people I consider way smarter than I, they also got nothing.
Then you and your friends weren't fluent enough with algorithms. That is the point, they don't care about all of your degrees, years of experience, conferences etc, they care about your fluency with maths and algorithms. This means that even a person with a shitty background can get hired at Google while a person with a stellar background gets rejected. Should you have gotten hired? Probably, but their system lets them find a lot of diamonds in the rough who wouldn't get hired anywhere else which is why they use it.
> Then you and your friends weren't fluent enough with algorithms. That is the point, they don't care about all of your degrees, years of experience, conferences etc, they care about your fluency with maths and algorithms.
I didn't fail those portions. Actually did quite well on them. So did my friends.
You are making a number of invalid assumptions, starting from the assumption that their processes are fundamentally accurate or correct. My supposition is from the viewpoint that all systems are fundamentally flawed, and the goal is to minimize risk associated with a flawed system.
I know it is generally hard to acknowledge that google does things wrong, but ... IMO (and I am fairly sure I am not alone here) ... they have a number of significant issues that they haven't quite moved past yet, and this is one of them. Remember, they started out with brain teasers, and school pedigree. The new system isn't demonstrably better IMO, but it helps them convince themselves that it is.
> I didn't fail those portions. Actually did quite well on them. So did my friends.
Then I don't see your point, what are you saying caused you to fail? I have a physics degree from an unknown school, learned to code in my thirties and got a job at Google by just doing well at their algorithms and maths questions so it is definitely possible to get in without ticking any of the hip boxes.
His point is that Google's and everyone else's hiring process is subject to large amounts of randomness and capriciousness. Google themselves have at various times mentioned how their hiring scores don't strongly correlate to performance.
Don't feel because you got in that you are some ordained snowflake. If you had interviewed on another day or with another group within Google you very possibly wouldn't have gotten in.
There are many variables at work when it comes to getting hired and hiring.
> they care about your fluency with maths and algorithms
"Maths" is a red herring -- a physics PhD who's still active in academia will definitely be very fluent in maths.
It's all about "algorithms", but I think a lot of software people have tunnel vision about that. There's a lot of fancy terminology you pick up in a CS degree; requiring people to know that filters out a lot of potentially good candidates, unless they've studied CS in their own time.
That's fine if the special CS terminology is absolutely essential for all programmers. But is it really? Realistically, 90% or more of your time as a programmer is spent working on other stuff (automation, testing, designing friendly APIs, catching sneaky bugs, scripting, just generally plumbing stuff together). If you're on a team, does every single team member need to have a great understanding of data structures? Or is it just nice-to-have, specialized knowledge?
> That's fine if the special CS terminology is absolutely essential for all programmers. But is it really?
I'd argue that it matters more for Google than for most employers. The combo of their scale, their large amount of custom infrastructure, and their desire to be able to retask engineers on a whim means that individual engineers have a pretty good chance of touching code where the choice of big O could make or break a product.
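To make the kind of big-O choice meant here concrete, a hypothetical example (mine, not Google code): deduplicating a sequence with a list scan versus a set lookup behaves the same on small inputs but diverges badly at scale.

```python
def dedupe_quadratic(items):
    """O(n^2): each membership test scans the growing result list."""
    result = []
    for item in items:
        if item not in result:  # linear scan per element
            result.append(item)
    return result

def dedupe_linear(items):
    """O(n): average O(1) membership via a set, preserving order."""
    seen, result = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_linear([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

Both return the same answer; at millions of elements the first one is the kind of code that falls over on Google-scale data.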
Mayyybe, I'm not so sure but you could be right. [Edit to add: even at Google, most programmers are not doing that kind of stuff most of the time.]
On the flip side, though, I think many programmers (even programmers who are up to date on their CS) are fairly weak at mathematics. We think we're good because we can, you know, invert a binary tree, but how about figuring out an appropriate filter to smooth some data, or verifying that some randomized process is unbiased?
For something like digital filters, if you have basic knowledge you can just look up wikipedia for the details. But the same applies to data structures and big-O!
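The simplest instance of the "basic knowledge plus a Wikipedia lookup" filter described above is a moving average. A pure-Python sketch (the window size is arbitrary, and this boxcar filter is just one of many smoothing choices):

```python
def moving_average(signal, window=5):
    """Smooth a sequence with a boxcar (moving-average) filter.

    Returns only fully-overlapped windows, so the output is
    len(signal) - window + 1 values long.
    """
    if window > len(signal):
        raise ValueError("window longer than signal")
    out = []
    running = sum(signal[:window])
    out.append(running / window)
    for i in range(window, len(signal)):
        # Slide the window: add the new sample, drop the oldest.
        running += signal[i] - signal[i - window]
        out.append(running / window)
    return out

print(moving_average([1, 2, 3, 4, 5, 6], window=3))  # [2.0, 3.0, 4.0, 5.0]
```

Choosing an *appropriate* window (or a better-shaped kernel) for a given noise profile is exactly where the maths knowledge the comment mentions comes in.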
A lot of companies (including Google) can benefit strongly from people with good maths or stats skills. Do those people also need to be strong in CS? Or if not, do they need to be siloed into a separate hiring process, and placed in separate departments?
I reckon CS, maths, stats and other specialized academic training should all be treated as nice-to-have skills, of varying importance depending on the team balance and project requirements.
When Microsoft hired me in 2006 they didn’t even ask about my lack of degree. I’ve also interviewed at Google and Amazon. Amazon didn’t ask either. Google probably asked, but it didn’t disqualify me.
Do you think the difference now is that the market is more saturated with developers since it started to become a little more of a hot-ticket, and carries less of the old 'nerd' stigma?
Also, I'm curious about how you managed to even get your foot in the door at such large organizations and not just get swept out by a filter at the gate. I'm assuming, of course, that such large companies employ some kind of applicant tracking system (an assumption which may be wrong).
Large companies need a constant supply of new talent, so they develop multiple filters that work in parallel. Being cut by one filter doesn't mean you won't be grabbed by another.
I never perceived there to be any degree filter at the BigCos, or less so than at narrower software companies. In house recruiters from every big5 have contacted me regularly through LinkedIn for as long as I can remember, and still do.
I didn’t graduate from a top college and google has pinged me numerous times to interview. I always turn them down because I like my company and do not want to move. I may also fail the google interview, but they certainly talk to me.
I didn't graduate from college, period, yet recruiters talk to me. It's funny how quickly they backpedal their interview offers when I mention that, however.
Most of my SWE coworkers at Google did not graduate from a "top college". Not sure where you're getting this impression from. There simply aren't enough annual graduates of the top programs to fully populate all tech companies, and there are plenty of people from other schools (like me) who also do well.
It is, however, a filter that will be applied against applicants, and from what I've heard, it's more than enough to trigger the "we'd rather have false negatives than false positives" filter if you're not in the lucky minority.
Even if it's never used as a filter by Google, their hiring practices just don't apply to the rest of the industry - Google can afford to pass on 99.9% of the high quality talent that applies. Other firms who try this are only crippling themselves.
I strongly disagree. Most of my coworkers are not from "top colleges". Neither are most of the people I interview. Even for Google, it is tough getting interviewees who do very well. Given that, it'd be ludicrous to discard most interviewees off the bat because they didn't come from a "top school". The interview questions and relevant job experience are the true hiring bar. Possession of a relevant degree, and the granting institution for said degree, is far down the list.
"Except all of those companies put an unreasonable amount of weight on degrees"
I don't think it's 'unreasonable' at all to put strong emphasis on education.
Sure, at the end of the day it's possible to be great without it, but a good education is a pretty strongly correlated factor with so many things.
'Being cool', I don't think, is correlated with much at all, unless the product is super consumer-facing and the inner culture is going to have to match the outer culture on some level.
Touched my first computer almost exactly 40 years ago.
I can troubleshoot most hw/os issues with one ear and eyeball tied behind my back.
Can program decently in many general domains.
Never went past my associate's because the work was more compelling.
Knowing how many PhD's never work in their field, and how many Master's get earned and laid aside, I can't agree with you at all.
I think you might be imagining that correlation.
" I think you might be imagining that correlation."
Do you think the entire Silicon Valley has missed this? It's not 'my' correlation.
People who have studied CS are probably more likely to be better at CS than those who have not.
It doesn't prove or mean anything in general, and your personal situation is not relevant: of course there are tons of 'non-degreed' great techies out there. Nobody is denying that.
I've hired a lot of people and there's no doubt that you get better luck with degreed than non - and even school rankings matter.
In Canada, for example, you get consistently strong tech recruits out of U Waterloo. Impressively so. Not always but usually. Other schools in the region - much more hit and miss.
There are advantages to going to college over a boot camp or self learning. In fact, CS and SE degrees are ones that pay for themselves. If you're not willing to put in the work, that tells me you won't last long at Google.
I had to leave college early due to a family thing and start working to contribute. That was eighteen years ago, and I've put in so much work because I run into attitudes like yours.
Of course, in the past twenty-four hours I've created an app and potential side-business using a technology stack I had absolutely no experience with, just to prove to a potential employer that I can do 'front-end' stuff for them.
The average developer can't even CSS, don't even play with that 'degrees are the ultimate arbiter' mess.
What do you mean by that? How sophisticated is it to "do CSS"? When I've dealt with it, you basically look at the inspector and at your CSS libraries of preexisting classes, and do a combination of attaching existing classes and writing new ones with custom styles until you get it all to look right.
He means do it well. There's a lot to doing CSS well, and frankly I just don't have any interest in learning all the ins and outs. It's not a fun platform to work in, it's very fiddly, and it's not exactly programming either.
There's an art-form to doing amazing pixel perfect web designs, and I really respect the people who are good at it. That's not me though, and it's never going to be me.
I had run into a similar situation, though less time has passed than in your case.
I've had to do similar things to find work: started young out of interest and put it aside for a few years, then spent time volunteering (UNOV), freelanced, did free freelance work for small businesses/local orgs, built pet projects, and just built things in general and studied.
Maybe it prepares one better as an engineer than a scientist in the field — the application of a science vs the theoretical and experimental work in developing the science.
---
To expand on the parent subject...
I've had interviews since where I've been able to lay out some of the things I've designed and built, both independently and while working for a major media company, and still had interviews last hours upon hours, only to never receive a call notifying me that they'd decided to go another route, all over some odd questions with no definite answer, like:
Implement a poly-fill for bind by extending the Function object
Or,
Explain what this CSS does ("if you don't know that's okay, but don't get it wrong – that's 'bad'"):
.box {
display: flex;
min-width: 1024px;
min-width: 52em;
margin: 0;
padding: 10px 15px 10px 15px;
}
.box--item {
flex-shrink: 2
}
.button {
appearance: none;
border-radius: 3px;
background-color: blue;
}
.button--green {
background-color: green;
}
In the case of the CSS, no DOM context was presented, and the class names I'm giving provide far greater context than the test I'm referring to did...
Only to have every answer I gave either go unremarked upon or be met with a flat "no", even though the answer could not have been definitively wrong. It was just not the preferred answer.
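For what it's worth, the bind question has a fairly standard answer shape. A minimal sketch, written here as a standalone helper (the name `myBind` is mine, not from the interview); a real polyfill would assign this to `Function.prototype.bind` and, like MDN's version, also handle the case where the bound function is invoked with `new`:

```javascript
// Sketch of what a Function.prototype.bind polyfill does:
// capture a `this` value plus any leading arguments, and return
// a wrapper that prepends those arguments on every call.
function myBind(fn, thisArg) {
  // Arguments after fn and thisArg are partially applied.
  var boundArgs = Array.prototype.slice.call(arguments, 2);
  return function () {
    var callArgs = Array.prototype.slice.call(arguments);
    return fn.apply(thisArg, boundArgs.concat(callArgs));
  };
}
```

So `myBind(fn, ctx, a)(b)` behaves like `fn.bind(ctx, a)(b)`: `fn` is called with `this` set to `ctx` and arguments `(a, b)`.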
I was also informed you would be paid more just for having a Masters or PhD, regardless of your contribution. They had graphs and were happy to show me.
I'm talking smaller, trendier companies here though. Buzz words and egos seem to abound. It never made me bitter, but it just seemed so bizarre-o. Not really for me.
> Of course, in the past twenty-four hours I've created an app and potential side-business using a technology stack I had absolutely no experience with, just to prove to a potential employer that I can do 'front-end' stuff for them.
This tells me you're wasting your time and potential. There are other ways to get a job.
They didn't request it, I just didn't have any direct professional experience with React/Redux or the Python ecosystem, which are the lion's share of front-end positions in my area. I've gotten a lot of immediate 'passes' from recruiters unfortunately.
Thanks for your reply though, I agree with you but then I run into reality and the need to pay bills.
Where exactly did I say that was not the case? There are plenty of options for getting a college degree, and there are ways to get a job at these companies without one. However, if you're going to try to get one by sending in your resume, expect to get turned down without a degree.
Beyond that, software development is plagued by this mindset that college degrees are useless and that only self-taught people are worth looking into. I've worked with many college-educated people who can't write good code, and I've worked with many self-taught people who really don't understand what they're doing. The best have always been people who went to school and then continued learning after the fact.
>Beyond that, software development is plagued by this mindset that college degrees are useless and that only self taught people are worth looking into.
Ugh, yes. I've worked for a leading SIM and smart card manufacturer and they required a 4-year degree to even get in the door. I interviewed countless people, including some who were able to answer some quite technical questions that others couldn't. And many of those had to get disqualified since we found out during the interview process that they didn't have a degree. There were so many degree holders there that didn't know what the hell they were doing, and it was obvious.
One doesn't prove the other, that's my point. The only thing that can be said is that a degree doesn't prove if someone fits a position or not, it only proves they've finished an education.
I've also been in the field for many years and held several interviews. My takeaway, to put it a bit clumsily: "knowledge is easy to fix; mindset and experience, not as easy."
As I told someone over Twitter the other day: mindset isn't something that can be readily taught, which is why STEM is still so in-demand with a gross labor shortage. For most people it's something you either innately have or don't have.
But yeah. I was reinforcing your point, not being counter.
I was referring to "If you're not willing to put in the work...". Some people are more than willing to put in the work, but there are things in life that might hinder them from doing so at the time. People take a break in their studies for (various) personal reasons, and some end up working in the field despite never finishing the degree.
"...expect to get turned down without a degree" and
"...software development is plagued by this mindset that college degrees are useless ..."
My point is that there's a middle way. Only looking at degrees is stupid because it doesn't guarantee anything other than that they got a degree. With that in mind, it doesn't make sense to instantly disqualify people without a degree either.
I would like to meet the person who knows everything! I have met some 20-year-olds who think they know everything! But as the old saying goes, "The more I learn, the less I know."
I have two kids getting their CS degrees right now at top-50 colleges. I am not impressed.
A friend of my son goes to UNFS (North Carolina State Film School). For four years he makes films. After the first year he has to choose between photography, directing, writing, etc. Then he spends three years fine-tuning his craft. Great program!
CS programs seem to be a little of this and a little of that, none of it coordinated. Quality of teaching varies: occasionally a great teacher or TA, much more frequently poor teachers or TAs.
I remember, at 14/15, seeing some of my older cousin's course notes from his computing course at UMIST (one of the 4 good places to do computing then) and thinking this isn't much different from what I am doing in my CSE class (and CSE was the vocational track for those expected to leave high school at 16).
I'm also self-taught, but I don't see a degree as valueless. There's a lot of concentrated knowledge to be had in a hurry from a good (probably even a decent) program, and while I've yet to run into a case where I could not pick up what I needed on the fly, I certainly have many times felt it would be an enormous timesaver, and thus make me more efficiently able to do things I enjoy doing and that get me paid, to have had that knowledge preloaded.
On the other hand, I didn't start my adult life a few dozen grand into the red with student loan debt, which, from what I gather through long acquaintance with many who did, has very considerable advantages of its own. So I'd have to call it a tradeoff - but, then, that's my whole point: it's not accurate to say that either option is strictly preferable to the other.
I'm self taught (as in, I've taught myself a lot outside of school, in fact I left school early and only finished my degree after several years in the industry), and I still learned quite a bit from my computer science degree.
It forced me to study a wide variety of things I probably wouldn't have spent much time on if on my own (i.e. OS programming, making my own compiler, prolog, finite automata, assembly programming, etc).
Also did A.I., 3d graphics, network programming, and various other things, but I probably would have learned that on my own to a certain point (I developed video games for awhile).
Pretty much. Managed to work on about eight games in a row that didn't make back their investment for three separate companies while I was in the game industry. They failed for various reasons, and a few of them really should have been successful, in my opinion, but oh well. Bad timing (releasing at the same time as heavy hitters), bad marketing, bad luck with reviewers, getting screwed by platform holders, aiming at the wrong audience, bad choice of difficulty, bad choice of which idea to pursue, technical issues that weren't in the testing environment and not discovered until release, overly restrictive and expensive update patching policies with platform holders, all sorts of fun reasons.
I assume you are talking about US undergraduate degree (if not, can you clarify). If so, most good colleges give you ample opportunities for additional learning -- go to grad classes, do research -- those avenues are usually very easy to open.
I probably agree that for each student at least one of those 4 years is usually wasted; but this is different than whole program being a waste for a person.
Yes the US undergraduate degree. I have gotten into some research and got picked up for an exceptional internship which have been good. It doesn't change that I could have passed out of all the actual curriculum on day one though.
I guess "community taught" would be better than "self taught" since I learned from so many people online. But I had been programming for 10 years before school, and while I was in the military (over 6 years) I took advantage of many MOOCs from MIT, Stanford, UNSW, etc. and many books from Knuth to most of the No Starch Press library which I understand is not the case for traditional students. It is frustrating though, especially not being just out of high school.
If you're taking MOOCs from MIT, Stanford, UNSW and others can you really say that university CS curricula taught you nothing? They probably taught you quite a bit since, well, you took their courses.
You were just overprepared for an undergraduate degree by the time you got around to trying for it. For anyone else taking a similar path, specifically the military part, you can often knock out an associate's degree with that amount of military experience. Do it; you can skip most of the core curriculum, focus on the CS part, be done in 2-3 years (less if summer courses are available), and be in grad school in your 3rd or 4th year of full-time college education.
Yes the Navy "trained" me by saying "here are 20,000 pages of documentation on the E-2 Airborne Early Warning aircraft avionics, now go fix it".
That has nothing to do with my CS studies, of which I have learned nothing from college; I could have CLEPed a BS degree on day one if it were possible and moved on to a master's, where I should have been initially placed.
You ever notice how most people never tell you what state they're from unless you ask them? It's almost like they don't believe their state of origin to be all that important to their personal identity. Some of those people might even be from Texas.
But the hombre from Texas is not one of those people. He will drive for 30 minutes, past dozens of ordinary, mundane restaurants, to reach the nearest Texas-style barbecue restaurant, run by a fellow expatriate Texan, who shoulders the burden of living outside of Texas for the sake of fellow Texans who tragically cannot be in the best state all the time, yet need to regularly consume bits of Texas in order to survive.
Ironically, Texas has so many people in it, all constantly surrounded on all sides by Texas, that you cannot easily identify that guy until he actually leaves the state.
Also, the thing that's wrong with Texas is football. That is easily 120% of my problem with Texas.
How many classes did you take in college? Because my first few CS classes I didn't learn a whole lot from either. It wasn't really until the 200 and 300 level classes that I started learning anything substantial that I hadn't picked up on my own beforehand.
I only have a couple classes left. I had already studied it all before starting, materials up to graduate level are mostly available openly online now. I did expect more from the 300-400 level classes and lost a lot of motivation when I realized they were not as advanced as I expected.
I did take some 400-level electives completely outside what I am interested in and had not studied, which were great though. One professor for those asked at the beginning what each person expected from the course; I mentioned what I am actually specializing in, and he consistently gave examples of applying the material to what I do, which was excellent.
If waiting for 2-3 years before you're able to learn anything substantial is considered normal - and we're not talking about monasteries and such - then I think there is something seriously wrong with the system.
I was already writing various types of software, had taken a class in high school, gone to summer camps, etc., before I reached freshman year of college. There were plenty of students who had never even touched a programming language before, and it was all brand new to them.
And I still learned things my first year (especially in ancillary classes, I only took three computer classes my freshman year), I just didn't get much out of CS classes until sophomore year. One of those classes was a gimme class I really should have tested out of ahead of time in hindsight, though (basically computers 101, I had quizzes on identifying what was the desktop, mouse and monitor).
You should read all the context before stating that this comment thread means the system is broken. Just because extremely self-motivated individuals now have readily available resources to learn almost anything on their own does not mean the system is broken for everyone else.
I have no college degree, or college classes at all, and have been doing systems engineering/software engineering for about 13 years now. All self taught/learning on the job.
While it is nice not to have had college loan debt, I know there are things I may have a better grasp on if I spent months learning a particular topic. In fact, I am jealous of people who actually got to spend time being taught computer science and learning interesting things. I am sure there are topics/approaches/patterns that would be very helpful. Not that I am complaining, things certainly have worked out for me very well, it is just something else that I know would "boost" what I already know.
On the flip side, I know of one job that was a very short interview, as they said they could only offer X (which was laughably less than what I was making currently) because I had no degree. What I knew meant very little (to the place, not the interviewers), having the piece of paper meant more. You could say "their loss" but on the flip side, I have no idea what opportunities I have missed because of it.
I think you're mostly right, though. I think the article is probably right that companies who _say_ "we only hire the best" don't necessarily even hire good programmers. I don't think the companies you mentioned really care if they have the best, especially since that's pretty hard to measure anyway. They want people that are going to do great job in the area they were hired to work in.
You seem to be not exactly disagreeing with the article, but maybe tightening the slack on what we would call a "trendy company." The author did not specify any successes of the "trendy companies" being discussed (I'm not including fundraising as a success of the company), but they did specify some failures, and left the options open for more. The author, like you, praised Google for not hiring on the basis of "trendiness." I think you're in more agreement than disagreement.
The article’s title/headline could be adjusted to more accurately represent the claims made in it.
I wanted to post a snarky response to your comment, and so I went to the Renaissance web page to look at their vacancies. And lo and behold, no Keras or Kafka in sight. I'm impressed. That's certainly not true for the Facebooks and the Apples of the world.
Also I think you mean 'hipster' instead of 'hippie'.
But - there are a lot of said 'hipster companies' in the Valley, moreover, the cult has spread beyond: it exists even in the copycat cities of Montreal, Van, NYC, Austin, Boulder, yada yada.
I'm well into my 30's, and the last few startups I've consulted with - both in the Valley but not well known - were undeniably trying to be too hip.
I'm as cool as a late-30's-something can try to be without losing any dignity :) but I felt like Grampa Simpson (note my antiquated pop culture reference).
As far as I can tell it's spread across the whole country. At the very least, it's definitely present at startups in Chicago, where I'm at, and Chicago doesn't have that strong of a startup scene (it does have one, but it's pretty small compared to other metro areas).
Most SWE interviews at Google take place in either Java or C#. These aren't sexy, trendy languages. They're boring line-of-business languages that just work. And the frameworks you already know matters very little.
At my in-person interview at Google, they told me I could use whatever I felt most comfortable with, which at that point in time was Objective-C.
In retrospect it was a terrible idea. That language is stupid verbose, and I ran out of whiteboard for every single question I had there. They were mainly interested in me for iOS development though. They did not give me an offer, although I don't think it was for that reason (I was a little nervous and two interviewers gave me some major head-scratchers).
It's definitely true that some languages are better than others, because the types of problem and time limitations are the same regardless of the language chosen. Lower level languages tend to be worse because they take longer and there's more ways to trip yourself up.
Personally, I think that C# is the ideal language for interviewing in, as the .NET standard library is very powerful. SortedDictionary alone can easily polish off entire classes of interview questions, and the same can be said for pretty simple LINQ statements. I actually solved one problem so trivially using a combination of the right data structure and LINQ that the interviewer asked me to provide an alternate imperative implementation of the LINQ statement, just so he could see that I actually knew how to implement something like that rather than just use it.
You can approach similar levels of power with use of Java 8's Streams APIs and a good collections library like Guava, but Guava is only likely to be understood by a majority of interviewers at Google, whereas the C# standard library should be understood by a majority of C# interviewers everywhere.
It's absolutely worth spending a few hours brushing up on a "better" interviewing language and then using that rather than using a less optimal language just because it happens to be all you've used recently.
I would probably use Python on a whiteboard nowadays, even though at my day job I use C# and only use Python periodically in my free time at the moment. Python's a lot more concise of a language in general. Although C# is smooth like butter when paired with Visual Studio, and I could use it if requested.
I think my views here might be biased since the only people I see interviewing in Python tend to be bootcamp graduates or new grads, and their coding performance in general tends to be not as good (which is fine, as the expectations for L3 SWE are lower).
I agree with you that the Python language syntax is nice, but it doesn't have quite the same level of built-in support for algorithmically useful data structures as C# or Java with Guava. To give you a concrete example, it's not unusual for interview questions to require the use of a self-balancing binary search tree or a similar data structure in order to reach optimal runtime complexity. Realistically no one is ever going to implement a self-balancing BST along with solving the actual problem inside of a 45 minute coding interview, but it's nicer if you're able to refer to an actual library implementation that exists and can be used versus hand-waving away the existence of one.
> that the interviewer asked me to provide an alternate imperative implementation of the LINQ statement, just so he could see that I actually knew how to implement something like that rather than just use it
I wonder why interviewers look for these traits when it is very clear that you'll almost never implement a low level data structure.
JavaScript is another frequently used language. It, along with Python, is typically what I see new grads using. More experienced interviewees tend to use Java/C#/C++. I interviewed in C# despite most of my interviewers not knowing it (though any dev can figure C# out well enough to evaluate algorithm correctness). I've also interviewed one person in Go ... despite me not really knowing Go. That's when it's really helpful to have written down every character during the interview for writing up the evaluation afterwards.
When I hear "we only hire the best" it just sounds like generic crap an HR person added, not a big red flag that they are snobs. I don't think it really means anything. By definition, anyone hiring anyone wants to only hire the best. Nobody wants to hire only the mediocre. If I hear things like that I generally just ignore it.
Anyway, the author makes all kinds of unfounded assumptions. Someone with .NET and Windows experience may not actually be relevant to a backend Unix system. Their assumption, though, is "they don't like Windows people." Is it not actually possible their work experience was irrelevant? Is the first or most reasonable conclusion you come to really "they don't like Windows people"? Seriously?
That is purely an assumption driven by their own stereotypes and opinions. "They said they didn't hire me because my experience was irrelevant, but I know the truth. They are bigoted against Windows people! That's the REAL reason!"
>By definition, anyone hiring anyone wants to only hire the best
I agree with your overall sentiment, but this isn't true at all. The best cost money, and the best don't want to work on boring stuff. Most software is boring. A team of mediocre devs is more often than not just fine to do the job and it keeps cost down (and turnover likely lower).
[1] Edit: Though in some cases, one could say that, while legitimately accepting the tradeoffs it entails. Say, an environment where extremely competent people make a huge difference in terms of dollars and you have the budget to draw such people away from their alternatives. Perhaps asteroid mining, where all kinds of things could go wrong but you can't always provide remote assistance to the miner. Needless to say, "your scrappy startup with a CRUD app" is not that environment!
Not all criteria are binary, though. It's perfectly reasonable to say "we hire the best for our given criteria". And that's probably more honest than most job promos. In fact, most meaningful criteria _aren't_ binary.
I'd say so. Do you want to drive a Ferrari/Porsche around your local city/town traffic? How about if nobody ever saw what you were driving and you just arrived at locations?
High-end resources are a pain in the ass to acquire and maintain... the best usually know they're the best and expect a lot of upkeep. The truth is that most work (as someone else mentioned) is boring and using high-end resources when lower-end will do is bad economics.
Most times when you get in a car, you're not racing someone else for your life or pink slips... you're just looking to go to the store and come back. Why waste half a tank and risk a flat in a racing machine just to go pick up milk?
Definitely. Like any other decision, there's a cost/benefit assessment to be made. Why am I trying to hire super-ninja-ex-google-guy to code up my DB interface for the marketing folk? Doesn't make any sense, the value just isn't there, and super-ninja-ex-google-guy isn't going to take that job anyway.
Hell, I'm in this boat currently. I'm looking for a junior dev to help with certain menial tasks here and there. I'll pick the best _that I can quickly find_, but really, I just need someone halfway competent and I need them now.
It tends to get the job done cheaper and quicker instead of waiting for the "best" to get recruited, and then paying massive money to keep them at the company. Is that a suitable definition of "better"?
If you look at Triplebyte posts they've said before that there is significant bias against enterprise backgrounds. And so they recommend that if you have an enterprise background to try to minimize it in tech interviews.
And their methodology isn't just "enterprise people get hired less"; there is a large discrepancy between how well candidates do on Triplebyte's internal tests and reviews, which are pretty standardized, and how often their clients hire them (which is usually a much lower-quality, more biased process).
It is very possible their work experience was irrelevant. I have worked in enough different companies--some that I chose, and some that bought my employer--to see that no two of them were similar enough for more than a fraction of my technical experience to be relevant.
I consider my aptitude and adaptability to be my most valuable work qualities. My experience only comes into play when I see my current company about to make a mistake that I have already seen at previous companies. Like gratuitously adding a stand-up meeting to the dev process.
My experience with two previous attempts at cargo-culting the stand-up meeting led me to caution against a third attempt, at a third company. And then I got "at will" fired for not being "enthusiastic" enough about the new dev process. So now my experience tells me that only power can speak truth to power. In the absence of a union, I will not warn a manager whom I do not trust when they are about to do something stupid. I will instead send out resumes, and ghost the instant I get an offer from a less-bad company.
My current company is gutting the health care plan. That's fine. Everything is fine. No, I have no problems with it. I am updating my resume for completely unrelated reasons.
So in some sense, "irrelevant work experience" might actually mean "experienced enough to call us out on our bullshit".
Employee education is such an undervalued idea. You can get workers on the cheap and then set up incentives for education/self-education. I've yet to see a company that sets up some sort of education pipeline for its employees. I know that startups don't have time and/or money, but education doesn't need to take as long as the school system made you believe.
A few notable organizations offer paid continuing education (Masters degrees) and have extensive in-house training, with career progression mapped out.
Here are some:
- Accenture
- Booz Allen Hamilton
- Deloitte
- Department of Defense
- Lockheed Martin
- Northrop Grumman
- Raytheon
It’s not just consulting companies and the defense industrial complex, these are just the only companies I have exposure to that offer managed career paths.
A drawback is that in those organizations, the pay is often low. Except the consulting companies, which just expect lots of hours. Either way, there are trade-offs involved.
I haven’t seen a serious investment in education from Bay Area companies, probably because attrition is higher and it’s not seen as a safe investment.
Defense has career progressions? Is this for management? From what I've seen engineers are treated like cattle and hired and fired for the whims of every project.
Having spent the first 5 or so years of my career at defense companies, I agree. There was no career progression, especially without switching companies.
They want you to have a degree so they can charge more, but that's about it. You can easily work on the same project doing the same things for 20 years; I worked with people who did.
The reality is that they bill your time to the government a certain way based on your qualifications, and the contract specifies what qualifications you're expected to have. If you grow or change in some way the contract doesn't capture, they can't give you a raise without losing money (since the contract pays them the same).
So classically you have to switch to another job - within the same company, or at another, to advance.
Compounding this is the nature of classified work (which most DoD contracts are). You're in a windowless lab, and you can't really say what you do, or even, often, who you do it for. You get 0 visibility outside your immediate team.
There are exceptions and some ways to get around these realities, but you're really fighting against a system that strongly prefers things remain static.
Actually, neither company I worked at is on the list, even though I worked at two (of the bigger) ones, and one of the companies on your list was across the street, on Solutions Drive in McLean, VA, from where I worked :).
Also, that company would compensate you to get a master's degree.
> I haven’t seen a serious investment in education from Bay Area companies, probably because attrition is higher and it’s not seen as a safe investment.
Red Hat hires a lot of fresh college graduates rather than aiming at existing "open source stars". Many then grow to become the maintainers of the projects they work on, and choose to stay in the company that let them grow. This is different from many other companies which hire current maintainers for public work and have the younger ones work more behind closed doors.
(I work at RH, but actually I heard the above observation first here on HN from someone that is at Google).
> By definition, anyone hiring anyone wants to only hire the best. Nobody wants to hire only the mediocre.
...unless the best ask for more money.
When I hear "we only hire the best", I assume that the company is willing to pay very competitively. The other end of the spectrum would be companies that try to do everything with juniors or outsourcing teams.
Any decent programmer I know is perfectly fine with switching OSs - if you know .NET and Windows, learning Unix will slow you down for a few weeks, but that's it.
"Work experience" is rather overrated. Critical thinking skills are what matters for the job, and those are in awfully short supply. Rejecting a candidate who possesses them because they don't have the right keyword on the resume is...
Wait. On second thought, keep doing that. Because that means I can hire them. :)
I agree that rejecting people for lack of a keyword is a silly practice. But all the decent programmers you know must be really quite good. I know a lot of devs whom I consider to be at least decent, and switching OS, stack, paradigms - everything - would trip them up for more than a few weeks in the best case.
In a lot of reasonable cases people have a preference for which tools they use; that's why there's a "Ruby way of doing things", a "Python way of doing things", and a "Go way of doing things".
> But all the decent programmers you know must be really quite good.
That may well be. The question for the hiring process then becomes, how do we distinguish between people who lack keywords but could adapt, and people who lack keywords and can't. (Alas, the keywords don't solve anything - because I've interviewed quite a few people with all the right keywords whom I wouldn't consider decent programmers at all)
Programming is one of the disciplines that suffers from the fact that the initial skills hurdle is very low, but the mountain of knowledge is high. And constantly shifting. What we all want, ideally, are people who can navigate the shifting landscape easily. I'm not sure resumes easily give us that. (Unless it's a reasonably long career. If you've got 30 years of adapting to new tech, it's easy to infer you'll probably learn the next one, too. If you've got 3 years, nobody can tell.)
You've got to be careful about assuming things about what people can do based on prior positions.
Speaking personally, I have been employed doing windowsy things for ~10 years, probably being typecast by my stint at Microsoft, but I am willing to bet I can out-unix a very sizeable chunk of candidates who think of themselves as Unix people. (Did some Linux kernel hacking in spare time, have been a home user of different *BSDs for ~17 years, learned C on Unix.. none of this you can tell from my resume)
Not to mention skills from one niche often transfer to another, and smart people with generally applicable skills can cope with the differences pretty quickly.
> (Did some Linux kernel hacking in spare time, have been a home user of different *BSDs for ~17 years, learned C on Unix.. none of this you can tell from my resume)
Maybe you should add that stuff to your resume. I include 'skills' that weren't necessarily relevant to any particular job.
I think such a section would likely be ignored or not understood, and the problem being described is that people brand you as this or that based on work experience. People may even consider it a red flag that you list skills you didn't work with in your most recent position.
Most people I've interviewed with recently seem to understand that lots of tech people do significant work outside of their formal employment.
But I also wasn't imagining a separate section. I have a "skills" section and that's where I'd imagine adding the things about which I was responding.
> People may even consider it a red flag that you list skills you didn't work with in your most recent position.
I could understand this somewhat if someone had been at their most recent position for a decade but for anything less than that it doesn't seem unreasonable to me to include things one might not have used, or used frequently, at the single most recent position.
This assumes we all follow the same scale. A Camry could be the best car for me because I want a car that costs little money and will continue to work for years with little upkeep. Whereas the Ferrari may be the best for you because price is not a limit and maintenance is a non-issue. In both cases we got the best car for us.
> We like to think that we’re different from all those industries that judge people based on appearance, but we do the same thing, only instead of saying that people are a bad fit because they don’t wear ties, we say they’re a bad fit because they do, and instead of saying people aren’t smart enough because they don’t have the right pedigree… wait, that’s exactly the same.
This is a great quote and an interesting point. I don't know what it's like in the US but in the UK it's not unexpected to wear quite formal clothes to an interview, even if the position is at a very relaxed company.
I don't think I would judge someone for what they wore in an interview, but I fear I might if they continued to wear something overly formal at work.
Also in the UK, and I would definitely not be surprised to see candidates show up with suits at places where that’s obviously not required. No one cares. “Pedigree” that’s another question.
I did show up in a tailored three-piece suit with a tie and pocket square for my interview at Deliveroo (a notorious streetwear influencer now). There was an eyebrow justifiably raised (by a guy in a t-shirt who, I learned afterwards, was the CEO) and I felt the need to explain that I had another interview for a very different company just after. That other interview went horribly and I got the job at Deliveroo! I'd still recommend showing up wearing biking gear.
We had an interviewee showing up in shorts and t-shirt. Turns out he wanted to combine the interview with a tee-off time in the afternoon. While not customary (even though we don't wear suits/business attire at our company), he totally got away with it and it didn't affect the interview.
I did interviews in a suit before, but if I ever have to do interviews again I'm not going to do that anymore. I hate suits; they don't fit me well and I don't feel comfortable in them. I think it is silly that it is 'expected' to wear a suit for a job interview where none of the employees wear suits. For finance/legal it's a different matter, obviously.
When I interviewed for my new job (at a tech company), I knew it was fine to wear casual clothes, but only because I have a friend who works there. I probably would have worn a suit otherwise.
I gave feedback that they should just tell candidates to wear casual clothes when they invite them to interview. It's such a simple thing to do and really helps put candidates at ease.
Normally that would be a better solution, but like I said, he totally got away with it. It was a nice way of breaking the ice. I wouldn't recommend it to just anybody, though.
I did exactly the same thing for Thread (a fashion/tech startup) - wore the cheapest most unfashionable suit possible, because I was a poor student and needed to wear something that would work for multiple interviews in one trip to London.
No one at Thread cared, no one at most of the other places cared, but I got the impression one more corporate place cared. I ended up taking the Thread offer, and despite working with people whose job it is to be good at fashion, I've not once felt out of place even though I have no real idea what I'm doing.
Impostor syndrome is one thing — but you might want to know a bit of how fashion is structured if you work at Thread!
I got a ton of points during an interview at a similar company by mentioning my uncle’s fashion brand as something “they might not have heard about”. Instant braggadocious “try me” from the interviewer. Turns out, my uncle’s sneakers are genuinely cool.
Depends, I've worked (as a consultant / developer) at a number of widely different companies, and knowledge or interest beforehand of the domain wasn't necessary. (investment banking, trains, shipping, e-commerce, etc)
Dude, were you interviewing as a software developer or as a delivery man? For the former, I'd rock up in a suit and a bike helmet, but only if I really did ride my bike to the interview.
As it was, I interviewed for Google in a suit (no helmet, I took a bus), even though the instructions recommended against it. This was in Switzerland; and as far as I can tell, nobody even noticed, let alone cared, what I was wearing.
Data scientist, but Deliveroo is big on empathy and you are meant to ride (handle deliveries, but also understand the issues with riding a bicycle or a moped in a big city: rain, parking, etc.)
I’d say half the candidates I interviewed (I was far down the pipeline) had done deliveries and all had learnt something important doing it. The key one was: it’s a physically challenging job.
A few years ago I went to AKQA suited and booted - I think I realised that I might be over dressed when on the way a passer by asked me if I knew where the Ivy was :-)
The interview was a total bust; the guy who interviewed me was wearing a t-shirt so scruffy I would only wear it to do the gardening.
I’ve seen a lot of those t-shirts and generally what is printed on them is the important part: a memorable hackathon, a cool product, etc. For an outsider, it’s often tricky to tell.
My favourite: the “I’ve done 100 interviews and the only thing I got was this t-shirt.” t-shirt at Facebook, with the 100 stroke-through to read 200. If looking really tattered, you know you are talking to someone who’s probably done a lot more interviews since the 200 mark. Expressly worn to show the candidate you know what you are talking about.
It was a plain one and looked like he had spilled a burrito down it - he was also totally clueless about the role he was nominally recruiting for and seemed not to know any of the cool stuff AKQA had done.
I did a lot of interviewing at my last job and I used to ask agencies to tell people they could come along to interviews dressed casually - which seemed fair as I'd be dressed fairly casually. To my surprise a number of recruiters insisted that the people they send along came in suits and ties.
Can be important for anyone with client contact. Even if you don't care what people wear, external clients will react differently depending on what you wear. It could be that you specifically don't want to wear suit to give the impression that you're a dynamic start-up but in the end clothing always matters.
Programmer is rarely a client-facing role and I think the odds that someone overdressed can successfully dress down, if requested, are a little bit better than the opposite.
Depending on your position it might also be different for different clients. My friend kept a suit at work in case they had to meet with the feds or a bank.
If I ever wore a suit to work, my coworkers would be wondering what was wrong or who was visiting.
Exactly, being overdressed can be as bad as underdressed. If everyone expects you to wear jeans and t-shirt, a suit can be as damaging as not wearing one when you're expected to.
I would be bothered that someone dressing excessively formal might act excessively formally - after all, dressing formally is normally a deliberate display of conformity.
It's not an act of conformity in that context, but it's usually an act of conformity. The most likely explanation I can see for someone dressing formally in an informal environment is a habit of conformity coupled with an insensitivity to change in social environment, neither of which seems like a desirable quality.
How is it any more a habit of conformity than wearing jeans and t-shirts, especially when the set of workplaces where wearing suits is the norm is smaller?
True, but only if they repeat it more than a couple of times, after they have had time to take in their environment and adapt! (Or, if a new graduate, afford clothes that fit the 'new conformity' of the particular office's casualness...)
I dressed up nicely because I wanted to pay respect to my interviewer, they made an effort to be there, I wanted to show that I cared to make an effort too.
Additionally: I find it hard to judge a place's definition of "casual" if they aren't fairly explicit. I've worked at places where "casual" meant "hooray, you don't have to wear a tie!" and other places where casual meant "it's a good day if you can't see your co-workers' underwear".
Of course, I still see job postings requiring a "CS degree from a top school."
I think it's great they come right out and admit that they're a bunch of elitist douche-bags so I don't even bother applying. Could you imagine working at such a place with a degree from a :shudder: public university?
My first job was on campus at Cranfield University, which regarded those top schools as "alright" (and Stanford wasn't on the list, btw) for your first degree - but you had better have got a first and had the right supervisors.
It would be nice if the process were a little more transparent.
If people just knew that a company rejected them for reason X, that would make a huge difference. If X were stupid, they'd know they missed a bullet. If X were rooted in a misunderstanding, they'd have a chance to clear it up.
There is so much side-channel information used in the corporate hiring process, and it gets mixed-up in HR far more often than any company is willing to admit.
I think the issue is the massive, massive power imbalance. Sure, as a worker you would like those things, but an employer has no reason to care. Even if they make a mix-up there are plenty of other candidates for the same position. Maybe they get someone actually worse but they have no way to know that and no reason to care.
For the employee it's a rejection from 1 out of 1-10 companies they are talking to, so it's a big and personal deal. For the company it's just 1 out of probably hundreds of candidates for 1 out of possibly hundreds of jobs.
Thing is that the dating market goes both ways in all cases. There was a term for this that I lost somewhere, but basically it allows all parties to be very picky when there is a feeling of infinite candidates.
It's true that there might be more e.g. straight men than straight women in the bay area, but it's not like the number of straight single women is so small that candidates are rare.
The practice is exactly the same in countries that aren't the U.S. And countries that don't even have discrimination laws.
The fact that companies behave the same in the presence of dramatically varying legal norms suggests that your belief that discrimination laws are the problem is baseless.
At least I know that in Germany, before the "Allgemeines Gleichbehandlungsgesetz (AGG)" (general equal treatment law) was passed, companies were more open about reasons. Since this law was passed, any openness can easily lead to a lawsuit against the company, where the company has to prove that it did not discriminate against the candidate. Indeed, when the law was passed there were lots of lawsuits from really unsuitable candidates, where the company nevertheless could not prove, to a "valid in court" standard, that it did not discriminate for reasons specified in this law, and thus had to pay the unsuitable candidate some months of salary as compensation.
US discrimination laws are not that strong. Unless you're doing something idiotic like telling people "sorry, we didn't hire you because of your religion", it's hard to win. Of course, anybody can sue for any reason if they want.
It has little to do with that. There's really very little risk in giving feedback like "we didn't like your code" or "you had the wrong kind of experience for us." The bigger problem is that a lot of candidates take this as an opportunity to dispute the reasons, which is not generally desired.
I don't think it's even that. It's just that companies don't feel like they have much to gain from taking the time to explain rejections to candidates. Same reason that often times you don't even hear back that a company is not interested. It saves them time.
Discrimination laws are probably just a drop in the bucket.
Once you start getting into drug tests, references, background checks, credit reports, reservists, social media, etc., it's suicidal to share any more than the bare minimum.
In the US, we have a so-called right to privacy, but that's not really true with employers today. The information asymmetry is both astounding and obscene.
Meaning discrimination against reservists in the armed forces because of their service requirements? Or discrimination because the employer doesn't like the armed forces?
Service requirements. Even employers who are vets are terrified of this one. Hire a guy, he gets deployed just a week after he starts, you have to keep his seat warm, then when he's back several months later he's taken a job somewhere else...
There are alternative risk markets where you can get insured for just about anything, but it's still a cost that gets disproportionately shouldered by employers of young men.
"Additionally, discrimination against employees on the basis of past, current, or intended military service is prohibited. As with other types of discrimination, this applies to hiring and firing decisions; compensation and benefits; and other employment decisions."
...so the best solution from an employer perspective is to enter a risk pool that distributes the risk. Being a big enough employer solves that problem, but there should theoretically be pools for smaller employers to join as well. I wasn't aware if that was the case, which is why I was asking.
Isn't that illegal? I worked for one company that sponsored a TA signals unit - I suppose they thought experience as an RSM running a Royal Signals unit in Gulf War II was good management experience.
Meh, being told over and over again that I'm a good developer but not a cultural fit isn't particularly helpful. It sounds like a polite way to say: "You have the skills but, uh, we hated you." I'm not sure what to do with that feedback, but it has been pretty consistent.
Being told I'm a good culture fit but don't have enough experience is equally frustrating. Especially when it's a Rails dev position and I've been working with Rails for 9 of the 12 years it has existed.
I eventually figured out that they meant something else - like I hadn't led a team (bigger than 2) directly before and was interviewing for a lead dev job, or I didn't have much operational experience since the previous startup I worked at was very simple and built on Heroku. In some cases it might be that I didn't have experience with the specific JS frameworks that they were using (e.g. Angular, React, whatever).
But the impact is I come and interview, it goes well, smiles all around, and then they say no, we were looking for someone with more experience and I'm thinking... what? What experience? I have to kinda guess.
Anyway, yeah, saying no culture fit sucks because you don't know what to do with it; maybe you could do practice interviews with friends and see if they see any glaring annoying tendencies?
> I'm not sure what to do with that feedback, but it has been pretty consistent.
That is actually valuable information. Find a mentor or career coach to help you out with mock interviews. You could be signaling something that you're not aware of during interviews.
Think outside of tech--like getting a bad reference because last manager's phone rings over to one of your old coworkers that didn't like you, or failing a credit check because the rating company sent a report for a guy with the same first, last, and middle, but totally different SSN.
What strikes me about this is how little weight recruiters put on referrals from current employees. I would have thought that being recommended by an internal employee would get you an initial interview at the very least. Surely a trusted employee’s recommendation is more reliable than whatever random sift the recruiters have.
Does anyone with experience of recruitment have any stats on how successful referred applicants tend to be?
I don't think recruiters keep metrics on anything. Once the candidate is hired, they consider their job done. Companies appear unwilling to measure recruiting and employee success, so the inefficient hiring processes continue.
Depends upon the type of recruiter. An external one will likely have a few follow-ups, because it's how they get paid. An internal one, however, won't bother unless the company is excessively on the huggy-feely side.
I've recently been in charge of hiring a lot of people and have probably done 100+ interviews in the last year.
I generally don't mind the stack/language candidates have been using, but I always value very highly that they have used different stacks and languages throughout their career. When I interview 5+ year .NET developers who haven't used another language, not even in their spare time, the result is usually a rejection. Since the time I can spend on interviews is limited, I need to discard candidates based on signals, even if that means discarding good candidates.
It might be worth bearing in mind that there are people who do not have free time to spend on further learning outside of work. Parents, those caring for elderly or disabled family, new grads who might have a CS degree but had to work to fund their education so didn't learn much outside of the degree, etc. Not only might this approach exclude these groups, there are often close ties between under-represented groups in tech and these sorts of experiences.
I've been at the other end of that scale. I've had a fair few candidates who have used 10 different stacks, but weren't truly good at any of them. On the other hand, I've had candidates with 10 years of C# who had used it in different settings - web, backend, actor-based systems, etc. - and they blew the socks off most other candidates. What I look for is that they show versatility and depth. The combination is really important.
I usually sift through candidates based on CV, and then a first interview where I try to assess how good the candidate is at his/her preferred stack. Then I throw them in the deep end of the pool to see how they cope in the unknown. That can be having a pure C# OOP guy write some SML, or a Java/VB/C++ guy write Haskell. I've also tasked a guy who worked with OCaml to write some workflow component in JS.
"Then I throw them in the deep end of the pool to see how they cope in the unknown." That's a good sign that the tech interview process is very broken.
> That's a good sign that the tech interview process is very broken
That could be the case, or maybe at that stage we're past a verbal phone screening and a first interview, and they've been selected for the last round of interviews. They know beforehand what the nature of the interview will be, including the "deep end of the pool" question. It is far and away the best way I've found to see how well they can adapt to new things, which is important to us.
Why not ask them to build an anti-gravity device as well while you're at it?
I just ask interviewers what the point of these questions is now.
If you can't give someone an interview with relevant questions, how can you justify not hiring the candidate? AFAIK at least in the UK candidates have a legal right to know why they weren't selected to make sure it wasn't based on prejudices, etc. So if your response is "you didn't do something that we both knew from your CV you never claimed to do", that sounds immediately unfair.
At that stage we're past a verbal phone screening and a first interview, they've been selected for the last round of interviews, and they know beforehand what the nature of the interview will be, including the "deep end of the pool" question. It is far and away the best way I've found to see how well they can adapt to new things, which is important to us.
But of course, you assume it's not relevant, and that I'm asking them to do the impossible.
In the UK (IIRC, IANAL, etc.) the interview must be about assessing a candidate's suitability for a job, and unsuitability for performing the work is the only justifiable reason for not offering someone a job - as opposed to, e.g., gender, disability, age, attractiveness, etc.
So I'm wondering how you'd justify not taking someone on by assessing them on a task that:
1) you know they can't do and,
2) if you've already vetted their CV, presumably those skills aren't required for the performance of the job in the first place.
It seems irrelevant whether they know you'll ask this kind of question, since they can't prepare for it, and flagging up something that is potentially unreasonable doesn't make it acceptable.
So, how do you objectively assess that person X did something that wasn't required for the job better than person Y, and justify to person Y why that matters?
My developers frequently need to jump into a completely unknown stack. And even if they didn't, I still expect the senior developers to reason and make decisions about things out of their comfort zone. I'm not asking them to rewrite WordPress in Elixir, or implement vEB trees in C - I'm giving them simple tasks where they are given a codebase to work with, and the task assumes zero knowledge about it, and they have time to prepare.
Well, if you are applying that filter at the resume scan then you could be missing 'the best' as I would expect people to edit their resume for length (to one page) and to focus more on skills related to the position.
That's something I know, and it's a decision I make due to not being able to interview everyone. If I can interview 5 people, I won't pick 5 random CVs. I'll pick those that, in my opinion, are more likely to be a good fit.
If you don't mind me asking, how long have you been recruiting for and what size/type of companies are you recruiting for? Top tier, Fortune 500 or just mere funded startups?
Context is everything.
Also, I find people abuse the word signal. How are you generating your signal? And is it just within your domain of companies you recruit for or across the whole industry?
For me a signal is something (good or bad) that hints at a developer's skills and curiosity. It can be as simple as knowing about their preferred programming environment: if you claim you are a 5+ year senior .NET programmer and you haven't even heard of .NET Core, to me that is a bad signal.
It is also about finding a balance, of course. All the technologies you mention are good for most web development projects. No point in writing everything in a different language.
I was about to say that too. I've also used many languages (PHP, Java, C#, JavaScript, C, C++, Scala, Haskell), and even though one language might have been especially good for the task at hand, it would not have been so much harder to do it in one of the other languages. I feel like a statement like "use the right tool for the job" is pretty meaningless.
True but it does show exploration and curiosity to at least try another language or two. If you’ve been doing .NET or PHP for 5+ years without even trying something else I’d treat it as a red flag too.
Level of interest is the #1 quality in the great programmers I’ve been around. If you aren’t exploring even a little that’s a concern.
>If you’ve been doing .NET or PHP for 5+ years without even trying something else I’d treat it as a red flag too.
But if I put other languages on my resume outside of .NET, I might look unfocused or "not specialized". Plus, some people treat skills on a resume as claims of expertise, so if I'm only doing hobby stuff, the conversation might not be that interesting (hobby projects usually never pay down technical debt, so I can just write whatever I want).
So in one corner, we have you, that thinks different languages on a resume are good for showing curiosity and breadth. You don't have as much of a focus on depth (or, you want us to be deep on a few and broad at the rest, which requires a ton of time)
In another corner, we have hiring managers that can ask difficult questions of whatever's on your resume. Or who will think your .NET experience is lower than it should be because you spent time on other languages.
So I, as an individual, get both of those and basically have to flip a coin to figure out which people I want to side with. But since I can't know what kind of resume-parsing person you are before talking to you, I still have to waste my time applying to everyone vs. knowing that ahead of time.
The same is true for technical interviews. Some are Algs/DS, others are CRUD examples, others are some hard problem someone solved last week, some are knowledge quizzes.
There's a huge amount of breadth in the interviewing world and it makes it a whole skill you have to waste time on.
I’ve never even heard of an interviewer knocking someone for having more than one language on their resume.
I’m not saying you need 10 or all the trendiest things. You need at least 2.
Programming is a job of constant problem solving. If you've done the job close to 10 years and never seen a problem with the language that you are using... that actually tells me more about you than any other question I could ask. It tells me that you are married to one stack, you're invested in it, you're probably going to be highly resistant to solutions that don't fit in that box. And it tells me that because languages are all about trade-offs - there isn't a perfect language out there. It tells me that you have your hammer.
People are strongly confusing my comment with “know all the cool tech trends and program in your spare time constantly”.
I did not say that. I said use ONE additional language if you’ve been working for more than 5 years. ONE. It’s a very low bar and if that is shocking, silly or offensive it is probably worth it to take a hard look at why. You could meet that criteria in a single weekend if you wanted to.
This isn’t directed at you but the comments I’ve seen so far.
>If you’ve done the job close to 10 years and never seen a problem with the language that you are using
Only having professional experience with %LANGUAGE% does not mean you've never seen a problem with %LANGUAGE%. Why on Earth would you assume that? If you're doing paid work you get minimal input on what technology stacks the projects you are working on use and 99.9% of the time that decision has already been made before your arrival. In fact, they could be looking for a new job because they want to change stacks, I know a few people who changed jobs just for that reason.
Someone's single weekend "Hello World" project is completely irrelevant when I'm looking to hire a developer with 10 years experience.
>It tells me that you are married to one stack, you’re invested in it, you’re probably going to be highly resistant to solutions that don’t fit in that box.
If you are doing paid work you work with the tools you are told to work with. You don't get to go off on your own and do what you want.
That, however, doesn't preclude you from understanding the flaws of your tools. In fact, I'd expect someone to be an expert in knowing the limitations of a language they have worked with for so long. That doesn't even mean you like your tools or you'd prefer to use them over other tools if given the choice.
You're jumping to completely unwarranted conclusions, and it makes it sound like you have very little real-world professional experience interviewing and hiring.
>You could meet that criteria in a single weekend if you wanted to.
But that's the issue: one person will like that I spent a weekend on that, someone else will start asking technical questions when it was really just a hobby project. I don't want to answer questions on that because I'm not likely to give good/correct/interesting answers. It's only served me well in the first case when they didn't ask any questions and chose to instead focus on my strengths. It's on my resume, so it's fair game to them.
> You could meet that criteria in a single weekend if you wanted to.
So, basically, it's just an absurd, arbitrary criterion? I don't think there's anybody on the planet who could claim proficiency in a new language after a single weekend's exposure. Most won't even remember anything about it a week later with that little exposure. Logically, your requirement is meaningless.
How silly. Level of interest [in doing unpaid work] doesn't mean a damn thing as far as skill level and work ethic are concerned. Some of the best programmers I've ever met were the 9-to-5 types, and the absolute worst programmer I've ever known spent all his free time writing code.
Once you get past 30 it's difficult to spend all your free time doing unpaid work even if you wanted to.
How old are you? I'm closer to 40 than I am to 30, and I don't put projects I worked on when I was 22 on my resume. If I can't remember the details, it's not going on my resume. If I can't give good/interesting answers when asked about a project, it's not going on my resume. I don't want to fill my resume with irrelevant crap when I have real experience to put on it.
I've worked with many languages both professionally and casually, probably over a dozen over the years. I only put the ones I have mastered on my resume.
If I were conducting an interview for a mid-level position (and I do) and I asked a candidate about their experience in Rust, which is on their resume, and it's anything but over a year of professional or semi-professional development, then I'm going to assume the rest of their resume is just as bogus. A weekend hack-a-thon doesn't "count" as "experience."
As someone who actually has a say in hiring decisions, I'm not going to think highly of a candidate with over 5 years of experience who puts on their resume a project they spent only the better part of a weekend on. That means they don't understand what experience actually is.
> I've worked with many languages both professionally and (very) casually, probably over a dozen. I only put the ones I have mastered on my resume.
And that is excellent and I would consider that perfectly normal and rational behavior. You have more than one. You took the time to explore. It's a field of constant problem solving and lack of exploration of solutions is something that sticks out as a giant red flag because of that.
More experience is definitely better. If I were hiring for a particular language I'd certainly want more than a weekend's experience, but some experience is still better than no experience.
At my last job we interviewed a guy who was about 5 years out of college and had been working in a .NET shop the entire time. He was applying for a ruby position and his only experience was spending a couple of weeks in his spare time building a Rails site. From that minimal experience, we were able to get him to explain why he decided to pursue it, what he liked about it, what problems he was trying to solve, what weaknesses he was trying to overcome that he was seeing in .NET, etc.
Despite the lack of language experience, we hired him based on the "geek gene" that clearly showed problem-solving skills and a desire to learn. The guy was absolutely rock solid and became one of our most valuable team members within about 2 months' time. I believe he's the CTO of a funded startup now, if I remember correctly.
If he'd had only Rails and I'd asked him about what weaknesses he saw with the stack...my very next question would have been what he tried to do to overcome those weaknesses. Maybe he answers with other things he tried within the stack - caching techniques, using JRuby, refactoring in certain ways, etc. - and as long as that could be articulated it would generally be okay. In general, though, seeing how other languages handle the same problem is going to be a perfectly normal part of the process...which should lead to having more than one language on the resume.
Making the assumption we're talking about good programmers here:
If you've been doing .NET or PHP for 5+ years, you're just now starting to become an expert at those languages and their environments. That 5+ years experience could just as easily display dedication and care about their craft; a commitment to understanding the ins and outs of their tools.
For example, who would you rather have write your payment gateway code: someone who sticks to a project for years and handles the nitty-gritty details, or someone who finishes the blue sky MVP and moves on to the next project every year?
I'd go so far as to say that if a candidate has 5 languages on their resume for 5 years of work, they've raised a red flag. Why haven't they stuck with any of them?
> if a candidate has 5 languages on their resume for 5 years of work, they've raised a red flag. Why haven't they stuck with any of them?
I'm a contractor - I use whatever language the client is going to pay me to use. In the past that's been C, Ruby, Tcl, Perl, Python, a tiny snippet of Java, etc.
I'm replying to myself here because it's interesting how different a reaction this comment is getting than it did 2 years ago, when the same thing, phrased slightly differently, was the most popular comment on the Interviewing Software Engineers article.
Your job, as a job seeker, is to get past HR to a hiring manager. You can go through HR, around HR (a buddy ships your res to a hiring manager), or over HR (an exec or VC flips your res to a hiring manager). You will not be working for HR. They are merely an impediment to be overcome.
If you wish, you can think of it as the border. You can apply for legal immigration (lengthy and dicey; think of the immigration and interview scenes in Thunderdome), you can have a mule help you across (ideally this should cost you a lunch; the mule will get a bonus from BigCo), or you can be awarded special status based on your illustrious and meritorious service to the glorious republic as determined by and at the sole discretion of its benevolent leaders.
Any which way, get past HR. Do not complain about HR. Just show your skill set (networking, interview prep, ...) and get past them.
"We only hire the best" is a statement made by those who subconsciously don't believe they project outward quality and excellence in their everyday execution. It's as if those on the selling side believe that others see them as just-another-job, and they need to refute it. It's not a red flag; quite literally ignore the remark -- it's pointless white noise.
As for uninformed biases about tech stack and experience and what-not -- yep, that's going to happen. It's fairly well-known that hiring is basically a broken experience, even at the very best of companies. Don't expect everyone to get it.
In terms of MS stack vs non-MS stack, the reality is it's a hit-and-miss proposition. The OP's story about his colleague being competent in general comp-sci can be matched with plenty of others where the individual's competency seems to stop at Win32. My advice -- if your resume points only to MS-stack systems, get something non-MS on there.
I'm in Seattle, we run non-MS systems, and I see quite a few MSFT-based resumes. I'm ex-MSFT and feel I can gauge competency across these lines fairly effectively. Many companies are willing to give someone a chance, but learning Linux on their dime and time? That's a lot to ask.
With all of this, take it with a grain of salt. The common advice most people are told to follow is "hire slow, fire fast". In reality, this just means being super-conservative in your hiring decisions. Super-conservative leads to people "checking off all the boxes" in case something doesn't pan out -- that way they won't be criticized later.
If Google's hiring is as good as this article suggests, it must have changed a lot in the last few years. Can anyone point me at an up-to-date article on Google's hiring process?
Not so long ago, they were very focused on school rankings, GPAs, and academic qualifications. I always felt this was misguided because it was nowhere near as objective and quantitative as they thought. More recently they said they had a big data-driven effort to improve their process, but it was hard to tell if that really led to new ideas, or if it was just more of the same veiled elitism.
As an anecdote from the other side -- a friend of mine got referred to Google, was interviewed but not hired. A year later they called up and asked if he was still interested, interviewed him again, and rejected him again. Then a year later they called him again, and... you get the picture!
Damn, this paints a picture where only the shiniest candidates matter, and the rest of us mere mortals should apologize for breathing their air.
How do you know if you're "the best", and if not, how do you become "the best" (or at least employable), or is this industry a "ya got it or ya don't" kind of place like music or art, where only the top 10% deserve to make a living?
This doesn't just paint the picture, it holds a mirror up to the reality of our industry. If you aren't in SV (or NYC maybe), don't have a degree from an expensive college (or worse, no degree at all!), or haven't worked for FB/Google/Apple, you're basically dog shit to any interviewer.
All the clamoring for developers is because so many companies delude themselves into thinking they should only hire the shiniest candidates, as you put it, even though there are thousands of other people who could do the job almost or just as well if they would look at people living in another city, without degrees, and who haven't worked at a huge company and/or unicorn startup. As a bonus, people like that (like me!) are generally cheaper since they don't have to pay the ridiculous cost of living in the Valley, or realize that making enough salary for a comfortable living without going into multiple hundreds of thousands is perfectly fine.
Basically, you must understand that you are fighting against yourself. Each day, you fight your former self. You must WANT to become better. The best thing for that is to code a lot (:important) & reread your ancient code (:super important). You will fall in ALL the pitfalls, at first. And then, you will learn to circumvent them. And then you will learn to draw straight lines between the pitfalls, and things will start to become clean. At the end, like Neo in the Matrix, you will no longer see the pitfalls.
(And then, you will learn FP and you will have to relearn everything. And then Haskell... And then...)
"who was tragically underemployed for years because of his low GPA in college"
Who puts their GPA on their resume, or even discusses it, 2-3 years out of school? I have never discussed GPA or anything about academic experience with mid/senior-level candidates, nor has anyone ever asked me.
My engineering friends thought Mike’s resume was fine
If they thought it was fine, they could easily tell HR, "Interview this person," and just like that, they're past the screen. (I don't know why it took me so long to learn this.)
I thought this was a great commentary and it reminds me of the tendency of early-stage CEOs to say, "we really don't care that much about your current skill set, we just want to hire athletes" ('athlete' in this case meaning someone very talented and hard-working).
The message these founders are trying to communicate here is, "we want to hire people who have the capacity to learn and grow and who are willing to work long hours, so we'll overindex on those qualities and accept the fact that many people who fit these criteria are riskier hires, in that they have not been doing a similar job at another company prior to us hiring them."
Based on anecdotal observations of many hiring decisions, what these founders often actually have in mind is, "we want to hire people who've achieved prestigious milestones". And so they will hire a person whose school prestige/GPA/employer prestige factor is high, but who is not likely to succeed in the role.
Anyway the upside of all this is, as OP mentioned, there's a lot of underserved talent out there.
I have some experience with this: my first programming job was at a small nonprofit, on a tiny team, doing .NET software, and during the job search when I sought to move on, there were a lot of backhanded compliments about how they didn't expect me to do that well on their evaluations.
This is great news for any company that's able to see a few millimeters through the bullshit. Just hire people who are strong with computer science fundamentals, and maybe a few others who can play cross-functional roles. Treat them like a team and create proper mentorship/leadership dynamics. Work towards a common goal and empower the team. Watch them become one of the more functional teams in the tech industry.
The companies that hire fresh-out programmers from top schools have such an uphill battle. They have to handle people with huge egos from being sought after, all bickering with each other on the team. They have to deal with the mistakes of people who think they know how to build something good for the company but are actually just wide-eyed and pig-headed. And then they have to deal with the fact that these people who are the "cream of the crop" will leave in a year or two for another company's offer, because even though this first company hired with a high salary, no company in tech understands how to promote or give raises from within. It's always about the fresh and shiny new employee, because almost all of tech is broken, and once the reality of a team/person is seen it never meets expectations.
I was there a few years ago: good academic title, good coding skills, some industry experience. But I also hit a wall trying to get a job ....
My problem was that I was doing it "the right way" which turns out to be the wrong one. The application process is terribly broken, as somebody noted above.
Which makes me think, as developers we get this entire hiring thing wrong. Next time I want a job, I will think hard about a way to hustle it without entering the application process.
Large companies IMO aren't set up to take advantage of arbitrary levels of ability. They're antifragile in being able to ingest common-denominator skills and plod along despite bad engineering. The linchpins are already in place BEFORE the hiring/growth phase. New hires who can actually compete with the established linchpins only scatter attention and increase the failure rate by questioning leadership.
A lean chain of intent is necessary to hit a goal as the slack compounds each layer down the chain. This is terrible for exploratory action because the output space is myopically limited but necessary for goal setting. Fatter chains often produce more total value but that energy is dispersed and the revenue streams often cannot take advantage of the innovation.
An analogy I like to use is trebuchets and daggers. JavaScript ability is like a dagger: generically useful but fungible and always in some sort of demand. Trebuchets are much more complex and much better at doing certain types of damage, but the only buyers are castle-besieging kings who may have already invested everything in a catapult-producing supply chain.
Most fields like archaeology or art history don't expect their specialization to be highly compensated but often fields that go through periods of hype don't realize their skills can lose relevance. There are a lot of people learning ML now for example but once commoditization of tools and techniques happens the actual number of jobs may become too low to have market liquidity. Unlike A/B testing consultants.
Currently everyone in software can have their cake and eat it too but it's a position of privilege and luck and often this is forgotten.
The comments about the Microsoft tech stack are interesting. The best, most successful, most reliable, most loved-by-users application I worked on was a distributed C# SOA system on Windows with a relational database. However, coming off that job I had trouble finding work. Now I work on a Scala/Java/Python stack with containers - we have so many problems - it's difficult to use and maintain, but suddenly I'm drowning in job offers.
I will soon get my Master's degree in CS, but via the path of least resistance, so my GPA is not shiny. I decided my finite time and energy could be better spent, so I concentrated on coding (my own projects), socializing, sports, games, etc.
When I studied for exams, at the point where I knew I was going to pass, I shifted to other projects.
Next year when I join the workforce and start looking for jobs, will low GPA bite me?
It will likely have a negative impact on your initial job search; however, once you get a job it will become irrelevant. A lot of times HR systems filter out candidates below a certain GPA threshold. As a hiring manager, the low GPA would signal to me either lack of ability or lack of focus, though I'd be more concerned about the focus/work ethic issues than ability issues.
My advice: whatever you do, don't make any excuses about why your GPA is low. If asked, I'd just stick with the side-projects comment, but don't treat it as a good thing (i.e., you probably should have spent less time on side-project and more time focusing on studies, etc.). I would also make sure those side-projects are extremely impressive.
In my experience there are three types. Note: We don't have GPA, but some companies look at study grades (some only masters, others bachelors grades) and some even go further back to high school/middle school which would be sort of GPA.
The types I've met (companies/recruiters alike):
1) Don't care; they look at what you think you can do and your motivation for that assessment. They usually also give some form of shorter first contract or traineeship with reduced pay while in training, but at least you will get a chance. On the other end, they hire you simply because you have the relevant skill set and show a good work ethic (e.g. can work on your own, which is somewhat deducible from your curriculum).
2) Select solely on bachelor's/master's grades and don't care about GPA/high school. In my experience this dies out as soon as you have 3+ years of work experience. There are many levels in this category, from looking only at relevant fields to looking at all of them, even non-relevant ones.
3) Look at GPA/high/middle school grades and everything else to 'properly select' their employees. They pay well, though, and usually wear suits, most of them to hide their incompetence in the field they should excel in (I'm not claiming they aren't any good, but usually not in what they are supposed to do).
Really? There is a possibility that this is my lack of industry experience talking, but I find the idea that a company looks at high school grades very difficult to believe. As for middle school... I would seriously wonder about the basic competence (and sanity) of someone who asked me about middle school grades.
The question is: does it ever matter past the first job? Sure, for your first job, you have no work experience, so your education is all a potential employer has to go on, but I'm wondering if I should remove the GPA from my resume since it's been over a decade since I graduated and have a ton of work experience since.
This is a culture of fabricated hyper-competitiveness just for the sake of making a few feel special. It's completely integrated, cultural and also immature.
There is really not that much difference between people for the range of jobs available to create this mythology. The stars, or 'the best', are Sergey and Larry, not the people now working at Google, etc. Employees may be talented and efficient, but they are like the millions of other faceless people working at corporations, and this is not meant to diminish them.
But the special ones usually made their own way and are known for that. And you don't have to be a billionaire; there are tons of software folks who have distinguished themselves. There is a lot of self-aggrandizement and pandering to egos among employees and hiring companies.
>By going after people with the most sought after qualifications, TrendCo has narrowed their options down to either paying out the nose for employees, or offering non-competitive compensation packages. TrendCo has chosen the latter option, which partially explains why they have, proportionally, so few senior devs – the compensation delta increases as you get more senior, and you have to make a really compelling pitch to someone to get them to choose TrendCo when you’re offering $150k/yr less than the competition.
What is the "compensation delta"? Is that part of a formula for calculating expected compensation? For that matter, what is a good formula for calculating compensation when weighing job offers?
Are there not plenty of companies that hire only older, more experienced people, such as government entities and large corporations? It seems like we always hear about the pro-youth companies. Perhaps it's only Silicon Valley.
I think the reason people don't want to hire Windows/.NET developers is the very low signal-to-noise ratio. This makes distinguishing a good developer from a bad one very difficult, as 9/10 times it is a waste of time interviewing them, and after doing this for a while you develop an unconscious bias against certain technologies/platforms.
Is this a problem just in areas with the big trendy companies?
I've always liked working at smaller companies, as it always feels like what you do is more significant and a lot more diverse. I know of other smaller companies (< 30ish people), and they tend to like broader experience and people who can get stuff done. School background counts for little.
Yeah, hiring seems to be hard. It is not as simple as assigning a value to every candidate, sorting them, and choosing the best; instead you have to rate each candidate individually while considering the effect they will have on your existing company.
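As a toy illustration of why candidates can't simply be scored and sorted: a candidate's value depends partly on what the existing team already covers. The sketch below is purely hypothetical (the names, scores, skills, and `gap_bonus` weight are all invented), but it shows how a team-aware ranking can invert a naive one.

```python
# Naive ranking scores each candidate in isolation; team-aware ranking
# also rewards skills the existing team lacks. All data is invented.

TEAM_SKILLS = {"python", "sql"}  # skills the current team already covers

candidates = [
    {"name": "A", "raw_score": 9, "skills": {"python", "sql"}},
    {"name": "B", "raw_score": 7, "skills": {"python", "ops", "frontend"}},
]

def naive_rank(cands):
    # Sort by individual score alone.
    return sorted(cands, key=lambda c: c["raw_score"], reverse=True)

def team_aware_rank(cands, team=TEAM_SKILLS, gap_bonus=2):
    # Add a bonus for each skill the team doesn't already have.
    def score(c):
        return c["raw_score"] + gap_bonus * len(c["skills"] - team)
    return sorted(cands, key=score, reverse=True)

print([c["name"] for c in naive_rank(candidates)])       # ['A', 'B']
print([c["name"] for c in team_aware_rank(candidates)])  # ['B', 'A']
```

Candidate A wins on raw score, but B fills two gaps (ops, frontend) and so comes out ahead once team context is considered, which is exactly why a single sorted list is too simple a model.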
Evolution works over many generations and is most pronounced when there is an evolutionary advantage. Certain kinds of "pattern matching" could be vestigial like wisdom teeth or less important than other things, like freckles and moles.
Point being, maybe the company has great sales or some unassailable market advantage. They could form an untested hypothesis around their hiring preferences since the success of the organization is largely unaffected by it.
But if they're using heuristics proven to have no value over recommendations, which probably have some value, then I'm not sure that's what's going on.
I just want to say that I absolutely love this no-frills type of page - tapping the link on mobile and instantly getting nothing but content is such a pleasant surprise!
It is truly funny how people first demand 4k monitors, but when an application actually uses the full width, they cry "oh, it is too wide," and we end up with the disgusting design of web pages with an ultra-narrow column of content and wide empty (or ad-filled) columns on both sides.
And sometimes more (maybe another editor window), all open at once. Right now, that all gets a bit crowded. Even on a single 4k screen, it would probably end up slightly crowded.
I love this CSS-free style so much. It's such a relief on the eyes after all the templated same-looking websites out there. Oh, and it's naturally mobile-responsive.
(I don't actually disagree with your rant, much, at least for a significant subset of the HN community. This particular article is maybe a somewhat tangential prompt for it, though.)
But there are plenty of fish in the sea and one man's trash is another man's treasure. On the other hand, what's good for the goose is good for the gander.