tech interviews really are broken

AI and other tech interviews are spinning out of control

Jeff Holmes MS MSCS · Jul 12, 2022

Cover photo: Agê Barros on Unsplash

I am in the process of interviewing for machine learning jobs. I have an MS in Mathematics with more than 25 years of relevant software engineering experience, and I recently obtained an MSCS in Artificial Intelligence. Since 85% or more of AI projects fail [7], it seems that the emphasis should be on academic credentials rather than previous experience and whiteboard hazing problems, unless companies want to repeat the mistakes of others. To me, the interview process just seems broken [1] [3].

Background

Here are a few things that I have encountered so far:

  • Several rounds (usually three) of interviews lasting an hour or longer, in some cases as many as five or more interviews. After 30 minutes, it really is an interrogation rather than an interview.

  • I have two master's degrees and plenty of relevant software engineering experience, yet companies want me to complete whiteboard hazing [2] problems as if I were a junior programmer with no degree or experience.

  • I have usually been interviewed by someone whose second language is English, and I was able to understand less than half of what the interviewer was saying.

  • I have not been interviewed by anyone with similar credentials, and none of the interviewers had an advanced degree in computer science. One interviewer at a national laboratory (again, without credentials similar to mine) even asked me why I went back to graduate school; when I explained, his response was “I guess that makes sense.”

  • One FANG company where I interviewed wanted an MLOps engineer to retrofit their new proprietary MLOps solution into existing software engineering projects piecemeal, which would clearly be considered a software design antipattern.

  • One Fintech company where I interviewed described a waterfall software development process in which I would spend an entire career just tuning the current model, which violates a plethora of AI software development principles (Occam’s Razor, No Free Lunch, MLOps best practices, etc.). Another Fintech company where I interviewed just wanted someone to automate the model deployment process, since the Data Scientists were performing the core ML tasks; this is problematic because a data science degree does not really cover AI/ML (see Data Science vs AI Engineering).

Clearly, these companies (especially the interviewers) lack the necessary academic credentials for successful AI projects, given the multitude of newbie mistakes that I uncovered during the interview process. Therefore, I will be more adamant in the future that interviewers share their credentials. The last thing I want is to become part of a company that is part of the dysfunctional 85% of AI projects. However, this will most likely be problematic given my interview experiences so far.

Therefore, I decided to apply for some AI positions with smaller companies. Right from the outset, one company told me that they cannot afford the compensation of the big FANG companies, which is understandable. Next, I was told they have three rounds of interviews and that the first round has three scenario-based assignments: a literature review, a feasibility checklist for a pharmaceutical company, and a program to process a custom protocol (transaction log). It was nice that they were using realistic scenario-based assignments, but this seemed excessive. All told, it could easily take 6–8 hours or longer to complete the assignments, and this was only the first round of interviews. It turns out the small company (10–20 employees) is using the FANG interview process on steroids!
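For context, the transaction-log assignment amounts to writing a small parser with validation and aggregation. The sketch below is my own minimal illustration in Python; the actual assignment used a custom protocol, so the pipe-delimited format, field names, and the Transaction/totals_by_account helpers here are all hypothetical stand-ins, not the company's specification.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, Optional

# Hypothetical record format: "timestamp|TXN|account|amount"
# (the real assignment used a custom protocol; this layout is
# assumed purely for illustration).

@dataclass
class Transaction:
    timestamp: datetime
    account: str
    amount: float

def parse_line(line: str) -> Optional[Transaction]:
    """Parse one log line; return None for malformed or non-TXN lines."""
    parts = line.strip().split("|")
    if len(parts) != 4 or parts[1] != "TXN":
        return None
    try:
        return Transaction(
            timestamp=datetime.fromisoformat(parts[0]),
            account=parts[2],
            amount=float(parts[3]),
        )
    except ValueError:
        return None

def totals_by_account(lines: Iterable[str]) -> dict:
    """Aggregate transaction amounts per account, skipping bad lines."""
    totals = defaultdict(float)
    for line in lines:
        txn = parse_line(line)
        if txn is not None:
            totals[txn.account] += txn.amount
    return dict(totals)

if __name__ == "__main__":
    sample = [
        "2022-07-12T10:15:30|TXN|acct-123|42.50",
        "2022-07-12T10:16:02|TXN|acct-456|19.99",
        "garbage line",
        "2022-07-12T10:17:45|TXN|acct-123|-10.00",
    ]
    print(totals_by_account(sample))  # {'acct-123': 32.5, 'acct-456': 19.99}
```

Even a toy version like this, with parsing, validation, and aggregation, takes meaningful time to write and test properly, which is why three such assignments in a single round of interviews feels excessive.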

I have also chatted with users on Slack AI forums who clearly are managers trying to defend the broken interview process. The most common defense is the claim that candidates with advanced degrees are “unable to code” which is somewhat dubious. I find it difficult to believe that a student could obtain an MSCS degree without knowing how to code. In fact, it is more plausible that the candidate was simply suffering from performance anxiety while trying to endure the whiteboard hazing, online tests, etc. Thus, it is really proof that the interview process is not working.

In general, the tech interview process seems to be spinning out of control [6], which is ironic given the Great Resignation that is currently occurring after the “hire fast and fire faster” approach used previously.

Problems

Here are some concerns about the current interview process that are discussed in [1] and [2]:

  • Whiteboard algorithm hazing (aka live coding problems).

  • People spend weeks preparing for this process (many paid code camps pander to it), which is similar to SAT prep.

  • Demoralizing and unrealistic test of actual technical abilities.

  • May contribute to the industry’s diversity problem.

  • Tend to favor recent computer science grads from top-tier schools who have had time to cram.

  • Discriminate against people who are neurodiverse and people with disabilities.

Here are some weaknesses in the current interview process discussed in [3]:

  • Does not show if people can understand and solve real-world problems.

  • The obsession with optimization, even though optimizations rarely occur in complex projects.

  • Does not show whether people are good at solving programming problems.

  • Does not show if people can build software that is easy to maintain.

  • Does not assess the person’s learning skills (very important).

Of course, these criticisms are subjective in nature, but so is the current interview process, and whether or not the process works is debatable as well. In fact, it is often difficult to determine whether companies really want to find the best candidate or not [2] [5] [6].

A Better Interview Process

In my experience, the best indicators for success are degrees, certifications, GPA, code repos, and published articles. For interviews, I have found that discussing relevant work experience and/or presenting relevant personal projects is very effective. In addition, the GitLab approach [4] of using a scenario or assignment relevant to the job opening (without a time clock) seems to be a fair and inclusive approach, provided the assignment can be completed in a reasonable amount of time (say 1–2 hours).

Conclusion

In general, live coding problems, online tests, and other assessment tools (such as HackerRank and LeetCode) are problematic. However, there are better approaches to evaluating the skills of candidates. With 85% or more of AI projects failing, the emphasis really should be on academic credentials (degrees and certifications) rather than on creating yet more forms of whiteboard hazing.

Resources

[1] A. Jeffries, “The Broken Job Interview Process,” The Outline, Feb. 28, 2017.

[2] EditorDavid, “Are Whiteboard Coding Interviews Just Testing For Social Anxiety?” Slashdot, July 19, 2020.

[3] G. Pasarkar, “Tech Interviews are broken,” Geek Culture, Jan. 7, 2022.

[4] S. Kassabian, “The trouble with technical interviews? They aren’t like the job you’re interviewing for,” GitLab, Mar. 19, 2020.

[5] K. Wong, “Recruiter shares why companies don’t usually hire the best candidate — and what job interviews are really about,” June 16, 2022.

[6] M. Johanson, “The rise of never-ending job interviews,” BBC, Aug. 1, 2021.

[7] J. Holmes, “The AI Process,” Towards AI, May 18, 2022.

[8] A. Resnick, “What Is Neurodiversity?” Verywell Mind, Feb. 7, 2022.

[9] A. Resnick, “What Is Neurodivergence and What Does It Mean to Be Neurodivergent?” Verywell Mind, July 21, 2022.

[10] R. D. Austin and G. P. Pisano, “Neurodiversity as a Competitive Advantage,” Harvard Business Review, May-June 2017.

[11] Neurodiversity Career Connector

[12] Hiring Without Whiteboards

 