Execution Best Practices

The goal of execution interviews is to determine whether you know how to establish “success” for your users, define the behavior that constitutes it, and set metrics that accurately track progress towards your goal. These metrics should also help you trade off between product decisions.

There are three types of execution interviews:

  1. Defining success for your product
  2. Trade off between two alternative products / behaviors
  3. Diagnose a metric decline

1) Define success for product X

  1. Clarify the question
  2. Goal of Product X
  3. People
    1. Split into basic groups/segments
    2. Determine what they care about
    3. List a few metrics for each group
    4. Try to create a single metric that takes all of the groups into account (see the sketch after this list)
  4. Prompt Interviewer
    1. I’m going to talk a bit about the pitfalls of this metric and then go into the tangible things we can work on in order to move it
  5. Pitfalls or Brainstorm
  • Prioritize the most important parts of the space based on the interviewer’s question, always start with those, and ask whether they’d like you to go into the other areas
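
To make the “create a combined metric” step concrete, here is a minimal sketch in Python of how a composite success metric could weight per-segment metrics. The segment names, headline metrics, and weights are hypothetical examples, not a prescribed formula.

```python
# Minimal sketch: combine per-segment metrics into one composite success
# metric. Segment names, headline metrics, and weights are hypothetical.

segment_metrics = {
    "creators":  {"headline": 3.2, "weight": 0.4},  # e.g., weekly posts per creator
    "consumers": {"headline": 5.1, "weight": 0.6},  # e.g., engaged sessions per user
}

def composite_score(segments: dict) -> float:
    """Weighted sum of each segment's headline metric."""
    return sum(s["weight"] * s["headline"] for s in segments.values())

print(f"Composite success metric: {composite_score(segment_metrics):.2f}")
```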

2) Trade off between 2 alternatives

  1. Establish a baseline understanding of what your product is and what the trade-off is - Don’t jump to assumptions, because you and the interviewer may have different understandings of the product
  2. Define the mission for your team or product - What is the change that you seek to make in the world with this product
  3. Set a hypothesis as to why you think one of the two options is better, so you can explicitly test it
  4. Identify your users and what they are trying to achieve - e.g., Video creators want to easily share their work and build an audience; video consumers want to easily find and engage with the best videos
  5. Define the metrics you will use to track whether you are helping users achieve these goals
  6. Determine which of those metrics is the most important one for judging success
  7. Talk about the experiment setup: how you would test, and under what results you would launch one feature or the other (see the sketch after this list)
  8. Talk about the risks of the test and what could go wrong with the test group
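
For the experiment-setup step, a minimal sketch of how the test could be read out, assuming the headline metric is a simple conversion rate (e.g., share rate). The counts and the 5% significance threshold are hypothetical.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical experiment results: variant B is the new feature.
p_a, p_b, z, p_value = two_proportion_z_test(conv_a=1200, n_a=20000,
                                              conv_b=1320, n_b=20000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p_value:.3f}")
if p_value < 0.05 and p_b > p_a:
    print("Launch B: the lift is statistically significant.")
else:
    print("Hold: no significant lift (or B is worse).")
```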

3) Diagnose a metric decline or increase

  1. Establish baseline understanding of what your product is
  2. Ask questions:
    1. Clarify the “exact” metric that went down
    2. Was it gradual or sudden
    3. How big is the problem
  3. Create a funnel that starts with entering Facebook and goes all the way to the specific action whose metric is declining (a worked funnel sketch follows this outline)
  4. Ask clarifying questions to find out whether we know which step of the funnel is being hit the hardest
  5. Set a hypothesis as to what you think the likely culprits are (e.g., people were boycotting Facebook, so fewer people were logging in and therefore fewer were going to events)
  6. Use a framework like internal/external to explain how you will test your hypothesis
    1. Internal
      1. Bug
        1. OS
        2. Location
        3. Connectivity
        4. Devices
      2. Feature changes
        1. Did it start at a certain time
        2. Can we narrow in by GK/QE
      3. Logging
        1. Is this happening to other features
        2. Are there SEVs out
        3. When you manually test, does it work
    2. External
      1. OS problem
        1. Other apps having issues
        2. Developer notice
      2. Geography issue
        1. Government regulation
        2. Censorship
        3. Connectivity
      3. User behavior
        1. Other connected apps are going down
        2. There’s a new competitor
        3. Feature creep has set in
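
A minimal sketch of the funnel and narrowing-down steps above: compare step-to-step conversion against a baseline period, then slice the worst step by a dimension such as OS. The funnel steps, counts, and OS breakdown are hypothetical.

```python
# Minimal sketch: locate where in the funnel a metric decline is concentrated.
# Funnel steps, counts, and the OS breakdown are hypothetical examples.

FUNNEL = ["login", "open_events_tab", "view_event", "rsvp"]

baseline = {"login": 1_000_000, "open_events_tab": 300_000,
            "view_event": 150_000, "rsvp": 45_000}
current  = {"login":   980_000, "open_events_tab": 295_000,
            "view_event": 100_000, "rsvp": 30_000}

def step_conversion(counts: dict) -> dict:
    """Conversion rate from each funnel step to the next."""
    return {f"{a}->{b}": counts[b] / counts[a]
            for a, b in zip(FUNNEL, FUNNEL[1:])}

base_conv, cur_conv = step_conversion(baseline), step_conversion(current)
for step in base_conv:
    delta = cur_conv[step] - base_conv[step]
    print(f"{step:28s} baseline={base_conv[step]:.1%} "
          f"current={cur_conv[step]:.1%} delta={delta:+.1%}")

# Break the worst step down by a dimension (e.g., OS). A drop isolated to
# one platform is a strong hint of an internal bug on that platform.
view_event_by_os = {
    "iOS":     {"baseline": 90_000, "current": 85_000},
    "Android": {"baseline": 60_000, "current": 15_000},
}
for os_name, counts in view_event_by_os.items():
    change = counts["current"] / counts["baseline"] - 1
    print(f"view_event on {os_name}: {change:+.1%} vs. baseline")
```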

Resources