Answers to all these questions are included in the Product Data Science course
INSIGHTS: Netflix offers a 30-day free trial. Currently, people need to enter their credit card info to join the free trial. We are thinking of letting people sign up for the free trial without asking for credit card info. How would you figure out if running this test makes sense?
INSIGHTS: How much would you charge for YouTube Premium, i.e., being able to pay to get YouTube without ads? Would this be the same as, higher than, or lower than the current avg value of a user per month?
INSIGHTS: In your opinion, what was the data-driven hypothesis behind testing FB Stories and similar products where the content disappears within one day?
INSIGHTS: At MS Bing, we increased the number of ads shown after a given search. Revenue is up, but the total number of user searches is down. Is this good or bad?
INSIGHTS: Tell me about a situation in which your analysis results were different than what you would have expected. Why was that? What did you do?
INSIGHTS: Is it better to place Google ads in a deterministic way (always in the same place on the page) or probabilistically (each search result spot has a fixed probability of being used for ads)? Assume both ways have the same expected number of ads shown.
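Not an answer, but one way to see the core trade-off is to simulate both schemes: the expected number of ads matches, yet the probabilistic placement adds variance per page. A minimal sketch, assuming a made-up page of 10 result slots and 2 expected ads:

```python
import numpy as np

rng = np.random.default_rng(42)
n_pages, n_slots, n_ads = 100_000, 10, 2

# Deterministic: the same 2 slots are always ads -> exactly 2 ads per page.
det_ads = np.full(n_pages, n_ads)

# Probabilistic: each slot is an ad independently with p = 2/10,
# so the expected number of ads per page is still 2.
prob_ads = rng.binomial(n_slots, n_ads / n_slots, size=n_pages)

print("deterministic: mean", det_ads.mean(), "var", det_ads.var())
print("probabilistic: mean", prob_ads.mean().round(3), "var", prob_ads.var().round(3))
print("share of pages with zero ads (probabilistic):", (prob_ads == 0).mean().round(3))
```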
INSIGHTS: Would you test a new feature that makes it easier for Facebook users to switch between accounts?
INSIGHTS: How would you figure out if it makes sense for FB to run WhatsApp and FB Messenger as two separate apps or merge them into one?
INSIGHTS: We want to build a logistic regression to predict conversion rate. One variable is country. It is categorical with many levels. What’s the difference between building a different regression for each country level vs applying one-hot encoding to country and building just one regression? Which approach would you choose?
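To make the contrast concrete, here is a minimal sketch on synthetic (made-up) data: one-hot encoding yields a single model where country only shifts the intercept while the other coefficients are shared, whereas per-country models let every coefficient vary but split the data.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical synthetic data: conversion depends on spend and country.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "country": rng.choice(["US", "DE", "BR"], size=n),
    "spend": rng.exponential(50, size=n),
})
base = df["country"].map({"US": -1.0, "DE": -1.5, "BR": -2.0})
p = 1 / (1 + np.exp(-(base + 0.02 * df["spend"])))
df["converted"] = rng.binomial(1, p)

# Approach 1: one model with one-hot-encoded country.
# Country shifts the intercept, but the spend coefficient is shared.
X = pd.get_dummies(df[["country", "spend"]], columns=["country"])
one_model = LogisticRegression(max_iter=1000).fit(X, df["converted"])

# Approach 2: a separate model per country.
# Every coefficient (not just the intercept) can differ by country,
# but each model sees only that country's (possibly small) sample.
per_country = {
    c: LogisticRegression(max_iter=1000).fit(g[["spend"]], g["converted"])
    for c, g in df.groupby("country")
}
```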
INSIGHTS: How would you minimize the avg number of booking requests per booked trip at Airbnb?
INSIGHTS: FB mobile web stopped working because of a bug. Surprisingly, this led to a spike in engagement per day, defined as total actions/total active users. How would you explain it?
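One way to see how this can happen: if the bug disproportionately knocks out low-engagement users, the ratio's denominator shrinks faster than its numerator. A toy illustration with invented numbers:

```python
# Made-up numbers showing how a bug that knocks out casual users
# can *raise* actions per active user even though total actions fall.
power_users  = {"count": 1_000, "actions_each": 50}
casual_users = {"count": 9_000, "actions_each": 2}

def engagement(users):
    actions = sum(u["count"] * u["actions_each"] for u in users)
    active = sum(u["count"] for u in users)
    return actions, active, actions / active

print("before bug:", engagement([power_users, casual_users]))  # ratio 6.8
# Suppose (toy assumption) only casual users were on mobile web.
print("after bug: ", engagement([power_users]))                # ratio 50.0
```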
INSIGHTS: If you had to improve FB marketplace, what would you do?
A/B TESTING: FB developed a new feature and performed an A/B test. Results: actions per user are up, likes are up, comments are down, time spent is down. All else is neutral. Would you make the change for all users based on these results?
A/B TESTING: Conversion is a dummy variable, i.e., 0/1. Why can we do a t-test on conversion rate if the main t-test requirement is that the metric we are testing follows a normal distribution?
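A quick simulation hints at the usual argument: the t-test operates on the sample mean, and by the central limit theorem the mean of many 0/1 draws is approximately normal. A minimal sketch, with an assumed 5% conversion rate and 2,000 users per group:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p_true, n = 0.05, 2_000          # assumed conversion rate and sample size

# The t-test is applied to the *sample mean* of the 0/1 outcomes; by the
# CLT that mean is approximately normal for large n.
sample_means = rng.binomial(n, p_true, size=10_000) / n
print("skewness of sample means:", stats.skew(sample_means).round(3))     # ~0
print("excess kurtosis:         ", stats.kurtosis(sample_means).round(3)) # ~0

# Hence a plain two-sample t-test on conversion is reasonable at this n:
a = rng.binomial(1, 0.05, size=n)
b = rng.binomial(1, 0.06, size=n)
print(stats.ttest_ind(a, b))
```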
A/B TESTING: How would you test the success of a new ad campaign?
A/B TESTING: Define statistical significance in layman’s terms. Why do people often choose 0.05 as the threshold? Wouldn’t, say, 0.4 or 0.45 lead to higher gains in the long run?
A/B TESTING: We ran an A/B test. Results were non-significant, but only slightly: the p-value was 0.06. What would you do?
A/B TESTING: How would you test different prices?
A/B TESTING: Some companies run tests with the following strategy: first, a test is run on a small percentage of users (say 5%). Then, if the test group wins, an additional 5% of users enter the test. If this group also wins, the change gets implemented for everyone. What’s the difference between this strategy and running a normal single test?
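A simulation can make the difference tangible. The sketch below (with assumed numbers: 10% baseline conversion, 5,000 users per arm per stage) compares the two-stage gate against a single test with the same total sample, under no effect and under a real lift:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def stage_wins(lift, n, alpha=0.05):
    """One A/B stage on conversion; True if treatment wins significantly."""
    a = rng.binomial(1, 0.10, size=n)
    b = rng.binomial(1, 0.10 + lift, size=n)
    t, p = stats.ttest_ind(b, a)
    return p < alpha and t > 0

def run(lift, n_stage=5_000, sims=2_000):
    # Two-stage gate: stage 2 only runs if stage 1 wins (short-circuit `and`).
    gated = np.mean([stage_wins(lift, n_stage) and stage_wins(lift, n_stage)
                     for _ in range(sims)])
    # Single test with the same total sample size.
    single = np.mean([stage_wins(lift, 2 * n_stage) for _ in range(sims)])
    return round(gated, 4), round(single, 4)

print("no true effect -> (gated, single):", run(lift=0.00))  # gate is far stricter
print("+1pp true lift -> (gated, single):", run(lift=0.01))  # ...but loses power
```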
A/B TESTING: After running an A/B test on conversion rate for 1 week, the width of the 95% confidence interval is about 1%. For how long should you have run the test to get a width of 0.5%?
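For reference, the relevant mechanics: the CI width for a proportion shrinks like 1 over the square root of the sample size, so halving the width takes roughly 4x the data, i.e., about 4 weeks at constant traffic. A small check with assumed numbers (5% conversion rate, ~7,300 users per week):

```python
import numpy as np

p = 0.05                 # assumed conversion rate
n_week = 7_300           # assumed weekly traffic; gives ~1% width after week 1

def ci_width(n):
    # 95% CI width for a proportion: 2 * 1.96 * sqrt(p(1-p)/n)
    return 2 * 1.96 * np.sqrt(p * (1 - p) / n)

for weeks in (1, 2, 4):
    print(f"{weeks} week(s): width = {ci_width(weeks * n_week):.4f}")
# The width halves only at 4 weeks, since it scales with 1/sqrt(n).
```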
A/B TESTING: A/B test won. We made the change for all users. After a few weeks, we want to double check if the metric actually went up after the change. How can we do that?
A/B TESTING: A/B testing can lead to over-optimizing for the current user base, missing out on growth opportunities with new/different users. How would you avoid this problem?
A/B TESTING: Can you describe a situation in which a higher p-value threshold for significance (>0.05) could potentially make sense?
METRICS: At a given ecommerce site, conversion rate goes down, but the absolute number of conversions is up. Is this a good thing or not? Can you think of a scenario that could have this outcome?
METRICS: What are the pros and cons of these two possible YouTube metrics: avg view time per user per day vs percentage of users who watch at least X minutes per day? What are the practical differences in optimizing for one vs the other? Which one would you choose and why?
METRICS: At Amazon, we are running an A/B test to check if a given UI change increases conversion rate. Would you also test on other metrics, such as revenue or number of visitors? What are the pros and cons of testing on multiple metrics?
METRICS: Our fraud algorithm has 98% accuracy. Do you think this is good or bad? Follow-up: assume the cost of not catching a fraud (a false negative) is very low. Would you be OK with 98% accuracy in that case?
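One reason accuracy alone is hard to interpret here is class imbalance: if, say, only 2% of transactions are fraud (a made-up base rate), a model that never flags anything already scores 98%. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed base rate: 2% of transactions are fraudulent.
y_true = rng.binomial(1, 0.02, size=100_000)

# A "model" that flags nothing is already ~98% accurate...
y_pred = np.zeros_like(y_true)
accuracy = (y_true == y_pred).mean()
recall = ((y_pred == 1) & (y_true == 1)).sum() / y_true.sum()

print(f"accuracy: {accuracy:.3f}")   # ~0.98
print(f"recall:   {recall:.3f}")     # 0.000 -- it catches no fraud at all
```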
METRICS: Our dashboard at Google shows a sudden drop for a given metric. It was because of a logging bug and was fixed quickly. Would you still do some analysis related to that event? What would you look at?
METRICS: Same as the previous question, but this time it was because of a product bug. For instance, the video recommendation model stopped working and people started getting random recommendations. The bug was quickly fixed and things went back to normal. Would you still do some analysis related to that event? What would you look at?
METRICS: Is the avg number of likes per FB user per day larger than, smaller than, or the same as the median? Can you give an example of a metric where the avg-median relationship is flipped?
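As a sanity check on the intuition, here is a sketch with assumed distributions: likes per user are typically right-skewed (mean above median), while a metric like percent-of-video watched can be left-skewed (mean below median) when most viewers finish:

```python
import numpy as np

rng = np.random.default_rng(4)

# Right-skewed stand-in for likes/user/day: most users like a little,
# a heavy tail of power users pulls the mean above the median.
likes = rng.lognormal(mean=1.0, sigma=1.2, size=100_000)
print("likes   -> mean:", likes.mean().round(2), " median:", np.median(likes).round(2))

# Flipped case: share of a video watched when most viewers finish.
# Beta(5, 1) piles mass near 1, so the mean sits *below* the median.
watched = rng.beta(5, 1, size=100_000)
print("watched -> mean:", watched.mean().round(3), " median:", np.median(watched).round(3))
```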
Answers to all these questions are included in the Product Data Science course