👋🏽 Welcome to Brick by Brick. If you’re enjoying this, please share my newsletter with someone you think will enjoy it too.👇🏽
Customer discovery is one of the most critical activities for an early-stage startup. It is concerned with discovering the company’s customers and validating that it has identified needs those customers actually have. It is one of the four parts of Steve Blank’s Customer Development process.
In this post, I will focus on one element of the customer discovery process: interviewing customers. If done well, customer interviews can help validate hypotheses, identify unmet needs and home in on a product that meets customers’ needs. Conversely, if done incorrectly, they can yield false positives and serve as a self-validating mechanism, leading a company down a path of building products that nobody wants. Not the best outcome.
There are several elements to the interview process, which are outlined below:
Finding customers to interview
Conducting the interview
Analyzing the data
Finding customers to interview
You’ll need to interview lots of potential customers during this discovery phase. There are various channels that can help you reach these customers. The main one is to tap into the company’s extended network: people you know who fit your target customer profile, as well as people that other members of the company know. Your investors can be incredibly useful here as well.
It has been my experience that this rolodex-based approach tends to run dry quickly. You also want to diversify your pool by talking to people completely outside of your network. This is where LinkedIn can be very useful.
LinkedIn is a fantastic way to find customers to interview. It does require that you identify your target personas, their typical job titles, the industries they work in and so forth. It also requires having a member of the team, typically the product team, spend many hours crafting and sending out LinkedIn messages. Much like sourcing candidates, this is a numbers game: for every 100 or so messages you send, you should expect a certain number of responses (in my experience, ~10-15%), some of whom will actually agree to be interviewed (~3-5%).
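As a rough sketch of this funnel arithmetic, you can work backwards from the number of interviews you want. The default rates below are the ballpark figures above; the function name and exact numbers are illustrative, and you should tune the rates to your own data:

```python
def outreach_plan(interviews_needed: int,
                  response_rate: float = 0.12,
                  interview_rate: float = 0.04) -> dict:
    """Estimate how many LinkedIn messages to send to land a target
    number of interviews, given a response rate (~10-15%) and an
    agree-to-interview rate (~3-5%)."""
    messages = interviews_needed / interview_rate
    return {
        "messages_to_send": round(messages),
        "expected_responses": round(messages * response_rate),
    }

print(outreach_plan(10))
# {'messages_to_send': 250, 'expected_responses': 30}
```

So a target of ten interviews implies roughly 250 messages and 30 replies, which is why this step eats so many hours.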
My suggestion for the messages you craft is to make clear that the intent is product research, not sales. You should also set expectations, especially on time commitment. In my experience these interviews last about an hour.
Conducting the interview
Before jumping in and conducting the interview, I like to do some prep work. At a minimum I would want to identify the key questions or hypotheses I would like to validate during the interview. Key questions to me are ones that are concerned with the most ambiguous and riskiest product propositions, which in the earliest stages tend to be everything about your product.
I also like to partner on these calls by having one person drive the conversation and the other act as a scribe. The scribe writes down every part of the interview and annotates key portions. I used descriptive # tags, much like Twitter’s hashtags, with an optional value, like so: #[tag_name]:[value]. For example, if I learn that the interviewee is in the software industry, I would annotate the transcript with #software. Similarly, if they mentioned that they have a large pain with 100PB of unstructured data, I would add the tags #pain:unstructured_data #data_size:100PB. We’ll come to why these annotations are important in a minute.
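A minimal sketch of how these annotations can later be pulled out of a transcript mechanically, assuming the simple #[tag_name]:[value] grammar above (the regex and the convention of recording value-less tags as "1" are my assumptions):

```python
import re

# Matches tags like #software or #pain:unstructured_data or #data_size:100PB.
TAG_RE = re.compile(r"#([A-Za-z_]\w*)(?::(\S+))?")

def extract_tags(transcript: str) -> dict:
    """Return {tag_name: value} for every #tag annotation in a transcript.
    Value-less tags get the value "1" so their presence can be counted later."""
    tags = {}
    for name, value in TAG_RE.findall(transcript):
        tags[name] = value if value else "1"
    return tags

notes = "works in #software, huge pain: #pain:unstructured_data #data_size:100PB"
print(extract_tags(notes))
# {'software': '1', 'pain': 'unstructured_data', 'data_size': '100PB'}
```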
With that, let’s dive into the interview. We’ll assume for the sake of this exercise that I am working on building a database management product.
I try to allocate no more than 15 minutes at the beginning of the interview to thank them for their time and remind them that this is solely for me to learn about their experiences with databases. I also use this time to get some basic demographic information. Do not ask questions you can easily answer from their LinkedIn profile or the company’s website, such as the company name, the industry vertical they are in, or the interviewee’s title. The demographic questions I ask are ones related to my product: what databases they use today, how much data is stored on them, the various workloads and use cases that run on these databases, whether they have a team that manages these databases, the annual budget allocated to databases, whether they are running on-premises or in the cloud, and so forth.
Next comes the bulk of the interview, which typically lasts about 30 minutes. During that time I want to learn as much as possible about their day-to-day, especially when it comes to working with databases. I do not tell them anything about my product, nor do I pitch it. One of my favorite opening questions is to simply ask them to walk me through their typical day (as it relates to databases), and then I dig, as shown in the made-up conversation below.
Me: Tell me about a typical day for you
Bob: Well, my days vary, but the past few days I’ve spent on preparing a financial report to my CFO. We used to do centralized billing for database services and are now moving to a chargeback model. The CFO wants me to prepare a report showing how much each department spent on databases. It’s been hell.
#charge_back_pain:10 [10 is max pain]
Me: Sorry to hear that. I can imagine how difficult it must be to pull all these numbers. How are you actually doing that?
Bob: It’s hard, because I have very little to go with. I can pull storage costs at the database level, but I need to blend it with compute and networking resources. I am also struggling to link databases back to their respective orgs. We have more than 650 databases here.
#databases:650
Me: Wow, that’s a lot of databases. Are those a mix of SQL and NoSQL?
Bob: Yes, we have everything: Oracle, Postgres, Snowflake, Teradata, and we’re now moving stuff to the cloud as well.
#Oracle #Postgres #Snowflake #Teradata
Me: Let’s go back to this chargeback. I assume you have to do it every quarter, which is probably very time-consuming for you. Have you seen a tool or product on the market that would ease this pain?
Bob: Yeah, the cloud guys are great at that. Self-provisioning and they meter the usage of both the storage and compute. I’d love to have my on-premises databases do that. All I need to do is pull down a monthly report in PDF format and share that with my CFO
Me: That makes sense. I use AWS all the time and really like their management dashboards. I assume you are just pulling this data manually now and tracking it all in Excel. Is that correct?
Bob: Yes. It’s terrible and doesn’t scale. I wish there was something better out there. I waste one week every quarter doing this exercise. I also pull in a few of my team to do that. It’s a terrible waste of time.
Me: I assume you’d want one tool to do this for all your databases. Is that correct?
Bob: Yes, I don’t want to have 650 tools. I want one tool that can do chargeback for all my databases, SQL and NoSQL. I’d pay a lot of money for this tool. Sadly, I looked around and I don’t see anything like that on the market.
Me: Let’s move on to other parts of your job that don’t involve chargeback. What else keeps you busy?
Bob: Putting out fires, provisioning new databases
And thus the interview continues. You’ll notice that I haven’t said a word about my product, apart from a quick word or two in the introduction. Instead, I am focused on learning as much as I can about Bob’s database life, and as you can tell from the above, that yields quite a lot of good information. I learned about the pain of chargebacks and can quantify the cost to Bob of not having a tool that handles them.
Once this part concludes, we move on to the closing portion of the interview. During that time I start telling Bob what we are up to, ask if he wants access to our product (if applicable), and ask whether we should remain in contact.
Me: Bob, thank you so very much for your time. I suppose it’s time for me to talk now and tell you what we’re up to. We’re building a database management console. Our product will be hosted in the cloud, you don’t need to manage it and it can help you manage databases - SQL and NoSQL, both on-premises and in the cloud. You can create new databases, modify them, monitor them and more all through our platform. To be honest, we haven’t thought about chargeback but you’ve given me lots to think about! We’re still in early development phase, but do have a trial version that I can offer to you if you are interested in seeing the product.
Bob: Yes please, that would be great.
#trial:1
Me: Great, I will send you a link to the trial version after we are done here. I assume it’s OK to stay in contact and send you updates about our product too?
Bob: Yes.
#keep_in_contact:1
Me: Thanks again for your time Bob!
Analysis
In time, and if done correctly, you will have amassed interesting and insightful data from these interviews. This data needs to be analyzed to help you identify trends and zoom in on unmet needs, market failures, or areas you believe you should invest in. You should also be thinking about what new questions and hypotheses to validate as you analyze the data you collected. Your interviews are not static; they should adapt based on your learnings. Your analysis should be done both qualitatively and quantitatively.
The qualitative aspect of your analysis will come out of reading the interviews and summarizing your findings. This should be a continuous and rolling exercise. The right cadence for this exercise will depend on the number of interviews that you are able to conduct in a given period of time. I’ve seen a team of 4 conduct about 10 interviews a week. At that rate, a monthly summary works great. You’ll have a good amount of interview data to learn from.
Remember the tags that I introduced earlier? Those will come in very handy to help you do a quantitative analysis of your interviews. In order to do so, you will have to parse the interview documents (I used Google Docs) and create an output file; mine was a CSV. Each row in this CSV file corresponds to one interview, and the columns correspond to all the unique tags across all the interviews. For example, parsing the fictitious interview I presented earlier would yield the following row and columns:

interview,charge_back_pain,databases,Oracle,Postgres,Snowflake,Teradata,trial,keep_in_contact
Bob,10,650,1,1,1,1,1,1
Note that the columns (tags) might not be uniform across all interviews. If I conduct another interview and discover a new pain, that column will be specific to that new interview.
The table below shows what this might look like once I’ve interviewed Rick. Rick uses Redshift, which is a new database that I hadn’t encountered before. He has also identified a pain with capacity planning, which is a new tag. Rick wasn’t interested in a trial or in remaining in contact with us; he’s not a target customer, at least not with the product we have in mind. The new output file from parsing the tags for both Bob’s and Rick’s interviews looks like the below:

interview,charge_back_pain,databases,Oracle,Postgres,Snowflake,Teradata,Redshift,capacity_planning_pain,trial,keep_in_contact
Bob,10,650,1,1,1,1,,,1,1
Rick,,,,,,,1,1,0,0
It is not uncommon to have lots of empty cells in this output file, especially as you start this interview process. However, once you have this data you can start pivoting on it and identifying trends.
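One way to sketch this parsing step in Python, using only the standard library (the inline transcripts and tag values below are illustrative stand-ins for the annotated interview documents, and the regex is my assumed tag grammar): each interview’s tags become one CSV row, with the column set being the union of every tag seen so far, and unseen tags left empty.

```python
import csv
import io
import re

TAG_RE = re.compile(r"#([A-Za-z_]\w*)(?::(\S+))?")

def tags_of(transcript: str) -> dict:
    """Collapse a transcript's #tag and #tag:value annotations into a dict;
    value-less tags are recorded as "1" to mark presence."""
    return {name: (value or "1") for name, value in TAG_RE.findall(transcript)}

# One annotated transcript per interview; in practice these would be
# exported from the interview documents (e.g. Google Docs).
interviews = {
    "Bob":  "#charge_back_pain:10 #databases:650 #Oracle #Postgres "
            "#Snowflake #Teradata #trial:1 #keep_in_contact:1",
    "Rick": "#Redshift #capacity_planning_pain #trial:0 #keep_in_contact:0",
}

rows = {who: tags_of(text) for who, text in interviews.items()}

# Columns = union of all tags across interviews; missing cells stay empty.
columns = ["interview"] + sorted({tag for r in rows.values() for tag in r})

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=columns, restval="")
writer.writeheader()
for who, tags in rows.items():
    writer.writerow({"interview": who, **tags})
print(out.getvalue())
```

Once the CSV exists, you can load it into a spreadsheet or pandas and pivot on any tag, for example, counting how many interviewees mentioned a given pain.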
Final thoughts
There are a few critical components to interviewing customers that I recommend you always maintain. First, shy away from pitching your idea or product and seeking validation; you’ll get false positives and might end up building products that no one truly wants. Second, always try to have two people conduct the interviews: one acts as a scribe while the other drives the conversation. An added benefit of this approach is being able to debrief together after the interview to evaluate what you learned, pair programming of sorts. Third, learn from these interviews by analyzing them. This helps you home in on new questions to ask and, hopefully, on unmet needs that your product can fulfill.
Thanks for reading! If you’ve enjoyed this article, please subscribe to my newsletter👇🏽 I try to publish one article every week.