
Product in Practice: Creating Opportunities for Deliberate Practice at BBC Maestro

How often have you read a book or article and been inspired to take action on it? And how often have you actually taken action on it? Chances are there’s a big gap between how inspired you feel and which concrete actions you end up taking. 

If that’s the case for you, you’re not alone. Reading a book is (relatively) easy. Changing our behavior is much harder. 

Reading a book is (relatively) easy. Changing our behavior is much harder. – Tweet This

This is why we created the Product in Practice series here on Product Talk (and why Teresa designed the Product Talk Academy courses and the CDH community the way she did). We want to help you see how real product people and teams are taking steps to apply continuous discovery. The more you see how others are doing it, the more motivated you'll be to take a few of these steps yourself. Well, that’s our hope, anyway.

And if you’ve been thinking about taking action for a while but have been unsure how to get started, we hope today’s Product in Practice will be the gentle push you need. 

We were thrilled when we heard Lily Smith’s talk at Product at Heart. While Lily had no shortage of product inspiration and knowledge, she realized that one element that was missing was deliberate practice, so she set out to change that. Her early experiments on her own and with the product team at BBC Maestro had some promising results and show how easy it can be to get started with building your product skills. 

Do you have a Product in Practice story you’d like to share? You can submit yours here.

Meet the Continuous Discovery Champion, Lily Smith

Lily Smith is the Chief Product Officer at BBC Maestro, an independent, VC-backed startup with a close relationship with BBC Studios. Through high-quality, long-form video content, they provide people with access to the teachings of some of the best-known and most experienced professionals in creative and wellness topic areas.

In her role as Chief Product Officer, Lily looks after Product, Design, Research, and Data within the business. Lily says her team’s purpose is to provide both business and customer value with a strong focus on growth. “We have to ensure that the content we create is presented well and consumed easily and that this helps our customers achieve their goals.”

A photograph of a group of people sitting together on bleachers in a large room.

As CPO at BBC Maestro, Lily gathered a large group of product people to participate in the inaugural ProductPlay sessions.

In addition to her day job, Lily co-hosts The Product Experience podcast and founded the Bristol chapters of ProductTank and ProductCamp. Describing her commitment to learning and practice, Lily says, “I’m an avid supporter of product communities in general and always looking for ways to improve my own knowledge and skills.”

Lily’s Challenge: It’s Hard to Find Places to Actually “Practice” Product Skills

It’s clear from her resume that learning is important to Lily. But over time, she came to a realization: She wanted to do more than just learn on her own. Here’s how Lily describes it: “In my 15 years in tech companies, I’ve done lots of self-directed learning. I love going to talks and hearing stories and advice from peers, reading books and articles and generally absorbing as much as I can. However, I always felt that there’s been something missing—a place to actually practice some of the things I was learning.”

Because Lily’s work experience was mostly on small product teams, she didn’t have a lot of opportunities to learn from her peers at work. She explains, “Having worked mostly in startups, I’ve been the sole product person in most companies. I don’t have the benefit of learning from other product people in a big team. I think there are a lot of other people in this situation.”

I’ve been the sole product person in most companies. I don’t have the benefit of learning from other product people in a big team. I think there are a lot of other people in this situation. – Tweet This

The first time Lily became aware of this was when she learned about design sprints. “It sounded amazing, but taking a whole team out of their normal routine for a week without really knowing what the outcome would be was pretty terrifying,” says Lily. “I needed to gain some confidence in the activities and in the process before trying it for real.”

Luckily, Lily had a friend, Nick Forsberg, who was interested in experimenting with her. Lily and Nick decided to condense the design sprint (which is usually conducted over a five-day workweek) into a single day. “The practice run was so helpful, and at the end of the day we concluded the sprint and felt more confident about the framework. But it would have been even better to have done this multiple times,” says Lily.

The success of this first experiment stuck with Lily, and she kept wondering how to help others experience the benefits of practice firsthand. She explains, “Fast forward a few years and product practice was still very much on my mind. I really wanted to try and run some new types of events with product people where we work together instead of just chatting and sharing.”

When Petra Wille invited Lily to speak at Product at Heart on the topic of getting better at product management, it was just the push Lily needed. “I used this as a catalyst to start trialing these events,” says Lily. “I call them ProductPlay.”

Lily’s Answer: ProductPlay Sessions to Practice Interviewing and Opportunity Mapping

For the inaugural ProductPlay session, Lily chose to cover Jobs To Be Done (JTBD) interviews and opportunity solution trees as the two frameworks to practice. “The reason I chose these is because my team and I were doing lots of interviews but I felt that we could be better at digging deeper into the ‘why’ behind people’s attitudes and behaviors,” says Lily. “Including opportunity solution trees meant that we could really see the value of understanding the why and how we can turn that into action.”

Lily came up with some rules and guidance (pictured below) to help the team understand what to expect and get the most out of the experience. These included things like, “Everyone here wants to learn. Withholding feedback blocks a learning opportunity” and “Relax! You can’t go wrong here, product is hard and we’re all here to help each other.”

A screenshot of a short list of rules and guidance for the ProductPlay session.

Lily tried to keep the rules and guidance for ProductPlay simple—the goal was just to learn together.

For the first session, Lily scheduled a block of one and a half hours (which, she says in retrospect, was quite ambitious for everything she wanted to squeeze in) and planned the agenda to include time for an ice-breaker, voting on a topic, dividing into small groups, creating interview guides, and conducting interviews.

After the team chose the topic of “reducing plastic waste in the home,” they interviewed Lily. They took it in turns to ask questions and observe each other. At the end of the interview, they all gave feedback to each other. “Being the interviewee, I was able to shed some light on other information they missed out on because of their question style and direction. We all learned a lot,” says Lily.

A screenshot of the agenda for the ProductPlay session, including time for an icebreaker, voting, interviewing, feedback, and building an opportunity solution tree.

Lily says trying to fit everything into one 90-minute session turned out to be quite ambitious. Click the image to see a larger version.

After everyone had practiced interviewing, they focused on the goal of helping people reduce plastic waste in the home and mapped some of the opportunities, solutions, and experiments from the single interview. “We learned loads in this process, too,” says Lily. “The session was a lot more successful than I imagined it would be. The team really valued the intentional feedback.”

Because the first session had been a success, Lily decided to run the same agenda again for a larger group—and with people who didn’t know each other. “This was a bit trickier,” admits Lily.

“Running the same process in a larger group—with just a small amount of context setting—had a few more challenges. One of the teams ended up being quite a junior group so they didn’t have anyone who had much firsthand experience of interviewing or opportunity solution trees.”

A photograph of a large room with groups of three or four people seated together and talking to each other.

For the second ProductPlay session, Lily organized a larger group of people who didn’t all know each other already.

There was also some confusion over whether to include JTBD in the interviewing or to try to uncover the ‘jobs’ the customer needs to achieve within the topic. “The majority of the attendees had little experience with JTBD, apart from one participant who was more purist about its application and wanted to do some job-mapping exercises before doing the interview,” says Lily. “For everyone it was an exercise in ‘letting go’ of perfectionism and fear of getting it wrong (myself included, in terms of running the event!).”

For everyone, ProductPlay was an exercise in ‘letting go’ of perfectionism and fear of getting it wrong (myself included, in terms of running the event!). – Tweet This

A photograph of three people sitting at a table and talking to each other.

Breaking into small groups gave everyone a chance to play multiple roles and give feedback to others.

Key Learnings and Takeaways

At the end of the session, Lily asked everyone to provide some feedback, including how it could be more successful, as well as their key takeaway. “One of the things that came back from a couple of people was that they felt they needed some training at the beginning of the session. I had sent some links to JTBD and OST resources prior to the session and explained that the session wouldn’t include training (it was pure practice), so it was interesting that this had been raised,” says Lily.

Reflecting on this feedback, she adds, “I think it validated for me that reading about a framework and tool and practicing it are really different, and having someone walk you through that first time is so helpful. This feedback just validated for me that an event like ProductPlay is really needed by others, too, so that product people can help each other through this and learn from each other outside of their product team.”

Reading about a framework and tool and practicing it are really different, and having someone walk you through that first time is so helpful. – Tweet This

One surprise for Lily was the group’s relative familiarity with opportunity solution trees as opposed to the JTBD framework. “JTBD has been around for a lot longer, but there was a lot more familiarity and use of OST in the groups’ day-to-day work,” notes Lily.

These initial experiences have given Lily confidence to continue pursuing the ProductPlay format. “I plan on running more events with the same group—we all agreed we wanted to do it again now that everyone is more confident with the format and the playfulness of the practice,” she says. Lily is also hoping to expand the practice to other groups and build a library of resources so people can set up and run their own ProductPlay sessions.

Ultimately, Lily sees the chance to expand ProductPlay to a much broader audience. “Like any good product person, I have a vision for ProductPlay—I’d love for it to be a movement of self-organized practice sweeping across the international product community,” Lily says. “There’s nothing more heart-warming than creating a culture of learning and support—to expand that influence beyond my immediate team and community would leave me a very happy woman.”

I’d love for ProductPlay to be a movement of self-organized practice sweeping across the international product community. – Tweet This

If you’re excited about the idea of deliberate practice but not quite ready to set up your own ProductPlay session, come join us in a Product Talk Deep Dive course. These sessions are designed to help you practice skills like interviewing and opportunity mapping in an open, low-stakes setting. We’d love to see you there! 

The post Product in Practice: Creating Opportunities for Deliberate Practice at BBC Maestro appeared first on Product Talk.


Product in Practice: Creating Opportunities for Deliberate Practice at BBC Maestro was first posted on December 27, 2023 at 6:00 am.
©2022 "Product Talk". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement. Please let us know at support@producttalk.org.


from Product Talk https://ift.tt/4RPl0OL
via IFTTT

Ask Teresa (and the Community): What Do You Do With Stakeholder Feature Requests?

Let’s say you’ve got a good thing going with continuous discovery. You’re creating a regular habit of talking to customers, you’re identifying opportunities and assumptions and building out your opportunity solution tree and starting to run small tests to explore different ideas.

Everything is going great right up until one of your stakeholders, like your CEO, approaches your team with a shiny new idea. This idea (which is probably in the form of a solution you should build) didn’t originate from your discovery work. Maybe it just came to the stakeholder spontaneously or through a conversation with someone else. In any event, this idea is not connected to any of the discovery work your team is currently doing. How should you handle it?

Let’s say a stakeholder approaches your team with a shiny new idea (which is probably in the form of a solution you should build). It didn’t originate from your discovery work. How should you handle it? – Tweet This

This is a common scenario and one that came up in the Continuous Discovery Habits community. I especially liked how several members of the community stepped in to share their thoughts on this one, so rather than a typical Ask Teresa, we’re going to share this topic in a new format, Ask the Community. We’ve summarized the responses from community members and I’ll add my own thoughts at the end.

Question: What do you do when a stakeholder has an idea that isn’t related to your outcome or your discovery efforts?

One of our community members, Rachael Jones, shared that a stakeholder came to her with a new idea for a potential solution. As is often the case, the suggestion wasn’t related to her team’s current outcome.

Rachael wondered if it made sense to assign the stakeholder’s idea to an existing outcome and conduct customer interviews to identify related opportunities. But she was concerned that this approach might lead them to “validate” the original idea that came from the stakeholder. In other words, she was concerned that confirmation bias might lead them to pay attention to signals that confirmed the stakeholder’s idea and ignore signals that negated it.

So she turned to the CDH community for help.

The Community’s Advice: Ask Follow-up Questions

"Ask your stakeholder, 'Who would you expect to use that solution?' Knowing the target customer or user can help you identify which team would be best to evaluate this need." - Hope Gurion

One community member, Ahmed Osman, suggested working backwards by asking the stakeholder which outcome or opportunity they are trying to address.

Product Talk instructor Hope Gurion had similar advice. She suggested responding with, “Excellent idea!” and then immediately asking a series of follow-up questions. Here are Hope’s recommended questions:

Who would you expect to use that solution?

Knowing the target customer or user can help you identify which team is serving those customers and who would be best to evaluate this need.

Ask your stakeholder, ‘Who would you expect to use that solution?’ Knowing the target customer or user can help you identify which team would be best to evaluate this need. – Tweet This

What are your customers doing today and why isn’t that working for them? 

Before we commit to a new solution, we want to make sure we understand what we’ll be competing with. If we can also identify where alternatives are falling short, we can quickly identify where we can differentiate.

Asking these questions can also help you and your stakeholder uncover and think through what opportunity your stakeholder is intuitively trying to address.

What is the risk of not solving that problem?

Identifying the risk of not solving that problem will help you assess the relative importance of solving this opportunity against the many others you might be encountering in your interviews or from other stakeholders.

If we had that solution, how would we measure that it was working? 

This helps you define how you would measure success if you solved that opportunity and what impact on outcomes it might have.

If we had that solution, what would that do for our business?

This question helps you narrow in on what business outcome you would expect to drive by solving this need for these customers.

Ramsey Solutions: Great Idea Collaboration

Image titled "Ramsey Solutions: Great Idea Collaboration". At the top, the phrase "Oh, that sounds like a great idea!" is displayed. Below this phrase, there's a checklist of questions, inspired by Hope's recommended list, laid out in bullets.

Another community member, Trevor Acy, said he’s had success with Hope’s list of recommended questions. At his company, Ramsey Solutions, they coach product managers to respond with a similar set of questions that they call the “Great Idea Collaboration.” They start by acknowledging the suggestion: “Oh, that sounds like a great idea! We can definitely explore that.” Then they move on to the following questions:

  • When we build it, what problem is this going to solve for our customers?
  • How will we know it was successful? What will we measure?
  • What other ideas did you consider before deciding on this one?
  • Can our team explore some other ways we might (solve this problem) and move (measurement)?

Trevor says the purpose is to bring the person into the thinking and to work on the solution together. Either they have already done these things, and the questions help them externalize the why behind the idea, or they haven’t, and you can work with them on defining the opportunity, specifying measurable success, and hopefully getting permission to explore additional solutions.

He emphasizes the importance of the last question—asking if you can explore other ways to achieve the same outcome. “That way you can run the assumption testing pattern with the stakeholder’s idea as one solution plus two to four more the team comes up with.”

Ask stakeholders if you can explore other ways to achieve the same outcome. Then you can run assumption testing with the stakeholder’s idea plus a few more the team comes up with. – Tweet This

Teresa’s Advice: Story Map and Generate Assumptions Together

"When we story map with the stakeholder who generated an idea, we expose gaps in our thinking and surface underlying assumptions together." - Teresa Torres

My suggestion is to start with story mapping and generating assumptions with the stakeholder. I want to make sure that I fully understand the idea before I start to evaluate if it’s a good fit or not. Too often, we jump to fast conclusions. So it helps to slow down and first make sure we have the full context.

Story mapping helps us align around how an idea could work for a customer. It also helps to expose gaps in our thinking and to surface underlying assumptions. When we story map with the stakeholder who generated the idea, we do this work together.

When we story map with the stakeholder who generated an idea, we expose gaps in our thinking and surface underlying assumptions together. – Tweet This

More often than not, an idea falls apart in story mapping. The stakeholder realizes the idea wasn’t fully thought through or that it was more complex than they originally thought. All ideas seem simple when they are vague concepts in our heads.

If the idea survives story mapping, we can then work together to uncover the hidden assumptions the idea depends upon for success. We can work together to identify how much risk underlies the idea.

The key here is to explore and draw conclusions together. This shifts the balance from you saying no or the stakeholder saying “do this” to the two of you working together to ask, “Is this worth pursuing?”

If the idea survives story mapping and assumption generation, then I’d take the time to remind the stakeholder what we are currently working on. For example, I might remind the stakeholder of our outcome, our current target opportunity, and the solutions we are currently exploring. I’d then ask the stakeholder to weigh this idea against that other work.

There will always be times when a stakeholder changes our priorities. That’s a fact of life. But it’s our job to make sure our stakeholders know what they are giving up in exchange. This approach enables us to do that.

There will always be times when a stakeholder changes our priorities. That’s a fact of life. But it’s our job to make sure our stakeholders know what they are giving up in exchange. – Tweet This

We regularly discuss our most pressing product challenges and share potential solutions in the Continuous Discovery Habits community. You should join us there!

The post Ask Teresa (and the Community): What Do You Do With Stakeholder Feature Requests? appeared first on Product Talk.


Ask Teresa (and the Community): What Do You Do With Stakeholder Feature Requests? was first posted on December 20, 2023 at 6:00 am.



Product in Practice: Iterating on Outcomes with Limited Data

We often hear from people that building an opportunity solution tree is hard work. And perhaps one of the most challenging parts of building opportunity solution trees is defining the outcome that you put at the top. 

One of the most challenging parts of building opportunity solution trees is defining the outcome that you put at the top. – Tweet This

As a quick reminder, your outcome should be something the product team is able to directly influence, but it also needs to connect to a business outcome like growing revenue. 

Even in the best circumstances, it can be tough to find an outcome that meets these criteria. And when you add other limiting factors, like having a small number of users or a long sales cycle, arriving at a good outcome is even trickier.

For today’s Product in Practice, we caught up with Thomas Groendal to learn how he worked within several constraints to get closer to a meaningful outcome for his product. It’s a tale with a couple of twists and turns, so you might want to buckle up for this one!

Do you have a Product in Practice you’d like to share? You can submit your story here.

A Brief Introduction to Thomas’s Work: What Is Intelligence Analytics and the Dark Web?

Thomas Groendal is the Senior Product Manager at Bluestone Analytics, a subsidiary of CACI, a $6 billion company that focuses on intelligence analysis. What exactly is intelligence analysis? Thomas describes it as the type of work someone might do to serve law enforcement or national security objectives—think trying to identify the whereabouts of a terrorist or stopping the flow of illegal drugs in a specific location.

A screenshot from the Bluestone Analytics website outlining some of their main services.

Bluestone Analytics, where Thomas works, offers a range of solutions related to data and analysis. Click the image to see a larger version.

Bluestone Analytics in particular has two main products, DarkBlue and DarkPursuit. DarkBlue takes activities that were collected from the dark web and makes them safe to view and analyze while DarkPursuit is a browsing platform that allows users to browse the dark web with a layer of anonymity and safety.

A screenshot of the DarkBlue website outlining some of the product's main use cases.

DarkBlue intelligence, the product Thomas works on, can support activities like investigations, due diligence, and cyber threat intelligence. Click the image to see a larger version.

While we’re on that topic, what is the dark web? “If by using a certain browser that connects you through a series of different randomly selected computers, you can totally obfuscate your IP address and make it impossible for somebody to truly identify who you are, then it’s a dark web,” explains Thomas. The dark web is built to protect users’ anonymity in ways that are hard to break.

Because the dark web can be a hazardous environment (both psychologically and from a security standpoint), Bluestone Analytics provides their tool suite to law enforcement, the defense department, and the intelligence community at large to help them make their investigations easier and safer.

Describing the product he’s responsible for, Thomas says, “My product squad owns presenting that data in a way that will make it easier for you to find the clues you’re looking for. We’re really about maximizing the signal to noise ratio. And my counterpart product manager’s squad is about maximizing the volume of data.”

To drill down into a specific example, if an investigator came across a dark website called “Bob’s Fentanyl Emporium,” there may be technical information that Bluestone Analytics had collected from a previous iteration of the site, such as the email address that was used to set it up. This would be the type of clue that allows an investigator to get a subpoena, gain access to someone’s computer, and potentially figure out who their suppliers are.

Thomas’s Challenge: Defining an Outcome for an Atypical Product

“In our case, measuring user activity is particularly challenging because many of our customers have legal and operational reasons for us not to track what they’re doing.”

When Thomas joined Bluestone Analytics, they didn’t have much instrumentation. And as you might guess from the nature of Thomas’s product, there were some restrictions on how he could go about measuring user activity or instrumenting the product.

“It’s not like most apps where it’s not even a consideration—of course Slack knows whether I sent messages and it probably knows who I sent messages to,” says Thomas. “But in our case, it’s particularly challenging because many of our customers have legal and operational reasons for us not to track what they’re searching or doing.”

In our case, measuring user activity is particularly challenging because many of our customers have legal and operational reasons for us not to track what they’re doing. – Tweet This

Plus, the product doesn’t have a large number of users. “It’s a low information, high noise environment,” says Thomas.

These restrictions meant it was hard to know where to start. Thomas continues, “It took us a long time to figure out the right ways to abstract the identity of a user away from their activities, but still be able to figure out what activities in the application were going to be indicative or predictive of whether a demo customer buys.”

Getting Started: Going from Zero Information to Some Information

Thomas knew it was important to define an outcome for his team, but he also needed some information to get started. When he joined Bluestone Analytics, he described the situation as a “zero information phase.” The team’s goal at that stage was just to find any kind of metric they could collect, given the constraints mentioned above.

They started with a simple customer satisfaction rating, asking people to rate how helpful the tool was. But this wasn’t particularly useful since no one clicked on it or rated the tool.

They also tracked what Thomas refers to as “blunt measures”—things like the number of users logging in or not.

At this stage, Thomas says the information they had was mostly anecdotal and qualitative. He knew that they’d need to find some way to get more quantitative information. The question was how to achieve this.

Early Phases: Gathering More Information About User Behavior and Identifying a Few Potential Outcomes

In the next phase, Bluestone Analytics began working with Amplitude to instrument their product. The team started with a set of assumptions about user behavior, and through conversations with customers and the Amplitude team, they narrowed in on a few outcomes that might be worth measuring.

Here’s how Thomas describes what he imagined as the ideal user journey: Someone comes into the app, they look at what people are talking about on the dark web, and they find someone who looks interesting and exciting. In that case, they’ll pin, save, export, or share something they’ve found. Within the product, they called this activity “clue captures.” Thomas says clue captures are what he believed customers probably cared about, based on a combination of what he’d learned from customer interviews and common sense or intuition.

But as we mentioned earlier, Bluestone Analytics’ tools have a small number of users. DarkBlue is also not the type of tool you use every day. Thomas compares it to TurboTax—it’s a tool that serves a very specific purpose, but people don’t tend to log in unless they have that particular need to fill.

While clue captures seemed like a strong contender for an outcome to measure, the team was concerned that because they don’t get much signal, it’s a slow way to measure whether the features they’re building are succeeding.

Through conversations with the Amplitude team, they learned that one user behavior they could track was any time a user hit “Control C” or used the in-app copy button. Thomas had some hesitations about defining these “copy events” as an outcome because it felt like too much of a blunt-force object or a traction metric. But at the same time, using the copy button does indicate that a user found something that was at least a little interesting, even if it eventually led them to a dead end.

Now Thomas had a few potential outcomes—clue captures and copy events. He says, “We started tracking everything and then trying to understand which behaviors correlate to revenue behavior.”

A brief aside: At this point, Thomas shared his challenge with the Continuous Discovery Habits community, explaining the difficulties of getting good data from clue captures and his concerns that copy events were simply a traction metric rather than a good outcome. Teresa weighed in that since copy events were a leading indicator of clue captures, it made sense to set copy events as the outcome. She wrote, “When a traction metric is a strong indicator of value to the customer, it’s okay to set it as your metric. However, you probably want to set a counterweight, something like ‘increase copy events without negatively harming clue capture rate.’ Here’s why: if you just focus on copy events exclusively, you are going to incentivize your team to encourage users to copy events that won’t eventually lead to clue capture. You don’t want that. So you want to make sure that those metrics stay linked.”
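Teresa's counterweight idea can be made concrete with a small check: treat copy events as the primary metric and clue capture rate as a guardrail that must not degrade. This is a minimal illustrative sketch, not anything from Bluestone Analytics' actual tooling; the function name, inputs, and tolerance are all hypothetical.

```python
# Hypothetical sketch of a primary metric paired with a counterweight:
# "increase copy events without negatively harming clue capture rate."

def evaluate_metrics(copy_events_per_user, clue_capture_rate,
                     baseline_copy_events, baseline_capture_rate,
                     tolerance=0.05):
    """Count a change as a win only if the primary metric improved AND
    the counterweight stayed within `tolerance` of its baseline."""
    primary_improved = copy_events_per_user > baseline_copy_events
    counterweight_ok = clue_capture_rate >= baseline_capture_rate * (1 - tolerance)
    return primary_improved and counterweight_ok

# Copy events rose but clue captures collapsed: the metrics are no
# longer linked, so this should not count as a win.
evaluate_metrics(6.2, 0.02, 4.8, 0.10)   # returns False
evaluate_metrics(6.2, 0.098, 4.8, 0.10)  # returns True
```

The guardrail is what keeps a team from gaming the leading indicator: copy events only matter because they tend to lead to clue captures.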

The Next Phase: Starting to Connect User Behaviors to Revenue

“Even these events in our app that we were sure were going to be super predictive didn’t end up being that predictive.”

As we’ve already discussed, Bluestone Analytics tools present a few challenges when it comes to measuring user behavior. Thomas sums it up this way: “There aren’t that many people that do dark web investigations, and their work is cyclical. There aren’t that many purchases and we have a very, very long purchase cycle, so people can take 18 months to make a purchase. And so we still have a really hard time adding a button to our UI and then watching the numbers and then seeing five days later if they go up or not.”

But the team remained committed to finding measurable outcomes for DarkBlue. Due to the long sales cycle, they decided to look at retention. What activities make people come back to DarkBlue more often?

After they’d identified clue captures and copy events as potential outcomes, they conducted a statistical analysis to determine which actions within the tool correlated to retention. In other words, which specific actions increased the chances that a user would return to the tool in the future?

The analysis revealed an interesting trend: If someone logs in today and copies at least five things, they’re much more likely to log in again in two weeks than if they hit copy once or if they save one clue.
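A cohort cut like the one the team ran can be sketched in a few lines: split users by their day-zero copy count, then measure who returns within the two-week window. The sample data below is invented purely to illustrate the calculation; the real analysis ran over Amplitude data.

```python
from datetime import date, timedelta

# Invented sample data: each user's copy count on day zero
# and the dates of any later logins.
day_zero = date(2023, 10, 1)
users = {
    "u1": {"copies": 6, "logins": [date(2023, 10, 9)]},
    "u2": {"copies": 7, "logins": [date(2023, 10, 12)]},
    "u3": {"copies": 2, "logins": []},
    "u4": {"copies": 1, "logins": [date(2023, 10, 5)]},
}

def retained(logins, window_days=14):
    """Did the user return within the two-week window after day zero?"""
    cutoff = day_zero + timedelta(days=window_days)
    return any(day_zero < d <= cutoff for d in logins)

def retention_rate(cohort):
    """Fraction of a cohort that returned within the window."""
    if not cohort:
        return 0.0
    return sum(retained(u["logins"]) for u in cohort) / len(cohort)

heavy = [u for u in users.values() if u["copies"] >= 5]
light = [u for u in users.values() if 1 <= u["copies"] <= 4]

print(f"5+ copies:  {retention_rate(heavy):.0%}")   # 100%
print(f"1-4 copies: {retention_rate(light):.0%}")   # 50%
```

With real event volumes, the same cut would be run per day-zero cohort and per behavior (copies, clue captures) to see which one best predicts a return visit.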

This was a huge breakthrough moment for Thomas because it meant evolving the narrative of how people were using the app. As he described earlier, he’d imagined that someone went into the app, looked at a bunch of data, found a bad guy, and saved the evidence. “We pictured this whole investigative lifecycle happening in the app. And so while the end of investigation events like exporting a record still seem logically like they would be most important, what we learned is that copy events were really the most important thing. It’s a much earlier stage in the investigative lifecycle.” He continues, “People get value there, so it’s just a thing that then drives us back into talking to our customers to understand why our copy events are more important than what we thought would be the end of the journey.”

Their original theory about which behavior was the desired product outcome and which was merely a traction metric turned out to be inaccurate. “Reality has revealed that it’s a different framing,” says Thomas. Now they’re using copy events as their desired outcome and clue captures as a health metric.

Thomas shared a few screenshots from his dashboard to help illustrate his learnings. The chart below shows the rolling number of users and which activities they’ve completed, such as copying once, copying more than five times, or completing a clue capture, which the chart refers to as a “strong signal.”


This chart from Amplitude shows the rolling number of users as well as the users who copy one time, those who copy more than five times, and those who complete a “strong signal” event like a clue capture.

Next, Thomas shared a chart that shows the relationship between number of copies and retention. If someone logs in on day zero and copies one to four times vs. five or more times, how likely are they to log back in again? It turns out that copying five times is almost 50% more predictive than a little bit of copying, which is not very predictive at all. And 75% of the people who have copied five times come back once every two weeks. “Even these events that we were sure were going to be super predictive didn’t end up being that predictive,” says Thomas.

Even these events in our app that we were sure were going to be super predictive didn’t end up being that predictive. – Tweet This


This Amplitude report shows that retention for users who have copied five or more times is significantly better than for users who copied fewer times or performed a strong signal event.

To show how they’re slowly improving traction, Thomas shared the chart below, which shows the percentage of active demo or paid users who perform a copy action five times.


This Vistaly graph shows the percentage of users who have performed a copy action five times over a few months. This proved to be a critical metric for Thomas to track.

Now this is where things get interesting. Thomas says it could be tempting to focus on the chart below instead, which illustrates the number of clue captures over time. At first glance, this looks great—not only has the number of clue captures increased over time, but it’s essentially reached their goal (illustrated by the dotted green line).


At first glance, this Vistaly chart looks impressive since it has essentially hit the target. But Thomas cautions that it’s not telling us the whole story.

“This is a much more appealing story as a product manager,” says Thomas, “but I don’t want to tell this story, because I know that it would be self-aggrandizing nonsense.”

Wait. What? Why did Thomas say this?

Based on the analysis they’ve done, he now knows that clue captures are a health metric. Clue captures don’t appear to offer the same user value as simple actions like copying more than five times. As Thomas explains it, “It’s the goal we thought was right based on our intuition as internal users. But it doesn’t meet the goal that ended up tracking the behavior of external users.” He continues, “Retention is a good indication of product market fit. And so far, based on our small pool of data, we thought that doing things like sharing something you found in our app or saving it so you could look at it later would be that great indicator of product market fit, but we learned that in fact any activity (perusing, coming back and just checking every day) might have more value.”

The goal we thought was right based on our intuition as internal users doesn’t meet the goal that ended up tracking the behavior of external users. – Tweet This

To help illustrate this point, Thomas shared the following example. If your job is to work on the physical defense team for the president, you might go into DarkBlue and search for “President Smith kill” and your search might come up dry. You might find nothing. But that would be a win. In that situation, you wouldn’t save a record or share a record.

Not every search ends in catching a bad guy. But that doesn’t mean that users don’t get value out of the tool. “The reality is that maybe we missed that there are other jobs that our customers value,” explains Thomas.

Key Learnings and Takeaways

“We really had to iterate with low information three or four times until we got to a metric that I feel good about. You’ve just got to keep trying things. You can’t give up.”

What has Thomas learned from this process? First, he’s quick to say that he’s far from done—he still has plenty of discovery and learning ahead of him.

But reflecting on what he’s experienced so far, he has a few observations to share. The first is that it’s tough to be in a no or low information environment, but even if that’s your situation, it’s worth taking small steps to gather as much information as you can. “If you keep trying, the juice is worth the squeeze,” says Thomas. If you’re going down this road, just be prepared that it will take time. “We really had to iterate with low information three or four times until we got to a metric that I feel good about,” he adds. “You’ve just got to keep trying things. You can’t give up.”

We really had to iterate with low information three or four times until we got to a metric that I feel good about. You’ve just got to keep trying things. You can’t give up. – Tweet This

Thomas’s other piece of advice is: “Keep an open mind and be wary of your own desire to be central to your user’s world.” Going back to the example of clue captures, Thomas says, “Our comfortable assumption was that users spent hours in the tool and then came out on the other side with the name, address, and social security number of the bad guy. I’m exaggerating here, but we assumed they came away with that winning conclusion and then they went on to whatever was next. And that’s just not the reality for a lot of our users. A lot of our users might dip into our tool to solve a narrow problem, and then go to other tools that solve other problems. And even though that’s sort of less satisfying to my ego, it is more satisfying to customers.”

Keep an open mind and be wary of your own desire to be central to your user’s world. – Tweet This

To sum it all up: “Coming up with some genuine measure of user value and then validating whether you hit the right spot is tough, but it’s really valuable.”

If you’re facing similar challenges with defining your outcomes, you don’t have to face them alone. Come join us in the next cohort of Defining Outcomes, where you’ll get plenty of hands-on practice with like-minded peers!

The post Product in Practice: Iterating on Outcomes with Limited Data appeared first on Product Talk.



