
Ep. 104: Exploring Validation in Agile Development

SUMMARY

The episode discusses validation in agile development, emphasizing business validation, automated deployment, and system monitoring. Key takeaways include predicting customer behavior with new features, using varied metrics, and involving stakeholders.


Description

Curious about how validation works in agile development? By this, we mean the process of verifying that the product does what it is supposed to do. Join Peter Maddison and David Sharrock as we unravel the nuances of this crucial process. We'll dive headfirst into the technological and business aspects of validation, discussing everything from automated deployment processes to system monitoring and the importance of telemetry data. We'll also show how business validation ensures your product behaves as planned, giving you the confidence to move forward, and how validation differs across types of business.

This week's takeaways:

  • Try to predict what customers will do with your new functionality.
  • Think of the many ways you might measure the impact of your new functionality.
  • Involve all stakeholders in the validation process.

Contact us at feedback@definitelymaybeagile.com with your thoughts, questions, or suggestions for future episodes. Remember to subscribe to stay updated on our latest releases.

Transcript

Peter: 0:05

Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and David Sharrock discuss the complexities of adopting new ways of working at scale. Hello and welcome to another exciting episode of Definitely Maybe Agile with your hosts Peter Maddison and David Sharrock. So we're standing here laughing a bit because we were just chatting about this episode beforehand, and today we are talking about validation.

Dave: 0:28

Validation. We've got to start with: validation of what? So we've got this note written down: post-release, pre-release validation. What are we validating? Let's get agreement on that.

Peter: 0:38

Well, there's these different levels, right? And I think part of the reason this is interesting is because you and I come from different backgrounds when we look at products. I've got PTSD from memories of being up at 1 am, restarting services, trying to get everything right, and then having to wait for people to reconcile reports and validate that things are doing what they're supposed to do. So there's validation that happens at all the wrong times, there's validation that happens from a technical perspective, and then there's validation that happens from a business or customer perspective, and that's where we're heading.

Dave: 1:16

And let's break this down a bit, because I think both of us, well, we've all had that experience of being on the phone trying to get something over the line at a very late hour of the night, right? And that validation: you make a change to your systems, you push those changes, you get everything stood up, and now you've got to figure out, and there are two levels of validation straight away. The first is: does the system work? That's your basic one, which I think nowadays is much less of a story than it maybe used to be, which is, you know, pushing changes live and then, fingers crossed, watching all the lights come on on the servers or whatever it might be, to see where you get to. That validation is: the system is up and, we believe, is operating the way we expected it to. It behaves appropriately when we look at it from an operations perspective.
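
As a rough illustration of that first "does the system work?" level of validation, here is a minimal sketch of an automated post-deployment smoke check. The service names, health-endpoint URLs, and timeout below are illustrative assumptions, not anything specific mentioned in the episode.

```python
"""Minimal post-deployment smoke check: are the services up and answering?

All service names and health-endpoint URLs here are hypothetical examples.
"""
import sys
import urllib.request

# Hypothetical health endpoints for the services that were just deployed.
HEALTH_ENDPOINTS = {
    "orders": "https://orders.example.internal/healthz",
    "payments": "https://payments.example.internal/healthz",
}


def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the service answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def main() -> int:
    failures = [name for name, url in HEALTH_ENDPOINTS.items() if not is_healthy(url)]
    if failures:
        print("Deployment validation FAILED for: " + ", ".join(failures))
        return 1
    print("All services report healthy: the basic 'is it up?' validation passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```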

Peter: 2:01

It should be. It definitely should be. This is where we get into a conversation, though, about the difference between things like telemetry, monitoring, and observability, what the various pillars of that are, and how that actually impacts your ability to understand whether the service is doing what it's supposed to be doing. Because what you do see a lot in organizations today, even recently with teams that I've met with: if you ask them a question like, so what is your service doing right now? they'll look at you blankly and go, well, I don't really know. We don't have access to the telemetry, or we don't have the visibility to be able to tell what it's doing. And this ties back to not having the feedback loops from operations into delivery teams to actually see what's happening, and can we see what's happening with the customer. I think this is going to lead into some of the other aspects of validation that you're wanting to talk about.

Dave: 2:52

Yeah, well, I mean, and I think we should pause and not just gloss over what we're talking about here. This is that field of DevOps. It's that field of DevOps being, first of all, can we automate as much of that deployment process as we can, so that it's standardized, so that we can control and configure and reproduce it repeatedly and safely. And so there's a whole bunch of monitoring that goes into, say, did that deployment go the way we thought it would? But then there's another aspect you're bringing in, which is almost, I just think of the machine that goes ping, right, in the Monty Python sense, where we're monitoring the life signs of our system, and we therefore know how it is: is it stable, is it going in the right direction, and so on. And I think that's what you mean by telemetry, right?

Peter: 3:40

Telemetry being the collecting of information from the target system, which gives you the information. And just like in knowledge management, you have data which gets turned into information, which gets turned into knowledge, which gets turned into insights. And then you also start to think about predictive analytics, to be able to understand where might this start to go wrong, and then you get into a whole other field. But I think this might be going a little further than we need to here, for sure. Yeah, so, and I think that was like step one.

Dave: 4:10

Step two is, and I think it used to be called UAT, right: the business validating that the product is behaving the way they expected it to. And we could argue that there's the business validating that the unchanged part of the product is behaving in a stable way, the way the unchanged part of the product used to behave, and then there's the validation that the changed part of the product is behaving in the new way that was expected from a business context. Yes, and then we put it in front of the customers and everything falls over. Well, and I think we want to be careful here, because the bit in front of the customers introduces a whole other thing, and that's the bit, when we were putting this episode together, certainly when I put this together, that I was thinking is the thing we want to talk about. But you raise some excellent points, which is, the first thing we need to know is, you know...

Peter: 4:55

can I deploy the system?

Dave: 4:56

and check it and make sure, you know, it ticks the box from, let's say, an operations, a running-system perspective. The second bit is: is the business aware that their system is behaving the way it had before, with only the changes they expected coming in? And bear in mind, so much of that is probably, in some cases, still manual.

Peter: 5:16

Yes. And if you've heard of observability and its place in the marketplace, this is exactly what observability is targeting: the automation of that. Business verification is one way of looking at it. It's the question of: can I understand and expose the business logic that's inside the applications and services that I'm building, so that I can start to make those judgment calls as to whether it's doing what it's supposed to be doing, in more real time?

Dave: 5:41

Is your buy flow behaving? Can I actually place an order, and is it behaving correctly, or whatever it might be that you're looking at? And I think, again, we just kind of hinted that a lot of that is still manual. I think there's a whole bunch of work to be done there, whether it's acceptance tests or automation or the observability conversations that you're just touching on. And in a sense, there's not much point moving past that until those first two are well understood. It doesn't mean 100% coverage; it means well understood. You know what parts of your system change a lot, so you're making sure that's automated and there's rapid feedback and telemetry, whatever it might be, in place. There are other parts of your system you probably have not gone to that level of investment for. And only then moving on to: what's the customer's behavior, and are they doing what we hoped they would do?
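
To make the "can I actually place an order, and is it behaving correctly?" question answerable from telemetry rather than by hand, a team might expose business-level counters alongside the usual system metrics. A rough sketch, assuming the Prometheus Python client (prometheus_client) is available; the metric names and the order-placement function are hypothetical.

```python
"""Sketch: exposing business-level signals (orders attempted vs. completed)
so the buy flow can be validated from telemetry rather than by hand.

Assumes the prometheus_client package; metric names and order logic are hypothetical.
"""
from prometheus_client import Counter, start_http_server

ORDERS_ATTEMPTED = Counter("orders_attempted_total", "Orders customers tried to place")
ORDERS_COMPLETED = Counter("orders_completed_total", "Orders that completed successfully")


def place_order(cart: dict) -> bool:
    """Wrap the (hypothetical) order-placement logic with business-level counters."""
    ORDERS_ATTEMPTED.inc()
    # ... the real order-placement logic would go here ...
    success = bool(cart.get("items"))
    if success:
        ORDERS_COMPLETED.inc()
    return success


if __name__ == "__main__":
    # Expose the metrics on port 8000 so a monitoring stack can scrape them
    # and alert if the completion rate drops after a release.
    start_http_server(8000)
    place_order({"items": ["sku-123"]})
```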

Peter: 6:30

And before we leave that particular piece, the thing that drives this today is actually largely the shift to microservice architectures. Because if you just have a traditional stack, you know, front, middle, back end, and it's just a straight up-and-down Java stack with a front end and a database at the back, you can get away with not needing to know quite so much about the telemetry and monitoring. It's kind of like: this thing's up, it's working, we're good, and I can verify that manually. Once I start to break apart my business services into a much more complicated ecosystem of services which are all dependent on each other in different ways, I now need to really understand what good means. How do I know that this is doing what it's supposed to do, because services are being reused or operating in different ways within that ecosystem? So as complexity and the operational dependencies increase, the more critical observability becomes.

Dave: 7:28

Right, and automated observability. When I'm working with teams on this one, I often call this the release cost, because these are the things you have to get through before you can sort of shut the door on the deployment and say, okay, we can go home now, everything's good. There are all of these things that have to be run through to make sure we're confident that everything's behaving as we would expect it to, and that time can be measured in days or even weeks in some cases I've seen, because there's a lot; I mean, compliance, we've not even talked about some of the things that go on from that side. So the time it takes from the moment you decide to deploy something to the moment it's in front of a customer, and we're happy that it's in front of a customer, is the release cost, which can be long in some cases, and we certainly are working with organizations to shorten that as rapidly as possible. I mean, my argument would be that it should be hours, not days or weeks.

Peter: 8:16

For sure, and minutes really.

Dave: 8:19

Ideally, yeah, but that depends on where you're working.

Peter: 8:23

It depends if you're working with me or not. I've seen this. One of the other pieces that's worth calling out is that this doesn't just apply to large, legacy, complex organizations. It applies to organizations operating in modern environments too, because I've seen it with a fully cloud-native setup, distributed architectures, and still there's this: we're going to deploy at a certain time of day, and we've got everybody gathered around making sure this works. And even then there's a lot of uncertainty as to whether it's behaving the way that it should, which tells you that we don't necessarily trust some of the automation that we have in there, and that we don't have the right visibility into the right things to be able to say with certainty, okay, I can flip this switch and just go.

Dave: 9:09

Right, but I always think of that as an old, traditional, I don't know, a different mindset with shiny new tools, right? They've got the shiny new tools, but there's still that, like you said, we have to deploy overnight or at the weekend, when nobody's around, just in case. And all of that cautious approach comes, quite frankly, from the experiences that we were describing right at the beginning of our conversation, which is being up late at night on the phone trying to get something out of the door, with all of the hurdles that used to come with that. And that's certainly, I think, changed with a more modern approach to that sort of problem.

Peter: 9:48

And more on the tooling side: the tooling that is available to enable you to get to a good state now is much better than it was previously. There's a lot of ability to extract and aggregate information, and to use machine learning on top of that to discover services and tie all of this together, and it's got a lot better over the last few years. Right, which brings us back to the sort of intent of our original conversation.

Dave: 10:12

Now I feel like we can get going on that.

Peter: 10:15

Which is that?

Dave: 10:16

Let me know. So if we have all of that in place, then the question becomes: how can we watch what's going on from the customer side and see that the functionality we've put out is behaving the way we want it to, is giving us the results we were hoping for, the results the customers were hoping for?

Peter: 10:35

Real user monitoring, RUM, or user experience monitoring, is the kind of traditional operations and DevOps answer to that. I have stories from many, many years ago when a certain big SI had bought another product and sold it into an organization I was working with, and my team ended up inheriting it. It was kind of embarrassing, because it cost quite a bit of money and it turned out to be complete vaporware; we're going back a few years, and this just didn't exist, so we ended up pretty much having to help them write it at the time. But a lot of what you're looking for there, I mean, this is when we start to talk about: what are we looking for the product to do? Are there any indicators we might have that things are going the way that they should? For example, when you're looking at something like an e-commerce site, you'd be looking at shopping cart abandonment and things like this. You'd be looking for metrics that you can extract to tell whether the system's behaving the way it should. And you're still going to want, at some point, to go and ask your customers, so you put out surveys, or even better, if you can get focus groups together, you start to say, okay, I'm going to try something. And this is where we get into the concept of experimentation too. It's like: I'm going to take a new way of doing something, I'm going to make it available to a small group of customers, and then I'm going to ask them what they think before I roll it out to absolutely everybody else.
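
One common way to "make it available to a small group of customers" is deterministic bucketing on a customer identifier. A minimal sketch, not tied to any particular feature-flag product; the feature name and the 5% rollout figure are arbitrary examples.

```python
"""Sketch: deterministically expose a new flow to a small slice of customers
before rolling it out to everybody. The feature name and 5% figure are
arbitrary examples, not anything prescribed in the episode.
"""
import hashlib

ROLLOUT_PERCENT = 5  # show the new flow to roughly 5% of customers


def in_experiment(customer_id: str, feature: str = "new-checkout") -> bool:
    """Hash the customer id so the same customer always sees the same variant."""
    digest = hashlib.sha256(f"{feature}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < ROLLOUT_PERCENT


if __name__ == "__main__":
    # Route each customer to the old or new flow and tag analytics accordingly.
    for cid in ("cust-001", "cust-002", "cust-003"):
        variant = "new" if in_experiment(cid) else "old"
        print(f"{cid} -> {variant} checkout flow")
```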

Dave: 12:11

Well, I think the key there is, number one, you have to know what you're testing for. It's not a case of watching the customers and hoping you can learn something from the behavior. You're going to learn something by predicting what you hope they're going to do and watching what happens as they start using your product, being delighted by it or not. And then the other side of it is, and I would argue that there are two things we see a lot of organizations drop the ball on, or not spend enough time on. One is really making those predictions: what do we expect to see when we push this functionality out? How are our customers going to make use of that functionality? And the second is: are we tooled up to be able to measure what happens when they do? So if we put a different buy flow in, are we tooled up so that we can differentiate between the old way of buying and the new way of buying, whatever it might be?

Peter: 13:03

Yeah, because we want to be able to monitor how they're using it. Because, depending on what it is, you might find that they're using it in ways you weren't expecting, or behaviors are happening that you weren't thinking were going to happen, and at that point you really need to understand, like, how should I behave now? Right, I'm getting notifications on my computer.

Dave: 13:24

So, as we kind of look at pulling this together, when we look at the validation of functionality versus the validation that the deployment went smoothly, the validation around functionality, or the adoption by the customers, I mean, we're seeing this a lot. Actually, it's quite interesting: we see a lot of conversations right now about internal systems for employees, because a lot of investment is being made to put new systems in place that will somehow make employees' lives easier, better, let's go with that, right? And then what they're finding, of course, is, I think we're now in a world where employees have got more, they're more likely to say no, I'm not going to do that. And so the adoption of these internal systems is really, really low, or just barely getting to what was expected. And I think a lot of that is, you know, back in the day before the pandemic, we kind of knew to work within the systems, and now we're empowered a bit to say, actually, that isn't the way I'm going to approach that problem, you've got to do something different. Which is quite an interesting shift, because with customers we kind of hope they're going to do that, but with employees you can see a big investment sit there idle because it's not getting the traction and the usage that was expected.

Peter: 14:48

Yes, yeah, and I've seen that many, many times in my career, even before the lockdown and COVID, where something got purchased and it wasn't the right solution for the problem, so it didn't actually solve it and ended up sitting on the shelf, for a variety of different reasons, at all sorts of different layers of the stack. You also have instances where something gets built internally, but internally you've got the same problem: you're not getting feedback from your customers, and your customers are the internal people. You still need to do that same work internally: am I building the thing that is going to excite and engage my people, right? So there's that kind of missing piece that we really need to put together.

Dave: 15:39

Well, and I do think it changes the conversation. When we started the conversation, we talked about things that have a long, long history. Any operations team is well aware of what happens when you deploy something. Any business group that's tied to those is well aware of what's involved in those pieces.

Peter: 15:58

And what?

Dave: 15:59

What I find interesting is the awareness around, once it's out safely in the wild and being used by end users and customers, the awareness of: did it do what we thought it would, did it achieve that end goal? It's talked about, but it is not in the DNA of organizations the way the first two examples we just discussed are.

Peter: 16:18

Yeah, especially for older organizations, the ones which haven't grown up in the cloud. Because I would say that a SaaS organization, for example, very much looks at things like churn, retention, the numbers, because that drives their business. It's just intrinsic to how they behave; they live and breathe it. So it does depend on the type of organization that we're talking about.

Dave: 16:43

And thank you for saying that, because all the examples that I'm thinking about are in, I'd say, organizations which are making that shift in that direction, and it's not something they automatically go after. And what I'm finding interesting right now is they are really beginning to. You know, we're going into conversations where the adoption of new functionality, the impact it's having on end users and customers, is being talked about, and it's expected that that information has to be there. So how do we do it? What are the one or two key things that you're going to say? We talked a little bit about experiments, I guess we have that: you need to know what experiment, what question you're asking, at the outset, before it's out of the door. We need the, well, you used the word telemetry earlier on, but we need the analytics, the information coming back in, so that we can identify the behavior. Anything else?

Peter: 17:35

To measure the experiment. So if we're going to run an experiment, we need to be able to measure the outcome of the experiment, even if the measurement is running that focus group, because that's the only way we can think of doing it. But we need some way of gathering information about whether the experiment was successful or not. We've already spoken many times about how a lot of people run experiments that can only be successful, but in this case we really mean it: we have a hypothesis, so we need to know what indicators will tell us whether it holds, and we need to be able to gather that information from the system. I think the other piece that I thought was interesting as we were talking through this is that there are these different layers of validation, and it is possible, because we can see it in the organizations that have grown up this way, to have this automated all the way up the stack. A lot of what holds back other organizations that are trying to go that way is the friction between the operations groups and development groups, the siloed approach to it. Because there isn't that free-flowing communication across the areas, you've got development teams who don't have access to the telemetry that they need, so they don't know this kind of thing, and you've got business groups telling them what to do, so there isn't that conversation about how we're going to measure whether this is successful or not.
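
As a concrete illustration of "what indicators will tell us whether it holds", here is a small sketch comparing a pre-agreed indicator, conversion rate, between the control group and the experiment group. All the numbers are invented examples, and a real evaluation would add a proper significance test before acting on the result.

```python
"""Sketch: compare a pre-agreed indicator (here, conversion rate) between the
control group and the experiment group. The counts are invented examples; a
real evaluation would also apply a significance test such as a two-proportion
z-test before acting on the result.
"""
from dataclasses import dataclass


@dataclass
class GroupResult:
    visitors: int
    conversions: int

    @property
    def conversion_rate(self) -> float:
        return self.conversions / self.visitors if self.visitors else 0.0


control = GroupResult(visitors=9500, conversions=380)  # existing buy flow
variant = GroupResult(visitors=500, conversions=28)    # new buy flow (small rollout)

lift = variant.conversion_rate - control.conversion_rate
print(f"control: {control.conversion_rate:.2%}, variant: {variant.conversion_rate:.2%}")
print(f"observed lift: {lift:+.2%} (hypothesis: the new flow converts better)")
```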

Dave: 19:09

I would add to that: the organizations that I've seen do this really, really well have that conversation before it enters the delivery teams, the development teams, for building, right?

Peter: 19:20

So it's part of how the stories are written?

Dave: 19:22

However you're writing the work requirements, it's part of the conversation at that point, driven by the business through a product owner, whoever it might be, so that it's on everybody's horizon, their awareness, all the way through the process of getting that feature built. So that when it's deployed, we've understood all the way along that this is going to need to be measured, that we need to understand what the impact is, rather than sort of cherry-picking the features at the end and trying to shoehorn in some sort of analytics and measurement and understanding. That's not going to work.

Peter: 19:54

No, and you've got to be able to have that learning, because that's a really important piece: if we know the sort of thing that we're going to try, we need to be talking with our technology teams so they can work out how we're going to measure it. Because if that happens too late, we won't be able to get the right information, potentially, depending on the nature of the systems you're working with. As with all of these things, there are lots of caveats.

Dave: 20:25

Wrapping it up? Yeah, so.

Peter: 20:26

I think we wrapped it up nicely there. I think those are three good points, and I did like the kind of layering we were doing as we went through all those different pieces. And, yeah, it's important to make sure that you do this: trace it all the way through to the outcome with the customer, right, not just looking at the technology side of it, although you can use technology to capture some of those insights. And if you're running certain experiments, work out how you're going to do it ahead of time.

Dave: 20:56

Well, I think this feels like all parts of that collaborative group working together, right? We've talked about how it's dependent on the dev-to-operations boundary, the business-to-dev boundary. It's a little bit of everything, and that's one of the reasons why it's so difficult to do. You're not relying on one handoff; it's really something that is broad.

Peter: 21:16

Yeah, excellent. So if anybody wants to send us any feedback on this episode, that's feedback@definitelymaybeagile.com, and remember to hit subscribe. We look forward to next time. Thank you, Dave. Thanks again, Peter. You've been listening to Definitely Maybe Agile, the podcast where your hosts, Peter Maddison and David Sharrock, focus on the art and science of digital, agile, and DevOps at scale.