Join Scott and Jen as they interview Max Rudman about Prodly regression testing—what it is, why you need it, and how it works. Don't forget to watch the demo at the end!
Transcript:
Scott (00:12.485)
Welcome, everyone, to the third edition of Did You Know? Did you know Prodly has CPQ regression testing? I want to introduce the panel we have today. As always, I'm Scott Teeple, the Director of Customer Success. Really excited to bring Jen, my co-host, back onto the call. I think you know us very, very well by now. But we're really excited to bring on the CEO of Prodly and also the founder of SteelBrick, Max Rudman. So Max, do you want to introduce yourself real quick?
Max Rudman (00:46.046)
Hey y'all, glad to be here. Happy to join you, Scott and Jen, and really excited to dig into testing. As Scott mentioned, Prodly is my second company. My first one was SteelBrick, which was a native configure, price, quote (CPQ) solution built on top of the Salesforce platform. We sold it to Salesforce in 2016, and it's now part of the Revenue Cloud product line at Salesforce. I have been in the Salesforce ecosystem since, God, 2006, I think. So what is that, 18 years? So obviously quite a bit of experience in CPQ, in Salesforce, and in change management specifically. So really, really happy to join you guys here and talk shop on testing.
Scott (01:34.717)
There you go. Well, let's get right into it. So, Prodly has CPQ regression testing. You already trust us with all of your complex data migrations around CPQ, but why should you trust Prodly with your CPQ regression testing? So Max, what is CPQ regression testing and why is it important?
Max Rudman (01:57.122)
Well, regression testing is important whether you're dealing with CPQ or not, but what makes CPQ a particularly interesting case is that there are a lot of calculations happening, right? Particularly on the P side of CPQ, the pricing, you have lots of different discounting rules and pricing rules. And at the end of the day, what you really want is to make sure your prices are coming out right.
And as anyone who's done a CPQ implementation of a good size knows, there's a ton of permutations on those different scenarios. So testing manually every time you make a change, to make sure that every type of quote, every permutation of a quote, is coming out with the same numbers, is challenging. That's what our tool is meant to automate.
And of course, what makes us different is that it's built specifically for CPQ, and it works differently. It doesn't exercise the UI part of the flow, because really, if a button moves five pixels to the right or to the left, you don't care. What you care about is that the numbers are coming out right. So that's why we exercise the API. Of course, it doesn't replace other parts of testing; it's just a different approach. It's a trade-off: what you gain is ease of maintenance of test cases, and what you give up is coverage. So if you really want to be best in class, you should pair this with other testing approaches to cover the rest of Salesforce, test the UI, and all that.
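To make that API-level idea concrete, here is a minimal sketch of the kind of check Max describes: recalculate a quote through an API, then compare the numbers you care about against a saved baseline. The field names and the compare_to_baseline helper are illustrative assumptions, not Prodly's actual implementation.

```python
# Minimal sketch of an API-level regression check: compare the numeric
# fields you care about against a saved baseline. Field names are
# illustrative assumptions, not a description of Prodly's internals.

def compare_to_baseline(baseline: dict, actual: dict, tolerance: float = 0.01) -> dict:
    """Return a per-field diff; an empty dict means the check passes."""
    failures = {}
    for field_name, expected in baseline.items():
        got = actual.get(field_name)
        if got is None or abs(float(got) - float(expected)) > tolerance:
            failures[field_name] = {"expected": expected, "actual": got}
    return failures


baseline = {"SBQQ__ListAmount__c": 1200.00, "SBQQ__NetAmount__c": 1080.00}
recalculated = {"SBQQ__ListAmount__c": 1200.00, "SBQQ__NetAmount__c": 1150.00}

diff = compare_to_baseline(baseline, recalculated)
print("PASS" if not diff else f"FAIL: {diff}")
```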
Scott (03:51.197)
So then Max, why Prodly to do this? What do we bring that maybe some others don't?
Max Rudman (04:00.254)
Well, I think what we bring is obviously a deep understanding of the space and a deep understanding of the product that we're testing, right? We literally built it. So I think that's really it—the, quote unquote, domain expertise.
Scott (04:17.792)
No, it's perfect. It's perfect. All right. So then the basics of a regression tool. So Jen, I think this is your slide.
Jenn (04:24.789)
Yeah, thanks, Scott. So here in this slide, we have the basics of a regression testing tool. Of course, that starts with test case management and then moves on to test execution and result logging. That's really the core: being able to set up your tests, execute your tests, and then have those results to review later. Then there's the comparison of results, which is essentially how we determine whether a test passes or fails, as well as reporting on those results. We also have the ability to send notifications directly to your email, so you're notified immediately of any passes or fails on your testing. Reusability is a huge component too—not having to rebuild your test cases every time you want to run a test. And then we have environment targeting, which allows you to run tests in other environments as well; we'll talk about that a little more later. The point to highlight here is that Prodly provides and covers all of this core regression testing functionality.
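As a rough mental model of how those pieces fit together—purely conceptual, not Prodly's actual data model or API—you can picture something like this:

```python
# Conceptual sketch only: a rough mental model of the pieces Jenn lists
# (templates, cases, runs, notifications), not Prodly's schema or API.
from dataclasses import dataclass, field


@dataclass
class TestTemplate:
    name: str
    template_type: str              # e.g. "Quote" or "Contract"
    fields_to_compare: list[str]    # the mission-critical fields to watch


@dataclass
class TestCase:
    name: str
    template: TestTemplate
    environment: str                # environment targeting
    record_id: str                  # e.g. the Salesforce ID of a baseline quote
    baseline: dict = field(default_factory=dict)


@dataclass
class TestRun:
    case: TestCase
    passed: bool
    differences: dict               # expected vs. actual for any failed fields
    notify: list[str] = field(default_factory=list)  # emails to alert
```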
Jenn (05:36.343)
I think you're muted there, Scott.
Scott (05:40.797)
Look at me, talking while I'm on mute—that's great. So I think the feature I like the most, and the one we hear about most from our customers, is the notifications, right? The ability to get that email. You set it up and it's peace of mind, Jen—I've heard you say this so many times with our customers. It's the ability to have that test running automatically, getting that notification, and knowing, hey, now is when I have a problem, instead of catching it a week or two down the road. So, yeah, that's good.
Max Rudman (06:09.01)
Yeah. A good analogy I've heard is that it's like a smoke alarm, right? You install it in different parts of the house, and once it's in, it's in, and hopefully you'll never hear that sound. But if there is a fire, God forbid, you want the peace of mind that something will alert you to it. I think it's a really good analogy for regression testing in general, but particularly for how our product works.
Jenn (06:45.722)
So I think this slide is mine as well. Max, we do have some questions on this slide that we want to ask you directly. These are questions a lot of our customers ask when they first see our test tool and want to know more about it. So first and foremost, of course: what problem does the test product solve?
Jenn (07:10.805)
You're muted, Max.
Max Rudman (07:14.49)
As I said, the main problem our test product solves is regression testing, which is a particular type of testing—we'll talk about that in the next point—and even more specifically, regression testing for CPQ. And I do think we have a unique approach, a unique take on this problem, which I mentioned before: we're really focused on exercising the business logic—the pricing engine, the renewal engine—to make sure the numbers are coming out right, which is really what you care about. There are certainly some downsides to that approach in terms of coverage, but there are some really strong benefits in terms of making it super easy to create test cases and, more importantly, a much lower overhead in maintaining test cases, which I think is something other regression tools struggle with in general.
Jenn (08:18.559)
Okay, our next question is: how is the CPQ regression test meant to be used? What are the best practices around that?
Max Rudman (08:28.442)
Yeah, so I think it's really important to understand what regression testing is and what it isn't. There are different types of testing, and it goes without saying that the more testing you do, the better. There's unit testing, where you make sure an individual change you've made actually works as you expect it to.
There's system integration testing, where you make sure the change or changes you've made, when integrated into the wider system, work as expected given all the other systems and integrations in the mix. There's user acceptance testing, where you're making sure the usability of the product, or of the changes you've made, is acceptable and that the users sign off on how it works. There's performance testing—there are lots of different kinds of testing. Regression testing is specifically focused on making sure that the change or changes you made did not regress functionality—did not break functionality that previously worked.
Jenn (09:41.294)
Thank you, Max. Okay. And when we say we can do CPQ regression testing, what exactly can we test at this point in time?
Max Rudman (09:52.178)
We can test quite a lot. In fact, we can test the entire CPQ lifecycle workflow, from pricing the quote, to creating a contract with subscriptions, to amendments, to renewals of that contract. The only thing we can't really test right now is the configurator piece, and that's because the configurator isn't exposed through a robust API, so we really can't test it. And we are, as you know, hard at work adding support for Advanced Approvals, which is not technically part of CPQ, but it's a product that is frequently used with CPQ to support advanced approval workflows.
Jenn (10:49.783)
I'm very excited for that. Thank you, Max. And lastly: which environments can you run your tests in?
Max Rudman (10:58.927)
Well, "regression test" sort of implies that you're probably not running them in production, right? If you're running them in production, it's too late. But you can and should run them just about everywhere else. And that's the vision: as you're making changes and they move through your development pipeline, from your dev environment to whatever the next environment is for you—QA, or SIT, or UAT, or what have you—you can set up automations to kick off regression testing at every stage. And you should. For that reason, our tests can be executed through the API and they can be scheduled, so there are lots of ways to ensure you run these tests often.
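One common pattern implied here is triggering a test run from a deployment pipeline step after each promotion. The sketch below is purely illustrative—the endpoint URL, payload shape, and token are hypothetical placeholders, not Prodly's documented API.

```python
# Illustrative only: kick off a regression test run from a CI/CD step.
# The endpoint, payload, and token below are hypothetical placeholders,
# not Prodly's documented API.
import os

import requests


def trigger_test_run(test_case_id: str, environment: str) -> None:
    """POST a request to queue a regression test run against a target environment."""
    response = requests.post(
        "https://example.invalid/api/test-runs",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {os.environ['TEST_API_TOKEN']}"},
        json={"testCaseId": test_case_id, "environment": environment},
        timeout=30,
    )
    response.raise_for_status()
    print("Test run queued:", response.json().get("id"))


# e.g. call this right after promoting changes from dev to QA
trigger_test_run("TC-001", "QA")
```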
Jenn (11:52.407)
Thank you, Max.
Scott (11:53.853)
Well, that's great. I think now we're moving over to a quick demo of the testing suite. At this point, I'll stop sharing and let Jen put up her screen and take you through it.
Max Rudman (11:54.814)
Thanks.
Jenn (12:11.837)
Give me one moment.
Jenn (12:19.213)
Alright, let me just give this a quick little refresh here.
So when we're talking about tests, the test product has its own tab that it lives in. If you don't have access to the Test tab, please feel free to reach out and we can speak with your team here at Prodly about getting you access. But it all starts with test templates. To build a test template, you'll hit New Template, and of course you'll have to give your template a name.
Then we get to choose a template type. Do you want to work with quotes or contracts today? I'm going to go ahead and proceed with quotes. Once you have selected your template type, you'll see the available objects that exist for it. You can see we have quotes, quote lines, opportunities, opportunity products, and accounts. Just for reference, I can show you that with contracts, we have contracts, assets, and subscriptions. So coming back into quotes,
you can select the object you want to load the fields for. Then this part is really just a matter of picking and choosing which of these fields are mission critical for you—the ones where, if there's any change to the values on the baseline quote you established, you want to know immediately. So you come in here and pick the fields you'd like. For example, I can search for "amount" and take, say, the list amount, net amount, and deposit amount. You can select as many as you need. Once you have what you're looking for, go ahead and save that template; we'll be referencing it in just a moment with our test cases. So we have our template. We move on and create a new test case. Again, give it a name you can identify.
Jenn (14:12.427)
Then you get to choose which environment you actually want to run that test in, so choose whichever environment you intend to grab your record from. You'll also have to select a function here. We have various functions, and they relate to the template type and the kind of objects you're working with. We'll just use Calculate today, since we're working with quotes.
Find the test template you created earlier. Then this is where you need to plug in a quote ID: grab the Salesforce ID of a quote that you want to use as a reference for your test, drop it in here, and hit Save once you're ready.
Jenn (14:56.043)
On this page, we're just giving Prodly a moment to establish a baseline. What that means is that we go into the quote and take a snapshot, if you will, of the values that exist on it—and we only look at the fields you specified earlier in your template. Just those few fields you selected are the ones whose values we'll be looking at. So now we have that baseline established and we can move on to create a run. We also have the option to use test suites, in case you ever want to submit multiple test cases in a single run; you can do that as well, but today we'll just focus on a single test run. So create a new test run, once again choose the environment you want to run the test in, and find the test case you created earlier.
We're going to run immediately today, but really, this is where the power comes in: scheduling this to run. You can schedule it—and why not schedule it to run every single day of the week? The idea is that you'll receive an email every day when we run the test, with either a pass or a fail, along with the results indicating why. The reason the power is in running this every day goes back to that smoke alarm analogy Max mentioned, right? You want to be receiving those notifications every day knowing that everything is passing, so that if one day you do receive a failed test, you can go back to the previous day, confirm that it passed, and now you have roughly a 24-hour window to go back and see what changes were made—were they intended changes, or is something perhaps broken behind the scenes? So we're just going to run immediately today, and I'm going to submit this to do so.
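To picture what "establishing a baseline" means in practice—snapshotting only the fields the template cares about—here is a minimal sketch using the open-source simple_salesforce library. The object and field API names, credentials, and record ID are illustrative assumptions; this is not how Prodly is implemented.

```python
# Minimal sketch: capture a baseline "snapshot" of only the fields a
# template cares about. Uses the open-source simple_salesforce library;
# the object/field API names, credentials, and record ID are
# illustrative assumptions, not Prodly's implementation.
from simple_salesforce import Salesforce

TEMPLATE_FIELDS = ["SBQQ__ListAmount__c", "SBQQ__NetAmount__c"]  # fields picked in the template


def capture_baseline(sf: Salesforce, quote_id: str) -> dict:
    """Read the quote record and keep only the fields selected in the template."""
    record = sf.SBQQ__Quote__c.get(quote_id)
    return {field: record.get(field) for field in TEMPLATE_FIELDS}


sf = Salesforce(username="user@example.com", password="***", security_token="***")
baseline = capture_baseline(sf, "a0B5e00000XXXXXEAU")  # hypothetical quote ID
print(baseline)
```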
Jenn (16:57.015)
Once we submit that, we just wait for Prodly to go in and actually run the test. And very quickly we see that the test has passed. When I want to see the values, I just click on it, or use View Details here. Again, this information also comes in the email, but if you're in the UI, you can open it up here. And this is where we see the results. Of course, in this case we didn't make any changes to that quote—whatever the baseline values were when we created the test run, they still are now.
So everything has passed, and you can clearly see that the expected values all match the actual values, which is why this is a passing test. Now, just for an example, let's say I come in here and start playing with some of these numbers. Let's go ahead and change this discount.
Jenn (17:57.321)
Alright, so now this is recalculating, and we're expecting to see some of these values change. Then we'll rerun the test to see what a failed test run looks like. So give this a second to recalculate, and let's go ahead and refresh.
Jenn (18:20.013)
So now I can see that some of those numbers have changed. So again, if we rerun this, we can expect that this is going to now fail. So let's give that a rerun.
Jenn (18:34.835)
And we got our failed results. And of course, we can come in here and confirm that the expected values...
Jenn (18:51.162)
Okay, let me try setting this up once more.
Jenn (19:17.409)
Alright, so we have our failed overall result. And again, coming in, we can always see exactly why it failed. We made some changes there: the net amount changed and the deposit amount changed, so those two have failed. You can always see the value Prodly was expecting versus the actual value Prodly received, and that shows you why the test failed. And again, this would have been emailed to you as well, so you have that record no matter where you are—whether you're in Prodly or not, you're going to be notified of the change and of your pass or fail test.
Max Rudman (19:55.14)
Obviously we used discounting here for the demo. It's not a particularly realistic test case—changing discretionary discounting doesn't normally trip up regression testing. Normally you would probably set this up on some system-generated discount field, the idea being that if something changed in the configuration such that the system discount for a particular product, or for the quote overall, has changed, then that's a problem—or at least something for you to investigate. Maybe it's not a problem; maybe it's expected given the changes you've made, in which case you go in, re-baseline, and capture the new values as expected, and from that point on you'll pass.
Scott (20:44.493)
Absolutely, that's great. Thank you for the demo, Jen. Now I'll take back over the screen here and we'll finish this up.
Scott (21:00.795)
All right, so what's next? Obviously, if you have any questions or want further information—this was just a high-level discussion and demo of the overall product—don't hesitate to reach out to your customer success team, Jen and myself. We can get you connected with the right individuals. And if you're a prospect and you want to know more, you can reach out to me as well and we can get a demo scheduled.
But yeah, thank you for the time today, Max and Jen, and I look forward to our next edition.
Max Rudman (21:39.88)
Awesome. Thanks for having me, Scott. Really a blast. And thanks for giving me an opportunity to join you and Jen on this episode.
Scott (21:47.421)
Perfect. Thank you.