**Advanced Campaign Analysis: Measuring Campaign Effectiveness and Long-Term Impact – YouTube**. Following along are instructions in the video below:

“I'm Yohai from Optimove, chief data scientist, and here with me is Nitay Alon, a data scientist in Optimove's research lab.

When it comes to analyzing campaigns and their data, analysts usually rely on a very specific set of tools. They base their analysis on the average; sometimes they look at the median, and they may add a few common statistical tests, such as the t-test or z-test, to check the difference between the averages. But that's about it. In this session I would like to share a more scientific approach to campaign analysis. The key motivation is to let you squeeze more insight out of your campaign data: to be able to ask, and get answers to, highly relevant questions about your campaigns.

To do that, I will share our mindset, how we as scientists address this challenge, along with some of our secret knowledge about how the data really looks and which distributions are relevant in this domain. At the end, we will provide a few handy tools that let you leverage this understanding into better analysis. So, what is the key motivation? As you all know, a campaign is always born with an objective.

It may be to increase conversion, to win back churned customers, to cross-sell, to upsell, and so on. If the marketer did the job properly and planned the campaign as a scientific experiment, then once the campaign has been executed and data starts coming in, there are many relevant questions we can ask of that data, and many methods for conducting the analysis.

But as scientists, we never pick the tool first and then ask the question; we always do it the other way around. We first look at the question in hand, and then we find the right tool, the best tool, to address it.

So what are the relevant questions I would like you to ask about your campaigns? First: is the campaign effective, and if it is, on whom is it effective? Should we keep running it? Can we estimate future performance: how much value we are going to get next week, next month, in the next execution? Is that even possible? And what are the alternatives: are there any alternatives to this campaign, should we send a different offer from the one we are sending now? When the data starts coming in, instead of narrowing it down to a single figure like the average or the median, we always strive for the bigger picture.

We always look at the entire distribution. By doing this we resist a very human instinct to simplify and aggregate things. In general that is a good instinct, but my job here is to convince you that in this case it is highly beneficial to resist it, because very many interesting insights are hiding in the distribution.

Now, it would have made all the sense in the world to base the analysis solely on the average and the standard deviation, had the data been normally distributed, and people usually think that this is the case. If the data really were normal, the average and the standard deviation would perfectly tell the entire story of the distribution: knowing them, you know what most of your observations look like. But this is extremely uncommon. Not in retail, not in gaming, not in fintech, and definitely not in campaign analysis do we see this kind of distribution; we almost never do.

What we do see is called a long-tail, or fat-tail, distribution, and here we have very bizarre behavior: the maximum is sometimes almost as important as the overall sum, and we may have one or two data points that are responsible for as much as 95% of the overall sum. We also don't have a central mass; the observations are not gathered around the average, and we have a low probability of seeing, let's call them, extreme values. In case you wonder, just to make things clear: the x-axis here is any monetary value relevant to your business (order value, deposit amount, wager amount, purchase amount, and so on), and the y-axis is the frequency, the number of customers or respondents who respond at that value.

Now, you can see that we need a lot of observations before we have covered every part of the distribution, and this makes the average bouncy: it keeps jumping around until we have seen enough data to learn its true value, and we don't always have that much time. This makes the average really problematic to rely on, much less informative, in these long-tail distributions. The key to understanding how the data behaves is the quantiles; we must use quantiles to understand our data. I want to give you a heuristic, a common method, for dividing your data using quantiles so that you can better understand your distribution.

Again, the x-axis may be any monetary value relevant to your business (order value, deposit amount, and so on), and the y-axis is the frequency, the number of customers. Let's divide the distribution into three parts. The red part is the majority of the data; it may be around 80% of the data, and these customers tend to have a very low value on average. To the right we have the central bulk; you can think of these customers as the working class. They have a higher value in terms of average, and they may be around 20% of the data, a little less or a little more. The last part is the respondents located in the tail: the tail customers, the extreme values, whom we can call the VIPs. They are perhaps 1% or 2% of the data, but they tend to have a huge value on average, far higher than all the other parts of the distribution.

As you can probably guess, not all parts of the distribution were born equal, and I want to take you through an example that will emphasize how problematic it is to base your analysis solely on the average.
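The three-part split described above can be sketched in a few lines. The data here is synthetic (a log-normal draw standing in for real order values), and the 80%/99% cut points simply mirror the heuristic from the talk; this is an illustration, not Optimove's actual tooling.

```python
import numpy as np

rng = np.random.default_rng(42)
# Illustrative data: log-normal "order values" for 10,000 customers
# (a hypothetical sample standing in for real campaign data).
order_values = rng.lognormal(mean=3.0, sigma=1.5, size=10_000)

# Divide customers into three tiers by quantile: ~80% "majority",
# the next ~19% "working class", and the top ~1% "VIPs".
q80, q99 = np.quantile(order_values, [0.80, 0.99])

majority = order_values[order_values <= q80]
working_class = order_values[(order_values > q80) & (order_values <= q99)]
vips = order_values[order_values > q99]

for name, tier in [("majority", majority),
                   ("working class", working_class),
                   ("VIPs", vips)]:
    print(f"{name}: {len(tier)} customers, mean value {tier.mean():.1f}")
```

With long-tailed data like this, the VIP tier's mean is far above the other tiers', which is exactly the pattern the talk describes.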

Let's take a very common baseline, a typical long-tail distribution. You can read the figures in the table as any monetary value; for the sake of the example, let's use order value in US dollars. For the majority, 80% of the data, we have an average value of 10; then 200 for the working class, 19% of the data; and 7,500 for the VIPs, the top 1%.

Each of the scenarios beneath the baseline refers back to it, and each one of them yields the same overall average. In the first scenario, we have a decrease of 10% in the average value of the VIPs, from 7,500 to 6,750. In the second scenario, we have a decrease of 20% in the average value of the working class, from 200 to 160, and you see that it gives us the exact same overall average. But the overwhelming fact is that even with a total crash in the average of the majority of our data, the average value of 80% of our data falling all the way from 10 to 0.6, a decrease of 94%, we still get the exact same overall average as we had with a 10% decrease in the average value of the VIPs.

It's obvious, it's crystal clear, that each of these scenarios is different, and each should trigger a different reaction, a different response, from you.
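The table can be recomputed in a few lines. The segment shares and averages below are taken from the talk; the overall average is just the weighted sum of the segment averages. The three shocked scenarios land within about 0.1 of one another (roughly 113.4 to 113.5), which is the point of the example: radically different situations, essentially the same average. This is a sketch for illustration only.

```python
import numpy as np

# Segment weights and per-segment average order values from the
# baseline table: 80% majority at $10, 19% working class at $200,
# 1% VIPs at $7,500.
weights = np.array([0.80, 0.19, 0.01])

scenarios = {
    "baseline":               [10.0, 200.0, 7500.0],
    "VIPs down 10%":          [10.0, 200.0, 6750.0],
    "working class down 20%": [10.0, 160.0, 7500.0],
    "majority down 94%":      [0.6, 200.0, 7500.0],
}

overalls = {}
for name, means in scenarios.items():
    # Overall average = weighted sum of the segment averages.
    overalls[name] = float(np.dot(weights, means))
    print(f"{name}: overall average = {overalls[name]:.2f}")
```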

But you couldn't tell that if you based your analysis solely on the average. With that understanding in mind, I will pass the mic to Nitay, who will give you some handy tools for leveraging this understanding into better analysis.

Nitay: So, after Yohai set up the kitchen for us, I'm going to show you how to cook some pasta. I want to take you through a full cycle of scientific campaign analysis: we start with a question, then we look at the data and decide which parts of it are relevant to that question, select a proper tool from our scientific toolbox, and conduct the analysis. At the end, after we've exhausted the analysis, we make an informed decision.

As the setting for my examples, I want to portray a very common scenario. You have a recurring campaign, and you sent it last week. Now it's Monday morning; you come back to the office from the weekend, grab a cup of coffee, sit in front of the computer, and look at the results. Tomorrow, 800 new customers are going to be targeted by this campaign. So take two seconds and look at the campaign results.

First, one thing pops out, a very common pattern we see all the time: the test group is much larger than the control group, meaning we have more information, and therefore higher certainty, about the test group than about the control group. Moreover, look at the averages of the groups. If you look only at the average, you get the feeling that the test is performing better. But if you look at the median, you get a different picture, in which the test is not performing as well as the control group: you might be wasting good customers by allocating them to a campaign in which their median spend is lower than the control group's. So you get two different answers to one single meta-question: is the campaign effective?

Overall, are we doing the right thing? Should we keep this campaign running or not? Let's break the question of whether to keep the campaign running into three business-related questions. The first: is the campaign effective overall? Are we making an effect on each and every customer, causing everyone in the test group to spend more? Second, even if we're not making an effect on the overall population, we might be making an effect on one region of it; it's very common in CRM and marketing that you only affect a portion of the population. And last but not least: let's try to predict the future, to estimate how much money we expect to see. Why is that important? Because if we knew beforehand how much money each campaign was about to generate, the decision would be easy: keep the profitable campaigns running and kill the ineffective ones.

The first thing we want to tackle is: is the campaign effective overall? That is very hard to answer, because we have very few data points in the control group, and remember from Yohai's part that with very few data points in a long-tail distribution, the location of the true average is very hard to pin down: the 22nd responder in the control group will shift the average either up or down with 100% probability. So we are very uncertain about the true average of the control group, and less uncertain about the location of the test group's average.

We've developed a heuristic at Optimove that we like to call simulated subsampling, and the idea is as follows: let's insert uncertainty into the location of the test group's average. If we take a small subsample of the test customers, do they still outperform the control group? To conduct this heuristic, you take all your test customers, all 108 of them, put them in a virtual hat, and draw a small subsample of size 21.

Now you have 21 customers from the test group, and you can compute their average. Repeat the process, say, 1,000 or 2,000 times, and you have a list of plausible test averages. Compare each of them to the control average: when the subsample outperforms the control (say you drew a subsample with an average of 300), mark down a 1; when it's the other way around (say the subsample average is 200), mark a 0. The fraction of 1s gives you a good estimate of the probability of the test outperforming the control. We scientists love answers in terms of probability; we don't like categorical answers such as good or bad, working or not working. And if you look at the results in front of you, you get a clear view that the test has less than a 50% chance of beating the control.
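A minimal sketch of the simulated-subsampling heuristic. The group sizes (108 test, 21 control) and the repetition count follow the talk; the data itself is synthetic, so the resulting probability is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response values (any monetary KPI) standing in for
# the real campaign's 108 test responders and 21 control responders.
test = rng.lognormal(mean=5.0, sigma=1.2, size=108)
control = rng.lognormal(mean=5.0, sigma=1.2, size=21)
control_avg = control.mean()

# Simulated subsampling: repeatedly draw 21 test customers at random
# and count how often their average beats the control average.
n_iter = 2_000
wins = 0
for _ in range(n_iter):
    subsample = rng.choice(test, size=21, replace=False)
    wins += subsample.mean() > control_avg

p_test_wins = wins / n_iter
print(f"P(test outperforms control) ~ {p_test_wins:.2f}")
```

The answer comes back as a probability rather than a yes/no verdict, which is exactly the framing the talk advocates.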

So when it comes to making a decision based on this analysis, the answer is no: you didn't make a change in the overall population. But that's okay, because as we said, you usually make an effect only on a small portion of the population. So let's focus our attention on the working class: let's try to remove the noise and improve the signal-to-noise ratio. Remember that the average is very sensitive to extreme observations (remember Yohai's table demonstration; nod if you do). A single new extreme observation will shift the location of our average, despite the fact that the majority of our population didn't change their behavior, and we want to analyze only the majority. So let's trim the edges, let's remove them. I'm keeping the scissors away from you, because it's quite terrifying.

The idea is that we use a method called the trimmed mean: we remove the extremes, denoising them, and we also denoise some of the low spenders, focusing our attention on the pink region. Now we can conduct a simple t-test to compare the trimmed means, and if you do this, the t-test comes out significant, meaning that despite the fact that you didn't make a change in the overall population, you did cause a change in the behavior of the majority of your data. One important thing to remember: in a long-tail distribution, when you compute a trimmed mean, you don't trim in standard-deviation units; you use quantiles.
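The decile-trimmed comparison might look like the sketch below. The data and distribution parameters are made up for illustration, and Welch's t-test is one reasonable choice for the comparison step (the talk doesn't specify which t-test variant it uses).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical long-tailed spend data for the test and control groups.
test = rng.lognormal(mean=5.2, sigma=1.2, size=108)
control = rng.lognormal(mean=5.0, sigma=1.2, size=21)

def trim_by_deciles(x):
    """Keep observations between the 10th and 90th percentiles,
    trimming in quantile units rather than standard deviations."""
    lo, hi = np.quantile(x, [0.10, 0.90])
    return x[(x >= lo) & (x <= hi)]

test_trimmed = trim_by_deciles(test)
control_trimmed = trim_by_deciles(control)

# Welch's t-test on the trimmed samples (no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(test_trimmed, control_trimmed,
                                  equal_var=False)
print(f"trimmed means: test={test_trimmed.mean():.1f}, "
      f"control={control_trimmed.mean():.1f}, p={p_value:.3f}")
```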

(I'm referring to what Yohai said before.) Here, I trimmed the lower and the upper decile. Last but not least, let's try to predict the future. The future is hard to predict; if I had a machine that predicted the future with 100% certainty, I would be in the Caribbean right now. But I don't have that machine, so we try to approximate the future: we make a forecast.

The idea is that we have very few data points, and if we want to explore the entire distribution of the test group empirically, it will be very costly, because, as Yohai said, it takes a lot of customers and a lot of time: we might need to target 10,000 or even 20,000 customers to this campaign just in the name of exploration, and there might be better alternatives for those customers. So we want a method that mimics the action of allocating 10,000 new customers, sampling 10,000 new observations from the same distribution that our test data came from. After years of experimenting, we've decided to model the data using the log-normal distribution (look it up on Wikipedia later); the idea is that it's a very common distribution for modeling monetary variables. So, voilà, deus ex machina: now we have 10,000 new observations from the same distribution, and we can start asking questions such as: how much money am I expected to see? Let's assume you are going to target 800 new customers tomorrow morning. Take a sample of size 800 from this distribution and look at the sum. If you do it repeatedly and then average, in this case you get somewhere in the region of 250,000. Is this a figure that you're comfortable with? Ask yourself: is this campaign generating enough money? If the alternative generates only 200,000, then clearly this campaign is doing well.
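A sketch of the forecasting step under the stated log-normal assumption. Fitting by taking the mean and standard deviation of the log-spends is the standard maximum-likelihood fit for a log-normal; the data here is synthetic, and, as in the talk's example, each of the 800 targeted customers is assumed to contribute one draw from the fitted distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical observed spends of the 108 test responders.
observed = rng.lognormal(mean=5.0, sigma=1.0, size=108)

# Fit a log-normal by matching the mean/std of log-spend
# (the maximum-likelihood estimates of mu and sigma).
log_obs = np.log(observed)
mu, sigma = log_obs.mean(), log_obs.std(ddof=1)

# Simulate tomorrow's send: 800 customers drawn from the fitted
# distribution, summed; repeat many times and average the totals.
n_iter = 2_000
totals = rng.lognormal(mean=mu, sigma=sigma,
                       size=(n_iter, 800)).sum(axis=1)

print(f"expected revenue from 800 customers ~ {totals.mean():,.0f}")
```

Comparing that expected total against what an alternative campaign would generate gives the keep-or-kill decision the talk describes.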

But if the alternative generates 300,000, then you're wasting time. Use the simulation to try to predict the future.

So, in this session we took you through our mindset for conducting campaign analysis. Always begin with a clear, business-related question, and don't be afraid to ask any question you want. Try to break it down into smaller, still business-related questions. Look at the data and select the region of the data that tells the tale you want to hear. Select the proper tool: simulation, trimmed mean, subsampling. And only after you've exhausted the scientific analysis, make an informed decision. All the methods I've presented today are available on the Optimove Connect site for your use. Thank you all for listening.”



description:

Nitay Alon and Yohai Sabag discuss the metrics and methods necessary for gauging the long-term effects of campaign effectiveness.
