Saturday, September 11, 2021

Challenging task in R involving sampling and elasticity

I have a task I would really appreciate some feedback on. Elasticity of demand measures customers' price sensitivity, and one basic rule is that you need at least two observations to calculate it. Assume I have a dataset of customer transactions (sale or no sale) and prices. From that dataset I can easily calculate an elasticity and call it the average elasticity.
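As a starting point, here is a minimal sketch of one common way to estimate elasticity from binary sale/no-sale data: fit a logistic regression of purchase on price and evaluate the point elasticity at the mean price. The column names `price` and `sale`, and the simulated data, are assumptions for illustration, not the poster's actual dataset.

```r
# Simulated transaction data (hypothetical stand-in for the real dataset):
# `price` is the offered price, `sale` is 1 if the customer bought, else 0.
set.seed(42)
n <- 1000
df <- data.frame(price = runif(n, 5, 20))
df$sale <- rbinom(n, 1, plogis(2 - 0.2 * df$price))

# Logistic model of purchase probability as a function of price
fit <- glm(sale ~ price, data = df, family = binomial)
beta <- coef(fit)["price"]

# Point elasticity of purchase probability at the mean price:
# e = beta * price * (1 - p_hat)
p_hat <- predict(fit, newdata = data.frame(price = mean(df$price)),
                 type = "response")
elasticity <- as.numeric(beta * mean(df$price) * (1 - p_hat))
elasticity  # negative: demand falls as price rises
```

With this in hand, "average elasticity" is simply the elasticity computed on the full dataset, and the question becomes how to pick the 50% subset on which the same calculation comes out strongest.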

My end goal is to find the best sample containing 50% of my dataset, such that the sample has the highest possible calculated elasticity (the remaining half would then be expected to have lower elasticity than the average).

I have tried repeating the process n times: draw a sample, calculate its elasticity, and keep the highest-elasticity sample among the n draws. That is not optimal, however, because the algorithm learns nothing from one iteration to the next; it just happens to land on a high-elasticity sample by chance.
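The brute-force approach described above could look like the following sketch. `elasticity_of` is a hypothetical helper wrapping whatever elasticity estimate is in use (here, the logistic-regression point elasticity); "strongest" elasticity is taken to mean most negative.

```r
# Simulated data standing in for the real transactions (hypothetical)
set.seed(42)
n <- 600
df <- data.frame(price = runif(n, 5, 20))
df$sale <- rbinom(n, 1, plogis(2 - 0.2 * df$price))

# Hypothetical helper: point elasticity of a data subset
elasticity_of <- function(d) {
  fit <- suppressWarnings(glm(sale ~ price, data = d, family = binomial))
  b <- coef(fit)["price"]
  p <- mean(predict(fit, type = "response"))
  as.numeric(b * mean(d$price) * (1 - p))
}

# Draw many independent 50% samples; keep the strongest one.
# Nothing carries over between draws -- each is a fresh random guess.
best_sample <- NULL
best_e <- Inf
for (i in 1:200) {
  idx <- sample(nrow(df), size = nrow(df) %/% 2)
  e <- elasticity_of(df[idx, ])
  if (e < best_e) {
    best_e <- e
    best_sample <- idx
  }
}
```

As the poster notes, the best result here is purely a matter of luck: iteration 200 is no better informed than iteration 1.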

Ideally the algorithm would run through the data and, each time a sample is generated, learn which customers contribute to a higher elasticity, keeping those while re-sampling the others.
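One way to get that "keep the good ones, re-sample the rest" behaviour is greedy hill climbing over the subset: repeatedly propose swapping one in-sample customer for one out-of-sample customer, and accept the swap only if it strengthens the elasticity. This is a sketch of one possible local-search technique, not the only answer; the data, column names, and `elasticity_of` helper are the same illustrative assumptions as before.

```r
# Simulated data standing in for the real transactions (hypothetical)
set.seed(42)
n <- 600
df <- data.frame(price = runif(n, 5, 20))
df$sale <- rbinom(n, 1, plogis(2 - 0.2 * df$price))

# Hypothetical helper: point elasticity of a data subset
elasticity_of <- function(d) {
  fit <- suppressWarnings(glm(sale ~ price, data = d, family = binomial))
  b <- coef(fit)["price"]
  p <- mean(predict(fit, type = "response"))
  as.numeric(b * mean(d$price) * (1 - p))
}

# Start from one random 50% split
in_idx  <- sample(n, n %/% 2)
out_idx <- setdiff(seq_len(n), in_idx)
best_e  <- elasticity_of(df[in_idx, ])
start_e <- best_e

# Hill climbing: propose single in/out swaps, keep only improvements.
# Customers that help the elasticity tend to stay in; the rest churn.
for (step in 1:500) {
  i <- sample(length(in_idx), 1)    # candidate to drop from the sample
  j <- sample(length(out_idx), 1)   # candidate to add to the sample
  cand_in <- in_idx
  cand_in[i] <- out_idx[j]
  e <- elasticity_of(df[cand_in, ])
  if (e < best_e) {                 # accept only if elasticity strengthens
    out_idx[j] <- in_idx[i]
    in_idx     <- cand_in
    best_e     <- e
  }
}
```

Each accepted swap makes the sample at least as strong as before, so unlike independent re-draws, the search accumulates information about which customers matter. Variants with the same flavour include simulated annealing (occasionally accept a worsening swap to escape local optima) or a genetic algorithm over subset membership; the `GA` package on CRAN handles binary-encoded problems like this.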

Any suggestions are very welcome,

Thanks!



