Wednesday, May 5, 2021

Expected worst case vs. average case complexity for a randomized linear search algorithm?

Suppose that X and Y are both linear search algorithms. We assume that every search is successful, so the maximum number of visited nodes (i.e., the worst case) is n-1, reached when the value is stored in the last element of the list. We also measure cost by the number of nodes visited before the value is found, not by the number of comparisons conducted (hence n-1 rather than n).

Now, suppose we have a randomized algorithm Z that calls one of these two algorithms, each with probability 1/2. What is the expected number of visited nodes for the worst possible input list (of length n) in a successful search (as described above)?

Trivially, we can distinguish two possible scenarios here:

  1. Z calls algorithm X (with probability 1/2).
  2. Z calls algorithm Y (with probability 1/2).

My problem is that the question asks for the expected worst case, and I am not really sure how to conduct that analysis. My first try was this:

[image: my first attempt at the analysis]

But thinking about it, maybe I'm overcomplicating things and it is just

i.e., E(visited nodes before find) = 1/2 * (worst case of X) + 1/2 * (worst case of Y),

however, I feel like that would be way too simple.
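As a sanity check for this train of thought, here is a small sketch. The question never says what X and Y actually do, so the two cost functions below are purely hypothetical instantiations: assume X scans the list front-to-back and Y scans it back-to-front. For a fixed input (the target at index i), Z's expected cost is 1/2 * cost_X + 1/2 * cost_Y, and the expected worst case is the maximum of that quantity over all inputs. This lets us compare it against the simpler formula above:

```python
# Hypothetical instantiations of X and Y (not specified in the question):
# X scans front-to-back, Y scans back-to-front.
# Cost = number of nodes visited before the find (so at most n-1).
def cost_X(i, n):
    return i          # target at index i: X visits i nodes before finding it

def cost_Y(i, n):
    return n - 1 - i  # Y, scanning backwards, visits n-1-i nodes first

n = 10

# For each fixed input (target position i), Z's expected cost:
expected = [0.5 * cost_X(i, n) + 0.5 * cost_Y(i, n) for i in range(n)]

# Expected worst case: maximize the expectation over all inputs.
worst_expected = max(expected)

# The simpler formula: average the two individual worst cases.
naive = 0.5 * (n - 1) + 0.5 * (n - 1)

print(worst_expected, naive)  # 4.5 vs 9.0 for n = 10
```

Under these assumed X and Y, the two quantities differ: no single input is worst for both algorithms at once, so averaging the individual worst cases overestimates the expected worst case. Whether that gap appears in general depends on what X and Y are.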

I'd really appreciate any input on this and whether my train of thought is right at all! Thanks so much!
