So after reading lots of material around asymptotic analysis of algorithms and the usage of Big O / Big Ω and Θ, I am trying to understand the right way to use these notations when describing algorithms and operations on data structures.
For instance, there's a recommended website where I got this screenshot from describing Quicksort, and I've noticed a few points that stand out to me based on what I've learnt.
- Is it possible for all notations to represent "Best", "Average" and "Worst" cases? And if so, how is that possible? For example, for a "Worst" case, how can Big Ω represent the upper bound? I thought the upper bound was tied to Big O.
- I thought that in order to find Theta Θ, Big O and Big Ω had to be the same value? In the screenshot the "Best" case is `n log(n)` and the "Worst" case is `n^2`, so how can a Θ value be given at all?
- Take for instance a Hash Table data structure: if you were to perform an analysis of the time complexity of inserting an element, would I be correct in saying you could interchangeably say `O(N)`, or conversely "Average case is `O(1)`" and "Worst case is `O(N)`"?
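To make the Quicksort part of my question concrete, here is a small sketch I put together (my own illustration, not from the screenshot): a naive quicksort that always picks the first element as the pivot and counts comparisons. On a random input the count grows roughly like `n log n`, while an already-sorted input triggers the `n^2` worst case, which is what makes me unsure how a single Θ could describe both.

```python
import random

def quicksort_count(a):
    """Naive quicksort (first element as pivot) that returns the
    number of element comparisons performed, not the sorted list."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    less = [x for x in rest if x < pivot]
    more = [x for x in rest if x >= pivot]
    # len(rest) comparisons at this level, plus the recursive work
    return len(rest) + quicksort_count(less) + quicksort_count(more)

n = 500
random_input = random.sample(range(n), n)
sorted_input = list(range(n))  # worst case for a first-element pivot

print(quicksort_count(random_input))  # roughly proportional to n log n
print(quicksort_count(sorted_input))  # exactly n(n-1)/2 = 124750
```

For the sorted input every partition is maximally unbalanced, so the comparisons sum to `(n-1) + (n-2) + ... + 1 = n(n-1)/2`, i.e. the quadratic worst case.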
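And for the Hash Table part, this is the mental model I'm working from (a toy separate-chaining table I wrote for illustration, with hypothetical names and no resizing): insertion normally lands in a short bucket, which is the `O(1)` average case, but if every key hashes to the same bucket the insert has to scan the whole chain, which is the `O(N)` worst case.

```python
class ChainedHashTable:
    """Toy hash table using separate chaining, with no resizing.
    Purely to illustrate average vs. worst-case insertion cost."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def insert(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        # Worst case: every key collides into one bucket, so this
        # scan touches all N stored entries -> O(N).
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        # Average case with a good hash: chains stay short -> O(1).
        bucket.append((key, value))

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)

table = ChainedHashTable()
table.insert("a", 1)
table.insert("b", 2)
table.insert("a", 3)  # overwrites, does not duplicate
print(table.get("a"))  # prints 3
```

So my question is really whether both statements can be quoted side by side like that, or whether one notation per case is the correct convention.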