**by BehindJava**

# What is Big-O notation

# Big O Notation, O

The systematic way to express the upper bound of an algorithm’s running time is to use Big-O notation, such as O(n). It describes the worst-case time complexity, or the maximum time an algorithm will take to complete execution.

Also, remember that this is the most commonly used notation for expressing the time complexity of different algorithms unless specified otherwise. Big-O gives a simplified analysis of an algorithm’s efficiency:

- Complexity is expressed in terms of the input size, N.
- The analysis is machine-independent.
- It counts basic computer steps.
- It applies to both time and space.

# Types of measurement

**Worst-Case** - the maximum number of steps the algorithm takes on any input of size n.

**Best-Case** - the minimum number of steps the algorithm takes on any input of size n.

**Average-Case** - the number of steps taken, averaged over all inputs of size n.

# General Rules

- Ignore constants: **5n -> O(n)**.
- Certain terms “dominate” others, so ignore low-order terms:

**O(1) < O(logn) < O(n) < O(nlogn) < O(n^2) < O(2^n) < O(n!)**

For example, **3n^2 + 5n + 2 -> O(n^2)**, since the n^2 term dominates for large n.

# Examples

**O(1)**

When an algorithm’s execution time does not depend on the input size n, it is said to have constant time complexity, with order O(1).

Whatever the input size n, the runtime doesn’t change. Here’s an example:

```java
class Hello
{
    public static void main(String args[])
    {
        System.out.println("Hello Earth...");
    }
}
```

As you can see, the message “Hello Earth…” is printed exactly once. So, regardless of the operating system or computer configuration you are using, the time complexity is constant: O(1), meaning a fixed amount of time is required to execute the code no matter the input.

**O(n)**

When the running time of an algorithm increases linearly with the length of the input, it is said to have linear time complexity. In other words, when a function needs to iterate once through every value in the input data set, it has a time complexity of order O(n). Consider the following scenario:

```java
class Hello
{
    public static void main(String args[])
    {
        int i, n = 5;
        for (i = 1; i <= n; i++)
        {
            System.out.println("Hello Earth...");
        }
    }
}
```

Based on the above code, you can see that the number of times “Hello Earth…” is displayed on the console depends directly on the size of the input variable ‘n’.

If one unit of time represents one iteration, the above program takes n units of time to run. As a result, the running time scales linearly with the size of the input, and the program has an order of O(n).
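To show the same O(n) pattern with a real algorithm rather than a fixed loop, here is a linear search sketch: it scans the array once to find a target value, so in the worst case it examines every element. The class and method names below are illustrative, not from the original post:

```java
class LinearSearch
{
    // Returns the index of target in arr, or -1 if it is absent.
    // Each element is examined at most once, so the worst case is O(n).
    static int search(int[] arr, int target)
    {
        for (int i = 0; i < arr.length; i++)
        {
            if (arr[i] == target)
                return i;
        }
        return -1;
    }

    public static void main(String args[])
    {
        int[] data = {4, 8, 15, 16, 23, 42};
        System.out.println(search(data, 16)); // prints 3
    }
}
```

Note that the best case (target at index 0) finishes in one step, but Big-O reports the worst case: a full scan of n elements.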

**O(n^2)**

When the running time of an algorithm grows quadratically with the length of the input, it is said to have quadratic time complexity, O(n^2).

In general, nested loops fall into this order: a single loop takes O(n), and if the function contains a loop inside a loop, the total work is O(n) * O(n) = O(n^2).

Similarly, if the inner loop runs ‘m’ times for each of the n iterations of the outer loop, the order is O(n*m). Both forms are referred to as polynomial time complexity functions.

Consider the following program:

```java
class Hello
{
    public static void main(String args[])
    {
        int i, j, n = 3, m = 2;
        for (i = 1; i <= n; i++)
        {
            for (j = 1; j <= m; j++)
            {
                System.out.println("Hello Earth...");
            }
        }
    }
}
```

As you can see, there are two nested for loops, and the inner loop runs to completion for every iteration of the outer loop. This is why “Hello Earth…” is printed 6 times (3*2).

This amounts to a cumulative time complexity of O(m*n), or O(n^2) if you assume that the value of m is equal to the value of n.

**O(log2 n)**

O(log2 n) implies that the running time grows much more slowly than the input: the time increases by a constant step each time ‘n’ grows by a constant factor. So, if computing 10 elements takes 1 second, computing 100 elements takes about 2 seconds, 1,000 elements about 3 seconds, and so on.

Divide and conquer algorithms such as binary search have a time complexity of O(log n), because each step discards half of the remaining input.
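The halving behavior can be sketched with a minimal iterative binary search over a sorted array; the class and method names are illustrative, not from the original post:

```java
class BinarySearch
{
    // Iterative binary search on a sorted array.
    // Each comparison halves the remaining range, so the worst case is O(log2 n).
    static int search(int[] arr, int target)
    {
        int lo = 0, hi = arr.length - 1;
        while (lo <= hi)
        {
            int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
            if (arr[mid] == target)
                return mid;
            else if (arr[mid] < target)
                lo = mid + 1;  // discard the left half
            else
                hi = mid - 1;  // discard the right half
        }
        return -1; // target not present
    }

    public static void main(String args[])
    {
        int[] sorted = {2, 3, 5, 7, 11, 13};
        System.out.println(search(sorted, 7)); // prints 3
    }
}
```

With 6 elements, at most 3 halvings are needed; doubling the array size adds only one more comparison.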

Quicksort also relies on halving: it partitions the array into two sections around a pivot element in O(n) time at each level, and on average there are O(log2 n) levels of partitioning, giving O(n log n) overall.

**O (n log n)**

This time complexity is popularly known as linearithmic time complexity. It performs slightly slower than linear time complexity but is still significantly better than a quadratic algorithm.

It belongs to Case II of the Master Theorem, and the recurrence T(n) = 2T(n/2) + O(n) solves to O(n*logn). Since merge sort always partitions the array into two halves and merges the two halves in linear time, it has a time complexity of O(n*logn) in all three cases (worst, average, and best).

Also, remember this: for comparison-based sorting algorithms, O(n*logn) is the best worst-case time complexity we can achieve.
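As a sketch of where the O(n*logn) bound comes from, here is a minimal merge sort: the recursion splits the array into halves (about log2 n levels), and each level does O(n) work merging. The names are illustrative and the code is not tuned for production use:

```java
import java.util.Arrays;

class MergeSortDemo
{
    // Sorts arr[lo..hi) in place. Splitting gives ~log2 n levels of
    // recursion, and each level merges O(n) elements: O(n log n) total.
    static void mergeSort(int[] arr, int lo, int hi)
    {
        if (hi - lo <= 1)
            return;                   // a single element is already sorted
        int mid = (lo + hi) / 2;
        mergeSort(arr, lo, mid);      // sort left half
        mergeSort(arr, mid, hi);      // sort right half
        merge(arr, lo, mid, hi);      // combine in linear time
    }

    // Merges the sorted runs arr[lo..mid) and arr[mid..hi).
    static void merge(int[] arr, int lo, int mid, int hi)
    {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi)
            tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        while (i < mid) tmp[k++] = arr[i++];
        while (j < hi)  tmp[k++] = arr[j++];
        System.arraycopy(tmp, 0, arr, lo, tmp.length);
    }

    public static void main(String args[])
    {
        int[] data = {5, 1, 4, 2, 3};
        mergeSort(data, 0, data.length);
        System.out.println(Arrays.toString(data)); // prints [1, 2, 3, 4, 5]
    }
}
```

Unlike quicksort, the split here is always exactly in half, which is why merge sort's O(n*logn) bound holds even in the worst case.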

**O(2^n)**

An algorithm with exponential time complexity roughly doubles its running time with each element added to the input data set. The growth looks modest for small inputs but explodes as n increases.

The naive recursive computation of the Fibonacci series is a great way to demonstrate exponential time complexity.
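A naive recursive Fibonacci sketch makes the blow-up visible: each call spawns two further calls, so the number of calls roughly doubles per level of recursion, giving O(2^n). The code below is an illustrative sketch, not from the original post:

```java
class Fibonacci
{
    // Naive recursion: fib(n) calls fib(n-1) and fib(n-2), so the
    // call tree roughly doubles at each level -- O(2^n) calls overall.
    static int fib(int n)
    {
        if (n <= 1)
            return n;                     // base cases: fib(0)=0, fib(1)=1
        return fib(n - 1) + fib(n - 2);   // two recursive calls per invocation
    }

    public static void main(String args[])
    {
        System.out.println(fib(10)); // prints 55
    }
}
```

In practice, memoizing previously computed values reduces this to linear time, which is why the exponential cost is a property of this naive formulation, not of Fibonacci numbers themselves.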