An example of exponential complexity is trying to guess a numerical password consisting of n decimal digits by testing every possible combination; such an algorithm is O(10**n).
Here is an example of password guessing with O(10**n) complexity:
pin = 931
n = len(str(pin))  # number of decimal digits in the password
for i in range(10**n):
    if i == pin:
        print(i)
The number of steps this algorithm takes to complete grows incredibly fast as n gets larger. When n is 1, this algorithm takes 10 steps. When n is 2, it takes 100 steps. When n is 3, it takes 1,000 steps. As you can see, at first, an exponential algorithm looks like it doesn't grow very quickly. Eventually, however, its growth explodes. Guessing a password with 8 decimal digits takes 100 million steps, and guessing a password with 10 decimal digits takes 10 billion steps. Exponential scaling is the reason it is so important to create long passwords. If someone tries to guess your password using a program like this, they can easily crack it if your password is four digits. However, if your password is 20 digits, it is effectively impossible to crack because the program would take longer than a human life span to run.
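As a quick sanity check of this growth (a minimal sketch, not from the book), you can print how many combinations a brute-force guesser must try for each password length:

for n in range(1, 11):
    # each added digit multiplies the search space by 10
    print(n, "digits:", 10**n, "combinations")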
This solution to guessing a password is an example of a brute-force algorithm. A brute-force algorithm is a type of algorithm that tests every possible option. Brute-force algorithms are not usually efficient and should be your last resort.
Figure 1.6 compares the efficiency of the algorithms we have discussed.
Figure 1.6 Big O complexity chart
Best-Case vs. Worst-Case Complexity
An algorithm's performance can change depending on different factors, such as the type of data you are working with. Because of this, when you evaluate an algorithm's performance, you need to consider its best-, worst-, and average-case complexities. An algorithm's best-case complexity is how it performs with ideal input, and an algorithm's worst-case complexity is how it performs in the worst possible scenario for it. An algorithm's average-case complexity is how it performs on average.
For example, if you have to search one by one through a list, you may get lucky and find what you are looking for after checking the first item in your list. That would be the best-case complexity. However, if the item you are looking for isn't in the list, you would have to search the entire list, which is the worst-case complexity.
If you search through a list one by one many times, on average you will find what you are looking for about halfway through, which is O(n/2) time; because big O notation discards constant factors, that is the same as O(n) time. When comparing algorithms, you often start by looking at the average-case complexity. If you want to do a deeper analysis, you can also compare their best-case and worst-case complexities.
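To make the best and worst cases concrete, here is a minimal linear search sketch (the function name is illustrative, not from the book). In the best case, the target is the first item and the search takes one comparison; in the worst case, the target is absent and the search checks all n items:

def linear_search(a_list, target):
    # Best case: target is the first item (one comparison).
    # Worst case: target is not in the list (n comparisons).
    for index, item in enumerate(a_list):
        if item == target:
            return index
    return -1

print(linear_search([2, 4, 6, 8], 2))   # best case: found immediately
print(linear_search([2, 4, 6, 8], 9))   # worst case: scans the whole list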
Space Complexity
Computers have finite resources such as memory, so in addition to thinking about an algorithm's time complexity, you should consider its resource usage. Space complexity is the amount of memory space your algorithm needs and includes fixed space, data structure space, and temporary space. Fixed space is the memory your program itself requires, independent of the input. Data structure space is the memory your program needs to store the data set, for example, the size of a list you are searching; it grows with the amount of input the problem requires. Temporary space is the memory your algorithm needs for intermediary processing, for example, if your algorithm needs to temporarily copy a list to transfer data.
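To make temporary space concrete, here is a minimal sketch (the variable names are illustrative): copying a list before processing it requires O(n) extra memory beyond the input itself.

data = [5, 3, 8, 1]
temp = data[:]   # temporary copy: O(n) extra space
temp.sort()      # process the copy, leaving data untouched
print(temp)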
You can apply the time complexity concepts you learned earlier to space complexity. For example, you can calculate the factorial of n (the product of all positive integers less than or equal to n) using an algorithm that has a constant, O(1), space complexity:
x = 1
n = 5
for i in range(1, n + 1):
    x = x * i
The space complexity is constant because the amount of space your algorithm needs does not grow as n gets larger. If you decided to store all the factorials up to n in a list, your algorithm would have a linear space complexity, O(n):
x = 1
n = 5
a_list = []
for i in range(1, n + 1):
    x = x * i
    a_list.append(x)  # store i! for every i from 1 to n
Your algorithm's space complexity is O(n) because the amount of space it uses grows at the same pace as n.
Like with time complexity, the acceptable level of space complexity for an algorithm depends on the situation. In general, though, the less space an algorithm requires, the better.
Why Is This Important?
As a computer scientist, you need to understand the different orders of magnitude to optimize your algorithms. When you are trying to improve an algorithm, you should focus on changing its order of magnitude instead of improving it in other ways. For example, say you have an O(n**2) algorithm that uses two for loops. Instead of optimizing what happens inside your loops, it is much more important to determine whether you can rewrite your algorithm so that it doesn't have two nested for loops and thus has a smaller order of magnitude.
If you can solve the problem by writing an algorithm with two unnested for loops, your algorithm will be O(n), which will make a massive difference in its performance. That change will make a much bigger difference in your algorithm's performance than any efficiency gains you could get by tweaking your O(n**2) algorithm. However, it is also important to think about best- and worst-case scenarios for your algorithm. Maybe you have an O(n**2) algorithm, but in its best-case scenario, it is O(n), and your data happens to fit its best-case scenario. In that case, the algorithm may be a good choice.
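As a hedged sketch of this idea (not the book's code), compare a nested-loop O(n**2) duplicate check with an O(n) rewrite that replaces the inner for loop with a set lookup:

# O(n**2): nested for loops compare every pair of items.
def has_duplicate_quadratic(a_list):
    for i in range(len(a_list)):
        for j in range(i + 1, len(a_list)):
            if a_list[i] == a_list[j]:
                return True
    return False

# O(n): one loop plus a set replaces the inner for loop.
def has_duplicate_linear(a_list):
    seen = set()
    for item in a_list:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicate_quadratic([1, 2, 3, 2]))  # True
print(has_duplicate_linear([1, 2, 3, 4]))     # False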
The decisions you make about algorithms can have enormous consequences in the real world. For example, say you are a web developer responsible for writing an algorithm that responds to a customer's web request. Whether you decide to write a constant or quadratic algorithm could make the difference between your website loading in less than one second, making your customer happy, and taking more than a minute, which may cause you to lose customers before the request loads.
Vocabulary
algorithm: A sequence of steps that solves a problem.
run time: The amount of time it takes your computer to execute an algorithm written in a programming language like Python.
size of the problem: The variable n in an equation that describes the number of steps in an algorithm.
big O notation: A mathematical notation that describes how an algorithm's time or space requirements increase as the size of n increases.
order of magnitude: A class in a classification system where each class is many times greater or smaller than the one before.
time complexity: The maximum number of steps an algorithm takes to complete as n gets larger.
constant time: An algorithm runs in constant time when it requires the same number of steps regardless of the problem's size.
logarithmic time: An algorithm runs in logarithmic time when its run time grows in proportion to the logarithm of the input size.
linear time: An algorithm runs in linear time when it grows at the same rate as the problem's size.
log-linear time: An algorithm runs in log-linear time when it grows as a combination (multiplication) of logarithmic and linear time complexities.
quadratic time: An algorithm runs in quadratic time when its performance is directly proportional to the square of the size of the problem.