Having a solid base of algorithmic knowledge and technique is one characteristic that separates the truly skilled programmers from the novices. With modern computing technology, you can accomplish some tasks without knowing much about algorithms, but with a good background in algorithms, you can do much, much more.
To find /algorithm general/, look for Dijkstra's FUCKING FACE in the catalog
>Efficiency
Different algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software.
As an example, in Chapter 2, we will see two algorithms for sorting. The first, known as insertion sort, takes time roughly equal to c1*n^2 to sort n items, where c1 is a constant that does not depend on n. That is, it takes time roughly proportional to n^2. The second, merge sort, takes time roughly equal to c2*n lg n, where lg n stands for log2 n and c2 is another constant that also does not depend on n. Insertion sort typically has a smaller constant factor than merge sort, so that c1 < c2. We shall see that the constant factors can have far less of an impact on the running time than the dependence on the input size n.
Let’s write insertion sort’s running time as c1*n*n and merge sort’s running time as c2*n lg n. Then we see that where insertion sort has a factor of n in its running time, merge sort has a factor of lg n, which is much smaller. (For example, when n = 1000, lg n is approximately 10, and when n equals one million, lg n is approximately only 20.) Although insertion sort usually runs faster than merge sort for small input sizes, once the input size n becomes large enough, merge sort’s advantage of lg n vs. n will more than compensate for the difference in constant factors. No matter how much smaller c1 is than c2, there will always be a crossover point beyond which merge sort is faster.
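If anyone wants to play with the two sorts being compared, here's a minimal Python sketch of both (my own code and names, not the book's pseudocode):

```python
def insertion_sort(a):
    # Shift each element left until it sits correctly in the sorted prefix.
    # Roughly c1*n^2 steps in the worst case.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    # Split in half, sort each half recursively, then merge the two
    # sorted runs. Roughly c2*n lg n steps.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

Time them on growing inputs and you can watch the crossover point happen yourself.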
For a concrete example, let us pit a faster computer (computer A) running insertion sort against a slower computer (computer B) running merge sort. They each must sort an array of 10 million numbers. (Although 10 million numbers might seem like a lot, if the numbers are eight-byte integers, then the input occupies about 80 megabytes, which fits in the memory of even an inexpensive laptop computer many times over.)
Suppose that computer A executes 10 billion instructions per second (faster than any single sequential computer at the time of this writing) and computer B executes only 10 million instructions per second, so that computer A is 1000 times faster than computer B in raw computing power.
To make the difference even more dramatic, suppose that the world’s craftiest programmer codes insertion sort in machine language for computer A, and the resulting code requires 2*n^2 instructions to sort n numbers. Suppose further that just an average programmer implements merge sort, using a high-level language with an inefficient compiler, with the resulting code taking 50*n lg n instructions.
To sort 10 million numbers, computer A takes
(2 * (10^7)^2 instructions) / (10^10 instructions/second) = 20,000 seconds (more than 5.5 hours),
while computer B takes
(50 * 10^7 * lg(10^7) instructions) / (10^7 instructions/second) ≈ 1163 seconds (less than 20 minutes).
By using an algorithm whose running time grows more slowly, even with a poor compiler, computer B runs more than 17 times faster than computer A! The advantage of merge sort is even more pronounced when we sort 100 million numbers: where insertion sort takes more than 23 days, merge sort takes under four hours. In general, as the problem size increases, so does the relative advantage of merge sort.
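You can check the arithmetic above in a couple of lines (the constants 2, 50, and the instruction rates are the ones assumed in the example):

```python
import math

n = 10**7  # ten million numbers to sort

# Computer A: 2*n^2 instructions at 10^10 instructions/second.
time_a = 2 * n**2 / 10**10

# Computer B: 50*n lg n instructions at 10^7 instructions/second.
time_b = 50 * n * math.log2(n) / 10**7

print(time_a)  # 20000.0 seconds, i.e. more than 5.5 hours
print(time_b)  # about 1163 seconds, i.e. under 20 minutes
```

Swap in n = 10**8 and you get the 23-days-vs-four-hours figures too.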
Are you just posting the first few paragraphs from intro to algorithms?
Isn't that book just used as a reference or should I read through it
>>7750398
Algorithms are trivial, you may as well have a college algebra general.
Anyone know where I can read a good proof of correctness of Miller-Rabin randomized primality test?
I have to prove that Miller-Rabin is in RP and I'm a bit stuck with the proof in CLRS. I wish to find a better explanation.
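Can't help with a better proof writeup, but for reference, here's a standard Miller-Rabin sketch in Python (my own code, not the CLRS pseudocode; the point of the RP proof is bounding how often the witness loop lies):

```python
import random

def miller_rabin(n, rounds=40):
    # Probabilistic primality test: returns False only if n is composite;
    # returns True if n is probably prime (error probability <= 4^-rounds,
    # by the witness-density bound you're trying to prove).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # handle small cases by trial division
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Note it correctly rejects Carmichael numbers like 561, which is exactly where plain Fermat testing fails.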
>>7750398
Daily reminder that the unit of arrogance, formerly the nano-Dijkstra, has been the pico-Taleb since 2009.
http://www.internationalstanadards.org
>>7750398
Algorithms are the Holy Grail for most of CS. The fact that you can boil down a program to its most efficient form is quite amazing.
>>7750440
>trivial
Math sperg triggered. Trivial? Look at NoSQL, which allows Amazon, Google, and any other highly trafficked company to exist. It's entirely based on graph theory, using Dijkstra's and similar algorithms. You can read about it in the protocols.
>trivial
If there is one word I cannot stand hearing it is this. Nothing is truly trivial.
>>7750440
>Algorithms are trivial
No. Learning algorithms is trivial, but creating fundamentally new ones is not. It demands creativity, effort, willpower, sleepless nights, and a lot of paper.
>>7750440
Nah, that's like saying theorems are trivial. It just shows your ignorance.
>>7751482
Dijkstra is a pretty cool guy tho.
>Came up with uniform cost search on the spot when a student brought up the question; the famous shortest-path algorithm now bears his name
>Started the field of self-stabilizing systems
>Invented semaphores
>what a legend