Recursion Demystified

To understand recursion, you must first understand recursion.

Crazy, right?

Well, I hope that by the end of this article you will feel much more comfortable with what recursion is and, above all, with how we can come up with a recursive solution to a problem.

What is Recursion?

How do you explain recursion to a 4-year-old? This is a pretty famous interview question, and there are tons of answers to it all over the internet. We are not going to answer that question, because it's too mainstream.

If you're as smart as I am, you would explain recursion to someone who is one year younger than you. Then have them explain recursion to someone a year younger than them. Keep going until you have a 5-year-old explaining recursion to a 4-year-old. Done. [Source: reddit]

In terms of programming, recursion is

A function that calls itself.

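For instance, a minimal sketch of such a function in Python (random_function is just a placeholder that does nothing useful):

def random_function(n):
    # Base case: stop recursing once n reaches 0.
    if n <= 0:
        return
    # Recursive case: the function calls itself with a smaller input.
    random_function(n - 1)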
The function above does no useful work as such, but it does demonstrate recursion. Its recurrence relation would be

T(N) = T(N - 1) + O(1)

This simply means that the execution of the call random_function(n) can only resume once the call random_function(n - 1) has completed, and so on.

Essentially, we put the execution of the current state of the function on hold until another call to the same function has finished and returned its result.

The program keeps saving the state of the current function call and then moves on to the next function call, and so on. In other words, the function states are stored on a stack (the call stack) and are used for computation and for backtracking.

If a problem can be broken down into similar subproblems that can be solved individually, and if their solutions can be combined to obtain the overall solution, then we say that the problem may have a recursive solution.

Instead of dwelling on this rather abstract definition of recursion, we will look at a whole bunch of applications of recursion. Then, hopefully, things will become clear.

Factorial of a Number

Let's see how we can find the factorial of a number. Before that, let's see what the factorial of a number represents and how it is calculated.

factorial(N) = 1 * 2 * 3 * ... * (N - 1) * N

Simply put, the factorial of a number is just the product of the terms from 1 up to the number N, all multiplied together.

We could simply have a for loop running from 1 to N, multiply all the terms iteratively, and we'd have the factorial of the given number.

However, if you look closely, there is an inherent recursive structure to the factorial of a number.

factorial(N) = N * factorial(N - 1)

It's as if the computation is handed off to another function call that works on a smaller version of the original problem. Let's see how this relation would unfold, to check that the solution here matches the one produced by the for loop.

From the two figures above, it is clear that the recursive relation we defined earlier,

factorial(N) = N * factorial(N - 1)

is indeed correct. Take a look at the Python code snippet that finds the factorial of a number recursively.
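A minimal version might look something like this:

def factorial(n):
    # Base case: the factorial of 0 or 1 is 1.
    if n <= 1:
        return 1
    # Recursive case: factorial(N) = N * factorial(N - 1)
    return n * factorial(n - 1)

print(factorial(5))  # 120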

This example was fairly simple. Let's look at a slightly bigger, but still standard, example to demonstrate the concept of recursion.

Fibonacci Sequence

You must already be familiar with the famous Fibonacci sequence. For those of you who have never heard of this sequence or seen an example of it, let's take a look.

1 1 2 3 5 8 13 ..... 

Let's look at the formula for calculating the n-th Fibonacci number.

F(n) = F(n - 1) + F(n - 2), where F(1) = F(2) = 1

This definition of the Fibonacci sequence is clearly recursive in nature, since the n-th Fibonacci number depends on the previous two Fibonacci numbers. That means breaking the problem down into smaller subproblems, and hence recursion. Take a look at the code for this:
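A straightforward translation of the relation into Python might look like this:

def fibonacci(n):
    # Base cases: F(1) = F(2) = 1
    if n <= 2:
        return 1
    # Recursive case: F(n) = F(n - 1) + F(n - 2)
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(7))  # 13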

Every recursive problem must have two necessary things:

  1. The recurrence relation, which defines the states of the problem and how the main problem can be broken down into smaller subproblems. This also includes the base case that stops the recursion.
  2. A recursion tree, which shows the first few, if not all, of the calls to the function in question. Take a look at the recursion tree for the Fibonacci sequence's recurrence relation.

The recursion tree shows us that the results obtained from processing the two subtrees of the root N can be used to compute the result for the tree rooted at N. The same goes for the other nodes.

The leaves of this recursion tree would be fibonacci(1) or fibonacci(2), both of which are the base cases for this recursion.

Now that we have a very basic understanding of recursion, recurrence relations, and recursion trees, let's move on to something more interesting.

Examples!

I'm a firm believer in solving countless examples on a given programming topic in order to master that topic. The two examples we looked at (the factorial of a number and the Fibonacci sequence) had well-defined recurrence relations. Let's look at some examples where the recurrence relation might not be quite so obvious.

Height of a Tree

To keep things simple for this example, we will only consider a binary tree. A binary tree is a tree data structure in which each node has at most two children. One node of the tree is designated as the root of the tree. For example:

Let's define what we mean by the height of a binary tree.

The height of the tree is the length of the longest root-to-leaf path in the tree.

So, for the example diagram displayed above, considering the node labelled A as the root of the tree, the longest root-to-leaf path is A → C → E → G → I. Essentially, the height of this tree is 5 if we count the number of nodes, and 4 if we just count the number of edges on the longest path.

Now, forget about the entire tree and just focus on the portions highlighted in the diagram below.

The above figure shows us that we can represent a tree in terms of its subtrees. Essentially, the structure to the left of node A and the structure to the right of A are each binary trees in their own right, just smaller and with different root nodes. But they are binary trees nonetheless.

What information can we get from these two subtrees that would help us find the height of the main tree rooted at A ?

If we knew the height of the left subtree, say h1, and the height of the right subtree, say h2, then we can simply say that the maximum of the two + 1 for the node A would give us the height of our tree. Isn’t that right?

Formalizing this recursive relation,

height(root) = max(height(root.left), height(root.right)) + 1

So, that’s the recursive definition of the height of a binary tree. The focus is on binary here, because we used just two children of the node root represented by root.left and root.right. But, it is easy to extend this recursive relation to an n-ary tree. Let’s take a look at this in code.
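A sketch of this, assuming a simple TreeNode class with left and right pointers (this version counts nodes, so a single node has height 1 and an empty tree has height 0):

class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def height(root):
    # Base case: an empty tree has height 0.
    if root is None:
        return 0
    # Height = the taller of the two subtrees, plus 1 for the root itself.
    return max(height(root.left), height(root.right)) + 1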

The problem here was greatly simplified because we let recursion do all the heavy lifting for us. We simply used the optimal answers for our subproblems to find a solution to our original problem.

Let's look at another example that can be solved in a similar way.

Number of Nodes in a Tree

Again, for the sake of simplicity, we will consider a binary tree, but the algorithm and the approach can essentially be extended to any kind of tree.

The problem itself is very self-explanatory. Given the root of a binary tree, we have to determine the total number of nodes in the tree. This question, and the approach we will come up with here, are very similar to the previous one. We just have to make tiny changes and we will have the number of nodes in the binary tree.

Take a look at the following diagram.

The diagram says it all. We already know that a tree can be broken down into smaller subtrees. Here too, we can ask ourselves:

What information can we get from these two subtrees that would help us find the number of nodes in the tree rooted at A?

Well, if we knew the number of nodes in the left subtree and the number of nodes in the right subtree, we can simply add them up and add one for the root node and that would give us the total number of nodes.

Formalizing this we get,

number_of_nodes(root) = number_of_nodes(root.left) + number_of_nodes(root.right) + 1

If you look at this recursion and the previous one, you will find that they are extremely similar. The only thing that varies is what we do with the information we obtain from our subproblems and how we combine it to get the answer.
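In code (assuming the same TreeNode class as before), the only real change from the height function is how the sub-results are combined:

def number_of_nodes(root):
    # Base case: an empty tree contains no nodes.
    if root is None:
        return 0
    # Nodes in the left subtree + nodes in the right subtree + 1 for the root.
    return number_of_nodes(root.left) + number_of_nodes(root.right) + 1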

Now that we have seen a couple of easy examples with a binary tree, let’s move onto something less trivial.

Merge Sort

Given an array of numbers like

4 2 8 9 1 5 2

we need to come up with a sorting technique that sorts them either in ascending or descending order. There are a lot of famous sorting techniques out there for this like Quick Sort, Heap Sort, Radix Sort and so on. But we are specifically going to look at a technique called the Merge Sort.

It’s possible that a lot of you are familiar with the Divide and Conquer paradigm, and this might feel redundant. But bear with me and read on!

The idea here is to break it down into subproblems.

That's what this article is about, right?

What if we had two sorted halves of the original array? Could we somehow use them to sort the entire array?

That’s the main idea here. The task of sorting an array can be broken down into two smaller subtasks:

  • sorting two different halves of the array
  • then using those sorted halves to obtain the original sorted array

Now, the beauty of recursion is that you don't need to worry about how we will get two sorted halves and what logic will go into that. Since this is recursion, the same method call to merge_sort would sort the two halves for us. All we need to do is focus on what we need to do once we have the sorted halves with us.

Let’s go through the code:
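Here is a sketch of the recursive part (merge is the combining routine we will write next):

def merge_sort(array):
    # Base case: an array with 0 or 1 elements is already sorted.
    if len(array) <= 1:
        return array

    mid = len(array) // 2
    # Trust recursion: these two calls hand us back sorted halves.
    left_sorted_half = merge_sort(array[:mid])
    right_sorted_half = merge_sort(array[mid:])

    # Combine the two sorted halves into one sorted array.
    return merge(left_sorted_half, right_sorted_half)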

At this point, we trusted and relied on our good friend recursion and assumed that left_sorted_half and right_sorted_half would in fact contain the two sorted halves of the original array.

So, what next?

The question is how to combine them somehow to give the entire array.

The problem now simply boils down to merging two sorted arrays into one. This is a pretty standard problem and can be solved by what is known as the “two finger approach”.

Take a look at the pseudo code for better understanding.

let L and R be our two sorted halves
let ans be the combined, sorted array

l = 0 // The pointer for the left half
r = 0 // The pointer for the right half
a = 0 // The pointer for the array ans

while l < L.length and r < R.length {
    if L[l] < R[r] {
        ans[a] = L[l]
        l++
    } else {
        ans[a] = R[r]
        r++
    }
    a++
}

copy the remaining portion of L or R (whichever still has elements left) into ans

Here we have two pointers (fingers), and we position them at the start of the individual halves. We check which of the two values pointed at is smaller, add that value to our combined sorted array, and advance the respective pointer (finger) forward. In the end, we copy the remaining portion of whichever half still has elements left and append it to the back of the ans array.

So, the combined code for merge-sort is as follows:
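A complete sketch along those lines:

def merge(L, R):
    # The "two finger" merge of two sorted lists.
    ans = []
    l, r = 0, 0
    while l < len(L) and r < len(R):
        if L[l] < R[r]:
            ans.append(L[l])
            l += 1
        else:
            ans.append(R[r])
            r += 1
    # Copy whatever is left over in either half.
    ans.extend(L[l:])
    ans.extend(R[r:])
    return ans

def merge_sort(array):
    if len(array) <= 1:
        return array
    mid = len(array) // 2
    left_sorted_half = merge_sort(array[:mid])
    right_sorted_half = merge_sort(array[mid:])
    return merge(left_sorted_half, right_sorted_half)

print(merge_sort([4, 2, 8, 9, 1, 5, 2]))  # [1, 2, 2, 4, 5, 8, 9]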

We will do one final question using recursion and, trust me, it's a tough one and a pretty confusing one. But before moving on to that, I will lay out the steps I follow whenever I have to think of a recursive solution to a problem.

Steps to come up with a Recursive Solution

  1. Try and break down the problem into subproblems.
  2. Once you have the subproblems figured out, think about what information from the calls to the subproblems you can use to solve the task at hand. For example, the factorial of N - 1 to find the factorial of N, the height of the left and right subtrees to find the height of the main tree, and so on.
  3. Keep calm and trust recursion! Assume that your recursive calls to the subproblems will return the information you need in the most optimal fashion.
  4. The final step in this process is actually using the information we just got from the subproblems to find the solution to the main problem. Once you have that, you're ready to code up your recursive solution.

Now that we have all the steps lined up, let’s move on to our final problem in this article. It’s called Sum of Distances in a Tree.

Sum of Distances in a Tree

Let’s look at what the question is asking us to do here. Consider the following tree.

In the example above, the sum of distances for the node A (the total number of edges on the paths from A to every other vertex in the tree) is 9. The individual paths are shown in the diagram itself with their respective lengths.

Similarly, consider the sum of distances for the node C.

C --> A --> B (Length 2)
C --> A (Length 1)
C --> D (Length 1)
C --> E (Length 1)
C --> D --> F (Length 2)
Sum of distances (C) = 2 + 1 + 1 + 1 + 2 = 7

This is known as the sum of distances as defined for just a single node A or C. We need to calculate these distances for each of the nodes in the tree.

Before actually solving this generic problem, let us consider a simplified version of the same problem. It says that we just need to calculate the sum of distances for a given node, but we will only consider the tree rooted at the given node for calculations.

So, for the node C, this simplified version of the problem would ask us to calculate:

C --> D (Length 1)
C --> E (Length 1)
C --> D --> F (Length 2)
Simplified Sum of Distances (C) = 1 + 1 + 2 = 4

This is a much simpler problem to tackle recursively and would prove to be useful in solving the original problem.

Consider the following simple tree.

The nodes B and C are the children of the root (that is, A).

We are trying to see what information we can use from the subproblems (the children) to compute the answer for the root A.

Note: here we simply want to calculate the sum of paths for a given node X to all its successors in its own subtree (the tree rooted at the node X).

There are no downward-going paths from the node B, so the sum of distances is 0 for the node B in this tree. Now let's look at the node C. This node has 3 different successors: D, E, and F. The sums of distances are as follows:

C --> D (Path containing just 1 edge, hence sum of distances = 1)
C --> D --> F (Path containing 2 edges, hence sum of distances = 2)
C --> E (Path containing just 1 edge, hence sum of distances = 1)

The sum of the lengths of all the paths from the node C to its descendants is 4, and the number of such paths going down is 3.

Note the difference here. The sum_of_distances counts the number of edges over all of these paths (an edge can be counted multiple times because it occurs on several different paths), unlike number_of_paths, which counts, well, the number of paths.

If you look closely, you will realize that the number of paths going down is always going to be the number of nodes in the tree we are considering (except the root). So, for the tree rooted at C, we have 3 paths: one for the node D, one for E, and one for F. This means that the number of paths from a given node to its successor nodes is simply the total number of descendant nodes, since this is a tree, with no cycles or multiple edges.

Now, consider the node A. Let us look at all the new paths that are being introduced because of this node A. Forget the node B for now and just focus on the child node C corresponding to A. The new sets of paths that we have are:

A --> C (Path containing just 1 edge, hence sum of distances = 1)
A --> (C --> D) (Path containing 2 edges, hence sum of distances = 2)
A --> (C --> E) (Path containing 2 edges, hence sum of distances = 2)
A --> (C --> D --> F) (Path containing 3 edges, hence sum of distances = 3)

Except for the first path A → C, all the others are the same as the paths for the node C, except that each of them has been extended to incorporate one extra node, A.

If you look at the diagram above you will see a tuple of values next to each of the nodes A, B, and C.

(X, Y) where X is the number of paths originating at that node and going down to its descendants. Y is the sum of distances for the tree rooted at the given node.

Since the node B doesn't have any further children, the only path it contributes to A's tuple of (5, 9) above is the path A --> B. So let's talk about C.

C had three paths going to its successors. Those three paths (extended by one more node for A) also become three paths from A to its successors, among others.

N-Paths[A] = (N-Paths[C] + 1) + (N-Paths[B] + 1)

That is the exact relation we are looking for as far as the number of paths (= the number of successor nodes in the tree) is concerned. The 1 is because of the new path from the root to its child, that is A --> C in our case.

N-Paths[A] = 3 + 1 + 0 + 1 = 5

As far as the sum of distances is concerned, take a look at the diagram and the equations we just wrote. The following formula becomes very clear:

Sum-Dist[A] = (N-Paths[C] + 1 + Sum-Dist[C]) + (N-Paths[B] + 1 + Sum-Dist[B])
Sum-Dist[A] = (3 + 1 + 4) + (0 + 1 + 0) = 9

The main thing here is N-Paths[C] + Sum-Dist[C]. We sum these up because all of the paths from C to its descendants ultimately become paths from A to its descendants, except that they originate at A and go through C, so each of the path lengths is increased by 1. There are N-Paths[C] such paths originating from C, and their total length is given by Sum-Dist[C].

Hence the tuple corresponding to A = (5, 9). The Python code for the algorithm we discussed above is as follows:
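A sketch of that downward pass, written as a method on a Solution class (the complete class appears near the end of this article). Here self.graph is an adjacency list built from the input edges, self.parent records the parent we fix for each node as we traverse, and the result dictionaries are initialized where the graph is built (shown a bit further down); the names compute_downward_values, n_paths and s_paths are placeholders of my choosing:

    def compute_downward_values(self, vertex):
        # Prevents the recursion from going into a cycle.
        self.visited[vertex] = 1

        n_paths = 0   # number of downward paths starting at this vertex
        s_paths = 0   # sum of the lengths of those downward paths

        for neighbor in self.graph[vertex]:
            if neighbor in self.visited:
                continue
            # Fix the parent-child relationship as we traverse.
            self.parent[neighbor] = vertex
            self.compute_downward_values(neighbor)

            # The child contributes its own downward paths, each one edge
            # longer now, plus the new single-edge path to the child itself.
            n_paths += self.number_of_paths_down[neighbor] + 1
            s_paths += self.distances_down[neighbor] + self.number_of_paths_down[neighbor] + 1

        self.number_of_paths_down[vertex] = n_paths
        self.distances_down[vertex] = s_paths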

The Curious Case of the Visited Dictionary :/

If you look at the code above closely, you’ll see this:

# Prevents the recursion from going into a cycle.
self.visited[vertex] = 1

The comment says that this visited dictionary is for preventing the recursion from entering a cycle.

If you've paid attention till now, you know that we are dealing with a tree here.

The definition of a tree data structure doesn't allow cycles to exist. If a cycle exists in the structure, then it is no longer a tree; it becomes a graph. In a tree, there is exactly one path between any pair of vertices. A cycle would mean there is more than one path between some pair of vertices. Look at the figures below.

The structure on the left is a tree. It has no cycles in it. There is a unique path between any two vertices.

The structure on the right is a graph: there is a cycle in it, and hence there are multiple paths between certain pairs of vertices. For this particular graph it so happens that every pair of vertices has more than one path between them, but that is not the case for every graph.

Almost always, we are given the root node of the tree. We can use the root node to traverse the entire tree without having to worry about any cycles as such.

However, if you read the problem statement carefully, you'll notice that it does not say anything about the root of the tree.

That means that there is no designated root for the tree given in the question. This could mean that a given tree can be visualized and processed in so many different ways depending upon what we consider as the root. Have a look at multiple structures for the same tree but with different root nodes.

So many different interpretations and parent child relationships are possible for a given unrooted tree.

So, we start with the node 0 and do a DFS traversal of the given structure. In the process, we fix the parent-child relationships. Given the edges in the problem, we construct an undirected, graph-like structure which we then convert into the tree structure. Taking a look at the code should clear up some of your doubts:
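A sketch of that setup, assuming the LeetCode-style input of N and a list of undirected edges (the exact method name and signature are an assumption); this is the part of the Solution class that builds the graph and kicks off the DFS:

from collections import defaultdict

    def sumOfDistancesInTree(self, N, edges):
        self.N = N

        # Build an undirected, graph-like adjacency structure from the edge list.
        self.graph = defaultdict(list)
        for u, v in edges:
            self.graph[u].append(v)
            self.graph[v].append(u)

        self.visited = {}
        self.parent = {0: None}   # we pick node 0 as the root
        self.number_of_paths_down = {}
        self.distances_down = {}

        # A single DFS from node 0 fixes the parent-child relationships and
        # computes the downward values for every node at the same time.
        self.compute_downward_values(0)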

Every node has exactly one parent, except the root, which has none, and the way this logic works, the node 0 becomes the root of our tree. Note that we are not doing this process separately and then calculating the sum of distances downwards. Given a tree, we were trying to find, for every node, the simplified sum of distances for the tree rooted at that node.

So, the conversion from the graph to the tree happens in one single iteration along with finding out the sum of distances downwards for each and every node.

Hopefully the visited dictionary in the code makes much more sense now. One single recursion doing all of that for us. Nice!

Bringing it all together

Now that we have our tree structure defined, and also the values of sum of distances going downward defined for us, we can use all of this information to solve the original problem of Sum of Distances in a Tree.

How do we do that? It’s best to explain this algorithm with the help of an example. So we will consider the tree below and we will dry run the algorithm for a single node. Let’s have a look at the tree we will be considering.

The node for which we want to find the sum of distances is 4. Now, if you remember the simpler problem we were trying to solve earlier, you know that we already have two values associated with each of the nodes:

  1. distances_down, which is the sum of distances for this node while only considering the tree beneath it.
  2. number_of_paths_down, which is the number of paths (equivalently, nodes) in the tree rooted at the node under consideration.

Let’s look at the annotated version of the above tree. The tree is annotated with tuples (distances_down, number_of_paths_down) .

Let's call the value we want to compute for each node sod, short for sum of distances, which is what the question originally asks us to compute.

Let us assume that we have already computed the answer for the parent node of 4 in the diagram above. So, we now have the following information for the node labelled 2 (the parent node) available:

(sod, distances_down, number_of_paths_down) = (13, 4, 3)

Let’s rotate the given tree and visualize it in a way where 2 is the root of the tree essentially.

Now, we want to remove the contribution of the tree rooted at 4 from sod(2). Let us consider all of the paths from the parent node 2 to all other nodes except the ones in the tree rooted at 4 .

2 --> 5 (1 edge)
2 --> 1 (1 edge)
2 --> 1 --> 7 (2 edges)
2 --> 1 --> 7 --> 9 (3 edges)
2 --> 1 --> 7 --> 10 (3 edges)

Number of nodes considered = 6
Sum of paths remaining, i.e. sod(2)_rem = 1 + 1 + 2 + 3 + 3 = 10

Let’s see how we can use the values we already have calculated to get these updated values.

* N = 8 (Total number of nodes in the tree. This will remain the same for every node.)
* sod(2) = 13
* distances_down[4] = 1
* number_of_paths_down[4] = 1
* N - 1 - number_of_paths_down[4] = 8 - 1 - 1 = 6 (number_of_paths_down[4] does not include the node 4 itself, hence the extra - 1)
* sod(2) - 1 - distances_down[4] - number_of_paths_down[4] = 13 - 1 - 1 - 1 = 10

If you remember this from the function we defined earlier, you will notice that the contribution of a child node to the two values number_of_paths_down and distances_down is n_paths + 1 and n_paths + s_paths + 1 respectively. Naturally, that is what we subtract to obtain the remaining tree.

sod(4) represents the sum of edges on all the paths originating at the node 4 in the tree above. Let’s see how we can find this out using the information we have calculated till now.

distances_down[4] represents the answer for the tree rooted at the node 4 but it only considers paths going to its successors, that is all the nodes in the tree rooted at 4. For our example, the successor of 4 is the node 6. So, that will directly add to the final answer. Let’s call this value own_answer . Now, let’s account for all the other paths.

4 --> 2 (1 edge)
4 --> 2 --> 5 (1 + 1 edges)
4 --> 2 --> 1 (1 + 1 edges)
4 --> 2 --> 1 --> 7 (1 + 2 edges)
4 --> 2 --> 1 --> 7 --> 9 (1 + 3 edges)
4 --> 2 --> 1 --> 7 --> 10 (1 + 3 edges)

own_answer = 1
sod(4) = 1 + 1 + 2 + 2 + 3 + 4 + 4 = 17
sod(4) = own_answer + (N - 1 - number_of_paths_down[4]) + (sod(2) - 1 - distances_down[4] - number_of_paths_down[4]) = 1 + 6 + 10 = 17

Before you go bonkers and start doing this, let’s look at the code and bring together all of the things we discussed in the example above.
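Here is one way everything might fit together into a single runnable sketch, reusing the downward pass from before and adding the re-rooting step find_distances along with a memo dictionary (again, the helper names and the exact signature are assumptions):

from collections import defaultdict

class Solution:
    def sumOfDistancesInTree(self, N, edges):
        self.N = N

        # Undirected, graph-like adjacency structure built from the edges.
        self.graph = defaultdict(list)
        for u, v in edges:
            self.graph[u].append(v)
            self.graph[v].append(u)

        self.visited = {}
        self.parent = {0: None}   # node 0 is treated as the root
        self.number_of_paths_down = {}
        self.distances_down = {}
        self.memo = {}            # MEMOIZATION: sod values we have already computed

        # One DFS fixes the parent-child relationships and computes the
        # downward values (distances_down, number_of_paths_down) for every node.
        self.compute_downward_values(0)

        ans = []
        for i in range(N):
            ans.append(self.find_distances(i))
        return ans

    def compute_downward_values(self, vertex):
        # Prevents the recursion from going into a cycle.
        self.visited[vertex] = 1

        n_paths = 0   # number of downward paths starting at this vertex
        s_paths = 0   # sum of the lengths of those downward paths

        for neighbor in self.graph[vertex]:
            if neighbor in self.visited:
                continue
            self.parent[neighbor] = vertex
            self.compute_downward_values(neighbor)
            n_paths += self.number_of_paths_down[neighbor] + 1
            s_paths += self.distances_down[neighbor] + self.number_of_paths_down[neighbor] + 1

        self.number_of_paths_down[vertex] = n_paths
        self.distances_down[vertex] = s_paths

    def find_distances(self, node):
        # sod: the sum of distances from `node` to every other node in the tree.
        if node in self.memo:
            return self.memo[node]

        own_answer = self.distances_down[node]
        parent = self.parent[node]

        if parent is None:
            # For the root, every other node lies below it, so the downward
            # answer is the whole answer.
            self.memo[node] = own_answer
            return own_answer

        parent_sod = self.find_distances(parent)
        # Nodes outside the subtree rooted at `node`: each of their paths from
        # `node` is one edge longer than the corresponding path from the parent,
        # once the subtree's own contribution has been removed.
        remaining_nodes = self.N - 1 - self.number_of_paths_down[node]
        remaining_sum = parent_sod - 1 - self.distances_down[node] - self.number_of_paths_down[node]

        self.memo[node] = own_answer + remaining_nodes + remaining_sum
        return self.memo[node]

# For example: Solution().sumOfDistancesInTree(6, [[0, 1], [0, 2], [2, 3], [2, 4], [2, 5]])
# returns [8, 12, 6, 10, 10, 10].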

The recursive relation for this portion is as follows:

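Reading it off the worked example above (with sod(root) = distances_down[root] for the root itself), one way to write it is:

sod(node) = distances_down[node]
            + (N - 1 - number_of_paths_down[node])
            + (sod(parent(node)) - 1 - distances_down[node] - number_of_paths_down[node])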
Did I just see “MEMOIZATION” in the code?

Yes, indeed you did!

Consider the following example tree:

The question asks us to find the sum of distances for all the nodes in the given tree. So, we would do something like this:

for i in range(N):
    ans.append(find_distances(i))

But, if you look at the tree above, the recursive call for the node 5 would end up calculating the answers for all of its ancestors along the way. So, we don't need to recalculate the answers for those nodes again and again.

Hence, we end up storing the already calculated values in a dictionary and use that in further calculations.

Essentially, the recursion is based on the parent of a node, and multiple nodes can have the same parent. So, the answer for the parent should only be calculated once and then be used again and again.

If you've managed to read the article this far (not necessarily in one stretch), you're awesome.

If you found this article helpful, share it as much as possible and spread the word. Cheers!