[Dr.Lib]Note: Math – Probabilities

Via http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=probabilities

It has been said that life is a school of probability. A major effect of probability theory on everyday life is in risk assessment. Let's suppose you have an exam and you are not so well prepared. There are 20 possible subjects, but you only had time to prepare for 15. If two subjects are given, what chances do you have to be familiar with both? This is an example of a simple question inspired by the world in which we live today. Life is a very complex chain of events and almost everything can be imagined in terms of probabilities.

Gambling has become part of our lives and it is an area in which probability theory is obviously involved. Although gambling has existed since time immemorial, its mathematical foundations were not established until the seventeenth century. It all started with a simple question directed to Blaise Pascal by the Chevalier de Méré, a nobleman who gambled frequently to increase his wealth. The question was whether it was profitable to bet on a double six appearing in twenty-four rolls of two dice.

As far as TopCoder problems are concerned, they too are inspired by reality: you are presented with many situations and taught the rules of many games. While it is easy to recognize a problem that deals with probability computations, the solution may not be obvious at all. This is partly because probabilities are often overlooked, being seen as an uncommon theme in programming contests. That impression is wrong: TopCoder has plenty of them! Knowing how to approach such problems is a big advantage in TopCoder competitions, and this article will help you prepare for the topic.

Before applying the necessary algorithms to solve these problems, you first need some mathematical understanding. The next chapter presents the basic principles of probability. If you already have some experience in this area, you might want to skip this part and go to the following chapter: Step by Step Probability Computation. After that follows a short discussion of Randomized Algorithms, and at the end there is a list of the available problems on TopCoder. This last part is probably the most important. Practice is the key!

Basics
Working with probabilities is much like conducting an experiment. An outcome is the result of an experiment or other situation involving uncertainty. The set of all possible outcomes of a probability experiment is called a sample space. Each possible result of such a study is represented by one and only one point in the sample space, which is usually denoted by S. Let's consider the following experiments:

Rolling a die once
Sample space S = {1, 2, 3, 4, 5, 6}
Tossing two coins
Sample space S = {(Heads, Heads), (Heads, Tails), (Tails, Heads), (Tails, Tails)}

We define an event as any collection of outcomes of an experiment. Thus, an event is a subset of the sample space S. If we denote an event by E, we could say that E⊆S. If an event consists of a single outcome in the sample space, it is called a simple event. Events which consist of more than one outcome are called compound events.

What we are actually interested in is the probability that a certain event occurs, or P(E). By definition, P(E) is a real number between 0 and 1, where 0 denotes the impossible event and 1 denotes the certain event (or the whole sample space).

As stated earlier, each possible outcome is represented by exactly one point in the sample space. This leads us to the following formula:

\[ P(E)=\frac{\text{number of favorable outcomes}}{\text{total number of outcomes}}=\frac{\left | E \right |}{\left | S \right |} \]

That is, the probability of an event to occur is calculated by dividing the number of favorable outcomes (according to the event E) by the total number of outcomes (according to the sample space S). In order to represent the relationships among events, you can apply the known principles of set theory. Consider the experiment of rolling a die once. As we have seen previously, the sample space is S = {1, 2, 3, 4, 5, 6}. Let's now consider the following events:

Event A = 'score > 3' = {4, 5, 6}
Event B = 'score is odd' = {1, 3, 5}
Event C = 'score is 7' = ∅
A∪B ='the score is > 3 or odd or both' = {1, 3, 4, 5, 6}
A∩B ='the score is > 3 and odd' = {5}
A' = 'event A does not occur' = {1, 2, 3}

We have:

P(A∪B) = 5/6
P(A∩B) = 1/6
P(A') = 1 – P(A) = 1 – 1/2 = 1/2
P(C) = 0

The first step when trying to solve a probability problem is to be able to recognize the sample space. After that, you basically have to determine the number of favorable outcomes. This is the classical approach, but the way we implement it may vary from problem to problem. Let's take a look at QuizShow (SRM 223, Div 1 – Easy). The key to solving this problem is to take into account all the possibilities, which are not too many. After a short analysis, we determine the sample space to be the following:

S = { (wager 1 is wrong, wager 2 is wrong, you are wrong),
(wager 1 is wrong, wager 2 is wrong, you are right),
(wager 1 is wrong, wager 2 is right, you are wrong),
(wager 1 is wrong, wager 2 is right, you are right),
(wager 1 is right, wager 2 is wrong, you are wrong),
(wager 1 is right, wager 2 is wrong, you are right),
(wager 1 is right, wager 2 is right, you are wrong),
(wager 1 is right, wager 2 is right, you are right) }

The problem asks you to find a wager that maximizes the number of favorable outcomes. In order to compute the number of favorable outcomes for a certain wager, we need to determine how many points the three players end up with in each of the 8 possible outcomes.
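This enumeration can be sketched in Python. The parameter names and the win condition (you win only with a strictly higher total than both opponents) are assumptions about the problem, not the original solution:

```python
def best_wager(your_score, score_a, score_b, wager_a, wager_b):
    """Pick your wager so that the number of favorable outcomes
    (out of the 8 right/wrong combinations) is maximized."""
    best_count, best_w = -1, 0
    for w in range(your_score + 1):          # a wager can't exceed your score
        count = 0
        for ra in (-1, 1):                   # wager 1 wrong / right
            for rb in (-1, 1):               # wager 2 wrong / right
                for ry in (-1, 1):           # you wrong / right
                    mine = your_score + ry * w
                    if (mine > score_a + ra * wager_a
                            and mine > score_b + rb * wager_b):
                        count += 1           # favorable: you end strictly ahead
        if count > best_count:
            best_count, best_w = count, w
    return best_w
```

Each of the 8 sample-space points listed above corresponds to one (ra, rb, ry) triple.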

Another good problem to start with is PipeCuts (SRM 233, Div 1 – Easy). This can be solved in a similar manner. There is a finite number of outcomes and all you need to do is to consider them one by one.

Let's now consider a series of n independent events: E1, E2, … , En. Two surprisingly common questions that may appear (and many of you have already encountered) are the following:

  1. What is the probability that all events will occur?
  2. What is the probability that at least one event will occur?

To answer the first question, we look at the occurrence of the first event, E1. If E1 does not occur, the condition can no longer be fulfilled, so E1 must occur, which happens with probability P(E1). This means there is a P(E1) chance that we need to check for the occurrence of the next event, E2. The event E2 occurs with probability P(E2), and we can continue the process in the same manner. Since the events are independent, the probability that all of them occur is the product of the individual probabilities:

\[ P(E_1 \cap E_2 \cap \cdots \cap E_n)=P(E_1)\cdot P(E_2)\cdots P(E_n) \]

The best way to answer the second question is to first determine the probability that no event occurs, and then take the complement. We have:

\[ P(E_1 \cup E_2 \cup \cdots \cup E_n)=1-\left ( 1-P(E_1) \right )\cdot \left ( 1-P(E_2) \right )\cdots \left ( 1-P(E_n) \right ) \]

These formulae are very useful and you should understand them well before moving on.

BirthdayOdds
A good example to illustrate the probability concepts discussed so far is the classical "Birthday Paradox". It has been shown that if there are at least 23 people in a room, there is a more than 50% chance that at least two of them share a birthday. While this is not a paradox in the strict sense of the word, it is a mathematical truth that contradicts common intuition. The TopCoder problem asks you to find the minimum number of people needed to be more than minOdds% sure that at least two of them have the same birthday. The first thing to notice is that it is much easier to solve the complementary problem: "What is the probability that N randomly selected people all have different birthdays?". The strategy is to start with an empty room and put people in one by one, comparing each new birthday with those already in the room.
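A sketch of that strategy, assuming a 365-day year and that "more than minOdds% sure" means the shared-birthday probability strictly exceeds minOdds/100:

```python
def min_people(min_odds):
    """Smallest number of people for which the probability of a shared
    birthday strictly exceeds min_odds percent."""
    all_distinct = 1.0      # P(everyone so far has a different birthday)
    people = 0
    while all_distinct >= 1.0 - min_odds / 100.0:
        # the next person entering must avoid the `people` birthdays taken
        all_distinct *= (365.0 - people) / 365.0
        people += 1
    return people
```

For min_odds = 50 this returns the famous 23.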

This so called "Birthday Paradox" has many real world applications and one of them is described in the TopCoder problem called Collision (SRM 153, Div 1 – Medium). The algorithm is practically the same, but one has to be careful about the events that may alter the sample space.

Sometimes a probability problem can be quite tricky. As we have seen, the "Birthday Paradox" tends to contradict our common sense, yet the formulas prove that the answer is indeed correct. Formulas can help, but to become a master of probabilities you need one more ingredient: "number sense". This is partly innate ability and partly learned ability acquired through practice. Take this quiz to assess your number sense and to become familiar with some of the common probability misconceptions.

Step by Step Probability Computation
In this chapter we will discuss some real TopCoder problems in which the occurrence of an event is influenced by occurrences of previous events. We can think of it as a graph in which the nodes are events and the edges are dependencies between them. This is a somewhat forced analogy, but the way we compute the probabilities for different events is similar to the way we traverse the nodes of a graph. We start from the root, which is the initial state and has a probability of 1. Then, as we consider different scenarios, the probability is distributed accordingly.

NestedRandomness
This problem looked daunting to some people, but for those who figured it out it was just a matter of a few lines. The first step is clear: the function random(N) is called and returns a random integer uniformly distributed in the range 0 to N-1. Thus every integer in this interval occurs with probability 1/N. If we take all these outcomes as input for the next step, we can determine all the outcomes of the random(random(N)) call. To see this, let's work out the case N = 4.

  • After the first nesting all integers have the same probability to occur, which is 1 / 4.
  • For the second nesting there is a 1/4 chance for each of the following calls: random(0), random(1), random(2) and random(3). random(0) produces an error, random(1) returns 0, random(2) returns 0 or 1 (each with probability 1/2) and random(3) returns 0, 1 or 2.
  • As a result, for the third nesting, random(0) has a probability of 1/4 + 1/8 + 1/12 of being called, random(1) has a probability of 1/8 + 1/12 of being called and random(2) has a probability of 1/12 of being called.
  • Analogously, for the fourth nesting, the function random(0) has a probability of 1/4 of being called, while random(1) has a probability of 1/24.
  • As for the fifth nesting, we can only call random(0), which produces an error. The whole process is described in the picture to the right.

NestedRandomness for N = 4

The whole computation is a DP over the probability distribution of values at each nesting level.
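A sketch of that DP (the signature and the exact question, the probability that the innermost call returns a given target, are assumptions about the problem statement):

```python
def nested_randomness(N, nestedness, target):
    """Probability that random(random(...random(N)...)), nested
    `nestedness` times, returns `target`."""
    prob = [1.0 / N] * N                  # first call: uniform on 0..N-1
    for _ in range(nestedness - 1):
        nxt = [0.0] * N
        for v in range(1, N):             # v = result of the previous level
            for r in range(v):            # random(v) is uniform on 0..v-1
                nxt[r] += prob[v] / v     # v = 0 contributes nothing (error)
        prob = nxt
    return prob[target]
```

For N = 4 this reproduces the figures above: after two nestings the value 0 has probability 1/4 + 1/8 + 1/12 = 11/24.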

If you have got a taste for this kind of problem, here are five more you may want to try:

ChessKnight – assign each square a probability and for every move check the squares one by one to compute the probabilities for the next move.
DiceThrows – determine the probability of each possible outcome for both players and then compare the results.
RockSkipping – the same approach, just make sure you got the lake pattern correctly.
PointSystem – represent the event space as a matrix of possible scores (x, y).
VolleyBall – similar to PointSystem, but the scores may go up pretty high.
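The ChessKnight idea above can be sketched like this. The board size and the exact question (the probability that the knight is still on the board after the given number of uniformly random moves) are assumptions:

```python
def knight_on_board(row, col, moves, size=8):
    """Probability that a knight starting at (row, col) is still on the
    board after `moves` random moves (each of the 8 jumps equally likely;
    a jump off the board ends the process)."""
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    prob = [[0.0] * size for _ in range(size)]
    prob[row][col] = 1.0
    for _ in range(moves):
        nxt = [[0.0] * size for _ in range(size)]
        for r in range(size):
            for c in range(size):
                if prob[r][c]:
                    for dr, dc in jumps:
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < size and 0 <= nc < size:
                            nxt[nr][nc] += prob[r][c] / 8.0
        prob = nxt
    return sum(map(sum, prob))   # total probability still on the board
```

From a corner, 2 of the 8 first moves stay on the board, so one move from (0, 0) gives probability 0.25.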

Let's now take a look at another TopCoder problem, GeneticCrossover, which deals with conditional probability. Here, you are asked to predict the quality of an animal, based on the genes it inherits from its parents. Considering the problem description, there are two situations that may occur: a gene does not depend on another gene, or a gene is dependent.

For the first case, let p be the probability that the gene is expressed dominantly. There are only 4 cases to consider:

  • at least one parent has two dominant genes. (p = 1)
  • each parent has exactly one dominant gene. (p = 0.5)
  • one parent has one dominant gene and the other has only recessive genes (p = 0.25)
  • both parents have two recessive genes (p = 0)

Now let's take the case when a gene depends on another. This makes things a bit trickier, as the "parent" gene may itself depend on another, and so on. To determine the probability that a dependent gene is dominant, we take the events that each gene in the chain (starting with the current gene) is dominant. For the current gene to be expressed dominantly, all these events must occur, so we take the product of the probabilities of the individual events in the chain. The algorithm works recursively.
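The chain product can be sketched with a hypothetical helper, assuming the per-gene probabilities from the four cases above are already computed and the dependency chains are acyclic:

```python
def dominant_probability(gene, base_prob, depends_on):
    """Probability that `gene` is expressed dominantly.

    base_prob[g]  -- probability gene g would be dominant on its own
                     (one of 1, 0.5, 0.25, 0 from the cases above)
    depends_on[g] -- index of the gene g depends on, or -1 if independent
    """
    p = 1.0
    g = gene
    while g != -1:            # multiply along the dependency chain
        p *= base_prob[g]
        g = depends_on[g]
    return p
```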

See also ProbabilityTree.

Randomized Algorithms
We call randomized algorithms those algorithms that use random numbers to make decisions during their execution. Unlike deterministic algorithms, which for a fixed input always give the same output in the same running time, a randomized algorithm may behave differently from execution to execution. Basically, we distinguish two kinds of randomized algorithms:

  1. Monte Carlo algorithms: may sometimes produce an incorrect solution – we bound the probability of failure.
  2. Las Vegas algorithms: always give the correct solution, the only variation is the running time – we study the distribution of the running time.

Read these lecture notes from the College of Engineering at UIUC for an example of how these algorithms work.

The main goal of randomized algorithms is to build faster, and perhaps simpler solutions. Being able to tackle "harder" problems is also a benefit of randomized algorithms. As a result, these algorithms have become a research topic of major interest and have already been utilized to more easily solve many different problems.

An interesting question is whether such an algorithm can be useful in TopCoder competitions. Some problems admit many possible solutions, a number of which are also optimal. The classical approach is to check them one by one, in an established order, but there is no guarantee that the optima are uniformly distributed in the solution domain, so a deterministic algorithm may not find an optimum quickly enough. The advantage of a randomized algorithm is that no rules constrain the order in which the solutions are checked, and when the optima are clustered together it usually performs much better. See QueenInterference for a TopCoder example.

Randomized algorithms are particularly useful when faced with malicious attackers who deliberately try to feed a bad input to the algorithm. Such algorithms are widely used in cryptography, but it sometimes makes sense to also use them in TopCoder competitions. It may happen that you have an efficient algorithm, but there are a few degenerate cases for which its running time is significantly slower. Assuming the algorithm is correct, it has to run fast enough for all inputs. Otherwise, all the points you earned for submitting that particular problem are lost. This is why here, on TopCoder, we are interested in worst case execution time.

To challenge or not to challenge?
Another fierce coding contest is now over and you have 15 minutes to look for other coders' bugs. The random call in a competitor's submission is likely to draw your attention. This will most likely fall into one of two scenarios:

  1. the submission was just a desperate attempt and will most likely fail on many inputs.
  2. the algorithm was tested rather thoroughly and the probability to fail (or time out) is virtually null.

The first thing you have to do is to ensure it was not already unsuccessfully challenged (check the coder's history). If it wasn't, it may deserve a closer look. Otherwise, you should ensure that you understand what's going on before even considering a challenge. Also take into account other factors such as coder rating, coder submission accuracy, submission time, number of resubmissions or impact on your ranking.

Will "random" really work?
In most optimization problems, the ratio between the number of optimal solutions and the total number of solutions is not obvious. An easy, if not especially clever, approach is to simply generate different samples and see how the algorithm behaves. Running such a simulation is usually quick and may also give you some extra clues about how to actually solve the problem.

[Dr.Lib]Note: Math – Euler's Totient Function

I was still debating whether to write up Euler's totient function... and then IvyEnd's excellent article appeared...

Via http://www.ivy-end.com/archives/1021

Euler's totient function \(\varphi \left ( N \right )\) counts the positive integers less than or equal to \(N\) that are coprime to \(N\). It is also called the \(\varphi\) function or Euler's function.

Here are several properties of the totient function:

  • \(\displaystyle\varphi\left ( 1 \right )=1\).
  • \(\displaystyle\varphi \left( N\right )=N\cdot\prod_{p\mid N}\left ( \frac{p-1}{p} \right )\).
  • \(\displaystyle\varphi \left ( p^{k} \right ) = p^{k}-p^{k-1}=\left(p-1 \right )\cdot p^{k-1}\), where \(p\) is prime.
  • \(\displaystyle\varphi \left(mn \right )=\varphi \left(m \right )\cdot \varphi \left(n \right )\), where \(\gcd \left ( m,n \right )=1\).

These properties are enough to compute the totient function.

The basic idea: initialize \(\varphi \left ( N \right )=N\), then enumerate primes \(p\) and, for every multiple \(kp\), apply the update \(\varphi \left ( kp \right )\leftarrow \varphi \left ( kp \right )\cdot \frac{p-1}{p}\).
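A sketch of that sieve; the update \(\varphi(kp)\leftarrow\varphi(kp)\cdot\frac{p-1}{p}\) becomes an integer subtraction:

```python
def phi_table(n):
    """phi[k] for all 0 <= k <= n, by the sieve described above."""
    phi = list(range(n + 1))           # start with phi(k) = k
    for p in range(2, n + 1):
        if phi[p] == p:                # p was never touched, so p is prime
            for k in range(p, n + 1, p):
                phi[k] -= phi[k] // p  # phi[k] *= (p-1)/p, in integers
    return phi
```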

We also need the value of the function at a single point.
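For a single value, trial division combined with the product formula above suffices:

```python
def phi(n):
    """phi(n) by factoring n and applying phi(n) = n * prod (p-1)/p."""
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            result -= result // p    # multiply by (p-1)/p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:                        # one prime factor > sqrt(n) may remain
        result -= result // n
    return result
```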

Applications:

Euler's theorem

\(a^{\varphi(n)}\equiv 1 \pmod {n}\), provided \(\gcd(a,n)=1\).

 

Number of primitive roots

\(\varphi \left ( \varphi \left ( N \right ) \right )\)

The definition of a primitive root can be stated as follows:

If there exists an integer \(a\) such that the powers \(a^{i}\bmod N\) for \(i\in\left \{ 1,2,3,\cdots ,\varphi(N) \right \}\) are pairwise distinct, then \(a\) is called a primitive root of \(N\).

When primitive roots exist, their number equals \(\varphi \left ( \varphi \left ( N \right ) \right )\), so the count is easy to compute. (See the reference problem.)

Cycles of exponents

Via http://hi.baidu.com/aekdycoin/item/e493adc9a7c0870bad092fd9

\( A^{x}\equiv A^{x\bmod\varphi(C) + \varphi(C)} \pmod{C} \quad (x \ge \varphi(C))\)

[Dr.Lib]Note: Math – Indeterminate Equations

Via Ivy-End: http://www.ivy-end.com/archives/1010
This algorithm mainly came up in NOIP2012 Day 2 Problem 1. The task discussed here is to find the smallest positive integer solution of the linear congruence \[ax\equiv 1\pmod{p}.\]

Last year I solved it by brute-force search, since I knew nothing better at the time. Here I introduce two algorithms: the "Dayan deriving-one method" (大衍求一术) invented by the ancient Chinese mathematician Qin Jiushao, and the well-known extended Euclidean algorithm.

The Dayan deriving-one method

Let's look at the Dayan method first. Only the computation is described here; for a justification, see the extended Euclidean algorithm below.

Example 1: solve \(23x\equiv 1\pmod{97}\).

Solution: we only need to build the following table:\[\begin{matrix}23^{1} & 23^{1} & 3^{17} & 3^{17} & 1^{38}\\ 97^{0} & 5^{4} & 5^{4} & 2^{21} & 2^{21}\end{matrix}\]The answer is 38.

More formally, the procedure is as follows:

Assume the input satisfies \(a>b\). Let \(a_{n},A_{n}\) denote the base and the coefficient in the first row, and \(b_{n},B_{n}\) the base and the coefficient in the second row. If \(a_{i}>b_{i}\), then \(a_{i+1}=a_{i}\bmod{b_{i}}\), \(A_{i+1}=A_{i}+B_{i}\cdot \left [ \frac{a_{i}}{b_{i}} \right ]\), \(b_{i+1}=b_{i}\), \(B_{i+1}=B_{i}\); if \(a_{i}<b_{i}\), the same rules apply with the two rows swapped.

The algorithm terminates exactly when \(a_{i}=1\); at that point \(A_{i}\) is the smallest positive integer solution.

Example 2: solve \(97x\equiv 1\pmod{23}\).

Solution: again we build the table:\[\begin{matrix}97^{1} & 5^{1} & 5^{1} & 2^{5} & 2^{5} & \\ 23^{0} & 23^{0} & 3^{4} & 3^{4} & 1^{9} & 1^{14}\end{matrix}\]The answer is 14.

If the 1 first appears in the bottom row, one extra step is required, and that step must leave a remainder of exactly 1.

Assume the input satisfies \(a<b\). The intermediate steps are the same as before. During the computation there must be some \(i\) with \(b_{i}=1\); at that point we only need to compute \(B_{i+1}=A_{i}+B_{i}\cdot \left(a_{i} - 1\right)\) to obtain the result.

The extended Euclidean algorithm

If the method above feels obscure, the extended Euclidean algorithm may be clearer. First, a theorem:

The equation \(ax+by=\gcd\left ( a,b \right )\) always has a solution (Bézout's identity).

Our problem thus reduces to solving \(ax+b\cdot \left ( -y \right )=1\): first solve \(ax+b\cdot \left ( -y \right )=\gcd\left(a,b\right)\), then divide the result by \(\gcd\left(a,b\right)\) (which must equal 1 for a solution to exist).

Now let's derive the extended Euclidean algorithm.

We know that \[ax+by=\gcd\left ( a,b \right )\] and that \(\gcd\left ( a,b \right )=\gcd\left(b,a\bmod b \right )\). Let \[bx'+\left ( a\bmod b \right )y'=\gcd\left ( b,a\bmod b \right ).\] Then \[bx'+\left ( a\bmod b \right )y'=ax+by.\] Expanding gives \[bx'+\left ( a-\left [ \frac{a}{b} \right ]\cdot b \right )y'=ax+by,\] which simplifies to \[ay'+b\left (x'-\left [ \frac{a}{b} \right ]\cdot y' \right )=ax+by.\] Therefore \[x=y',\qquad y=x'-\left [ \frac{a}{b} \right ]\cdot y'.\]

This gives a natural recursive implementation of the extended Euclidean algorithm.
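The recursion above translates directly into code (a Python sketch; mod_inverse returns the smallest positive solution of \(ax\equiv 1\pmod p\)):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x1, y1 = ext_gcd(b, a % b)
    # x = y', y = x' - [a/b] * y', exactly as derived above
    return g, y1, x1 - (a // b) * y1

def mod_inverse(a, p):
    """Smallest positive x with a*x == 1 (mod p); needs gcd(a, p) == 1."""
    g, x, _ = ext_gcd(a, p)
    if g != 1:
        raise ValueError("no inverse: gcd(a, p) != 1")
    return x % p
```

mod_inverse(23, 97) gives 38 and mod_inverse(97, 23) gives 14, matching the two worked examples.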

Euler's theorem

\(a^{\varphi(n)}\equiv 1 \pmod {n}\) when \(\gcd(a,n)=1\).
Let \(x=a^{\varphi(n) -1}\bmod{n}\); then \(ax\equiv 1 \pmod{n}\), so \(x\) is the answer.
We only need to compute \(\varphi(n)\).
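With \(\varphi(n)\) in hand, the inverse is one modular exponentiation, here via Python's built-in three-argument pow:

```python
def inverse_via_euler(a, n, phi_n):
    """a^(phi(n)-1) mod n, which satisfies a*x == 1 (mod n) when gcd(a,n)=1."""
    return pow(a, phi_n - 1, n)
```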

[Dr.Lib]Note: Algorithms – LLRB: A Modern Red-Black Tree Implementation

Via http://www.cnblogs.com/Seiyagoo/archive/2013/10/13/3366290.html

1. Scope

This article introduces, in a simple and approachable way, the logical structure behind red-black trees, the 2-3-4 tree, together with red-black insertion and deletion. The emphasis is on the correspondence between 2-3-4 trees and red-black trees, which makes clear where the red-black operations come from. Instead of the traditionally complicated implementation, we analyze a simpler one: LLRB.
 
 

2. Where the algorithm is used

The name "red-black tree" makes a strong first impression: red and black, it sounds advanced. And indeed it is: the red-black tree is a sophisticated data structure, used to implement set and map in the C++ and Java standard libraries, and in the fair process scheduling of the Linux kernel.
 
 

3. 2-3-4 trees

The title says red-black trees, so why talk about 2-3-4 trees? Because a red-black tree is an equivalent form of a 2-3-4 tree. More precisely, we use a red-black tree to carry out the 2-3-4 tree operations (insertion, deletion), because implementing and maintaining a 2-3-4 tree directly is too cumbersome. Understanding 2-3-4 trees is therefore the way to truly understand red-black trees, and it is also how the history actually unfolded: knowing the past gives the present its meaning. The red-black chapter of Introduction to Algorithms skips this connection, which leaves readers knowing the what but not the why.
 
OK, let's put the complicated red-black tree aside for now and start with the simple 2-3-4 tree.
 
1. Definition

A 2-3-4 tree is a generalized BST whose nodes hold 1, 2 or 3 keys, giving three node types:
2-node: one key, two children;
3-node: two keys, three children;
4-node: three keys, four children.
Note: a k-node has k links. Other generalized BSTs include 2-3 trees and B-trees.
 
The figure also shows another property of 2-3-4 trees: they are perfectly balanced (uniform height), i.e. every path from the root to a leaf has the same length.
 
2. Insertion
A 2-3-4 tree is itself a search tree (its in-order traversal is sorted), so search works just as in a BST.

Insertion into a 2-3-4 tree also resembles BST insertion: first the search fails (descending from the root to a leaf), then the key is inserted into that bottom leaf.
Because a 2-3-4 tree has three node types, the details differ slightly. A 2-node or a 3-node can take a direct insertion, becoming a 3-node or a 4-node; inserting directly into a 4-node, however, would violate the definition. So before inserting into a 4-node we split it into two 2-nodes, and then insert the key into the appropriate 2-node. In the figure below, the search for H fails; before H is inserted into the 4-node (keys F, G, J), that 4-node is split (the middle key moves up into the parent, and the remaining two keys become the middle key's left and right children), and H is then inserted into the 2-node J. The effect of this policy is that when the search reaches the bottom leaf, the leaf is always a 2-node or a 3-node.
 
Insertion strategies: the bottom-up algorithm is due to Bayer (1972); the top-down algorithm is due to Guibas and Sedgewick (1978), from whom the name "red-black" comes; thirty years later, in 2008, Sedgewick simplified the operations again, producing the LLRB described below.
 
The top-down idea: while searching down from the root, split every 4-node encountered, and finally insert at the bottom leaf.
Why split every 4-node on the way down, when a 4-node is a perfectly legal node type?
The answer follows from the LLRB algorithm described later.
 
Since every 4-node met on the way down is split, the current node is never a 4-node, and splitting a 4-node child falls into two cases:
Splitting a 4-node, case 1

Splitting a 4-node, case 2

Note: the transformations above are valid at any position in the tree.

The next two figures show a complete insertion (the tree height grows by 1 only when the node being split is a 4-node at the root):
 
3. Balance analysis
The height of a 2-3-4 tree is at worst lg N (all nodes are 2-nodes) and at best log4 N = 1/2 lg N (all nodes are 4-nodes), so search and insertion both take O(lg N).
 
 

4. Red-black trees

At last, the red-black tree itself.
The 2-3-4 discussion showed that the operations convert among 2-nodes, 3-nodes and 4-nodes, and that most of this work is inconvenient under a direct representation. A uniform representation can carry out the conversions at very little cost. That is the reason red-black trees exist: they keep the standard BST search while inheriting the simple insertion balancing of the 2-3-4 tree.
 
What follows is LLRB (left-leaning red-black trees), not the standard red-black tree.

1. Definition

LLRB has three characteristics:
(1) it represents a 2-3-4 tree as a BST;
(2) red links bind 2-nodes together to represent 3-nodes and 4-nodes (see the figure below);
(3) 3-nodes must lean left (the larger of the two keys is the root).

Compared with a standard red-black tree, LLRB adds characteristic 3; a standard red-black tree allows right-leaning red links. For characteristic 2, each node physically stores one bit (red or black) giving the color of the link that points to it.
Red links join the internal keys of a 3-node or 4-node, while black links join separate logical nodes; to see this, erase the red links and collapse the nodes they join into single points, and observe that the number of black links does not change.
2-3-4 trees and red-black trees correspond one-to-one

No two consecutive red links may appear on a path

Characteristic 3 is what makes the correspondence between red-black trees and 2-3-4 trees one-to-one.
 

2. Insertion

As before, search in an LLRB is the same as in a BST.
Before inserting we need one more operation: rotation, which comes in two forms, left rotation and right rotation.

Left rotation / Right rotation

The insertion strategy is exactly the 2-3-4 insertion described earlier.
Concretely, a newly inserted node is always red, i.e. attached by a red link. Into a 2-node or a 3-node we insert directly (a k-node has k insertion points), and if the left-leaning rule or the no-consecutive-reds rule is violated, we rotate to repair it. A 4-node (both child links red) is split first; physically the split is a color flip (the two child links turn black, the parent link turns red).
The two cases of inserting into a 2-node

The three cases of inserting into a 3-node

The 4-node split operation

The split leaves the black height unchanged; in the figures, the color flip passes a red link upward.
When introducing 2-3-4 trees we saw two cases for the 4-node split, depending on whether the parent is a 2-node or a 3-node; combined with the k insertion points of a k-node, that makes 6 cases in total.
Splitting a 4-node, case 1

Splitting a 4-node, case 2

After those two figures red-black trees may seem hopelessly case-ridden. In fact they are not: the LLRB implementation uses only two operations, rotation and color flip. Rotation keeps the tree balanced; the color flip splits 4-nodes.
The insertion code below shows that the flips and rotations are really two separate passes (flips top-down, rotations bottom-up); they are drawn together only to show the complete transformation, which is where all the cases come from.
 
The LLRB insertion algorithm:
Following the 2-3-4 strategy, we first search from the top down (color-flipping every 4-node met along the way), then insert at the bottom leaf; the downward pass may leave violations of the LLRB invariants, so after the insertion we repair them from the bottom up.
The figure below shows the core of the insertion code: step 2 is the split (color flip), step 1 the insertion itself, steps 3 and 4 the repairs.

From this algorithm one can also see why splitting bottom-up is harder: after splitting a 4-node, its parent, and then its grandparent, may themselves be 4-nodes, so splits would have to propagate all the way up; that is exactly the bottom-up approach mentioned earlier. The simpler way is to split every 4-node met on the way down, and that is precisely the difference between top-down and bottom-up.
The core of the insertion code
 
Placing those core steps into the recursive BST insertion, in top-down and bottom-up order respectively, yields the complete insertion routine shown below.
Note: the split (color flip) happens on the way down, so it goes before the recursive call; the repair (rotations) happens on the way up, so it goes after the recursive call.
The complete insertion code
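The complete routine can be sketched in Python (a reconstruction of standard top-down LLRB 2-3-4 insertion, not the figure's original code):

```python
RED, BLACK = True, False

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None
        self.color = RED                  # new nodes hang on a red link

def is_red(h):
    return h is not None and h.color == RED

def rotate_left(h):                       # repair a right-leaning red link
    x = h.right
    h.right, x.left = x.left, h
    x.color, h.color = h.color, RED
    return x

def rotate_right(h):                      # repair two left reds in a row
    x = h.left
    h.left, x.right = x.right, h
    x.color, h.color = h.color, RED
    return x

def flip_colors(h):                       # split a 4-node: push the red up
    h.color = not h.color
    h.left.color = not h.left.color
    h.right.color = not h.right.color

def insert(h, key):
    if h is None:
        return Node(key)                  # step 1: insert at the bottom
    if is_red(h.left) and is_red(h.right):
        flip_colors(h)                    # step 2: split 4-nodes on the way down
    if key < h.key:
        h.left = insert(h.left, key)
    elif key > h.key:
        h.right = insert(h.right, key)
    if is_red(h.right) and not is_red(h.left):
        h = rotate_left(h)                # step 3: no right-leaning red links
    if is_red(h.left) and is_red(h.left.left):
        h = rotate_right(h)               # step 4: no two reds in a row
    return h

def put(root, key):
    root = insert(root, key)
    root.color = BLACK                    # the root is always black
    return root

def inorder(h):
    return [] if h is None else inorder(h.left) + [h.key] + inorder(h.right)
```

put plays the role of the public insert; the four numbered comments match the four steps of the core code described above.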
 
 
If instead the split is moved after the recursive call, i.e. search down, insert at the leaf, then repair upward, insertion still works without breaking the LLRB invariants.
Insertion in a 2-3 tree

That variant is in fact 2-3 tree insertion. The difference from 2-3-4 insertion: the 2-3 tree inserts first, then splits and repairs on the way up; the 2-3-4 tree splits on the way down, then inserts and repairs on the way up. Since insertion always happens at the bottom level, the position of the color flip determines which tree the code implements.
That is why the 2-3-4 tree is called top-down and the 2-3 tree bottom-up.
 

3. Deletion

LLRB deletion mirrors insertion, with the handling reversed. Both have a critical node type: the 4-node for insertion, the 2-node for deletion. Operating directly on the critical type would cause a structural break, because it violates the LLRB invariant (the black height).
So, as with insertion, we search from the top down; if the search ends in a 3-node or a 4-node, we delete the key directly.
Deleting from a 3-node or 4-node

Deleting from a 2-node is the mirror image of inserting into a 4-node: first merge two 2-nodes into one 4-node, then delete the corresponding key safely from it.
As before, the parent is never a 2-node (we merge on the way down); combining this with a 2-, 3- or 4-node sibling gives 6 cases in total. Again, the actual implementation is not nearly that complicated.
Deleting from a 2-node
 
Before looking at deleting an arbitrary node, consider deleting the minimum; it is a building block of general deletion, as will become clear.
First, to be able to delete the minimum node directly, we maintain the invariant that the current node h or its child h.left is red.
During the top-down search, merging two 2-nodes into one 4-node requires an inverted color flip (the red parent link turns black, the black child links turn red).
To pass the red link down the left spine (deleting a red node never changes the black height), we need h red with h.left and h.left.left black.
When h.left and h.left.left are both black:
if h.right.left is red, we must borrow from the right sibling (case 2 below); if h.right.left is black, no borrowing is needed (case 1).
Note: while flipping, the right subtree may acquire consecutive red links, which then need repair.
case 1
 
 
 
case 2
 
 
 
Moving a red link left / an example of moving a red link left
 
The deleteMin implementation
 
 
 
A deleteMin example
 
 
With deleteMin done, the bulk of LLRB deletion is done. To delete an arbitrary key,
search from the top down, applying moveRedLeft when descending left and moveRedRight when descending right, until the bottom leaf is reached and the key can be deleted directly; then, as usual, repair from the bottom up.
 
How is delete reduced to deleteMin? The trick offered by Introduction to Algorithms is: replace, then deleteMin (replace the key with its successor's key, then delete the minimum node of the right subtree).
The deletion trick
 
 
 
The complete deletion code
 
 
 
References:
Introduction to Algorithms (CLRS)
Algorithms in C (Sedgewick)

[Dr.Lib]Note:Algorithms – GarsiaWachs

Via FHQ: http://fanhq666.blog.163.com/blog/static/81943426201062865551410/

Stone merging (repeatedly merge two adjacent piles of stones, at a cost equal to their combined weight, until a single pile remains; minimize the total cost) is a classic problem. DP solves it in O(n²) time, as follows:

The straightforward DP

Let f[i,j] be the minimum cost of merging the piles from i through j.

f[i,j] = min( sum(i,j) + f[i,k] + f[k+1,j] ) over all i <= k < j, where sum(i,j) is the total weight of piles i through j.

Let w[i,j] denote the k attaining the minimum. It has been proven that w[i,j] >= w[i,j-1] and w[i,j] <= w[i+1,j].

This bounds the range of k from both sides during the enumeration, removing one dimension from the complexity.

The GarsiaWachs algorithm

The GarsiaWachs algorithm reduces the time complexity to O(n log n).

For the full algorithm and its proof, see The Art of Computer Programming, Volume 3, Section 6.2.2: Algorithm G and Lemmas W, X, Y and Z.

Here is only a sketch:

Given a sequence A[0..n-1], repeatedly find the smallest k satisfying A[k-1] <= A[k+1] (for convenience, treat A[-1] and A[n] as +infinity).
Merge A[k] with A[k-1]; then find the largest j with A[j] > A[k]+A[k-1], and insert the merged value right after A[j].

A theorem guarantees that this operation leaves the answer to the problem unchanged.

An example:

186 64 35 32 103

Since 35 <= 103, the smallest k is 3. We delete 35 and 32, form their sum 67, and scan leftwards for the first number exceeding 67, inserting 67 right after it:

186 64 (k=3: A[3] and A[2] have been deleted) 103

186 67 (inserted after the first number, scanning right to left, that exceeds 67) 64 103

186 67 64 103 (a theorem guarantees this sequence's answer plus 67 equals the original answer)

Five numbers have become four. Continue!

186 (k=2: 67 and 64 have been deleted) 103

186 131 (inserted right here) 103

186 131 103

Now k=2 (remember, A[-1] and A[n] are +infinity):

234 186

420

And the final answer? Simply the sum of all merge weights: 420+234+131+67 = 852. Correct!

The proof idea: from the optimality of the tree one derives constraints on node depths, then shows that a solution after one operation corresponds one-to-one to an original solution, with every moved node keeping its depth. See TAOCP for details.

Implementation

Implementing the algorithm well takes some care; the heart of it is repeatedly finding the smallest k fast, i.e. maintaining a "2-decreasing" sequence.

A straightforward implementation is O(n²), but a balanced tree (a familiar optimization) brings it down to O(n log n).
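A straightforward O(n²) version of the procedure above (list-based; a balanced tree would replace the two linear scans to reach O(n log n)):

```python
def garsia_wachs(weights):
    """Minimum total cost of merging the row of piles into one."""
    a = list(weights)
    total = 0
    while len(a) > 1:
        n = len(a)
        k = 1                             # smallest k with a[k-1] <= a[k+1]
        while k < n - 1 and a[k - 1] > a[k + 1]:
            k += 1                        # a[n] acts as +infinity
        s = a[k - 1] + a[k]
        total += s                        # pay for merging the two piles
        del a[k - 1:k + 1]
        j = k - 2                         # last position left of the gap
        while j >= 0 and a[j] <= s:
            j -= 1                        # find the last element > s
        a.insert(j + 1, s)                # re-insert the merged pile there
    return total
```

On the worked example, garsia_wachs([186, 64, 35, 32, 103]) returns 852, matching the trace above.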