# Functional Programming For The Rest of Us

Monday, June 19, 2006

### Introduction

(Before everyone starts looking down on me) the only thing I want to say is that in these procrastination-filled days you keep running into articles that seem impressively beyond comprehension. Unless you have stayed away from sites you should not have opened, every few days you will see at least one such piece. What they have in common: hard to understand and time-consuming, so they slowly pile up into a mountain. Soon you find yourself with a heap of bookmarked links and countless PDF files, and all you wish for is to retreat into some uninhabited mountain forest and do nothing for a year or so while you properly digest these private treasures. Ideally, of course, someone would still deliver food every day and help with the cleaning and the trash. Ha ha.

# Time Complexity of Algorithms

(Then we crack the example open and digest it~)

……

$$T=\frac{n\left ( n-1 \right )}{2}T_{0}+\left ( n-1 \right )T_{1}$$, where $$T_{0}$$ is the time for one comparison and $$T_{1}$$ the time for one swap

(Remember the limit of a function as its argument tends to infinity?)
As n tends to infinity, the dominant contribution comes from the quadratic term in n, i.e. $$\frac{n^{2}}{2}T_{0}$$

# [Dr.Lib]Note:Algorithm – Mo's Algorithm

http://blog.csdn.net/acm_cxlove/article/details/8894431

# [Dr.Lib]Note:Math – Extend Baby Step Giant Step

http://hi.baidu.com/aekdycoin/item/236937318413c680c2cf29d4

http://seter.is-programmer.com/posts/32188.html

## Baby Step Giant Step

$A^{x}\equiv B \pmod{C},\quad 0\leq x <C,\ C\in \mathbb{P}$

$x = i\cdot m + j\ \left ( 0 \leq i,j< m \right ),\quad m = \lceil \sqrt{C} \rceil$

$(A^{m})^{i} \cdot A^{j} \equiv B \pmod{C}$

(1) For j = 0 -> m-1, insert $$Hash (A^j \bmod C,\ j)$$ — the baby steps.

(2) Enumerate i; for each i, let $${A}' = (A^m)^i \bmod C$$, solve $${A}'\cdot t \equiv B \pmod C$$ for t (C is prime, so $${A}'$$ is invertible), and look t up in the hash table; a hit with stored value j gives the answer $$x = i\cdot m + j$$.

## Extended Baby Step Giant Step

$A^{x}\equiv B \pmod{C},\quad 0\leq x <C,\ C\in \mathbb{N}^{*}$

When $$\gcd(A,C)>1$$ we can no longer invert $$A$$, so let $${G}'=\gcd(A,{C}')$$ and divide it out of both sides and the modulus, giving $$\frac{A}{{G}'}\cdot A^{x-1}\equiv {B}'\pmod{{C}'}$$; repeat until $$\gcd(A,{C}')=1$$, then apply ordinary BSGS.

# [Dr.Lib]Note:Data Structures – Leftist Tree

——http://seter.is-programmer.com/posts/30733.html

——Dedicated to all OIers

《Zero One Park》

Try Again ~

Try Again ~

# [Dr.Lib]Note:Algorithm – Manhattan minimum spanning tree

Via http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=lineSweep

We first break this down into a simpler problem. Standard MST algorithms for general graphs (e.g., Prim’s algorithm) can compute the MST in O((E + N) log N) time for E edges. If we can exploit geometric properties to reduce the number of edges to O(N), then this is merely O(N log N). In fact we can consider, for each point P, only its nearest neighbors in each of the 8 octants of the plane (see the figure below). The figure shows the situation in just one of the octants, the West-Northwest one. Q is the closest neighbour (with the dashed line indicating points at the same Manhattan distance as Q), and R is some other point in the octant. If PR is an edge in a spanning tree, then it can be removed and replaced by either PQ or QR to produce a better spanning tree, because the shape of the octant guarantees that |QR| ≤ |PR|. Thus, we do not need to consider PR when building the spanning tree.

This reduces the problem to that of finding the nearest neighbour in each octant. We’ll just consider the octant shown; the others are no different and can be handled by symmetry. It should be clear that within this octant, finding the nearest neighbour is equivalent to just finding the point with the largest value of x − y, subject to an upper bound on x + y and a lower bound on y, and this is the form in which we’ll consider the problem.

Now imagine for the moment that the lower bound on y did not exist. In this case we could solve the problem for every P quite easily: sweep through the points in increasing order of x + y, and Q will be the point with the largest x − y value of those seen so far. This is where the divide-and-conquer principle comes into play: we partition the point set into two halves with a horizontal line, and recursively solve the problem for each half. For points P in the upper half, nothing further needs to be done, because points in the bottom half cannot play Q to their P. For the bottom half, we have to consider that by ignoring the upper half so far we may have missed some closer points. However, we can take these points into account in a similar manner as before: walk through all the points in x + y order, keeping track of the best point in the top half (largest x − y value), and for each point in the bottom half, checking whether this best top-half point is better than the current neighbour.

So far I have blithely assumed that any set of points can be efficiently partitioned on Y and also walked in x + y order without saying how this should be done. In fact, one of the most beautiful aspects of this class of divide-and-conquer plus line-sweep algorithms is that it has essentially the same structure as a merge sort, to the point that a merge-sort by x + y can be folded into the algorithm in such a way that each subset is sorted on x + y just when this is needed (the points initially all being sorted on Y). This gives the algorithm a running time of O(N log N).

The idea of finding the closest point within an angle range can also be used to solve the Euclidean MST problem, but the O(N log N) running time is no longer guaranteed in the worst cases, because the distance is no longer a linear equation. It is actually possible to compute the Euclidean MST in O(N log N) time, because it is a subset of the Delaunay triangulation.

# [Dr.Lib]Note:Data Structures – Bottom-up SPLAY

Let us play!

Let u splay!

- BST: O(log n) on average
- Treap: O(log n) in expectation
- Splay: O(log n) amortized
- AVL: O(log n) worst case
- SBT: O(log n) worst case

You can hardly expect to write an AVL tree during a contest; SBT is a decent choice most of the time; Treap... hmm... and yet it is Splay that gets treated as the all-purpose BST. Why is that?

Splay's defining trait: every operation ends with a splay... and that is about all there is to it...

## Splay

The splay operation rotates a chosen node up to the root, making subsequent accesses to it fast; the rotations along the way also keep the tree reasonably balanced.

1. zig: when the target node's parent is the root, a single rotation (zig) lifts the target to the root.
2. zig-zig: when the target node and its parent are both left children (or both right children), rotate the parent first and then the target (zig-zig), lifting the target two levels.
3. zig-zag: when the target node is the right (left) child of a left (right) child, rotate the target twice (zig-zag), lifting it two levels; repeating these cases bottom-up eventually makes the target the root.